A tool for generating function arguments and choosing what function to call with local LLMs

Project description

Local LLM function calling

Overview

The local-llm-function-calling project constrains the generation of Hugging Face text generation models by enforcing a JSON schema, and facilitates the formulation of prompts for function calls. It is similar to OpenAI's function calling feature, but, unlike OpenAI, it actually enforces the schema.

The project provides a Generator class that makes it easy to generate text while ensuring compliance with the provided prompt and JSON schema, giving users convenient control over the output of text generation models. It uses my own quickly sketched json-schema-enforcer project as the enforcer.

Features

  • Constrains the generation of Hugging Face text generation models to follow a JSON schema.
  • Provides a mechanism for formulating prompts for function calls, enabling precise data extraction and formatting.
  • Simplifies the text generation process through a user-friendly Generator class.

Installation

To install the local-llm-function-calling library, use the following command:

pip install local-llm-function-calling

Usage

Here's a simple example demonstrating how to use local-llm-function-calling:

from local_llm_function_calling import Generator

# Define a function and models
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                    "maxLength": 20,
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Initialize the generator with the Hugging Face model and our functions
generator = Generator.hf(functions, "gpt2")

# Generate text using a prompt
function_call = generator.generate("What is the weather like today in Brooklyn?")
print(function_call)
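To make the guarantee concrete, here is a toy, library-independent check of what the schema above demands of the generated arguments. This is not how the library enforces the schema (real enforcement happens token by token during generation, via json-schema-enforcer); it only illustrates the constraints a conforming output must satisfy:

```python
import json

def validate_weather_args(raw: str) -> bool:
    """Toy check that a generated argument string satisfies the schema
    above. Illustration only; the library enforces this during
    generation rather than validating after the fact."""
    try:
        args = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(args, dict) or "location" not in args:
        return False  # "required": ["location"]
    loc = args["location"]
    if not isinstance(loc, str) or len(loc) > 20:
        return False  # "type": "string", "maxLength": 20
    if "unit" in args and args["unit"] not in ("celsius", "fahrenheit"):
        return False  # enum constraint on "unit"
    return True

print(validate_weather_args('{"location": "Brooklyn, NY", "unit": "celsius"}'))  # True
print(validate_weather_args('{"location": "Brooklyn, NY", "unit": "kelvin"}'))   # False
```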

Custom constraints

You don't have to use my prompting methods; you can craft your own prompts and your own constraints, and still benefit from the constrained generation:

from local_llm_function_calling import Constrainer
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Define your own constraint
# (you can also use local_llm_function_calling.JsonSchemaConstraint)
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # Has to return (is_valid, is_complete)
    return (text.islower(), text.endswith("."))

# Create the constrainer
constrainer = Constrainer(HuggingfaceModel("gpt2"))

# Generate your text
generated = constrainer.generate("Prefix.\n", lowercase_sentence_constraint, max_len=10)
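Conceptually, constrained generation works by filtering candidate tokens: at each step, only tokens whose continuation keeps the constraint valid may be emitted, and generation stops once the constraint reports completion. The following is a minimal, self-contained sketch of that idea with a toy token ranking; the real Constrainer operates on actual model logits and tokenizers, not a fixed list:

```python
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # (is_valid, is_complete): valid while lowercase, complete once it ends with "."
    return (text.islower(), text.endswith("."))

def constrained_greedy(ranked_tokens: list[str], constraint, max_steps: int = 10) -> str:
    """Toy greedy decoder: at each step, append the highest-ranked token
    that keeps the text valid, stopping once the constraint is complete.
    Illustration only -- not the library's implementation."""
    text = ""
    for _ in range(max_steps):
        for tok in ranked_tokens:  # stand-in for the model's token ranking
            if constraint(text + tok)[0]:  # keep only valid continuations
                text += tok
                break
        else:
            break  # no token keeps the text valid
        if constraint(text)[1]:  # constraint satisfied and complete
            break
    return text

print(constrained_greedy(["WOW", "ok.", "ok"], lowercase_sentence_constraint))  # ok.
```

Note that "WOW" is never emitted, even though it is ranked first: any continuation containing it would violate the constraint.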

Extending and Customizing

To extend or customize the prompt structure, you can subclass the TextPrompter class. This allows you to modify the prompt generation process according to your specific requirements.
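To get a feel for what a prompter is responsible for, here is a hypothetical, library-independent sketch that turns function definitions into a prompt string. The class name and `prompt` method below are assumptions for illustration, not the actual TextPrompter interface; consult the documentation for the real subclassing contract:

```python
import json

# Hypothetical prompter sketch -- NOT the library's actual TextPrompter
# interface; it only illustrates the kind of prompt such a class might
# build from function definitions.
class SimpleFunctionPrompter:
    def prompt(self, user_query: str, functions: list[dict]) -> str:
        lines = ["You can call the following functions:"]
        for fn in functions:
            lines.append(f"- {fn['name']}: {fn.get('description', '')}")
            lines.append(f"  parameters: {json.dumps(fn['parameters'])}")
        lines.append(f"User: {user_query}")
        lines.append("Function call (JSON):")
        return "\n".join(lines)

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {"type": "object"},
    }
]
print(SimpleFunctionPrompter().prompt("What is the weather like today in Brooklyn?", functions))
```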
