Local LLM function calling
Overview
The local-llm-function-calling project constrains the generation of Hugging Face text generation models to a JSON schema and helps formulate prompts for function calls. It is similar to OpenAI's function calling feature, but unlike OpenAI's, it actually enforces the schema.
The project provides a Generator class that lets users generate text while ensuring compliance with the provided prompt and JSON schema, giving convenient control over the output of text generation models. It uses my json-schema-enforcer project as the schema enforcer.
Features
- Constrains the generation of Hugging Face text generation models to follow a JSON schema.
- Provides a mechanism for formulating prompts for function calls, enabling precise data extraction and formatting.
- Simplifies the text generation process through a user-friendly Generator class.
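To illustrate the constraining idea in miniature (this is a toy sketch, not the library's actual implementation): at each generation step, only continuations that keep the output a valid prefix of some schema-conforming string are allowed. The example below shows this for a simple enum constraint, using a fake "model preference" ranking over characters:

```python
# Toy illustration of constrained generation: only characters that keep the
# partial output a valid prefix of an allowed enum value may be emitted.
# This is NOT the library's real algorithm, just a sketch of the core idea.

def allowed_next_chars(partial: str, valid_values: list[str]) -> set[str]:
    """Characters that keep `partial` a prefix of at least one valid value."""
    return {
        v[len(partial)]
        for v in valid_values
        if v.startswith(partial) and len(v) > len(partial)
    }

ENUM = ["celsius", "fahrenheit"]

def constrained_greedy(model_ranking: list[str]) -> str:
    """Pick the model's highest-ranked character that stays schema-valid."""
    out = ""
    while out not in ENUM:
        allowed = allowed_next_chars(out, ENUM)
        out += next(c for c in model_ranking if c in allowed)
    return out

# Even if the "model" prefers other characters, the output stays in the enum.
print(constrained_greedy(list("abcdefghijklmnopqrstuvwxyz")))  # prints "celsius"
```

The real library applies the same filtering idea at the token level, against a full JSON schema rather than a single enum.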
Installation
To install the local-llm-function-calling library, use the following command:
pip install local-llm-function-calling
Usage
Here's a simple example demonstrating how to use local-llm-function-calling:
from local_llm_function_calling import Generator

# Define a function for the model to call
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Initialize the generator with the functions and a Hugging Face model
generator = Generator(functions, "gpt2")

# Generate a constrained function call from a prompt
function_call = generator.generate("What is the weather like today in Brooklyn?")
print(function_call)
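Because generation is schema-constrained, the arguments can be consumed without defensive parsing. Assuming the call yields a JSON string like the one below (a hypothetical output; the exact return shape and values depend on the model and prompter), downstream code might handle it like this:

```python
import json

# Hypothetical output string for illustration only; the actual result
# depends on the model, the prompt, and the library's return format.
function_call = '{"location": "Brooklyn, NY", "unit": "fahrenheit"}'

args = json.loads(function_call)

# Schema enforcement means required keys exist and enum fields hold
# allowed values, so these checks pass by construction.
assert "location" in args
assert args.get("unit") in (None, "celsius", "fahrenheit")
print(args["location"])
```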
Extending and Customizing
To extend or customize the prompt structure, you can subclass the TextPrompter class. This allows you to modify the prompt generation process according to your specific requirements.
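A subclass might look like the following sketch. Note that the real TextPrompter interface is not shown here, so a stand-in base class (with a hypothetical prompt method) is defined purely for illustration; the actual method names and signatures in the library may differ:

```python
# Stand-in base class for illustration only: the real TextPrompter's
# interface may differ from this hypothetical `prompt` method.
class TextPrompter:
    def prompt(self, user_prompt: str, functions: list[dict]) -> str:
        names = ", ".join(f["name"] for f in functions)
        return f"Functions: {names}\nUser: {user_prompt}\nCall:"

class InstructPrompter(TextPrompter):
    """Wraps the base prompt in an instruction-style template."""

    def prompt(self, user_prompt: str, functions: list[dict]) -> str:
        base = super().prompt(user_prompt, functions)
        return f"### Instruction\n{base}\n### Response\n"

prompter = InstructPrompter()
print(prompter.prompt("What's the weather?", [{"name": "get_current_weather"}]))
```

The point is simply that overriding the prompt-building method lets you adapt the template to your model's preferred format.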
Hashes for local_llm_function_calling-0.1.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 970c609a0ab45996c92f9c377ba996dae7b9c8ab68e3df40038166c61a313f25
MD5 | 262a0f6d806e166118e8ed040c00145c
BLAKE2b-256 | 4caa416b543b678494be5bf88d4c67394fb2df5fb965ea1ecfa86e12cc331a1b
Hashes for local_llm_function_calling-0.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | cd44e8ccaaba76decf05b2650f1e53c038454998594d23e7cfb1302acd1ece86
MD5 | d84475d908376a1374fca68503286051
BLAKE2b-256 | 07afc96bda2ee75b434d03edba12c58714c6262163edb8d77e07cb40643fc295