A guidance compatibility layer for llama-cpp-python
llama.cpp Guidance
The llama-cpp-guidance package provides an LLM client compatibility layer between llama-cpp-python and guidance.
Installation
The llama-cpp-guidance package can be installed using pip:

```shell
pip install llama-cpp-guidance
```
> [!NOTE]
> It is highly recommended that you follow the installation instructions for llama-cpp-python after installing llama-cpp-guidance to ensure that hardware acceleration is set up appropriately.
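For example, at the time of writing llama-cpp-python documents CUDA and Metal builds roughly like the following; the exact CMake flags belong to llama-cpp-python and change between releases, so treat this as a sketch and defer to its documentation:

```shell
# Rebuild llama-cpp-python with CUDA offloading enabled (flags taken from
# llama-cpp-python's install docs at the time of writing; check for current values).
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python

# On Apple Silicon, Metal acceleration is enabled in a similar way:
# CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```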
Basic Usage
Once installed, you can use the LlamaCpp class like any other guidance-compatible LLM class:
```python
from pathlib import Path
from llama_cpp_guidance.llm import LlamaCpp
import guidance

# Point guidance at a local GGUF model loaded through llama-cpp-python.
guidance.llm = LlamaCpp(
    model_path=Path("../path/to/llamacpp/model.gguf"),
    n_gpu_layers=1,
    n_threads=8,
)

# Define a guidance program with a generation slot and run it.
program = guidance(
    "The best thing about the beach is {{~gen 'best' temperature=0.7 max_tokens=10}}"
)
output = program()
print(output)
```
Running the program prints the filled-in template, for example:

The best thing about the beach is that there’s always something to do.
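Because LlamaCpp plugs into guidance's normal template machinery, other guidance constructs such as constrained selection should work the same way. The sketch below assumes the {{#select}} syntax from guidance itself, and the review/sentiment variable names are made up for illustration rather than taken from this package:

```python
from pathlib import Path
from llama_cpp_guidance.llm import LlamaCpp
import guidance

guidance.llm = LlamaCpp(
    model_path=Path("../path/to/llamacpp/model.gguf"),
    n_gpu_layers=1,
    n_threads=8,
)

# Constrain the model to one of a fixed set of answers using guidance's
# select block; the chosen option is stored in the 'sentiment' variable.
program = guidance(
    "Review: {{review}}\n"
    "Sentiment: {{#select 'sentiment'}}positive{{or}}negative{{or}}neutral{{/select}}"
)

output = program(review="The beach was crowded but the sunset was stunning.")
print(output["sentiment"])
```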