A guidance compatibility layer for llama-cpp-python
llama.cpp Guidance
The llama-cpp-guidance package provides an LLM client compatibility layer between llama-cpp-python and guidance.
Installation
The llama-cpp-guidance package can be installed using pip.
pip install llama-cpp-guidance
⚠️ It is highly recommended that you follow the installation instructions for llama-cpp-python after installing llama-cpp-guidance, to ensure that hardware acceleration is set up appropriately.
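For example, on a machine with an NVIDIA GPU, a CUDA-enabled reinstall of llama-cpp-python might look like the line below. This is only a sketch: the exact CMAKE_ARGS flag depends on your hardware and your llama-cpp-python version, so consult the llama-cpp-python documentation for the option that matches your setup.

CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python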
Basic Usage
Once installed, you can use the LlamaCpp class like any other guidance-compatible LLM class.
from pathlib import Path
from llama_cpp_guidance.llm import LlamaCpp
import guidance
guidance.llm = LlamaCpp(
    model_path=Path("../path/to/llamacpp/model.gguf"),
    n_gpu_layers=1,
    n_threads=8
)

program = guidance(
    "The best thing about the beach is {{~gen 'best' temperature=0.7 max_tokens=10}}"
)
output = program()
print(output)
The best thing about the beach is that there’s always something to do.
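The same LlamaCpp instance can drive guidance's other template commands as well. The snippet below is an illustrative sketch (not taken from the package documentation) that assumes the guidance.llm setup from the example above and uses a {{#select}} block to constrain the model to a fixed set of options.

# Assumes guidance.llm has already been set to a LlamaCpp instance as above.
# The {{#select}} block restricts generation to the listed choices and stores
# the result in the 'season' variable of the program output.
program = guidance(
    "The best season for a beach trip is {{#select 'season'}}summer{{or}}winter{{/select}}."
)
output = program()
print(output["season"])  # e.g. "summer"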