
llm-llamafile

PyPI Changelog Tests License

Access llamafile localhost models via LLM

Installation

Install this plugin in the same environment as LLM.

llm install llm-llamafile

Usage

Make sure you have a llamafile running on localhost, serving an OpenAI-compatible API endpoint on port 8080.
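
If you want to confirm that the endpoint is reachable before involving the plugin, a quick request like the one below works. This is only a sketch using the Python standard library: it assumes llamafile's default port of 8080 and the usual OpenAI-style /v1/chat/completions route.

import json
import urllib.request

# Minimal OpenAI-style chat completion request against the local llamafile
payload = {
    "model": "llamafile",
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])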

You can then use llm to interact with that model like so:

llm -m llamafile "3 neat characteristics of a pelican"
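
The same model is also available from LLM's Python API. A short sketch, assuming the plugin is installed and the llamafile server is running; llm.get_model() and response.text() are LLM's documented Python API, and "llamafile" is the model ID used above.

import llm

# The plugin registers the model under the ID "llamafile"
model = llm.get_model("llamafile")
response = model.prompt("3 neat characteristics of a pelican")
print(response.text())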

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-llamafile
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest
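
For orientation while developing, the sketch below shows one way an LLM plugin can register a model that proxies an OpenAI-compatible localhost endpoint. It is not the plugin's actual source: the class name is illustrative, it assumes the openai client library and llamafile's default port 8080, and only the register_models hook and the llm.Model interface come from LLM's documented plugin API.

import llm
from openai import OpenAI


class Llamafile(llm.Model):
    model_id = "llamafile"
    can_stream = True

    def execute(self, prompt, stream, response, conversation):
        # llamafile ignores the API key, but the openai client requires one
        client = OpenAI(
            base_url="http://localhost:8080/v1",
            api_key="sk-no-key-required",
        )
        completion = client.chat.completions.create(
            model="llamafile",
            messages=[{"role": "user", "content": prompt.prompt}],
            stream=stream,
        )
        if stream:
            for chunk in completion:
                delta = chunk.choices[0].delta.content
                if delta:
                    yield delta
        else:
            yield completion.choices[0].message.content


@llm.hookimpl
def register_models(register):
    register(Llamafile())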
