# llm-replicate

LLM plugin for models hosted on Replicate
## Installation
First, install the LLM command-line utility. Now install this plugin in the same environment as LLM:

```bash
llm install llm-replicate
```
## Configuration
You will need an API key from Replicate. You can obtain one from your Replicate account.

You can set that as an environment variable called `REPLICATE_API_TOKEN`, or add it to the `llm` set of saved keys using:

```bash
llm keys set replicate
Enter key: <paste key here>
```
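LLM resolves the key for you, but it can help to picture the lookup. The sketch below shows one plausible resolution order (environment variable first, falling back to a saved-keys file); both that ordering and the `keys.json` filename and flat JSON layout are assumptions for illustration, not the tool's documented storage format:

```python
import json
import os
from pathlib import Path
from typing import Optional


def resolve_replicate_token(keys_path: Path) -> Optional[str]:
    """Return the Replicate API token, preferring the environment
    variable over a saved-keys JSON file.

    NOTE: the precedence and the keys.json layout are illustrative
    assumptions, not LLM's actual documented behavior.
    """
    token = os.environ.get("REPLICATE_API_TOKEN")
    if token:
        return token
    if keys_path.exists():
        saved = json.loads(keys_path.read_text())
        return saved.get("replicate")
    return None
```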
To fetch and save details of the default collection of language models hosted on Replicate, run this:

```bash
llm replicate fetch-models
```
To add specific models that aren't listed in that collection, use the `llm replicate add` command.

For the Llama 2 model from a16z-infra/llama13b-v2-chat run this:

```bash
llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2
```
The `--chat` flag indicates that this is a chat model, which means it will be able to work with `-c` continue mode.
## Usage
To run a prompt against a model, pass its name or an alias to `llm -m`:

```bash
llm -m llama2 "Ten great names for a pet pelican"
```
```
Sure, here are ten great names for a pet pelican:
- Pelty
- Peanut
- Puddles
- Nibbles
- Fuzzy
- Gizmo
- Hank
- Luna
- Scooter
- Splishy
I hope these suggestions help you find the perfect name for your pet pelican! Do you have any other questions?
```
Chat models support continuing conversations, for example:

```bash
llm -c "Five more and make them more nautical"
```
```
Ahoy matey! Here be five more nautical-themed names for yer pet pelican:
- Captain Hook
- Anchoryn
- Seadog
- Plunder
- Pointe Pelican
I hope these suggestions help ye find the perfect name for yer feathered friend! Do ye have any other questions, matey?
```
Run `llm models list` to see the full list of models:

```bash
llm models list
```
You should see something like this:
```
Replicate: replicate-flan-t5-xl
Replicate: replicate-llama-7b
Replicate: replicate-gpt-j-6b
Replicate: replicate-dolly-v2-12b
Replicate: replicate-oasst-sft-1-pythia-12b
Replicate: replicate-stability-ai-stablelm-tuned-alpha-7b
Replicate: replicate-vicuna-13b
Replicate: replicate-replit-code-v1-3b
Replicate: replicate-replit-replit-code-v1-3b
Replicate: replicate-joehoover-falcon-40b-instruct (aliases: falcon)
Replicate (chat): replicate-a16z-infra-llama13b-v2-chat (aliases: llama2)
```
Then run a prompt through a specific model like this:

```bash
llm -m replicate-vicuna-13b "Five great names for a pet llama"
```
## Registering extra models
To register additional models that are not included in the default Language models collection, find their ID on Replicate and use the `llm replicate add` command.

For example, to add the joehoover/falcon-40b-instruct model, run this:

```bash
llm replicate add joehoover/falcon-40b-instruct \
  --alias falcon
```
This adds the model with the alias `falcon`. A model can have zero or more aliases.
Now you can run it like this:

```bash
llm -m replicate-joehoover-falcon-40b-instruct \
  "Three reasons to get a pet falcon"
```
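Note the registered model ID here: comparing the `add` commands with the names in `llm models list`, the ID appears to be the Replicate `owner/name` path with `/` replaced by `-` and a `replicate-` prefix. A sketch of that apparent convention (inferred from the examples, not taken from the plugin's source):

```python
def replicate_model_id(path: str) -> str:
    """Derive the llm model ID from a Replicate owner/name path,
    following the pattern visible in the examples (an inference,
    not the plugin's documented behavior)."""
    return "replicate-" + path.replace("/", "-")
```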
Or using the alias like this:

```bash
llm -m falcon "Three reasons to get a pet falcon"
```
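Something has to map `falcon` back to `replicate-joehoover-falcon-40b-instruct` when `-m` is resolved. A hypothetical sketch of that alias lookup, where the `models` dict structure is invented for illustration and is not the plugin's real data model:

```python
from typing import Dict, List, Optional


def resolve_model(name: str, models: Dict[str, List[str]]) -> Optional[str]:
    """Resolve a model ID or one of its aliases to the canonical ID.

    `models` maps model ID -> list of aliases (an illustrative
    structure, not the plugin's actual storage format).
    """
    if name in models:
        return name
    for model_id, aliases in models.items():
        if name in aliases:
            return model_id
    return None
```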
You can edit the list of models you have registered using the default `$EDITOR` like this:

```bash
llm replicate edit-models
```
If you register a model using the `--chat` option that model will be treated slightly differently. Prompts sent to the model will be formatted like this:

```
User: user input here
Assistant:
```
If you use `-c` conversation mode the prompt will include previous messages in the conversation, like this:
```
User: Ten great names for a pet pelican
Assistant: Sure, here are ten great names for a pet pelican:
1. Pelty
2. Peanut
3. Puddles
4. Nibbles
5. Fuzzy
6. Gizmo
7. Hank
8. Luna
9. Scooter
10. Splishy
I hope these suggestions help you find the perfect name for your pet pelican! Do you have any other questions?
User: Five more and make them more nautical
Assistant:
```
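The transcript format above can be reproduced with a small helper. This is a sketch of the formatting just described, not the plugin's internal code; the `history` structure of (prompt, response) pairs is an assumption for illustration:

```python
from typing import List, Tuple


def build_chat_prompt(history: List[Tuple[str, str]], user_input: str) -> str:
    """Format a conversation as the User:/Assistant: transcript shown above.

    `history` holds (prompt, response) pairs from earlier turns; the final
    line is a bare "Assistant:" so the model continues from there.
    """
    lines = []
    for prompt, response in history:
        lines.append(f"User: {prompt}")
        lines.append(f"Assistant: {response}")
    lines.append(f"User: {user_input}")
    lines.append("Assistant:")
    return "\n".join(lines)
```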
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:

```bash
cd llm-replicate
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:

```bash
pip install -e '.[test]'
```
To run the tests:

```bash
pytest
```