# llm-evals-plugin

Run evals against prompts using LLM.
**Very early alpha**: everything is likely to change.
## Installation
Install this plugin in the same environment as LLM.
```bash
llm install llm-evals-plugin
```
## Usage
The interface is still being designed: see issue #1 for the current discussion. A rough sketch of what a run might look like is shown below.
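As a minimal sketch only: this assumes evals are defined in a YAML file and run with an `llm evals` subcommand. Both the subcommand and the `checks` schema (including the `iexact` check name) are early-alpha assumptions taken from the prototype design and are likely to change.

```bash
# Hypothetical example - the eval file format and the
# "llm evals" subcommand are early-alpha and may change.
cat > sentiment-eval.yml <<'EOF'
name: Basic sentiment
system: |
  Reply with exactly one word: positive, neutral or negative
prompt: |
  Sentiment of: pelicans are the best birds
checks:
- iexact: positive
EOF

# Run the eval against the default LLM model
llm evals sentiment-eval.yml
```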
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:

```bash
cd llm-evals-plugin
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```
To run the tests:

```bash
pytest
```