
Tools for testing, debugging, and evaluating LLM features.


Baserun


Baserun is the testing and observability platform for LLM apps.

Quick Start

1. Install Baserun

pip install baserun

2. Generate an API key

Create an account at https://baserun.ai. Then generate an API key for your project in the settings tab. Set it as an environment variable:

export BASERUN_API_KEY="your_api_key_here"

Or set baserun.api_key to its value:

baserun.api_key = "br-..."
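As a minimal sketch of wiring this up at startup (the BASERUN_API_KEY variable name comes from the step above; the error handling is just one way to fail fast if the key is missing):

import os
import baserun

# Prefer the environment variable; assign baserun.api_key explicitly otherwise.
api_key = os.environ.get("BASERUN_API_KEY")
if api_key is None:
    raise RuntimeError("Set BASERUN_API_KEY before starting the app")
baserun.api_key = api_key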

3. Start testing

Use our pytest plugin to start testing with Baserun immediately. By default, all OpenAI and Anthropic requests are automatically logged.

# test_module.py

import openai

def test_paris_trip():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {
                "role": "user",
                "content": "What are three activities to do in Paris?"
            }
        ],
    )
    
    assert "Eiffel Tower" in response['choices'][0]['message']['content']

To run the test and log the results to Baserun:

pytest --baserun test_module.py
...
========================Baserun========================
Test results available at: https://baserun.ai/runs/<id>
=======================================================

Production usage

You can also use Baserun for production observability. Call baserun.init() during your application's startup, and add the @baserun.trace decorator to any function you want to observe (e.g. a request/response handler). For example:

import sys
import openai
import baserun


@baserun.trace
def answer_question(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    baserun.init()
    print(answer_question(sys.argv[-1]))
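The same pattern works inside a web server: call baserun.init() once at startup and decorate each handler you want traced. A minimal sketch using Flask (the Flask app and the /ask route are illustrative assumptions, not part of Baserun):

import openai
import baserun
from flask import Flask, request

app = Flask(__name__)
baserun.init()  # initialize once at application startup


@app.route("/ask", methods=["POST"])
@baserun.trace
def ask():
    # The traced handler and all LLM requests inside it are logged to Baserun.
    question = request.get_json()["question"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return {"answer": response["choices"][0]["message"]["content"]}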

Documentation

For a deeper dive on all capabilities and more advanced usage, please refer to our Documentation.

Contributing

Contributions to baserun-py are welcome! Below are some guidelines to help you get started.

Dependencies

Install the dependencies:

pip install -r requirements.txt

Install the dev dependencies with:

pip install -r requirements-dev.txt

Tests

You can run the tests using pytest. Note that the remote server is mocked under pytest, so network requests are not actually made to Baserun's backend.

If you want to emulate production tracing, we have a utility for that:

python tests/testing_functions.py {function_to_test}

Take a look at the list of functions in that file: any function with the @baserun.trace decorator can be used.

gRPC and Protobuf

If you're making changes to baserun.proto, you'll need to compile those changes. Run the following command:

python -m grpc_tools.protoc -Ibaserun --python_out=baserun --pyi_out=baserun --grpc_python_out=baserun baserun/v1/baserun.proto
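With the include path and output flags above, protoc writes the generated modules into the baserun/v1 package, so (assuming the default protoc naming scheme) the stubs can be imported as:

# Generated by the command above (default protoc naming):
#   baserun/v1/baserun_pb2.py        - message classes
#   baserun/v1/baserun_pb2_grpc.py   - gRPC service stubs
#   baserun/v1/baserun_pb2.pyi       - type hints
from baserun.v1 import baserun_pb2, baserun_pb2_grpc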

A Note on Breaking Changes

Be cautious when making breaking changes to protobuf definitions. They can affect backward compatibility and require corresponding server-side changes, so be sure to discuss them with our maintainers first.

License

MIT License
