
OpenInference AWS Bedrock Instrumentation

Python autoinstrumentation library for AWS Bedrock calls made using boto3.

This package implements OpenInference tracing for invoke_model calls made with a boto3 bedrock-runtime client. The resulting traces are fully OpenTelemetry-compatible and can be exported to an OpenTelemetry collector for viewing, such as Arize Phoenix.


Installation

pip install openinference-instrumentation-bedrock

Quickstart

In a notebook environment (Jupyter, Colab, etc.), install openinference-instrumentation-bedrock, arize-phoenix, and boto3.

You can test out this quickstart guide in Google Colab!

pip install openinference-instrumentation-bedrock arize-phoenix boto3

Ensure that boto3 is configured with AWS credentials.
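boto3 resolves credentials from the standard AWS sources (environment variables, the ~/.aws/credentials file, or an attached IAM role). One option is to export them as environment variables; the values below are placeholders, and the region is just an example:

```shell
# Placeholder values -- substitute your own credentials and region
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"
```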

First, import the dependencies required to autoinstrument AWS Bedrock and set up Phoenix as a collector for OpenInference traces.

import json
from urllib.parse import urljoin

import boto3
import phoenix as px

from openinference.instrumentation.bedrock import BedrockInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

Next, we'll start a Phoenix server and point the tracer provider at it.

px.launch_app()
session_url = px.active_session().url
phoenix_otlp_endpoint = urljoin(session_url, "v1/traces")
phoenix_exporter = OTLPSpanExporter(endpoint=phoenix_otlp_endpoint)
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter=phoenix_exporter))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)
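The OTLP endpoint above is derived with urljoin, which appends the v1/traces path to the session URL. A quick standalone check of that behavior (the session URL here is an example value; Phoenix serves on port 6006 by default):

```python
from urllib.parse import urljoin

# Example session URL -- Phoenix's default local address
session_url = "http://localhost:6006/"
print(urljoin(session_url, "v1/traces"))  # http://localhost:6006/v1/traces
```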

Instrumenting boto3 is simple:

BedrockInstrumentor().instrument()

Now all calls to invoke_model are instrumented and can be viewed in the Phoenix UI.

session = boto3.session.Session()
client = session.client("bedrock-runtime")
prompt = b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}'
response = client.invoke_model(modelId="anthropic.claude-v2", body=prompt)
response_body = json.loads(response.get("body").read())
print(response_body["completion"])
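The body passed to invoke_model above is a raw JSON bytes literal; the same payload can be built with json.dumps, which avoids hand-escaping quotes (a minimal sketch of the same request body):

```python
import json

# Build the same Anthropic Claude v2 request body programmatically
payload = {
    "prompt": "Human: Hello there, how are you? Assistant:",
    "max_tokens_to_sample": 1024,
}
body = json.dumps(payload)
print(body)
```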

More Info

More details about tracing with OpenInference and Phoenix can be found in the Phoenix documentation.
