qdrant-txtai

An integration of the Qdrant ANN vector database backend with txtai

txtai simplifies building AI-powered semantic search applications using Transformers. It leverages neural embeddings and their properties to encode high-dimensional data in a lower-dimensional space, and it finds similar objects based on the proximity of their embeddings.

Implementing such an application in real-world use cases, however, requires storing the embeddings efficiently, ideally in a vector database like Qdrant. Qdrant offers not only a powerful engine for neural search but also lets you set up a whole cluster once your data no longer fits on a single machine. It is production-grade and can be launched easily with Docker.

Combining the ease of use of txtai with Qdrant's performance lets you build production-ready semantic search applications much faster than before.

Installation

The library can be installed with pip:

pip install qdrant-txtai

Usage

Running the txtai application with Qdrant as vector storage requires a running Qdrant instance. That can be done easily with Docker:

docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant:v0.10.2

The txtai application can then be configured either programmatically or via a YAML file.

Programmatically

from txtai.embeddings import Embeddings

# The configuration dict is passed directly to Embeddings;
# "path" selects the encoder model, "backend" the Qdrant ANN backend.
embeddings = Embeddings({
    "path": "sentence-transformers/all-MiniLM-L6-v2",
    "backend": "qdrant_txtai.ann.qdrant.Qdrant",
})
embeddings.index([(0, "Correct", None), (1, "Not what we hoped", None)])
result = embeddings.search("positive", 1)
print(result)

Via YAML configuration

# app.yml
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  backend: qdrant_txtai.ann.qdrant.Qdrant

With the configuration in place, start the txtai API and query it over HTTP:

CONFIG=app.yml uvicorn "txtai.api:app"
curl -X GET "http://localhost:8000/search?query=positive"

Configuration properties

qdrant-txtai allows you to configure both the connection details and some internal properties of the vector collection, which may impact both speed and accuracy. Please refer to the Qdrant docs for the meaning of each property.

The example below presents all the available options:

embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  backend: qdrant_txtai.ann.qdrant.Qdrant
  metric: l2 # allowed values: l2 / cosine / ip
  qdrant:
    host: localhost
    port: 6333
    grpc_port: 6334
    prefer_grpc: true
    collection: CustomCollectionName
    hnsw:
      m: 8
      ef_construct: 256
      full_scan_threshold:
      ef_search: 512
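The same options can also be passed programmatically as a plain dict, mirroring the YAML keys one-to-one. A minimal sketch (the collection name and HNSW values below are illustrative, not defaults):

```python
# Programmatic equivalent of the YAML configuration above.
# "CustomCollectionName" and the HNSW values are example settings,
# chosen here purely for illustration.
config = {
    "path": "sentence-transformers/all-MiniLM-L6-v2",
    "backend": "qdrant_txtai.ann.qdrant.Qdrant",
    "metric": "cosine",  # allowed values: l2 / cosine / ip
    "qdrant": {
        "host": "localhost",
        "port": 6333,       # REST port
        "grpc_port": 6334,  # gRPC port, used when prefer_grpc is True
        "prefer_grpc": True,
        "collection": "CustomCollectionName",
        "hnsw": {
            "m": 8,
            "ef_construct": 256,
            "ef_search": 512,
        },
    },
}

# Passing this dict to txtai's Embeddings would apply these settings
# when the index is created and searched:
#   embeddings = Embeddings(config)
```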

