
embetter

"Just a bunch of useful embeddings to get started quickly."


Embetter implements scikit-learn compatible embeddings for computer vision and text. It should make it very easy to quickly build proofs of concept using scikit-learn pipelines and, in particular, should help with bulk labelling. It's also meant to play nicely with bulk and scikit-partial, but it can be used together with your favorite ANN solution, like weaviate, chromadb, or hnswlib, as well.

Install

You can install via pip.

python -m pip install embetter

Many of the embeddings are optional, depending on your use case, so you can also install only the extras that you need:

python -m pip install "embetter[text]"
python -m pip install "embetter[sentence-tfm]"
python -m pip install "embetter[spacy]"
python -m pip install "embetter[sense2vec]"
python -m pip install "embetter[gensim]"
python -m pip install "embetter[bpemb]"
python -m pip install "embetter[vision]"
python -m pip install "embetter[all]"

API Design

This is the API that is currently implemented:

# Helpers to grab text or image from pandas column.
from embetter.grab import ColumnGrabber

# Representations/Helpers for computer vision
from embetter.vision import ImageLoader, TimmEncoder, ColorHistogramEncoder

# Representations for text
from embetter.text import SentenceEncoder, Sense2VecEncoder, BytePairEncoder, spaCyEncoder, GensimEncoder

# Representations from multi-modal models
from embetter.multi import ClipEncoder

# Finetuning components 
from embetter.finetune import ForwardFinetuner, ContrastiveFinetuner

# External embedding providers, typically needs an API key
from embetter.external import CohereEncoder, OpenAIEncoder

All of these components are scikit-learn compatible, which means you can apply them as you normally would in a scikit-learn pipeline. Just be aware that these components are stateless: they don't require training, because they all wrap pretrained tools.
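To illustrate what "stateless" means here, the sketch below shows the general shape of such a transformer (a toy example, not embetter's actual source): `fit` is a no-op and `transform` does all the work.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class ToyLengthEncoder(TransformerMixin, BaseEstimator):
    """Toy stateless transformer: encodes each string as its length."""

    def fit(self, X, y=None):
        # Stateless: nothing to learn, the "model" is already fixed.
        return self

    def transform(self, X, y=None):
        # One row per input, one feature: the character length.
        return np.array([[len(x)] for x in X], dtype=float)


enc = ToyLengthEncoder()
print(enc.fit_transform(["hi", "hello"]).shape)  # (2, 1)
```

Because `fit` learns nothing, calling `transform` directly on fresh data gives the same result as `fit_transform`, which is exactly why these components slot into pipelines without a training step of their own.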

Text Example

import pandas as pd
from sklearn.pipeline import make_pipeline 
from sklearn.linear_model import LogisticRegression

from embetter.grab import ColumnGrabber
from embetter.text import SentenceEncoder

# This pipeline grabs the `text` column from a dataframe,
# which then gets fed into Sentence-Transformers' all-MiniLM-L6-v2.
text_emb_pipeline = make_pipeline(
  ColumnGrabber("text"),
  SentenceEncoder('all-MiniLM-L6-v2')
)

# This pipeline can also be trained to make predictions, using
# the embedded features. 
text_clf_pipeline = make_pipeline(
  text_emb_pipeline,
  LogisticRegression()
)

dataf = pd.DataFrame({
  "text": ["positive sentiment", "super negative"],
  "label_col": ["pos", "neg"]
})
X = text_emb_pipeline.fit_transform(dataf, dataf['label_col'])
text_clf_pipeline.fit(dataf, dataf['label_col']).predict(dataf)
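For context, `ColumnGrabber` conceptually just pulls a single dataframe column out as a plain list that downstream encoders can consume. A toy equivalent (illustrative only, not embetter's source) would be:

```python
import pandas as pd


def grab_column(dataf: pd.DataFrame, colname: str) -> list:
    # Mimic what a column-grabbing transformer does in transform():
    # turn one dataframe column into a plain list of values.
    return [value for value in dataf[colname]]


df = pd.DataFrame({"text": ["positive sentiment", "super negative"]})
print(grab_column(df, "text"))  # ['positive sentiment', 'super negative']
```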

Image Example

The goal of the API is to allow pipelines like this:

import pandas as pd
from sklearn.pipeline import make_pipeline 
from sklearn.linear_model import LogisticRegression

from embetter.grab import ColumnGrabber
from embetter.vision import ImageLoader, TimmEncoder

# This pipeline grabs the `img_path` column from a dataframe,
# turns the paths into `PIL.Image` objects, and feeds those
# into MobileNetV2 via TorchImageModels (timm).
image_emb_pipeline = make_pipeline(
  ColumnGrabber("img_path"),
  ImageLoader(convert="RGB"),
  TimmEncoder("mobilenetv2_120d")
)

dataf = pd.DataFrame({
  "img_path": ["tests/data/thiscatdoesnotexist.jpeg"]
})
image_emb_pipeline.fit_transform(dataf)
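The `ColorHistogramEncoder` imported earlier offers a lighter alternative to a deep model: it represents an image by its color distribution. Conceptually (a toy sketch on a synthetic array, not embetter's source), that looks like:

```python
import numpy as np


def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    # img: H x W x 3 uint8 array; concatenate one histogram per channel.
    hists = [
        np.histogram(img[..., channel], bins=bins, range=(0, 255))[0]
        for channel in range(3)
    ]
    return np.concatenate(hists)


rng = np.random.default_rng(0)
fake_img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(color_histogram(fake_img).shape)  # (24,) -> 3 channels x 8 bins
```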

Batched Learning

All of the encoding tools you've seen here are also compatible with scikit-learn's partial_fit mechanism. That means you can leverage scikit-partial to build pipelines that handle out-of-core datasets.
