Created for ONS. Proof-of-concept mmap'd Rust word2vec implementation linked with category matching

bonn-py

NLP Category-Matching tools

A Rust microservice to match queries on the ONS Website to groupings in the ONS taxonomy

Getting started

Set up taxonomy.json

This should be adapted from taxonomy.json.example and placed in the root directory.

Download or create embeddings

These are most simply sourced as pretrained fifu models, but they can also be generated dynamically using the embedded finalfusion libraries.

To build wheels for distribution, use:

make

Manual building

Quick Local Setup

  1. Set up the .env file: $ cp .env.local .env

  2. Run make wheels

  3. Make sure you have placed taxonomy.json in the root folder (this should be obtained from ONS).

  4. [TODO: genericize] You need an Elasticsearch container forwarded to port 9200 (the port can be customized in .env) with a dump matching the appropriate schema; the README at https://gitlab.com/flaxandteal/onyx/dp-search-api explains how to set up Elasticsearch.

Install finalfusion utils

cd core
RUSTFLAGS="-C link-args=-lcblas -llapack" cargo install finalfusion-utils --features=opq

Optional: Convert the model to quantized fifu format

Note: if you try to use the full wiki bin, you'll need about 128GB of RAM...

finalfusion quantize -f fasttext -q opq <fasttext.bin> fasttext.fifu.opq

Install deps and build

poetry shell
cd core
poetry install
cd ../api
poetry install
exit

Run

poetry run python -c "from bonn import FfModel; FfModel('test_data/wiki.en.fifu').eval('Hello')"

Algorithm

The following requirements were identified:

  • Fast response to live requests
  • Low running resource requirements, as far as possible
  • Ability to limit the risk of unintended bias in results, and to make results explainable
  • Minimal preprocessing of data (at least for the first version)
  • Non-invasive - ensuring that the system can enhance existing work by ONS teams, with minimal changes required to incorporate it
  • Runs effectively and reproducibly in ONS workflows

We found that the most effective approach was to use the standard Wikipedia unstructured word2vec model as the ML basis.

An additional advantage is that we have been able to prototype incorporating other-language category matching into the algorithm. Further work is required here, including manual review by native speakers, and initial results suggest that a larger language corpus would be needed for training.

Using finalfusion libraries in Rust enables mmapping for memory efficiency.

Category Vectors

A bag of words is formed to make a vector for each category: a weighted average of the term vectors, weighted according to the attribute contributing each term:

Grouping                             | Score basis
-------------------------------------|---------------------------------------------------------------------
Category (top-level)                 | Literal words within title
Subcategory (second-level)           | Literal words within title
Subsubcategory (third-level)         | Literal words within title
Related words across whole category  | Common thematic words across all datasets within the category
Related words across subsubcategory  | Common thematic words across all datasets within the subsubcategory

To build a weighted bag of words, the system finds thematically-distinctive words occurring in the titles and descriptions of the datasets present in each category, according to the taxonomy. A word in a dataset description counts as "thematically distinctive" if its similarity to terms in the category title exceeds a threshold.

These category vectors can then be compared to search queries word by word, obtaining a score for each taxonomy entry for a given phrase.
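
As a rough illustration of this construction and scoring, here is a minimal numpy sketch. It is not the actual bonn implementation: the function names, weights, and the similarity threshold are assumptions.

# Minimal sketch of category-vector construction and query scoring.
# The weights and the 0.5 threshold are illustrative assumptions,
# not bonn's actual values.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_category_vector(embed, title_words, dataset_words, threshold=0.5):
    """Weighted-average vector for one category.

    embed: word -> np.ndarray lookup (e.g. backed by an mmap'd fifu model).
    title_words: literal words from the category/subcategory titles.
    dataset_words: words from dataset titles/descriptions in the category.
    """
    title_vecs = [embed(w) for w in title_words]
    bag = [(v, 1.0) for v in title_vecs]  # title words: full weight
    for word in dataset_words:
        v = embed(word)
        # "Thematic distinctiveness": keep only dataset words that are
        # sufficiently similar to some term in the category title.
        if max(cosine(v, t) for t in title_vecs) > threshold:
            bag.append((v, 0.5))  # related words: lower weight (assumed)
    vecs, weights = zip(*bag)
    return np.average(np.stack(vecs), axis=0, weights=weights)

def score_query(embed, query_words, category_vector):
    """Score a search phrase against one category, word by word."""
    sims = [cosine(embed(w), category_vector) for w in query_words]
    return sum(sims) / len(sims)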

Scoring Adjustment

In addition to the direct cosine similarity of these vectors, we:

  • remove any stopwords from the search scoring, along with certain additional words that should not affect the category matching (“data”, “statistics”, “measure(s)”)
  • apply an overall significance boost for a category, using the magnitude of the average word vector for its bag as a proxy for how “significant” a match with a query phrase is (so categories that match overly frequently, such as “population”, are slightly deprioritized)
  • enhance or reduce the contribution from each word in the query based on its commonality across categories.

To do the last, we make a global count of (lemmatized) words appearing in dataset descriptions and titles across all categories, and deprioritize common terms within the bag according to an exponential decay function. This allows us to rely more heavily on words that strongly signpost a category (such as “education” or “school”) without being confounded by words that many categories contain (such as “price” or “economic”).
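
A minimal sketch of these adjustments, reusing cosine() and the embed lookup from the sketch above; the stopword set, decay constant, and the multiplicative combination are assumptions for illustration, not bonn's actual values.

# Sketch of the three scoring adjustments described above.
import math
import numpy as np

EXTRA_STOPWORDS = {"data", "statistics", "measure", "measures"}

def word_weight(word, global_counts, decay=0.01):
    """Down-weight query words that are common across all categories,
    using an exponential decay in the word's global (lemmatized) count."""
    if word in EXTRA_STOPWORDS:
        return 0.0
    return math.exp(-decay * global_counts.get(word, 0))

def adjusted_score(embed, query_words, category_vector, global_counts):
    """Cosine score with stopword removal, commonality decay, and a
    significance boost from the magnitude of the category's average
    word vector (assumed multiplicative here)."""
    weighted = []
    for word in query_words:
        w = word_weight(word, global_counts)
        if w > 0:
            weighted.append(w * cosine(embed(word), category_vector))
    if not weighted:
        return 0.0
    boost = np.linalg.norm(category_vector)  # significance proxy
    return boost * sum(weighted) / len(weighted)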

Once per-category scores for a search phrase are obtained, we filter them based on:

  • appearance thresholds, to ensure we only return matches over a minimum viable score;
  • a signal-to-noise (SNR) filter that returns either a small number of notably high-scoring categories or a larger group of less-distinguishable top scorers, according to a supplied SNR ratio (see the sketch below).
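
A minimal sketch of this filtering step, under the assumption that the SNR cut compares each score against the mean of the scores below it; the cutoff logic and parameter values are illustrative, not bonn's actual behaviour.

# Sketch of the threshold + signal-to-noise filtering.
def filter_scores(scores, min_score=0.3, snr=2.0):
    """scores: {category: score}. Drop anything below the appearance
    threshold, then cut the ranking where one score stands out from
    the remainder by at least the supplied SNR ratio."""
    ranked = sorted(
        ((c, s) for c, s in scores.items() if s >= min_score),
        key=lambda cs: cs[1],
        reverse=True,
    )
    results = []
    for i, (category, score) in enumerate(ranked):
        results.append((category, score))
        rest = [s for _, s in ranked[i + 1:]]
        # If the remaining scores look like noise relative to this one,
        # stop: we return a small set of notably high scorers. If scores
        # stay close together, the loop keeps a larger group instead.
        if rest and score / (sum(rest) / len(rest)) >= snr:
            break
    return results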

License

Prepared by Flax & Teal Limited for ONS Alpha project. Copyright © 2022, Office for National Statistics (https://www.ons.gov.uk)

Released under MIT license, see LICENSE for details.
