Algorithms for monitoring and explaining machine learning models


Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.

Goals

  • Provide high quality reference implementations of black-box ML model explanation and interpretation algorithms
  • Define a consistent API for interpretable ML methods
  • Support multiple use cases (e.g. tabular, text and image data classification, regression)

Installation

Alibi can be installed from PyPI:

pip install alibi

This will install alibi with all its dependencies:

  attrs
  beautifulsoup4
  numpy
  Pillow
  pandas
  prettyprinter
  requests
  scikit-learn
  scikit-image
  scipy
  shap
  spacy
  tensorflow

To run all the example notebooks, you may additionally run pip install alibi[examples] which will install the following:

  Keras
  seaborn
  xgboost

Supported algorithms

Model explanations

These algorithms provide instance-specific (sometimes also called local) explanations of ML model predictions. Given a single instance and a model prediction, they aim to answer the question "Why did my model make this prediction?" The following algorithms all work with black-box models, meaning that the only requirement is access to a prediction function (which could be an API endpoint for a model in production).
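
As a rough illustration of this black-box workflow, the sketch below (not part of this README) wraps a scikit-learn classifier trained on the Iris dataset and passes its prediction function to the Anchors explainer for tabular data; the model choice is an illustrative assumption, and exact keyword arguments and the fields of the returned explanation may differ between alibi versions.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from alibi.explainers import AnchorTabular

# Illustrative model: any classifier exposing a prediction function would do.
data = load_iris()
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(data.data, data.target)

# The explainer only sees this black-box prediction function.
predict_fn = lambda x: clf.predict_proba(x)

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(data.data)  # anchors for tabular data need a training set for sampling

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation)  # includes the anchor (a rule), its precision and coverage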

The following table summarizes the capabilities of the current algorithms:

Explainer                 | Model types         | Classification | Categorical data | Tabular | Text | Images | Need training set
Anchors                   | black-box           | ✔              | ✔                | ✔       | ✔    | ✔      | For Tabular
CEM                       | black-box, TF/Keras | ✔              |                  | ✔       |      | ✔      | Optional
Counterfactual Instances  | black-box, TF/Keras | ✔              |                  | ✔       |      | ✔      | No
Kernel SHAP               | black-box           | ✔              | ✔                | ✔       |      |        | Yes
Prototype Counterfactuals | black-box, TF/Keras | ✔              | ✔                | ✔       |      | ✔      | Optional

Model confidence metrics

These algorithms provide instance-specific scores measuring the model's confidence in a particular prediction.

Algorithm         | Model types | Classification | Regression | Categorical data | Tabular | Text | Images | Need training set
Trust Scores      | black-box   | ✔              |            |                  | ✔       | ✔(1) | ✔(2)   | Yes
Linearity Measure | black-box   | ✔              | ✔          |                  | ✔       |      | ✔      | Optional

(1) Depending on model

(2) May require dimensionality reduction
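
As a minimal sketch of how these confidence scores are used in practice (not part of this README; the scikit-learn model is an illustrative assumption and argument names may vary between alibi versions):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from alibi.confidence import TrustScore

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Illustrative classifier whose predictions we want to assess.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

ts = TrustScore()
ts.fit(X_train, y_train, classes=3)  # build class-wise nearest-neighbour structures

# Higher scores indicate predictions that agree more with the training data geometry.
scores, closest_classes = ts.score(X_test, y_pred, k=2)
print(scores[:5])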

Example outputs

Anchor method applied to the InceptionV3 model trained on ImageNet:

[Images: original instance (prediction: Persian Cat) and its anchor explanation]
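
Roughly, this kind of image explanation can be produced with the AnchorImage explainer. The sketch below is not taken from this README: the Keras InceptionV3 model, the preprocessing and the random stand-in image are illustrative assumptions, and keyword arguments may differ between alibi versions.

import numpy as np
import tensorflow as tf

from alibi.explainers import AnchorImage

# Illustrative model choice: a pretrained InceptionV3 classifier (downloads weights).
model = tf.keras.applications.InceptionV3(weights='imagenet')
preprocess = tf.keras.applications.inception_v3.preprocess_input

# Black-box prediction function over a batch of raw images in [0, 255].
predict_fn = lambda x: model.predict(preprocess(x.copy()))

image_shape = (299, 299, 3)
explainer = AnchorImage(predict_fn, image_shape, segmentation_fn='slic',
                        segmentation_kwargs={'n_segments': 15, 'compactness': 20})

# Stand-in for a real image; in practice use an actual 299x299 RGB image.
image = np.random.uniform(0, 255, size=image_shape).astype(np.float32)
explanation = explainer.explain(image, threshold=0.95, p_sample=0.5)
# The explanation contains the anchor: the superpixels sufficient for the prediction.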

Contrastive Explanation method applied to a CNN trained on MNIST:

[Images: original instance (prediction: 4), pertinent negative (prediction: 9) and pertinent positive (prediction: 4)]

Trust scores applied to a softmax classifier trained on MNIST:

[Image: trust score results on MNIST]

Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

@software{alibi,
  title = {Alibi: Algorithms for monitoring and explaining machine learning models},
  author = {Klaise, Janis and Van Looveren, Arnaud and Vacanti, Giovanni and Coca, Alexandru},
  url = {https://github.com/SeldonIO/alibi},
  version = {0.4.0},
  date = {2020-03-20},
}
