Calibrated Explanations (Documentation)

Extract calibrated explanations from machine learning models.

calibrated-explanations is a Python package for the Calibrated Explanations method, supporting both classification and regression. The method is based on Venn-Abers (classification) and Conformal Predictive Systems (regression) and has the following characteristics:

  • Fast, reliable, stable and robust feature importance explanations.
  • Calibration of the underlying model to ensure that predictions reflect reality.
  • Uncertainty quantification of the prediction from the underlying model and the feature importance weights.
  • Rules with straightforward interpretation in relation to the feature weights.
  • Possibility to generate counterfactual rules with uncertainty quantification of the expected predictions.
  • Conjunctive rules conveying the joint contribution of features.

Below is an example of a counterfactual explanation for an instance of the Diabetes dataset (the positive class means having diabetes). The light blue area in the background represents the calibrated probability interval (for the positive class) of the underlying model, as indicated by Venn-Abers. The darker blue bars show, for each rule, the probability interval that Venn-Abers indicates when the instance's feature value is changed in accordance with the rule condition.

Counterfactual explanation for Diabetes

Getting started

The notebooks folder contains a number of notebooks illustrating different use cases for calibrated-explanations. The commented notebooks, including the demo_multiclass, demo_regression, and demo_probabilistic_regression notebooks referenced below, are a good place to start.

Classification

Let us illustrate how we may use calibrated-explanations to generate explanations from a classifier trained on a dataset from www.openml.org. We first split the data into a training set and a test set using train_test_split from sklearn, and then further split the training set into a proper training set and a calibration set:

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

dataset = fetch_openml(name="wine", version=7, as_frame=True)

X = dataset.data.values.astype(float)
y = dataset.target.values

feature_names = dataset.feature_names

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, stratify=y)  # a test set of two instances

X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
                                                            test_size=0.25)

We now fit a model on our data.

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_jobs=-1)

rf.fit(X_prop_train, y_prop_train)

Factual Explanations

Let's extract explanations for our test set using the calibrated-explanations package by importing CalibratedExplainer from calibrated_explanations.

from calibrated_explanations import CalibratedExplainer, __version__
print(__version__)

explainer = CalibratedExplainer(rf, X_cal, y_cal, feature_names=feature_names)

factual_explanations = explainer.explain_factual(X_test)

Once we have the explanations, we can plot all of them using plot_all. By default, a regular plot without uncertainty intervals is created; to include uncertainty intervals, set the parameter uncertainty=True. To plot only a single instance, call plot_explanation with the index of the test instance to plot. You can also add and remove conjunctive rules.

factual_explanations.plot_all()
factual_explanations.plot_all(uncertainty=True)

factual_explanations.plot_explanation(0, uncertainty=True)

factual_explanations.add_conjunctions().plot_all()
factual_explanations.remove_conjunctions().plot_all()

Counterfactual Explanations

An alternative to factual rules is to extract counterfactual rules. explain_counterfactual can be called to get counterfactual rules with an appropriate discretizer automatically assigned.

counterfactual_explanations = explainer.explain_counterfactual(X_test)

Counterfactuals are also visualized using plot_all. An individual counterfactual explanation is plotted using plot_explanation, passing the index of the test instance to plot. Adding or removing conjunctions is done as before.

counterfactual_explanations.plot_all()
counterfactual_explanations.plot_explanation(0)
counterfactual_explanations.add_conjunctions().plot_all()

Individual explanation objects can also be retrieved using get_explanation and plotted directly with plot_explanation.

factual_explanations.get_explanation(0).plot_explanation()
counterfactual_explanations.get_explanation(0).plot_explanation()

Support for multiclass

calibrated-explanations supports multiclass classification, as demonstrated in the demo_multiclass notebook. That notebook also demonstrates how feature names as well as target and categorical labels can be added to improve interpretability.
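
As a minimal sketch of what that can look like, labels are passed when constructing the explainer. Here, multiclass_model is a hypothetical classifier fitted on a three-class target, and the class_labels parameter name follows the demo_multiclass notebook; treat both as assumptions and consult the notebook for details:

# A hedged sketch: multiclass_model is assumed to be a classifier fitted on a
# three-class target; the class_labels parameter name follows the
# demo_multiclass notebook and may differ between versions.
explainer = CalibratedExplainer(multiclass_model, X_cal, y_cal,
                                feature_names=feature_names,
                                class_labels={0: 'low', 1: 'medium', 2: 'high'})

factual_explanations = explainer.explain_factual(X_test)
factual_explanations.plot_all()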

Regression

Extracting explanations for regression is very similar to how it is done for classification.

dataset = fetch_openml(name="house_sales", version=3)

X = dataset.data.values.astype(float)
y = dataset.target.values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1)  # a test set of one instance

X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
                                                            test_size=0.25)

Let us now fit a RandomForestRegressor from sklearn to the proper training set:

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor()
rf.fit(X_prop_train, y_prop_train)

Factual Explanations

Define a CalibratedExplainer object using the new model and data. The mode parameter must be explicitly set to regression. Regular and uncertainty plots work in the same way as for classification.

explainer = CalibratedExplainer(rf, X_cal, y_cal, mode='regression')

factual_explanations = explainer.explain_factual(X_test)

factual_explanations.plot_all()
factual_explanations.plot_all(uncertainty=True)

factual_explanations.add_conjunctions().plot_all()

By default, the confidence interval is a symmetric 90% interval (defined as low_high_percentiles=(5,95)). The intervals can cover any user-specified interval, including one-sided intervals. To define a one-sided upper-bounded 90% interval, set low_high_percentiles=(-np.inf,90); to define a one-sided lower-bounded 95% interval, set low_high_percentiles=(5,np.inf). Percentiles can also be set to any other values in the range (0,100), and intervals do not have to be symmetric.

import numpy as np

lower_bounded_explanations = explainer.explain_factual(X_test, low_high_percentiles=(5, np.inf))
asymmetric_explanations = explainer.explain_factual(X_test, low_high_percentiles=(5, 75))

Counterfactual Explanations

explain_counterfactual works exactly as it does for classification, and so do the counterfactual plots.

counterfactual_explanations = explainer.explain_counterfactual(X_test)

counterfactual_explanations.plot_all()
counterfactual_explanations.add_conjunctions().plot_all()

counterfactual_explanations.plot_explanation(0)

The parameter low_high_percentiles works in the same way as for factual explanations.
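
For example, a one-sided upper-bounded 90% interval for counterfactual explanations is requested just as in the factual example above (np refers to numpy, imported earlier):

# One-sided upper-bounded 90% interval, mirroring the factual example above
counterfactual_explanations = explainer.explain_counterfactual(X_test,
                                                               low_high_percentiles=(-np.inf, 90))

counterfactual_explanations.plot_all()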

Probabilistic Regression Explanations

It is possible to create probabilistic explanations for regression, providing the probability that the target value is below the given threshold (180,000 in the examples below). All methods are the same as for normal regression and classification, except that the explain_factual and explain_counterfactual methods need the additional threshold value.

factual_explanations = explainer.explain_factual(X_test, 180000)

factual_explanations.plot_all()
factual_explanations.plot_all(uncertainty=True)

factual_explanations.add_conjunctions().plot_all()

counterfactual_explanations = explainer.explain_counterfactual(X_test, 180000)

counterfactual_explanations.plot_all()
counterfactual_explanations.add_conjunctions().plot_all()

Additional Regression Use Cases

Regression offers many more options; to learn more about them, see the demo_regression or demo_probabilistic_regression notebooks.


Install

First, you need a Python environment.
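
If you do not have one yet, a minimal sketch using the standard venv module looks as follows (any environment manager works just as well):

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate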

Then calibrated-explanations can be installed from PyPI:

pip install calibrated-explanations

or from conda-forge:

conda install -c conda-forge calibrated-explanations

or by following further instructions at conda-forge.

The dependencies are installed automatically by pip or conda; they include the crepes and venn-abers packages described under Acknowledgements below.


Development

This project has tests that can be executed using pytest. Just run the following command from the project root.

pytest

To build the Sphinx documentation, run the following command in the project root:

sphinx-build docs docs/_build

Then open the docs/_build/index.html file in your web browser.

The calibrated-explanations documentation on readthedocs is automatically updated from GitHub's main branch. If there is an issue with the documentation build there, the logs can be found under the build tab of the project page.

To make a new release on PyPI, just follow the release guide.


Documentation

For documentation, see calibrated-explanations.readthedocs.io.


Further reading and citing

The calibrated-explanations method for classification is introduced in the paper:

Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2023). Calibrated Explanations: with Uncertainty Information and Counterfactuals. arXiv preprint arXiv:2305.02305.

The extensions for regression are introduced in the paper:

Löfström, T., Löfström, H., Johansson, U., Sönströd, C., and Matela, R. (2023). Calibrated Explanations for Regression. arXiv preprint arXiv:2308.16245.

The paper that originated the idea of calibrated-explanations is:

Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence, 1-18.

If you use calibrated-explanations for a scientific publication, you are kindly requested to cite one of the papers above.

Bibtex entry for the original paper:

@misc{calibrated-explanations,
      title =           {Calibrated Explanations: with Uncertainty Information and Counterfactuals},
      author =          {L\"ofstr\"om, Helena and L\"ofstr\"om, Tuwe and Johansson, Ulf and S\"onstr\"od, Cecilia},
      year =            {2023},
      eprint =          {2305.02305},
      archivePrefix =   {arXiv},
      primaryClass =    {cs.AI}
}

Bibtex entry for the regression paper:

@misc{cal-expl-regression,
      title =           {Calibrated Explanations for Regression},
      author =          {L\"ofstr\"om, Tuwe and L\"ofstr\"om, Helena and Johansson, Ulf and S\"onstr\"od, Cecilia and Matela, Rudy},
      year =            {2023},
      eprint =          {2308.16245},
      archivePrefix =   {arXiv},
      primaryClass =    {cs.LG}
}


Acknowledgements

This research is funded by the Swedish Knowledge Foundation together with industrial partners supporting the research and education environment on Knowledge Intensive Product Realization SPARK at Jönköping University, Sweden, through the projects AFAIR (grant no. 20200223) and PREMACOP (grant no. 20220187). Helena Löfström is a PhD student in the Industrial Graduate School in Digital Retailing (INSiDR) at the University of Borås, funded by the Swedish Knowledge Foundation, grant no. 20160035.

Rudy Matela has been our git guru and has helped us with the release process.

We have used both the ConformalPredictiveSystem and DifficultyEstimator classes from Henrik Boström's crepes package to provide support for regression.

We have used the VennAbers class from Ivan Petej's venn-abers package to provide support for probabilistic explanations (both classification and probabilistic regression).

We have used code from Marco Tulio Correia Ribeiro's lime package for the Discretizer class.

The check_is_fitted and safe_instance functions in calibrated_explanations.utils are copied from sklearn and shap.

