
Implementations of various interpretable models



Python package for concise, transparent, and accurate predictive modeling.
All sklearn-compatible and easy to use.
For interpretability in NLP, check out our new package: imodelsX

📚 docs • 📖 demo notebooks

Modern machine-learning models are increasingly complex, often making them difficult to interpret. This package provides a simple interface for fitting and using state-of-the-art interpretable models, all compatible with scikit-learn. These models can often replace black-box models (e.g. random forests) with simpler models (e.g. rule lists) while improving interpretability and computational efficiency, all without sacrificing predictive accuracy! Simply import a classifier or regressor and use the fit and predict methods, same as standard scikit-learn models.

from sklearn.model_selection import train_test_split
from imodels import get_clean_dataset, HSTreeClassifierCV # import any imodels model here

# prepare data (a sample clinical dataset)
X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42)

# fit the model
model = HSTreeClassifierCV(max_leaf_nodes=4)  # initialize a tree model, limited to at most 4 leaf nodes
model.fit(X_train, y_train, feature_names=feature_names)   # fit model
preds = model.predict(X_test) # discrete predictions: shape is (n_test,)
preds_proba = model.predict_proba(X_test) # predicted probabilities: shape is (n_test, n_classes)
print(model) # print the model
------------------------------
Decision Tree with Hierarchical Shrinkage
Prediction is made by looking at the value in the appropriate leaf of the tree
------------------------------
|--- FocalNeuroFindings2 <= 0.50
|   |--- HighriskDiving <= 0.50
|   |   |--- Torticollis2 <= 0.50
|   |   |   |--- value: [0.10]
|   |   |--- Torticollis2 >  0.50
|   |   |   |--- value: [0.30]
|   |--- HighriskDiving >  0.50
|   |   |--- value: [0.68]
|--- FocalNeuroFindings2 >  0.50
|   |--- value: [0.42]

Installation

Install with pip install imodels (see here for help).

Supported models

Model | Reference | Description
RuleFit rule set | 🗂️, 🔗, 📄 | Fits a sparse linear model on rules extracted from decision trees
Skope rule set | 🗂️, 🔗 | Extracts rules from gradient-boosted trees, deduplicates them, then linearly combines them based on their OOB precision
Boosted rule set | 🗂️, 🔗, 📄 | Sequentially fits a set of rules with AdaBoost
SLIPPER rule set | 🗂️, 📄 | Sequentially learns a set of rules with SLIPPER
Bayesian rule set | 🗂️, 🔗, 📄 | Finds a concise rule set with Bayesian sampling (slow)
Optimal rule list | 🗂️, 🔗, 📄 | Fits a rule list using global optimization for sparsity (CORELS)
Bayesian rule list | 🗂️, 🔗, 📄 | Fits a compact rule-list distribution with Bayesian sampling (slow)
Greedy rule list | 🗂️, 🔗 | Uses CART to fit a list (only a single path), rather than a tree
OneR rule list | 🗂️, 📄 | Fits a rule list restricted to a single feature
Optimal rule tree | 🗂️, 🔗, 📄 | Fits a succinct tree using global optimization for sparsity (GOSDT)
Greedy rule tree | 🗂️, 🔗, 📄 | Greedily fits a tree using CART
C4.5 rule tree | 🗂️, 🔗, 📄 | Greedily fits a tree using C4.5
TAO rule tree | 🗂️, 📄 | Fits a tree using alternating optimization
Iterative random forest | 🗂️, 🔗, 📄 | Repeatedly fits a random forest, giving features with high importance a higher chance of being selected
Sparse integer linear model | 🗂️, 📄 | Sparse linear model with integer coefficients
Greedy tree sums | 🗂️, 📄 | Sum of small trees with very few total rules (FIGS)
Hierarchical shrinkage wrapper | 🗂️, 📄 | Improves a decision tree, random forest, or gradient-boosting ensemble with ultra-fast, post-hoc regularization
Distillation wrapper | 🗂️ | Trains a black-box model, then distills it into an interpretable model
More models | ⌛ | (Coming soon!) Lightweight Rule Induction, MLRules, ...

Docs ๐Ÿ—‚๏ธ, Reference code implementation ๐Ÿ”—, Research paper ๐Ÿ“„

Demo notebooks

Demos are contained in the notebooks folder.

  • Quickstart demo: shows how to fit, predict, and visualize with different interpretable models
  • AutoGluon demo: fit/select an interpretable model automatically using AutoGluon AutoML
  • Quickstart colab demo: shows how to fit, predict, and visualize with different interpretable models
  • Clinical decision rule notebook: an example of using imodels to derive a clinical decision rule
  • Posthoc analysis: demos of analyses that occur after fitting models; posthoc.ipynb shows simple analyses for interpreting a trained model, and uncertainty.ipynb contains basic code for getting uncertainty estimates from a model

What's the difference between the models?

Each of the above models ultimately takes one of the following forms, which aim to be simultaneously simple to understand and highly predictive:

  • Rule set
  • Rule list
  • Rule tree
  • Algebraic models

Different models and algorithms vary not only in their final form but also in the choices they make during modeling, such as how they generate, select, and postprocess rules:

Rule candidate generation → Rule selection → Rule postprocessing
  • Ex. RuleFit vs. SkopeRules: RuleFit and SkopeRules differ only in the way they prune rules: RuleFit uses a linear model whereas SkopeRules heuristically deduplicates rules that share overlap.
  • Ex. Bayesian rule lists vs. greedy rule lists: these differ in how they select rules; Bayesian rule lists perform a global optimization over possible rule lists while greedy rule lists pick splits sequentially to maximize a given criterion.
  • Ex. FPSkope vs. SkopeRules: these differ only in the way they generate candidate rules: FPSkope uses FPGrowth whereas SkopeRules extracts rules from decision trees.

Support for different tasks

Different models support different machine-learning tasks. Current support for each model is given below; each model can be imported directly from imodels (e.g. from imodels import RuleFitClassifier):

Model | Binary classification | Regression | Notes
RuleFit rule set | RuleFitClassifier | RuleFitRegressor |
Skope rule set | SkopeRulesClassifier | |
Boosted rule set | BoostedRulesClassifier | BoostedRulesRegressor |
SLIPPER rule set | SlipperClassifier | |
Bayesian rule set | BayesianRuleSetClassifier | | Fails for large problems
Optimal rule list (CORELS) | OptimalRuleListClassifier | | Requires corels; fails for large problems
Bayesian rule list | BayesianRuleListClassifier | |
Greedy rule list | GreedyRuleListClassifier | |
OneR rule list | OneRClassifier | |
Optimal rule tree (GOSDT) | OptimalTreeClassifier | | Requires gosdt; fails for large problems
Greedy rule tree (CART) | GreedyTreeClassifier | GreedyTreeRegressor |
C4.5 rule tree | C45TreeClassifier | |
TAO rule tree | TaoTreeClassifier | TaoTreeRegressor |
Iterative random forest | IRFClassifier | | Requires irf
Sparse integer linear model | SLIMClassifier | SLIMRegressor | Requires extra dependencies for speed
Greedy tree sums (FIGS) | FIGSClassifier | FIGSRegressor |
Hierarchical shrinkage | HSTreeClassifierCV | HSTreeRegressorCV | Wraps any sklearn tree-based model
Distillation | | DistilledRegressor | Wraps any sklearn-compatible models
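
Every model above shares the same workflow. For instance, swapping in RuleFit changes only the import and constructor; a minimal sketch (max_rules caps the number of extracted rules):

from sklearn.model_selection import train_test_split
from imodels import RuleFitClassifier, get_clean_dataset

# same clinical dataset as the quickstart
X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RuleFitClassifier(max_rules=10)  # cap the rule count to keep the model concise
model.fit(X_train, y_train, feature_names=feature_names)
preds = model.predict(X_test)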

Extras

  • Data-wrangling functions for working with popular tabular datasets (e.g. COMPAS). These functions, in conjunction with imodels-data and imodels-experiments, make it simple to download data and run experiments on new models.
  • Explain classification errors with a simple posthoc function: fit an interpretable model to explain a previous model's errors (e.g. in this notebook 📓); a minimal sketch of the pattern appears after this list.
  • Fast and effective discretizers for data preprocessing:
Discretizer | Reference | Description
MDLP | 🗂️, 🔗, 📄 | Discretizes using an entropy-minimization heuristic
Simple | 🗂️, 🔗 | Simple KBins discretization
Random forest | 🗂️ | Discretizes into bins based on random-forest split popularity
  • Rule-based utils for customizing models: the util folder contains many useful and customizable functions for rule-based learning, including functions/classes for rule deduplication, rule screening, and converting between trees, rule sets, and neural networks.
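
The error-explanation pattern above can be hand-rolled in a few lines. A minimal sketch of the idea (not the package's built-in helper), assuming GreedyTreeClassifier accepts feature_names in fit as the quickstart model does:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imodels import GreedyTreeClassifier, get_clean_dataset

X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# fit a black-box model, then mark where it errs on held-out data
black_box = RandomForestClassifier(random_state=42).fit(X_train, y_train)
errors = (black_box.predict(X_test) != y_test).astype(int)  # 1 where the black box is wrong

# a small interpretable tree over the errors describes regions where the black box fails
error_model = GreedyTreeClassifier(max_depth=3)
error_model.fit(X_test, errors, feature_names=feature_names)
print(error_model)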

Our favorite models

While developing and using imodels, we created a few new models to overcome limitations of existing interpretable models.

FIGS: Fast interpretable greedy-tree sums

📄 Paper, 🔗 Post, 📌 Citation

Fast Interpretable Greedy-Tree Sums (FIGS) is an algorithm for fitting concise rule-based models. Specifically, FIGS generalizes CART to simultaneously grow a flexible number of trees in a summation. The total number of splits across all the trees can be restricted by a pre-specified threshold, keeping the model interpretable. Experiments across a wide array of real-world datasets show that FIGS achieves state-of-the-art prediction performance when restricted to just a few splits (e.g. less than 20).

(Figure) Example FIGS model: FIGS learns a flexible number of trees in a sum; to make a prediction, it adds up the results from each tree.
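
A minimal usage sketch, reusing the quickstart's clinical dataset (max_rules is the cap on total splits across all trees):

from sklearn.model_selection import train_test_split
from imodels import FIGSClassifier, get_clean_dataset

X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = FIGSClassifier(max_rules=10)  # at most 10 splits, summed over all trees
model.fit(X_train, y_train, feature_names=feature_names)
print(model)  # prints each tree in the sum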

Hierarchical shrinkage: post-hoc regularization for tree-based methods

📄 Paper (ICML 2022), 🔗 Post, 📌 Citation

Hierarchical shrinkage is an extremely fast post-hoc regularization method which works on any decision tree (or tree-based ensemble, such as Random Forest). It does not modify the tree structure, and instead regularizes the tree by shrinking the prediction over each node towards the sample means of its ancestors (using a single regularization parameter). Experiments over a wide variety of datasets show that hierarchical shrinkage substantially increases the predictive performance of individual decision trees and decision-tree ensembles.
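
Concretely, for a sample x with root-to-leaf path t_0, t_1, ..., t_L, hierarchical shrinkage (restating the formula from the paper) replaces the leaf prediction with the telescoping sum

    f̂_λ(x) = Ê_{t_0}[y] + Σ_{l=1}^{L} ( Ê_{t_l}[y] − Ê_{t_{l−1}}[y] ) / ( 1 + λ / N(t_{l−1}) )

where Ê_t[y] is the mean response of the training samples in node t, N(t) is the number of such samples, and λ is the single regularization parameter; larger λ shrinks deeper nodes more strongly towards their ancestors.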

(Figure) HS example: HS applies post-hoc regularization to any decision tree by shrinking each node towards its parent.
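
A minimal wrapping sketch, reusing the quickstart's clinical dataset (this assumes, per the docs, that the CV wrapper accepts a base estimator via its estimator_ argument and selects λ by cross-validation):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imodels import HSTreeClassifierCV, get_clean_dataset

X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# wrap a random forest; cross-validation picks the shrinkage strength λ (assumption: estimator_ kwarg)
model = HSTreeClassifierCV(estimator_=RandomForestClassifier(n_estimators=50))
model.fit(X_train, y_train)
preds_proba = model.predict_proba(X_test)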

References

Readings
  • Good quick overview of interpretable ML: Murdoch et al. 2019 (pdf)
  • Interpretable ML book: Molnar 2019 (pdf)
  • The case for interpretable models rather than post-hoc explanation: Rudin 2019 (pdf)
  • Review on evaluating interpretability: Doshi-Velez & Kim 2017 (pdf)
Reference implementations (also linked above): the code here derives heavily from the wonderful work of previous projects. We seek to extract, unify, and maintain key parts of these projects.
Related packages
  • gplearn: symbolic regression/classification
  • pysr: fast symbolic regression
  • pygam: generalized additive models
  • interpretml: boosting-based gam
  • h2o ai: GAMs + GLMs (and more)
  • optbinning: data discretization / scoring models
Updates
  • For updates, star the repo, see this related repo, or follow @csinva_
  • Please make sure to give authors of original methods / base implementations appropriate credit!
  • Contributing: pull requests very welcome!

If it's useful for you, please star/cite the package, and make sure to give authors of original methods / base implementations credit:

@software{imodels2021,
title        = {imodels: a python package for fitting interpretable models},
journal      = {Journal of Open Source Software},
publisher    = {The Open Journal},
year         = {2021},
author       = {Singh, Chandan and Nasseri, Keyan and Tan, Yan Shuo and Tang, Tiffany and Yu, Bin},
volume       = {6},
number       = {61},
pages        = {3192},
doi          = {10.21105/joss.03192},
url          = {https://doi.org/10.21105/joss.03192},
}
