Local Explainability Model Run

Control

Document and run as many types of local explainability Models as is reasonably practical, such as perturbation-based or gradient-based techniques or, for more specific examples, Local Interpretable Model-Agnostic Explanations (LIME), SHAP values, and Anchor explanations, amongst others. When there is doubt about the stability of the techniques being used, test their quality through alternative parameterizations or by comparing results across techniques.
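
For illustration, the sketch below runs two model-agnostic local explainability techniques (SHAP's KernelExplainer and LIME) on the same instance and compares their feature attributions. The model, dataset, and parameter values are illustrative placeholders, not prescribed choices; it assumes the `shap` and `lime` packages are installed.

```python
# A minimal sketch: run two local explainability techniques on the same
# instance and compare their feature attributions across techniques.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

import shap
from lime.lime_tabular import LimeTabularExplainer

# Placeholder model and data, purely for illustration.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
instance = X[0]

# Technique 1: model-agnostic SHAP (KernelExplainer) with a small background set.
background = shap.sample(X, 50, random_state=0)
shap_values = shap.KernelExplainer(model.predict, background).shap_values(
    instance, nsamples=200
)

# Technique 2: LIME in regression mode on the same instance.
lime_explainer = LimeTabularExplainer(X, mode="regression", random_state=0)
lime_exp = lime_explainer.explain_instance(instance, model.predict, num_features=8)
lime_weights = np.zeros(X.shape[1])
for feature_idx, weight in list(lime_exp.as_map().values())[0]:
    lime_weights[feature_idx] = weight

# Cross-technique sanity check: the two attributions should rank features
# broadly similarly; strong disagreement is a signal to investigate.
rho, _ = spearmanr(np.abs(shap_values), np.abs(lime_weights))
print(f"SHAP attributions: {np.round(shap_values, 3)}")
print(f"LIME weights:      {np.round(lime_weights, 3)}")
print(f"Rank agreement between techniques (Spearman rho): {rho:.2f}")
```

Comparing attributions only coarsely (e.g. by rank) is deliberate here: LIME and SHAP attributions live on different scales, so agreement in feature ordering is a more meaningful check than agreement in raw values.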


Aim

To (a) generate local explainability of the model; (b) help promote model debugging; (c) ensure explainability fidelity and stability through numerous explainability model runs; and (d) highlight associated risks that might occur in the Product Lifecycle.
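
As one way to address point (c), the following sketch re-runs a single technique (LIME, in regression mode) under alternative random seeds and sampling budgets, then checks how consistently the runs rank features. All names and parameter values are illustrative assumptions.

```python
# A minimal sketch of a stability check: re-run one local explainability
# technique under alternative parameterizations and measure agreement.
import itertools

import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Placeholder model and data, purely for illustration.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
instance = X[0]

# Alternative parameterizations: vary the random seed and sampling budget.
runs = []
for seed, num_samples in [(0, 1000), (1, 1000), (0, 5000), (1, 5000)]:
    explainer = LimeTabularExplainer(X, mode="regression", random_state=seed)
    exp = explainer.explain_instance(
        instance, model.predict, num_features=X.shape[1], num_samples=num_samples
    )
    weights = np.zeros(X.shape[1])
    for feature_idx, weight in list(exp.as_map().values())[0]:
        weights[feature_idx] = weight
    runs.append(weights)

# Pairwise rank correlations between runs: values near 1.0 indicate a stable
# explanation; low values suggest the technique (or this parameterization)
# is unreliable for this instance and warrants further investigation.
rhos = [
    spearmanr(np.abs(a), np.abs(b))[0] for a, b in itertools.combinations(runs, 2)
]
print(f"Pairwise Spearman rho across runs: min={min(rhos):.2f}, mean={np.mean(rhos):.2f}")
```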


Additional Information