Performance Uncertainty and Sensitivity Analysis

From The Foundation for Best Practices in Machine Learning


Control

Document and assess the probability distribution of Model performance using cross-validation, statistical, and simulation techniques under (a) the assumption that the distribution of the training and validation data is representative of the distribution of live data; and (b) multiple realistic variations of the Model data arising from both statistical and contextual causes. If Model performance variation is high, improve the Model and/or take measures to mitigate the impact of performance variation.
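
As a minimal sketch of how such an assessment might be implemented (not prescribed by this control), the following Python example estimates the distribution of a performance metric with repeated cross-validation and then re-scores the model on noise-perturbed validation data as a simple stand-in for realistic variation in live data. The dataset, model, metric, and noise scales are illustrative assumptions.

```python
# Illustrative sketch only: dataset, model, metric and noise scales are
# placeholder assumptions, not part of the control itself.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# (a) Performance distribution under the assumption that training/validation
#     data represent live data: repeated stratified k-fold cross-validation.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
low, high = np.percentile(scores, [2.5, 97.5])
print(f"AUC mean={scores.mean():.3f}, sd={scores.std():.3f}, "
      f"95% range=[{low:.3f}, {high:.3f}]")

# (b) Sensitivity to realistic data variation: re-score the fitted model on
#     validation data perturbed with increasing feature noise, a simple
#     stand-in for statistical and contextual shifts in live data.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)
model.fit(X_tr, y_tr)
rng = np.random.default_rng(0)
for noise_scale in (0.0, 0.05, 0.1, 0.2):
    noise = rng.normal(0.0, noise_scale * X_val.std(axis=0), X_val.shape)
    auc = roc_auc_score(y_val, model.predict_proba(X_val + noise)[:, 1])
    print(f"feature noise {noise_scale:.2f} -> AUC {auc:.3f}")
```

A wide cross-validated range, or a metric that degrades sharply under small perturbations, signals high performance variation and triggers the mitigation step of this control.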


Aim

To (a) assess and control for the range of expected values of Model performance under both constant and changing conditions; (b) assess and control whether trained Model performance is consistent with these ranges; (c) identify the main sources of uncertainty and variation for further control; and (d) highlight associated risks that might arise during the Product Lifecycle.
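
Aim (b) can be checked mechanically once the expected range has been estimated. A minimal, self-contained sketch in Python (the score values below are placeholders, not real results):

```python
# Illustrative sketch: flag a trained Model whose observed performance falls
# outside the range derived from cross-validation. Scores are placeholders.
import numpy as np

def within_expected_range(observed_score, cv_scores, quantiles=(2.5, 97.5)):
    """Return True if the observed score lies inside the CV percentile interval."""
    low, high = np.percentile(cv_scores, quantiles)
    return low <= observed_score <= high

cv_scores = np.array([0.91, 0.93, 0.92, 0.94, 0.90, 0.95, 0.93, 0.92])  # placeholder
observed = 0.86  # e.g. metric measured on a fresh holdout or an early live sample
if not within_expected_range(observed, cv_scores):
    print("Observed performance is outside the expected range; investigate "
          "sources of uncertainty and variation before release.")
```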


Additional Information