Calibration Testing Across (Sub)populations

From The Foundation for Best Practices in Machine Learning
Revision as of 15:14, 16 May 2021 by Violetamisheva




If applicable, test the Model(s) for calibration: evaluate whether members of different (Sub)populations who receive the same predicted score have an equal probability of actually belonging to the positive class.


To (a) verify that, for a given Model prediction, each (Sub)population has the same likelihood of truly deserving the Positive Outcome; and (b) surface associated risks that might arise across the Product Lifecycle.
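One way to perform this check is a group-wise calibration curve: bin predictions by score and compare the empirical positive rate per bin across (Sub)populations. Below is a minimal sketch of that idea in Python with NumPy; the function name, group encoding, and bin count are illustrative assumptions, not part of the original guidance.

```python
import numpy as np

def groupwise_calibration(y_true, y_prob, group, n_bins=10):
    """Empirical positive rate per score bin, per group.

    For a calibrated model, the positive rate in each bin should be
    close to the bin's predicted scores, and similar across groups.
    Returns {group_value: [rate per bin]}; bins with no members are NaN.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each prediction to a bin index in [0, n_bins - 1]
    bin_idx = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    rates = {}
    for g in np.unique(group):
        in_group = group == g
        rates[g] = [
            float(y_true[in_group & (bin_idx == b)].mean())
            if np.any(in_group & (bin_idx == b)) else float("nan")
            for b in range(n_bins)
        ]
    return rates

# Toy check with synthetic, perfectly calibrated scores:
# labels are drawn so that P(y=1 | score p) = p for both groups.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = (rng.uniform(size=10_000) < p).astype(int)
g = rng.integers(0, 2, size=10_000)
rates = groupwise_calibration(y, p, g, n_bins=5)
```

In practice the per-bin rates would be compared between groups (for example, flagging any bin where the gap exceeds a tolerance), and `sklearn.calibration.calibration_curve` offers a single-population equivalent that can be applied per group.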

Additional Information