Explainability (xAI) (Sub)population Outcomes

From The Foundation for Best Practices in Machine Learning

Control

Use explainability techniques appropriate to the Model architecture and use case to determine whether input Features are being used within the Model to create disproportionately unfavorable Outcomes for (Sub)populations. (See Section 16 - Explainability for further information.)
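As a minimal sketch of this kind of check, the example below fits a linear model on synthetic data (all variable names and thresholds are hypothetical, not from this document). For a linear model, each sample's per-Feature contribution to the log-odds is simply coefficient × feature value, so comparing mean contributions between two (sub)populations can flag a Feature that disproportionately drives Outcomes for one group. More complex architectures would require model-appropriate techniques (e.g. Shapley-value or surrogate-based attribution) in place of the direct coefficient decomposition used here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two features plus a group indicator (hypothetical).
n = 1000
group = rng.integers(0, 2, n)        # 0 = subpopulation A, 1 = subpopulation B
x1 = rng.normal(size=n)              # independent of group membership
x2 = rng.normal(loc=group * 0.8)     # correlated with group membership
X = np.column_stack([x1, x2])
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each sample's per-feature contribution to the
# log-odds is coefficient * feature value.
contributions = model.coef_[0] * X   # shape (n, 2)

# Compare mean contributions between subpopulations; a large gap on a
# feature flags it as a driver of divergent outcomes between groups.
gaps = {}
for feat, name in enumerate(["x1", "x2"]):
    gap = (contributions[group == 0, feat].mean()
           - contributions[group == 1, feat].mean())
    gaps[name] = gap
    print(f"{name}: mean contribution gap (A - B) = {gap:+.3f}")
```

In this construction x2 is the group-correlated Feature, so its contribution gap should dominate; in practice such a gap would prompt the alternative-Model and risk-review steps described under Aim below.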


Aim

To (a) identify sources of unfavorable Outcomes for (Sub)populations; (b) help inform the process of generating fairer alternative Models; and (c) highlight associated risks that may arise during the Product Lifecycle.


Additional Information