Fairness & Non-Discrimination


Objective
To (a) identify and mitigate the risk of disproportionately unfavorable Outcomes for protected (Sub)populations; (b) minimise the unequal distribution of Product and Model errors to prevent reinforcing and/or creating social inequalities and/or ills; and (c) promote compliance with existing anti-discrimination laws and statutes.


11.1. Product Definition(s)

Objective
To (a) ensure accurate definitions and pragmatic formulations regarding Fairness and Non-Discrimination requirements that align with the Product Definitions; and (b) enable adequate vigilance over associated risks throughout the Product Lifecycle.
11.1.1. (Sub)populations Definition

Define (Sub)populations that are subject to Fairness concern, with input from Domain and/or legal experts when relevant.

To (a) ensure that vulnerable and affected populations are appropriately identified in all subsequent Fairness testing and Model build; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.1.2. (Sub)population Data

Gather data on (Sub)population membership. If a proxy approach is used, ensure the performance of the proxy is adequate in this context.

To (a) facilitate Fairness testing pre- and post-Model deployment; and (b) highlight associated risks that might occur in the Product Lifecycle.
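
Where a proxy approach is used for membership, its quality can be checked against a small audited sample before any Fairness testing relies on it. A minimal sketch, assuming hypothetical `true_group` (audited) and `proxy_group` (inferred) columns:

```python
# Minimal sketch: validate an inferred (proxy) membership label against a
# small audited sample before relying on it for Fairness testing.
# `true_group` and `proxy_group` are hypothetical column names.
import pandas as pd

audit = pd.DataFrame({
    "true_group":  ["A", "A", "B", "B", "B", "A", "B", "A"],
    "proxy_group": ["A", "B", "B", "B", "A", "A", "B", "A"],
})

# Per-group recall of the proxy: how often true members of each group are
# recovered. Low recall for any group undermines downstream Fairness tests.
for g in audit["true_group"].unique():
    mask = audit["true_group"] == g
    recall = (audit.loc[mask, "proxy_group"] == g).mean()
    print(f"group {g}: proxy recall = {recall:.2f}")
```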

11.1.3. (Sub)population Outcome Perceptions

Document and assess whether scored (Sub)populations would view Model Outcomes as favorable or not, using input from subject matter experts and stakeholders in affected (Sub)populations. Document and assess any divergent views amongst (Sub)populations.

To (a) ensure uniformity in (Sub)population Outcome perception, where applicable; (b) highlight Outcome effects for different (Sub)populations; and (c) highlight associated risks that might occur in the Product Lifecycle.

11.1.4. Erroneous Outcome Consequence Estimation Divergence

Document and assess the consequences of erroneous (false positive and false negative) Outcomes, both real and perceived, specifically in terms of divergence between relevant (Sub)populations. If material divergence is present, take measures to harmonise Outcome perceptions and/or mitigate erroneous Outcome consequences in Model design, exploration, development, and production.

To (a) ensure uniformity in erroneous Outcomes for (Sub)populations; (b) highlight Outcome effects for different (Sub)populations; and (c) highlight associated risks that might occur in the Product Lifecycle.

11.1.5. Positive Outcome Spread

Document and assess the degree to which positive Model Outcomes can be distributed to non-scored (Sub)populations, when contextually appropriate. Where such spread is possible, take measures to promote it in Model design, exploration, development, and production.

To (a) ensure the non-prejudicial spread of positive Model Outcomes; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.1.6. Enduring Bias Estimation

Document and assess whether exclusions from Product usage might perpetuate pre-existing societal inequalities between (Sub)populations. If so, take measures to mitigate the perpetuation of such inequalities in Model design, exploration, development, and production.

To (a) ensure the non-prejudicial spread of Model Outcomes; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.1.7. Appropriate Fairness Metrics

Consult Domain experts to inform which Fairness metrics are contextually most appropriate for the Model when conducting Fairness testing.

To (a) ensure that fairness testing and subsequent Model changes (i) result in outcome changes which are relevant for (Sub)populations; and/or (ii) are consistent with regulatory guidance and context-specific best practices; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.1.8. Model Implications

Document and assess the downside risks of Model misclassification/inaccuracy for modeled populations. Use the relative severity of these risks to inform the choice of Fairness metrics.

To (a) ensure that improvements in the chosen Fairness metrics achieve the greatest Fairness in Model decisioning after deployment; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.1.9. Fairness Testing Approach

Document and assess the Fairness testing methodologies that will be applied to the Model and/or candidate Models, along with any applicable thresholds for statistical/practical significance and acceptable performance-loss tolerances, amongst other criteria.

To (a) prevent changes to the Fairness testing methodology and associated thresholds during Model review; and (b) highlight associated risks that might occur in the Product Lifecycle.
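
One way to prevent mid-review drift in methodology is to record the agreed metrics and thresholds as a version-controlled artifact before testing begins. A minimal sketch; every value is an assumption for illustration, except the 0.8 disparate-impact ratio, which echoes the US EEOC "four-fifths" rule of thumb:

```python
# Illustrative, version-controllable record of a Fairness testing approach
# (11.1.9). All thresholds below are assumed values for the sketch, except
# the 0.8 disparate-impact ratio (EEOC "four-fifths" rule of thumb).
FAIRNESS_TEST_PLAN = {
    "metrics": ["disparate_impact_ratio", "tpr_gap", "fpr_gap"],
    "thresholds": {
        "disparate_impact_ratio_min": 0.8,  # four-fifths rule of thumb
        "tpr_gap_max": 0.05,                # assumed tolerance
        "fpr_gap_max": 0.05,                # assumed tolerance
    },
    "significance_level": 0.05,             # statistical significance
    "max_acceptable_auc_loss": 0.01,        # performance-loss tolerance
    "retest_triggers": ["model_change", "population_shift", "policy_change"],
}
```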

11.2. Exploration

Objective
To identify and control for Fairness and Non-Discrimination risks based on the available datasets.
11.2.1. (Sub)population Data Access

Keep Model development data and (Sub)population membership data separate (where applicable Regulations allow the possession and processing of such data in the first place), especially if the use of (Sub)population data in the Model is prohibited or would introduce Fairness concerns.

To (a) guarantee that (Sub)population membership data does not inadvertently leak into a Model during development.

11.2.2. Univariate Assessments

Document and perform univariate assessments of the relationships between (Sub)populations and Model input Features, including appropriate correlation statistics.

To (a) identify input Feature trends associated with (Sub)populations; and (b) highlight associated risks that might occur in the Product Lifecycle.
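
A minimal sketch of such an assessment on synthetic data, using Cramér's V for a categorical input Feature and the point-biserial correlation for a numeric one (the column names are hypothetical):

```python
# Minimal sketch (11.2.2): association between a binary group indicator and
# one categorical and one numeric input Feature, on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group":   rng.choice(["A", "B"], size=500),
    "zipcode": rng.choice(["z1", "z2", "z3"], size=500),
    "income":  rng.normal(50_000, 10_000, size=500),
})

# Categorical Feature: Cramer's V derived from the chi-squared statistic.
table = pd.crosstab(df["group"], df["zipcode"])
chi2 = stats.chi2_contingency(table)[0]
n = table.to_numpy().sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Numeric Feature: point-biserial correlation with group membership.
r, p = stats.pointbiserialr(df["group"].eq("A").astype(int), df["income"])
print(f"Cramer's V (zipcode): {cramers_v:.3f}; "
      f"point-biserial r (income): {r:.3f}")
```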

11.2.3. Prohibited Data Sources

Develop and maintain an index of data sources or features that should not be made available or utilized because of the risks of harming (Sub)populations, specifically Protected Classes.

To (a) prevent the use of data sources that would disproportionately prejudice (Sub)populations; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.2.4. Data Representativeness

Ensure the membership rates of (Sub)populations in Model development data align with expectations and that data is representative of Domain populations.

To (a) guarantee that Model performance and Fairness testing during model development will provide a consistent picture of Model performance after deployment; and (b) highlight associated risks that might occur in the Product Lifecycle.
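
A minimal sketch of one such alignment check: a chi-squared goodness-of-fit test of observed group counts against assumed Domain rates (all numbers are illustrative):

```python
# Minimal sketch (11.2.4): compare observed (Sub)population membership counts
# in development data against expected Domain rates.
import numpy as np
from scipy import stats

observed_counts = np.array([720, 260, 20])     # e.g. groups A, B, C in dev data
expected_rates = np.array([0.70, 0.25, 0.05])  # assumed Domain rates
expected_counts = expected_rates * observed_counts.sum()

chi2, p = stats.chisquare(observed_counts, f_exp=expected_counts)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p => composition diverges from Domain
```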

11.2.5. (Sub)population Proxies and Relationships

Document and assess the relationship between potential input Features and (membership of) (Sub)populations of interest based on, amongst other things, (i) reviews with diverse Domain experts, (ii) explicit encoding of (Sub)population membership, (iii) correlation analyses, and (iv) visualization methods. If relationships exist, the concerned input Features should be excluded from Model datasets, unless a convincing case can be made, and documented, that an (adapted version of the) input Feature will not adversely affect any (Sub)population.

To (a) prevent Model decisions based directly or indirectly on protected attributes or protected class membership; (b) reduce the risk of Model bias against relevant (Sub)populations; (c) understand any differences in data distributions across (Sub)populations before development begins; and (d) highlight associated risks that might occur in the Product Lifecycle.
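
Beyond single-Feature correlations, input Features can jointly encode membership. One hedged heuristic, not prescribed by this item, is to train a throwaway classifier to predict (Sub)population membership from the candidate Features: cross-validated AUC near 0.5 suggests little proxy signal, while a high AUC means the Feature set can reconstruct membership. A minimal sketch on synthetic data:

```python
# Minimal sketch (11.2.5): if a throwaway model can predict (Sub)population
# membership from candidate input Features, those Features jointly encode
# a proxy for membership.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                  # candidate input Features
membership = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # synthetic

auc = cross_val_score(GradientBoostingClassifier(), X, membership,
                      cv=5, scoring="roc_auc").mean()
print(f"membership AUC from Features: {auc:.2f}")  # ~0.5 => little proxy signal
```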

11.3. Development

Objective
To minimise the unequal distribution of Product and Model errors for (Sub)populations during Model development in the most appropriate manner.
11.3.1. Explainability (xAI) (Sub)population Outcomes

Use explainability techniques appropriate to the model architecture and use-case to determine whether input Features are being used within the Model to create disproportionately unfavorable Outcomes for (Sub)populations. (See Section 16 - Explainability for further information.)

To (a) identify sources of unfavorable Outcomes for (Sub)populations; (b) help inform the process of generating alternative Models that are fairer; and (c) highlight associated risks that might occur in the Product Lifecycle.
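
Per-instance attribution methods such as SHAP are the usual tools here (see Section 16); as a dependency-light illustration, a minimal sketch comparing permutation importance computed separately per (Sub)population on synthetic data. Diverging importance profiles flag input Features to review:

```python
# Minimal sketch (11.3.1): a model-agnostic stand-in for explainability
# tooling -- permutation importance computed separately per (Sub)population.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
group = rng.choice(["A", "B"], size=600)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
for g in ["A", "B"]:
    m = group == g
    imp = permutation_importance(model, X[m], y[m], n_repeats=10,
                                 random_state=0).importances_mean
    print(g, np.round(imp, 3))  # diverging importance profiles warrant review
```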

11.3.2. Model Architecture and Interpretability

Choose a Model architecture that maximizes interpretability and the identification of causes of unfairness. Consider different methodologies within the same Model architecture (e.g. monotonic XGBoost, explainable neural networks). Evaluate whether Product Aims can be accomplished with a more interpretable Model.

To (a) provide information that can guide Model-builders; (b) ensure that Model decisions are made in line with expectations; (c) allow Product Subjects and/or End Users to understand why they received corresponding Outcomes; (d) help inform the causes of Fairness issues if issues are detected; and (e) highlight associated risks that might occur in the Product Lifecycle.
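
As one concrete example of a methodology choice within an architecture, XGBoost supports per-Feature monotonicity constraints, which make the Model's use of a Feature easier to reason about. A minimal sketch on synthetic data:

```python
# Minimal sketch (11.3.2): constrain an XGBoost model to be monotonically
# increasing in Feature 0 and decreasing in Feature 1.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 3))
y = (2 * X[:, 0] - X[:, 1] + rng.normal(size=800) > 0).astype(int)

# One constraint per Feature: +1 increasing, -1 decreasing, 0 unconstrained.
model = xgb.XGBClassifier(monotone_constraints="(1,-1,0)", n_estimators=100)
model.fit(X, y)
```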

11.3.3. Fairness Testing of Outcomes

Focus fairness testing initially on outcomes that are immediately experienced by (Sub)populations. For example, if a model uses a series of sub-Models to generate a score and a threshold is applied to that score to determine an Outcome, focus on Fairness issues related to that Outcome. If issues are identified, then diagnose the issue by moving "up-the-chain" and testing the Model score and sub-Models.

To (a) ensure that the testing performed best reflects what will happen when Models are deployed in the real world; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.3.4. Disparate Impact Testing

If applicable, test Model(s) for disparate impact. Evaluate whether Model(s) predict a Positive Outcome at the same rate across (Sub)populations.

To (a) ensure that (Sub)population members are receiving the Positive Outcome as often as their peers; and (b) highlight associated risks that might occur in the Product Lifecycle.
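
A minimal sketch of a disparate impact check on illustrative data: the ratio of Positive Outcome (selection) rates between the least- and most-favoured (Sub)populations. The 0.8 warning level echoes the "four-fifths" rule of thumb; the appropriate threshold is context-dependent (see 11.1.7):

```python
# Minimal sketch (11.3.4): disparate impact as the ratio of Positive Outcome
# (selection) rates across (Sub)populations, on illustrative data.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # Model decisions
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {ratio:.2f}")  # < 0.8 is a common warning level
```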

11.3.5. Equalized Opportunity Testing

If applicable, test Model(s) for equalized opportunity. Evaluate whether, amongst (Sub)population members that are actually in the positive class, Model(s) predict a Positive Outcome at the same rate across (Sub)populations (a sketch covering both this item and 11.3.6 follows item 11.3.6).

To (a) ensure that (Sub)population members who should receive the Positive Outcome are receiving the Positive Outcome as often as their peers; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.3.6. Equalized Odds Testing

If applicable, test Model(s) for equalized odds. Evaluate whether, amongst (Sub)population members that are actually in the positive and negative classes respectively, Model(s) predict Positive and Negative Outcomes at the same rates across (Sub)populations (i.e., equal true positive and false positive rates).

To (a) ensure that (i) protected (Sub)populations who should receive the Positive Outcome are receiving the Positive Outcome as often as other (Sub)populations, and (ii) protected (Sub)populations who should not receive the Positive Outcome are erroneously receiving it no more often than other (Sub)populations; and (b) highlight associated risks that might occur in the Product Lifecycle.
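
A minimal sketch covering both 11.3.5 and 11.3.6 on illustrative data: equalized opportunity compares true positive rates across (Sub)populations; equalized odds additionally compares false positive rates:

```python
# Minimal sketch (11.3.5 & 11.3.6): per-group true positive and false
# positive rates on illustrative data.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])
group = np.array(list("AAAAAABBBBBB"))

for g in np.unique(group):
    m = group == g
    tpr = y_pred[m & (y_true == 1)].mean()  # equalized opportunity: equal TPRs
    fpr = y_pred[m & (y_true == 0)].mean()  # equalized odds: also equal FPRs
    print(f"{g}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```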

11.3.7. Conditional Statistical Parity Testing

If applicable, test Model(s) for conditional statistical parity. Evaluate whether Model(s) predict a Positive Outcome at the same rate across (Sub)populations given some predefined set of "legitimate explanatory factors".

To (a) ensure that (Sub)population members are receiving the Positive Outcome just as often as members of other (Sub)populations with similar underlying characteristics; and (b) highlight associated risks that might occur in the Product Lifecycle.
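
A minimal sketch on illustrative data, using a hypothetical `risk_tier` column as the predefined "legitimate explanatory factor"; parity should hold within each stratum, not only in aggregate:

```python
# Minimal sketch (11.3.7): Positive Outcome (selection) rates compared within
# strata defined by a hypothetical legitimate explanatory factor.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "A", "B"],
    "risk_tier": ["low", "low", "high", "low", "high", "high", "high", "low"],
    "selected":  [1, 1, 0, 1, 0, 0, 1, 1],
})

# Conditional statistical parity: compare selection rates within each tier.
print(df.groupby(["risk_tier", "group"])["selected"].mean())
```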

11.3.8. Calibration Testing Across (Sub)populations

If applicable, test Model(s) for calibration. Evaluate whether (Sub)population members with the same predicted Outcome have an equal probability of actually being in the positive class.

To (a) ensure that (Sub)populations each have the same likelihood of deserving the Positive Outcome for a given Model prediction; and (b) highlight associated risks that might occur in the Product Lifecycle.
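
A minimal sketch of a bucket-based calibration check on synthetic data: within each score bucket, the observed positive rate should be similar across (Sub)populations:

```python
# Minimal sketch (11.3.8): within score buckets, compare observed positive
# rates across (Sub)populations, on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "score": rng.uniform(size=2000),
    "group": rng.choice(["A", "B"], size=2000),
})
df["actual"] = (rng.uniform(size=2000) < df["score"]).astype(int)

df["bucket"] = pd.cut(df["score"], bins=[0, 0.25, 0.5, 0.75, 1.0])
print(df.groupby(["bucket", "group"], observed=True)["actual"].mean())
```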

11.3.9. Differential Validity Testing

If applicable, test Model(s) for differential validity. Evaluate whether Model performance varies meaningfully by (Sub)population, with a special focus on any groups that are underrepresented in modelling data.

To (a) ensure that the Model's predictive abilities are not isolated in or concentrated amongst particular (Sub)populations; and (b) highlight associated risks that might occur in the Product Lifecycle.
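
A minimal sketch on synthetic data, using ROC AUC as the performance metric and reporting group sizes, since estimates for small, underrepresented groups are noisy:

```python
# Minimal sketch (11.3.9): compare a performance metric (here ROC AUC)
# across (Sub)populations, including a small underrepresented group.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
scores = rng.uniform(size=1000)                          # Model scores
y_true = (rng.uniform(size=1000) < scores).astype(int)   # synthetic labels
group = rng.choice(["A", "B", "C"], size=1000, p=[0.6, 0.35, 0.05])

for g in np.unique(group):
    m = group == g
    print(f"{g} (n={m.sum()}): AUC={roc_auc_score(y_true[m], scores[m]):.3f}")
```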

11.3.10. Feature Selection Fairness Review

Evaluate the impact of removing or modifying potentially problematic input Features on Fairness metrics and Model quality.

To (a) assess whether fairer alternative Models can be made that fulfill Model objectives; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.3.11. Modeling Methodology Fairness Review

Evaluate the impact of changing Modelling methodology choices (e.g. algorithm, segmentation, hyperparameters) on Fairness metrics and Model quality.

To (a) assess whether fairer alternative Models can be made that fulfill the Model objectives; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.4. Production

Objective
To maintain operationalised Fairness at the level established during Model Development.
11.4.1. Domain Population Stability

Continually assess the stability of the Domain population being scored, both in terms of its composition relative to the Model development population and in terms of Model quality by class.

To (a) ensure the continued accuracy of Fairness tests and metrics; and (b) highlight associated risks that might occur in the Product Lifecycle.
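
A common statistic for this kind of composition monitoring is the Population Stability Index (PSI); this item does not mandate a specific statistic, so the following is a hedged sketch. Rule-of-thumb readings treat PSI below 0.1 as stable and above 0.25 as a major shift:

```python
# Minimal sketch (11.4.1): Population Stability Index (PSI) between a
# Feature's development-time distribution and its production distribution.
import numpy as np

def psi(expected, actual, bins=10):
    # Bin edges from development-data quantiles; open-ended outer bins.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(6)
dev = rng.normal(0.0, 1.0, size=5000)    # development population (synthetic)
prod = rng.normal(0.3, 1.1, size=5000)   # drifted production population
print(f"PSI = {psi(dev, prod):.3f}")     # > 0.25 is a common "major shift" flag
```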

11.4.2. Fairness Testing Schedule

Define a policy for timing of re-assessment of Model fairness that includes re-testing at regular intervals and/or established trigger events (e.g. any modifications to Model inputs or structure, changes to the composition of the modeled population, impactful policy changes).

To (a) detect issues with Model Fairness that may not have existed during pre-deployment of the Model; and (b) highlight associated risks that might occur in the Product Lifecycle.

11.4.3. Input Data Transparency

Ensure that Product Subjects have the ability to observe the attributes relied on in the modeling decision and to correct inaccuracies. Collect data around this process and use it to identify issues in the data sourcing/aggregation pipeline.

To (a) ensure that the Model is making decisions on accurate data; (b) learn whether there are problems with the Model's data assets; and (c) highlight associated risks that might occur in the Product Lifecycle.

11.4.4. Feature Attribution

Ensure that Product Subjects can understand why the Model made the decision it did, or how the Model output contributed to the decision. Ideally, this understanding would include which Features were most important in the decision and give some guidance as to how the subject could improve in the eyes of the Model. (See Section 13 - Representativeness & Specification for further information.)

To (a) ensure that Product Subjects (i) have some level of trust in and understanding of the Model that affects them and (ii) feel that they have agency over the process and that Model Outcomes are not arbitrary.

11.4.5. Product Subject Appeal Process

Incorporate a "right of appeal" procedure into the Model's deployment, where Product Subjects can request a human review of the modeling decision. Collect data around this process and use it to inform Model design choices.

To (a) ensure that Product Subjects are, at a minimum, made aware of the results of Model decisions; and (b) allow inaccurate predictions to be corrected.

11.4.6. Feature Attribution Monitoring

As part of regularly scheduled review, or more frequently, monitor any changes in Feature attribution or other explainability metrics by (Sub)population. (See Section 15 - Monitoring & Maintenance for further information.)

To (a) detect reasons for changes in Model performance, as well as any changes earlier in the data pipeline; and (b) highlight associated risks that might occur in the Product Lifecycle.
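
A minimal sketch of such monitoring, assuming per-(Sub)population Feature-importance snapshots are saved at deployment and recomputed at each review (the Feature names and the 0.10 trigger are assumptions for illustration):

```python
# Minimal sketch (11.4.6): flag drift in per-(Sub)population Feature
# importance between a deployment-time snapshot and the current review.
import numpy as np

FEATURES = ["age", "income", "tenure"]           # hypothetical Feature names
reference = {"A": np.array([0.50, 0.30, 0.20]),  # importances at deployment
             "B": np.array([0.48, 0.32, 0.20])}
current   = {"A": np.array([0.51, 0.29, 0.20]),  # recomputed at review time
             "B": np.array([0.30, 0.50, 0.20])}  # group B's profile shifted

THRESHOLD = 0.10                                 # assumed review trigger
for g, ref in reference.items():
    drift = np.abs(current[g] - ref)
    for f, d in zip(FEATURES, drift):
        if d > THRESHOLD:
            print(f"group {g}: attribution drift on '{f}' ({d:.2f}) - review")
```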