Feedback Loop

From The Foundation for Best Practices in Machine Learning

Document and define a feedback loop that enables monitoring of the stability, performance, and operations metrics, and their counter-metrics, required by Performance Robustness, Monitoring and Maintenance, and Systemic Stability (discussed more thoroughly in Section 14 - Performance Robustness, Section 15 - Monitoring & Maintenance, and Section 20 - Systemic Stability). Develop and incorporate a method for flagging bias and reporting issues. Document and define a process for sharing testing participants' feedback with the development and maintenance teams in real time. Incorporate the Feedback Loop into the Testing Design and Scheduling Framework to ensure that the Features the Model uses remain acceptable for the application throughout the development, deployment, and maintenance phases.
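As a minimal illustration of how such a feedback loop might be structured in practice, the sketch below models the categories named above (stability, performance, operations, bias) and real-time sharing of reports with subscribed development and maintenance handlers. All class and field names here are hypothetical assumptions, not prescribed by this document.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable

# Hypothetical issue categories mirroring the metrics named above.
class Category(Enum):
    STABILITY = "stability"
    PERFORMANCE = "performance"
    OPERATIONS = "operations"
    BIAS = "bias"

@dataclass
class FeedbackItem:
    category: Category
    metric: str      # e.g. a performance metric or bias counter-metric name
    value: float
    reporter: str    # testing participant or automated monitor
    note: str = ""
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class FeedbackLoop:
    """Collects feedback items and forwards each one immediately
    to subscribed development/maintenance handlers."""

    def __init__(self) -> None:
        self._items: list[FeedbackItem] = []
        self._subscribers: list[Callable[[FeedbackItem], None]] = []

    def subscribe(self, handler: Callable[[FeedbackItem], None]) -> None:
        # A handler could post to a team channel or open a ticket.
        self._subscribers.append(handler)

    def report(self, item: FeedbackItem) -> None:
        self._items.append(item)
        for handler in self._subscribers:
            handler(item)  # real-time sharing with the teams

    def flagged(self, category: Category) -> list[FeedbackItem]:
        # Supports review of, e.g., all bias flags during a test cycle.
        return [i for i in self._items if i.category is category]
```

A testing-phase integration might subscribe a ticketing handler via `subscribe`, then route both participant reports and automated monitoring alerts through `report`, so the same record feeds bias flagging and the Testing Design and Scheduling Framework.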


To (a) ensure robust and responsive feedback-loop measures that enable monitoring of the necessary metrics and integrate effectively into the Testing Design and Scheduling Framework; and (b) highlight associated risks that may arise across the Product Lifecycle.

Additional Information