
Feedback Loop

Control

Document and define a feedback loop that enables monitoring of the stability, performance, and operations metrics, and their counter-metrics, required by Section 14 - Performance Robustness, Section 15 - Monitoring & Maintenance, and Section 20 - Systemic Stability. Develop and incorporate a method for flagging bias and reporting issues. Document and define a process for sharing testing participants' feedback with the development and maintenance teams in real time. Incorporate the Feedback Loop into the Testing Design and Scheduling Framework to verify, throughout the development, deployment, and maintenance phases, that the Features the Model utilizes remain acceptable for the application.
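As a purely illustrative sketch of the mechanism this control describes, the snippet below shows one way a feedback loop could record metrics and counter-metrics and push bias flags and issue reports to development and maintenance teams in real time. All names here (FeedbackLoop, MetricReading, IssueReport, and so on) are assumptions for illustration, not part of the Foundation's text.

```python
"""Minimal feedback-loop sketch. All class and field names are
illustrative assumptions, not a prescribed implementation."""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class MetricReading:
    """One observation of a stability, performance, or operations metric."""
    name: str
    value: float
    counter_metric: bool = False  # True if this reading guards the primary metric
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class IssueReport:
    """A bias flag or other issue raised by a tester or a monitoring job."""
    source: str          # e.g. "testing-participant" or "monitoring-job"
    description: str
    is_bias_flag: bool = False


class FeedbackLoop:
    """Collects readings and shares issue reports with subscribed teams."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[IssueReport], None]] = []
        self.readings: list[MetricReading] = []

    def subscribe(self, handler: Callable[[IssueReport], None]) -> None:
        """Register a development/maintenance-team handler for reports."""
        self._subscribers.append(handler)

    def record(self, reading: MetricReading) -> None:
        """Store a metric or counter-metric observation for monitoring."""
        self.readings.append(reading)

    def report(self, issue: IssueReport) -> None:
        """Forward an issue to every subscriber as soon as it arrives."""
        for handler in self._subscribers:
            handler(issue)


# Usage: a maintenance team subscribes, then a testing participant flags bias.
loop = FeedbackLoop()
loop.subscribe(lambda issue: print(f"[{issue.source}] {issue.description}"))
loop.record(MetricReading(name="precision", value=0.92))
loop.record(MetricReading(name="subgroup_recall_gap", value=0.11,
                          counter_metric=True))
loop.report(IssueReport(source="testing-participant",
                        description="Model underperforms for dialect X",
                        is_bias_flag=True))
```

A real deployment would replace the in-process subscriber list with whatever alerting or ticketing channel the development and maintenance teams already use; the point of the sketch is only that metric collection, counter-metric tracking, and issue routing live in one documented loop.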


Aim

To (a) ensure robust and responsive feedback loop measures that enable monitoring of the necessary metrics and integrate effectively into the Testing Design and Scheduling Framework; and (b) highlight associated risks that may arise across the Product Lifecycle.


Additional Information