Targeted Sabotage

From The Foundation for Best Practices in Machine Learning



Control

Document and assess whether adversarial actors can cause harm to specific targeted Product Subjects by manipulating Product Outputs.


Aim

To (a) identify risks to the physical, financial, social and psychological wellbeing of targeted Product Subjects; and (b) highlight associated risks that might arise across the Product Lifecycle.
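The kind of manipulation this control asks you to document can be illustrated with a minimal sketch. Everything below is hypothetical (a toy linear classifier with random weights, not any real Product): an adversary with gradient access repeatedly nudges one targeted subject's input in the sign of the gradient, an FGSM-style targeted evasion step, until the Product Output flips to the outcome the adversary chose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier standing in for a Product: scores = W @ x.
# Weights and input are illustrative only.
W = rng.normal(size=(2, 8))   # 2 output classes, 8 input features
x = rng.normal(size=8)        # the targeted Product Subject's input

def predict(v):
    """Return the Product Output (index of the highest-scoring class)."""
    return int(np.argmax(W @ v))

current = predict(x)
target = 1 - current          # the outcome the adversary wants for this subject

# Targeted FGSM-style attack: perturb x along the sign of the gradient of
# (target score - current score); each step grows that margin by
# eps * ||grad||_1, so the output is eventually forced to the target class.
grad = W[target] - W[current]
eps = 0.1
x_adv = x.copy()
while predict(x_adv) != target:
    x_adv = x_adv + eps * np.sign(grad)

print(f"output flipped: {current} -> {predict(x_adv)}, "
      f"perturbation L_inf = {np.max(np.abs(x_adv - x)):.2f}")
```

Real attacks against deployed Products are harder (no white-box gradients, input constraints, monitoring), but the sketch shows why this control asks whether a *specific* subject's output can be steered: the perturbation is crafted per-victim and can be small relative to the input.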

Additional Information