Adversarial Input Susceptibility

From The Foundation for Best Practices in Machine Learning

Control

Document and assess the susceptibility of Models to being effectively influenced by manipulated (inference-time) input. Reduce this susceptibility by (a) increasing representational robustness (e.g. through more complete embeddings or latent-space representations); and/or (b) applying robust transformations (possibly cryptographic) and input cleaning.
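A minimal sketch of point (b): feature squeezing (bit-depth reduction) is one well-known cleaning transformation that removes fine-grained adversarial perturbations, and a simple before/after comparison can serve as a crude susceptibility check. The helper names below are hypothetical illustrations, not a prescribed implementation; NumPy is assumed.

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Quantize features to the given bit depth, a simple 'robust
    transformation' that discards sub-quantization-level perturbations."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def perturbation_survives(x, x_adv, transform):
    """Crude susceptibility check: does the manipulated input still differ
    from the clean input after the cleaning transformation is applied?"""
    return not np.allclose(transform(x), transform(x_adv))

# A perturbation smaller than one 4-bit quantization step (1/15) is
# removed by the squeezing transformation.
x = np.array([0.50, 0.25, 0.75])
x_adv = x + 0.01
assert not perturbation_survives(x, x_adv, squeeze_bit_depth)
```

Logging how often perturbations of a given magnitude survive such transformations is one concrete way to document susceptibility as this control requires; stronger attacks will of course require stronger assessments.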


Aim

To (a) ensure control of the risk of Evasion and Sabotage Attacks, including Adversarial Examples; and (b) highlight associated risks that might arise during the Product Lifecycle.


Additional Information