
Privacy

From The Foundation for Best Practices in Machine Learning




Objective
To determine the most appropriate and feasible privacy-preserving techniques for the Product.
7.1. Decentralization Method Analysis

Consider the appropriateness of methods that distribute data or training across decentralized devices, services, or storage. When analyzing federated learning methods, consider Data Capacity Analysis, Product Integration Strategy, Product Traceability, and Fairness & Non-Discrimination, as discussed more thoroughly in Section 4 - Problem Mapping, Section 21 - Product Traceability, and Section 11 - Fairness & Non-Discrimination. When analyzing differential privacy methods, consider Data Quality - Noise, as discussed more thoroughly in Section 12 - Data Quality.

Control Aim: To (a) ensure that privacy-preserving techniques are appropriate and aligned with the chosen Models; and (b) highlight associated risks that might arise during the Product Lifecycle.
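To make the two families of techniques in this item concrete, the sketch below combines them: clients train locally on their own data shards so that raw data never leaves the device (federated averaging), and the server optionally adds calibrated Laplace noise to the aggregate (differential privacy). This is a minimal illustration only; the toy one-parameter task, function names, and parameters are hypothetical and not part of this framework, and a real deployment would use a vetted library rather than hand-rolled noise.

```python
import random

def local_step(w, data, lr=0.5):
    # One local gradient step of a toy 1-parameter mean-estimation model;
    # only this updated weight, never the raw data, is sent to the server.
    grad = sum(w - x for x in data) / len(data)
    return w - lr * grad

def laplace_noise(scale, rng=random):
    # Laplace(0, scale) sampled as the difference of two exponentials.
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def federated_round(w, clients, sensitivity=1.0, epsilon=None, rng=random):
    # Each client computes a local update on its own shard; the server
    # averages the updates (federated averaging). If epsilon is set,
    # Laplace noise scaled to sensitivity/epsilon is added to the
    # aggregate, giving a (toy) differentially private release.
    updates = [local_step(w, shard) for shard in clients]
    avg = sum(updates) / len(updates)
    if epsilon is not None:
        avg += laplace_noise(sensitivity / epsilon, rng)
    return avg

clients = [[1.0, 2.0], [3.0, 4.0]]  # hypothetical decentralized shards
w = 0.0
for _ in range(30):
    w = federated_round(w, clients)  # no DP noise: converges toward 2.5
w_dp = federated_round(w, clients, epsilon=1.0)  # noisy DP aggregate
```

Note the trade-off the control asks you to weigh: smaller epsilon means more noise in `w_dp` (stronger privacy, lower Data Quality), which is why Section 12 - Data Quality treats noise as a first-class concern.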

7.2. Cryptographic Methods Analysis

Consider the appropriateness of methods that encrypt all or parts of the data and/or Model pipeline. When analyzing homomorphic encryption methods, consider Product Integration Strategy and Product Scaling Analysis, as discussed more thoroughly in Section 4 - Problem Mapping. Additionally, consider: (a) whether the types of operations and calculations that can be performed meet the requirements of Model Type - Best Fit Analysis, as discussed more thoroughly in Section 5 - Model Decision-Making; and/or (b) whether the encrypted Model's processing speed is acceptable given real-world robustness requirements and direct user interaction, as discussed more thoroughly in Section 14 - Performance Robustness.

Control Aim: To (a) ensure that privacy-preserving techniques are appropriate and aligned with the chosen Models; and (b) highlight associated risks that might arise during the Product Lifecycle.
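Point (a) above is worth making concrete: an additively homomorphic scheme such as Paillier supports addition on ciphertexts but not arbitrary computation, which is exactly the kind of operational limit to check against Model Type - Best Fit Analysis. The toy sketch below shows that limit in miniature; it is illustrative only, the default primes are far too small for any real use (production keys are thousands of bits), and a production system would use an audited library.

```python
import math
import random

def keygen(p=17, q=19):
    # Toy primes for illustration only; real Paillier moduli are >= 2048 bits.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)  # valid simplification when g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts...
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 12
# ...but there is no corresponding operation for ciphertext-by-ciphertext
# multiplication, so Models needing arbitrary arithmetic fall outside
# this scheme's supported operations.
```

The repeated modular exponentiations in `encrypt` and `decrypt` also hint at point (b): encrypted processing is orders of magnitude slower than plaintext arithmetic, which must be weighed against real-world latency requirements.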