Human-Centric Design

From The Foundation for Best Practices in Machine Learning

To ensure (a) that desirable solutions are built; (b) human control over Products and Models; and (c) that individuals affected by Product and Model outputs can obtain redress.

19.1. Product Definition(s)

To discover and gain insight so that the Product and Model(s) solve the right problems and are designed for human needs and values, before building them.
Item nr. Item Name and Page Control Aim
19.1.1. Human Centered Machine Learning

Incorporate the human (non-technical) perspective in your (technical) process of exploration, development and production by applying user research, design thinking, prototyping and rapid feedback, and human factors when defining a usable product or model.

To (a) ensure that Product(s) and Model(s) are not only feasible and viable, but also aligned with human needs; and (b) highlight the associated risks of failing to do so.

19.1.2. UX (or user) research

Focus on understanding user behaviours, needs, and motivations through observation techniques, task analysis, user interviews, and other research methodologies.

To prevent (a) a focus on technology from overshadowing a focus on problem solving; and (b) cognitive biases from adversely influencing Product and Model design.

19.1.3. Design for Human values

Include activities for (a) the identification of societal values, (b) deciding on a moral deliberation approach (e.g. through algorithms, user control or regulation), and (c) methods to link values to formal system requirements (e.g. value sensitive design (VSD) mapping).

To reflect societal concerns about the ethics of AI, and to ensure that AI systems are developed responsibly, incorporating social and ethical values and upholding human values. The moral quality of a technology depends on its consequences.

19.2. Exploration

To (a) cluster, (b) find insights and (c) define the right opportunity area, ensuring to focus on the right questions to solve in preparation for the development and production phase.
19.2.1. Design Thinking

Ensure an iterative development process by (a) empathize: research your users' needs, (b) define: state your users' most important needs and problems to solve, (c) ideate: challenge assumptions and create ideas, (d) prototype: start to create solutions and (e) test: gather user feedback early and often.

To let data scientists organise and strategise their next steps in the exploratory phase.

19.2.2. Ethical assessment

Discuss with your team to what extent (a) the AI product actively or passively discriminates against groups of people in a harmful way; (b) everyone involved in the development and use of the AI product understands, accepts and is able to exercise their rights and responsibilities; and (c) the intended users of an AI product can meaningfully understand the purpose of the product, how it works, and (where applicable) how specific decisions were made.

To avoid the possible opportunity areas of your AI model being harmful in terms of fairness, accountability and transparency.

19.2.3. Estimating the value vs effort of possible opportunity areas

Explore the details of what mental models and expectations people might bring when interacting with an ML system, as well as what data would be needed for that system, e.g. using an Impact Matrix.

To reveal the automatic assumptions people will bring to an ML-powered product, to be used as prompts for a product team discussion or as stimuli in user research. (See also Section 4.11. - User Experience Mapping.)
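
One way to make the value-vs-effort estimate concrete is a simple impact matrix that sorts candidate opportunity areas into quadrants for team discussion. The sketch below is illustrative only: the 1-5 scoring scale, the threshold, and the opportunity names are all assumptions, not part of the framework.

```python
# Hypothetical impact-matrix sketch: score candidate opportunity areas on
# estimated value and effort, then place them in one of four quadrants.

def classify(value, effort, threshold=3):
    """Place an opportunity on a simple 2x2 impact matrix.

    value, effort: scores on a 1-5 scale (team estimates, not measurements).
    """
    high_value = value >= threshold
    high_effort = effort >= threshold
    if high_value and not high_effort:
        return "quick win"
    if high_value and high_effort:
        return "big bet"
    if not high_value and not high_effort:
        return "maybe later"
    return "avoid"

# Illustrative (value, effort) estimates for made-up opportunity areas.
opportunities = {
    "auto-tagging": (4, 2),
    "full personalisation": (5, 5),
    "dark mode": (2, 1),
    "legacy rewrite": (2, 5),
}

for name, (value, effort) in sorted(opportunities.items()):
    print(f"{name}: {classify(value, effort)}")
```

The quadrant labels are prompts for discussion, not a decision rule; the point is to force the team to state its value and effort assumptions explicitly.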

19.3. Development

To (a) ensure rapid iteration and targeted feedback from relevant Stakeholders, allowing a larger range of possible solutions to be considered in the selection process; and (b) increase the creativity and options considered, while avoiding personal biases and/or pigeon-holing a solution.
19.3.1. Prototyping

1: Focus on quick and minimum viable prototypes that offer enough tangibility to find out whether they solve the initial problem or answer the initial question. Document how test participants react and what assumptions they make when they "use" your mockup.

2: Design a so-called 'Wizard of Oz' test; have participants interact with what they believe to be an autonomous system, but which is actually being controlled by a human (usually a team member).

To gain early feedback (without having to actually build an ML product) needed to (a) adjust or pivot your Product(s) and/or Model(s), thus ensuring business viability; and/or (b) assess the costs and benefits of potential features with more validity than using dummy examples or conceptual descriptions.

19.3.2. Cost weighing of false positives and false negatives

While all errors are equal to an ML system, not all errors are equal to all people. Discuss with your team how mistakes of your ML model might affect the user's experience of the product.

To avoid sensitive decisions being taken (a) autonomously; or (b) without human consideration.
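
The team discussion above can be grounded in a small calculation. The sketch below assigns domain-specific costs to false positives and false negatives instead of treating all errors equally; the cost values and counts are illustrative assumptions, not prescribed figures.

```python
# Sketch: weigh false positives and false negatives with asymmetric,
# domain-specific costs rather than counting all errors equally.

def expected_error_cost(fp, fn, cost_fp, cost_fn):
    """Total cost of a model's errors under asymmetric penalties."""
    return fp * cost_fp + fn * cost_fn

# Example: a fraud screen where a missed fraud case (false negative) harms
# users far more than a wrongly flagged transaction (false positive).
model_a = expected_error_cost(fp=40, fn=5, cost_fp=1.0, cost_fn=50.0)   # 290.0
model_b = expected_error_cost(fp=10, fn=12, cost_fp=1.0, cost_fn=50.0)  # 610.0

# Model A makes more errors overall (45 vs 22) but is cheaper once the
# human impact of each error type is priced in.
print(model_a, model_b)
```

Agreeing on the cost ratio with non-technical stakeholders is itself the valuable exercise; the arithmetic only makes the trade-off visible.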

19.3.3. Visual Storytelling

Focus on explanatory analysis over exploratory analysis, taking the mental models of your target audience in account.

To avoid uninformed decisions about your product or model by non-technical stakeholders, when presenting complex analysis, models, and findings.

19.3.4. Preventative Process Design

Document and assess whether high-risk and/or high-impact Model (sub)problems or dilemmas that are present in the Product (as determined from following the Best Practices Framework) can be mitigated or avoided by applying non-Model process and implementation solutions. If non-Model solutions are not applied, document the reasons for this, document the sustained presence of these risks and implement appropriate incident response measures.

To (a) prevent high-risk and/or high-impact Model (sub)problems or dilemmas through non-Model process and implementation solutions; and (b) highlight associated risks that might occur in the Product Lifecycle.

19.4. Production

To ensure (a) delivering a user-friendly product; and (b) increasing the adoption rate of your product or model, focusing on (dis)trust as the fundamental risk of ML models with (non-technical) end users.
19.4.1. Trust - increased by design

Allow for users to develop systems heuristics (ease of use) via design patterns while at the same time facilitate a detailed understanding to those who value the 'intelligent' technology used. (See Section 19.4.2. - Design for Human Error; Section 19.4.3. - Algorithmic transparency; and Section 19.4.4. - Progressive disclosure for further information.)

To (a) avoid the user not trusting the outcome and acting counter to the design, causing at best inefficiencies and at worst serious harms; and (b) ensure that a user who trusts an application to do what they think it will do can confirm that their trust is justified.

19.4.2. Design for Human Error

(a) Understand the causes of error and design to minimise those causes; (b) do sensibility checks: does the action pass the "common sense" test (e.g. is the number correct: 10.000 g or 10.000 kg?); (c) make it possible to reverse actions (to "undo" them), or make it harder to do what cannot be reversed (e.g. add constraints to block errors, such as changing the colour to red or asking "Do you want to delete this file? Are you sure?"); and (d) make it easier for people to discover the errors that do occur, and make them easier to correct.

To (a) increase trust between the end user and the model; and (b) minimise the opportunities for errors while also mitigating their consequences. Increase users' trust by designing for deliberate misuse of your model (making your model or product "idiot-proof"), so that users can (a) insert data to compare the model outcome with their own expected outcome, which will increase their trust; or (b) test the limitations of your product or model (via fake or highly unlikely data) without breaking it.
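
Two of the controls above, the "common sense" sensibility check and the reversible action, can be sketched in code. The range limits, file names and class design below are assumptions for illustration, not a prescribed implementation.

```python
# Sketch of two error-tolerant design controls: a sensibility check on a
# weight field, and a deletion that can be undone.

def sensible_weight(value_g, lower_g=1, upper_g=500_000):
    """Flag values outside a plausible range (e.g. 10.000 g entered as kg)."""
    return lower_g <= value_g <= upper_g

class UndoableList:
    """A collection whose deletions are staged so the user can undo them."""
    def __init__(self, items):
        self.items = list(items)
        self._trash = []

    def delete(self, item):
        self.items.remove(item)
        self._trash.append(item)   # kept, not destroyed

    def undo(self):
        if self._trash:
            self.items.append(self._trash.pop())

files = UndoableList(["report.pdf", "draft.txt"])
files.delete("draft.txt")
files.undo()                             # the deletion is reversible
assert "draft.txt" in files.items
assert not sensible_weight(10_000_000)   # 10.000 kg mistyped as grams
```

Irreversible operations would instead go behind an explicit confirmation step, per control (c) above.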

19.4.3. Algorithmic transparency

Assess the appropriate system heuristics (e.g. ease of use), document all factors that influence the algorithmic decisions, and use them as a design tool to make those factors visible, or transparent, to users who use or are affected by the ML systems.

To (a) increase trust between the end user and the model; (b) increase end-user control; (c) improve acceptance rate of tool; (d) promote user learning with complex data; and (e) enable oversight by developers.
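
Documenting the factors that influence a decision can feed directly into the interface. The sketch below, assuming a simple linear score with invented feature names and weights, surfaces the largest contributions so a user can see what drove an outcome.

```python
# Sketch: expose the factors behind a linear score so end users (and
# developers) can see which inputs drove a decision. Feature names and
# weights are invented for illustration.

def explain(weights, features, top_k=3):
    """Return the top contributing (feature, contribution) pairs."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(),
                  key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

weights = {"income": 0.5, "age": 0.1, "late_payments": -2.0}
applicant = {"income": 3.0, "age": 4.0, "late_payments": 2.0}

for name, contribution in explain(weights, applicant):
    print(f"{name}: {contribution:+.1f}")
```

For non-linear models the same display pattern applies, with contributions supplied by an attribution method instead of raw weights.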

19.4.4. Progressive disclosure

At the point where the end user interacts with the Product outcomes, show them only the initial features and/or information necessary at that point in the interaction (thus initially hiding more advanced interface controls). Show the secondary features and/or information only when the user requests it (the "show less, provide more" principle).

To greatly reduce unwanted complexity for the end user, thus (a) preventing end-user non-adoption or misunderstanding; and (b) ensuring an increased feeling of trust among users.

19.4.5. Human in the loop (HITL)

Embed human interaction with machine learning systems to be able to label or correct inaccuracies in machine predictions.

To avoid the risk of the Product applying a materially detrimental or catastrophic Product Outcome to a Product Subject without human intervention.
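
One common way to embed such human interaction is a confidence gate: model outputs below a threshold are routed to a human reviewer instead of being applied automatically. The threshold value and queue mechanism below are assumptions for illustration.

```python
# Sketch of a human-in-the-loop gate: low-confidence predictions are
# deferred to a human review queue rather than applied automatically.

CONFIDENCE_THRESHOLD = 0.9  # assumed value; tune to the risk of the decision

def route(prediction, confidence, human_review_queue):
    """Apply the model output only when it is confident enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                          # automated path
    human_review_queue.append((prediction, confidence))
    return None                                    # defer to a human

queue = []
assert route("approve", 0.97, queue) == "approve"
assert route("deny", 0.62, queue) is None  # sensitive case goes to a human
assert queue == [("deny", 0.62)]
```

Corrections made by reviewers can then be fed back as labels, closing the loop the control describes.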

19.4.6. Remediation

Document, assess and implement in the Model(s), Product and Organization processes, requirements for enabling Product Subjects to challenge and obtain redress for Product Outcomes applied to them.

To ensure detrimental Product Outcomes are easily reverted when appropriate.