Ava Thomas Wright
AV.wright@northeastern.edu
Postdoctoral Research Fellow in AI and Data Ethics here at Northeastern University
JD, MS (Artificial Intelligence), PhD (Philosophy)

I am here today to talk about "Value Sensitive Design" in AI systems. The goal of Value Sensitive Design is to make socially-informed and thoughtful value-based choices in the technology design process:
- Appreciating that technology design is a value-laden practice
- Recognizing the value-relevant choice points in the design process
- Identifying and analyzing the values at issue in particular design choices
- Reflecting on those values and how they can or should inform technology design
AI Ethics (I): Value-embeddedness in AI system design
Group Activity: Moral Machine http://moralmachine.mit.edu/
Students will work through the scenarios presented in Moral Machine as a group and decide which option to choose. The instructor might ask students to discuss as a group which choice they should make and then decide by vote. In a larger class, students might break into small groups and work through the activity together. It is important that the group make a choice rather than have everyone decide individually, since this collective decision sets up a key point later in the lesson plan. It is also important to show what happens once all the cases are decided: Moral Machine outputs which factors the user takes to be morally relevant and to what extent.
I. Descriptive vs. Prescriptive (Normative) Claims
Set-up: Review the findings on popular ethical preferences in the MM paper in Nature (see, for example, Figure 2)
The distinction between descriptive and prescriptive ethical questions:
Descriptive: How do people think AVs should behave in accident scenarios? (describes what people's preferences are)
Prescriptive: How should AVs behave in accident scenarios? (prescribes what AVs should do, or what AV system designers should do)

Some descriptive and prescriptive questions the MM experiment raises:

Descriptive:
- Does the MM platform accurately capture people's preferences about how AVs should behave in accident scenarios?
- Can the MM platform help its users clarify how they reason about how AVs should behave?
Prescriptive:
- Should designers use the Moral Machine platform to make decisions about how to program autonomous vehicles to behave in accident scenarios?
- How should designers determine how to program AVs to behave in accident scenarios?
- When (if ever) should designers use surveys of ethical preferences to decide how to program autonomous systems such as AVs?
Group Discussion
Answer the prescriptive and descriptive questions just raised. This serves to set up the rest of the lesson plan.
Suggestions
- 10 minutes: Have students break into small groups to try to answer these questions
- 5 minutes: Have students write down their individual answers
- 10 minutes: Have a general group discussion about people's answers to these questions