  1. Ava Thomas Wright
AV.wright@northeastern.edu
Postdoctoral Research Fellow in AI and Data Ethics here at Northeastern University
JD, MS (Artificial Intelligence), PhD (Philosophy)
I am here today to talk about “Value Sensitive Design” in AI systems. The goal of Value Sensitive Design is to make socially informed and thoughtful value-based choices in the technology design process:
• Appreciating that technology design is a value-laden practice
• Recognizing the value-relevant choice points in the design process
• Identifying and analyzing the values at issue in particular design choices
• Reflecting on those values and how they can or should inform technology design

  2. AI Ethics (I): Value-embeddedness in AI system design

  3. Group Activity: Moral Machine http://moralmachine.mit.edu/
Students work through the scenarios presented in Moral Machine as a group and decide which option to choose. The instructor might ask students to discuss each choice as a group and then decide by vote; in a larger class, students might break into small groups and work through the activity together. It is important that the group make each choice collectively rather than individually, to highlight an important point in the lesson plan. It is also important to show what happens once all the cases are decided: MM outputs which factors the user takes to be morally relevant and to what extent.
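
A minimal sketch in Python (the trial data and factor names "young" and "many" are fabricated for illustration) of the kind of summary MM shows at the end: for each factor, tally how often the user's choices spared the group that had it, counting only trials where the factor actually differed between the two groups.

    from collections import Counter

    # Each hypothetical trial records the attributes of the spared group and
    # of the killed group; only factors that differ between them are informative.
    trials = [
        {"spared": {"young": True,  "many": True},  "killed": {"young": False, "many": False}},
        {"spared": {"young": False, "many": True},  "killed": {"young": True,  "many": False}},
        {"spared": {"young": True,  "many": False}, "killed": {"young": False, "many": True}},
    ]

    decisive = Counter()  # trials where the factor differed between the groups
    spared = Counter()    # of those, trials where the user spared that factor
    for t in trials:
        for factor in t["spared"]:
            if t["spared"][factor] != t["killed"][factor]:
                decisive[factor] += 1
                if t["spared"][factor]:
                    spared[factor] += 1

    for factor, n in decisive.items():
        print(f"{factor}: spared in {spared[factor]}/{n} decisive trials")

The published analysis uses a more careful estimator than a raw tally, but the idea of inferring factor weights from forced choices is the same.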

  4. I. Descriptive vs. Prescriptive (Normative) Claims

  5. Set-up: Review the findings on popular ethical preferences in the MM paper in Nature (see, for example, Figure 2)

  6. The distinction between descriptive and prescriptive ethical questions:
Descriptive: How do people think AVs should behave in accident scenarios? (describes what people's preferences are)
Prescriptive: How should AVs behave in accident scenarios? (prescribes what AVs should do, or what AV system designers should do)

  7. Some descriptive and prescriptive questions the MM experiment raises:
Descriptive:
• Does the MM platform accurately capture people's preferences about how AVs should behave in accident scenarios?
• Can the MM platform help its users clarify how they reason about how AVs should behave?
Prescriptive:
• Should designers use the Moral Machine platform to make decisions about how to program autonomous vehicles to behave in accident scenarios?
• How should designers determine how to program AVs to behave in accident scenarios?
• When (if ever) should designers use surveys of ethical preferences to decide how to program autonomous systems such as AVs?

  8. Group Discussion
Answer the prescriptive and descriptive questions just raised. This serves to set up the rest of the lesson plan.
Suggestions:
• 10 minutes: Have students break into small groups to try to answer these questions
• 5 minutes: Have students write down their individual answers
• 10 minutes: Have a general group discussion about people’s answers to these questions

  9. Aims of Discussion
Dependence relationships between the questions:
• If MM is a bad descriptive tool, then we shouldn’t look to it to answer moral questions.
• Even if MM is a good descriptive tool, nothing immediately follows from that about the answer to prescriptive questions about what you ought to do (sometimes referred to loosely as the "is-ought" gap in moral theory).
• The majority's preferences might be unethical or unjust. Examples: Nazi Germany; the antebellum South. Or consider a society of cannibals guided by the consensus ethical rule, "Murder is morally permissible so long as one intends to eat one's victim."

  10. The MM thus makes two implicit claims about AV system design:
Descriptive claim: The MM platform does accurately capture people's ethical preferences about how an AV should behave in accident scenarios.
Prescriptive claim: AVs should be programmed to act in accordance with the majority's preferences as collected by the MM platform.

  11. Take a 5-minute break?

  12. II. Challenges for the Descriptive Claim

  13. Descriptive Claim: The MM platform is a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios.
• If the MM platform is not a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios, then it should not be used as a tool for answering prescriptive questions about how to program autonomous vehicles.
• Even if you think you should encode the majority's preferences, you first have to make sure to get them right!

  14. Issues in the collection of data

  15. 1) Representativeness of sample

  16. For example, is the data from our class representative of any individual user or even of the group? There are few controls on data collection:
• There are no instructions letting the user know that this data might be used for the programming of AVs
• Users might not take it seriously
• The people answering questions on the MM website may not be representative of everyone
• Users cannot register indifference in MM

  17. Potential response: With enough data, we can ignore the noise that results from the above.
Issue: But we need to know a lot more about how much noise is introduced.
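
The disagreement can be made concrete with a small simulation. A minimal sketch in Python (the numbers are purely illustrative, not MM data): averaging over more responses shrinks random noise, but a systematic distortion, such as unserious responders leaning one way, survives no matter how large the sample gets.

    import random

    random.seed(0)
    TRUE_PREFERENCE = 0.60  # hypothetical share who genuinely prefer option A
    BIAS = 0.10             # hypothetical systematic distortion in responses

    for n in (100, 10_000, 1_000_000):
        votes = sum(random.random() < TRUE_PREFERENCE + BIAS for _ in range(n))
        print(f"n={n:>9}: observed share = {votes / n:.3f} "
              f"(true share = {TRUE_PREFERENCE})")

The observed share converges, but to 0.70 rather than 0.60: more data removes noise, not bias, which is why we need to know how the noise is introduced before waving it away.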

  18. 2) Implicit value assumptions or blindspots in data collection practices

  19. Some ethical features of accident scenarios in MM were selected for testing, but not others. Why?
• For example, MM does not gather people's preferences with regard to race, ethnicity, apparent LGBT status, etc.
• Many other features that might have influenced results could have been tested as well.

  20. Potential response: Perhaps MM should disqualify discriminatory ethical preferences, if they exist.
Issue: But MM tests ethical preferences with regard to gender and age. Designing the experiment to capture some preferences that may be discriminatory but not others is a normative decision that requires an explanation and ethical justification.

  21. III. Big-Picture Takeaways

  22. General Data Collection Concerns
• Data comes from somewhere, and the quality and care taken when collecting it will determine whether the resulting data is useful. Data that is poorly constructed can undermine programmers’ ability to design systems ethically.
• Other disciplines might be needed to help understand or vet data. In the case of MM, a social scientist might be able to tell us what kinds of results are significant even with lots of noise. They might also tell us what sorts of controls are needed.

  23. Tools or practices for collecting data may be implicitly biased or contain unexamined ethical value assumptions.
• A more diverse design team might help reveal blindspots or surface implicit ethical assumptions so that they can be examined.
• Such problems do not apply only when the data collected is data concerning people's ethical preferences. For example, suppose a hospital with a history of intentionally discriminating against the hiring of female doctors naively uses its own historical data on the traits of successful hires to train a machine learning system to identify high-quality job applicants. The (perhaps unwitting) result would be a sexist algorithm.
• We will discuss this more in the AI Ethics II module.
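
The hospital example can be shown in a few lines. A minimal sketch in Python (toy fabricated records, not real hiring data): when gender correlates perfectly with the historical hiring label, even the simplest "model", the per-group hire rate, learns the discrimination rather than the qualifications.

    # Hypothetical records: (gender, qualified, hired). Equally qualified
    # women were historically rejected, so the label encodes the bias.
    past_hires = [
        ("m", True, True), ("m", True, True), ("m", False, True),
        ("f", True, False), ("f", True, False), ("f", False, False),
    ]

    def predicted_hire_prob(gender):
        outcomes = [hired for g, _, hired in past_hires if g == gender]
        return sum(outcomes) / len(outcomes)

    for g in ("m", "f"):
        print(f"predicted P(hire | gender={g}) = {predicted_hire_prob(g):.2f}")
    # Prints 1.00 for "m" and 0.00 for "f": the unqualified man outranks the
    # qualified women, because the bias, not merit, is what the data rewards.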

  24. Design of a system may have hidden value assumptions
• Even if there is some version of MM that provides reliable information about users’ ethical preferences, the implicit proposal that we should rely on such data to inform how we should develop AVs is a (controversial) prescriptive claim that requires defense.
• Arguably this is the main issue with the MM platform, and it is the topic of the next class.

  25. Review Questions
• What is the difference between a descriptive and a prescriptive claim? (the is-ought gap)
• What are the main descriptive and prescriptive claims made in the MM platform? What is the logical relationship between them?
• Describe some issues with how data on people’s ethical preferences was collected in MM.
• Should designers program autonomous systems such as AVs to act in accordance with the ethical preferences of a majority of people as revealed by platforms like the MM? (Q for next time)

  26. Rightful Machines
• A rightful machine is an explicitly moral autonomous system that respects principles of justice and the public law of a legitimate state.
• Efforts to build such systems must focus first on duties of right, or justice, which take normative priority over contestable duties of ethics in cases of conflict. (This insight resolves the “trolley problem” for purposes of rightful machines.)
• Feasibility:
  • An adequate deontic logic of the law 1) can describe conflicts but 2) normatively requires their resolution
  • SDL (standard deontic logic) fails, but NMRs (non-monotonic reasoning formalisms) can meet these requirements
  • Legal duties must be precisely specified
  • A rational agent architecture: 1) a rational agent (logic programming) constraining 2) a control system (ML) for 3) sensors and actuators
  • An implementation: answer-set (logic) programming, e.g.:
    murder(A) :- intentional(A), act(A), causes_death(A, P), person(P).
    ob(-A) :- murder(A), not qual(r1(A)).
    qual(r1(A)) :- act(A), not ob(-A).
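
A minimal sketch of how rules like those on the slide could be run with the clingo answer-set solver's Python API (assuming the clingo package is installed; the scenario facts are hypothetical, and the slide's ob(-A) is written here as ob(neg(A)) because strong negation cannot be applied directly to a variable term). The two defeasible rules form an even loop through default negation, so the solver reports two answer sets, one containing the obligation and one containing the exception: the logic can represent the conflict, which the agent architecture must then resolve.

    import clingo

    PROGRAM = """
    % Hypothetical scenario facts: a1 is an intentional act that kills p1.
    act(a1). intentional(a1). causes_death(a1, p1). person(p1).

    % Rules adapted from the slide (neg/1 stands in for strong negation).
    murder(A) :- intentional(A), act(A), causes_death(A, P), person(P).
    ob(neg(A)) :- murder(A), not qual(r1(A)).
    qual(r1(A)) :- act(A), not ob(neg(A)).
    """

    ctl = clingo.Control(["0"])  # "0" asks for all answer sets
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda model: print("Answer set:", model))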
