

SLIDE 1

Ava Thomas Wright

AV.wright@northeastern.edu
Postdoctoral Research Fellow in AI and Data Ethics here at Northeastern University
JD, MS (Artificial Intelligence), PhD (Philosophy)

I am here today to talk about "Value Sensitive Design" in AI systems. The goal of Value Sensitive Design is to make socially informed and thoughtful value-based choices in the technology design process.

• Appreciating that technology design is a value-laden practice
• Recognizing the value-relevant choice points in the design process
• Identifying and analyzing the values at issue in particular design choices
• Reflecting on those values and how they can or should inform technology design

SLIDE 2

AI Ethics (I): Value-embeddedness in AI system design

SLIDE 3

Group Activity: Moral Machine http://moralmachine.mit.edu/

Students will work through the scenarios presented in Moral Machine as a group and decide which option to choose. The instructor might ask students to discuss as a group which choice they should make and then decide by vote. In a larger class, students might break into small groups and work through the activity together. It is important that the group make a choice rather than have everyone do it on their own, to highlight an important point in the lesson plan. It is also important to show what happens once all the cases are decided: MM outputs which factors the user takes to be morally relevant and to what extent (a sketch of this kind of summary follows).
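To give a feel for that output step, here is a small Python sketch of the kind of per-factor tally a results screen like MM's might compute. This is a guess at the computation, not MM's actual code, and the data format is invented:

    # Hypothetical session data: for each scenario, which factor was at
    # stake and whether the user's choice favored it (e.g., spared the
    # younger group when age was the factor at issue).
    from collections import defaultdict

    choices = [
        {"factor": "age", "favored": True},
        {"factor": "age", "favored": True},
        {"factor": "gender", "favored": False},
        {"factor": "species", "favored": True},
        {"factor": "gender", "favored": True},
    ]

    totals = defaultdict(int)
    favored = defaultdict(int)
    for c in choices:
        totals[c["factor"]] += 1
        favored[c["factor"]] += c["favored"]

    # A factor's score is the share of its scenarios in which the user's
    # choice favored it; 0.5 would indicate indifference.
    for factor in totals:
        print(f"{factor}: {favored[factor] / totals[factor]:.2f}")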

SLIDE 4
I. Descriptive vs. Prescriptive (Normative) Claims

SLIDE 5

Set-up: Review the findings on popular ethical preferences in the MM paper in Nature (see, for example, Figure 2)

SLIDE 6

The distinction between descriptive and prescriptive ethical questions:

Descriptive: How do people think AVs should behave in accident scenarios? (describes what people's preferences are)

Prescriptive: How should AVs behave in accident scenarios? (prescribes what AVs should do, or what AV system designers should do)

SLIDE 7

Some descriptive and prescriptive questions the MM experiment raises:

Descriptive:

• Does the MM platform accurately capture people's preferences about how AVs should behave in accident scenarios?
• Can the MM platform help its users clarify how they reason about how AVs should behave?

Prescriptive:

• Should designers use the Moral Machine platform to make decisions about how to program autonomous vehicles to behave in accident scenarios?
• How should designers determine how to program AVs to behave in accident scenarios?
• When (if ever) should designers use surveys of ethical preferences to decide how to program autonomous systems such as AVs?

SLIDE 8

Group Discussion

• Answer the prescriptive and descriptive questions just raised. This serves to set up the rest of the lesson plan.

Suggestions:

• 10 minutes: Have students break into small groups to try to answer these questions
• 5 minutes: Have students write down their individual answers
• 10 minutes: Have a general group discussion about people's answers to these questions

SLIDE 9

Aims of Discussion

Dependence relationships between the questions:

• If MM is a bad descriptive tool, then we shouldn't look to it to answer moral questions.
• Even if MM is a good descriptive tool, nothing immediately follows from that about the answer to prescriptive questions about what you ought to do (sometimes referred to loosely as the "is-ought" gap in moral theory).

• The majority's preferences might be unethical or unjust.
• Examples: Nazi Germany; the antebellum South. Or consider a society of cannibals guided by the consensus ethical rule, "Murder is morally permissible so long as one intends to eat one's victim."
SLIDE 10

The MM thus makes two implicit claims about AV system design:

Descriptive claim: The MM platform does accurately capture people's ethical preferences about how an AV should behave in accident scenarios.

Prescriptive claim: AVs should be programmed to act in accordance with the majority's preferences as collected by the MM platform.

SLIDE 11

Take a 5-minute break?

SLIDE 12
II. Challenges for the Descriptive Claim

SLIDE 13

Descriptive Claim: The MM platform is a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios.

• If the MM platform is not a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios, then it should not be used as a tool for answering prescriptive questions about how to program autonomous vehicles.
• Even if you think you should encode the majority's preferences, you first have to make sure to get them right!

SLIDE 14

Issues in the collection of data
SLIDE 15

1) Representativeness of sample
SLIDE 16

There are few controls on data collection in MM:

For example:

• Is the data from our class representative of any individual user or even of the group?
• Users might not take it seriously.
• There are no instructions letting the user know that this data might be used for the programming of AVs.
• The people answering questions on the MM website may not be representative of everyone.
• Users cannot register indifference.

SLIDE 17

Potential response: With enough data, we can ignore the noise that results from the above.

• Issue: But we need to know a lot more about how much noise is introduced (the toy simulation below makes this concrete).
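Here is a minimal, hypothetical Python simulation (the 60% support and 30% noise figures are invented, not MM data). Random responses pull the observed majority toward 50/50, and a larger sample shrinks sampling error without removing that bias:

    # Toy simulation: some users answer at random, biasing the observed
    # support for option A no matter how large the sample gets.
    import random

    random.seed(0)
    TRUE_SUPPORT = 0.60  # assumed: 60% of serious users prefer option A
    NOISE_RATE = 0.30    # assumed: 30% of users click at random

    def observed_support(n):
        votes = 0
        for _ in range(n):
            if random.random() < NOISE_RATE:
                votes += random.random() < 0.5  # random click
            else:
                votes += random.random() < TRUE_SUPPORT
        return votes / n

    for n in (1_000, 100_000, 1_000_000):
        print(n, round(observed_support(n), 3))
    # Estimates converge to 0.57 (= 0.7 * 0.60 + 0.3 * 0.5), not 0.60:
    # without knowing the noise rate, no amount of data recovers the
    # true preference.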

SLIDE 18

2) Implicit value assumptions or blindspots in data collection practices

SLIDE 19

Some ethical features of accident scenarios in MM were selected for testing, but not others. Why?

• For example, MM does not gather people's preferences with regard to race, ethnicity, apparent LGBT status, etc. Many other features that might have influenced results could have been tested as well.

SLIDE 20

Potential response: Perhaps MM should disqualify discriminatory ethical preferences, if they exist.

• Issue: But MM tests ethical preferences with regard to gender and age.
• Designing the experiment to capture some preferences that may be discriminatory but not others is a normative decision that requires an explanation and ethical justification.

SLIDE 21
III. Big-Picture Takeaways

SLIDE 22

General Data Collection Concerns

• Data comes from somewhere, and the quality and care taken when collecting it will determine whether the resulting data is useful. Data that is poorly constructed can undermine programmers' ability to design systems ethically.
• Other disciplines might be needed to help understand or vet data. In the case of MM, a social scientist might be able to tell us what kinds of results are significant even with lots of noise. They might also tell us what sorts of controls are needed.
SLIDE 23

Tools or practices for collecting data may be implicitly biased or contain unexamined ethical value assumptions

• A more diverse design team might help reveal blindspots or surface implicit ethical assumptions so that they can be examined.
• Such problems do not apply only when the data collected is data concerning people's ethical preferences.

• For example, suppose a hospital with a history of intentionally discriminating against the hiring of female doctors naively uses its own historical data on the traits of successful hires to train a machine learning system to identify high-quality job applicants. The (perhaps unwitting) result would be a sexist algorithm (a toy sketch of this failure mode follows).
• We will discuss this more in the AI Ethics II module.
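A minimal synthetic sketch of this failure mode, assuming scikit-learn is available; the data-generating rule (qualified men were hired, equally qualified women were not) is invented to mirror the hypothetical:

    # Train on discriminatory historical hiring records and the model
    # reproduces the discrimination.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(0)
    X, y = [], []
    for _ in range(2000):
        skill = random.random()                # qualification score, 0..1
        is_female = random.random() < 0.5      # protected trait as a feature
        hired = skill > 0.5 and not is_female  # biased historical label
        X.append([skill, float(is_female)])
        y.append(int(hired))

    model = LogisticRegression().fit(X, y)

    # Two applicants with identical qualifications:
    print(model.predict_proba([[0.9, 0.0]])[0][1])  # man: high hire probability
    print(model.predict_proba([[0.9, 1.0]])[0][1])  # woman: near zero

Note that nothing in the training pipeline is computationally "wrong"; the value problem lives entirely in the data.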

SLIDE 24

System design may have hidden value assumptions

• Even if there is some version of MM that provides reliable information about users' ethical preferences, the implicit proposal that we should rely on such data to inform how we should develop AVs is a (controversial) prescriptive claim that requires defense.
• Arguably this is the main issue with the MM platform, and it is the topic of the next class.

SLIDE 25

Review Questions

• What is the difference between a descriptive and a prescriptive claim? (the is-ought gap)
• What are the main descriptive and prescriptive claims made in the MM platform? What is the logical relationship between them?
• Describe some issues with how data on people's ethical preferences was collected in MM.
• Should designers program autonomous systems such as AVs to act in accordance with the ethical preferences of a majority of people, as revealed by platforms like the MM? (Q for next time)

SLIDE 26

Rightful Machines

• A rightful machine is an explicitly moral autonomous system that respects principles of justice and the public law of a legitimate state.
• Efforts to build such systems must focus first on duties of right, or justice, which take normative priority over contestable duties of ethics in cases of conflict. (This insight resolves the "trolley problem" for purposes of rightful machines.)
• Feasibility:
  • An adequate deontic logic of the law 1) can describe conflicts but 2) normatively requires their resolution
  • SDL fails, but NMRs can meet these requirements
  • Legal duties must be precisely specified
  • A rational agent architecture: 1) rational agent (LP) constraining 2) control system (ML) for 3) sensors and actuators
  • An implementation: answer-set (logic) programming

    % A murderous act is forbidden unless it qualifies for an exception:
    ob(-A) :- murder(A), not qual(r1(A)).
    % An act qualifies for the exception unless it is forbidden:
    qual(r1(A)) :- act(A), not ob(-A).
    % Definition of murder:
    murder(A) :- intentional(A), act(A), causes_death(A, P), person(P).
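Rules in this style can be exercised with the clingo Python API (pip install clingo). The sketch below uses invented facts, and writes the slide's "-A" as an explicit neg(A) term, since clingo reserves "-" on a variable for arithmetic negation:

    # Run the murder/exception rules on a hypothetical case and
    # enumerate all answer sets.
    import clingo

    PROGRAM = """
    ob(neg(A)) :- murder(A), not qual(r1(A)).
    qual(r1(A)) :- act(A), not ob(neg(A)).
    murder(A) :- intentional(A), act(A), causes_death(A, P), person(P).

    % Hypothetical case: action a1 intentionally causes person p1's death.
    act(a1). intentional(a1). causes_death(a1, p1). person(p1).
    #show ob/1. #show qual/1.
    """

    ctl = clingo.Control()
    ctl.configuration.solve.models = "0"  # "0" = enumerate all answer sets
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print(m))

The two answer sets (one containing ob(neg(a1)), the other qual(r1(a1))) illustrate the feasibility point above: a nonmonotonic encoding can represent the conflict between a prohibition and its possible exception, which SDL cannot.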