Autonomous Vehicles Ethics & Law: A Machine Learning Trolley Problem? - PowerPoint PPT Presentation



SLIDE 1

SLIDE 2

Autonomous Vehicles Ethics & Law: A Machine Learning Trolley Problem?

Tabrez Y. Ebrahim

TEbrahim@cwsl.edu

Source: Inventor Spot
SLIDE 3

Tens of thousands of our people are being killed every year on our highways, hundreds of thousands are injured and all because the automobile is being driven by people who are not capable of turning their nervous systems and muscles into a perfect machine. – The Living Machine by David H. Keller, M.D.1

The introduction of the new car as a driverless taxi was finally introduced. … Old people began to cross the continent on their own cars. Young people found the driverless car was admirable for petting. The blind for the first time were safe. Parents found that they could more safely send their children to school in the new car than in old cars with a chauffeur. … The new automatic automobile, the living machine, was far more careful in its driving than the average moronic human chauffeur. – The Living Machine by David H. Keller, M.D.2

In the 1935 short fiction book "The Living Machine", writer David H. Keller wrote about …

This Article uses "autonomous vehicles" or AVs, which are considered synonymous with "driverless car" and "self-driving car," … "Level 0, … control the vehicle."
SLIDE 4

Roadmap

1. Technological Background
2. Ethics
3. Trolley Problem
4. Decentralization

SLIDE 5

Technological Background

Google

SLIDE 6

Technological Background

2.1.1. LIDAR: LIDAR refers to a light detection and ranging device, which sends millions of light pulses per second in a well-designed pattern. With its rotating axis, it is able to create a dynamic, three-dimensional map of the environment. LIDAR is the heart of object detection for most existing autonomous vehicles. Figure 3 shows the ideal detection results from a 3D LIDAR, with all the moving objects being identified.

Figure 3. The ideal detection result from a 3D LIDAR with all moving objects detected [22].

Scott Drew Pendleton et al., Perception, Planning, Control, and Coordination of Autonomous Vehicles
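Since the slide describes LIDAR as pulsed ranging swept around a rotating axis, a minimal sketch of the underlying time-of-flight arithmetic may help; the angles, timing value, and function name below are illustrative assumptions, not taken from Pendleton et al.:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one LIDAR pulse echo into an (x, y, z) point in metres.

    round_trip_s: time between emitting the pulse and receiving its echo.
    azimuth_deg / elevation_deg: direction the rotating head was pointing.
    """
    r = C * round_trip_s / 2.0  # the pulse travels out and back, so halve the path
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),  # x: forward
            r * math.cos(el) * math.sin(az),  # y: left
            r * math.sin(el))                 # z: up

# Example: an echo arriving after ~66.7 ns straight ahead is roughly 10 m away.
print(pulse_to_point(66.7e-9, azimuth_deg=0.0, elevation_deg=0.0))
```

Millions of such points per second, accumulated over each rotation, form the dynamic three-dimensional map the slide refers to.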

SLIDE 7

U.S. Department of Transportation, Automated Vehicles 3.0

Different Levels of Automation in Autonomous Vehicles (AVs)

SAE AUTOMATION LEVELS

0 - No Automation: The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems.

1 - Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task.

2 - Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task.

3 - Conditional Automation: The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene.

4 - High Automation: The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.

5 - Full Automation: The full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.

1 SAE International, J3016_201806: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (Warrendale: SAE International, 15 June 2018), https://www.sae.org/standards/content/j3016_201806/.

Clear and consistent definition and use of terminology is critical to advancing the discussion around automation. This document uses "automation" and "automated vehicles" as general terms to broadly describe the topic, with more specific language, such as "Automated Driving System" or "ADS," used when appropriate. A full glossary is in the Appendix.
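For readers who prefer code to prose, the six SAE levels can be captured as a simple enumeration. This is only a minimal sketch; the class and helper names are illustrative assumptions, not part of SAE J3016 or the DOT document:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, as quoted above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def system_performs_whole_driving_task(level: SAELevel) -> bool:
    """From Level 3 up, an automated driving system performs all aspects of the
    dynamic driving task (the levels differ in what fallback is expected)."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION

print(system_performs_whole_driving_task(SAELevel.PARTIAL_AUTOMATION))      # False
print(system_performs_whole_driving_task(SAELevel.CONDITIONAL_AUTOMATION))  # True
```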

SLIDE 8

Tobias Holstein et al., Ethical & Social Aspects of Self-Driving Cars

Decision Making in AVs

[Architecture figure from Holstein et al.: sensing inputs (ultrasonic sensor(s), GPS, orientation sensor(s), laser, radar, camera(s), navigation data, vehicle-to-vehicle communication, vehicle-to-infrastructure communication) feed computing and decision making, which acts on and controls the vehicle. Surrounding context includes other external services and devices (e.g., nearby phones), people/obstacles, earth/geology, space/satellites, WiFi, navigation provider/service, mobile networks, and Bluetooth.]

SLIDE 9

Tobias Holstein et al., Ethical & Social Aspects of Self-Driving Cars

Comparison of How Humans & Computers “Learn” and “Interpret”

[Figure from Holstein et al.: both the computer and the human follow a Sense → Think & Decide → Act loop. For the computer, sensor(s) and other inputs feed recognition, computation, and decision making, which then acts. Learning from mistakes/misbehavior: feedback to the manufacturer might change the implementation, etc.]
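The Sense → Think & Decide → Act loop in the figure can be sketched as a minimal control loop; the sensor names, the 5 m braking rule, and the feedback log are illustrative assumptions, not part of Holstein et al.'s comparison:

```python
def sense(sensors):
    """Gather readings from the vehicle's sensor(s) and other inputs."""
    return {name: read() for name, read in sensors.items()}

def think_and_decide(readings):
    """Recognition, computation, and decision making over the sensed data."""
    too_close = any(distance < 5.0 for distance in readings.values())
    return "brake" if too_close else "continue"

def act(command):
    """Act on, and control, the vehicle."""
    print(f"vehicle command: {command}")

feedback_log = []  # feedback to the manufacturer might later change the implementation

def control_step(sensors):
    readings = sense(sensors)
    command = think_and_decide(readings)
    act(command)
    feedback_log.append((readings, command))

# One step with two fake range readings (metres).
control_step({"front_lidar": lambda: 3.2, "front_radar": lambda: 12.0})
```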

SLIDE 10

Dieter Vanderelst & Alan Winfield, An Architecture for Ethical Robots Inspired by the Simulation Theory of Cognition

Should the expanding ability of AVs to make unsupervised decisions be “ethical”?

SLIDE 11

Ethics of Crashes

§ Traditional Ethical Theories

  • (1) Utilitarians (or consequentialists more broadly)
  • (2) Kantians (or deontologists more broadly)
  • (3) Virtue Ethics
  • (4) Contractualists

Sven Nyholm, The Ethics of Crashes with Self-Driving Cars: A Roadmap

SLIDE 12

Ethics of Crashes

§ Who is the moral agent? Who needs to make the choice?

  • (1) person designing the car
  • (2) regulatory body permitting certain types of cars on the road
  • (3) car itself

Sven Nyholm, The Ethics of Crashes with Self-Driving Cars: A Roadmap

SLIDE 13

“Machine Learning”

§ Use of Machine Learning by AVs

  • examining images taken as the AV moves and comparing them to datasets of images
  • the AV is not programmed with numerous if-then scenarios
  • instead, a machine learning algorithm classifies images with high accuracy
  • assigning a label to each image pixel and then building a classifier on top of that to predict an action (sketched below)
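As a toy illustration of that two-stage pipeline (label every pixel, then classify the labelled scene into an action), here is a minimal sketch; the label set, the brightness stand-in for a trained segmenter, and the 1% threshold are assumptions for illustration only:

```python
import numpy as np

LABELS = {0: "road", 1: "pedestrian"}  # assumed toy label set

def label_pixels(image):
    """Stand-in for a learned per-pixel labeller (semantic segmentation).

    A real AV would use a model trained against datasets of labelled images;
    here a brightness threshold fakes the pixel-wise labels.
    """
    labels = np.zeros(image.shape[:2], dtype=np.int64)
    labels[image.mean(axis=-1) > 200] = 1  # bright blobs stand in for pedestrians
    return labels

def predict_action(labels):
    """Classifier built on top of the pixel labels to predict an action."""
    pedestrian_fraction = np.mean(labels == 1)
    return "brake" if pedestrian_fraction > 0.01 else "continue"

frame = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
print(predict_action(label_pixels(frame)))
```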

SLIDE 14

Machine Learning Crossroads

  • “sweet”
  • universe, humanity’s f

Wolf Schafer, Ethical AI?: The Design of Moral Machines

SLIDE 15

Bryan Casey, Amoral Machines, Or: How Roboticists Can Learn to Stop Worrying and Love the Law

The consequences, in their most abstract sense, remain the same:

The Trolley Problem

SLIDE 16

Ebru Dogan et al., Ethics in the Design of Automated Vehicles: the AVEethics Project

AV Use Case: Someone Might be Harmed

The AV faces a collision that cannot be avoided (an "inevitable collision state"), and the aim would be to minimize risk; there will not be one "good" solution, and the decision will involve a trade-off. That is, "the capacity of reasoning on the perception and action in order to … trivial choices" belongs to the general domain of "robot ethics," whereas an AV's …

Figure 1. Sample use case.
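One way to read "minimize risk" when no single good solution exists is as a choice among the remaining maneuvers by lowest weighted expected harm, where the weights are exactly where the trade-off lives. The maneuvers, probabilities, and weights below are purely illustrative assumptions, not the AVEthics project's model:

```python
# Candidate maneuvers with assumed probabilities of seriously harming each party.
maneuvers = {
    "stay_in_lane": {"pedestrians": 0.9, "passenger": 0.0},
    "swerve_left":  {"pedestrians": 0.1, "passenger": 0.6},
    "hard_brake":   {"pedestrians": 0.5, "passenger": 0.1},
}

# There is no single "good" weighting; the ethical trade-off is encoded here.
weights = {"pedestrians": 1.0, "passenger": 1.0}

def expected_harm(risks):
    return sum(weights[party] * p for party, p in risks.items())

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best, {m: round(expected_harm(r), 2) for m, r in maneuvers.items()})
```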

SLIDE 17

Jean-François Bonnefon, The Social Dilemma of Autonomous Vehicles

AV Use Case: Someone Might be Harmed

Figure 1: Three traffic situations involving imminent unavoidable harm. The car must decide between (a) killing several pedestrians or one passer-by, (b) killing one pedestrian or killing its own passenger, (c) killing several pedestrians or killing its own passenger.
SLIDE 18

Edmond Awad et al., The Moral Machine Experiment

MIT’s Moral Machine

[an open public platform for broader discussion of machine ethics]

What should the self-driving car do?

Fig. 1 | Coverage and interface. a, World map highlighting the locations of Moral Machine visitors. Each point represents a location from which at least one visitor made at least one decision (n = 39.6 million). The numbers of visitors or decisions from each location are not represented. b, Moral Machine interface. An autonomous vehicle experiences a sudden brake failure. Staying on course would result in the death of two elderly men and an elderly woman who are crossing on a 'do not cross' signal (left). Swerving would result in the death of three passengers: an adult man, an adult woman, and a boy (right).

SLIDE 19

Edmond Awad et al., The Moral Machine Experiment

MIT’s Moral Machine

[Figure 2 plots: panel a shows ∆P along the dimensions intervention (action vs. inaction), relation to AV (pedestrians vs. passengers), law (lawful vs. unlawful), gender (females vs. males), fitness (the fit vs. the large), social status (higher vs. lower), number of characters (more vs. fewer, 1 to 4), age (young vs. elderly), and species (humans vs. pets); panel b shows the preference for sparing individual characters (stroller, girl, boy, pregnant woman, doctors, athletes, executives, large woman/man, homeless person, old man/woman, dog, criminal, cat) relative to sparing an adult man or woman.]

Fig. 2 | Global preferences. a, AMCE for each preference. In each row, ∆P is the difference between the probability of sparing characters possessing the attribute on the right and the probability of sparing characters possessing the attribute on the left, aggregated over all other attributes. For example, for the attribute age, the probability of sparing young characters is 0.49 (s.e. = 0.0008) greater than the probability of sparing older characters. The 95% confidence intervals of the means are omitted owing to their insignificant width, given the sample size (n = 35.2 million). For the number of characters (No. characters), effect sizes are shown for each number of additional characters (1 to 4; n1 = 1.52 million, n2 = 1.52 million, n3 = 1.52 million, n4 = 1.53 million); the effect size for two additional characters overlaps with the mean effect of the attribute. AV, autonomous vehicle. b, Relative advantage or penalty for each character, compared to an adult man or woman. For each character, ∆P is the difference between the probability of sparing this character (when presented alone) and the probability of sparing one adult man or woman (n = 1 million). For example, the probability of sparing a girl is 0.15 (s.e. = 0.003) higher than the probability of sparing an adult man or woman.
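The ∆P statistic in the caption is a difference of conditional sparing probabilities, aggregated over the other attributes. A minimal sketch of that calculation on made-up decision records follows; the field names and the toy data are assumptions, not the Moral Machine dataset or the paper's exact AMCE estimator (which also accounts for the experimental design):

```python
def sparing_probability(records, attribute, value):
    """Fraction of presented characters with the given attribute that were spared."""
    relevant = [r for r in records if r[attribute] == value]
    return sum(r["spared"] for r in relevant) / len(relevant)

def delta_p(records, attribute, right_value, left_value):
    """∆P: probability of sparing the right-side attribute minus the left-side one."""
    return (sparing_probability(records, attribute, right_value)
            - sparing_probability(records, attribute, left_value))

# Toy records: each row is one character in one presented dilemma.
records = [
    {"age": "young", "spared": 1}, {"age": "young", "spared": 1},
    {"age": "young", "spared": 0}, {"age": "old", "spared": 1},
    {"age": "old", "spared": 0},   {"age": "old", "spared": 0},
]
print(delta_p(records, "age", right_value="young", left_value="old"))  # ≈ 0.33
```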

SLIDE 20

Figure 1: Decision distribution in the Quantitative Greater Good module. The graph depicts, for every decision type in this module, how many participants decided one or the other way.

Anja Faulhaber, Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for ADVs

Quantitative Greater Good Module

SLIDE 21

Anja Faulhaber, Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for ADVs

Age-Considering Greater Good Module

Figure 2: Decision distribution in the Age-Considering Greater Good module. The graph depicts, for every decision type in this module, the percentage of participants who decided one or the other way. The left side shows purely age-considering decisions; the right side shows decisions about object height.

SLIDE 22

Anja Faulhaber, Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for ADVs

Influence of Context Module

Figure 3: The Influence of Context module. The graph depicts, for varying numbers of avatars, the fraction of decisions sacrificing the single avatar on the sidewalk or the group of avatars on the street. The left lane shows a sidewalk with the possibility to drive on; the right lane is a one-way street.

SLIDE 23

Figure 5: Interaction of Age and Context module. The graph depicts, for one or two adult avatars on the sidewalk and one or two child avatars on the street, the fraction of decisions sacrificing one or the other group. As in figures 3 and 4, the left lane shows a sidewalk with the possibility to drive on; the right lane is a one-way street.

Anja Faulhaber, Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for ADVs

Interaction of Age & Context Module

SLIDE 24

Decentralization: Proposal

§ (1) Privatizing risk
§ (2) Ethical Setting & Ethical Knobs (see the sketch after this list)
§ (3) Challenges (& Responses)

  • does not solve the third-party problem
  • does not solve the forensics problem
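The "ethical knob" idea is commonly described in the literature as a user-adjustable weight between the passenger's interests and third parties' interests. A minimal sketch of such a weighting follows; the knob semantics, harm estimates, and function names are illustrative assumptions, not the Article's proposal:

```python
def weighted_harm(risks, knob):
    """Combine harms under an 'ethical knob' setting.

    knob = 1.0: fully self-protective (only the passenger's harm counts);
    knob = 0.0: fully altruistic (only third parties' harm counts).
    """
    return knob * risks["passenger"] + (1.0 - knob) * risks["others"]

def choose_maneuver(maneuvers, knob):
    """Pick the maneuver with the lowest knob-weighted harm."""
    return min(maneuvers, key=lambda m: weighted_harm(maneuvers[m], knob))

# Assumed example maneuvers with rough harm estimates.
maneuvers = {
    "stay_in_lane": {"passenger": 0.05, "others": 0.8},
    "swerve":       {"passenger": 0.6,  "others": 0.1},
}
print(choose_maneuver(maneuvers, knob=0.9))  # self-protective setting -> stay_in_lane
print(choose_maneuver(maneuvers, knob=0.1))  # altruistic setting -> swerve
```

Whatever the setting, the third parties bearing part of the risk never turn the knob, which is one reading of the third-party problem listed above.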
SLIDE 25

Autonomous Vehicles Ethics & Law: A Machine Learning Trolley Problem?

Tabrez Y. Ebrahim

TEbrahim@cwsl.edu

Source: Inventor Spot

Q&A