"Open the Pod Bay Doors, HAL": Machine Intelligence and the Law - PowerPoint PPT Presentation



SLIDE 1

"Open the Pod Bay Doors, HAL": Machine Intelligence and the Law

Suggested hashtag for Twitter users: #LSEMurray

LSE Law Matters Inaugural Lecture Professor Andrew Murray

Professor of Law, LSE

Professor Julia Black

Chair, LSE

SLIDE 2

Professor Andrew Murray

Open the Pod Bay Doors, HAL: Machine Intelligence and the Law

SLIDE 3

Part I

SLIDE 4

Humans are “meat” machines

SLIDE 5

The Dress

SLIDE 6

Higher/Lower Order Thought: System I and System II

Multiply 12 x 6

Multiply 16 x 47

Multiply 417 x 514

SLIDE 7

Outsourcing System 2

Brains at the Ready

Who won the 2014 Eurovision Song Contest?

Conchita Wurst

Brains at the Ready II

Smartphones Allowed…

Who won the 1972 Eurovision Song Contest?

Vicky Leandros

(Representing Luxembourg) Après Toi

SLIDE 8

Assisted Decision-Making

Click here for the relevant video

SLIDE 9

Supplementary Decision-Making

Click here for the relevant video

SLIDE 10

Autonomous Decision-Making

Click here for the relevant video

SLIDE 11

Part II

SLIDE 12

How Machines Think (or Don’t)

Machines (currently) don’t think; they process.

SLIDE 13

Law for Machines?

Handbook of Robotics, 56th Edition, 2058 A.D.

  • 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  • 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

SLIDE 14

The Moral Maze: The Trolley Problem

SLIDE 15

Open the Pod Bay Doors, HAL

Is HAL morally or legally wrong?

Click here for the relevant video

SLIDE 16

That’s Science Fiction, Right?

SLIDE 17

Watson

SLIDE 18

Taranis

Click here for the relevant video

SLIDE 19

Smart Agents and Safety

Fatal Air Accidents by Cause (% of accidents)

Cause                1950s  1960s  1970s  1980s  1990s  2000s  All
Human Error           61%    63%    53%    52%    63%    63%   59%
Weather               15%    12%    14%    14%     8%     6%   12%
Mechanical Failure    19%    19%    20%    21%    18%    22%   20%
Sabotage/Others        5%     6%    13%    13%    11%     9%    9%

Physician Heal Thyself

                     Prevalence   Human Error
Anaesthesia          0.0365%      82%
Surgeon Action       0.9%         58-79%
Overall Mortality    1.85%        29-37%

Preventable Adverse Effects (US data): 210,000 deaths per annum

Driver’s Ed.

Study              Human Factors  Environment  Vehicle  Human Only
Tri-Level (1979)        93%           34%        13%       N/A
TRRL (1980)             95%           28%        8.5%      65%
IAM (2009)             >90%           15%        1.9%      N/A
NHTSA (2015)            94%            2%         2%       N/A

SLIDE 20
SLIDE 21

A Quick Recap

1. Humans remain the only source of the form of higher-order sentience that allows us to make complex moral decisions.
2. Humans, perhaps uniquely in the animal world, can rationalise objective and subjective thought.
3. Human brains are complex but resource-hungry; as a result we often reject resource-heavy higher-order thought in favour of lower-level intuitive thought.
4. Humans have a capacity to outsource anything complex, difficult, dangerous or time-consuming.
5. We are developing machines which are capable of complex thought and creativity.
6. We are developing machines designed to act autonomously.
7. Human-Level Machine Intelligence could be as little as 14 years away (or as far away as 75 years).
8. It is perfectly logical to suggest that there should be an assumption that machines should replace humans in all areas where human error remains a constituent factor in harmful outcomes.

SLIDE 22

Sentience in the Law

SLIDE 23

Sentience in Punishment

SLIDE 24

The Challenge of Machine Sentience

A new legal concept: Objective Personality?

Objective Privacy, Objective Expression, Objective Location, Objective Consent, Objective Mens Rea?

SLIDE 25

The Lawmaker’s Dilemma

Fail to Recognise Machine Sentience:

  • Fail to Recognise Change in Human Thought
  • Create Permanent Underclass

Recognise Machine Sentience:

  • Entire Legal Framework Needs Updating
  • Could Remove Responsibility from Human Agents
  • Gives Autonomy to Man-made (Artificial) Devices
  • A Modern Slave?

SLIDE 26

The Lawmaker’s Solution?

Ambient Law

(Asimov’s) Fourth and Fifth Laws

  • A robot must establish its identity as a robot in all cases.

  • A robot must know it is a robot.

Lex Machina: a Legal/Code Hybrid for both Humans and AIs

“Code is Law”

SLIDE 27

1. A self-aware being (human or robot) may not harm any class of self-aware beings, or, by inaction, allow any class of self-aware beings to come to harm.
2. A self-aware being (human or robot) may not injure a self-aware being or, through inaction, allow a self-aware being to come to harm.
3. A self-aware being (human or robot) must obey the Law except where such provisions would conflict with the First and Second Values.
4. A robot should protect its own existence as long as such protection does not conflict with the First, Second or Third Values.
5. A robot must know it is a robot. A human must know they are human.
6. A robot must establish its identity as a robot in all cases. A human must establish its identity as a human in all cases.

Lex Machina’s Normative Values (from Asimov)

SLIDE 28
SLIDE 29

"Open the Pod Bay Doors, HAL": Machine Intelligence and the Law

Suggested hashtag for Twitter users: #LSEMurray

LSE Law Matters Inaugural Lecture Professor Andrew Murray

Professor of Law, LSE

Professor Julia Black

Chair, LSE