

SLIDE 1

Ethics of Robotics and AI

Moral Responsibility and Societal Challenges

Mark Coeckelbergh

Professor of Philosophy of Media and Technology University of Vienna mark.coeckelbergh@univie.ac.at || coeckelbergh.wordpress.com

SLIDE 2

PHILOSOPHY OF TECHNOLOGY

  • Philosophy
  • Interdisciplinarity
  • Policy

SLIDE 3

Robophilosophy 2018

SLIDE 4

SLIDE 5

PHILOSOPHY OF TECHNOLOGY

Thinking about and for technology, but also using technology to think about philosophical issues

SLIDE 6

PHILOSOPHY OF TECHNOLOGY

Focus: robots and AI

SLIDE 7

PHILOSOPHY OF TECHNOLOGY

  • Ethics
  • Philosophical anthropology
  • Epistemology
  • Aesthetics
  • …

SLIDE 8

WHAT IS THE HUMAN?

SLIDE 9

WHAT IS THE HUMAN?

Using robotics and AI to think about humans

SLIDE 10

WHAT IS THE HUMAN?

Negative anthropology: what the human is NOT

SLIDE 11

WHAT IS THE HUMAN?

Negative anthropology: what the human is NOT, e.g. not a chimp

SLIDE 12

WHAT IS THE HUMAN?

Negative anthropology: what the human is NOT, e.g. not a machine or more than a machine

SLIDE 13

WHAT IS THE HUMAN?

Positive anthropology: what the human is, e.g. a computational or informational being

SLIDE 14

TOWARDS AN ARTIFICIAL HUMAN?

Brain-based: enhancement and/or Robots and AI

SLIDE 15

CYBORGS

Merging of humans and machines

SLIDE 16

ROBOTS: HUMAN-LIKE

SLIDE 17

ROBOTS: NOT NECESSARILY HUMAN-LIKE

SLIDE 18

AI: SCIENCE FICTION

SLIDE 19

AI: IN YOUR POCKET

SLIDE 20

ETHICS!

SLIDE 21

SCIENCE FICTION ALARM

SLIDE 22

“AI is a fundamental existential risk for human civilization”

(Elon Musk)

SLIDE 23

“we humans are like small children playing with a bomb”

(Nick Bostrom)

SLIDE 24

“the Singularity is a future period during which the pace of technological change will be so fast and far-reaching that human existence on this planet will be irreversibly altered”

(Ray Kurzweil)

SLIDE 25

FRANKENSTEIN

SLIDE 26

ROMANTICISM

SLIDE 27

AGAINST ALARMISM

SLIDE 28

URGENT ISSUES NEAR FUTURE

SLIDE 29

INDUSTRY

SLIDE 30

DAILY LIFE

SLIDE 31

IN THE OFFICE

SLIDE 32

FINANCE

SLIDE 33

TRANSPORT

SLIDE 34

HEALTH CARE

SLIDE 35

MILITARY APPLICATIONS

SLIDE 36

DATA

SLIDE 37

ALL THINGS - EVERYWHERE

SLIDE 38

CHANGES TO OUR DAILY LIVES

SLIDE 39

ETHICAL AND LEGAL PROBLEMS

SLIDE 40

DEFINITION PROBLEMS

Problem for regulation:

  • Due to nature of new technologies: robots, AI, algorithms, code, smart tech, internet of things, ‘cyber-physical systems’ … ?

  • How autonomous, intelligent, etc.?
SLIDE 41

PRIVACY, SECURITY, SURVEILLANCE

  • The AI records what you do and transfers data… to whom? Company? Third party?
  • What if your robot gets hacked?

SLIDE 42

HEALTH

SLIDE 43

ADDICTION

SLIDE 44

REPLACEMENT, AUTONOMY, LOSS OF

AGENCY?

  • Robot/AI - human teams
  • Degrees of autonomy
  • Distributed agency
SLIDE 45

MORAL AND LEGAL RESPONSIBILITY

  • Who?
  • AI/robot as moral agent?
  • Legal questions
SLIDE 46

MORAL AND LEGAL RESPONSIBILITY

Examples

  • AI causes crash on financial markets
  • Machine harms worker in factory
  • Autonomous car drives into group of children
  • Care robot gives the wrong medication
  • Killer robot kills civilian
  • Child gets too attached to educational robot

SLIDE 47

MORAL AND LEGAL RESPONSIBILITY

Some problems

  • what about distributed responsibility?
  • how to make sure responsibility traces back to humans? human in control?
  • insurance?
  • regulation or a ban?
  • new legal instruments or not? (e.g. debate in European context about legal personhood for robots versus using existing liability law)

SLIDE 48

MORAL AND LEGAL RESPONSIBILITY

Some problems

  • acceptance:
    – accident and death more acceptable if there is a human agent, e.g. a human driver
    – why is automated flying acceptable and automated driving not?

SLIDE 49

MORAL AND LEGAL RESPONSIBILITY

  • gradations of automation
    – e.g. gradations of autonomous driving; there is already automation in existing cars:
      • Cruise control
      • Lane departure correction systems
      • Collision avoidance systems
      • Automated parking

>> how different are fully autonomous technologies, e.g. autonomous cars?
>> new legal framework needed?

SLIDE 50

MORAL AND LEGAL RESPONSIBILITY

Example: classification by the Society of Automotive Engineers (SAE), levels of self-driving:
  – Level 0: monitoring, warnings
  – Level 1: adaptive cruise control, automated parking
  – Level 2: automated driving, but driver must be alert and be able to take over at any time
  …
  – Level 5: no human intervention needed
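The graded scale above can be sketched as a simple lookup table. The descriptions below are abbreviated from the slide, not the full SAE J3016 definitions; levels 3 and 4 (elided on the slide) are paraphrased, and the helper function is only an illustration of where responsibility for monitoring sits.

```python
# Sketch of the SAE driving-automation scale (abbreviated descriptions,
# not the full SAE J3016 definitions).
SAE_LEVELS = {
    0: "monitoring and warnings only",
    1: "driver assistance, e.g. adaptive cruise control or automated parking",
    2: "partial automation: driver must stay alert and be able to take over",
    3: "conditional automation: driver takes over on request",
    4: "high automation within a limited operating domain",
    5: "full automation: no human intervention needed",
}

def driver_must_supervise(level: int) -> bool:
    """At levels 0-2 the human driver remains responsible for monitoring."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 2

print(driver_must_supervise(2))  # True
print(driver_must_supervise(5))  # False
```

The legal question on the previous slide maps directly onto this boundary: below level 3 the responsibility case for the human driver is straightforward; above it, it is not.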

SLIDE 51

MORAL AND LEGAL RESPONSIBILITY

Information and knowledge

  • Do users and operators understand the system and its limitations?
  • (Mis)information by manufacturers?

Important for discussions about liability and negligence
Difference with aviation, which is highly regulated and relatively safe

SLIDE 52

RESPONSIBILITY

Case: Fatal accident

  • Uber self-driving car in autonomous mode causes accident in Arizona: pedestrian dies (March 2018)
  • See also 2016 Tesla accident

SLIDE 53

RESPONSIBILITY

Case: Fatal accident

  • Who is responsible? Volvo? Uber? Vehicle operator/driver? Pedestrian? State of Arizona? Problem of “many hands”
  • Draw on tort law: Uber/driver failed to exercise reasonable care
  • Draw on product liability law: Volvo and Uber
  • Conduct of pedestrian: accident avoidable?
  • State of Arizona: sufficient regulation? E.g. one could require someone to be in the driver seat – but enough?

SLIDE 54

RESPONSIBILITY

Case: Fatal accident

  • Civil proceedings versus criminal law (but robots/AI cannot be charged with a crime)
  • Need for better technology and more regulation? Or a ban? Or self-regulation by private companies (laissez-faire)? Too early or too late?

SLIDE 55

MORAL STATUS OF AIS/ROBOTS

Moral agents?

  • What capacities are needed for moral judgment? Also emotions?
  • Rules enough?
  • Too anthropocentric?
SLIDE 56

MORAL STATUS OF AIS/ROBOTS

Moral patients?

  • Thing or more than that?
  • Machine as (quasi-)other?
  • Vulnerability of humans versus machines
SLIDE 57

SLIDE 58

MORAL STATUS OF AIS/ROBOTS

Philosophically interesting, but also practical issue?

SLIDE 59

TECHNOLOGY CHANGES MORALITY

  • Privacy today
  • How will AI and robotics change our values?

SLIDE 60

VULNERABLE USERS, ATTACHMENT AND DECEPTION

SLIDE 61

SAFETY

SLIDE 62

HUMAN DIGNITY AND AUTONOMY

SLIDE 63

ADAPTING TOO MUCH?

Do we want to adapt to robots or should robots adapt to us?

SLIDE 64

MORAL DISTANCE

SLIDE 65

MORAL DISTANCE

SLIDE 66

MORAL DISTANCE

SLIDE 67

SOCIETAL IMPLICATIONS

  • Justice, fairness, power
  • Inclusive society?
  • Biased and non-transparent algorithms >>
  • Social relations, e.g. intimate relations
  • Sustainable economy?
  • Future of work >>
SLIDE 68

THE FUTURE OF WORK

  • Replacement?
  • Working conditions and experience of work?
  • Delegation and distribution of tasks?

SLIDE 69

SLIDE 70

BIASED ALGORITHMS

  • Problem in machine learning: AI trains on a dataset that may contain a bias (e.g. favors young white men)
  • Problem of the algorithm or of society, or both? How to deal with this?
  • Right to non-discrimination
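A minimal sketch of how such a dataset bias can be made explicit before training: compare outcomes per group in the historical data. The groups, numbers, and the "selection rate" measure here are invented for illustration, not taken from the slides.

```python
# Sketch: surfacing bias in training data by comparing selection rates
# per group. The records and group labels are made up for illustration.
from collections import Counter

# (group, was_hired) pairs standing in for a historical training set.
records = [
    ("young_white_men", True), ("young_white_men", True),
    ("young_white_men", True), ("young_white_men", False),
    ("everyone_else", True), ("everyone_else", False),
    ("everyone_else", False), ("everyone_else", False),
]

def selection_rates(records):
    """Fraction of positive outcomes per group in the data."""
    hired, total = Counter(), Counter()
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired  # bool counts as 0 or 1
    return {g: hired[g] / total[g] for g in total}

rates = selection_rates(records)
# Gap between the most and least favored group: one simple bias signal.
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'young_white_men': 0.75, 'everyone_else': 0.25}
print(gap)    # 0.5
```

A model trained on these records would learn the 0.5 gap as if it were ground truth, which is the slide's point: the bias is in the data before it is in the algorithm.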
SLIDE 71

BIASED ALGORITHMS

  • Is bias avoidable? No, but we can explicitly discuss, analyze, and intervene (kind of bias, degree of bias)
  • Algorithms teach us something about our societies (see also digital humanities: use AI!)

SLIDE 72

NON-TRANSPARENT ALGORITHMS

  • Problem with new approaches to AI: the decision AI/algorithm is a black box; I am affected by its decision but do not know how it came to its decision
  • Right to be informed, “Right to Explanation of Automated Decision Making” (Wachter et al. 2017) – but is that possible?
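The black-box problem can be made concrete: without access to the internals, all an affected person can do is probe the system from outside. A toy sketch, where `opaque_model` and its inputs are invented stand-ins; this is a crude local sensitivity probe, not a claim about what a legal "right to explanation" would require.

```python
# Sketch: probing a black-box decision system from the outside.
# `opaque_model` is a made-up stand-in for an algorithm whose internals
# the affected person cannot inspect.
def opaque_model(income: float, age: float) -> bool:
    """Pretend this is hidden: some scored loan decision."""
    return income * 0.7 + age * 0.3 > 50

def local_probe(model, income, age, step=1.0):
    """Nudge each input upward and see whether the decision flips.
    A local, model-agnostic probe: it says something about this one
    case, not about how the model decides in general."""
    base = model(income, age)
    flips = {
        "income": model(income + step, age) != base,
        "age": model(income, age + step) != base,
    }
    return base, flips

decision, flips = local_probe(opaque_model, income=70.0, age=2.0)
print(decision)  # False: denied
print(flips)     # {'income': True, 'age': False}: a small income increase would flip it
```

Even this best case only yields "a slightly higher income would have changed the outcome", which illustrates why a full explanation of automated decision making may not be attainable.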

SLIDE 73

TRUST AND TRANSPARENCY

  • Trust in system (technology: reliability) vs trust in people (also emotions)
  • Transparency of data, process, organisation: again, it depends on people

SLIDE 74

GENDER ISSUES WITH DESIGN

SLIDE 75

SLIDE 76

GENDER ISSUES AND HUMAN RELATIONSHIPS

SLIDE 77

November 10, 2017 mark.coeckelbergh@univie.ac.at 77

SLIDE 78

ETHICS: APPROACH

  • Bottom up
  • Pro-active
  • Global
  • Positive
SLIDE 79

Ethical & legal theory and principles

Experience – Practices

SLIDE 80

SLIDE 81

Ethical & legal theory and principles

Experience – Practices

SLIDE 82

ETHICS AND REGULATION: LET’S TRY TO BE PRO-ACTIVE

SLIDE 83

ETHICS: HOW NOT TO DO IT

SLIDE 84

ETHICS: PRO-ACTIVE IN RESEARCH AND INNOVATION

  • Regulation: needed, but always too late?
  • Work also through standards, see IEEE
  • Certification
SLIDE 85

GLOBAL ACTION NEEDED

  • Due to nature of new technologies
  • Do we have suitable institutions for this?

SLIDE 86

ALSO NON-GOVERNMENTAL ACTORS!

SLIDE 87

POSITIVE: ETHICS AND THE GOOD LIFE

  • Not just constraints and what not to do, but also what to do and how to live (good life, virtue, community/society)

SLIDE 88

EXPLORE NEW POSSIBILITIES

  • New experiential and action possibilities

  • Not only in the West
SLIDE 89

INNOVATION, DESIGN, ART

  • Imagination needed
SLIDE 90

Policy needed

Everyone affected, need for vision and policy NOW

SLIDE 91

“It’s the principles, stupid”

SLIDE 92

No, it’s not only about principles, values, norms, theory, etc. The challenge is to change technological practices (design, innovation, and use); principles, theory, etc. are instruments to do that

SLIDE 93

reflecting on experience

SLIDE 94

What to do?

Usually ethics focuses on what (not) to do, but often we agree on what (not) to do; there are also other questions:

  • Who does what?
  • How to do things (best)?

>> practical wisdom

SLIDE 95

What to do?

Morality: constraints, red lines, sanctions
Ethics: the good life, the best life

SLIDE 96

Who and how?

How can we work together to ensure that AI and robotics will contribute to a future we want?
Also think about PROCESS
Experts, citizens, and mediators needed

SLIDE 97

Who and how?

Role of researchers and governmental, intergovernmental, and non-governmental organisations/civil society includes: raise awareness and bring people together, initiate new processes: HOW can we reach these goals?

SLIDE 98

Who and how?

Power differences (e.g. big companies versus individual citizens)
Cultural differences (global, Europe)

SLIDE 99

SOME BARRIERS

  • Lack of sufficient transdisciplinary expertise
  • Lack of connections academia – policy makers, and short-term views
  • Insufficient institutional support for more participatory decision making
  • Not taking into account lessons learnt, re-inventing the wheel

SLIDE 100

ADDRESS PROBLEMS

  • More support for transdisciplinary research
  • Further institutionalize links academia – policy makers and make room for development of long-term vision
  • Collaborate with other, non-governmental and non-academic actors in society
  • More studies taking into account work already done, including work in the areas of philosophy of technology and robot ethics

SLIDE 101

THE FUTURE OF AI (& INFORMATICS)

  • Beyond fear
  • Ethical
  • Interdisciplinary, incl. humanities
  • Connected to wider society
  • Europe: expertise in tech ethics

SLIDE 102

THE FUTURE OF AI (& INFORMATICS)

The future of AI will be ethical or it will not be.

SLIDE 103

Thanks!

Mark Coeckelbergh

Professor of Philosophy of Media and Technology University of Vienna mark.coeckelbergh@univie.ac.at || coeckelbergh.wordpress.com