Ethical Design And Decision in Autonomous And Intelligent Systems



SLIDE 1

Ethical Design And Decision in Autonomous And Intelligent Systems

23/11/2017

Raja Chatila

Institute of Intelligent Systems and Robotics (ISIR), University Pierre and Marie Curie, Paris

Raja.Chatila@isir.upmc.fr
Chair, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

SLIDE 2

Booming Applications of Robotics and AI: Manufacturing, Transportation, Logistics, Agriculture, Mining, Construction, Health, Justice, Banking, Personal services, Leisure, Defense, Intervention, etc.

  • To replace humans
  • To assist and serve humans
  • To rehabilitate/augment humans


[Figure: the "Clinique du Risque" of UPMC - PSL. An evolving platform (patient data and literature) links patients and treating physicians to risk-computation algorithms and a database. A risk "precision" circuit (specialist consultations, complementary examinations) refines the risk level; a risk "reduction" circuit (therapeutic patient education, expert patients, e-coaching, connected devices) acts on it. Expected impact: scientific (big data, algorithms, reference guidelines), clinical (improved care), educational (risk semiology), economic (cost reduction).]
SLIDE 3

A Few Ethical, Legal, Social Issues Raised by A/IS

  • Impact on jobs
  • Personal data, privacy, intrusion, surveillance
  • Transparency, explainability of algorithmic decisions
  • Autonomous/learned machine decisions
  • Cognitive and affective bonds with robots
  • Human dignity, integrity and autonomy
  • Transformation and augmentation of humans
  • Anthropomorphism and human identity
  • Legal accountability and responsibility of robots
  • Status of robots in the human society
  • Specific robot applications and usage (AWS, Sexbots)
  • Fears about General/Super AI …


SLIDE 4

Ethical Concerns

  • (Un)Ethical usage of robots, AI and autonomous systems.
  • Ethics of machine decisions.
  • Ethically aligned design: ethics in research and engineering.


SLIDE 5

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

  • Launched April 2016
  • Mission: To ensure every stakeholder involved in the design and development of A/IS is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.
  • Brings together multiple and diverse voices from academia, industry and organizations in the A/IS and ELS communities and landscapes to identify and find consensus about ELS in the development and deployment of A/IS.
  • Version 1 of Ethically Aligned Design: December 2016.
  • Version 2 featuring five new sections: December 2017. Final version by 2019.
  • 11 standards proposals under discussion/development within IEEE-SA by Ad-Hoc working groups.


SLIDE 6

Ethically Aligned Design

A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems


Version 1

  • Released December 2016 as a Creative Commons doc / RFI for public input
  • Created by over 100 global AI/ethics experts, in a bottom-up, globally open and transparent process
  • Eight Committees / Sections
  • Contains over eighty key Issues and Candidate Recommendations
  • Designed as the "go-to" resource to help technologists and policy makers prioritize ethical considerations in AI/AS


SLIDE 7

Ethically Aligned Design

A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems


Version 2

  • Launching December 2017 as a Creative Commons doc / RFI for second round of public input
  • Created by over 250 global AI/ethics experts, in a bottom-up, transparent, open and increasingly globally inclusive process
  • Will incorporate over 200 pages of feedback from the public RFI and new Working Groups from China, Japan, Korea and Brazil
  • Thirteen Committees / Sections
  • Will contain over one hundred twenty key Issues and Candidate Recommendations
  • Designed as the "go-to" resource to help technologists and policy makers prioritize ethical considerations in AI/AS


SLIDE 8

Current Committees

  • General Principles
  • Personal Data and Individual Access Control
  • Embedding Values Into Autonomous Intelligent Systems
  • Methodologies to Guide Ethical Research and Design
  • Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
  • Reframing Autonomous Weapons Systems
  • Economics/Humanitarian Issues
  • Law
  • Affective Computing
  • Classical Ethics in Information & Communication Technologies
  • Policy
  • Mixed Reality
  • Wellbeing


SLIDE 9

IEEE-SA Standards Projects for Ethically Aligned Design


  • IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
  • IEEE P7001: Transparency of Autonomous Systems
  • IEEE P7002: Data Privacy Process
  • IEEE P7003: Algorithmic Bias Considerations
  • IEEE P7004: Standard on Child and Student Data Governance
  • IEEE P7005: Standard on Employer Data Governance
  • IEEE P7006: Standard on Personal Data AI Agent
  • IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
  • IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
  • IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
  • IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems


SLIDE 10

Embedding Values into Autonomous Systems


Example Issues:

  • Values to be embedded in A/IS are not all universal; some are specific to user communities and to tasks.
  • Moral overload: A/IS are usually subject to a multiplicity of norms and values that may conflict with each other.
  • A/IS can have built-in data or algorithmic biases that disadvantage members of certain groups.


SLIDE 11

Autonomy And Decision-making


SLIDE 12

What is Autonomy?

  • Autonomy: the ability of an agent to determine and achieve its actions by its own means. Relative to environment and tasks. Related to adaptation capacity.
  • Operational autonomy vs. decisional autonomy
  • Attainable autonomy is relative to task and environment complexity

[Figure: two plots of attainable autonomy as a function of complexity of the task and complexity of the environment.]


SLIDE 13

Examples of Autonomy

[Figure: examples ranging from teleoperation under human control to advanced automatic control, i.e. from operational autonomy to operational plus some decisional autonomy: Da Vinci, Roomba, Crusher (CMU), Navigation (LAAS).]

SLIDE 14

Automated Driving


Google

SLIDE 15

Automated Driving: Usual Situations and Moral Dilemmas


SLIDE 16
  • Retaking human control from a self-driving car takes ~10 s.

At 36 km/h the car would have moved an additional 10 m on its own.
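The unit conversion behind this figure can be checked in a few lines (a quick sketch; the 36 km/h value is the slide's):

```python
# Convert the slide's 36 km/h to m/s and get the distance the car
# covers for each second the human takeover takes.
speed_kmh = 36
speed_ms = speed_kmh * 1000 / 3600       # 10.0 m/s
metres_per_second_of_delay = speed_ms    # 10 m travelled per second of delay
print(speed_ms)                          # 10.0
```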


UK Department of Transport

SLIDE 17

Machine decisions

  • Machine decisions are based on knowledge and action possibilities.
  • Knowledge about the environment is acquired by sensing and through contextual information.
  • Knowledge is prone to be partial and uncertain.
  • Situations are dynamic.
  • Actions might have unexpected outcomes.


SLIDE 18

What does it mean for a machine to make ethical decisions?

  • Ethical decisions are related to human dignity and well-being.
  • Abstract concepts such as dignity cannot be explicitly described, taught to, or understood by machines.
  • Machines are not autonomous as humans are, because they cannot decide their purpose and their own goals. Therefore machines cannot determine ethical values.
  • Therefore machines cannot make ethical decisions, but they can perform actions with ethical consequences.
  • Machines can only select within a bounded set of categories or decisions provided to them directly or indirectly (e.g., through learning) by a human programmer.
  • Accountability remains with the human who programmed the machine.


SLIDE 19

Automated Driving: Usual Situations and Moral Dilemmas


SLIDE 20

Decision under Uncertainty

  • Situation assessment: identification of the current state and estimation of future states {S}: Pr(s)
  • Possible decisions: state-dependent actions {A}
  • Uncertain state transitions: Pr(s, a, s')
  • Each action and resulting state characterized by a value H(a, s) reflecting the estimated incurred harm


SLIDE 21

Ethical Approaches

  • Virtue Ethics: promoting the personal virtues and the "good life" (Plato, Aristotle). Agents must be virtuous for their decisions to be good.
  • Deontic Ethics: obey a moral imperative in all circumstances (I. Kant 1797).
  • Consequentialism, Utilitarianism: "The greatest good for the greatest number" (J. Bentham 1789, J. S. Mill 1861).
  • Casuistic approach.
  • Theory of Justice: protect the most vulnerable (J. Rawls 1971).


SLIDE 22

Theory of Justice

  • Justice as fairness
  • The "veil of ignorance": choose a decision-making system while ignoring how you will be affected by its decisions
  • Minimize harm for the most vulnerable


John Rawls 1921-2002


SLIDE 23


Best decision in state s: Π*(s) = argmin_a Σ_s' Pr(s, a, s') H(a, s')

"Rawlsian" decision: cause the least harm to the most vulnerable. The state is characterized by a vulnerability measure according to predefined categories and to the actual situation interpretation.
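The two rules above can be sketched as follows; the toy scenario, state names, harm values and vulnerability weights are illustrative assumptions, not figures from the talk:

```python
# Expected-harm rule from the slide and a "Rawlsian" alternative.
# transition(s, a) -> {s': Pr(s, a, s')}; harm(a, s') -> H(a, s').

def best_decision(state, actions, transition, harm):
    """Pi*(s) = argmin_a sum_{s'} Pr(s, a, s') * H(a, s')."""
    def expected_harm(a):
        return sum(p * harm(a, s2) for s2, p in transition(state, a).items())
    return min(actions, key=expected_harm)

def rawlsian_decision(state, actions, transition, harm, vulnerability):
    """Pick the action whose worst vulnerability-weighted harm is smallest."""
    def worst_case(a):
        return max(vulnerability(s2) * harm(a, s2) for s2 in transition(state, a))
    return min(actions, key=worst_case)

# Toy driving scenario (all numbers invented for illustration).
T = {"brake":  {"hit_car": 0.20, "safe": 0.80},
     "swerve": {"hit_pedestrian": 0.05, "safe": 0.95}}
H = {"hit_car": 5.0, "hit_pedestrian": 10.0, "safe": 0.0}
V = {"hit_car": 1.0, "hit_pedestrian": 3.0, "safe": 0.0}

actions = ["brake", "swerve"]
transition = lambda s, a: T[a]
harm = lambda a, s2: H[s2]
vulnerability = lambda s2: V[s2]

print(best_decision("s0", actions, transition, harm))                     # swerve
print(rawlsian_decision("s0", actions, transition, harm, vulnerability))  # brake
```

The contrast is the point: the expected-harm rule accepts a small probability of large harm to the pedestrian (0.05 × 10 = 0.5 < 1.0), while the Rawlsian rule weights harm toward the most vulnerable party and prefers braking.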

SLIDE 24

Conclusions Related to Autonomy

  • Moral choices and legal liability
  • Stakeholder consensus
  • Unbiased, transparent and traceable decision-making processes
  • Explainable decisions
  • Robot operation obedient to fundamental ethical principles, law and international humanitarian law
  • Rawlsian perspective: more contextual; reduce the highest degree of harm for the most vulnerable?
  • Disputable idea of an "electronic personality" for autonomous robots
  • Accountability must remain with the humans behind the system.


SLIDE 25

Issues related to social and companion robots: How close to us should robots be?

  • Respect privacy and intimacy
  • Frame cognitive and affective bonds with robots
  • Issue with expression of emotions: consider the impact on the development of children's emotional capacities
  • Respect human dignity, integrity and autonomy


SLIDE 26

Issues: Should we develop human-like robots?

  • Human identity and anthropomorphism: projection of humanity on machines? Robots should be distinguishable as such.
  • Robots' status in human society: equal to humans? Rights and duties? Respect human identity and dignity.
  • Specific robot applications and usage (e.g., sexbots)? How to frame questionable applications?
