SLIDE 1

Dagstuhl Seminar

Engineering Moral Agents

Kevin Baum (k.baum@uni-saarland.de) Saarland University

1

SLIDE 2
Background: Education

  • 2011: Bachelor of Science in Computer Science at Saarland University
  • 2013: Master of Science in Computer Science at Saarland University
  • 2014: Master of Arts in Philosophy at Saarland University
  • Since 2014: working on my PhD thesis (collective actions of unstructured groups as a source of problems in normative ethics, e.g. for consequentialism); research assistant at the professorship for practical philosophy (Prof. Ulla Wessels & Prof. Christoph Fehige)

SLIDE 3
Background: Involvement in Computer Ethics and Machine Ethics

  • Cooperation with Saarland University’s Department of Computer Science since 2015; (co-)lecturer of several interdisciplinary courses:
      • Seminar “Ethik für Nerds” (with Prof. Holger Hermanns)
      • Advanced Lecture “Ethics for Nerds” (with Prof. Holger Hermanns)
      • Seminar “Technological Singularity and the Control Problem”
      • Seminar “Extending Morals: Robot Ethics & Machine Ethics”

Practical experience

  • Launched a startup building middleware for adequate UIs for the Internet of Things, smart homes, and assisted living (MorphableUI) – basically, a social network for sensors.
  • Goal: easy to use (bridging one kind of digital divide), privacy-respecting (respecting Nissenbaum’s informational norms), with the user in control of their data.

SLIDE 4

Ethics for Nerds

Course units: 01 Philosophy & Ethics Basics · 02 Business and Professional Ethics · 03 Practices I · 04 Practices II · 05 Upcoming Topics and (maybe partially) SciFi

Philosophy & Ethics Basics – You’ll learn the basics:

  • The fields of Moral Philosophy
  • Normative Ethics 101: theories (Consequentialism, Kantianism, Virtue Ethics) and concepts (right, wrong, permissible, …)
  • Basics from Computer Ethics

Business and Professional Ethics – We’ll tackle questions like:

  • What are the personal responsibilities of computer scientists?
  • Do computer scientists need a Code of Ethics? What would an appropriate CoE look like?
  • What are the problems to be solved (e.g. voids of responsibility)?

Practices I – We’ll take a look at the world around us:

  • What is and what is not bad about: surveillance, privacy & anonymity breaches, Big Data, (white, grey, black hat) hacking, …

Practices II – Applying what we have learned to some practices and technologies, e.g. PRISM, CCTV, GPS-tracking mobile apps, fitness trackers, …

Upcoming Topics and (maybe partially) SciFi – Regarding some more theoretical or futuristic aspects of computer ethics:

  • Machine Ethics
  • RoboEthics

We’ll also take a look at the near future and emerging questions at the intersection of moral philosophy and computer science:

  • What is good and what is bad about autonomous driving?
  • Lethal Autonomous Weapons Systems (LAWS) – ban them for moral reasons?
  • How ought an autonomous car to ‘decide’ in moral dilemmas?

SLIDE 5

Current ME Research Interests

Asking “What is the right thing for an autonomous car to do in context C?” is not the same as asking “Which ethical theory is adequate to implement in an autonomous car?”:

  • This might help us duck the pressure to decide on ‘the correct ethical theory’ – something we cannot reasonably expect after all this time of ethical endeavor (as Kai said: “Give us another 2000 years!”).
  • How? By eliminating certain theories as options right from the start – without dismissing them as ethical theories as such.
  • How? For instance, there could be good consequentialist reasons not to implement cars as consequentialist ‘agents’, e.g. because:
      • nobody would buy a car that would kill its owner by crashing into a wall if this were the only alternative to killing two people who ran onto the street without properly checking for approaching cars;
      • at the same time, a world with only very few consequentialist cars on our streets might be worse than a world with many deontologist (that is, in light of consequentialism, ethically inadequate) or even (as Sjur argued) non-deliberating cars.

SLIDE 6

Current ME Research Interests

  • The connection between (e.g., Dancy’s) moral particularism and bottom-up approaches in ME
  • ‘Mirroring’/projecting the moral character of owners/users onto their machines as an approach to some aspects of ME
  • What is the real problem with implementing rules (top-down approaches)? Formulating the rule? Rule-following? Correct framing? Value detection? Resolving dilemmas?
  • Finding computationally feasible, algorithmic formulations of normative theories
  • The ‘Control Problem’ (Bostrom), value alignment, the importance of those aspects, and their connection to ME
  • Can we even really come up with algorithms for normative theories? Are they, in a certain sense, complete enough? Can we find useful (that is, applicable in the context of ME) approximations?

SLIDE 7

ME Roadmap Interests

ME experts – we need them, but why is there no study program for this? Specific interdisciplinary research questions and programs:

  • How can computer scientists and philosophers work together and learn from each other? And what can they learn?
  • First step: how can they learn to understand each other?

What happens without experts? Autonomous systems are coming, right?

https://m.academics.de/jobs/senior_scientist_m_w_artificial_intelligence_and_machine_ethics_127089.html