
Engineering Moral Agents Kevin Baum (k.baum@uni-saarland.de) - PowerPoint PPT Presentation



  1. Dagstuhl Seminar "Engineering Moral Agents". Kevin Baum (k.baum@uni-saarland.de), Saarland University

  2. Background
     Education:
      2011: Bachelor of Science in Computer Science at Saarland University
      2013: Master of Science in Computer Science at Saarland University
      2014: Master of Arts in Philosophy at Saarland University
      Since 2014: working on my PhD thesis (collective actions of unstructured groups as a source of problems in normative ethics, e.g. for consequentialism); research assistant at the professorship for practical philosophy (Prof. Ulla Wessels & Prof. Christoph Fehige)

  3. Background
     Involvement in Computer Ethics and Machine Ethics:
      Cooperation with Saarland University's department of Computer Science since 2015; (co-)lecturer of different interdisciplinary courses:
       Seminar "Ethik für Nerds" (with Prof. Holger Hermanns)
       Advanced Lecture "Ethics for Nerds" (with Prof. Holger Hermanns)
       Seminar "Technological Singularity and the Control Problem"
       Seminar "Extending Morals: Robot Ethics & Machine Ethics"
     Practical experience:
      Launched a startup building middleware for adequate UIs for the Internet of Things, smart homes, and assisted living (MorphableUI) – basically, a social network for sensors.
      Goal: easy to use (bridging one kind of digital divide), privacy-respecting (respecting Nissenbaum's informational norms), and the user controls their data

  4. Ethics for Nerds
     01 – Philosophy & Ethics Basics. You'll learn the basics:
      The fields of Moral Philosophy
      Normative Ethics 101: theories (Consequentialism, Kantianism, Virtue Ethics) and concepts (right, wrong, permissible, …)
      Basics from Computer Ethics
     02 – Practices I. We'll take a look at the world around us. What is and what is not bad about:
      Surveillance, privacy & anonymity breaches, Big Data, (white, grey, black hat) hacking, …
      Applying what we have learned to some practices and technologies, e.g. PRISM, CCTV, GPS-tracking mobile apps, fitness trackers, …
     03 – Business and Professional Ethics. We'll tackle questions like:
      What are the personal responsibilities of computer scientists?
      Do computer scientists need a Code of Ethics? What would an appropriate CoE look like?
      What are the problems to be solved (e.g. voids of responsibility)?
     04 – Practices II. We'll take a look at the near future and emerging questions at the intersection of moral philosophy and computer science:
      What is good and what is bad about autonomous driving?
      Lethal Autonomous Weapons Systems (LAWS) – ban them for moral reasons?
      How ought an autonomous car to 'decide' in moral dilemmas?
     05 – Upcoming Topics and (maybe partially) SciFi. Regarding somewhat more theoretical or futuristic aspects of computer ethics:
      Machine Ethics
      RoboEthics

  5. Current ME Research Interests
     Asking "What is the right thing to do for an autonomous car in context C?" is not the same as asking "Which is the ethically adequate theory to implement in an autonomous car?":
      This might help us duck the pressure to decide on 'the correct ethical theory' – something we cannot reasonably expect after all this time of ethical endeavor (as Kai said: "Give us another 2000 years!").
      How? By eliminating certain theories as options right from the start – without dismissing them as ethical theories as such.
      How? For instance, there could be good consequentialist reasons not to implement cars as consequentialist 'agents', e.g. because
       nobody would buy a car that would kill the owner by crashing into a wall if this is the only alternative to killing two people who run onto the street without properly checking for approaching cars;
       at the same time, a world with only very few consequentialist cars on our streets might be worse than a world with many deontologist cars (that is, in light of consequentialism, ethically inadequate ones) or even (as Sjur argued) non-deliberating cars.
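The first half of that argument can be made concrete with a toy sketch (not from the slides): a naive act-consequentialist choice rule applied to the wall-vs-pedestrians dilemma. The action names and casualty numbers are hypothetical assumptions for illustration only; the point is that such a rule, by construction, picks the owner-sacrificing option – exactly the outcome that buyers would plausibly refuse.

```python
# Toy sketch, illustrative only: a naive act-consequentialist choice rule.
# Action names and casualty numbers are hypothetical assumptions.

def consequentialist_choice(options):
    """Pick the action whose outcome has the fewest expected deaths."""
    return min(options, key=lambda o: o["expected_deaths"])

dilemma = [
    # Brake in a straight line: the two pedestrians on the street are hit.
    {"action": "brake_straight", "expected_deaths": 2},
    # Swerve into the wall: only the owner dies.
    {"action": "swerve_into_wall", "expected_deaths": 1},
]

chosen = consequentialist_choice(dilemma)
print(chosen["action"])  # prints "swerve_into_wall"
```

Under this (deliberately crude) utility measure, the rule sacrifices the owner whenever that minimizes total deaths – which is precisely the consequentialist reason, cited above, for not building cars as consequentialist 'agents'.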

  6. Current ME Research Interests
      The connection between (e.g., Dancy's) moral particularism and bottom-up approaches in ME
      'Mirroring'/projecting the moral character of owners/users onto their machines as an approach to some aspects of ME
      What is the real problem with implementing rules (top-down approaches)? The formulation of the rule? Rule-following? Correct framing? Value detection? Resolving dilemmas?
      Finding computationally feasible, algorithmic formulations of normative theories: Can we even really come up with algorithms for normative theories? Are they, in a certain sense, complete enough? Can we find useful (that is, applicable in the context of ME) approximations?
      The 'Control Problem' (Bostrom), value alignment, the importance of those aspects, and their connection to ME
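To make the top-down questions above tangible, here is a minimal sketch (an assumption, not a design endorsed by the slides) of what one rule implementation might look like: a deontological side constraint filters the options, and a consequentialist score breaks ties. Every predicate and field name is hypothetical; each piece – how the rule is formulated, how options are framed, how harm is estimated – is itself one of the open problems listed above.

```python
# Toy sketch of a 'top-down' hybrid rule. All predicates and fields are
# hypothetical assumptions used only to illustrate where the open
# problems (rule formulation, framing, value detection) would sit.

def permissible(option):
    """Deontological constraint, crudely formulated: never actively
    redirect harm onto an uninvolved party."""
    return not option.get("redirects_harm", False)

def choose(options):
    """Filter by the rule, then minimize expected harm among what
    remains. If the rule forbids everything, fall back to all options –
    itself a contestable framing decision."""
    allowed = [o for o in options if permissible(o)] or list(options)
    return min(allowed, key=lambda o: o["expected_harm"])

options = [
    {"action": "stay_course", "expected_harm": 2, "redirects_harm": False},
    {"action": "swerve_at_bystander", "expected_harm": 1, "redirects_harm": True},
]
print(choose(options)["action"])  # prints "stay_course"
```

Note that the harm-minimizing option is rejected here because the constraint fires first – whether that is the right verdict depends entirely on how the rule and the options were framed, which is exactly the worry raised above.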

  7. ME Roadmap Interests
      ME experts – we need them, but why is there no study program for this? What happens without experts? Autonomous systems are coming, right?
      Specific interdisciplinary research questions and programs:
       How can computer scientists and philosophers work with and learn from each other? And what can they learn?
       First step: How can they learn to understand each other?
     https://m.academics.de/jobs/senior_scientist_m_w_artificial_intelligence_and_machine_ethics_127089.html
