Engineering moral agents: from human morality to artificial morality - PowerPoint PPT Presentation



SLIDE 1

Engineering moral agents: from human morality to artificial morality

Michael Fisher, Christian List, Marija Slavkovik, Alan Winfield

Dagstuhl, May/June 2016

SLIDE 2

Organisers

  • 1. Michael Fisher

Professor of Computer Science at the University of Liverpool, UK, and Director of the UK Network on the Verification and Validation

  • f Autonomous Systems.
  • 2. Christian List

Professor of Political Science and Philosophy, Departments of Government and Philosophy, London School of Economics, UK.

  • 3. Marija Slavkovik

Postdoctoral Research Fellow, Department of Information and Media Studies, University of Bergen, Norway.

  • 4. Alan Winfield

Professor of Electronic Engineering, UWE Bristol, Visiting Professor, Department of Electronics, University of York, and Director of the UWE Science Communication Unit, UK.

SLIDE 3

Autonomy Everywhere!

Systems with strong autonomy and intentionality are an imminent reality.

SLIDE 4

Autonomy Everywhere!

Systems with strong autonomy and intentionality are an imminent reality.

SLIDE 5

Autonomous Systems: Issues

Increasingly,

  • autonomous systems share their operational space with people;
  • these systems have control in many safety-critical situations;
  • autonomous systems have strong environment-manipulation capabilities; and yet
  • current solutions for operational safety, legality, and morality/ethics are inappropriate.

The key issues holding back the adoption and deployment of autonomous systems are rarely engineering issues, but are typically ethical, legal, and social.

SLIDE 6

Broad Range of Concerns — Examples

  • Ensuring the safety of the people sharing their space with autonomous robots.
  • Developing certification methods and operational standards for autonomous systems/devices.
  • Resolving issues with regard to legal responsibility and compliance with legal norms raised by the operation of autonomous systems.
  • The scope of responsibility of artificial agents for their actions.
  • Autonomous systems recognising and making morally ambiguous decisions in time-critical situations.
  • Preventing abuse of artificial agents by people for the purpose of accomplishing illegal and immoral goals.
SLIDE 7

Moral Philosophy

Developments in autonomous systems raise ethical/moral challenges. Moral philosophy provides a vast body of research, methods, and analysis, but it is centred around the idea that moral agents are human. Humans have many characteristics that artificial entities do not:

  • they are mortal;
  • emotional;
  • dependent on society;
  • born and raised by other people;
  • trained and motivated by their peers; and
  • fully autonomous and self-aware (though there is an argument to be had here).

So, lessons from moral philosophy may not be directly transferable to artificial agents.

SLIDE 8

All is not lost

When evaluating the ethics of humans, we are constrained to infer a person’s intentions from their statements and actions, or to trust in the honesty of their statements. However, with autonomous robots/agents:

  • we can see precisely how they are programmed and, in some cases, can expose the core intentions/deliberations used; and
  • we build these systems, and so we can engineer them to have exactly the ethical/moral behaviour that we want.

Consequently, we can potentially have much stronger enforcement of ethics and morality within such artificial agents.
SLIDE 9

Artificial Morality

There are two common approaches to artificial morality:

  • 1. Constraining the potentially immoral actions of the entity, typically by defining a set of rules (cf. Asimov’s “laws”). Both formalisation and conflict resolution are open problems.
  • 2. Training the entity to recognise and resolve morally questionable situations and actions, applying techniques such as machine learning to “teach” an artificial intentional entity to recognise morally uncertain situations and to resolve conflicts. Training is slow, resource-intensive, error-prone, and may have to be done anew for each different artificial entity.

Hybrid approaches combining both methods are also considered.
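The first, rule-based approach could be sketched as an ethical “governor” that filters a robot’s candidate actions against a prioritised set of prohibition rules. This is a minimal illustration only, assuming hypothetical rule names and action properties (it is not a real ethical theory), and it sidesteps the open conflict-resolution problem by simply forbidding any action that violates any rule:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EthicalRule:
    """A hypothetical prohibition rule: forbids an action when its predicate holds."""
    name: str
    priority: int  # lower number = more important, cf. Asimov's ordering
    violates: Callable[[Dict], bool]  # predicate over a proposed action's properties

def permitted_actions(actions: List[Dict], rules: List[EthicalRule]) -> List[Dict]:
    """Return only those candidate actions that violate no rule.

    Rules are checked in priority order, so the first violated rule
    could be logged to explain why an action was filtered out.
    """
    ordered = sorted(rules, key=lambda r: r.priority)
    return [a for a in actions
            if not any(rule.violates(a) for rule in ordered)]

# Illustrative rules and actions (hypothetical names and properties):
rules = [
    EthicalRule("no-harm-to-humans", 1, lambda a: a.get("harms_human", False)),
    EthicalRule("obey-operator", 2, lambda a: a.get("disobeys_order", False)),
]

candidates = [
    {"name": "swerve", "harms_human": False, "disobeys_order": True},
    {"name": "brake", "harms_human": False, "disobeys_order": False},
    {"name": "accelerate", "harms_human": True, "disobeys_order": False},
]

print([a["name"] for a in permitted_actions(candidates, rules)])  # ['brake']
```

Even this toy version makes the slide’s point concrete: formalising the predicates (when does an action “harm a human”?) and resolving conflicts between rules are exactly the open problems.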

SLIDE 10

Where Are We?

The hard research questions in artificial morality are yet to be identified, as the development of artificial morality is not purely an engineering problem or purely a philosophical one: it concerns robotics, computer science, philosophy, law, political science, and economic theory.

Philosophers need to educate/inform engineers and computer scientists about the broad range of possibilities in moral philosophy. Computer scientists and engineers need to educate/inform philosophers about autonomy, verification, and reasoning.

SLIDE 11

Aim of this Dagstuhl Seminar

To explore the critical hard research questions in artificial morality by promoting the interdisciplinary exchange of ideas on them. Artificial morality brings together many disciplines which have a vast amount of relevant knowledge and expertise, but which are often inaccessible to one another, and insufficiently develop their mutual synergies.

Researchers need to communicate to each other their experiences, research interests, and knowledge for artificial morality to move forward. Dagstuhl, being a place that fosters scientific cooperation and communication, is the ideal venue for achieving this goal.

SLIDE 12

Expectations

We expect the seminar to:

  • 1. Give a view of current research in machine morality from the AI side, and of relevant areas of philosophy from the moral-philosophy, action-theoretic, and social-scientific side.
  • 2. Bridge the computer science/humanities/social-science divide in the study of artificial morality.
  • 3. Identify central research questions/challenges concerning:
      • the definition and operationalisation of the concept of moral agency, as it applies to human and non-human systems;
      • the formalisation and algorithmization of ethical theories; and
      • the regulatory structures that govern the role of artificial agents and machines in our society.

SLIDE 13

Structure

Earlier in the week, we have a range of longer (“tutorial”) and shorter talks providing different perspectives on moral/ethical agents and their engineering. Later in the seminar, we propose that participants focus on discussing four central topics:

  • 1. Scope and context of moral concerns. Which AIs should be the subject of machine ethics?
  • 2. Formalising ethics and moral agency for the purpose of machine ethics.
  • 3. Implementing machine ethics.
  • 4. Validating/certifying/verifying the ethical behaviour of AIs.

It is possible that these topics will evolve/split/merge as the seminar proceeds.