

SLIDE 1

Autonomy, Intention, Verification

Michael Fisher University of Liverpool, United Kingdom

Dagstuhl, May/June 2016

SLIDE 2

Background

Professor of Computer Science

  • formal methods, autonomy, proof, programming languages

http://www.csc.liv.ac.uk/~michael

Director of the Centre for Autonomous Systems Technology

  • cross-disciplinary centre at the University of Liverpool
  • involving CS, Engineering, Electronics, Law, Psychology, . . .

http://www.liv.ac.uk/cast

Coordinator of the UK Network on the Verification and Validation of Autonomous Systems

  • funded by EPSRC
  • brings together formal verification, testing, user validation, etc

http://www.vavas.org

SLIDE 3

Interested in: Autonomous Systems

Autonomy: the ability of a system to make its own decisions and to act on its own, and to do both without direct human intervention.

Even within this, there are variations concerning decision-making (contrasted in the sketch below):

  • Automatic: involves a number of fixed, prescribed activities; there may be options, but these are generally fixed in advance.
  • Adaptive: improves its performance/activity based on feedback from the environment; typically developed using tight continuous control and optimisation, e.g. a feedback control system.
  • Autonomous: decisions made based on the system’s (belief about its) current situation at the time of the decision; the environment is still taken into account, but internal motivations/beliefs are important.
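To make the distinction concrete, here is a minimal sketch of the three styles applied to the same task, a room heater. It is purely illustrative; all names, rules, and numbers are invented, not taken from the talk.

```python
# Hypothetical sketch: one task, a room heater, handled in the three styles.
# All names, rules, and thresholds are invented for illustration.

def automatic(temp):
    # Automatic: a fixed, prescribed rule; any options are fixed in advance.
    return "heat_on" if temp < 20.0 else "heat_off"

def adaptive(temp, target, gain):
    # Adaptive: behaviour improves via feedback from the environment, as in
    # a feedback control system; the rule is fixed, its parameters are not.
    error = target - temp
    new_gain = gain + 0.1 * error      # tune the controller from feedback
    return new_gain * error, new_gain  # control output, updated gain

def autonomous(beliefs):
    # Autonomous: the decision depends on the system's beliefs about its
    # situation, including internal motivations, at the time of the choice.
    if beliefs["occupant_asleep"] and beliefs["goal"] == "save_energy":
        return "heat_off"              # internal motivation wins out
    return "heat_on" if beliefs["temp"] < 20.0 else "heat_off"
```

Only the autonomous version consults beliefs and motivations rather than just sensed values, which is why its decisions, and the reasons behind them, are the target of verification here.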

SLIDE 4

No Psychiatrists for Robots?

With an autonomous system we can (at least in principle) examine its internal programming and find out exactly

  • 1. what it is “thinking”,
  • 2. what choices it has, and
  • 3. why it decides to take particular ones.

. . . if A and B then C or D . . . repeat X until v > 55 . . .
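A hedged sketch of the point (no particular agent language; every identifier is invented): if the agent’s beliefs, options, and reasons are ordinary program data, all three questions can be answered by inspection, with no robot psychiatrist required.

```python
# Hypothetical sketch: an agent whose "mind" is ordinary program data.
# Beliefs, options, and the reason for the last choice can all be read
# off directly. Every name here is invented for illustration.

class InspectableAgent:
    def __init__(self):
        self.beliefs = {"human_nearby": True, "holding": "hammer"}  # 1. what it is "thinking"
        self.options = ["lower_gently", "release"]                  # 2. what choices it has
        self.last_reason = None                                     # 3. why it took one

    def decide(self):
        if self.beliefs["human_nearby"]:
            self.last_reason = "human nearby, so lower the load gently"
            return "lower_gently"
        self.last_reason = "no human nearby, so releasing is acceptable"
        return "release"

agent = InspectableAgent()
print(agent.decide(), "|", agent.last_reason)
```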

SLIDE 5

Verifiable Autonomy

Our approach is that we should be certain what the autonomous system intends to do and how it chooses to go about this

  • Feedback Control Systems: [dynamic] beliefs/assumptions
  • Rational Agent: [dynamic] intentions/motivations

A rational agent: must have explicit reasons for making the choices it does, and should be able to explain these if needed
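As a minimal sketch of that requirement, assuming nothing about the authors’ actual agent languages: every intention the agent adopts is stored together with an explicit reason, so an explanation can be produced on demand.

```python
# Hypothetical sketch: a rational agent that never adopts an intention
# without recording an explicit reason, so it can explain itself on demand.
# All names are invented; this mirrors the structure, not any real API.

class RationalAgent:
    def __init__(self):
        self.beliefs = set()      # [dynamic] beliefs/assumptions
        self.intentions = []      # [dynamic] intentions, each with a reason

    def adopt(self, intention, reason):
        self.intentions.append((intention, reason))   # stored as a pair

    def deliberate(self):
        if "fuel_low" in self.beliefs:
            self.adopt("land_at_nearest_airfield",
                       "belief 'fuel_low' held when the choice was made")

    def explain(self):
        return [f"intends '{i}' because {r}" for i, r in self.intentions]

agent = RationalAgent()
agent.beliefs.add("fuel_low")
agent.deliberate()
print("\n".join(agent.explain()))
```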

SLIDE 6

Example: from Pilot to Rational Agent

Autopilot can essentially fly an aircraft

  • keeping on a particular path,
  • keeping flight level/steady under environmental conditions,
  • planning routes around obstacles, etc.

Human pilot makes high-level decisions, such as

  • where to go to,
  • when to change route,
  • what to do in an emergency, etc.

Rational Agent now makes the decisions the pilot used to make.
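A hedged sketch of this division of labour (all function names and rules invented): the autopilot-level skills stay as conventional control code, and the rational agent only makes the choices the pilot used to make.

```python
# Hypothetical sketch: the autopilot's continuous skills remain ordinary
# control code; the rational agent only chooses among them, as the human
# pilot used to. All names, beliefs, and rules are invented.

def follow_route(route): ...     # keep on a particular path (control level)
def hold_level(): ...            # keep flight level/steady (control level)

def pilot_agent(beliefs):
    # High-level decisions formerly made by the human pilot.
    if beliefs.get("emergency"):
        return "land_at_nearest_airfield"   # what to do in an emergency
    if beliefs.get("storm_ahead"):
        return "replan_route"               # when to change route
    return "continue_to_destination"        # where to go

print(pilot_agent({"storm_ahead": True}))   # -> replan_route
```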

SLIDE 7

Why Do We Care?

Robot drops a hammer on a person’s head. Did it do this because

  • 1. its gripper control was not very good and the hammer fell?
  • 2. it intended to hit the human with the hammer?

SLIDE 8

Verification of Autonomous Systems

We verify the rational agent within the system’s architecture. Importantly, this allows us to verify the decisions the system makes, not its outcomes.

Autonomous System:

  • Rational Agent: decisions [high-level, discrete], e.g. reasoning, goal selection, prediction, cooperation, etc.
  • Control System: control [low-level, continuous], e.g. manipulation, path following, reaction, obstacle avoidance, etc.

In summary: We cannot prove what the system will achieve, since interactions with the real world are always uncertain, but we can prove what (and why) it will try to achieve.
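The authors’ own work model checks real agent programs (see the publications on slide 11); the following is only a toy sketch of the underlying idea, with invented rules: enumerate every belief state the agent could be in and check a property of the decision it makes in each, independent of what then happens in the world.

```python
# Hypothetical sketch of verifying decisions, not outcomes: enumerate every
# combination of beliefs the agent could hold and check that the *choice*
# it makes satisfies a property. All names and rules are invented.

from itertools import product

def decide(beliefs):
    # The (invented) decision rule under verification.
    if beliefs["human_nearby"] and beliefs["holding_hammer"]:
        return "lower_gently"
    return "proceed"

def verify():
    flags = ["human_nearby", "holding_hammer"]
    for values in product([False, True], repeat=len(flags)):
        beliefs = dict(zip(flags, values))
        choice = decide(beliefs)
        # Property: whenever a human is nearby and a hammer is held,
        # the agent never *chooses* anything but lowering it gently.
        if beliefs["human_nearby"] and beliefs["holding_hammer"]:
            assert choice == "lower_gently", (beliefs, choice)
    print("property holds in every belief state")

verify()
```

Whether the hammer is actually lowered safely depends on the control system and the environment; the check above establishes only what the agent will try to do, and why, exactly as the summary says.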

SLIDE 9

Uses

  • UAV certification
  • domestic robotic assistants — safety
  • autonomous vehicle platooning
  • formation-flying satellites
  • human-robot teamwork
  • ethical decisions

Aside: UK Network on the Verification and Validation of Autonomous Systems

  • funded by EPSRC (national research body)
  • brings together formal verification, testing, user validation, etc

http://www.vavas.org

SLIDE 10

Trust versus Privacy versus Legality/Ethics

Care-o-bot, Fraunhofer IPA www.robosafe.org

  • Orders: what if the person orders the robot to do something it thinks is detrimental to the person?
  • Privacy: the person orders the robot not to tell anyone else about the situation (e.g. doing something illegal/unethical).
  • Assessment: “don’t wake me up until 07:00”, but breathing/heart rate are very fast.
  • Trust: a robot disobeying direct orders will erode trust! (One way such conflicts can be resolved is sketched below.)
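As an illustration only (this is not the authors’ ethical-reasoning framework; all rules, priorities, and names are invented), one common way to resolve such conflicts is a strict priority ordering over concerns, which a verifier could then check:

```python
# Hypothetical sketch: resolving conflicting requirements by a strict
# priority order (safety over orders over privacy). Invented rules only.

PRIORITY = ["safety", "obey_orders", "privacy"]   # highest priority first

def decide(beliefs):
    # Each concern proposes an action, or None if it has nothing to say.
    proposals = {
        "safety": "wake_person_and_call_help"
                  if beliefs.get("heart_rate_critical") else None,
        "obey_orders": "stay_quiet_until_0700"
                       if beliefs.get("ordered_no_wake") else None,
        "privacy": "tell_no_one"
                   if beliefs.get("ordered_secrecy") else None,
    }
    for concern in PRIORITY:
        if proposals[concern]:
            return concern, proposals[concern]
    return "default", "idle"

# "don't wake me until 07:00", but vital signs are critical:
print(decide({"ordered_no_wake": True, "heart_rate_critical": True}))
# -> ('safety', 'wake_person_and_call_help'): safety outranks the order,
#    at a known cost to trust.
```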

SLIDE 11

Sample Relevant Publications

  • Dennis, Fisher, Aitken, Veres, Gao, Shaukat, Burroughes. Reconfigurable Autonomy. Künstliche Intelligenz 28(3):199-207, 2014.
  • Dennis, Fisher, Slavkovik, Webster. Formal Verification of Ethical Choices in Autonomous Systems. Robotics and Autonomous Systems 77:1-14, 2016.
  • Dennis, Fisher, Webster. Verifying Autonomous Systems. Communications of the ACM 56(9):84-93, 2013.
  • Dennis, Fisher, Lincoln, Lisitsa, Veres. Practical Verification of Decision-Making in Agent-Based Autonomous Systems. Automated Software Engineering 23(3):305-359, 2016.
  • Webster, Cameron, Fisher, Jump. Generating Certification Evidence for Autonomous Unmanned Aircraft Using Model Checking and Simulation. Journal of Aerospace Information Systems 11(5):258-279, 2014.
  • Webster, Dixon, Fisher, Salem, Saunders, Koay, Dautenhahn, Saez-Pons. Toward Reliable Autonomous Robotic Assistants Through Formal Verification: A Case Study. IEEE Transactions on Human-Machine Systems 46(2):186-196, 2016.