Verification and Validation for Safety in Robots


SLIDE 1

Kerstin Eder

Design Automation and Verification, Trustworthy Systems Laboratory, Bristol Robotics Laboratory

Verification and Validation for Safety in Robots

SLIDE 2

To develop techniques and methodologies that can be used to design autonomous intelligent systems that are demonstrably trustworthy.

SLIDE 3

Correctness from specification to implementation:

User Requirements → High-level Specification → Design and Analysis (Simulink, Optimizer) → translate / implement → Controller (SW/HW), e.g. C, C++, RTL (VHDL/Verilog)

SLIDE 4

What can be done at the code level?

• P. Trojanek and K. Eder. Verification and testing of mobile robot navigation algorithms: A case study in SPARK. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1489-1494, Sep 2014. http://dx.doi.org/10.1109/IROS.2014.6942753

SLIDE 5

What can be done at the code level?

• P. Trojanek and K. Eder. Verification and testing of mobile robot navigation algorithms: A case study in SPARK. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1489-1494, Sep 2014. http://dx.doi.org/10.1109/IROS.2014.6942753

Navigation algorithms are fundamental for mobile robots. While the correctness of the algorithms is important, it is equally important that they do not fail because of bugs in their implementation.

SLIDE 6

What can go wrong in robot navigation software?

Generic bugs:

§ Array and vector out-of-bounds accesses
§ Null pointer dereferencing
§ Accesses to uninitialized data

Domain-specific bugs:

§ Integer and floating-point arithmetic errors
§ Mathematical function domain errors
§ Dynamic memory allocation and blocking inter-thread communication (not real-time safe)
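To make two of these bug classes concrete, here is a minimal Python sketch (invented for illustration; the case-study code is C/C++ and SPARK) of an out-of-bounds access and a math-domain error, the kinds of run-time fault the talk is concerned with:

```python
import math

ranges = [0.9, 1.2, 0.7]  # hypothetical laser scan with 3 beams

def bearing_to_index(bearing_deg: float) -> int:
    # Bug: no bounds check; a bearing of 180 maps to index 3,
    # one past the end of the list (out-of-bounds access).
    return int(bearing_deg / 60)

def distance_metric(a: float, b: float) -> float:
    # Bug: if b > a this calls sqrt on a negative number,
    # a math-domain error that raises ValueError at run time.
    return math.sqrt(a - b)

print(ranges[bearing_to_index(120)])   # fine: index 2
# print(ranges[bearing_to_index(180)]) # IndexError: list index out of range
# print(distance_metric(0.7, 1.2))     # ValueError: math domain error
```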

SLIDE 7

Verification Approach

State-of-the-art verification approaches:

§ Model checking: infeasible
§ Static analysis of C++: not possible
§ Static analysis of C: requires verbose and difficult-to-maintain annotations

Our “Design for Verification” approach:

§ SPARK, a verifiable subset of Ada
§ No memory allocation, pointers, or concurrency
§ Required code modifications (sketched below):
  § Pre- and post-conditions, loop (in)variants
  § Numeric subtypes (e.g. Positive)
  § Formal data containers
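The slides show no code, so here is a hypothetical Python analogue of these contract-style modifications. SPARK expresses them as Pre/Post aspects, loop invariants and numeric subtypes that are proved statically; the sketch only mimics the idea with run-time assertions:

```python
def clamp_speed(speed: float, limit: float) -> float:
    """Clamp a commanded speed to [-limit, limit]."""
    # Precondition (in SPARK: a Pre aspect, discharged by the prover).
    assert limit > 0.0, "pre: limit must be positive"
    result = max(-limit, min(speed, limit))
    # Postcondition (in SPARK: a Post aspect).
    assert -limit <= result <= limit, "post: result stays within bounds"
    return result

def widest_gap(widths: list[float]) -> int:
    # Numeric-subtype idea (SPARK's Positive): reject empty input up front.
    assert len(widths) > 0, "pre: at least one candidate gap"
    best = 0
    for i in range(1, len(widths)):
        # Loop invariant: best always indexes the widest gap seen so far.
        assert 0 <= best < len(widths)
        if widths[i] > widths[best]:
            best = i
    return best
```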

SLIDE 8

Results

§ Three open-source implementations of navigation algorithms translated from C/C++ (2.7 kSLOC) to SPARK (3.5 kSLOC)

  • VFH+ (Vector Field Histogram)
  • ND (Nearness Diagram)
  • SND (Smooth Nearness-Diagram) navigation
  • Explicit annotations are less than 5% of the code
  • SPARK code is on average 30% longer than C/C++

§ Several bugs discovered by run-time checks injected by the Ada compiler

• Fixed code proved to be run-time safe,
• except for floating-point over- and underflows.
• These require complementary techniques, e.g. abstract interpretation.

§ Up to 97% of the verification conditions discharged automatically by SMT solvers in less than 10 minutes
§ Performance of the SPARK and C/C++ code is similar

SLIDE 9

Moral


If you want to make runtime errors an issue of the past, then you must select your tools (programming language and development environment) wisely!

https://rclutz.wordpress.com/2016/09/23/hammer-and-nail/

SLIDE 10

http://github.com/riveras/spark-navigation

• P. Trojanek and K. Eder. Verification and testing of mobile robot navigation algorithms: A case study in SPARK. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1489-1494, Sep 2014. http://dx.doi.org/10.1109/IROS.2014.6942753

SLIDE 11

Correctness from specification to implementation:

User Requirements → High-level Specification → Design and Analysis (Simulink, Optimizer) → translate / implement → Controller (SW/HW), e.g. C, C++, RTL (VHDL/Verilog)

SLIDE 12

Correctness from specification to implementation:

User Requirements → High-level Specification → Design and Analysis (Simulink, Optimizer) → translate / implement → Controller (SW/HW), e.g. C, C++, RTL (VHDL/Verilog)

SLIDE 13

What can be done at the design level?

• D. Araiza-Illan, K. Eder, A. Richards. Formal Verification of Control Systems’ Properties with Theorem Proving. International Conference on Control (CONTROL), pp. 244-249. IEEE, Jul 2014. http://dx.doi.org/10.1109/CONTROL.2014.6915147

• D. Araiza-Illan, K. Eder, A. Richards. Verification of Control Systems Implemented in Simulink with Assertion Checks and Theorem Proving: A Case Study. European Control Conference (ECC), pp. 2670-2675. Jul 2015. http://arxiv.org/abs/1505.05699

SLIDE 14

Simulink in Control System Design

It is important to distinguish design flaws from coding bugs.

§ Analysis techniques from control systems theory (e.g., stability)
§ Serve as requirements/specification
§ For (automatic) code generation

[Diagram: control-systems design level vs. implementation level (code)]

SLIDE 15

Verifying Stability

§ Stability: matrix P > 0 (Lyapunov function)
§ Lyapunov function's difference: matrix P - (A-BK)^T P (A-BK) > 0
§ Equivalence (application of Lyapunov's equation):

V(k) - V(k-1) = x(k-1)^T [(A-BK)^T P (A-BK) - P] x(k-1)

§ Capture control-system requirements, add them as assertions, and retain them in the code implementation (see the sketch below)
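As a numeric illustration of these assertions, here is a minimal numpy/scipy sketch; the matrices A, B, K are invented for the example, since the slides give no concrete values:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical closed-loop system x(k) = (A - B*K) x(k-1).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[2.0, 3.0]])
Acl = A - B @ K  # closed-loop dynamics A - B*K

# Solve the discrete Lyapunov equation Acl^T P Acl - P = -Q for P.
Q = np.eye(2)
P = solve_discrete_lyapunov(Acl.T, Q)

# The two assertions from the slide:
assert np.all(np.linalg.eigvalsh(P) > 0)        # P > 0 (Lyapunov function)
D = P - Acl.T @ P @ Acl
assert np.all(np.linalg.eigvalsh(D) > 0)        # P - (A-BK)^T P (A-BK) > 0

# Equivalence: V(k) - V(k-1) = x(k-1)^T [(A-BK)^T P (A-BK) - P] x(k-1).
x = np.array([[1.0], [-0.5]])
lhs = (Acl @ x).T @ P @ (Acl @ x) - x.T @ P @ x
rhs = x.T @ (Acl.T @ P @ Acl - P) @ x
assert np.allclose(lhs, rhs)
```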

SLIDE 16

Assertion-Based Verification

SLIDE 17

Combining Verification Techniques

§ Stability: matrix P > 0 (Lyapunov function)
§ Lyapunov function's difference: matrix P - (A-BK)^T P (A-BK) > 0
§ Equivalence (application of Lyapunov's equation):

V(k) - V(k-1) = x(k-1)^T [(A-BK)^T P (A-BK) - P] x(k-1)

§ Test in simulation
§ Automatic theorem proving: formalize a logic theory of the Simulink diagram

Axiom: Bu = B * u … Goal: vdiff == vdiff_an

A prover sketch of this axioms-to-goal step follows below.
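Here is a hedged sketch of that formalization using the z3 Python bindings; the slides do not name a prover, and the diagram's signals are guessed from the axiom and goal shown:

```python
from z3 import Reals, Solver, unsat

# Signals of a hypothetical scalar Simulink diagram; names follow the
# slide's axiom "Bu = B * u" and goal "vdiff == vdiff_an".
B, u, Bu, Ax, vdiff, vdiff_an = Reals('B u Bu Ax vdiff vdiff_an')

s = Solver()
s.add(Bu == B * u)             # axiom for the Gain block
s.add(vdiff == Ax + Bu)        # axiom for the Sum block
s.add(vdiff_an == Ax + B * u)  # analytically derived expression

# Goal: vdiff == vdiff_an, proved by showing its negation is unsatisfiable.
s.add(vdiff != vdiff_an)
assert s.check() == unsat
print("goal vdiff == vdiff_an proved")
```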

SLIDE 18

Combining Verification Techniques

§ Stability: matrix P > 0 (Lyapunov function)
§ Lyapunov function's difference: matrix P - (A-BK)^T P (A-BK) > 0
§ Equivalence: V(k) - V(k-1) = x(k-1)^T [(A-BK)^T P (A-BK) - P] x(k-1)

§ Test in simulation
§ Automatic theorem proving: first-order logic theory of the Simulink diagram

Axiom: Bu = B * u … Goal: vdiff == vdiff_an

SLIDE 19

Moral


No single technique is adequate to cover a whole design in practice. Combine techniques and learn from areas where verification is more mature.

SLIDE 20

http://github.com/riveras/simulink

• D. Araiza-Illan, K. Eder, A. Richards. Formal Verification of Control Systems’ Properties with Theorem Proving. International Conference on Control (CONTROL), pp. 244-249. IEEE, Jul 2014. http://dx.doi.org/10.1109/CONTROL.2014.6915147

• D. Araiza-Illan, K. Eder, A. Richards. Verification of Control Systems Implemented in Simulink with Assertion Checks and Theorem Proving: A Case Study. European Control Conference (ECC), pp. 2670-2675. Jul 2015. http://arxiv.org/abs/1505.05699

SLIDE 21

What can be done to increase the productivity of simulation-based testing?

• D. Araiza-Illan, D. Western, A. Pipe, and K. Eder, “Coverage-Driven Verification: An Approach to Verify Code for Robots that Directly Interact with Humans,” in Haifa Verification Conference, Haifa, Israel, 2015. http://link.springer.com/chapter/10.1007/978-3-319-26287-1_5

• D. Araiza-Illan, D. Western, A. G. Pipe, and K. Eder, “Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions,” in Towards Autonomous Robotic Systems (TAROS), Jun. 2016. http://link.springer.com/chapter/10.1007/978-3-319-40379-3_3

• D. Araiza-Illan, A. G. Pipe, and K. Eder, “Intelligent Agent-Based Stimulation for Testing Robotic Software in Human-Robot Interactions,” in Third Workshop on Model-Driven Robot Software Engineering (MORSE), Dresden, Germany, 2016. http://arxiv.org/abs/1604.05508

SLIDE 22

HRI Verification Challenges

§ System complexity
  – HW
  – SW
  – People
§ Concurrency
§ Experiments in labs
  – Expensive
  – Unsafe

SLIDE 23

We are investigating…

§ Testing in simulation
§ Techniques well established in microelectronics design verification
  – Coverage-Driven Verification

… to verify code that controls robots in HRI.

SLIDE 24

Agency for Intelligent Testing

§ Robotic assistants need to be both powerful and smart.
  – AI and learning are increasingly used in robotics.
§ We need intelligent testing.
  – No matter how clever your robot, the testing environment needs to reflect the agency your robot will meet in its target environment.

SLIDE 25

CDV to automate simulation-based testing

• Dejanira Araiza-Illan, David Western, Anthony Pipe and Kerstin Eder. Coverage-Driven Verification — An Approach to Verify Code for Robots that Directly Interact with Humans. In Hardware and Software: Verification and Testing, pp. 69-84. Lecture Notes in Computer Science 9434. Springer, November 2015. DOI: 10.1007/978-3-319-26287-1_5

• Dejanira Araiza-Illan, David Western, Anthony Pipe and Kerstin Eder. Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions. 17th Annual Conference Towards Autonomous Robotic Systems (TAROS 2016), pp. 20-32. Lecture Notes in Artificial Intelligence 9716. Springer, June 2016. DOI: 10.1007/978-3-319-40379-3_3

SLIDE 26

Coverage-Driven Verification

[Testbench diagram: Test → SUT → Response]

SLIDE 27

Coverage-Driven Verification

[Testbench diagram: Test Generator → Test → SUT → Response]

SLIDE 28

Test Generator

§ Tests must be effective and efficient
§ Strategies:
  • Pseudorandom (repeatability)

Scenario: robot-to-human object handover

SLIDE 29

Test Generator

§ Tests must be effective and efficient
§ Strategies (sketched below):
  • Pseudorandom (repeatability)
  • Constrained pseudorandom
  • Model-based to target specific scenarios

Scenario: robot-to-human object handover
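A minimal Python sketch of the first two strategies; the handover action names and the constraint are invented for illustration, whereas the real testbench drives a robot simulation:

```python
import random

# Hypothetical stimulus alphabet for the object-handover scenario.
ACTIONS = ["approach", "signal_ready", "grasp", "release", "withdraw"]

def pseudorandom_test(seed: int, length: int = 5) -> list[str]:
    # Seeded for repeatability: the same seed always yields the same test.
    rng = random.Random(seed)
    return [rng.choice(ACTIONS) for _ in range(length)]

def constrained_test(seed: int, length: int = 5) -> list[str]:
    # Constraint (illustrative): a "release" may only follow a "grasp",
    # biasing tests towards plausible handover interactions.
    rng = random.Random(seed)
    test, grasped = [], False
    for _ in range(length):
        allowed = [a for a in ACTIONS if a != "release" or grasped]
        action = rng.choice(allowed)
        grasped = action == "grasp" or (grasped and action != "release")
        test.append(action)
    return test

print(pseudorandom_test(42))
print(constrained_test(42))
```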

SLIDE 30

Model-based Test Generation

SLIDE 31

Model-based Test Generation

SLIDE 32

Model-based test generation

Formal model (system + environment) → traces from model checking (an environment to drive the system) → test template → test components:

• High-level actions
• Parameter instantiation

The trace-to-test step is sketched below.
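For concreteness, a hypothetical sketch of turning a model-checker trace into concrete tests; the trace format and parameter ranges are invented, while in the papers the traces come from model checking the formal HRI model:

```python
import random

# A trace is a sequence of high-level environment actions, as might be
# extracted from a witness/counterexample produced by a model checker.
trace = ["human_enters", "human_signals_ready", "human_takes_object"]

def instantiate(trace: list[str], seed: int) -> list[tuple[str, float]]:
    # Test template: each high-level action gets a concrete timing
    # parameter, turning one abstract trace into many concrete tests.
    rng = random.Random(seed)
    return [(action, round(rng.uniform(0.5, 3.0), 2)) for action in trace]

# Same trace, different parameter instantiations.
for seed in (0, 1):
    print(instantiate(trace, seed))
```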

SLIDE 33

Coverage-Driven Verification

[Testbench diagram: Test Generator → Test → SUT → Response → Checker]

SLIDE 34

Checker

§ Requirements as assertion monitors:

  • Implemented as automata
  • if [precondition], check [postcondition]

“If the robot decides the human is not ready, then the robot never releases an object”.

§ Continuous monitoring at runtime, self-checking
  – High-level requirements
  – Lower-level requirements, depending on the simulation's detail (e.g., path planning, collision avoidance)

assert {! (robot_3D_position == human_3D_position)}

A minimal monitor sketch follows below.
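Here is a minimal Python sketch of such an assertion monitor as an automaton, for the slide's handover requirement; the state names and event interface are assumptions, since the actual monitors run alongside the robot simulation:

```python
# Monitor for: "If the robot decides the human is not ready,
# then the robot never releases an object."
class ReleaseMonitor:
    def __init__(self):
        self.state = "idle"   # becomes "armed" once the precondition fires
        self.violated = False

    def on_event(self, event: str) -> None:
        if event == "robot_decides_human_not_ready":
            self.state = "armed"              # precondition observed
        elif event == "robot_decides_human_ready":
            self.state = "idle"               # precondition withdrawn
        elif event == "robot_releases_object" and self.state == "armed":
            self.violated = True              # postcondition check failed

m = ReleaseMonitor()
for e in ["robot_decides_human_not_ready", "robot_releases_object"]:
    m.on_event(e)
assert m.violated, "monitor flags the unsafe release"
```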

SLIDE 35

Coverage-Driven Verification

[Testbench diagram: Test Generator → Test → SUT → Response → Checker and Coverage Collector]

SLIDE 36

Coverage Models

§ Code coverage
§ Structural coverage
§ Functional coverage
  • Requirements coverage
  • Functional and safety (ISO 13482:2014, ISO 10218-1)
SLIDE 37

Requirements based on ISO 13482 and ISO 10218

SLIDE 38

Requirements based on ISO 13482 and ISO 10218

SLIDE 39

Coverage Models

§ Code coverage
§ Structural coverage
§ Functional coverage
  • Requirements coverage
  • Functional and safety (ISO 13482:2014, ISO 10218-1)
  • Cross-product functional coverage, sketched below
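A small Python sketch of cross-product functional coverage in the microelectronics testbench style; the two coverage dimensions chosen here for the handover scenario are illustrative assumptions:

```python
from itertools import product

# Hypothetical coverage dimensions for the handover scenario.
human_state = ["ready", "not_ready", "distracted"]
robot_action = ["hold", "release", "withdraw"]

# The coverage model is the cross product of the dimensions.
model = set(product(human_state, robot_action))
hit: set[tuple[str, str]] = set()

def sample(h: str, r: str) -> None:
    # Called once per simulation step with the observed pair.
    hit.add((h, r))

# Observations from (imaginary) test runs:
sample("ready", "release")
sample("not_ready", "hold")

coverage = 100.0 * len(hit & model) / len(model)
print(f"cross-product coverage: {coverage:.0f}% ({len(hit)}/{len(model)} bins)")
# Holes: bins never exercised, used to direct further test generation.
print("holes:", sorted(model - hit))
```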
SLIDE 40

Functional Coverage Results

§ 100 pseudorandomly generated tests
§ 160 model-based tests
§ 180 model-based constrained tests
§ 440 tests in total

SLIDE 41

CDV for Human-Robot Interaction

• Dejanira Araiza-Illan, David Western, Anthony Pipe and Kerstin Eder. Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions. 17th Annual Conference Towards Autonomous Robotic Systems (TAROS 2016), pp. 20-32. Lecture Notes in Computer Science 9716. Springer, June 2016. DOI: 10.1007/978-3-319-40379-3_3

SLIDE 42

Coverage-Driven Verification

§ systematic, goal-directed verification method
  – high level of automation
  – capable of exploring systems of realistic detail under a broad range of environment conditions
§ focus on test generation and coverage
  – constraining test generation requires significant engineering skill and SUT knowledge
  – model-based test generation allows targeting requirements and cross-product coverage more effectively than constrained pseudorandom test generation

SLIDE 43

http://github.com/robosafe/testbench

• Dejanira Araiza-Illan, David Western, Anthony Pipe and Kerstin Eder. Coverage-Driven Verification — An Approach to Verify Code for Robots that Directly Interact with Humans. In Hardware and Software: Verification and Testing, pp. 69-84. Lecture Notes in Computer Science 9434. Springer, November 2015. DOI: 10.1007/978-3-319-26287-1_5

• Dejanira Araiza-Illan, David Western, Anthony Pipe and Kerstin Eder. Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions. 17th Annual Conference Towards Autonomous Robotic Systems (TAROS 2016), pp. 20-32. Lecture Notes in Computer Science 9716. Springer, June 2016. DOI: 10.1007/978-3-319-40379-3_3

SLIDE 44

CDV provides automation


What about agency?

SLIDE 45

http://www.thedroneinfo.com/

SLIDE 46

Belief-Desire-Intention Agents

§ Beliefs: knowledge about the world; guards for plans, updated from executing plans
§ Desires: goals to fulfil; new goals can arise
§ Intentions: chosen plans, according to current beliefs and goals

A minimal sketch of this reasoning cycle follows below.
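To make the cycle concrete, a minimal Python sketch of a BDI-style agent; the plan guards and actions are invented for the handover setting, and the papers use proper BDI agent frameworks rather than this code:

```python
# Minimal BDI-style reasoning cycle: beliefs guard plans; executing a
# plan's actions produces new beliefs and discharges goals.
beliefs = {"human_present"}
desires = ["complete_handover"]

# Plans: (goal, guard over beliefs, actions). Purely illustrative.
plans = [
    ("complete_handover", {"human_present", "human_ready"}, ["release_object"]),
    ("complete_handover", {"human_present"}, ["signal", "wait_for_ready"]),
]

def step() -> None:
    goal = desires[0]
    for plan_goal, guard, actions in plans:
        if plan_goal == goal and guard <= beliefs:   # guard satisfied?
            for action in actions:                   # adopt plan as intention
                print("executing:", action)
                if action == "wait_for_ready":
                    beliefs.add("human_ready")       # new belief from acting
                if action == "release_object":
                    desires.remove(goal)             # goal achieved
            return

step()  # signals and waits: the human becomes ready
step()  # releases the object, dropping the goal
```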

SLIDE 47

CDV testbench components

Intelligent testing is harnessing the power of BDI agent models to introduce agency into test environments.

[Diagram: BDI agents among the CDV testbench components]

SLIDE 48

Research Questions

§ Are Belief-Desire-Intention agents suitable to model HRI?
§ How can we exploit BDI agent models for test generation?
§ Can machine learning be used to automate test generation in this setting?
§ How do BDI agent models compare to automata-based techniques for model-based test generation?

SLIDE 49

Interacting Agents

§ BDI can model agency in HRI

– Interactions between agents create realistic action sequences that serve as test patterns

[Diagram: the robot's code agent, an agent for the simulated human, and agents for simulated sensors, each holding its own beliefs]

SLIDE 50

Interacting Agents

§ BDI can model agency in HRI

– Interactions between agents create realistic action sequences that serve as test patterns

[Diagram as on the previous slide, each agent holding its own beliefs. Which beliefs?]

SLIDE 51

Interacting Agents

[Diagram as on the previous slides. Which beliefs?]

§ BDI can model agency in HRI

– Interactions between agents create realistic action sequences that serve as test patterns

SLIDE 52

Verification Agents

§ Meta agents can influence beliefs
§ This allows biasing/directing the interactions

[Diagram: a verification (meta) agent influences the beliefs of the robot's code agent, the simulated-human agent and the simulated-sensor agents]

SLIDE 53

Which beliefs are effective?

[Diagram: the verification (meta) agent injects belief subsets into the interacting agents]

§ Manual belief selection

SLIDE 54

Which beliefs are effective?

[Diagram: the verification (meta) agent injects belief subsets into the interacting agents]

§ Manual belief selection
§ Random belief selection

SLIDE 55

Which beliefs are effective?

[Diagram: the verification (meta) agent injects belief subsets into the interacting agents]

§ Optimal belief sets determined through reinforcement learning, rewarded by plan coverage (sketched below)
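A hedged sketch of this idea as a simple bandit-style reinforcement learner over belief subsets; the belief names, the coverage oracle and the epsilon-greedy choice are all illustrative assumptions, and the paper's setup differs in detail:

```python
import random
from itertools import combinations

BELIEFS = ["human_ready", "human_distracted", "sensor_noisy"]
# Candidate actions: all non-empty belief subsets the meta agent may inject.
SUBSETS = [frozenset(c) for r in (1, 2, 3) for c in combinations(BELIEFS, r)]

def run_test(subset: frozenset) -> float:
    # Stand-in for the simulator: returns the plan coverage achieved by a
    # test run with these beliefs injected. Purely illustrative rewards.
    base = {"human_ready": 0.4, "human_distracted": 0.3, "sensor_noisy": 0.2}
    return min(1.0, sum(base[b] for b in subset) + random.uniform(0, 0.1))

q = {s: 0.0 for s in SUBSETS}   # estimated coverage per subset
n = {s: 0 for s in SUBSETS}
rng = random.Random(7)

for _ in range(300):            # slide reports convergence in <300 iterations
    if rng.random() < 0.1:      # epsilon-greedy exploration
        s = rng.choice(SUBSETS)
    else:
        s = max(SUBSETS, key=q.get)
    r = run_test(s)
    n[s] += 1
    q[s] += (r - q[s]) / n[s]   # incremental mean update

best = max(SUBSETS, key=q.get)
print("best belief subset:", sorted(best), "expected coverage:", round(q[best], 2))
```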

SLIDE 56

Results

How effective are BDI agents for test generation? How do they compare to model checking timed automata?

[Plot: accumulated code coverage (%) against test number (up to 160 tests), comparing pseudorandom, model checking TA, and BDI agents]

• D. Araiza-Illan, A.G. Pipe, K. Eder. Intelligent Agent-Based Stimulation for Testing Robotic Software in Human-Robot Interactions. Proceedings of MORSE 2016, ACM, July 2016. DOI: 10.1145/3022099.3022101 (arXiv:1604.05508)

• D. Araiza-Illan, A.G. Pipe, K. Eder. Model-based Test Generation for Robotic Software: Automata versus Belief-Desire-Intention Agents. (under review, preprint available at arXiv:1609.08439)

SLIDE 57

The cost of learning belief sets


The cost of learning a good belief set needs to be considered when assessing the different BDI-based test generation approaches.

Convergence in <300 iterations, < 3 hours

SLIDE 58

The cost of learning belief sets

The cost of learning a good belief set needs to be considered when assessing the different BDI-based test generation approaches.

Could be sped up by adding constraints and knowledge to the learning. Convergence in <300 iterations, <3 hours.

SLIDE 59

Code Coverage Results

SLIDE 60

Code Coverage Results

Model-based + BDI vs. pseudorandom (abstract) test generation, per individual test, in ascending order.

Code branch coverage: pseudorandom never reached >66% in 100 tests; all model-based BDI tests reached >80%.

SLIDE 61

BDI-agents vs timed automata


Effectiveness:

§ high-coverage tests are generated quickly

SLIDE 62

BDI-agents vs timed automata

SLIDE 63

BDI-agents vs timed automata

SLIDE 64

Back to our Research Questions

§ Belief-Desire-Intention agents are suitable to model HRI
§ Traces of interactions between BDI agent models provide test templates
§ Machine learning (RL) can be used to automate the selection of belief sets, so that test generation can be biased towards maximizing coverage
§ Compared to traditional model-based test generation (model checking timed automata), BDI models are:
  • more intuitive to write: they naturally express agency,
  • smaller in terms of model size,
  • more predictable to explore, and
  • equal if not better with respect to coverage.

SLIDE 65

http://github.com/robosafe

• D. Araiza-Illan, D. Western, A. Pipe, K. Eder. Coverage-Driven Verification — An Approach to Verify Code for Robots that Directly Interact with Humans. Proceedings of HVC 2015, Springer, November 2015.

• D. Araiza-Illan, D. Western, A. Pipe, K. Eder. Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions. Proceedings of TAROS 2016, Springer, June 2016.

• D. Araiza-Illan, A.G. Pipe, K. Eder. Intelligent Agent-Based Stimulation for Testing Robotic Software in Human-Robot Interactions. Proceedings of MORSE 2016, ACM, July 2016. DOI: 10.1145/3022099.3022101 (arXiv:1604.05508)

• D. Araiza-Illan, A.G. Pipe, K. Eder. Model-based Test Generation for Robotic Software: Automata versus Belief-Desire-Intention Agents. (under review, preprint available at arXiv:1609.08439)

SLIDE 66

In conclusion...

§ Learn from more mature disciplines
§ Select your tools and programming languages wisely
§ Exploit combinations of techniques
§ Automate

SLIDE 67

In conclusion...

§ Learn from more mature disciplines
§ Select your tools and programming languages wisely
§ Exploit combinations of techniques
§ Automate... turn your solutions into “formal apps”
§ Be more clever

SLIDE 68

In conclusion...

§ Learn from more mature disciplines
§ Select your tools and programming languages wisely
§ Exploit combinations of techniques
§ Automate... turn your solutions into “formal apps”
§ Be more clever... use the power of AI for verification

SLIDE 69

Kerstin.Eder@bristol.ac.uk

Thank you

Special thanks to Dejanira Araiza Illan, Jeremy Morse, David Western, Arthur Richards, Jonathan Lawry, Trevor Martin, Piotr Trojanek, Yoav Hollander, Yaron Kashai, Mike Bartley, Tony Pipe and Chris Melhuish for their collaboration, contributions, inspiration and the many productive discussions we have had.

SLIDE 70

SLIDE 71
• M. Webster, D. Western, D. Araiza-Illan, C. Dixon, K. Eder, M. Fisher, A.G. Pipe. An Assurance-based Approach to Verification and Validation of Human-Robot Teams. arXiv:1608.07403

SLIDE 72

• M. Webster, D. Western, D. Araiza-Illan, C. Dixon, K. Eder, M. Fisher, A.G. Pipe. An Assurance-based Approach to Verification and Validation of Human-Robot Teams. arXiv:1608.07403