
On the Importance of Systematic Testing of Safety Critical Systems
Anneliese Andrews, Department of Computer Science, University of Denver, Denver, CO, USA


SLIDE 1

research . deploy . Train . evaluate

On the Importance of Systematic Testing of Safety Critical Systems

Anneliese Andrews Department of Computer Science University of Denver Denver, CO, USA

SLIDE 2

Ø Introduction Ø Background & Related Work Ø Approach

ü Test Generation Process ü Phase 1: Generate Failures and Failure Applicability ü Construction of the Applicability Matrix ü Phase2: Generate Safety Mitigation Tests

Ø Validation

ü Case Study: Railroad Crossing Control System (RCCS) ü Multiple case study comparison ü Scalability ü Effectiveness

Ø Conclusion and Future Work

Outline

SLIDE 3

• Safety Critical Systems
  • Examples: medical devices, aircraft flight control, weapons, and nuclear systems
• Problem:
  • Test regular behavior.
  • Test proper mitigation of failures.
  • Need for certification.
• Model-Based Testing (MBT)
  • Focuses on testing the system behavior.
  • MBT techniques do not systematically model fault behavior.
• Aim: an end-to-end test generation process:
  1. Functional testing.
  2. Generating feasible failures for behavioral states.
  3. Generating tests for proper mitigation of failures.

Introduction

SLIDE 4

• Certification
  • "Procedure by which a third party gives written assurance that a product, process or service conforms to specified requirements"
  • Often tied to standards or guidance documents (e.g. automotive ISO 26262, civil aviation DO-178B, DO-254)
  • A verification tool requires an assessment that the tool is capable of performing a particular verification at an acceptable level of confidence
• Kornecki et al.:
  • "lack of research investment in certification technologies will have significant impact on levels of autonomous control approaches that can be properly certified"
  • "could lead to limiting capability of future autonomous systems"

SLIDE 5

• Focus of testing activities (V-model of ISO 26262):
  • Safety requirements verification
  • System-level black-box testing
  • Test case execution from cause-consequence diagrams
  • Prototyping/animation
  • Boundary value analysis
  • Equivalence classes, input partitioning
  • Process simulation
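For the last two items, a generic sketch of what boundary value analysis and equivalence partitioning produce for a numeric input range (an illustration unrelated to the talk's case studies; the range [0, 100] is invented):

```python
# Generic illustration: for a valid input range [lo, hi], boundary value
# analysis picks values at and just outside/inside the edges, while
# equivalence partitioning picks one representative per class
# (below the range / inside it / above it).
def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_representatives(lo, hi):
    return [lo - 1, (lo + hi) // 2, hi + 1]

bva = boundary_values(0, 100)
reps = equivalence_representatives(0, 100)
```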
SLIDE 6

Ø Model Based Testing (MBT)

ü Communicating EFSM (CEFSM) ü CEFSM = (S, s0, E, P, T,A, M, V, C) ü CEFSMs communicate by exchanging messages through communication channels ü Test Case Generation from CEFSM Model ü Hesse et al., 2007 and Bourhfir et al., 1998. use reachability analysis to generate TCs ü Kovas et al., 2002. use mutation to enable the automation of test selection in a CEFSM model

Ø Fault Modeling and Analysis

ü Tribble et al., 2004. Introduce FTA (technique used to detect the specific causes of possible hazards)

Ø Integration of Safety Analysis Techniques and Behavior Models

ü Ariss et al., 2011, ü Kim et al., 2010 ü Sánchez et al., 2003 Introduce approach for generating test cases

Ø Mitigation Modeling

ü Avizienis et al., 2004 ( A Taxonomy of error handling & fault tolerance techs) ü Lerner et al., 2010 ( Identify several patterns)

Background

Analysis only (Not Testing)

¡

SLIDE 7

Ø Test Generation Process

Ø Phase 1: Generate Failures and Failure Applicability

ü Uses a CEFSM as BM & FT as a FM. ü Make FT compatible with BM ü Transform FT into CEFSM notations & integrate it with the BM. ü Generate tests from the integrated model

Ø Construction of the Applicability Matrix Ø Phase 2: Generate Safety Mitigation Tests

ü Testing proper mitigation where it is required. ü Construct BT from BM using BC ü Generate (p, e) pairs from applicability matrix ü Select (p, e) using CC(c1-c4) ü Construct a MM For each e ü Construct MT & woven into the BT at point of failure. ü Construct SMT to test SCSs.

The approach

SLIDE 8

• CEFSM = (S, s0, E, P, T, A, M, V, C), such that:
  - S is a finite set of states,
  - s0 is the initial state,
  - E is a set of events,
  - P is a set of boolean predicates,
  - T is a set of transition functions such that T: S × P × E → S × A × M,
  - A is a set of actions,
  - M is a set of communicating messages,
  - V is a set of variables, and
  - C is the set of input/output communication channels used in this CEFSM.

T(si, pi, get(mi)) / (sj, send(mj1, ..., mjk), A)
mi = (mId, ej, mDestination)
ej = (eId, eOccurrence, eStatus)

CEFSM
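The tuple above can be rendered as a small Python structure. This is a minimal illustrative sketch, not the authors' implementation; all names are assumptions:

```python
from dataclasses import dataclass, field

# Minimal sketch of a CEFSM = (S, s0, E, P, T, A, M, V, C).
# Transitions map (state, predicate result, event) to
# (next state, action, outgoing messages), mirroring T: S x P x E -> S x A x M.
@dataclass
class CEFSM:
    states: set
    initial: str
    transitions: dict  # (state, pred, event) -> (next_state, action, [messages])
    variables: dict = field(default_factory=dict)
    outbox: list = field(default_factory=list)  # messages sent on channels
    state: str = ""

    def __post_init__(self):
        self.state = self.initial

    def get(self, event, pred=True):
        """Consume an event (a received message) and fire the matching transition."""
        key = (self.state, pred, event)
        if key not in self.transitions:
            return  # no transition enabled: ignore the event
        nxt, action, msgs = self.transitions[key]
        self.state = nxt
        if action:
            action(self.variables)
        self.outbox.extend(msgs)  # send(mj1, ..., mjk)

# Tiny two-state machine: event 'a' moves S0 -> S1 and emits message 'y'.
m = CEFSM(states={"S0", "S1"},
          initial="S0",
          transitions={("S0", True, "a"): ("S1", None, ["y"])})
m.get("a")
```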

[Figure: three communicating EFSMs C1, C2, and C3, with states S0-S2, R0-R2, u0-u2 and transitions labeled input/output, e.g. a/y, b/x, e/c, f/x]

SLIDE 9

• Why use a fault tree (FT)?
  • Model-like, not procedural.
  • Visual.
  • Supports quantitative and qualitative analysis.
  • Explicitly describes how events combine to result in a hazard or failure.

Fault Tree

SLIDE 10

Fault Tree cont’


A fault tree evaluates the combinations of failures that can lead to the top event of interest.
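That evaluation can be sketched as a recursive walk over AND/OR gates (a hypothetical evaluator; the example tree is invented):

```python
# Hypothetical fault-tree evaluator: a tree of ("AND"|"OR", child, ...)
# nodes with strings as basic events. top_event_occurs() answers whether
# a given combination of basic-event failures triggers the top event.
def top_event_occurs(node, failed):
    if isinstance(node, str):              # leaf: a basic event
        return node in failed
    gate, children = node[0], node[1:]
    results = [top_event_occurs(c, failed) for c in children]
    return all(results) if gate == "AND" else any(results)

# Top event fires if A and B fail together, or C fails alone.
ft = ("OR", ("AND", "A", "B"), "C")
```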

SLIDE 11

Compatibility Transformation

FT  = (∨, (∨, ACInit fail, NitroPurge fail), (∨, (Instrument fail, CryoTesting fail, Chilldown fail)))
FT' = (∨, (∨, BFACInitiation.BFCond, BFNitrogenPurge.BFCond), (∨, (BFInstrument.BFCond, BFCryoTesting.BFCond, BFChilldown.BFCond)))
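The renaming step behind FT → FT' can be sketched as a recursive substitution over the tree (illustrative code; the mapping below lists only two of the events):

```python
# Sketch of the compatibility step: FT leaf events are renamed to the
# matching behavioral-model conditions. The mapping is illustrative.
rename = {
    "ACInit fail": "BFACInitiation.BFCond",
    "NitroPurge fail": "BFNitrogenPurge.BFCond",
}

def make_compatible(node):
    if isinstance(node, str):                      # leaf event
        return rename.get(node, node)
    return (node[0],) + tuple(make_compatible(c) for c in node[1:])

ft = ("OR", "ACInit fail", "NitroPurge fail")
ft_prime = make_compatible(ft)
```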

SLIDE 12

• Fault model transformation:
  • Transform FT gates to equivalent gate CEFSMs (GCEFSMs).
  • Each GCEFSM performs the same boolean function as its gate.

Transformation Rules

SLIDE 13

Transformation rules cont’

[Figure: GCEFSM for an AND gate with states S0 (NOP) and S1, and transitions T0-T3 driven by messages of the form Mi(e1(e1.eOccurrence=t, e1.eStatus=t)); counters TotalNoOfEvents and NoOfPositiveEvents determine the GateOccurred / GateNotOccurred / NOP outcome]
SLIDE 14

Event name | Event ID | Gate ID
B | EB1 | G1
C | EB2 | G1
A | EB3 | G2

Transformation Procedure


SLIDE 15

Procedure FTA_TO_GCEFSM(T : Tree) {
    if (tree is null) then return
    for each child C of T, from left to right, do
        FTA_TO_GCEFSM(C)
    Construct GCEFSM gate   // create a gate and configure its variables,
                            // output messages, and its ID
    if (leaf node) then
        insert event name, event ID & gate ID into the Event-Gate table
}

Transformation Procedure cont’
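One possible Python rendering of the post-order traversal in FTA_TO_GCEFSM (illustrative, not the authors' code; the gate and event ID formats are assumptions), which reproduces the Event-Gate table of the previous slide:

```python
# Illustrative rendering of FTA_TO_GCEFSM: walk the fault tree bottom-up
# (post-order), emit one GCEFSM gate per internal node, and record each
# leaf event under its gate in an Event-Gate table.
def fta_to_gcefsm(tree):
    gates, table = [], []
    counter = [0]

    def visit(node):
        if node is None or isinstance(node, str):
            return                        # nothing to construct for a leaf
        gate_type, children = node[0], node[1:]
        for child in children:            # children first, left to right
            visit(child)
        counter[0] += 1
        gid = f"G{counter[0]}"
        gates.append((gid, gate_type))    # construct the GCEFSM gate
        for child in children:            # record leaf events under this gate
            if isinstance(child, str):
                table.append((child, f"EB{len(table) + 1}", gid))

    visit(tree)
    return gates, table

# The fault tree (B AND C) OR A yields the table from the previous slide.
gates, table = fta_to_gcefsm(("OR", ("AND", "B", "C"), "A"))
```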


SLIDE 16

Event-Gate table for Leaf nodes

Integration Procedure


SLIDE 17

Procedure ModelsIntegration(BM, Event-Gate Table) {
    for every mBk do
        for every Event-Gate entry do
            if (mBk.EventNameAndAttribute == Event-Gate.EventNameAndAttribute) then
                mBk.EventID = Event-Gate.ei
                mBk.mDestination = Event-Gate.Gi
}

Integration Procedure cont’
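A sketch of ModelsIntegration over an assumed message representation (the field names and table layout are illustrative):

```python
# Sketch of ModelsIntegration: behavioral-model messages whose event name
# matches an Event-Gate entry get that entry's event ID and are redirected
# to the corresponding gate CEFSM.
def integrate(bm_messages, event_gate_table):
    for msg in bm_messages:
        for event_name, event_id, gate_id in event_gate_table:
            if msg["event"] == event_name:
                msg["event_id"] = event_id      # mBk.EventID = Event-Gate.ei
                msg["destination"] = gate_id    # mBk.mDestination = Event-Gate.Gi
    return bm_messages

msgs = [{"event": "B"}, {"event": "A"}]
table = [("B", "EB1", "G1"), ("A", "EB3", "G2")]
integrate(msgs, table)
```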


SLIDE 18

• Hessel et al.
  • Use reachability analysis.
  • Prune the branches that will not contribute to the coverage.
• Bourhfir et al.
  • Use reachability analysis techniques to produce test cases for the global system.
  • CEFTG generates test cases for each CEFSM individually.
  • Compute the partial product of the system.
  • The algorithm terminates when the coverage is achieved.
• Li et al.
  • Create a flow diagram of the model.
  • Calculate a weight for each node; the weight is mapped to the CEFSM branch for behavior, with an extension of events with variables to model data.
  • Dominator: node A dominates node B if covering B implies covering A.
  • Priority: tests that provide additional coverage have a higher priority.
  • Weight: the depth of the node in the tree.
  • Each node is a branch and each edge is a possible execution.

Test Generation From CEFSM


SLIDE 19

  • Failure Types

Construction of the Applicability Matrix

SLIDE 20

Ø Construct a BT from the BM, using behavior test criteria BC. Ø Determine failure scenarios based on coverage criteria. Ø Construct MT from MM, using MC. Ø Construct a SMT using the BT, failure scenarios and MT according to WR.

Phase 2: Generate Safety Mitigation Tests

SLIDE 21

Ø Behavioral Model BM and Behavioral Test BT

ü Use CEFSM

Ø Determine Failure Scenarios

ü A point of failure is a particular state in a test path at which the failure is injected ü CT = t1 o t2 o t3 ……. o tl ü (1 ≤ p ≤ I) and (1 ≤ e ≤ E).

Ø Coverage Criteria

ü Criteria 1: All combinations, i.e. all positions p, all applicable failure types ü Criteria 2: All unique nodes, all applicable failures. ü Criteria 3: All tests, all unique nodes, all applicable failures. ü Criteria 4: All tests, all unique nodes, some failures

Ø Generate Mitigation Test (MT) from MM Ø Generate SMTs using Weaving Rules

ü determine test t that covers position p. ü keep path represented by t until failure position p. ü apply failure of type e (fe) in p. ü select appropriate mt ∈ Mte based on aggregation criteria to guarantee covering all possible failures and using each mt at least once.

Phase2: Generate Safety Mitigation Tests Con’t
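The first two coverage criteria can be contrasted with a small sketch that enumerates (p, e) pairs from an applicability matrix (the matrix and test path below are invented):

```python
# Hypothetical applicability matrix: node -> applicable failure types.
applicability = {"n1": [1, 2], "n2": [1], "n3": [2, 3]}
# A test path visits nodes, possibly repeating them across positions.
path = ["n1", "n2", "n1", "n3"]

def c1_pairs(path, applicability):
    """Criterion 1: every position, every applicable failure type."""
    return [(p, e) for p, node in enumerate(path, 1)
                   for e in applicability[node]]

def c2_pairs(path, applicability):
    """Criterion 2: every unique node once, every applicable failure type."""
    pairs, seen = [], set()
    for p, node in enumerate(path, 1):
        if node not in seen:
            seen.add(node)
            pairs.extend((p, e) for e in applicability[node])
    return pairs
```

Here C1 yields a pair for every position (including the repeat of n1), while C2 injects each failure only at a node's first occurrence, so its pair set is smaller.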

SLIDE 22

We use weaving rules more formally for each type of mitigation. Given a test t = {s1 ... sb ... node(p) ... sf ... sk}:

• Fix
  • Option 1: Compensate ((partial) fix and proceed): smt = s1 ... node(p) mt node(p).
  • Option 2: Go to a fail-safe state (fix and stop): smt = s1 ... node(p) mt.
• Rollforward
  • Option 1: Rollforward mitigates the failure and proceeds: smt = s1 ... node(p) mt sf ... sk.
  • Option 2: Deferred fixing: smt = s1 ... node(p) sf mt sf+1 ... sk.
• Rollbackward
  • Option 1: Rollbackward: smt = s1 ... node(p) mt sb ... sk.
  • Option 2: Rollbackward and stop: smt = s1 ... node(p) mt sb.
• Internal compensate (no user action required): tests the immediate system fix.

Formal Weaving Rules
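Three of the weaving rules can be sketched as list splicing (illustrative; here mt is spliced in at a 1-based position p, and rollbackward is assumed to resume from the state immediately before node(p)):

```python
# Illustrative weaving: splice a mitigation test mt into behavioral test t
# at failure position p (1-based). Assumes p >= 2 for rollbackward, where
# sb is taken to be the state just before node(p).
def weave(t, mt, p, rule):
    before, at, after = t[:p - 1], [t[p - 1]], t[p:]
    if rule == "fix_and_stop":    # Fix, option 2: s1 ... node(p) mt
        return before + at + mt
    if rule == "rollforward":     # Rollforward, option 1: s1 ... node(p) mt sf ... sk
        return before + at + mt + after
    if rule == "rollbackward":    # Rollbackward, option 1: s1 ... node(p) mt sb ... sk
        return before + at + mt + t[p - 2:]
    raise ValueError(f"unknown rule: {rule}")

t = ["s1", "s2", "s3", "s4"]
woven = weave(t, ["m1", "m2"], p=3, rule="rollforward")
```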

SLIDE 23

Ø Detailed case study Ø Summary of multiple case studies Ø Scalability Ø Effectiveness

Validation

SLIDE 24

• Phase 1: Generate failures and failure applicability [Gario et al.]
• BM: Railroad Crossing Control System (RCCS)

Accident FT

Case Study : Railroad Crossing Control System (RCCS)

SLIDE 25

ICEFSM Model of the RCCS

SLIDE 26

Test Paths of RCCS & Integrated Model

SLIDE 27

Applicability Matrix & Failure Types

SLIDE 28

• (1 ≤ p ≤ 58) and (1 ≤ e ≤ 4).
  • 137 (p,e) pairs

Coverage Criteria C1: all positions, all applicable failures

SLIDE 29

  • 33 (p,e) pairs

C2: all unique nodes, all applicable failures

SLIDE 30

  • 33 (p,e) pairs

C3: all tests, all unique nodes, all applicable failures

SLIDE 31

  • 14 (p,e) pairs

C4: all tests, all unique nodes, some failures

SLIDE 32

Mitigation Requirement

MM2: fix and stop

MM3: fix and proceed

MM4: compensate

Mitigation Requirements & Mitigation Models

SLIDE 33

SMTs for C3

SLIDE 34

SMTs for C4

SLIDE 35

Domain | BM: S | BM: T | #BT | Len(BT) | #F Types | #FT | Appl. Level | M Types | Len(MT) | SMT C1 | SMT C2 | SMT C3 | SMT C4
LV | 21 | 39 | 5 | 49 | 14 | 4 | 0.46% | 2 | 16 | 516 | 180 | 180 | 23
RCCS | 14 | 19 | 11 | 58 | 4 | 1 | 0.61% | 3 | 4 | 178 | 43 | 43 | 15
Insulin Pump | 15 | 23 | 11 | 74 | 4 | 1 | 0.66% | 3 | 5 | 122 | 54 | 54 | 16

Three Case Studies

BM = Behavioral Model, S = State, T = Transition, BT = Behavioral Test, Len = Length, CT = Concatenated Tests, F = Failure, FT = Fault Tree, MT = Mitigation Test, SMT = Safety Mitigation Test, C = Criteria.

SLIDE 36

  • Simulation Experiments
  • C1 expensive
  • C2/C3 effective
  • C4 too weak

Scalability & Effectiveness

Domain | BM: S, T | FM: Leaves, AND, OR, XOR, Timing | CEFSMs: S, T | EFSM: S, T
GB | 13, 15 | 5, 3, 1, 2 | 27, 55 | 79, 162
RCCS | 14, 19 | 8, 2, 5 | 28, 70 | 303, 514
LV | 21, 39 | 14, 10 | 41, 117 | 4316, 8335
S = State, T = Transition, Timing = timing gates; blank FM cells were empty in the source.

SLIDE 37

Ø Proposed a methodology to test SCSs Ø Phase 1: the transformation & integration of FTs into a CEFSM results in models that are at least an order of magnitude smaller than Sánchez’s. Ø Phase 2: keeping BM &MM separate and weaving required mitigations into a BT

Ø Keeps models smaller by not integrating them. Ø Leverages an existing BT Ø Focuses on mitigations Ø Preliminary Effectiveness analysis is positive

Conclusion & Future Work

SLIDE 38

Ø Future work :

ü Generalizability to other types of behavioral models (e. g. UML, Petri Nets), application domains ü Generalizability to other types of fault models, such as FMECA and the resulting prioritization of test cases, ü Evaluation wrt certification ü Tools