LEARNING-BASED TESTING: RECENT PROGRESS AND FUTURE PROSPECTS
Karl Meinke, Computer Science School, KTH Stockholm (PowerPoint presentation)


SLIDE 1

LEARNING-BASED TESTING: RECENT PROGRESS AND FUTURE PROSPECTS

Karl Meinke Computer Science School KTH Stockholm

SLIDE 2

Overview of Talk

1. Testing as a search problem
2. Architecture of an LBT tool: LBTest
3. General principles of LBT
4. Quantifying the complexity of testing and learning
5. Regular inference and over-approximation
6. Querying power of model checkers

The unifying theme of 3, 4 and 5 is measuring model convergence, since we rarely learn to completion.

Based on: K. Meinke: Learning-Based Testing: Recent Progress and Future Prospects, Proc. This Workshop?

SLIDE 3

Testing as a Search Problem

  • Basic route to fully automated testing
  • Has analogies to machine learning
    • Finding access strings
    • Finding distinguishing strings
  • Random testing
  • Search-based testing (optimise a cost function, SBSE)
    • Gradient descent
    • Genetic algorithms
    • Simulated annealing
  • Model-based testing
    • Constraint satisfaction on abstract models
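To make the search-based item above concrete, here is a minimal sketch of simulated annealing driving test-input generation toward a branch target. The cost function (a branch-distance fitness for a hypothetical condition `x == 42`), the step sizes and the cooling schedule are all illustrative assumptions, not part of the talk.

```python
import math
import random

def branch_distance(x):
    # Hypothetical fitness: distance to satisfying the branch
    # condition "x == 42"; 0 means the target branch is covered.
    return abs(x - 42)

def simulated_annealing(cost, start, temp=100.0, cooling=0.95, steps=2000):
    """Minimise `cost` over integer test inputs by simulated annealing."""
    current = start
    for _ in range(steps):
        if cost(current) == 0:
            break  # target covered: stop searching
        neighbour = current + random.choice([-1, 1]) * random.randint(1, 10)
        delta = cost(neighbour) - cost(current)
        # Always accept improvements; accept worsening moves with
        # probability e^(-delta / T), which shrinks as T cools.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
        temp = max(temp * cooling, 1e-6)
    return current

random.seed(1)
found = simulated_annealing(branch_distance, start=500)
```

Gradient descent and genetic algorithms fit the same mould: only the neighbourhood move and the acceptance rule change.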
SLIDE 4
  • 1. LBTest Tool
  • LBTest implements black-box requirements testing for embedded systems with off-the-shelf and customised components
  • LBTest automates 3 processes:
    • Automated test case generation (ATCG), from 3 sources:
      • Active learning queries
      • Model checker generated counterexamples
      • Stochastic equivalence checker queries
    • Test execution (online testing)
    • Verdict construction (pass/fail/warning/exception)
  • Some configurations quickly achieve high model convergence
SLIDE 5

LBTest Architecture

[Architecture diagram: an automaton learning algorithm produces model abstractions Mn (n = 1, 2, …), which the NuSMV model checker checks against an LTL requirement formula Req; counterexamples in become test cases, executed on the System Under Test (e.g. a jar file) through a communication wrapper; observed outputs on feed back into the learner; a stochastic equivalence checker supplies further queries; the TCG and oracle component issues verdicts vn.]

SLIDE 6

Technical & Process Advantages

  • Well suited to agile development
  • Model is always synchronised to actual code
  • No false positives or false negatives due to wrong/outdated models (cf. model-based testing)
  • Avoid manual model construction and maintenance
SLIDE 7

Modular Structure

  • Learners
  • L*Mealy
  • Kearns' algorithm
  • CGE, ICGE (term rewriting system representation)
  • MinSplit (NDFA representation)
  • Hybrid automaton learner HyCGE (infinite state systems)
  • Model checkers
  • NuSMV 2.5
  • BDD checker
  • BMC/SAT solver
  • nuXmv 1.0
  • Stochastic equivalence checker
  • First / longest / shortest difference (strategies)
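The "first difference" strategy listed above can be sketched as follows: sample random input words, run them on both the hypothesis model and the SUT, and return the prefix up to the first position where outputs diverge. The machine encoding, interfaces and example automata below are illustrative assumptions, not LBTest's actual API.

```python
import random

def run_machine(machine, word):
    """Run a Moore-style hypothesis {state: (output, {input: next_state})},
    returning the output observed after each input symbol."""
    state = machine["init"]
    outputs = []
    for sym in word:
        state = machine["states"][state][1][sym]
        outputs.append(machine["states"][state][0])
    return outputs

def first_difference(model, sut, alphabet, max_len=10, trials=200, seed=0):
    """Stochastic equivalence check, 'first difference' strategy: sample
    random input words; on the first word where outputs diverge, return the
    prefix up to and including the first differing position, or None if no
    difference is found within the trial budget (the convergence bound)."""
    rng = random.Random(seed)
    for _ in range(trials):
        word = [rng.choice(alphabet) for _ in range(rng.randint(1, max_len))]
        out_m, out_s = run_machine(model, word), run_machine(sut, word)
        for pos, (m, s) in enumerate(zip(out_m, out_s)):
            if m != s:
                return word[: pos + 1]  # counterexample prefix
    return None

# Hypothetical two-state hypothesis, and a SUT that differs
# only on input "b" taken in state 1.
model = {"init": 0, "states": {0: ("off", {"a": 1, "b": 0}),
                               1: ("on",  {"a": 0, "b": 1})}}
sut = {"init": 0, "states": {0: ("off", {"a": 1, "b": 0}),
                             1: ("on",  {"a": 0, "b": 0})}}
cex = first_difference(model, sut, ["a", "b"])
```

The "longest" and "shortest" difference strategies would instead keep sampling and prefer longer or shorter counterexample prefixes.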
SLIDE 8

Requirements Modeling

  • Modeling reactive systems needs a time concept
  • LBTest uses propositional linear temporal logic (PLTL)
  • PLTL = “Boolean logic + time”
  • Conventional model-based testing (conformance testing) is the next-only part of PLTL.
  • Could interface LTL to visual requirements modeling languages and pattern languages.

SLIDE 9

Approximate Models

  • Real-world SUTs are infinite state systems
  • LBTest constructs finite state approximations through finite partition sets.
  • Input partitioning is implemented in LBTest (test selection)
  • Output partitioning is implemented in SUT wrapper (equivalence class)
  • Gives a limited first-order extension to PLTL.
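A minimal sketch of what such a wrapper-side output partition might look like, using class names in the style of the labels visible in the learned model of the brake-by-wire study (e.g. "under1;still;zero;noSlip"). The partition boundaries and field names here are assumptions for illustration, not LBTest's actual configuration.

```python
def abstract_speed(kmh):
    """Partition concrete vehicle speed into finite classes
    (hypothetical boundaries, named after the model labels)."""
    if kmh < 1.0:
        return "under1"
    elif kmh <= 10.0:
        return "10"
    elif kmh <= 20.0:
        return "20"
    else:
        return "30"

def abstract_output(speed_kmh, torque_nm, slip):
    """Output partitioning in the SUT wrapper: map one concrete
    observation to one abstract output symbol per equivalence class."""
    motion = "still" if speed_kmh < 1.0 else "moving"
    torque = "zero" if torque_nm == 0 else "nonZero"
    slip_s = "slip" if slip else "noSlip"
    return ";".join([abstract_speed(speed_kmh), motion, torque, slip_s])
```

Input partitioning is the mirror image: the test generator picks one concrete representative from each abstract input class.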
SLIDE 10

Verdict Construction (Oracle step)

  • On-the-fly verdict construction filters false negatives
  • Compares two behaviours:
    (1) a requirement-violating behaviour predicted by the model
    (2) the behaviour observed on the SUT
  • Prediction == Observation => Fail/Warning
  • Prediction != Observation => Pass
  • No Observation => Exception/Timeout error
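The verdict table above can be sketched as a small oracle function. The policy for splitting Fail from Warning (by whether the hypothesis model has converged) is one plausible reading, flagged as an assumption rather than LBTest's exact rule.

```python
def verdict(predicted_bad, observed, timed_out=False, model_converged=False):
    """Oracle step: compare the requirement-violating behaviour predicted
    by the model checker with the behaviour observed on the SUT."""
    if timed_out or observed is None:
        return "exception"  # no observation: timeout or crash
    if observed == predicted_bad:
        # The SUT really exhibits the predicted bad behaviour.
        # Assumed policy: 'warning' while the hypothesis model is still
        # unconverged, 'fail' once we trust the model.
        return "fail" if model_converged else "warning"
    return "pass"  # prediction refuted: the counterexample was spurious
```

Note the inversion relative to conventional oracles: here a *match* with the prediction is bad news, because the prediction is a counterexample trace.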
SLIDE 11

Case Study: Brake-by-Wire ECU

[Diagram: brake-by-wire architecture with a GlobalBrakeController (gbc), four ABS components (absFR, absFL, absRL, absRR), wheel rpm inputs (e.g. RRWhl_rpm), brake torque outputs (e.g. ABSTorq_RR), and brake and accelerator pedals.]

SLIDE 12

Fourteen Black-box Requirements

REQ-4: If the brake pedal is pressed and the actual speed of the vehicle is larger than 10 km/h and the slippage sensor shows that the (front right) wheel is slipping, this implies that the corresponding brake torque at the (front right) wheel should very quickly be 0.

G( BrakePedal = b & Motion = moving & SlipRR = slipping -> X( ABSBrakeTorqueRR = zero ) )
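To show how such a G(p -> X q) requirement is evaluated against an observed behaviour, here is a minimal checker over a finite trace of abstract states, using finite-trace semantics (the X obligation at the final state is vacuous). The state-dictionary encoding is an assumption for illustration.

```python
def holds_g_implies_next(trace, pre, post):
    """Evaluate G( pre -> X post ) over a finite trace of abstract states.
    Finite-trace semantics: the X obligation at the last state is vacuous."""
    for t in range(len(trace) - 1):
        if pre(trace[t]) and not post(trace[t + 1]):
            return False
    return True

# REQ-4's shape: brake pressed while moving with the RR wheel slipping
# implies zero RR brake torque at the next step.
pre = lambda s: (s["BrakePedal"] == "b" and s["Motion"] == "moving"
                 and s["SlipRR"] == "slipping")
post = lambda s: s["ABSBrakeTorqueRR"] == "zero"
```

A verdict then amounts to evaluating this predicate on the trace the SUT actually produced for a generated test case.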

SLIDE 13

Model #3 after 400 msec

[Diagram: learned hypothesis automaton; states are labelled with abstract outputs such as "under1;still;zero;noSlip", "10;moving;zero;noSlip", "20;moving;zero;noSlip", "30;moving;zero;noSlip", "under1;still;nonZero;noSlip" and "1;still;nonZero;slip", reached by access strings over the inputs a, b, i (e.g. biaaa leads to 30;moving;zero;noSlip).]

SLIDE 14
  • 2. Abstract LBT Algorithm
  • 1. M0 := getInitialHypothesis();
  • 2. For each k >= 0 do
    2.1 Model check Mk against Req
    2.2 Choose "best counterexample" ik+1 from step 2.1
    2.3 Execute ik+1 on SUT to produce ok+1
    2.4 If (ik+1, ok+1) satisfies !Req, label ik+1 as an error
    2.5 If equivalent(SUT, Mk+1, convergenceBound) break
    2.6 Mk+1 := getNextHypothesis(ik+1, ok+1)
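The abstract algorithm can be sketched as a runnable loop with the learner, model checker and equivalence checker passed in as callables. All interfaces here are assumptions; for readability the sketch refines the hypothesis before the convergence check, so the check sees Mk+1 as in step 2.5.

```python
def lbt_loop(sut, model_check, learn, initial_model, req, equivalent,
             max_iters=100):
    """Sketch of the abstract LBT loop (interfaces are assumptions):
    model_check(M, req) -> counterexample input, or None if M satisfies req;
    learn(M, i, o)      -> refined hypothesis from the new query (i, o);
    equivalent(sut, M)  -> stochastic convergence check."""
    M = initial_model                    # 1. M0 := getInitialHypothesis()
    errors = []
    for _ in range(max_iters):           # 2. for each k >= 0
        i = model_check(M, req)          # 2.1/2.2 best counterexample from MC
        if i is None:
            break                        # hypothesis satisfies req
        o = sut(i)                       # 2.3 execute ik+1 on the SUT
        if not req(i, o):                # 2.4 (i, o) satisfies !req: an error
            errors.append(i)
        M = learn(M, i, o)               # 2.6 Mk+1 := getNextHypothesis(i, o)
        if equivalent(sut, M):           # 2.5 convergence bound reached
            break
    return M, errors
```

With stub components (a table-building learner, a model checker that proposes unexplored inputs) the loop converges on a toy SUT and flags exactly the requirement-violating inputs.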

SLIDE 15
  • 3. Quantifying the Complexity of Testing and Learning

SLIDE 16
  • 4. Regular Inference by Over-Approximation

SLIDE 17
SLIDE 18
  • 5. Querying Power of Model Checkers

What is the querying power of model checkers? Do they accelerate convergence? (Lucent patent!) Answer: switch the MC off … In two automotive case studies, using more than 20 different requirements and 3 learning algorithms, we found no difference in convergence at all.

SLIDE 19

Conclusions

  • Advantages
  • Flexible (black-box)
  • High-volume, high coverage (active learning)
  • Metric coverage (stochastic equivalence checking)
  • Accurate test verdicts (formal requirements & model checking)
SLIDE 20

Future Prospects

  • Latency problems – distributed learning?
  • Faster learning – sparse models, more extrapolation?
  • Learning more expressive models?
  • No statistical methods here?
  • Implicit model techniques (NN) & model checking?
  • Multiple counter-examples (model checking)