Testing with and for MDE tools – Jesús Sánchez Cuadrado – PowerPoint PPT Presentation



SLIDE 1

Testing with and for MDE tools

Jesús Sánchez Cuadrado

University of Murcia. Joint work with Juan de Lara and Esther Guerra (@UAM)

@sanchezcuadrado http://github.com/jesusc jesusc@um.es

SLIDE 2

Outline

  • Context: AnATLyzer
  • Testing Framework
  • Testing MDE tools

MFoC workshop @ IMDEA – 26/11/2019

SLIDE 3

Context: AnATLyzer

Static analyser for ATL model transformations

SLIDE 4

AnATLyzer features

  • Static analysis to detect more than 50 types of problems in ATL transformations

  • Powered by constraint solving
  • Support for both live and batch analysis
  • Additional features
      • Eclipse IDE integration (+ quick fixes, quick assist, explanations)
      • Visualizations
      • Source and target constraint handling
      • Programmatic API
  • Utilities built around AnATLyzer
      • Transformation footprint, constraint satisfaction for OCL
  • Analysis extensions
      • Support for UML profiles

SLIDE 5

AnATLyzer IDE

  • 1. Live error reporting
  • 2. Analysis view
  • 3. Additional analysis executed on demand
  • 4. Quick fixes
  • 5. Visualizations

SLIDE 6

AnATLyzer IDE

SLIDE 7

How does AnATLyzer work?

The slide's diagram shows the analysis pipeline:

  • 1. Type checking of the ATL transformation against its meta-models, producing an annotated ATL model
  • 2. Creation of the transformation dependency graph (TDG)
  • 3. Transformation analysis, reporting potential problems (errors and warnings)
  • 4. Constraint solving, searching for a witness model for each potential problem: if a witness is found, the error is confirmed; if not, it is discarded

SLIDE 8
Some experimental results

  • Jesús Sánchez Cuadrado, Esther Guerra and Juan de Lara. “Static analysis of model transformations”. IEEE Transactions on Software Engineering, 43(9), 2017.
      • Analysed 100 transformations
      • 4806 errors were found
      • 1207 errors were runtime errors
      • 2157 errors were violations of target meta-model conformance
  • Jesús Sánchez Cuadrado, Esther Guerra and Juan de Lara. “AnATLyzer: An Advanced IDE for ATL model transformations”. ICSE’18, 2018.
      • Replicated the experiment with larger transformations, with similar results:
      • Java2Kdm: 149 errors
      • Kdm2Uml: 60 errors
      • SysML2Modelica: 222 errors

SLIDE 9

Testing framework

SLIDE 10

AnATLyzer’s testing framework

  • We built a testing framework organically
      • On demand, as new issues arose while testing AnATLyzer
  • We needed to:
      • Evaluate the precision of the static analysis
      • Evaluate the relevance of the quick fixes
      • Verify the correctness of internal transformations (in particular, optimising transformations for OCL expressions)
  • Result: AnATLyzer’s testing framework

SLIDE 11

Components of our testing framework

  • Model generators
      • Support for model finding
      • Catalogue of mutation operators
  • Transformation executor
      • Configures and launches transformations
  • Oracles
      • Contracts
      • Model comparators
  • Test driver

SLIDE 12

Testing scenarios

  • Manual test cases
  • Automatic testing
  • Contract-based testing
  • Mutation testing

SLIDE 13

Manual test cases

The slide's diagram shows the flow: input models (model_n.xmi) are fed to the transformation execution, producing actual outputs (output_n.xmi); model comparison checks them against the expected outputs (expect_n.xmi) and produces a differences report. Crashes during execution are also detected.

SLIDE 14

Manual test cases


@RunWith(Parameterized.class)
public class TestUML2GUI extends ManualModelsTestCase {

    private static Metadata metadata = new Metadata("transformations/factories2pn_demo.atl")
        .configureInModel("IN", "FAC", "metamodels/factory.ecore")
        .configureOutModel("OUT", "PN", "metamodels/pn.ecore")
        .configureOutputFolder("outputs/manual");

    @Parameters(name = "{0}")
    public static Collection<AnATLyzerTestCase> data() {
        metadata.addTestCase("IN", "models/manual/factory-1.uml", "OUT", "models/manual/pn-1.xmi");
        metadata.addTestCase("IN", "models/manual/factory-2.uml", "OUT", "models/manual/pn-2.xmi");
        return metadata.getTestCases();
    }
    …

SLIDE 15

Automatic test case generation

The slide's diagram shows the flow: the model generator produces input models (model_n.xmi), which are fed to the transformation execution; the resulting outputs (output_n.xmi) are validated for target meta-model conformance, and crashes are detected.

  • We look for crashes or for non-conforming output models
  • Akin to fuzzing
  • AnATLyzer is able to do this statically many times, but not always, e.g.:
      • Infinite recursion
      • OCL operations not supported by the model finder
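The loop above can be sketched in code. This is a minimal illustration under invented types, not AnATLyzer's real API: Model, ModelGenerator, Transformation and Validator are placeholders.

```java
import java.util.List;

// Minimal sketch of the automatic-testing loop (hypothetical types, not
// AnATLyzer's real API): generate an input model, run the transformation,
// and count crashes and target meta-model violations.
public class FuzzLoop {
    interface Model {}
    interface ModelGenerator { Model generate(long seed); }
    interface Transformation { Model run(Model input) throws Exception; }
    interface Validator { List<String> conformanceErrors(Model output); }

    static int countFailures(ModelGenerator gen, Transformation trafo,
                             Validator validator, int runs) {
        int failures = 0;
        for (int i = 0; i < runs; i++) {
            Model input = gen.generate(i);
            try {
                Model output = trafo.run(input);
                if (!validator.conformanceErrors(output).isEmpty()) {
                    failures++; // output violates the target meta-model
                }
            } catch (Exception crash) {
                failures++; // the transformation crashed on this input
            }
        }
        return failures;
    }
}
```

Both failure kinds (crash and non-conformance) are treated uniformly, which matches the fuzzing analogy: any abnormal outcome is a finding.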

SLIDE 16

Model generation

  • Random generation
      • Reused from a third-party implementation
      • Works, but probably needs to be improved
  • Meta-model coverage
      • Instantiate feasible combinations of types and features
  • Path coverage
      • For each feasible (sub-)path of the transformation, generate a model that would reach this point

SLIDE 17

Contract-based testing

The slide's diagram shows the flow: the model generator produces input models (model_n.xmi), the transformation is executed, and a contract validator checks the outputs (output_n.xmi) against the contract; crashes are also detected. Example contract:

FAC!Factory.allInstances()->size() = PN!PetriNet.allInstances()->size()
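In the same spirit, a contract check can be sketched programmatically. The list-of-type-names model representation below is a deliberate simplification (the real framework evaluates OCL contracts over models), and the method name is invented.

```java
import java.util.List;

// Simplified sketch of the contract on this slide: every source Factory
// must correspond to exactly one target PetriNet. Models are reduced to
// lists of type names purely for illustration.
public class ContractCheck {
    static boolean factoriesMatchNets(List<String> sourceTypes, List<String> targetTypes) {
        long factories = sourceTypes.stream().filter("Factory"::equals).count();
        long nets = targetTypes.stream().filter("PetriNet"::equals).count();
        return factories == nets;
    }
}
```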

SLIDE 18

Mutation testing

  • How do we know if our test suite is good enough?
  • Mutate the original program
  • Execute the program with the mutation
  • Is the test suite passing?
      • No: the test suite is detecting the potential bug (killed mutant)
      • Yes: the test suite is not strong enough to detect the bug

The slide's diagram: mutation operators derive mutant programs from the program under test; each mutant is executed against the test cases and its result is compared with the original run (mutant test == original test?). Mutants whose tests still pass are live, the others are killed, and the ratio of killed mutants gives the mutation score.
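The mutant classification above boils down to a simple computation, sketched here with an invented Mutant type:

```java
import java.util.List;

// Sketch of the mutation score: a mutant is "killed" when the test suite
// fails on it, "live" when the suite still passes. The score is the
// fraction of killed mutants. Mutant is an illustrative placeholder type.
public class MutationScore {
    interface Mutant { boolean testSuitePasses(); }

    static double score(List<Mutant> mutants) {
        long killed = mutants.stream().filter(m -> !m.testSuitePasses()).count();
        return (double) killed / mutants.size();
    }
}
```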
SLIDE 19

Some mutation operators

  • Implemented 55 operators
  • Initial experiment to assess them – no firm conclusions yet

SLIDE 20

How do we test our tools?

SLIDE 21

Testing a static analyser

  • If AnATLyzer reports an error:
      • Is the error going to happen at runtime?
      • If not, it is a false positive
  • If AnATLyzer does not report an error:
      • Does there exist a model that will exercise the error?
      • If so, it is a false negative

The slide's diagram shows the experimental setup: starting from (1) a synthetic transformation and its input meta-model, (2) mutation operators produce transformation mutants and (3) meta-model coverage generates input test models; (4) AnATLyzer's verdict ("error" vs. "ok") on each mutant is compared against the test outcome, classifying each case as a true negative (TN), false negative (FN), false positive (FP) or true positive (TP).

SLIDE 22

Testing a static analyser

  • 1. Take a transformation manually checked to be error free
      • Generate a large test suite (meta-model coverage)
  • 2. Apply mutations to the transformation
      • Classic mutations + mutations aimed at breaking the typing
  • 3. For each mutant, run:
      • AnATLyzer
      • The test suite
  • 4. Compare the results

SLIDE 23

Testing a static analyser

  • Good precision and recall
      • Use of constraint solving to confirm or discard problems
  • Typical causes of discrepancies:
      • Infinite recursion
      • Dead code
      • AnATLyzer limitations or bugs!


Results:

  #Mutants          483
  True positives    337 (69.77%)
  True negatives    125 (25.88%)
  False positives    15 (3.11%)
  False negatives     6 (1.24%)
  Precision         0.96
  Recall            0.98
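The reported precision and recall follow directly from the confusion matrix, as this small check shows:

```java
// Recomputing precision and recall from the confusion matrix on this
// slide: precision = TP / (TP + FP), recall = TP / (TP + FN).
public class PrecisionRecall {
    static double precision(int tp, int fp) { return (double) tp / (tp + fp); }
    static double recall(int tp, int fn)    { return (double) tp / (tp + fn); }
}
```

With TP = 337, FP = 15 and FN = 6, precision is 337/352 ≈ 0.957 and recall 337/343 ≈ 0.983, which round to the reported 0.96 and 0.98.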

SLIDE 24

Testing quick fixes

  • A quick fix should fix an error
  • But may introduce other errors (which ones?)
  • An error should be fixable by at least one quick fix
  • How do we know if our implementation satisfies these properties?

SLIDE 25

Testing quick fixes

  • 1. Synthetic, clean transformations
  • 2. Apply mutation operators
  • 3. For each mutant, run AnATLyzer
  • 4. For each error, try to quick fix
  • 5. Re-run AnATLyzer
  • 6. Compare the results

1. Is the original error fixed? 2. Have we introduced new errors?

The slide's diagram: (1) a synthetic transformation is (2) mutated using the mutation operators; (3) AnATLyzer reports the errors in each mutant; (4) a quick fix is applied to each error, yielding a fixed transformation; (5) AnATLyzer is re-run on it; and (6) the two error reports are compared.

* Comparing the results is quite tricky…

SLIDE 26

Testing an optimiser

  • Quick fixes tend to generate large and often unreadable expressions
  • OCL optimiser
      • Reduces the size of the generated expressions by applying simplifications
  • How do we know that the implementation keeps the same semantics?
      • Compiler verification? Too expensive
      • Translation validation! A runtime verification approach


Example (before and after optimisation):

  f.elements->select(e | e.oclIsKindOf(FAC!Generator) or e.oclIsKindOf(FAC!Assembler) or e.oclIsKindOf(FAC!Terminator))

  f.elements->select(e | e.oclIsKindOf(FAC!Machine))

SLIDE 27

Testing an optimiser

Example over the PNML meta-model (where an Arc connects NetContent elements via source/target, and a NetContentElement has an optional (0..1) name of type Name):

  Original:   Arc.allInstances()->forAll(a | a.source.name.oclIsKindOf(Name))
  Optimised:  Arc.allInstances()->forAll(e | true), and then true
              (applying the oclIsKindOf (#4) and iterator (#5) optimisations)

  Equivalence formula (negated), given to the model finder:

  let original = Arc.allInstances()->forAll(a | a.source.name.oclIsKindOf(Name)) in
  let optimised = true in
  not ((original implies optimised) and (optimised implies original))

  Model finding returns a counter-example: an Arc from a Transition to a Place whose source has no name.
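The core of the check can be sketched as a search for a model on which the two expressions disagree. A real model finder explores the whole model space symbolically; in this hedged sketch it is replaced by a plain scan over candidate models, with M as a placeholder model type.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch of translation validation: two expressions are equivalent iff no
// model satisfies the negated equivalence formula
// not((original implies optimised) and (optimised implies original)),
// i.e. iff no model makes them disagree (exclusive or).
public class TranslationValidation {
    static <M> Optional<M> counterExample(Predicate<M> original,
                                          Predicate<M> optimised,
                                          List<M> candidates) {
        // original xor optimised is exactly the negated equivalence
        return candidates.stream()
                .filter(m -> original.test(m) != optimised.test(m))
                .findFirst();
    }
}
```

Any returned model is a counter-example refuting the optimisation; an empty result over the whole search space means the rewrite preserved the semantics.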

SLIDE 28

Testing an optimiser

The same example, showing why the optimisation is in fact not applicable: since the name reference is optional (0..1) in the PNML meta-model, a.source.name may be OclUndefined, and OclUndefined.oclIsKindOf(Name) evaluates to false. Model finding produces the counter-example (an Arc whose source has no name), refuting the equivalence.

SLIDE 29

Testing an optimiser

  • Translation validation is applied after each optimisation
      • A runtime verification approach
  • Can we have a sense of the correctness “upfront”?
      • Apply quick fixes to:
          • Already faulty transformations
          • Mutants
      • Optimise each quick fix
      • Verify

SLIDE 30

Conclusions

  • AnATLyzer Testing Framework
      • Testing support for MDE developers
      • Applied to our own MDE tools
  • Testing is unfortunately neglected in the MDE setting, particularly for tool construction
  • AnATLyzer’s testing framework
      • Components to cover key scenarios:
          • Manual test cases
          • Model generation
          • Contract-based testing
          • Mutation testing
      • Easily composable
      • Powered by a model finder


SLIDE 31

Conclusions (cont.)

As researchers … We must test our tools!

SLIDE 32

Thank you!

Any questions?

http://anatlyzer.github.io http://github.com/jdelara/MDETesting

@sanchezcuadrado http://github.com/jesusc jesusc@um.es