
Testing with and for MDE tools, Jesús Sánchez Cuadrado - PowerPoint PPT Presentation



  1. Testing with and for MDE tools. Jesús Sánchez Cuadrado, University of Murcia. Joint work with Juan de Lara and Esther Guerra (@UAM). jesusc@um.es | @sanchezcuadrado | http://github.com/jesusc. MFoC workshop @ IMDEA, 26/11/2019.

  2. Outline
     • Context: AnATLyzer
     • Testing framework
     • Testing MDE tools

  3. Context: AnATLyzer. A static analyser for ATL model transformations.

  4. AnATLyzer features
     • Static analysis to detect more than 50 types of problems in ATL transformations
     • Powered by constraint solving
     • Support for both live and batch analysis
     • Additional features
       • Eclipse IDE integration (+ quick fixes, quick assist, explanations)
       • Visualizations
       • Source and target constraint handling
       • Programmatic API
     • Utilities built around AnATLyzer
       • Transformation footprint, constraint satisfaction for OCL
       • Analysis extensions
       • Support for UML profiles

  5. AnATLyzer IDE (annotated screenshot)
     1. Live error reporting
     2. Analysis view
     3. Additional analysis executed on demand
     4. Quick fixes
     5. Visualizations

  6. AnATLyzer IDE (screenshot)

  7. How does AnATLyzer work?
     1. Type checking: the ATL transformation and its meta-models are type-checked, producing an annotated ATL model
     2. Create the dependency graph (TDG) of the transformation
     3. Transformation analysis over the TDG reports potential errors, problems and warnings
     4. Constraint solving: for each potential error, search for a witness model. Witness model found? Yes: confirm the error. No: discard the error
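
     The four phases above can be read as a small pipeline. The following Java sketch only illustrates that flow; every type and method in it (AtlTransformation, typeCheck, findWitness, ...) is a hypothetical placeholder, not AnATLyzer's real API.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Optional;

    // Sketch of the four analysis phases described above. All types and
    // methods are hypothetical placeholders, not AnATLyzer's actual API.
    public abstract class AnalysisPipelineSketch {

        interface AtlTransformation { }          // placeholder types
        interface AnnotatedModel { }
        interface DependencyGraph { }
        interface Problem { }
        interface WitnessModel { }

        protected abstract AnnotatedModel typeCheck(AtlTransformation trafo);       // phase 1
        protected abstract DependencyGraph buildDependencyGraph(AnnotatedModel m);   // phase 2
        protected abstract List<Problem> analyse(DependencyGraph tdg);               // phase 3
        protected abstract Optional<WitnessModel> findWitness(Problem p);            // phase 4

        public List<Problem> confirmedProblems(AtlTransformation trafo) {
            AnnotatedModel annotated = typeCheck(trafo);             // 1. type checking
            DependencyGraph tdg = buildDependencyGraph(annotated);   // 2. dependency graph (TDG)
            List<Problem> potential = analyse(tdg);                  // 3. potential errors/warnings

            List<Problem> confirmed = new ArrayList<>();
            for (Problem p : potential) {                            // 4. constraint solving
                if (findWitness(p).isPresent()) {
                    confirmed.add(p);    // witness model found: error confirmed
                }                        // no witness: error discarded
            }
            return confirmed;
        }
    }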

  8. Some experimental results
     • Jesús Sánchez Cuadrado, Esther Guerra and Juan de Lara. "Static analysis of model transformations". IEEE Transactions on Software Engineering, 43(9), 2017.
       • Analysed 100 transformations
       • 4806 errors were found
       • 1207 errors were runtime errors
       • 2157 errors were violations of the target meta-model conformance
     • Jesús Sánchez Cuadrado, Esther Guerra and Juan de Lara. "AnATLyzer: An Advanced IDE for ATL model transformations". ICSE'18, 2018.
       • Replicated the experiment with larger transformations. Similar results.
       • Java2Kdm: 149 errors
       • Kdm2Uml: 60 errors
       • SysML2Modelica: 222 errors

  9. Testing framework

  10. AnATLyzer's testing framework
     • We built a testing framework organically, on demand, as new issues arose while testing AnATLyzer
     • We needed to:
       • Evaluate the precision of the static analysis
       • Evaluate the relevance of the quick fixes
       • Verify the correctness of internal transformations, in particular the optimizing transformations for OCL expressions
     • Result: AnATLyzer's testing framework

  11. Components of our testing framework (a sketch of how they fit together is given below)
     • Model generators
       • Support for model finding
       • Catalogue of mutation operators
     • Transformation executor
       • Configures and launches transformations
     • Oracles
       • Contracts
       • Model comparators
     • Test driver
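
     A minimal Java sketch of how such components could be wired together. Every name below (ModelGenerator, TransformationExecutor, Oracle, TestDriver, ...) is a hypothetical illustration, not the framework's actual API.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical component interfaces for a transformation testing framework;
    // none of these names come from AnATLyzer's code base.
    public class FrameworkComponentsSketch {

        interface Model { }                      // placeholder types
        interface Metamodel { }
        interface Transformation { }
        interface ExecutionResult { }
        record Failure(Model input, ExecutionResult result, Oracle oracle) { }

        interface ModelGenerator {               // e.g. model finding, random or mutation-based
            List<Model> generate(Metamodel inputMetamodel);
        }

        interface TransformationExecutor {       // configures and launches a transformation
            ExecutionResult execute(Transformation trafo, Model input);
        }

        interface Oracle {                       // e.g. a contract or a model comparator
            boolean accepts(Model input, ExecutionResult result);
        }

        // The test driver wires model generators, the executor and the oracles together.
        static class TestDriver {
            List<Failure> run(Transformation trafo, Metamodel inputMetamodel, ModelGenerator generator,
                              TransformationExecutor executor, List<Oracle> oracles) {
                List<Failure> failures = new ArrayList<>();
                for (Model input : generator.generate(inputMetamodel)) {
                    ExecutionResult result = executor.execute(trafo, input);
                    for (Oracle oracle : oracles) {
                        if (!oracle.accepts(input, result)) {
                            failures.add(new Failure(input, result, oracle));
                        }
                    }
                }
                return failures;
            }
        }
    }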

  12. Testing scenarios
     • Manual test cases
     • Automatic testing
     • Contract-based testing
     • Mutation testing

  13. Manual test cases
     • Input models (model_n.xmi) are fed to the transformation execution; crashes are reported directly
     • Actual outputs (output_n.xmi) are compared against expected outputs (expect_n.xmi) by model comparison
     • The result is a differences report

  14. Manual test cases

     @RunWith(Parameterized.class)
     public class TestUML2GUI extends ManualModelsTestCase {
         private static Metadata metadata = new Metadata("transformations/factories2pn_demo.atl")
             .configureInModel("IN", "FAC", "metamodels/factory.ecore")
             .configureOutModel("OUT", "PN", "metamodels/pn.ecore")
             .configureOutputFolder("outputs/manual");

         @Parameters(name = "{0}")
         public static Collection<AnATLyzerTestCase> data() {
             metadata.addTestCase("IN", "models/manual/factory-1.uml", "OUT", "models/manual/pn-1.xmi");
             metadata.addTestCase("IN", "models/manual/factory-2.uml", "OUT", "models/manual/pn-2.xmi");
             return metadata.getTestCases();
         }
         …

  15. Automatic test case generation (a sketch of such a loop is given below)
     • A model generator produces input models (model_n.xmi); the transformation is executed, checking for crashes, and the outputs (output_n.xmi) are validated against the target meta-model (conformance)
     • We look for crashes or for non-conforming output models
     • Akin to fuzzing
     • AnATLyzer is often able to detect these issues statically, but not always, e.g.:
       • Infinite recursion
       • OCL operations not supported by the model finder
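
     A minimal sketch of such a fuzzing loop, assuming EMF models. The generateRandomModel, runTransformation and report helpers are hypothetical placeholders; the conformance check uses EMF's standard Diagnostician.

    import org.eclipse.emf.common.util.Diagnostic;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.util.Diagnostician;

    // Sketch of automatic testing by fuzzing: generate inputs, execute, check
    // for crashes and for target meta-model conformance.
    public abstract class FuzzingLoopSketch {

        protected abstract EObject generateRandomModel();              // hypothetical helper
        protected abstract EObject runTransformation(EObject input);   // hypothetical helper
        protected abstract void report(String kind, EObject input);    // hypothetical helper

        public void fuzz(int iterations) {
            for (int i = 0; i < iterations; i++) {
                EObject input = generateRandomModel();
                try {
                    EObject output = runTransformation(input);
                    // Validate the output against its meta-model (target conformance)
                    Diagnostic d = Diagnostician.INSTANCE.validate(output);
                    if (d.getSeverity() == Diagnostic.ERROR) {
                        report("non-conforming output", input);
                    }
                } catch (RuntimeException e) {
                    report("crash: " + e.getMessage(), input);
                }
            }
        }
    }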

  16. Model generation
     • Random generation
       • Reused from a third-party implementation
       • Works, but probably needs to be improved
     • Meta-model coverage
       • Instantiate feasible combinations of types and features
     • Path coverage
       • For each feasible (sub-)path of the transformation, generate a model that would reach this point
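
     For illustration only, a naive random instantiation over an Ecore meta-model could look as follows (this is not the third-party generator mentioned above). It only fills primitive attributes; a realistic generator also has to build containment trees, set references and respect meta-model invariants.

    import java.util.List;
    import java.util.Random;
    import java.util.stream.Collectors;

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EClassifier;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcorePackage;
    import org.eclipse.emf.ecore.util.EcoreUtil;

    // Naive random instantiation: pick a concrete class at random and fill its
    // string/int/boolean attributes with random values.
    public class RandomModelSketch {

        private final Random random = new Random();

        public EObject createRandomInstance(EPackage pkg) {
            List<EClass> concreteClasses = pkg.getEClassifiers().stream()
                    .filter(c -> c instanceof EClass)
                    .map(c -> (EClass) c)
                    .filter(c -> !c.isAbstract() && !c.isInterface())
                    .collect(Collectors.toList());
            EClass chosen = concreteClasses.get(random.nextInt(concreteClasses.size()));
            EObject instance = EcoreUtil.create(chosen);

            for (EAttribute attr : chosen.getEAllAttributes()) {
                if (!attr.isChangeable() || attr.isMany()) {
                    continue;                               // keep the sketch simple
                }
                EClassifier type = attr.getEType();
                if (type == EcorePackage.Literals.ESTRING) {
                    instance.eSet(attr, "v" + random.nextInt(100));
                } else if (type == EcorePackage.Literals.EINT) {
                    instance.eSet(attr, random.nextInt(100));
                } else if (type == EcorePackage.Literals.EBOOLEAN) {
                    instance.eSet(attr, random.nextBoolean());
                }
            }
            return instance;
        }
    }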

  17. Contract-based testing
     • As before, a model generator produces input models (model_n.xmi), the transformation is executed (checking for crashes), and output models (output_n.xmi) are produced
     • In addition, a contract validator checks contracts relating input and output, e.g.:
         FAC!Factory.allInstances()->size() = PN!PetriNet.allInstances()->size()
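
     For illustration, such a contract could also be checked programmatically over the EMF resources involved. The sketch below simply counts instances by class name, which only approximates what an OCL contract evaluator would do (it ignores subtyping, for instance).

    import org.eclipse.emf.common.util.TreeIterator;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;

    // Sketch: check that as many PetriNet elements are produced as there are
    // Factory elements in the input, by counting instances by class name.
    public class ContractCheckSketch {

        public boolean factoryCountMatchesPetriNetCount(Resource input, Resource output) {
            return countInstances(input, "Factory") == countInstances(output, "PetriNet");
        }

        private long countInstances(Resource resource, String className) {
            long count = 0;
            for (TreeIterator<EObject> it = resource.getAllContents(); it.hasNext(); ) {
                if (it.next().eClass().getName().equals(className)) {
                    count++;
                }
            }
            return count;
        }
    }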

  18. Mutation testing (a minimal driver for this loop is sketched below)
     • How do we know if our test suite is good enough?
     • Apply mutation operators to the program under test to obtain mutant programs
     • Execute the test cases against each mutant
     • Does the test suite pass on the mutant?
       • No: the test suite is detecting the potential bug (killed mutant)
       • Yes: the test suite is not strong enough to detect the bug (live mutant)
     • The mutation score relates killed mutants to live mutants
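
     A minimal sketch of the mutation-testing loop. Mutant and the runTestSuite helper are hypothetical placeholders standing for whatever the framework actually uses.

    import java.util.List;

    // Sketch of computing a mutation score: a mutant is killed if at least one
    // test fails on it.
    public abstract class MutationScoreSketch {

        public interface Mutant { }           // placeholder type for a mutated transformation

        /** Runs the test suite on a mutant; returns true if every test passes. */
        protected abstract boolean runTestSuite(Mutant mutant);

        public double mutationScore(List<Mutant> mutants) {
            long killed = 0;
            for (Mutant mutant : mutants) {
                if (!runTestSuite(mutant)) {
                    killed++;                 // some test failed: the mutant is killed
                }                             // otherwise it stays alive
            }
            return (double) killed / mutants.size();
        }
    }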

  19. Some mutation operators
     • Implemented 55 operators
     • Initial experiment to assess them; no big conclusions yet

  20. How do we test our tools?

  21. Testing a static analyser
     • If AnATLyzer reports an error:
       • Is the error going to happen at runtime?
       • If not, it is a false positive
     • If AnATLyzer doesn't report an error:
       • Does there exist a model that will exercise the error?
       • If so, it is a false negative
     • Comparing AnATLyzer's verdict with the test suite's verdict:
       • AnATLyzer: ok,    testing: ok    -> true negative
       • AnATLyzer: ok,    testing: error -> false negative
       • AnATLyzer: error, testing: ok    -> false positive
       • AnATLyzer: error, testing: error -> true positive
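
     The comparison of the two verdicts can be captured in a few lines. The sketch below only illustrates the classification logic; it is not code from AnATLyzer's testing infrastructure.

    // Classify one (static analysis verdict, test suite verdict) pair.
    // "analysisReportsError" comes from AnATLyzer, "testsExposeError" from running the tests.
    public class VerdictClassifier {

        public enum Outcome { TRUE_POSITIVE, TRUE_NEGATIVE, FALSE_POSITIVE, FALSE_NEGATIVE }

        public Outcome classify(boolean analysisReportsError, boolean testsExposeError) {
            if (analysisReportsError && testsExposeError)  return Outcome.TRUE_POSITIVE;
            if (analysisReportsError && !testsExposeError) return Outcome.FALSE_POSITIVE;
            if (!analysisReportsError && testsExposeError) return Outcome.FALSE_NEGATIVE;
            return Outcome.TRUE_NEGATIVE;
        }
    }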

  22. Testing a static analyser
     1. Take a transformation manually checked to be error free
        • Generate a large test suite for it (meta-model coverage)
     2. Apply mutations to the transformation
        • Classic mutations + mutations aimed at breaking the typing
     3. For each mutant, run
        • AnATLyzer
        • The test suite
     4. Compare the results (as in the table on the previous slide)

  23. Testing a static analyser: results
     • Good precision and recall
     • Use of constraint solving to confirm or discard problems
     • Typical causes of discrepancies
       • Infinite recursion
       • Dead code
       • AnATLyzer limitations or bugs!

     #Mutants           483
     True positives     337 (69.77%)
     True negatives     125 (25.88%)
     False positives     15 (3.11%)
     False negatives      6 (1.24%)
     Precision          0.96
     Recall             0.98
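
     As a sanity check, the precision and recall figures follow directly from the counts in the table (TP = 337, FP = 15, FN = 6):

     \mathrm{Precision} = \frac{TP}{TP + FP} = \frac{337}{337 + 15} \approx 0.96,
     \qquad
     \mathrm{Recall} = \frac{TP}{TP + FN} = \frac{337}{337 + 6} \approx 0.98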

  24. Testing quick fixes
     • A quick fix should fix an error
       • But it may introduce other errors (which ones?)
     • An error should be fixable by at least one quick fix
     • How do we know if our implementation satisfies these properties?

  25. Testing quick fixes
     1. Take synthetic, clean transformations
     2. Apply mutation operators to obtain transformation mutants
     3. For each mutant, run AnATLyzer to collect its errors
     4. For each error, try to apply a quick fix
     5. Re-run AnATLyzer on the fixed transformation
     6. Compare the results
        • Is the original error fixed?
        • Have we introduced new errors?
     * Comparing the results is quite tricky …
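
     A hypothetical sketch of steps 3 to 6 as a loop. The Trafo, Problem and QuickFix types and all helpers are invented placeholders, and the set-difference comparison at the end is deliberately naive (the slide itself notes that comparing the results is tricky).

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch of the quick-fix testing loop (steps 3 to 6); not AnATLyzer's API.
    public abstract class QuickFixTestingSketch {

        interface Trafo { }                      // placeholder types
        interface Problem { }
        interface QuickFix { }

        protected abstract Set<Problem> runAnalysis(Trafo t);            // steps 3 and 5
        protected abstract List<QuickFix> quickFixesFor(Problem p);      // step 4
        protected abstract Trafo applyFix(Trafo t, QuickFix fix);
        protected abstract void report(Problem p, QuickFix fix, boolean fixed, Set<Problem> introduced);

        public void check(Trafo mutant) {
            Set<Problem> before = runAnalysis(mutant);
            for (Problem error : before) {
                for (QuickFix fix : quickFixesFor(error)) {
                    Trafo fixed = applyFix(mutant, fix);
                    Set<Problem> after = runAnalysis(fixed);

                    boolean originalFixed = !after.contains(error);  // 6a: original error gone?

                    Set<Problem> introduced = new HashSet<>(after);  // 6b: naive set difference to
                    introduced.removeAll(before);                    //     spot newly introduced errors

                    report(error, fix, originalFixed, introduced);
                }
            }
        }
    }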

  26. Testing an optimiser
     • Quick fixes tend to generate large and often unreadable expressions
     • OCL optimiser: reduce the size of the generated expressions by applying simplifications, e.g.:
         f.elements->select(e | e.oclIsKindOf(FAC!Generator) or
                                e.oclIsKindOf(FAC!Assembler) or
                                e.oclIsKindOf(FAC!Terminator))
       is simplified to
         f.elements->select(e | e.oclIsKindOf(FAC!Machine))
     • How do we know that the implementation keeps the same semantics?
       • Compiler verification? Too expensive
       • Translation validation! A runtime verification approach (sketched below)
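
     A minimal sketch of translation validation at runtime: evaluate the original and the optimised expression on the same test models and compare the results. The OclExpression type and its evaluate method are hypothetical placeholders for whatever OCL evaluator is actually used.

    import java.util.List;
    import java.util.Objects;

    import org.eclipse.emf.ecore.EObject;

    // Sketch of translation validation: the optimised expression must compute the
    // same result as the original one on every available test model.
    public class TranslationValidationSketch {

        public interface OclExpression {               // hypothetical placeholder
            Object evaluate(EObject contextModel);
        }

        public boolean sameSemanticsOn(OclExpression original, OclExpression optimised,
                                       List<EObject> testModels) {
            for (EObject model : testModels) {
                Object expected = original.evaluate(model);
                Object actual = optimised.evaluate(model);
                if (!Objects.equals(expected, actual)) {
                    return false;   // counterexample found: the optimisation changed the semantics
                }
            }
            return true;            // no differences observed on the given models
        }
    }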
