Automated Test Case Generation

  1. Automated Test Case Generation
     or: How to not write test cases
     Stefan Klikovits, EN-ICE-SCD, Université de Genève
     28th September 2015

  2. Reminder: Testing is not easy
     (image credit: HBO)

  3. Overview: Automated Testing
     Automated...
     ◮ test execution
     ◮ setup
     ◮ program execution
     ◮ capture results
     ◮ result checking
     ◮ reporting

  4. Overview: Automated Testing
     Automated...
     ◮ test input generation
     ◮ test selection
     ◮ test execution
     ◮ results generation (the oracle problem)
     ◮ result checking
     ◮ reporting
     (a minimal sketch of automated execution and checking follows below)
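
To make the "automated" bullets concrete, here is a minimal sketch of automated setup, execution, result checking, and reporting using JUnit 5 (an assumed dependency; Calculator is a hypothetical toy SUT):

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class Calculator {                          // toy SUT standing in for the real program
        int add(int a, int b) { return a + b; }
    }

    class CalculatorTest {
        private Calculator calc;

        @BeforeEach
        void setUp() {                          // automated setup
            calc = new Calculator();
        }

        @Test
        void additionIsCommutative() {          // automated execution + result checking
            assertEquals(calc.add(2, 3), calc.add(3, 2));
        }                                       // reporting is handled by the JUnit runner
    }

Note that the test inputs (2 and 3) are still hand-picked; the techniques on the following slides automate exactly that part.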

  5. Random Testing

  6. Random Testing
     ◮ input domains form regions [8]
     ◮ an input represents the region around it
     ◮ maximum coverage through maximum diversity [2]
     ◮ but: random input is... well, random! (see the sketch below)
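
A minimal random-testing sketch, assuming a toy SUT with a planted bug and a partial oracle (an absolute value is never negative); whether the bug is hit at all depends entirely on the random draws:

    import java.util.Random;

    public class RandomTesting {
        // hypothetical toy SUT with a planted bug: wrong result for x == 13
        static int abs(int x) { return x == 13 ? -13 : Math.abs(x); }

        public static void main(String[] args) {
            Random rnd = new Random();
            int failures = 0;
            for (int i = 0; i < 1000; i++) {
                int input = rnd.nextInt(201) - 100;   // uniform over [-100, 100]
                if (abs(input) < 0) {                 // partial oracle: abs(x) >= 0
                    System.out.println("failure for input " + input);
                    failures++;
                }
            }
            System.out.println(failures + " failing inputs out of 1000 runs");
        }
    }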

  7. Adaptive Random Testing
     NON-random random testing (?!)
     (image credit: https://sbloom2.wordpress.com/category/evaluations/)

  8. Adaptive Random Testing
     NON-random random testing (?!)
     ◮ evaluate previous TCs before generating a new one
     ◮ choose a candidate that is as different as possible
     ◮ various strategies [1] (one is sketched below)
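
One concrete strategy is fixed-size-candidate-set ART, among those surveyed in [1]. The sketch below assumes a one-dimensional numeric input domain [0, 100] and a candidate set of size 10: each round draws random candidates and executes the one farthest from every previously executed test case.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class AdaptiveRandomTesting {
        public static void main(String[] args) {
            Random rnd = new Random();
            List<Double> executed = new ArrayList<>();
            executed.add(rnd.nextDouble() * 100);       // the first TC is purely random

            for (int tc = 1; tc < 20; tc++) {
                double best = 0, bestDist = -1;
                for (int c = 0; c < 10; c++) {          // fixed-size candidate set
                    double cand = rnd.nextDouble() * 100;
                    double dist = Double.MAX_VALUE;     // distance to the nearest executed TC
                    for (double e : executed)
                        dist = Math.min(dist, Math.abs(cand - e));
                    if (dist > bestDist) { bestDist = dist; best = cand; }
                }
                executed.add(best);                     // run the most "different" candidate
            }
            System.out.println(executed);               // TCs spread out across the domain
        }
    }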

  9. ART strategies
     [Figure: four plots over the input domains dom(x) and dom(y), contrasting ART strategies by how executed test cases (•) and candidates (◦) are spread across the domain]

  10. Criticism
      ◮ non-determinism
      ◮ input data problems (e.g. ordering in discrete domains)
      ◮ computationally expensive (time, memory)
      ◮ unrealistic evaluation scenarios [2]: assumed defect rates are too high, and no actual SUT is used

  11. Combinatorial Testing

  12. Combinatorial Testing
      ◮ idea: test all possible input (combinations)
      ◮ large number of TCs
      ◮ (slight) improvement: equivalence classes!
        (5 < uint a < 10) ⇒ { [0..5], [6..9], [10..maxInt] }
      ◮ still large TC sets:
        5 parameters, 3 ECs each ⇒ 3^5 = 243 TCs
      ◮ plus boundary values, exceptions, etc. (see the sketch below)
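
A sketch of deriving test inputs from the slide's three equivalence classes plus their boundary values; the upper domain bound MAX is an assumption (any concrete maxInt works the same way):

    public class EquivalenceClasses {
        public static void main(String[] args) {
            int MAX = 255;                              // assumed stand-in for maxInt
            // one representative per class [0..5], [6..9], [10..MAX]
            int[] representatives = {3, 7, 100};
            // boundary values at the edges of each class
            int[] boundaries = {0, 5, 6, 9, 10, MAX};
            for (int a : representatives)
                System.out.printf("representative %3d -> valid = %b%n", a, 5 < a && a < 10);
            for (int a : boundaries)
                System.out.printf("boundary       %3d -> valid = %b%n", a, 5 < a && a < 10);
        }
    }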

  13. Orthogonal/Covering Arrays
      Orthogonal arrays (OA)
      ◮ test each pair/triple/... of parameters
      ◮ restriction: every τ-tuple has to be tested equally often
      Covering arrays (CA)
      ◮ ...every τ-tuple has to appear at least once
      ◮ their size grows only logarithmically in the number of parameters [3]

  14. Working principle
      Scenario:
      ◮ 3 parameters (OS, Browser, Printer)
      ◮ 2 values each ({W, L}, {FF, CH}, {A, B})

      OS  Browser  Printer
      W   FF       A
      W   FF       B
      W   CH       A
      W   CH       B
      L   FF       A
      L   FF       B
      L   CH       A
      L   CH       B
      Table: Exhaustive testing (all 2^3 = 8 combinations)

  15. Working principle
      Scenario:
      ◮ 3 parameters (OS, Browser, Printer)
      ◮ 2 values each ({W, L}, {FF, CH}, {A, B})

      OS  Browser  Printer
      W   FF       A
      W   CH       B
      L   FF       B
      L   CH       A
      Table: Pairwise testing (one possible covering array: 4 TCs hit every value pair; verified in the sketch below)
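
A sketch verifying the covering-array property (τ = 2) for the four tests above; the 0/1 encoding of the parameter values is an assumption of the sketch:

    public class PairwiseCheck {
        public static void main(String[] args) {
            // OS: W=0, L=1; Browser: FF=0, CH=1; Printer: A=0, B=1
            int[][] tests = { {0,0,0}, {0,1,1}, {1,0,1}, {1,1,0} };
            boolean complete = true;
            for (int p = 0; p < 3; p++)                 // first parameter of the pair
                for (int q = p + 1; q < 3; q++)         // second parameter of the pair
                    for (int v = 0; v < 2; v++)
                        for (int w = 0; w < 2; w++) {
                            boolean covered = false;
                            for (int[] t : tests)
                                covered |= (t[p] == v && t[q] == w);
                            if (!covered) {
                                System.out.printf("missing pair: p%d=%d, p%d=%d%n", p, v, q, w);
                                complete = false;
                            }
                        }
            System.out.println(complete ? "all value pairs covered by 4 TCs" : "incomplete");
        }
    }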

  16. Criticism
      ◮ computationally expensive (finding minimal covering arrays is NP-hard [3])
      ◮ test case prioritisation needed
      Industry measurements:
      ◮ pairwise tests trigger ≈70 % of faults, three-way tests ≈90 % [6]
      ◮ 97 % of the faults in studied medical device software involved at most two parameters, i.e. pairwise tests would have caught them [5]

  17. Examples
      Scenario 2:
      ◮ 4 parameters, 3 values each
      ◮ exhaustive tests: 3^4 = 81
      ◮ TCs to cover all pairs: 9

  18. Examples
      Scenario 3:
      ◮ 10 parameters, 4 values each
      ◮ exhaustive tests: 4^10 = 1,048,576
      ◮ TCs to cover all pairs: 29

  19. Symbolic Execution

  20. Symbolic Execution
      ◮ build the execution tree
      ◮ use symbols as input
      ◮ accumulate path constraints (PCs)
      ◮ use constraint solvers to turn PCs into concrete inputs

  21. Example: execution tree and path constraints [7]

      int x, y;
      1: if (x > y) {
      2:     if (y - x > 0)
      3:         assert (false);
         }

      Execution tree, with symbolic inputs x: A, y: B (root PC: true):
      ◮ line 1 false: PC: A ≤ B
      ◮ line 1 true: PC: A > B
        ◮ line 2 false: PC: A > B ∧ B − A ≤ 0
        ◮ line 2 true: PC: A > B ∧ B − A > 0 (contradictory, so line 3 is unreachable)
      (a toy sketch of solving these PCs into test inputs follows below)
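
A toy illustration of the last step: turning each path constraint into a concrete test input. The brute-force "solver" below is a stand-in for a real constraint/SMT solver and only works on this tiny domain:

    public class PathConstraints {
        interface PC { boolean holds(int a, int b); }   // a path constraint over symbols A, B

        // toy "solver": brute-force a small domain; real tools delegate to an SMT solver
        static void solve(String name, PC pc) {
            for (int a = -2; a <= 2; a++)
                for (int b = -2; b <= 2; b++)
                    if (pc.holds(a, b)) {
                        System.out.printf("%-24s -> test input x=%d, y=%d%n", name, a, b);
                        return;
                    }
            System.out.println(name + " -> unsatisfiable: infeasible path, no TC needed");
        }

        public static void main(String[] args) {
            solve("A <= B",              (a, b) -> a <= b);
            solve("A > B && B - A <= 0", (a, b) -> a > b && b - a <= 0);
            solve("A > B && B - A > 0",  (a, b) -> a > b && b - a > 0);  // the assert's path
        }
    }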

  22. Difficult constraints? Concolic execution!
      Idea:
      ◮ use symbolic values as long as possible
      ◮ switch to real (concrete) values when necessary
      [Figure: example of concolic execution [4]]

  23. Real-life application: whitebox fuzzing [4]
      1. start with well-formed inputs
      2. record all the individual constraints along the execution path
      3. negate the constraints one by one, solve them with a constraint solver, and execute the new paths
      Properties:
      ◮ highly scalable
      ◮ focus on security vulnerabilities (buffer overflows)
      ◮ no need for a test oracle (check for system failures & vulnerabilities)
      Found one third of all bugs discovered in Windows 7!
      (the search loop is sketched below)
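
A drastically simplified sketch of that search loop (not the real tool): the toy SUT "crashes" on the input "bad", its path constraints are plain byte comparisons, and "solving" a negated constraint just flips the corresponding character. Real whitebox fuzzers record the constraints symbolically and hand them to a constraint solver.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class WhiteboxFuzzing {
        static final char[] SECRET = {'b', 'a', 'd'};

        // toy SUT: the path compares each input byte; all matches = "crash"
        static boolean crashes(char[] in) {
            for (int i = 0; i < SECRET.length; i++)
                if (in[i] != SECRET[i]) return false;
            return true;
        }

        public static void main(String[] args) {
            Deque<char[]> worklist = new ArrayDeque<>();
            worklist.add(new char[] {'g', 'o', 'o'});   // well-formed seed input
            while (!worklist.isEmpty()) {
                char[] input = worklist.poll();
                if (crashes(input)) {
                    System.out.println("crash found for input: " + new String(input));
                    return;
                }
                // negate each unsatisfied branch condition of this input's path,
                // "solve" it by flipping the compared character, and queue the result
                for (int i = 0; i < SECRET.length; i++) {
                    if (input[i] != SECRET[i]) {
                        char[] next = input.clone();
                        next[i] = SECRET[i];
                        worklist.add(next);
                    }
                }
            }
        }
    }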

  24. Model-based TC Generation
      (image credit: http://formalmethods.wikia.com/wiki/Centre_for_Applied_Formal_Methods)
      ◮ automatic/manual model generation
      ◮ three approaches: Axiomatic | FSM | LTS

  25. Model-based Test Case Generation
      TC selection:
      ◮ offline/online test selection
      Modeling notations (textual & graphical):
      ◮ scenario-, state-, and process-oriented
      (an FSM-based sketch follows below)
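
A sketch of the FSM flavour of model-based generation, assuming a toy two-state login model: input sequences are explored breadth-first until every transition of the model is covered, and each sequence that adds coverage becomes a test case.

    import java.util.*;

    public class FsmTestGeneration {
        // toy model: state 0 = logged out, state 1 = logged in
        // FSM[state][input] = successor state; inputs: 0 = login, 1 = logout
        static final int[][] FSM = { {1, 0}, {1, 0} };

        public static void main(String[] args) {
            Set<String> uncovered = new HashSet<>(
                List.of("0-0", "0-1", "1-0", "1-1"));   // all state-input pairs
            Deque<List<Integer>> queue = new ArrayDeque<>();
            queue.add(List.of());                       // start with the empty sequence
            List<List<Integer>> suite = new ArrayList<>();

            while (!uncovered.isEmpty()) {
                List<Integer> seq = queue.poll();
                int state = 0;                          // replay the sequence on the model
                boolean coversNew = false;
                for (int input : seq) {
                    coversNew |= uncovered.remove(state + "-" + input);
                    state = FSM[state][input];
                }
                if (coversNew) suite.add(seq);          // keep sequences that add coverage
                for (int input = 0; input <= 1; input++) {
                    List<Integer> next = new ArrayList<>(seq);
                    next.add(input);
                    queue.add(next);                    // extend breadth-first
                }
            }
            System.out.println("generated test sequences: " + suite);
        }
    }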

  26. Criticism
      ◮ state space explosion
      ◮ complex model generation
      ◮ defining a "good" model is non-trivial
      ◮ requires modeling expertise

  27. Summary
      ◮ (Adaptive) Random Testing (BB): cheap generation; non-deterministic; hit & miss
      ◮ Combinatorial Testing (BB): expensive generation; many TCs
      ◮ Symbolic/Concolic Execution (WB): problematic constraints; path explosion
      ◮ Model-based (WB): not "just" coding; needs a "good" (complex) model; state space explosion


  31. References
      [1] Saswat Anand, Edmund K. Burke, Tsong Yueh Chen, John Clark, Myra B. Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold, and Phil McMinn. An orchestrated survey of methodologies for automated software test case generation. Journal of Systems and Software, 86(8):1978–2001, August 2013.
      [2] Andrea Arcuri and Lionel C. Briand. Adaptive random testing: an illusion of effectiveness? In ISSTA, pages 265–275, 2011.
      [3] Charles J. Colbourn. Combinatorial aspects of covering arrays. Le Matematiche (Catania), 58, 2004.

  32. References (cont.)
      [4] Patrice Godefroid. Test generation using symbolic execution. In Deepak D'Souza, Telikepalli Kavitha, and Jaikumar Radhakrishnan, editors, FSTTCS, volume 18 of LIPIcs, pages 24–33. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2012.
      [5] D. R. Kuhn, D. R. Wallace, and A. M. Gallo, Jr. Software fault interactions and implications for software testing. IEEE Transactions on Software Engineering, 30(6):418–421, June 2004.
      [6] D. Richard Kuhn and Michael J. Reilly. An investigation of the applicability of design of experiments to software testing. In Proceedings of the 27th Annual NASA Goddard Software Engineering Workshop (SEW-27'02), pages 91–, Washington, DC, USA, 2002. IEEE Computer Society.

  33. References (cont.)
      [7] Corina S. Pasareanu and Willem Visser. A survey of new trends in symbolic execution for software testing and analysis. STTT, 11(4):339–353, 2009.
      [8] L. J. White and E. I. Cohen. A domain strategy for computer program testing. IEEE Transactions on Software Engineering, 6(3):247–257, 1980.
