

1. Automated Search For “Good” Coverage Criteria
   Position Paper
   Phil McMinn, University of Sheffield
   Mark Harman, University College London
   Gordon Fraser, University of Sheffield
   Gregory Kapfhammer, Allegheny College

2. Coverage Criteria: The “OK”, The Bad and The Ugly
   The “OK”
   • Divides the system up into things to test
   • Useful for generating tests when no functional model exists
   • Indicates which parts of the system are and aren’t tested

3. The Bad
   • Not based on anything to do with faults, not even:
     • Fault histories
     • Fault taxonomies
     • Common faults

4. The Ugly
   • Studies disagree as to which criteria are best
   • Is it coverage that reveals faults, or just test suite size?

5. The Key Question of this Talk
   Can we evolve “good” coverage criteria?
   That is, coverage criteria that are better correlated with fault revelation?

6. Why This Might Work
   • The best criterion might actually be a mix and match of aspects of existing criteria
   • For example, “cover the top n longest d-u paths, and then any remaining uncovered branches” (see the sketch below)
   • Or…
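To make the mix-and-match idea concrete, here is a minimal sketch of scoring a test suite against that example criterion. Everything named here is hypothetical: Suite.covers_path and Suite.covers_branch are assumed measurement helpers, and the simple additive score is our illustration, not a representation from the paper.

```python
# Hypothetical sketch: score a suite against "cover the top n longest
# d-u paths, and then any remaining uncovered branches".
def composite_score(suite, du_paths, branches, n):
    """Return the fraction of the criterion's targets the suite meets."""
    # First-class targets: the n longest definition-use paths.
    top_paths = sorted(du_paths, key=len, reverse=True)[:n]
    paths_hit = sum(1 for p in top_paths if suite.covers_path(p))      # assumed helper
    # Secondary targets: branches (the "remaining uncovered" part is
    # simplified here to counting every branch once).
    branches_hit = sum(1 for b in branches if suite.covers_branch(b))  # assumed helper
    total = len(top_paths) + len(branches)
    return (paths_hit + branches_hit) / total if total else 1.0
```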

7. Maybe this is One Big Empirical Study using SBSE
   [Figure: sliders tuning which aspects of which criteria to include, and how much: branches (less … more), complex d-u chains (less … more), basis paths (less … more)]

8. What About Including Aspects Not Incorporated into Existing Criteria?
   Non-functional aspects
   • For example, timing behaviour or memory usage
   • “Cover all branches using as much memory as possible”
   Fault histories
   • “Maximise basis path coverage in classes with the longest fault histories”

9. “Isn’t This Just Mutation Testing?”
   Our criteria are more like generalised strategies
   • Potentially more insightful into the nature of faults
   • Cheaper to apply (coverage is generally easier to obtain than a 100% mutation score)
   Perhaps different strategies will work best for different types of software, or different teams of software developers

  10. How This Might Work

11. Fault Database
   Need examples of real faults
   • Defects4J
   • CoREBench
   • … or, just use mutation

12. Fitness Function
   “Goodness” is the correlation between greater coverage and greater fault revelation
   • Needs test suites to establish it (see the sketch below)
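One plausible reading of this fitness function, sketched under assumptions: each candidate criterion scores a fixed set of test suites (criterion.score is a hypothetical API), and fitness is the Spearman rank correlation between those scores and the number of faults each suite reveals. The slide does not specify a correlation measure; Spearman is just one reasonable choice.

```python
from scipy.stats import spearmanr

def criterion_fitness(criterion, suites, faults_revealed):
    """Fitness of a candidate criterion: how well the coverage it awards
    to each suite tracks the number of faults that suite reveals."""
    coverage_scores = [criterion.score(s) for s in suites]  # assumed API
    rho, _p_value = spearmanr(coverage_scores, faults_revealed)
    return rho  # in [-1, 1]; higher means coverage tracks fault revelation
```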

13. Generation of Test Suites
   At least two possibilities
   • Generate an up-front universe of test suites (see the sketch below)
   • Generate specific test suites with the aim of achieving specific coverage levels of the criteria under evaluation (drawback: expensive)
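The first option might look like the following sketch: sample a pool of suites once, then score every candidate criterion against that same universe. The suite counts and sizes here are arbitrary placeholders, not values from the paper.

```python
import random

def build_suite_universe(test_pool, n_suites=1000, max_size=50, seed=0):
    """Sample a fixed universe of random test suites up front; every
    candidate criterion is then evaluated against this same universe."""
    rng = random.Random(seed)  # fixed seed so all criteria see identical suites
    cap = min(max_size, len(test_pool))
    return [rng.sample(test_pool, rng.randint(1, cap))
            for _ in range(n_suites)]
```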

14. Search Representation: GP Trees
   [Figure: example GP tree: an OR root whose children are (a) an AND over “over 75% basis path coverage” and “up to 50% memory usage”, and (b) “maximise branch coverage”]
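A minimal sketch of how such a tree might be encoded, assuming boolean AND/OR combinators over atomic requirements. The node types, suite attributes, and thresholds below are our illustration, not the paper's representation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Leaf:
    """Atomic requirement, e.g. 'over 75% basis path coverage'."""
    name: str
    check: Callable  # Suite -> bool, over assumed suite attributes

@dataclass
class Combine:
    """Boolean combinator (AND / OR) over sub-criteria."""
    op: str
    children: List

def satisfied(node, suite):
    """Evaluate a criterion tree against a test suite."""
    if isinstance(node, Leaf):
        return node.check(suite)
    results = (satisfied(c, suite) for c in node.children)
    return all(results) if node.op == "AND" else any(results)

# The example tree from this slide (attribute names are assumptions;
# "maximise branch coverage" is crudely modelled as a high threshold):
criterion = Combine("OR", [
    Combine("AND", [
        Leaf("over 75% basis path coverage", lambda s: s.basis_path_cov > 0.75),
        Leaf("up to 50% memory usage", lambda s: s.memory_usage <= 0.50),
    ]),
    Leaf("maximise branch coverage", lambda s: s.branch_cov >= 0.99),
])
```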

15. Handling Bloat
   GP techniques classically suffer from “bloat”
   • Consequence: generated criteria may not be very succinct
   • Various techniques could be applied to simplify the criteria, e.g. delta debugging (see the sketch below)
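In the spirit of delta debugging, a greedy one-minimal reduction could prune redundant clauses from an evolved criterion: repeatedly drop any clause whose removal does not lower fitness. This sketch is a simplification of Zeller's ddmin, and the fitness callback is an assumed hook into the evaluation from slide 12.

```python
def simplify(clauses, fitness, tol=1e-6):
    """Greedily remove clauses whose removal does not reduce fitness,
    leaving a (locally) one-minimal, more succinct criterion."""
    baseline = fitness(clauses)
    i = 0
    while i < len(clauses):
        reduced = clauses[:i] + clauses[i + 1:]
        if reduced and fitness(reduced) >= baseline - tol:
            clauses = reduced   # clause was redundant: drop it
        else:
            i += 1              # clause is needed: keep it, advance
    return clauses
```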

16. Overfitting
   The evolved criteria may not generalise beyond the systems studied and the faults seeded
   • This may not be a disadvantage: it could yield insights into
     • classes of system
     • faults made by particular developers
   • … or, apply traditional techniques from machine learning to combat overfitting

17. Summary
   Our Position: SBSE can be used to automatically evolve coverage criteria that are well correlated with fault revelation
   Over to the audience: Is it feasible that we could do this?
