


  1. PRESENTATION M2, Wednesday, March 27, 2002, 11:30 AM. MEASURING THE EFFECTIVENESS OF AUTOMATED FUNCTIONAL TESTING. Ross Collard, Collard & Company. International Conference On Software Test Automation, March 25-28, 2002, San Jose, CA, USA

  2. Ross Collard Ross Collard is president of Collard & Company, a consulting firm located in Manhattan. His experience includes numerous software testing and QA projects, strategic planning for technology, and managing large software projects. His consulting and training clients have included: ADP, American Express, Anheuser-Busch, AT&T, Banamex, Bank of America, Baxter Healthcare, Bechtel, Blue Cross/Blue Shield, Boeing, British Airways, the CIA, Ciba Geigy, Cisco, Citibank, Computer Associates, Dayton Hudson, DEC, Dell, EDS, Exxon, General Electric, Goldman Sachs, GTE, the Federal Reserve Bank, Ford, Fujitsu, Hewlett-Packard, Hughes Aircraft, Intel, Johnson & Johnson, JP Morgan, Lucent, McGraw Hill, MCI, Merck, Microsoft, Motorola, NASA, Nortel, Novell, Oracle, Procter & Gamble, Prudential, IBM, Swiss Bank and the U.S. Air Force. Mr. Collard has conducted seminars on business and information technology topics for businesses, governments and universities, including George Washington, Harvard and New York Universities, MIT and U.C. Berkeley. He has lectured in the U.S.A., Europe, the Middle East, the Far East, South America and the South Pacific. He has a BE in Electrical Engineering from the University of New Zealand (where he grew up), an MS in Computer Science from the California Institute of Technology and an MBA from Stanford University. He can be reached at rcollar@attglobal.net.

  3. MEASURING THE EFFECTIVENESS OF AUTOMATED FUNCTIONAL TESTING Ross Collard, Collard & Company. With thanks to James Bach, Elfriede Dustin, Dot Graham, Sam Guckenheimer, and Bret Pettichord. Overview – The Issues We Will Discuss: o What problem are we trying to solve, and what questions do we want to ask about our test automation? o What information do we need to develop credible answers? o Where do we get the information, and how trustworthy and precise does it need to be? o How do we derive conclusions from the information? o What problems are we likely to encounter, and how do we handle them?

  4. What Problem are We Trying to Solve? o Justify past expenditures on test automation (a post-mortem). o Lobby for future investments (in tools, equipment, staff, training, etc.). o Assess the current status of our automation. -Before-and-after comparisons within our organization. -Benchmarks to industry peers. o Set direction. (Where do we go from here?) -Change focus. -Retrench / abandon. -Champion and encourage further automation.

  5. A Caution Although test automation effectiveness has been studied relatively little, information technology (IT) investment effectiveness has been widely measured. Large-scale studies from MIT, Morgan Stanley and others have concluded that the level of IT investment is NOT a predictor of company profitability or growth. Why? The issue is HOW organizations invest. Low-performing companies use IT to automate existing manual business processes, while the high performers make IT a catalyst to change the business processes to improve customer value. This is technology adoption at work: by the time there is sufficient trustworthy data to form valid conclusions, the questions are moot.

  6. Reasons NOT to Measure o Data can be difficult or impossible to acquire. o The effort often is time consuming, distracting from other activities. o Assumptions are needed about unknowns. o The data analysis can take some weird turns – can the conclusions logically be supported by the data? o The results can lack credibility and be subject to criticism. o The results could be embarrassing.

  7. Reasons that We Must Measure o You can’t manage what you can’t measure. o Without findings and conclusions, you’re just another opinion. o In the hustle and bustle of test activities, it is hard to know the situation without standing back and reflecting. o It is unprofessional not to objectively evaluate and report back to the investors in test automation (managers, clients). o An honest effort to evaluate our effectiveness will win kudos and respect.

  8. Characteristics which Influence the Effectiveness of Test Automation o Many organizations have unrealistic goals for test automation, sometimes aided by vendor snake oil. o Many organizations have vague goals for automation. o Many organizations fail with automation, though it all depends on how we define success and failure. For example, some organizations abandon using the tools. o The time required to develop and maintain automated test cases often goes way up.

  9. Common Characteristics (continued) o The time to execute automated test cases goes way down. o The elapsed time for testing decreases, sometimes dramatically. o The number of problems found increases, sometimes dramatically. o The tool acquisition or development cost usually is only 5% to 15% of TCO (total cost of ownership, including training, internal and external support, tool upgrades, etc.).
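The 5% to 15% share quoted above can be turned around to back-figure a rough total cost of ownership from the tool price alone. A minimal sketch, assuming only the percentage band from the slide; the license figure and function name are hypothetical:

```python
# Back-figure a rough TCO range from the tool price, using the
# assumption that the tool is 5%-15% of total cost of ownership.

def implied_tco_range(tool_cost, low_share=0.05, high_share=0.15):
    """Return (min_tco, max_tco) implied by the tool's share of TCO."""
    return tool_cost / high_share, tool_cost / low_share

lo, hi = implied_tco_range(30_000)  # hypothetical $30k tool license
print(f"TCO roughly ${lo:,.0f} to ${hi:,.0f}")
# -> TCO roughly $200,000 to $600,000
```

The point of the slide survives the arithmetic: the visible license fee is the small end of the bill, and budgeting only for it understates the true cost by a factor of roughly seven to twenty.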

  10. Common Characteristics (continued) o Many of the costs are overhead and hidden unless the organization has finely tuned cost accounting systems, in areas like training, test library maintenance, and centralized support for test automation on decentralized projects. o The savings from test automation generally do not come from tester headcount reduction. o Testers’ morale can increase – there’s less scut work – or decrease, because they don’t like automation or the tools. o Testers’ skills often need extensive upgrading, especially if the tools or test environment are quirky.

  11. Common Characteristics (continued) o Re-use of test cases for regression testing often rises significantly with automation, and re-test time is cut significantly too. o There can be long learning curves and long lead times before the pay-off from test automation is realized, as with the building of any other kind of infrastructure. Don’t measure too early. (Though interim progress reports are important to keep the faithful believing.)

  12. Testers’ (and Managers’) Questions about Test Automation (1) To Evaluate Effectiveness: o Can we show justification for the investments we’ve made? -What were our original goals for test automation? -Do the benefits realized match expectations? -How much have we spent on automation? -How much would we have spent on comparable testing without automation: what’s the equivalent manual test effort (Dot Graham’s EMTE)? -Is test automation helping or hurting?
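The EMTE question above asks what the same test executions would have cost if performed by hand. A minimal sketch of that comparison, assuming the simplest reading of the metric (automated runs times the manual hours each run stands in for); the function names and all figures are hypothetical:

```python
# Hedged sketch of equivalent manual test effort (EMTE): the manual
# effort that the automated executions stand in for.

def emte_hours(automated_runs, manual_hours_per_run):
    """Hours the same test executions would have taken manually."""
    return automated_runs * manual_hours_per_run

def benefit_ratio(emte, automation_hours):
    """Equivalent manual hours bought per hour spent on automation."""
    return emte / automation_hours

# Hypothetical: a 6-hour manual regression suite executed 20 times
# under automation, against 40 hours spent building and maintaining it.
e = emte_hours(20, 6)
print(e, benefit_ratio(e, 40))  # -> 120 3.0
```

A ratio above 1.0 means the automation delivered more testing than the same hours spent testing manually would have; below 1.0, the manual equivalent would have been cheaper.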

  13. Testers’ and Managers’ Questions (continued) o How effective is our automated testing in finding problems? -How many problems are being found in testing? --Manual vs. automated. --By type of defect. --By level of severity. -What are the levels of a) irreproducible and b) false test results? --Manual vs. automated. -What costs have been avoided by the defects found not causing failures? -What types of problems are being MISSED with automated testing?

  14. Testers’ and Managers’ Questions (continued) o How reliable are the test results? -What is the test coverage? (Even if more defects are not being found, higher coverage is an indicator of higher confidence in the test results.) --Manual vs. automated. --As measured by coverage tools. --As assessed subjectively (if coverage tools are not available). -Have testing practices become more organized and consistent with automation (fewer vagaries)?

  15. Testers’ and Managers’ Questions (continued) o How effective is our automated testing in encouraging test case and test facility re-use? -What percentage of test cases are re-run in regression testing? --Manual vs. automated. --Our experience vs. industry norms. -What percentage of test cases are re-used across multiple test projects? --Manual vs. automated.

  16. Testers’ and Managers’ Questions (continued) o How effective is our automated testing in speeding delivery? -How quickly is the testing completed? --Manual vs. automated. -What impact does automated testing have on delivery time (e.g., by reducing re-work)? -How do we compare with industry norms? o What is the impact on user satisfaction? -Periodic user satisfaction surveys. --By customer segment or user group. --By system or version of a system. --Areas tested by manual vs. automated means.

  17. Testers’ and Managers’ Questions (continued) o What are the costs of testing? -How much elapsed time and how many tester hours does each suite of test cases require (for automated test cases vs. the manual equivalents)? --Test case development. --Test case maintenance. --Test execution. --Results evaluation and follow-up. -What is the cost of the equipment tied up in testing? --Test case development and maintenance. --Test execution. -How many times does a test case need to run to break even (earn back its development and maintenance costs)?
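The break-even question at the end of the slide reduces to simple arithmetic: development cost divided by the per-run saving over manual execution. A minimal sketch, assuming that simple model; all cost figures and the function name are hypothetical:

```python
import math

# Hedged sketch of a break-even calculation for an automated test case.
# Model: automation pays back when cumulative per-run savings over
# manual execution cover the one-time development cost.

def breakeven_runs(dev_cost, maint_cost_per_run,
                   manual_cost_per_run, auto_cost_per_run):
    """Runs needed before automation earns back its development cost,
    or None if automation never pays back at these costs."""
    saving_per_run = manual_cost_per_run - (auto_cost_per_run + maint_cost_per_run)
    if saving_per_run <= 0:
        return None
    # Round up: a partial run cannot recoup a partial cost.
    return math.ceil(dev_cost / saving_per_run)

# Hypothetical figures: $400 to script, $5 upkeep per run,
# $50 per manual run, $2 machine time per automated run.
print(breakeven_runs(400, 5, 50, 2))  # -> 10
```

The model also makes the earlier slides' warning concrete: if per-run maintenance eats the whole manual saving, the function returns None, and no number of runs justifies the script.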
