
Software Testing - Fernando Brito e Abreu (fba@di.fct.unl.pt)



1. Software Testing
Fernando Brito e Abreu (fba@di.fct.unl.pt)
Universidade Nova de Lisboa (http://www.unl.pt)
QUASAR Research Group (http://ctp.di.fct.unl.pt/QUASAR)

SWEBOK: the 10 Knowledge Areas
 Software Requirements
 Software Design
 Software Construction
 Software Testing
 Software Maintenance
 Software Configuration Management
 Software Engineering Management
 Software Engineering Process
 Software Engineering Tools and Methods
 Software Quality

Software Engineering / Fernando Brito e Abreu, 18-Nov-08

2. Motivation - The Bad News ...
 Software bugs cost the U.S. economy an estimated $59.5 billion annually, or about 0.6% of the gross domestic product.
 Software users shoulder more than half of the costs.
 Software developers and vendors bear the remainder of the costs.
Source: The Economic Impacts of Inadequate Infrastructure for Software Testing, Technical Report, National Institute of Standards and Technology, USA, May 2002
http://www.nist.gov/director/prog-ofc/report02-3.pdf

Motivation - The GOOD News!
According to the same report:
 More than 1/3 of the costs (an estimated $22.2 billion) can be eliminated with earlier and more effective identification and removal of software defects.
 Savings can mainly occur in the development stage, when errors are introduced.
 More than half of these errors are not detected until later in the development process or during post-sale software use.

3. Motivation
 Reliability is one of the most important software quality characteristics.
 Reliability has a strong financial impact:
 a better image for the producer
 reduction of maintenance costs
 signing or renewal of maintenance contracts, new developments, etc.
 The quest for reliability is the aim of V&V!

Verification and Validation (V&V)
 Verification - checks a product's correctness and consistency in a given development phase, against the products and standards used as input to that phase - "Do the Job Right"
 Validation - checks a product's conformity with its specified requirements - "Do the Right Job"
 There are basically two complementary V&V techniques:
 Reviews (walkthroughs, inspections, ...)
 Tests

4. Summary
 Software Testing Fundamentals
 Test Levels
 Test Techniques
 Test-related Measures
 Test Process

5. Testing is ...
 ... an activity performed for evaluating product quality, and for improving it, by identifying defects and problems.
 ... the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite execution domain, against the expected behavior.

Dynamic versus static verification
 Testing always implies executing the program on (valued) inputs; it is therefore a dynamic technique.
 The input value alone is not always sufficient to determine a test, since a complex, nondeterministic system might react to the same input with different behaviors, depending on its state.
 Static techniques (described in the Software Quality KA) are different from testing and complementary to it.
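The definition above can be made concrete with a minimal sketch: a finite selection of (input, expected behavior) pairs drawn from an infinite input domain, checked by actually executing the code. The function under test and the chosen cases are hypothetical illustrations, not part of the original material.

```python
# A test case pairs a concrete input with the expected behavior.
# absolute_value and the cases below are hypothetical illustrations.

def absolute_value(x: int) -> int:
    """Function under test (could, in principle, contain a fault)."""
    return x if x >= 0 else -x

# A finite, suitably selected sample from the infinite domain of integers:
test_cases = [
    (0, 0),             # boundary value
    (7, 7),             # typical positive input
    (-7, 7),            # typical negative input
    (-(2**31), 2**31),  # extreme value
]

def run_tests():
    failures = []
    for given, expected in test_cases:
        observed = absolute_value(given)  # dynamic: the code is executed
        if observed != expected:
            failures.append((given, expected, observed))
    return failures

print(run_tests())  # an empty list means no failure was observed on this selection
```

Note that passing all four cases says nothing about the infinitely many inputs not selected; that is exactly the finiteness limitation the definition points at.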

6. Terminology issues
 Error
 the human cause of a defect's existence (although bugs walk ...)
 Fault or defect (aka bug)
 an incorrectness, omission, or undesirable characteristic in a deliverable
 the cause of a failure
 Failure
 an undesired effect (malfunction) observed in the system's delivered service
 an incorrectness in the functioning of a system
 See: IEEE Standard Glossary of Software Engineering Terminology (IEEE Std 610.12-1990)

Testing views
 Testing for defect identification
 A successful test is one which causes the system to fail.
 Testing can reveal failures, but it is the underlying faults that must be removed.
 Testing to demonstrate (that the software meets its specifications or other desired properties)
 A successful test is one where no failures are observed.
 Locating the fault (e.g. in code) behind an exposed failure is often hard.
 Identifying all failure-causing input sets (i.e. those sets of inputs that cause a failure to appear) may not be feasible.
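The error/fault/failure chain can be illustrated with a small hypothetical example: a programmer's mistaken reasoning (error) leaves an off-by-one defect (fault) in the code, which produces wrong observable behavior (failure) only for some inputs. The function and inputs below are invented for illustration.

```python
# Hypothetical illustration of the terminology chain:
# error (human mistake) -> fault (defect in code) -> failure (wrong behavior).

def count_even(values):
    """Count even numbers in a list.
    FAULT: range(1, ...) skips the first element, an off-by-one
    defect introduced by a human error in reasoning about indices."""
    count = 0
    for i in range(1, len(values)):   # fault: should be range(len(values))
        if values[i] % 2 == 0:
            count += 1
    return count

# The fault is present on every execution, but not every input exposes it:
assert count_even([1, 3, 5]) == 0          # no failure: the skipped element is odd

# This input does expose it (the correct answer would be 1):
failure_observed = count_even([2, 3, 5]) != 1
```

Under the defect-identification view, the second input is the successful test: it causes a failure. The first input passes even though the fault is there, which is why revealing failures and removing faults are distinct activities.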

7. Summary
 Software Testing Fundamentals
 Test Levels
 Test Techniques
 Test-related Measures
 Test Process

Test Levels – Objectives of testing
 Testing can be aimed at verifying different properties:
 checking whether the functional specifications are implemented correctly
 aka conformance testing, correctness testing, or functional testing
 checking nonfunctional properties
 e.g. performance testing, reliability evaluation, reliability measurement, usability evaluation, etc.
 Stating the objective in precise, quantitative terms allows control to be established over the test process.
 Often, however, objectives are qualitative or not even stated explicitly.

8. Test Levels – Objectives of testing
 Acceptance / Qualification testing
 Installation testing
 Alpha and beta testing
 Conformance / Functional / Correctness testing
 Reliability achievement and evaluation
 Regression testing
 Performance testing
 Stress testing
 Back-to-back testing
 Recovery testing
 Configuration testing
 Usability testing

Test Levels – Objectives of testing: Acceptance / Qualification testing
 Checks the system behavior against the customer's requirements.
 The customer may not exist yet, so someone has to forecast the intended requirements.
 This testing activity may or may not involve the developers of the system.

9. Test Levels – Objectives of testing: Installation testing
 Installation testing can be viewed as system testing conducted once again, according to the hardware configuration requirements.
 Usually performed in the target environment, at the customer's premises.
 Installation procedures may also be verified
 e.g. is the customer's local expert able to add a new user to the developed system?

Test Levels – Objectives of testing: Alpha and beta testing
 Before the software is released, it is sometimes given to a small, representative set of potential users for trial use. Those users may be:
 in-house (alpha testing)
 external (beta testing)
 These users report problems with the product.
 Alpha and beta use is often uncontrolled, and is not always referred to in a test plan.

10. Test Levels – Objectives of testing: Conformance / Functional / Correctness testing
 Conformance testing is aimed at validating whether or not the observed behavior of the tested software conforms to its specifications.

Test Levels – Objectives of testing: Reliability achievement and evaluation
 Testing is a means to improve reliability.
 By randomly generating test cases according to the operational profile, statistical measures of reliability can be derived.
 Reliability growth models allow this improvement to be expressed quantitatively.
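Random testing against an operational profile can be sketched as follows. The profile, the system under test, and its oracle are all hypothetical stand-ins invented for the example; in practice the profile comes from field usage data.

```python
import random

# Minimal sketch: estimate reliability by sampling inputs according to an
# (assumed) operational profile. Everything below is a hypothetical stand-in.

operational_profile = {   # input class -> probability of occurrence in the field
    "query": 0.70,
    "update": 0.25,
    "admin": 0.05,
}

def system_under_test(kind, seed):
    # Stand-in oracle: pretend some "admin" operations fail.
    return not (kind == "admin" and seed % 4 == 0)

def estimate_reliability(n_runs, rng):
    classes = list(operational_profile)
    weights = [operational_profile[c] for c in classes]
    failures = 0
    for _ in range(n_runs):
        kind = rng.choices(classes, weights=weights, k=1)[0]
        if not system_under_test(kind, rng.randrange(1000)):
            failures += 1
    # Estimated probability of a failure-free run under this usage profile:
    return 1 - failures / n_runs

print(estimate_reliability(10_000, random.Random(42)))
```

Because inputs are drawn with the frequencies users would produce, the resulting failure rate is a statistical statement about reliability in operation, not just about the test suite.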

11. Reliability growth models
 Provide a prediction of reliability based on the failures observed during reliability achievement and evaluation.
 They assume, in general, that:
 a growing number of successful tests increases our confidence in the system's reliability
 the faults that caused the observed failures are fixed after being found (thus, on average, the product's reliability has an increasing trend)

Reliability growth models
 Many models have been published; they are divided into:
 failure-count models
 time-between-failures models
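A failure-count model can be sketched in the spirit of the Goel-Okumoto NHPP model, whose mean value function mu(t) = a * (1 - exp(-b * t)) gives the expected cumulative number of failures by time t. The parameter values below are illustrative assumptions, not fitted to any data.

```python
import math

# Failure-count sketch (Goel-Okumoto style). Parameters are illustrative:
# A = total expected number of failures, B = per-fault detection rate.
A, B = 120.0, 0.05

def expected_failures(t):
    """mu(t): expected cumulative failures observed by time t."""
    return A * (1.0 - math.exp(-B * t))

def failure_intensity(t):
    """Derivative of mu(t): decreases over time because the faults that
    caused observed failures are assumed fixed after being found, which
    is the model's expression of reliability growth."""
    return A * B * math.exp(-B * t)

for t in (0, 10, 50, 100):
    print(t, round(expected_failures(t), 1), round(failure_intensity(t), 2))
```

The falling failure intensity captures the increasing-reliability trend stated above; time-between-failures models express the same trend through successive inter-failure times growing longer.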
