System, Acceptance, and Regression Testing - PowerPoint PPT Presentation


  1. System, Acceptance, and Regression Testing (c) 2007 Mauro Pezzè & Michal Young, Ch 22, slide 1

  2. Learning objectives
     • Distinguish system and acceptance testing
       – How and why they differ from each other and from unit and integration testing
     • Understand basic approaches for quantitative assessment (reliability, performance, ...)
     • Understand the interplay of validation and verification for usability and accessibility
       – How to continuously monitor usability from early design to delivery
     • Understand basic regression testing approaches
       – Preventing accidental changes

  3. System, acceptance, and regression testing compared:

                      System          Acceptance        Regression
     Test for ...     Correctness,    Usefulness,       Accidental
                      completion      satisfaction      changes
     Test by ...      Development     Test group        Development
                      test group      with users        test group
                      Verification    Validation        Verification

  4. 22.2 System testing

  5. System Testing
     • Key characteristics:
       – Comprehensive (the whole system, the whole spec)
       – Based on specification of observable behavior: verification against a requirements specification, not validation, and not opinions
       – Independent of design and implementation; independence avoids repeating software design errors in system test design

  6. Independent V&V
     • One strategy for maximizing independence: system (and acceptance) testing performed by a different organization
       – Organizationally isolated from developers (no pressure to say "ok")
       – Sometimes outsourced to another company or agency
         • Especially for critical systems
         • Outsourcing for independent judgment, not to save money
     • May be additional system testing, not a replacement for internal V&V
       – Not all outsourced testing is IV&V
         • Not independent if controlled by the development organization

  7. Independence without changing staff
     • If the development organization controls system testing ...
       – Perfect independence may be unattainable, but we can reduce undue influence
     • Develop system test cases early
       – As part of the requirements specification, before major design decisions have been made
         • Agile "test first" and the conventional "V model" are both examples of designing system test cases before designing the implementation
     • An opportunity for "design for test": structure the system for critical system testing early in the project
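The "develop system test cases early" point can be sketched as a test written straight from a requirement, before any design exists. Everything below is hypothetical: requirement "R-12" and the `OrderSystem` stub stand in for a real spec and a future implementation.

```python
class OrderSystem:
    """Stub standing in for the future implementation; only the
    externally observable behavior in the spec is modeled."""
    def __init__(self):
        self._orders = []

    def place_order(self, item, qty):
        self._orders.append((item, qty))
        return len(self._orders)  # order id

    def order_status(self, order_id):
        return "ACCEPTED" if 1 <= order_id <= len(self._orders) else "UNKNOWN"


def test_order_is_accepted():
    # Derived directly from (hypothetical) requirement R-12:
    # "every placed order is queryable and reported as accepted"
    sut = OrderSystem()
    oid = sut.place_order("widget", 3)
    assert sut.order_status(oid) == "ACCEPTED"
```

Because the test is phrased against observable behavior only, it survives later design decisions unchanged, which is exactly what gives it a measure of independence from the implementation.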

  8. Incremental System Testing
     • System tests are often used to measure progress
       – The system test suite covers all features and scenarios of use
       – As the project progresses, the system passes more and more system tests
     • Assumes a "threaded" incremental build plan: features are exposed at the top level as they are developed
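Measuring progress this way can be as simple as tracking the pass fraction of the full system suite per build. A toy illustration (the builds and test names are invented):

```python
def pass_rate(results):
    """Fraction of system tests passing in one build.

    results maps test name -> True (pass) / False (fail)."""
    return sum(results.values()) / len(results)

# In a threaded incremental build plan, each build exposes more
# features at the top level, so more of the fixed suite passes.
build_1 = {"login": True, "search": False, "checkout": False}
build_2 = {"login": True, "search": True, "checkout": False}
```

The suite itself stays fixed from the start; only the pass rate moves, which is what makes it usable as a progress measure.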

  9. Global Properties
     • Some system properties are inherently global
       – Performance, latency, reliability, ...
       – Early and incremental testing is still necessary, but provides only estimates
     • A major focus of system testing
       – The only opportunity to verify global properties against actual system specifications
       – Especially to find unanticipated effects, e.g., an unexpected performance bottleneck

  10. Context-Dependent Properties
     • Beyond system-global: some properties depend on the system context and use
       – Example: performance properties depend on environment and configuration
       – Example: privacy depends both on the system and on how it is used
         • A medical records system must protect against unauthorized use, and authorization must be provided only as needed
       – Example: security depends on threat profiles
         • And threats change!
         • Testing is just one part of the approach

  11. Establishing an Operational Envelope
     • When a property (e.g., performance or real-time response) is parameterized by use ...
       – requests per second, size of database, ...
     • Extensive stress testing is required
       – varying parameters within the envelope, near the bounds, and beyond
     • Goal: a well-understood model of how the property varies with the parameter
       – How sensitive is the property to the parameter?
       – Where is the "edge of the envelope"?
       – What can we expect when the envelope is exceeded?
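A minimal sketch of probing for the edge of the envelope: sweep the load parameter within, near, and beyond the expected bounds and record where the property leaves its budget. The queueing-style latency model, the capacity of 1000 requests/second, and the latency bound are all invented stand-ins for measurements of a real system.

```python
def simulated_latency(rps, capacity=1000.0):
    """Toy model: latency grows sharply as load nears capacity,
    and the system effectively stops responding beyond it."""
    if rps >= capacity:
        return float("inf")
    return 1.0 / (capacity - rps)

def find_envelope_edge(latency_bound, rates):
    """Largest tested rate (rates given in increasing order)
    whose simulated latency stays within the bound."""
    edge = None
    for rps in rates:
        if simulated_latency(rps) <= latency_bound:
            edge = rps
    return edge
```

For example, with a 10 ms budget and rates of 100, 500, 900, 950, and 999 requests/second, the edge falls at 900: the sweep also shows how quickly the property degrades past it, which answers the sensitivity question on the slide.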

  12. Stress Testing
     • Often requires extensive simulation of the execution environment
       – With systematic variation: what happens when we push the parameters? What if the number of users or requests is 10 times more, or 1000 times more?
     • Often requires more resources (human and machine) than typical test cases
       – Separate from regular feature tests
       – Run less often, with more manual control
       – Diagnose deviations from expectation
         • Which may include difficult debugging of latent faults!

  13. 22.3 Acceptance testing

  14. Estimating Dependability
     • Measuring quality, not searching for faults
       – A fundamentally different goal than systematic testing
     • Quantitative dependability goals are statistical
       – Reliability
       – Availability
       – Mean time to failure
       – ...
     • Requires valid statistical samples from the operational profile
       – Fundamentally different from systematic testing
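The statistical goals listed above reduce to simple estimators once valid field data exists. A sketch with invented numbers:

```python
def mttf(operating_hours, failures):
    """Point estimate of mean time to failure from observed operation."""
    return operating_hours / failures

def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up,
    MTTF / (MTTF + mean time to repair)."""
    return mttf_hours / (mttf_hours + mttr_hours)
```

For instance, 10,000 hours of operation with 4 failures gives an MTTF estimate of 2,500 hours; with a 1-hour mean repair time, availability comes out at about 0.9996. The estimates are only as good as the operational profile the data was drawn from, which is the point of the next slide.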

  15. Statistical Sampling
     • We need a valid operational profile (model)
       – Sometimes from an older version of the system
       – Sometimes from the operational environment (e.g., for an embedded controller)
       – Sensitivity testing reveals which parameters are most important, and which can be rough guesses
     • And a clear, precise definition of what is being measured
       – Failure rate? Per session, per hour, per operation?
     • And many, many random samples
       – Especially for high reliability measures
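Drawing valid random samples from an operational profile amounts to weighted random selection over the operations. The profile below, with its operation names and frequencies, is entirely hypothetical:

```python
import random

# Hypothetical operational profile: relative frequency of each operation
# as observed in (say) logs of an older version of the system.
PROFILE = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

def sample_operations(n, profile, seed=0):
    """Draw n test-case operations matching the profile's frequencies.

    A fixed seed keeps the sample reproducible across test runs."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = list(profile.values())
    return [rng.choices(ops, weights=weights)[0] for _ in range(n)]
```

A reliability estimate computed over such a sample reflects reliability as users will experience it; the same tests run with uniform frequencies would measure something else entirely.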

  16. Is Statistical Testing Worthwhile?
     • Necessary for ...
       – Critical systems (safety critical, infrastructure, ...)
     • But difficult or impossible when ...
       – The operational profile is unavailable or just a guess
         • Often the case for new functionality involving human interaction
         • But we may factor critical functions out of overall use to obtain a good model of only the critical properties
       – The reliability requirement is very high
         • The required sample size (number of test cases) might require years of test execution
         • Ultra-reliability can seldom be demonstrated by testing
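The "years of test execution" point follows from a standard back-of-the-envelope calculation: to claim a failure probability of at most p per run with confidence C from failure-free testing, the number of independent runs n must satisfy (1-p)^n <= 1-C, i.e., n >= ln(1-C)/ln(1-p). A sketch:

```python
import math

def runs_needed(p_fail, confidence):
    """Failure-free runs (drawn from the operational profile) needed to
    demonstrate failure probability <= p_fail at the given confidence.

    Solves (1 - p_fail)**n <= 1 - confidence for the smallest integer n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_fail))
```

Demonstrating a failure probability below 1 in 10,000 at 99% confidence already takes on the order of 46,000 failure-free runs; tightening the target to 1 in 10 million pushes it past 46 million runs, which is why ultra-reliability can seldom be demonstrated by testing alone.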
