
Interval Quality: Relating Customer-Perceived Quality To Process Quality (PowerPoint presentation)



  1. Interval Quality: Relating Customer-Perceived Quality To Process Quality
     Audris Mockus and David Weiss
     {audris,weiss}@avaya.com
     Avaya Labs Research, Basking Ridge, NJ 07920
     http://mockus.org/

  2. Motivation: bridge the gap between developer and user, and measure in vivo
     ✦ A key software engineering objective is to improve quality via practices and tools that support requirements, design, implementation, verification, and maintenance
     ✦ Needs of a user: installability, reliability, availability, backward compatibility, cost, and features
     ✦ Primary objectives
       ✧ Can we measure user-perceived quality in vivo?
       ✧ Can we communicate it to the development team?
       ✧ Is the common wisdom about software quality correct?

  3. Outline
     ✦ History of quality in communications systems
     ✦ How to observe quality in vivo
     ✦ Questions
       ✧ Can we compare quality among releases?
       ✧ Which part of the life-cycle affects quality the most?
       ✧ Can we approximate quality using easy-to-obtain measures?
       ✧ Does hardware or software have more impact on quality?
     ✦ Answers
       ✧ Yes; service; no; it depends
     ✦ Discussion

  4. Approaches to measure quality
     ✦ Theoretical models [16]
     ✦ Simulations (in silico)
     ✦ Observing indirectly
       ✧ Test runs, load tests, stress tests, SW defects and failures
     ✦ Observing directly in vivo via recorded user/system actions (not opinion surveys) has the following benefits:
       ✧ is more realistic,
       ✧ is more accurate,
       ✧ provides a higher level of confidence,
       ✧ is better suited to observing an overall effect than in vitro research,
       ✧ is more relevant in practice.

  5. History of Communications Quality [6]
     ✦ Context: military and commercial communication systems, 1960-present
     ✦ Goals: system outage, loss of service, degradation of service
       ✧ Downtime of 2 hours over 40 years, later "five nines" (or 5 min per year)
       ✧ Degradation of service, e.g., < 0.01% of calls mishandled
       ✧ Faults per line per time unit, e.g., errors per 100 subscribers per year
       ✧ MTBF for service or equipment, e.g., exchange MTBF, % of subscribers with MTBF > X
       ✧ Duplication levels, e.g., standby HW for systems with > 64 subscribers

  6. Observing in vivo: architecture
     [Architecture diagram: weekly snapshots of the inventory system (system ID, platform, release, configuration, customer info, install dates), the alarming system (alarm ID, alarm type, time base, outage/restart, outage duration), and the ticketing system (ticket ID, resolution time, other attributes) are linked into augmented ticket/alarm records, from which metrics and bounds such as installed base, population, install/runtime, MTBF, availability, survival, and hazard are derived (Levels 0-2).]

  7. Observing in vivo: primary data sources
     ✦ Service tickets
       ✧ Represent requests for action to remedy adverse events: outages, software and hardware issues, and other requests
       ✧ Manual input =⇒ not always accurate
       ✧ Some issues may go unnoticed and/or unreported =⇒ missing data
     ✦ Software alarms
       ✧ Complete list for the systems configured to generate them
       ✧ Irrelevant events may be included, e.g., experimental or misconfigured systems that are not in production use at the time
     ✦ Inventory
       ✧ Type, size, configuration, install date for each release
     ✦ Link between deployment dates and tickets/alarms (a minimal join is sketched below)
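As an illustration of that last link, here is a minimal sketch of joining tickets to the inventory so that each failure report carries its release and the runtime at which it occurred. It uses pandas; the table layout and column names (system_id, install_date, ticket_date) are hypothetical placeholders, not the actual schema used in the study.

```python
import pandas as pd

# Hypothetical inventory: one row per installed system and release.
inventory = pd.DataFrame({
    "system_id":    ["S1", "S2", "S3"],
    "release":      ["2.0", "2.0", "2.1"],
    "install_date": pd.to_datetime(["2006-01-10", "2006-03-05", "2006-04-12"]),
})

# Hypothetical service tickets reporting software failures.
tickets = pd.DataFrame({
    "ticket_id":   [101, 102],
    "system_id":   ["S1", "S3"],
    "ticket_date": pd.to_datetime(["2006-02-01", "2006-09-30"]),
})

# Join tickets to installs so each failure report carries the release and
# the runtime (days since installation) at which it occurred.
joined = tickets.merge(inventory, on="system_id", how="left")
joined["runtime_days"] = (joined["ticket_date"] - joined["install_date"]).dt.days
print(joined[["ticket_id", "system_id", "release", "runtime_days"]])
```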

  8. Issues with commonly available data and published analyses
     ✦ Present
       ✧ Problem reports by month (hopefully grouped by release)
       ✧ Sales (installations) by month (except for freely downloadable SW)
     ✦ Absent
       ✧ No link between install time and problem report =⇒ no way to get accurate estimates of the hazard function, i.e., the probability density of observing a failure conditional on the absence of earlier failures (see the definition below)
       ✧ No complete list of software outages =⇒ no way to get even rough estimates of the underlying rate
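For reference, the hazard function used throughout the deck can be written in standard survival-analysis notation (this is the generally known definition, not a formula quoted from the slides):

```latex
% Hazard function h(t): instantaneous failure rate at runtime t,
% conditional on no failure having occurred before t.
% T    - time to first failure of a system
% f(t) - its probability density; S(t) = Pr(T > t) is the survival function
\[
  h(t) \;=\; \lim_{\Delta t \to 0}
    \frac{\Pr\{\, t \le T < t + \Delta t \mid T \ge t \,\}}{\Delta t}
  \;=\; \frac{f(t)}{S(t)}
\]
```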

  9. Data remedies
     ✦ Only the present state of the inventory is kept =⇒ collect snapshots to reconstruct history
     ✦ The accounting aggregation (by solution, license) differs from the service (by system) and production (by release/patch) aggregations =⇒ remap to the finest common aggregation
     ✦ Missing data
       ✧ Systems observed for different periods =⇒ use survival curves (a minimal estimator is sketched below)
       ✧ Reporting bias =⇒ divide into groups according to service levels and practices
     ✦ Quantity of interest not measured =⇒ design measures for upper and lower bounds
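To make the "systems observed for different periods" remedy concrete, here is a minimal sketch of a Kaplan-Meier survival estimate over right-censored runtimes. The function and the example data are hypothetical illustrations under that standard technique; the study's actual estimation procedure may differ.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve for right-censored data.

    durations: runtime until first failure, or until end of observation
    observed:  1 if a failure was seen, 0 if the system is censored
    Returns (event_times, survival_probabilities).
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    times = np.sort(np.unique(durations[observed == 1]))
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)                 # systems still under observation
        events = np.sum((durations == t) & (observed == 1))
        s *= 1.0 - events / at_risk                      # multiply conditional survival
        surv.append(s)
    return times, np.array(surv)

# Hypothetical runtimes (days) and failure indicators for a handful of systems.
runtime_days = [30, 90, 90, 180, 200, 365, 365]
failed       = [ 1,  0,  1,   1,   0,   0,   1]
t, s = kaplan_meier(runtime_days, failed)
print(list(zip(t, s.round(3))))
```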

  10. Practical questions
     ✦ Can we compare quality among releases to evaluate the effectiveness of QA practices?
     ✦ Which part of the production/deployment/service life-cycle affects quality the most?
     ✦ Can quality be approximated with easy-to-obtain measures, e.g., defect density?
     ✦ Does hardware or software have more impact on quality?

  11. Hazard function (probability density of observing a failure conditional on the absence of earlier failures)
     [Plot: estimated hazard rate (roughly 0.0-0.8) versus runtime in years (0.0-1.0)]
     ✦ Have to adjust for runtime and separate by platform, or the MTBF will characterize the currently installed base rather than release quality (a minimal binned estimate is sketched below)
     ✦ Therefore, how do we compare release quality?
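A minimal sketch of how such a hazard curve can be estimated empirically: split runtime into bins and divide observed failures by total exposure time in each bin (a piecewise-constant hazard). The data, bin width, and column names are hypothetical, not the paper's actual procedure.

```python
import numpy as np

def binned_hazard(durations, observed, bin_width=30.0, max_time=365.0):
    """Piecewise-constant hazard estimate.

    durations: days until first failure or until censoring
    observed:  1 if a failure occurred, 0 if censored
    Returns (bin_start_times, hazard per day in each bin).
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    edges = np.arange(0.0, max_time + bin_width, bin_width)
    hazard = np.zeros(len(edges) - 1)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # exposure: time each system spent inside [lo, hi) before failing/censoring
        exposure = np.clip(durations, lo, hi) - lo
        events = np.sum((durations >= lo) & (durations < hi) & (observed == 1))
        hazard[i] = events / exposure.sum() if exposure.sum() > 0 else np.nan
    return edges[:-1], hazard

# Hypothetical runtimes (days) and failure flags.
runtime_days = [20, 45, 100, 150, 200, 250, 300, 365]
failed       = [ 1,  1,   0,   1,   0,   1,   0,   0]
starts, h = binned_hazard(runtime_days, failed, bin_width=90)
print(list(zip(starts, np.round(h, 4))))
```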

  12. Interval Quality
     [Plot: post-installation MR rates per release (1.1 through 3.1) within 0-1, 0-3, and 0-6 months after installation; y-axis roughly 0.000-0.025; significant differences marked with asterisks]
     ✦ Fraction of customers that report software failures within the first few months of installation (a minimal computation is sketched below)
     ✦ Does not account for proximity to launch or platform mix
     ✦ Significant differences marked with “*”
     ✦ “We live or die by this measure”
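A minimal sketch of computing this interval-quality measure from joined install and ticket data: the fraction of systems on a release that report a software failure within a given window after installation. The record layout and field names (install_date, first_sw_ticket_date) are hypothetical placeholders, not the paper's schema.

```python
from datetime import date

# Hypothetical per-system records: release, install date, and date of the
# first software-failure ticket (None if no such ticket was ever reported).
systems = [
    {"release": "2.0", "install_date": date(2006, 1, 10), "first_sw_ticket_date": date(2006, 2, 1)},
    {"release": "2.0", "install_date": date(2006, 3, 5),  "first_sw_ticket_date": None},
    {"release": "2.1", "install_date": date(2006, 4, 12), "first_sw_ticket_date": date(2006, 9, 30)},
    {"release": "2.1", "install_date": date(2006, 5, 20), "first_sw_ticket_date": None},
]

def interval_quality(records, window_days=90):
    """Fraction of installed systems, per release, with a software-failure
    ticket within `window_days` of installation (lower is better).

    In practice one would also exclude systems installed less than
    `window_days` before the observation date, so every system has had a
    full window in which a failure could be observed.
    """
    totals, failures = {}, {}
    for r in records:
        rel = r["release"]
        totals[rel] = totals.get(rel, 0) + 1
        t = r["first_sw_ticket_date"]
        if t is not None and (t - r["install_date"]).days <= window_days:
            failures[rel] = failures.get(rel, 0) + 1
    return {rel: failures.get(rel, 0) / totals[rel] for rel in totals}

print(interval_quality(systems, window_days=90))   # {'2.0': 0.5, '2.1': 0.0}
```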

  13. Can we use easy-to-obtain defect density?
     [Plot per release (r1.1 through r2.2): DL = DefPerKLOC/100, DM = DefPerPreGaMR*10, F1 = probability of failure within 1 month, F3 = probability within 3 months; y-axis roughly 0.000-0.015]
     Anti-correlated?! (a sketch of the check follows)
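A sketch of the comparison the plot is making: compute a defect-density figure per release and check its correlation with the interval-quality probabilities. All numbers below are made up purely to illustrate the mechanics and the sign of the relationship; they are not the paper's values.

```python
import numpy as np

# Hypothetical per-release figures (illustrative only, not measured data).
releases     = ["r1.1", "r1.2", "r1.3", "r2.0", "r2.1", "r2.2"]
defects      = np.array([ 50,  45,  40, 300,  60,  55])   # defects found per release
kloc_changed = np.array([ 25,  30,  28, 600,  35,  40])   # thousands of changed lines
iq_3m        = np.array([0.006, 0.007, 0.008, 0.014, 0.006, 0.007])  # P(failure in 3 months)

defect_density = defects / kloc_changed                    # defects per KLOC

# Pearson correlation between defect density and interval quality.
r = np.corrcoef(defect_density, iq_3m)[0, 1]
print(dict(zip(releases, defect_density.round(2))))
print(f"correlation(defect density, IQ) = {r:.2f}")        # negative => anti-correlated
```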

  14. High defect density leads to satisfied customers?
     ✦ What does any organization strive for?

  15. Stability =⇒ Predictability!
     ✦ The rate at which customer problems get to Tier IV is almost constant despite highly varying deployment and failure rates
     [Plot: numbers of field issues (0-150) and deployed systems (0-1500) by month, for releases r1.1 through r2.2]

  16. Major versus minor releases
     ✦ The defect-density numerator is about the same as for IQ because
       ✧ Major releases are deployed more slowly and to fewer customers
       ✧ For minor releases a customer is less likely to experience a fault, so they are deployed faster and to more customers
     ✦ The denominators diverge (a worked example follows this slide) because
       ✧ Major releases have more code changed and fewer customers
       ✧ Minor releases have less code changed and more customers
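To make the numerator/denominator argument concrete, here is a worked example with purely hypothetical numbers (not taken from the paper), treating each customer-visible defect as one failure-reporting customer for simplicity. Both releases expose a similar number of defects, but the two ratios move in opposite directions because their denominators differ.

```latex
% Hypothetical illustration: N = 10 customer-visible defects in each release.
% Major release:  500 KLOC changed,  200 customers deployed
% Minor release:   50 KLOC changed, 2000 customers deployed
\[
  \text{DD}_{\text{major}} = \frac{10}{500\ \text{KLOC}} = 0.02, \qquad
  \text{IQ}_{\text{major}} \approx \frac{10}{200\ \text{customers}} = 0.05
\]
\[
  \text{DD}_{\text{minor}} = \frac{10}{50\ \text{KLOC}} = 0.2, \qquad
  \text{IQ}_{\text{minor}} \approx \frac{10}{2000\ \text{customers}} = 0.005
\]
```

Under these assumptions the major release has low defect density but a high fraction of affected customers, and the minor release the reverse, which is the anti-correlation seen two slides earlier.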

  17. Hardware vs software
     [Plot: distribution of MTBF for 15 platform/release combinations, grouped as HW Low, HW High, SW Cold, and SW All; log scale spanning roughly 5e-01 to 5e+02]
     ✦ Limitations
       ✧ Durations of SW warm restarts, SW cold restarts, and HW outages differ by orders of magnitude
       ✧ Warm restarts don’t drop calls
       ✧ High/critical configurations may be unaffected
       ✧ HW-High is ultra-conservative
       ✧ Variability for each estimate may be high

  18. Which part of software production and delivery contributes most to quality?
     ✦ Development perspective: fraction of MRs removed per stage
       ✧ Development → features, bugs introduced, and resolved
       ✧ Verification → 40% of development-stage MRs (post unit-test)
       ✧ α/β trials → 7% of development-stage MRs
       ✧ Deployment → 5% in major and 18% in minor releases
     ✦ Customer perspective: probability of observing a failure
       ✧ may drop by up to 30 times in the first few months post-launch [15]

