
How to Explain Empirical Distribution of Software Defects by Severity



Francisco Zapata (1), Olga Kosheleva (2), and Vladik Kreinovich (3)

Departments of (1) Industrial, Manufacturing, and Systems Engineering, (2) Teacher Education, and (3) Computer Science
University of Texas at El Paso, El Paso, TX 79968, USA
fazg74@gmail.com, olgak@utep.edu, vladik@utep.edu

1. Classification of Software Defects

• Software packages have defects of different severity.
• Some defects allow hackers to enter the system and can thus have potentially high severity.
• Other defects are minor and may not be worth the effort needed to correct them.
• For example:
  – if we declare a variable which is never used,
  – or we declare an array of too big a size, so that most of its elements are never used,
  – this makes the program not perfect,
  – but its only negative consequences are wasted computer time or memory.
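The minor-defect examples above (an unused variable, an oversized array) can be illustrated with a short sketch; the function name and sizes are made up for illustration:

```python
def average(values):
    """Compute a mean; correct, but with two minor defects."""
    unused_flag = True             # declared but never used: a minor defect
    buffer = [0.0] * 10_000        # far bigger than needed: wastes memory
    buffer[:len(values)] = values  # the program still works correctly
    return sum(values) / len(values)

print(average([1.0, 2.0, 3.0]))  # the result is right; only memory is wasted
```

Neither defect changes the program's output, which is exactly why such defects land in the lowest-severity category.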

2. Classification of Software Defects (cont-d)

• In the last decades, several tools have appeared that:
  – given a software package,
  – mark possible defects of different potential severity.
• Usually, software defects which are worth repairing are classified into three categories by their relative severity:
  – software defects of very high severity (also known as critical);
  – software defects of high severity (also known as major); and
  – software defects of medium severity.
• This is equivalent to classifying all the defects into four categories – the fourth category is minor defects.
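The four categories above can be captured by a small ordered enumeration; the names below are one plausible encoding, not code from the presentation:

```python
from enum import IntEnum

class Severity(IntEnum):
    # Higher value = more severe; the ordering matters for the
    # cautious classification rule discussed in the next slides.
    MINOR = 0      # not worth the effort of repairing
    MEDIUM = 1
    HIGH = 2       # also known as "major"
    VERY_HIGH = 3  # also known as "critical"

print(Severity.VERY_HIGH > Severity.HIGH)  # True
```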

3. Cautious Approach

• The main objective of this classification is not to miss any potentially serious defects.
• Thus, in case of any doubt, a defect is classified into the most severe category possible.
• As a result:
  – the only time a defect is classified into the medium-severity category
  – is when we are absolutely sure that this defect is not of high or of very high severity.
• If we have any doubt, we classify this defect as being of high or very high severity.

4. Cautious Approach (cont-d)

• Similarly:
  – the only time a defect is classified as being of high severity
  – is when we are absolutely sure that this defect is not of very high severity.
• If there is any doubt, we classify this defect as being of very high severity.
• In particular:
  – in situations in which we have no information about the severity of different defects,
  – we should classify them as being of very high severity.
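The cautious rule can be sketched as a hypothetical helper: given the set of severities that cannot be ruled out, pick the most severe one.

```python
def cautious_label(possible_severities):
    """Cautious rule: in case of any doubt, assign the most severe
    category that cannot be ruled out (hypothetical helper)."""
    order = ["medium", "high", "very high"]  # least to most severe
    return max(possible_severities, key=order.index)

# Only when "high" and "very high" are both ruled out does a defect
# end up labeled "medium"; with no information, everything is "very high".
print(cautious_label({"medium"}))                        # medium
print(cautious_label({"medium", "high"}))                # high
print(cautious_label({"medium", "high", "very high"}))   # very high
```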

5. Cautious Approach (cont-d)

• As we gain more information about the consequences of different defects, we can start
  – assigning some of the discovered defects
  – to the medium- or high-severity categories.
• However, since by default we classify a defect as having very high severity:
  – the number of defects classified as being of very high severity should still be the largest,
  – followed by the number of defects classified as being of high severity,
  – and finally, the number of defects classified as being of medium severity should be the smallest.
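The predicted ordering of the counts can be stated as a one-line consistency check (a sketch, with made-up numbers):

```python
def counts_consistent(n_very_high, n_high, n_medium):
    # Cautious classification predicts:
    #   n_very_high >= n_high >= n_medium
    return n_very_high >= n_high >= n_medium

print(counts_consistent(120, 45, 10))  # True: matches the prediction
print(counts_consistent(10, 45, 120))  # False: would contradict it
```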

6. Empirical Results

• Software defects can lead to catastrophic consequences.
• So, it is desirable to learn as much as possible about different defects.
• In particular, it is desirable to know how frequently one meets defects of different severity.
• For this, it is necessary to study a sufficiently large number of defects.
• This is a challenging task:
  – while many large software packages turn out to have defects, even severe defects,
  – such defects are rare; uncovering each severe defect in commercial software is a major event.

7. Empirical Results (cont-d)

• At first glance, it may seem that:
  – unless we consider not-yet-released, still-being-tested software,
  – there is no way to find a sufficient number of defects to make statistical analysis possible.
• However, there is a solution to this problem: namely, legacy software.
• This is software that was written many years ago, when the current defect-marking tools were not yet available.
• Legacy software works just fine:
  – if it had obvious defects,
  – they would have been noticed during its many years of use.

8. Empirical Results (cont-d)

• However, legacy software works just fine only in the original computational environment.
• When the environment changes, many hidden severe defects are revealed.
• In some cases, the software package used a limited-size buffer that was sufficient for the original usage.
• When the number of users increases, the buffer becomes insufficient.
• Another typical situation is when a C program, which should work for all computers,
  – had some initial bugs that were repaired by hacks
  – that explicitly took into account that in those days, most machines used 32-bit words.
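A typical hack of the kind described above is truncating values to the 32-bit word of the day. The sketch below (hypothetical code, not from any real package) shows how the hidden assumption surfaces only when larger values appear:

```python
WORD_BITS = 32  # hard-coded assumption: "all machines use 32-bit words"

def wrap_to_word(value, bits=WORD_BITS):
    # A legacy "repair" that silently truncates to one machine word.
    return value & ((1 << bits) - 1)

print(wrap_to_word(100))         # 100: looks fine in the original environment
print(wrap_to_word(2**32 + 5))   # 5: silent truncation, a severe hidden defect
```

As long as every value fit in 32 bits, the truncation was invisible; it becomes a severe defect only when the environment changes.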

9. Empirical Results (cont-d)

• As a result, when we run the supposedly correct program on a 64-bit machine, we get many errors.
• Such occurrences are frequent.
• As a result, when hardware is upgraded – e.g., from 32-bit to 64-bit machines –
  – software companies routinely apply the defect-detecting software packages to their legacy code
  – to find and repair all the defects of very high, high, and medium severity.
• Such defects are not that frequent; however,
  – even if in the original million-lines-of-code software package, 99.9% of the lines of code are flawless,
  – this still means that there may be a thousand severe defects.
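The arithmetic behind the last bullet: even a 99.9%-flawless million-line package leaves roughly a thousand defective lines.

```python
lines_of_code = 1_000_000
flawless_fraction = 0.999

# 0.1% of a million lines is still about a thousand lines with defects.
defective_lines = round(lines_of_code * (1 - flawless_fraction))
print(defective_lines)  # 1000
```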

10. Empirical Results (cont-d)

• This is clearly more than enough to provide a valid statistical analysis of the distribution of software defects by severity.
• So, at first glance, we have a source of defects.
• However, the problem is that companies – naturally – do not like to brag about defects in their code.
• This is especially true when a legacy software package turns out to have thousands of severe defects.
• On the other hand, companies are interested in knowing the distribution of the defects:
  – this would help them deal with these defects,
  – and not all software companies have a research department that would undertake such an analysis.
