The Future of Programming Environments
Andreas Zeller, Saarland University


  1. The Future of Programming Environments. Andreas Zeller, Saarland University. The past of programming environments: a tool set. Tools evolve.

  2. Tools integrate; tools work together. The artifacts they connect: models, specs, code, traces, profiles, tests, e-mail, bugs, effort, navigation, changes, chats.

  3. Among these artifacts, focus on bugs and changes: "Programmers who changed this function also changed…"

  4. Eclipse preferences. Your task: extend Eclipse with a new preference. Preferences are stored in the field fKeys[]. What else do you need to change? The Eclipse code base comprises 27,000 files, 20,000 classes, 200,000 methods, and 12,000 non-Java files. eROSE answers this question (funded by an IBM Eclipse Innovation Grant).

  5. Mining associations (Laser 2006 – Summer School on Software Engineering). The CVS version archive records transactions such as "2003-02-19 (aweinand): fixed #13332", which changed createGeneralPage(), createTextComparePage(), and fKeys[]. Transactions #42, #752, #9872, #20814, #30989, #41999, and #47423 each changed fKeys[], initDefaults(), …, and plugin.properties; transaction #11386 changed fKeys[] and initDefaults() but not plugin.properties. Other changed files include buildnotes_compare.html and PatchMessages.properties. Classical association mining finds all rules, for example {fKeys[], initDefaults()} ⇒ {plugin.properties} with support 7 and confidence 7/8 = 0.875. It is helpful for understanding general patterns, but it requires high support thresholds and takes time to compute (3 days and more).
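The support and confidence numbers above can be computed directly from the transaction sets; a minimal sketch, with each transaction simplified to just the entities that matter for the rule:

```python
# Sketch: support and confidence of the rule
# {fKeys[], initDefaults()} => {plugin.properties}
# over the eight CVS transactions listed above.
transactions = {
    "#42":    {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#752":   {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#9872":  {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#11386": {"fKeys[]", "initDefaults()"},          # no plugin.properties
    "#20814": {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#30989": {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#41999": {"fKeys[]", "initDefaults()", "plugin.properties"},
    "#47423": {"fKeys[]", "initDefaults()", "plugin.properties"},
}

def rule_stats(antecedent, consequent, transactions):
    """Support = number of transactions containing antecedent and
    consequent; confidence = support / transactions containing the
    antecedent."""
    both = sum(1 for t in transactions.values()
               if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions.values() if antecedent <= t)
    return both, both / ante

support, confidence = rule_stats({"fKeys[]", "initDefaults()"},
                                 {"plugin.properties"}, transactions)
print(support, confidence)  # 7 0.875
```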

  6. Mining on demand. Alternative: mine only matching rules. The programmer has changed one entity; can eROSE suggest related entities? Mine only rules related to the situation Σ, i.e. Σ ⇒ X, and only rules which have a singleton as consequent, i.e. Σ ⇒ {x}. Average runtime of a query: 0.5 seconds. Evaluation: the last 1,000 transactions of eight open-source CVS repositories, training on all transactions before the evaluation period. Precision vs. recall: the diagram compares what eROSE finds with what it should find; the overlap is correct predictions, the rest are false positives and false negatives. High precision = returned entities are relevant = few false positives; high recall = relevant entities are returned = few false negatives. Results:

                  Entities                Files
               Recall  Prec.  Top 3   Recall  Prec.  Top 3
     Eclipse    0.15   0.26   0.53     0.17   0.26   0.54
     GCC        0.28   0.39   0.89     0.44   0.42   0.87
     GIMP       0.12   0.25   0.91     0.27   0.26   0.90
     JBOSS      0.16   0.38   0.69     0.25   0.37   0.64
     JEDIT      0.07   0.16   0.52     0.25   0.22   0.68
     KOFFICE    0.08   0.17   0.46     0.24   0.26   0.67
     POSTGRES   0.13   0.23   0.59     0.23   0.24   0.68
     PYTHON     0.14   0.24   0.51     0.24   0.36   0.60
     Average    0.15   0.26   0.64     0.26   0.30   0.70
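Precision and recall as used in the table can be sketched over sets of predicted and actually changed entities; the entity names here are invented for illustration:

```python
def precision_recall(predicted, actual):
    """Precision: fraction of predicted entities that were actually
    changed (few false positives). Recall: fraction of actually
    changed entities that were predicted (few false negatives)."""
    hits = len(predicted & actual)
    return hits / len(predicted), hits / len(actual)

# Hypothetical query result: eROSE suggests four entities, of which
# two were actually changed in the transaction.
predicted = {"fKeys[]", "initDefaults()", "plugin.properties", "icons/"}
actual = {"fKeys[]", "initDefaults()", "buildnotes_compare.html"}
p, r = precision_recall(predicted, actual)
print(p, r)  # precision 0.5 (2 of 4), recall 2/3 (2 of 3)
```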

  7. Results. eROSE predicts 15% of all changed entities (files: 26%). In 64% of all transactions, eROSE's topmost three suggestions contain a changed entity (files: 70%). Code changes and version differences: a bug report from Brian Kahne <bkahne@ibmoto.com> to the DDD bug report address <bug-ddd@gnu.org>, subject "Problem with DDD and GDB 4.17": "When using DDD with GDB 4.16, the run command correctly uses any prior command-line arguments, or the value of "set args". However, when I switched to GDB 4.17, this no longer worked: If I entered a run command in the console window, the prior command-line options would be lost. [...]" With the old version the program works; with the new version it fails. How do we find the alternative world, that is, the causes?

  8. What was changed:

     $ diff -r gdb-4.16 gdb-4.17
     diff -r gdb-4.16/COPYING gdb-4.17/COPYING
     5c5
     < 675 Mass Ave, Cambridge, MA 02139, USA
     ---
     > 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
     282c282
     < Appendix: How to Apply These Terms to Your New Programs
     ---
     > How to Apply These Terms to Your New Programs

     …and so on for 178,200 lines (8,721 locations). General plan: decompose the diff into changes per location (= 8,721 individual changes); apply a subset of changes, using PATCH; reconstruct GDB, where build errors mean an unresolved test outcome; test GDB and return the outcome; delta debugging narrows down the difference. Isolating changes: the delta debugging log shows the changes left shrinking from 100,000 over the tests executed, for GDB with the ddmin algorithm, with the dd algorithm, and with dd plus scope information. The failure cause, found after 98 tests (= 1 hour):

     diff -r gdb-4.16/gdb/infcmd.c gdb-4.17/gdb/infcmd.c
     1239c1278
     < "Set arguments to give program being debugged when it is started.\n"
     ---
     > "Set argument list to give program being debugged when it is started.\n"

     The documentation string becomes GDB output: DDD expects "Arguments", but GDB outputs "Argument list".
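The narrowing step can be sketched with a minimal ddmin implementation; this is a simplification (no caching, no scope information), and the `test` callback is a toy stand-in for the patch-rebuild-run cycle described above:

```python
def ddmin(changes, test):
    """Minimize a failure-inducing set of changes (simplified ddmin).
    test(subset) returns 'FAIL' if applying exactly these changes
    reproduces the failure, and 'PASS' otherwise."""
    n = 2  # current granularity: number of chunks
    while len(changes) >= 2:
        chunk = max(1, len(changes) // n)
        subsets = [changes[i:i + chunk]
                   for i in range(0, len(changes), chunk)]
        reduced = False
        for subset in subsets:
            if test(subset) == 'FAIL':           # reduce to subset
                changes, n, reduced = subset, 2, True
                break
            complement = [c for c in changes if c not in subset]
            if complement and test(complement) == 'FAIL':
                changes = complement             # reduce to complement
                n, reduced = max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):
                break                            # cannot refine further
            n = min(len(changes), 2 * n)         # increase granularity
    return changes

# Toy stand-in for "patch, rebuild, and test GDB": the failure appears
# exactly when change 3 is among the applied changes.
culprit = ddmin(list(range(1, 9)),
                lambda s: 'FAIL' if 3 in s else 'PASS')
print(culprit)  # [3]
```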

  9. DDChange: connects code, tests, and changes, mapping bugs to code locations.

  10. Eclipse imports and Eclipse bugs. 71% of all components importing compiler packages (import org.eclipse.jdt.internal.compiler.lookup.*; import org.eclipse.jdt.internal.compiler.*; import org.eclipse.jdt.internal.compiler.ast.*; import org.eclipse.jdt.internal.compiler.util.*; …) show a post-release defect. 14% of all components importing ui packages (import org.eclipse.pde.core.*; import org.eclipse.jface.wizard.*; import org.eclipse.ui.*;) show a post-release defect. Compiler code (internals, core functionality) correlates with failure; GUI code (standard Java classes, help texts) correlates with success. Joint work with Adrian Schröter and Tom Zimmermann.

  11. Predicting failure-prone packages. Relate defect density to imports; base: the Eclipse bug and version databases (Bugzilla, CVS), ~300 packages, of which 36% had post-release defects. Prediction using a support vector machine (the diagram shows a 90% / 10% training/testing split of the packages, classified into defect / no defect, with the top 5% ranked). Is it the developers? Where do bugs come from? Does experience matter? Bug density correlates with experience!
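The feature representation behind this prediction can be sketched in a few lines. The study uses a support vector machine; the sketch below substitutes a plain perceptron on binary "does the package import X?" features, with invented package names and labels, just to show the idea:

```python
# Sketch: predicting defect-prone packages from import features.
# NOTE: the original work uses a support vector machine; this uses a
# simple perceptron instead, and the training data is invented.
IMPORTS = ["org.eclipse.jdt.internal.compiler", "org.eclipse.ui"]

def featurize(imports):
    """Binary feature per watched import prefix."""
    return [1.0 if any(i.startswith(p) for i in imports) else 0.0
            for p in IMPORTS]

def train(samples, epochs=20, lr=0.1):
    """Perceptron: label 1 = post-release defect, 0 = none."""
    w, b = [0.0] * len(IMPORTS), 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical training data: compiler importers tend to be defect-prone,
# ui importers tend not to be (mirroring the 71% vs. 14% observation).
samples = [
    (featurize(["org.eclipse.jdt.internal.compiler.ast"]), 1),
    (featurize(["org.eclipse.jdt.internal.compiler.lookup"]), 1),
    (featurize(["org.eclipse.ui.part"]), 0),
    (featurize(["org.eclipse.ui.dialogs"]), 0),
]
w, b = train(samples)
score = sum(wi * xi for wi, xi in
            zip(w, featurize(["org.eclipse.jdt.internal.compiler"]))) + b
print("defect-prone" if score > 0 else "likely clean")  # defect-prone
```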

  12. How about testing? Does code coverage predict bug density? Yes: the more tests, the more bugs! How about history? "I found lots of bugs here. Will there be more?" Yes! How about metrics? Do code metrics predict bug density? Yes! (but only with history). How about problem domains? Do imports predict bug density? Yes! (but only with history).

  13. Software archives contain the full record of the project history, are maintained via programming environments, allow automatic maintenance and access, and are freely accessible in open source projects. What makes code buggy in the first place? Connecting bugs and changes with models, specs, code, traces, profiles, and tests answers questions like "Which modules should I test most?"

  14. "How long will it take to fix this bug?" (bugs, effort, changes). "This requirement is risky" (specs, code). Empirical studies: measure data, build a model that explains the data, use the model for predictions, and test its predictive power. Empirical software engineering compares what we believe with what we observe; this is standard practice in modern science and a recent addition to software engineering.

  15. Predicting effort; predicting maintainability. With V the size of the vocabulary, V(G) the McCabe complexity, L the code lines, and C the percentage of comment lines:

     Maintainability = 171 − 5.2 ln(V) − 0.23 V(G) − 16.2 ln(L) + 50 sin(√(2.4 C))

     Rosenberg, L. and Hyatt, L. "Developing an Effective Metrics Program." European Space Agency Software Assurance Symposium, Netherlands, March 1996. Oman, P. & Hagemeister, J. "Construction and Testing of Polynomials Predicting Software Maintainability." Journal of Systems and Software 24, 3 (March 1994): 251–266. Obtaining data.
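The polynomial can be evaluated directly; a small sketch, with variable roles taken from the slide's labels and sample values invented for illustration:

```python
import math

def maintainability(V, VG, L, C):
    """Maintainability index after Oman & Hagemeister:
    V  = size of vocabulary, VG = McCabe complexity V(G),
    L  = code lines,         C  = percentage of comment lines."""
    return (171 - 5.2 * math.log(V) - 0.23 * VG
            - 16.2 * math.log(L) + 50 * math.sin(math.sqrt(2.4 * C)))

# Invented sample values: a larger, more complex module scores lower.
small = maintainability(V=100, VG=5, L=200, C=0.1)
large = maintainability(V=100, VG=25, L=2000, C=0.1)
print(small, large)
```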
