SCAN: An approach to Label and Relate Execution Trace Segments - PowerPoint PPT Presentation


SLIDE 1

SCAN | Soumaya Medini

Introduction | SCAN Approach | SCAN Performances Evaluation

RQ1: How do the labels of the trace segments produced by the participants change when providing them different amounts of information?
RQ2: How do the labels of the trace segments produced by the participants compare to the labels generated by SCAN?
RQ3: To what extent does SCAN correctly identify relations among segments?

SCAN Usefulness Evaluation

RQ4: Does SCAN have the potential to support feature location?
RQ5: To what extent does SCAN support feature location tasks if used as a standalone technique?

Conclusion | References

SCAN: An approach to Label and Relate Execution Trace Segments

Soumaya Medini

SOCCER & Ptidej Lab, DGIGL, École Polytechnique de Montréal

16 May 2014

Ptidej: Pattern Trace Identification, Detection, and Enhancement in Java
SOCCER: SOftware Cost-effective Change and Evolution Research Lab

SLIDE 2

Outline

◮ Introduction
◮ SCAN Approach
◮ SCAN Performances Evaluation
◮ SCAN Usefulness Evaluation
◮ Conclusion
◮ References

SLIDE 3

Introduction

◮ Software maintenance can account for up to 90% of software cost. [Standish, 1984]
◮ Program comprehension occupies up to 90% of software maintenance. [Standish, 1984]
◮ Concept location is an important task during program comprehension. [Rajlich, 2002]

Concept location

Aims to identify the source code elements that implement a concept of the software.

SLIDE 4

Introduction

Drawbacks

◮ Scalability: [Cornelissen et al., 2009]: "The scalability of dynamic analysis due to the large amounts of data that may be introduced in dynamic analysis, affecting performance, storage, and the cognitive load humans can deal with."
◮ Large and noisy: the execution trace corresponding to "Draw a rectangle" in JHotDraw contains 3,000 method calls.

SLIDE 5

Introduction

To address these drawbacks

◮ Compact traces (e.g., Reiss and Renieris [Reiss and Renieris, 2001], Hamou-Lhadj and Lethbridge [Hamou-Lhadj and Lethbridge, 2006]).
◮ Segment traces (e.g., Asadi et al. [Asadi et al., 2010], Medini et al. [Medini et al., 2011], Pirzadeh and Hamou-Lhadj [Pirzadeh and Hamou-Lhadj, 2011]).

SLIDE 6

SCAN Approach

Step 1: Trace Segmentation

◮ Use a dynamic programming optimization technique.
◮ The cost function relies on conceptual cohesion and coupling measures.
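The segmentation in Step 1 can be sketched as an interval dynamic program over split points. Everything below is illustrative: `toy_cost` (one point per distinct class touched by a segment) is a crude stand-in for SCAN's conceptual cohesion and coupling measures, and `segment_trace`, `max_segments`, etc. are hypothetical names, not SCAN's actual API.

```python
from functools import lru_cache

def segment_trace(trace, cost, max_segments):
    """Split `trace` (a list of method calls) into at most `max_segments`
    contiguous segments minimising the sum of per-segment costs.
    `cost(i, j)` scores the segment trace[i:j]; lower is better."""
    n = len(trace)

    @lru_cache(maxsize=None)
    def best(i, k):
        # Best (total cost, cut positions) for segmenting trace[i:]
        # into at most k segments.
        if i == n:
            return 0.0, ()
        if k == 0:
            return float("inf"), ()
        candidates = []
        for j in range(i + 1, n + 1):
            sub_cost, sub_cuts = best(j, k - 1)
            candidates.append((cost(i, j) + sub_cost, (j,) + sub_cuts))
        return min(candidates)

    total, cuts = best(0, max_segments)
    # Turn cut positions into (start, end) index pairs.
    bounds, start = [], 0
    for j in cuts:
        bounds.append((start, j))
        start = j
    return total, bounds

def toy_cost(trace):
    # Crude cohesion stand-in: a segment that touches fewer distinct
    # classes is "cheaper" (more conceptually cohesive).
    def c(i, j):
        classes = {m.split(".")[0] for m in trace[i:j]}
        return len(classes)
    return c

trace = ["A.f", "A.g", "B.h", "B.i", "B.j", "C.k"]
total, bounds = segment_trace(trace, toy_cost(trace), max_segments=3)
print(bounds)  # → [(0, 2), (2, 5), (5, 6)]
```

With the toy cost, the optimum cuts the trace exactly where the active class changes, which is the intuition behind cohesion-driven segmentation.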

Step 2: Label Identification

◮ Extract the terms contained in the signatures of all methods called in each segment.
◮ Rank terms by their tf-idf values and keep the top ones.
◮ Keeping the topmost 10 terms yields meaningful segment labels.
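Step 2 can be sketched as below. The camel-case tokeniser and the tf-idf weighting are the standard textbook versions, not necessarily the exact variants SCAN implements, and the two example segments are toy data assembled from method names that appear later in the talk.

```python
import math
import re
from collections import Counter

def tokenize(signature):
    # Split camelCase identifiers into words; drop very short tokens.
    words = re.findall(r"[A-Z]?[a-z]+", signature)
    return [w.lower() for w in words if len(w) > 2]

def label_segments(segments, top_k=10):
    """segments: list of lists of method signatures.
    Returns one label (list of top-k terms by tf-idf) per segment."""
    docs = [Counter(t for sig in seg for t in tokenize(sig)) for seg in segments]
    n_docs = len(docs)
    df = Counter(t for doc in docs for t in doc)  # document frequency
    labels = []
    for doc in docs:
        total = sum(doc.values())
        # tf-idf: normalised term frequency times inverse document frequency.
        scored = {t: (doc[t] / total) * math.log(n_docs / df[t]) for t in doc}
        labels.append(sorted(scored, key=scored.get, reverse=True)[:top_k])
    return labels

segments = [
    ["IsiImporter.isiAuthorsConvert(String)", "AuthorList.getAuthorList(String)"],
    ["BibtexDatabase.insertEntry(BibtexEntry)",
     "BibtexDatabase.fireDatabaseChanged(DatabaseChangeEvent)"],
]
print(label_segments(segments, top_k=3))
```

Terms that dominate one segment but rarely occur in others get the highest scores, which is why the top-10 cut-off produces discriminative labels.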

SLIDE 7

SCAN Approach

Step 3: Relation Identification

◮ Use Formal Concept Analysis (FCA).
◮ Relations: same feature, sub-feature, and macro-phase.

ArgoUML FCA lattice example.
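Relation identification with FCA can be illustrated by treating segments as objects and their label terms as attributes: a formal concept whose extent contains several segments suggests those segments are related through the shared terms in its intent. The brute-force closure enumeration below is only a didactic sketch (real FCA tools use much faster algorithms), and the segment labels are hypothetical.

```python
from itertools import combinations

def fca_concepts(context):
    """context: dict mapping each segment to its set of label terms.
    Returns all formal concepts as (extent, intent) pairs: extent is a
    set of segments, intent the terms they all share, each maximal
    with respect to the other."""
    objects = list(context)
    all_attrs = set().union(*context.values())
    concepts = set()
    # Close every subset of objects; duplicates collapse in the set.
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = set(all_attrs)
            for o in objs:
                intent &= context[o]  # attributes shared by the subset
            # Extent: every object possessing the whole intent.
            extent = frozenset(o for o in objects if intent <= context[o])
            concepts.add((extent, frozenset(intent)))
    return concepts

segments = {
    "seg1": {"author", "convert", "isi"},
    "seg2": {"author", "entry", "bibtex"},
    "seg3": {"entry", "bibtex", "database"},
}
for extent, intent in sorted(fca_concepts(segments),
                             key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
```

Here the concept ({seg1, seg2}, {author}) links two segments through the shared term "author", which is the kind of relation the lattice on the slide visualises.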

SLIDE 8

SCAN Performances Evaluation

Segment characteristics. Participant characteristics.

SLIDE 9

SCAN Performances Evaluation

RQ1

◮ To reduce time and effort: segments are characterized using 5 and 15 different unique methods.
◮ Small version: may result in a loss of relevant information.
◮ Medium version: may better preserve the relevant information.
◮ Unique methods are selected according to their tf-idf.

RQ1

How do the labels of the trace segments produced by the participants change when providing them different amounts of information?

SLIDE 10

SCAN Performances Evaluation

RQ1: How do the labels of the trace segments produced by the participants change when providing them different amounts of information?

Experiment Design

◮ We select 9 segments (between 50 and 200).
◮ We group participants into 3 groups; each version is assigned to a different group.
◮ Oracle: labels of the full segments.
◮ Evaluation: intersection between the terms of the small and medium versions and the terms of the full segment.

SLIDE 11

SCAN Performances Evaluation

RQ1: How do the labels of the trace segments produced by the participants change when providing them different amounts of information?

Precision (P), Recall (R), and F-Measure (F) of labels when comparing small and medium versus full segments.

◮ Small and medium segments preserve 50% or more of the labels.
◮ Size reduction: 92% for small segments and 76% for medium segments.
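The metrics in the table are presumably the standard set-based precision, recall, and F-measure computed over label term sets; a minimal sketch, with illustrative term sets rather than the study's actual data:

```python
def prf(predicted, reference):
    """Set-based precision, recall, and F-measure between two term sets."""
    predicted, reference = set(predicted), set(reference)
    overlap = len(predicted & reference)
    p = overlap / len(predicted) if predicted else 0.0
    r = overlap / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy example: a small-version label that keeps half the full label's terms.
small_label = {"author", "convert", "isi"}
full_label = {"author", "convert", "isi", "entry", "bibtex", "database"}
print(prf(small_label, full_label))  # precision 1.0, recall 0.5
```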

SLIDE 12

SCAN Performances Evaluation

RQ2

How do the labels of the trace segments produced by the participants compare to the labels generated by SCAN?

◮ 210 segments (less than 100) manually labeled by the participants.
◮ Evaluation: union, intersection.

SLIDE 13

SCAN Performances Evaluation

RQ2: How do the labels of the trace segments produced by the participants compare to the labels generated by SCAN?

Precision (P) and Recall (R) of automatic labels assigned by SCAN compared to the oracle built by the participants.

SLIDE 14

SCAN Performances Evaluation

RQ3

To what extent does SCAN correctly identify relations among segments?

◮ 100 relations among segments are validated by participants.

Example of relations among segments.

SLIDE 15

SCAN Performances Evaluation

RQ3: To what extent does SCAN correctly identify relations among segments?

Evaluation of the automatic relations.

SLIDE 16

SCAN Usefulness Evaluation

SITIR approach

◮ Single Trace and Information Retrieval (SITIR) approach proposed by Liu et al. [Dapeng et al., 2007].
◮ Ranks methods based on their similarity with the bug description or title.

Effectiveness Evaluation

◮ Feature location aims at finding a starting point for the modification.
◮ Effectiveness: the number of methods in the ranked list that a developer has to scrutinize before reaching the seed.
◮ The seed: a method belonging to the impact set of the feature.
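An IR-style ranking in the spirit of SITIR can be sketched with cosine similarity over identifier terms. This is only illustrative: the original approach uses a more sophisticated IR model, and the crude plural-stripping step below is a stand-in for real stemming. The bug title and method names are taken from the following slide as toy data.

```python
import math
import re
from collections import Counter

def terms(text):
    """Split identifiers/prose into lowercase terms, with naive
    plural stripping standing in for real stemming."""
    words = re.findall(r"[A-Z]?[a-z]+", text)
    return Counter(w.lower().rstrip("s") for w in words if len(w) > 2)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_methods(bug_title, methods):
    """Rank executed methods by textual similarity to a bug title."""
    q = terms(bug_title)
    return sorted(((cosine(q, terms(m)), m) for m in methods), reverse=True)

bug = "Wrong author import from Inspec ISI file"
methods = [
    "IsiImporter.isiAuthorsConvert(String)",
    "BibtexDatabase.insertEntry(BibtexEntry)",
    "AuthorList.getAuthorList(String)",
]
for score, method in rank_methods(bug, methods):
    print(f"{score:.2f}  {method}")
```

Methods whose identifiers share terms with the bug title float to the top of the list; the effectiveness measure above then counts how far down that list the seed method sits.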

SLIDE 17

SCAN Usefulness Evaluation

Bug report: Bug #460 in JabRef: Wrong author import from Inspec ISI file.

SITIR: Top 5 ranked methods

Rank  Method  Similarity
1  IsiImporter.isiAuthorsConvert(String)  0.48
2  IsiImporter.isiAuthorsConvert(String[])  0.44
3  AuthorList.getAuthorList(String)  0.35
4  NameFieldAutoCompleter.addBibtexEntry(BibtexEntry)  0.33
5  AuthorList.AuthorList(String)  0.31
…

SCAN: Segment 4, methods in order of execution

1 IsiImporter.importEntries(InputStream)
2 IsiImporter.isiAuthorsConvert(String)
3 IsiImporter.isiAuthorsConvert(String[])
4 IsiImporter.isiAuthorConvert(String)
5 Util.join(String[]-String-int-int)
6 IsiImporter.parseMonth(String)
7 IsiImporter.parsePages(String)
8 Globals.getEntryType(String)
9 BibtexEntry.BibtexEntry(String-BibtexEntryType)
10 BibtexEntry.setType(BibtexEntryType)
11 BibtexEntry.firePropertyChangedEvent(String-Object-Object)
12 IsiImporter.processSubSup(HashMap<String-String>)
13 IsiImporter.processCapitalization(HashMap<String-String>)
14 CaseChanger.changeCase(String-int-boolean)
15 BibtexEntry.setField(Map<String-String>)
16 IsiImporter.isiAuthorsConvert(String)
17 IsiImporter.isiAuthorsConvert(String[])
18 IsiImporter.isiAuthorConvert(String)
19 Util.join(String[]-String-int-int)
20 IsiImporter.parseMonth(String)
21 IsiImporter.parsePages(String)
22 Globals.getEntryType(String)
23 BibtexEntry.BibtexEntry(String-BibtexEntryType)
24 BibtexEntry.setType(BibtexEntryType)
25 BibtexEntry.firePropertyChangedEvent(String-Object-Object)
26 IsiImporter.processSubSup(HashMap<String-String>)
27 IsiImporter.processCapitalization(HashMap<String-String>)
28 BibtexEntry.setField(Map<String-String>)
29 ImportFormatReader.purgeEmptyEntries(Collection<BibtexEntry>)
30 BibtexEntry.getAllFields()
31 ParserResult.ParserResult(BibtexDatabase-HashMap<String-String>-HashMap<String-BibtexEntryType>)
32 ParserResult.ParserResult(Collection<BibtexEntry>)
33 ImportFormatReader.createDatabase(Collection<BibtexEntry>)
34 ImportFormatReader.purgeEmptyEntries(Collection<BibtexEntry>)
35 BibtexEntry.getAllFields()
36 Util.createNeutralId()
37 BibtexEntry.setId(String)
38 BibtexEntry.firePropertyChangedEvent(String-Object-Object)
39 BibtexDatabase.insertEntry(BibtexEntry)
40 BibtexDatabase.getEntryById(String)
41 BibtexEntry.addPropertyChangeListener(VetoableChangeListener)
42 BibtexDatabase.fireDatabaseChanged(DatabaseChangeEvent)
43 BibtexEntry.getCiteKey()
44 BibtexDatabase.checkForDuplicateKeyAndAdd(String-String-boolean)
45 BibtexDatabase.addKeyToSet(String)
46 ParserResult.ParserResult(BibtexDatabase-HashMap<String-String>-HashMap<String-BibtexEntryType>)
47 JstorImporter.isRecognizedFormat(InputStream)
48 JstorImporter.importEntries(InputStream)
49 ImportFormatReader.purgeEmptyEntries(Collection<BibtexEntry>)
50 MsBibImporter.isRecognizedFormat(InputStream)

Label: convert hash author entri isi bibtex result databas chang type

SLIDE 18

SCAN Usefulness Evaluation

RQ4

◮ Methods relevant to a change request are grouped into a few segments.
◮ Does SCAN have the potential to support feature location?

RQ5

◮ Applies when no feature location technique is available to guide the search.
◮ SCAN is used to retrieve segments containing relevant methods using FCA.
◮ To what extent does SCAN support feature location tasks if used as a standalone technique?

SLIDE 19

SCAN Usefulness Evaluation

Program characteristics

SLIDE 20

SCAN Usefulness Evaluation

RQ4: Does SCAN have the potential to support feature location?

Distribution of the gold set methods across the segments.

53% of the methods that a developer needs to understand are saved compared to the entire trace.

SLIDE 21

SCAN Usefulness Evaluation

RQ5: To what extent does SCAN support feature location tasks if used as a standalone technique?

Bug #460 in JabRef: resulting FCA lattice.

◮ Labels of the segments.
◮ Title of the bug report.

SLIDE 22

SCAN Usefulness Evaluation

RQ5: To what extent does SCAN support feature location tasks if used as a standalone technique?

RecallSegments: retrieving segments containing gold set methods.
RecallMethods: retrieving gold set methods.

SLIDE 23

SCAN Usefulness Evaluation

RQ5: To what extent does SCAN support feature location tasks if used as a standalone technique?

Number of methods needed to understand the retrieved segments compared to the number of methods needed to understand the entire trace.

The saving is close to 45% of the methods to analyze compared to the entire trace.

SLIDE 24

Conclusion

SLIDE 25

Asadi, F., Di Penta, M., Antoniol, G., and Guéhéneuc, Y.-G. (2010). A heuristic-based approach to identify concepts in execution traces. In Proceedings of the European Conference on Software Maintenance and Reengineering (CSMR), pages 31–40.

Cornelissen, B., Zaidman, A., van Deursen, A., Moonen, L., and Koschke, R. (2009). A systematic survey of program comprehension through dynamic analysis. IEEE Transactions on Software Engineering, 35(5):684–702.

Dapeng, L., Andrian, M., Denys, P., and Vaclav, R. (2007). Feature location via information retrieval based filtering of a single scenario execution trace. In Proceedings of the International Conference on Automated Software Engineering (ASE), pages 234–243.

SLIDE 26

Hamou-Lhadj, A. and Lethbridge, T. (2006). Summarizing the content of large traces to facilitate the understanding of the behaviour of a software system. In Proceedings of the International Conference on Program Comprehension (ICPC), pages 181–190.

Medini, S., Galinier, P., Di Penta, M., Guéhéneuc, Y.-G., and Antoniol, G. (2011). A fast algorithm to locate concepts in execution traces. In Proceedings of the International Symposium on Search-Based Software Engineering (SSBSE), pages 252–266.

Pirzadeh, H. and Hamou-Lhadj, A. (2011). A novel approach based on gestalt psychology for abstracting the content of large execution traces for program comprehension. In Proceedings of the International Conference on Engineering of Complex Computer Systems (ICECCS), pages 221–230.

SLIDE 27

Rajlich, V. (2002). The role of concepts in program comprehension. In Proceedings of the IEEE International Workshop on Program Comprehension (IWPC), pages 271–278.

Reiss, S. P. and Renieris, M. (2001). Encoding program executions. In Proceedings of the International Conference on Software Engineering (ICSE), pages 221–230.

Standish, T. A. (1984). An essay on software reuse. IEEE Transactions on Software Engineering, 10(5):494–497.