Assembly Instruction Level Reverse Execution for Debugging PhD - - PowerPoint PPT Presentation



SLIDE 1

Assembly Instruction Level Reverse Execution for Debugging

PhD Dissertation Defense

by Tankut Akgul Advisor: Vincent J. Mooney School of Electrical and Computer Engineering Georgia Institute of Technology March 2004

SLIDE 2

Outline

• Background
• Reverse Execution: Definition and Previous Work
• Reverse Execution Methodology
• Program Slicing: Definition and Previous Work
• Program Slicing Methodology
• Experimental Results

SLIDE 3

Background

Debugging is a repetitive process!

Start the program → detect an error → restart the program as needed to determine the bug location(s) → remove the bug(s) and recompile the program → start the program again … until the program is error-free.

SLIDE 4

Outline

• Background
• Reverse Execution: Definition and Previous Work
• Reverse Execution Methodology
• Program Slicing: Definition and Previous Work
• Program Slicing Methodology
• Experimental Results

SLIDE 5

Definition of Reverse Execution

Reverse execution: taking a program T from its current state Si to a previous state Sj.

Source code level reverse execution: reverse execution where Sj can be as early as one source code statement before state Si.

Instruction level reverse execution: reverse execution where Sj can be as early as one assembly instruction before state Si.

SLIDE 6

Previous Work

Application areas of reverse execution:
• Debugging
• Optimistic simulations
• Database applications
• Interactive systems (editors, program development environments)

SLIDE 7

Previous Work in Reverse Execution

• Restore earlier state: periodic checkpointing, incremental checkpointing.
• Regenerate part of earlier state: source transformation.
• Build a reversible processor with reversible circuit elements (Pendulum).

SLIDE 8

Previous Work in Reverse Execution

Periodic checkpointing: the full state is saved at every checkpoint, e.g., 12KB + 12KB + … + 12KB = 60KB of memory used for state saving.

Incremental checkpointing: only the state modified since the previous checkpoint is saved, e.g., 8KB + 4KB + … + 4KB = 24KB of memory used for state saving.
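The checkpointing arithmetic above can be sketched as a pair of helper functions (an illustrative sketch using the slide's example numbers; the function names are invented, not from the dissertation):

```c
#include <assert.h>

/* Periodic checkpointing: the full state (state_kb) is copied at
 * each of n checkpoints. */
int periodic_cost_kb(int n, int state_kb) {
    return n * state_kb;
}

/* Incremental checkpointing: one first save of first_kb, then
 * n_deltas saves of only the modified data (delta_kb each). */
int incremental_cost_kb(int first_kb, int delta_kb, int n_deltas) {
    return first_kb + delta_kb * n_deltas;
}
```

With the slide's numbers, five periodic checkpoints of a 12KB state cost 60KB, while an 8KB save followed by four 4KB deltas costs 24KB.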

SLIDE 9

Previous Work in Reverse Execution

Source transformation: state is saved for each destructive operation. A destructive operation is an operation whose target operand is different than its source operands.

Source code:

    Sample () {
        int x, y;
        y = 0;
        x += 10;
        if (x > 15)
            y++;
        else
            y--;
    }

Transformed code:

    Sample () {
        int x, y;
        save y;
        y = 0;
        x += 10;
        if (x > 15) { b = 0; y++; }
        else        { b = 1; y--; }
    }

Reverse code:

    Sample_rev () {
        int x, y;
        if (b == 0) y--;
        else        y++;
        x -= 10;
        restore y;
    }

• C. Carothers, K. Perumalla and R. Fujimoto, “Efficient Optimistic Parallel Simulations using Reverse Computation,” in Proceedings of the ACM/IEEE/SCS Workshop on Parallel and Distributed Simulation (PADS), Atlanta, USA, May 1999.
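The slide's transformed/reverse pair can be made executable roughly as follows (a sketch: `saved_y` stands in for the slide's `save y`/`restore y`, and the globals are illustrative):

```c
#include <assert.h>

int b;        /* records which branch was taken: 0 = then, 1 = else */
int saved_y;  /* state saved before the destructive `y = 0` */

/* Forward (transformed) code: instrumented with state saving. */
void sample(int *x, int *y) {
    saved_y = *y;             /* save y: `y = 0` is destructive   */
    *y = 0;
    *x += 10;                 /* constructive: undone by x -= 10  */
    if (*x > 15) { b = 0; (*y)++; }
    else         { b = 1; (*y)--; }
}

/* Reverse code: undoes sample() step by step, last write first. */
void sample_rev(int *x, int *y) {
    if (b == 0) (*y)--;
    else        (*y)++;
    *x -= 10;
    *y = saved_y;             /* restore y */
}
```

Running `sample` followed by `sample_rev` returns both variables to their initial values.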

SLIDE 10

Previous Work in Reverse Execution

Drawbacks:
• Heavy use of state saving; state saving brings memory and time overheads during forward execution.
• No direct instruction level reverse execution support.

SLIDE 11

Outline

• Background
• Reverse Execution: Definition and Previous Work
• Reverse Execution Methodology
• Program Slicing: Definition and Previous Work
• Program Slicing Methodology
• Experimental Results

SLIDE 12

Reverse Execution Methodology

Assumptions:
• State that cannot be modified directly does not include debugging information (e.g., the condition status register).
• Physical memory is treated as a uniform entity; the exact physical memory state is not preserved (e.g., a value not in cache can be brought into cache after recovery).
• Sequential execution model.
• Indirect calls are made to well-defined target points.

SLIDE 13

Reverse Execution Methodology

We define the state of a processor as S = (PC, M', R'), where
PC: program counter
M': directly modified memory values
R': directly modified register values

In order to reverse execute a program:
1. Construct a reverse program RT for an input program T.
2. Recover M' and R' by executing RT in place of T.
3. Recover the program counter value by using the correspondence between T and RT.

SLIDE 14

Reverse Execution Methodology

Reverse Code Generation (RCG) steps:
1. Divide the original program into program partitions.
2. Generate the reverse of the instructions. The reverse of an instruction is called a Reverse Instruction Group (RIG).
3. Combine the RIGs:
   3.a Combine the RIGs to generate the reverse of each basic block (RBB).
   3.b Combine the RBBs to generate the reverse of each partition.
   3.c Combine the reverse partitions to generate the reverse of the whole program.

SLIDE 15

Reverse Execution Methodology

RCG flow: partition the input program while constructing a call graph; then, for each partition, read an instruction α, generate a RIG for α, and build a modified value graph for the current partition. At the end of each basic block, connect the RBB to the reverse program; at the end of each partition, connect the reverse partition to the reverse program and go to the next partition; stop at the end of the program.

SLIDE 16

Step 1: Program Partitioning

Partitions are regions of code delimited by “function call” or “indirect branch” instructions that may exist within the original code. E.g., in the PowerPC instruction set:
bl: function call instruction
blr: branch to link register instruction (indirect)

SLIDE 17

Step 1: Program Partitioning

In the example below (shown twice on the slide, once per partitioning view), main is split at the bl/blr instructions into a first partition and a second partition, while foo forms a single partition:

    main:  li    r3, 0x5
           bl    foo
           addi  r12, r12, 1
           blr
    foo:   li    r11, 3
           ori   r12, r3, 15
           divw  r10, r3, r11
           cmpwi r10, 100
           bgt   L1
           sub   r11, r3, r12
           b     L2
    L1:    addi  r12, r10, 1
           sub   r11, r12, r3
    L2:    mullw r12, r11, r10
           blr

SLIDE 18

Methodology (Continued)

Reverse Code Generation (RCG) steps:
1. Divide the program into program partitions (single entry–single exit regions).
2. Generate the reverse of the instructions. The reverse of an instruction is called a Reverse Instruction Group (RIG).
3. Combine the RIGs:
   3.a Combine the RIGs to generate the reverse of each basic block (RBB).
   3.b Combine the RBBs to generate the reverse of each partition.
   3.c Combine the reverse partitions to generate the reverse of the whole program.

SLIDE 19

Step 2: RIG Generation

Three techniques to generate a RIG:

  • 1. Re-define technique
  • 2. Extract-from-use technique
  • 3. State saving technique
SLIDE 20

Step 2: RIG Generation (Cont.)

Example control flow graph (start → … → exit):

    if (r2 < 0) { r1 = r2 + r3;  r4 = r1 + r3; }
    else        { r1 = r2; }
    α:  r1 = r2 - 4        // destroys r1 at point P

To generate the RIG for α: find the definitions of r1 reaching point P (just before α), then recover r1 by selectively re-executing the found definitions or by selectively extracting the found definitions out of later uses of those definitions.

RIG for α:

    if (r2 < 0)  r1 = r4 - r3;   // extract-from-use (from r4 = r1 + r3),
                                 // or re-define: r1 = r2 + r3
    else         r1 = r2;        // re-define
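The recovery above can be checked with a small executable sketch (registers modeled as plain ints; names mirror the slide, the harness functions are invented):

```c
#include <assert.h>

/* Forward execution up to and including α: r1 = r2 - 4. */
void forward(int r2, int r3, int *r1, int *r4) {
    if (r2 < 0) {
        *r1 = r2 + r3;
        *r4 = *r1 + r3;   /* later use of r1's definition */
    } else {
        *r1 = r2;
    }
    *r1 = r2 - 4;         /* α: destroys r1 */
}

/* RIG for α: recover the pre-α value of r1 without state saving. */
int rig_for_alpha(int r2, int r3, int r4) {
    if (r2 < 0)
        return r4 - r3;   /* extract-from-use: r4 = r1 + r3 => r1 = r4 - r3 */
    else
        return r2;        /* re-define: re-execute r1 = r2 */
}
```

On either path, the RIG returns exactly the value r1 held just before α, with no memory or time cost during forward execution.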

SLIDE 21

Step 2: RIG Generation (Cont.)

Rename values and generate a directed graph called the modified value graph (MVG): each definition of a register gets a new version (r1_1, r1_2, …), with Φ-nodes such as r1_3 = Φ(r1_1, r1_2) where control flow merges. For the example, the MVG contains nodes for r1_1, r1_2, r1_3, r1_4, r4_1, r2 and r3, connected by the operators that relate them (+, Φ, and the branch condition r2 < 0 / r2 ≥ 0). Find the definition of r1 reaching point P', then recover r1 using the nodes available at P' by selecting the appropriate operator in the MVG. This again yields the RIG for α:

    if (r2 < 0)  r1 = r4 - r3;
    else         r1 = r2;

SLIDE 22

Methodology (Continued)

Three steps to generate a complete reverse program:
1. Divide the program into program partitions (single entry–single exit regions).
2. Generate the reverse of the instructions. The reverse of an instruction is called a Reverse Instruction Group (RIG).
3. Combine the RIGs:
   3.a Combine the RIGs to generate the reverse of each basic block (RBB).
   3.b Combine the RBBs to generate the reverse of each partition.
   3.c Combine the reverse partitions to generate the reverse of the whole program.

SLIDE 23

Step 3.a: Constructing the RBBs

Within each basic block, RIGs are placed in bottom-up order: the RIG of a BB's last instruction comes first in the corresponding RBB. E.g., for instructions i1–i8 in BB1–BB4, if BB1 = {i1, i2} then RBB1 = {RIG2, RIG1}; if BB2 = {i3, i4} then RBB2 = {RIG4, RIG3}; and so on for BB3 and BB4.
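The bottom-up ordering can be illustrated with a toy two-instruction basic block (a sketch; the instructions here are invertible updates chosen purely for illustration, not from the dissertation):

```c
#include <assert.h>

void i1(int *r)   { *r += 5; }    /* forward instruction i1   */
void i2(int *r)   { *r *= 2; }    /* forward instruction i2   */
void rig1(int *r) { *r -= 5; }    /* RIG1: reverse of i1      */
void rig2(int *r) { *r /= 2; }    /* RIG2: reverse of i2      */

/* BB executes i1 then i2; RBB executes RIG2 then RIG1 — the
 * reverses run in the opposite (bottom-up) order. */
void bb(int *r)  { i1(r); i2(r); }
void rbb(int *r) { rig2(r); rig1(r); }
```

Executing `bb` then `rbb` restores the register: the last forward instruction must be undone first.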

SLIDE 24

Step 3.b: Combining the RBBs

In the example (basic blocks BB1–BB6 with instructions α1–α9, forward flow from start to exit), each RBB branches to the RBB(s) of its BB's predecessor(s). Where a BB has more than one predecessor in the forward control flow graph (the “?” merge points), the corresponding RBB ends with a conditional branch (cb) that selects at runtime which predecessor's RBB to execute next, so reverse execution (RBB6 down to RBB1, RIG9 down to RIG1) retraces the forward path from exit back to start.

BB: Basic Block; RBB: Reverse of a BB; cb: conditional branch

SLIDE 25

Step 3.c: Combining the Reverse Partitions

Example (PowerPC), with partitions entered at addresses A0–A4 (m1, m2 in main; g1, g2 in g; h):

    main:  cmp r1, r2
           bl g        // call g
           ...
           blr         // return
    g:     ...
           mtlr r0     // set a func. ptr.
           bclrl       // call by the func. ptr.
           ...
           blr         // return
    h:     ...
           blr         // return

The partitions form a call graph with dynamically taken edges (e.g., main → g at the bl, g → h through the function pointer, and the returns back).

• Push the addresses on the dynamically taken edges into a stack.
• Pop the addresses from the stack during reverse execution and branch to the reverses of the popped addresses.
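The push/pop scheme above can be sketched as a small address stack (an illustrative sketch; the fixed-size array and function names are invented, and addresses are plain ints):

```c
#include <assert.h>

#define MAX_EDGES 64

int edge_stack[MAX_EDGES];   /* addresses of dynamically taken edges */
int edge_top = 0;

/* Forward execution: record a dynamically taken edge (call/indirect
 * branch) as it happens. */
void push_edge(int addr) { edge_stack[edge_top++] = addr; }

/* Reverse execution: retrieve the most recently taken edge; the
 * reverse of that address is the next reverse partition to execute. */
int pop_edge(void) { return edge_stack[--edge_top]; }
```

Because the stack pops edges in last-in first-out order, reverse execution follows the dynamic call chain exactly backwards.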

SLIDE 26

Recovering the Program Counter

The RCG algorithm takes the input program (the program being debugged) and produces the reverse of the input program together with an inversion table. The inversion table maps every instruction address in the input program to a RIG address, designating the entry point into the reverse program for that instruction:

    Input instruction address    RIG address
    0x0                          0x4000
    0x4                          0x3FFC
    0x8                          0x3FE0
    …                            …
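The inversion table lookup can be sketched as follows (a sketch using the addresses shown on the slide; the struct and function names are invented):

```c
#include <assert.h>
#include <stdint.h>

struct inversion_entry {
    uint32_t insn_addr;  /* address in the input program            */
    uint32_t rig_addr;   /* entry point into the reverse program    */
};

/* The three mappings from the slide's table. */
const struct inversion_entry inv_table[] = {
    { 0x0, 0x4000 },
    { 0x4, 0x3FFC },
    { 0x8, 0x3FE0 },
};

/* Returns the RIG entry point for a given PC, or 0 if unknown. */
uint32_t rig_entry(uint32_t pc) {
    for (unsigned i = 0; i < sizeof inv_table / sizeof inv_table[0]; i++)
        if (inv_table[i].insn_addr == pc)
            return inv_table[i].rig_addr;
    return 0;
}
```

This is how the debugger recovers the program counter: wherever forward execution stopped, the table gives the point in the reverse program from which to start running backwards.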

SLIDE 27

Complexity

N: number of nodes in an MVG ≅ number of assembly instructions in the code
M: average degree of a node (number of neighbors)
K: maximum number of repetitive applications of the re-define and extract-from-use techniques allowed

M is independent of total code size for a fixed partition size.

Complexity = O(N × M^K). On a 1 GHz CPU with 1 iteration ≅ 1 nsec, N = 1,000,000, M = 10 and K = 3 give about 1 sec (10^6 × 10^3 = 10^9 iterations).

Recovery routine (pseudocode):

    Recover(Node n) {
        if (n.available == true) return available;
        ∀m ∈ children(n) do {
            stat = Recover(m);
            if (stat != available) break;
        }
        ∀m ∈ parents(n) do {
            stat = Recover(m);
            if (stat == available) {
                ∀z ∈ siblings(n) do {
                    stat = Recover(z);
                    if (stat != available) break;
                }
            }
            if (stat == available) break;
        }
        Write_RIG();
    }
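The back-of-the-envelope estimate above can be written out directly (an illustrative sketch; the function names are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Integer exponentiation: base^exp. */
uint64_t ipow(uint64_t base, unsigned exp) {
    uint64_t r = 1;
    while (exp--) r *= base;
    return r;
}

/* Worst-case iteration count of the O(N × M^K) RCG analysis. */
uint64_t rcg_iterations(uint64_t n, uint64_t m, unsigned k) {
    return n * ipow(m, k);
}
```

With the slide's parameters (N = 10^6, M = 10, K = 3) this is 10^9 iterations, i.e. about one second at 1 ns per iteration.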

SLIDE 28

Outline

• Background
• Reverse Execution: Definition and Previous Work
• Reverse Execution Methodology
• Program Slicing: Definition and Previous Work
• Program Slicing Methodology
• Experimental Results

SLIDE 29

Program Slicing

• Static slice: a set of program statements that may influence a variable V at statement S. C = (V, S) is a static slicing criterion.
• Dynamic slice: a set of program statements that influence a variable V at an execution instance q of statement S, given a set of program inputs X. C = (X, V, S^q) is a dynamic slicing criterion.

SLIDE 30

Program Slicing

Two ways to influence a variable:
• Data dependency: y = z; x = y + 1; → x is data dependent on y and z.
• Control dependency: if (y < 0) x = 1; → x is control dependent on y.

A slice is a set of all statements that compute dependencies of a variable.

SLIDE 31

Program Slicing

Original program:

    Pass = 0; Fail = 0; Count = 0;
    while (!eof()) {
        TotalMarks = 0;
        scanf("%d", &Marks);
        if (Marks >= 40)
            Pass = Pass + 1;
        if (Marks < 40)
            Fail = Fail + 1;
        Count = Count + 1;
        TotalMarks = TotalMarks + Marks;
    }
    average = TotalMarks / Count;   /* This is the point of interest */
    printf("The average is %d\n", average);
    PassRate = Pass / Count * 100;
    printf("Pass rate is %d\n", PassRate);

Slice w.r.t. “average”:

    while (!eof()) {
        TotalMarks = 0;
        scanf("%d", &Marks);
        Count = Count + 1;
        TotalMarks = TotalMarks + Marks;
    }
    average = TotalMarks / Count;
    printf("The average is %d\n", average);

Example is taken from Prof. Mark Harman’s webpage at http://www.brunel.ac.uk/~csstmmh2/exe1.html

SLIDE 32

Previous Work in Program Slicing

• Static slicing (Weiser): control flow graph analysis; no runtime information.
• Static slicing (Ottenstein et al.): program dependence graph analysis; no runtime information.
• Dynamic slicing (Korel and Laski): control flow graph analysis; program execution trajectory.
• Dynamic slicing (Agrawal et al.): dynamic dependence graph (DDG) analysis; program execution trajectory.

SLIDE 33

Outline

• Background
• Reverse Execution: Definition and Previous Work
• Reverse Execution Methodology
• Program Slicing: Definition and Previous Work
• Program Slicing Methodology
• Experimental Results

SLIDE 34

RCG with Slicing (RCGS)

Reverse execution along a dynamic slice. The base static analysis, performed only once per program, builds local MVGs and a global MVG from the input program and produces the reverse program used for full-scale reverse execution. For each dynamic slice, an extended static analysis combines the global MVG with a dynamic slicing table filled in during forward execution and produces a reduced reverse program for reverse execution along the dynamic slice. (On the slide, green arrows indicate actions performed by the debugger, orange arrows the base static analysis performed only once per program, and blue arrows the extended static analysis performed for each dynamic slice.)

SLIDE 35

Contributions of RCGS

Reverse execution along a dynamic slice brings:
• Faster reverse execution.
• No complete execution trajectory required, hence less runtime memory usage.
• Not only the dynamic slice instructions but also the runtime values of variables, hence more efficient debugging.

SLIDE 36

Reverse Execution Along a Dynamic Slice

Approach:
• Determine data dependencies statically.
• Determine control flow dynamically.
• Merge the static information with the dynamic information to reverse execute along the dynamic slice.

Example (PowerPC):

    stw   %r0, 0x4(%r1)
    li    %r8, 0x64
    li    %r10, 0x1
    ori   %r12, %r10, 0x0
    li    %r11, 0x1
    cmpw  %r8, %r11
    blt-  0x1000c0
    add   %r9, %r12, %r10
    ori   %r10, %r12, 0x0
    ori   %r12, %r9, 0x0
    addi  %r11, %r11, 0x1
    b     0x1000a4
    bclr  0x14, 0x0

SLIDE 37

Generation of a Reduced Reverse Program

SLIDE 38

Experimentation Platform

A PC running Windows 2000 is connected through the Background Debug Mode (BDM) interface to an MBX860 board:
• MPC860 processor
• 4MB DRAM, 2MB Flash
• RTC, four 16-bit timers, watchdog

SLIDE 39

Comparisons

• Reverse execution with incremental state saving (ISS): save state before each instruction.
• Reverse execution with incremental state saving for destructive instructions (ISSDI): save state before each destructive instruction.
• Reverse execution with RCG.
SLIDE 40

Benchmarks

    Benchmark          Executable object size (bytes)
    Selection sort     3104
    Matrix multiply    3308
    ADPCM encoder      6908
    LZW                4636

ADPCM: Adaptive Differential Pulse Code Modulation; LZW: Lempel-Ziv-Welch

SLIDE 41

Benchmarks

    Benchmark                        Raw execution time (decrementer ticks)
    Selection sort (100 inputs)      21,187
    Selection sort (1000 inputs)     2,000,202
    Selection sort (10000 inputs)    198,539,130
    Matrix multiply (4x4)            650
    Matrix multiply (40x40)          472,044
    Matrix multiply (400x400)        457,183,831
    ADPCM (32KB input data)          378,294
    ADPCM (64KB input data)          751,280
    ADPCM (128KB input data)         1,496,649
    LZW (1KB input data)             1,380,413
    LZW (4KB input data)             16,063,096
    LZW (16KB input data)            194,451,339

1 tick = 0.4 microseconds on the MBX860

SLIDE 42

Experiment 1

• Instrument each benchmark with state saving instructions at appropriate points for ISS, ISSDI and RCG.
• Forward execute each instrumented benchmark from the beginning until the end.
• Measure forward execution times.

SLIDE 43

Instrumented Forward Execution Time

ISS: Incremental State Saving; ISSDI: Incremental State Saving for Destructive Instructions

    Benchmark                      ISS           ISSDI        RCG          ISS/RCG   ISSDI/RCG
    Selection sort (100 inputs)    42984         38496        31113        1.38X     1.24X
    Selection sort (1000 inputs)   3979802       3595213      2841029      1.40X     1.27X
    Selection sort (10000 inputs)  394063091     356208073    280677488    1.40X     1.27X
    Matrix multiply (4x4)          1432          1197         708          2.02X     1.70X
    Matrix multiply (40x40)        1092872       895703       476243       2.29X     1.88X
    Matrix multiply (400x400)      1064415269    870539981    458691637    2.32X     1.90X
    ADPCM (32KB input data)        805972        737720       616101       1.31X     1.20X
    ADPCM (64KB input data)        1611572       1475276      1232110      1.31X     1.20X
    ADPCM (128KB input data)       3223166       2950562      2464232      1.31X     1.20X
    LZW (1KB input data)           3126206       2699287      2054657      1.52X     1.31X
    LZW (4KB input data)           36319691      31942838     23813230     1.52X     1.34X
    LZW (16KB input data)          439424024     378614957    288077045    1.53X     1.31X

SLIDE 44

Forward Execution Time Overhead

Overhead = (instrumented execution time − raw execution time) / raw execution time

    % Overhead                   ISS      ISSDI    RCG
    Selection Sort (100)         102.88   81.70    46.90
    Selection Sort (1000)        98.97    79.74    42.04
    Selection Sort (10000)       98.48    79.41    41.37
    Matrix Multiply (4x4)        120.31   84.15    8.92
    Matrix Multiply (40x40)      131.52   89.75    0.89
    Matrix Multiply (400x400)    132.82   90.41    0.33
    ADPCM (32KB)                 113.05   95.01    62.86
    ADPCM (64KB)                 114.51   96.37    64.00
    ADPCM (128KB)                115.36   97.14    64.65
    LZW (1KB)                    126.47   95.54    48.84
    LZW (4KB)                    126.11   98.86    48.25
    LZW (16KB)                   125.98   94.71    48.15
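The overhead formula can be written as a one-line helper and checked against the measured times (a sketch; the function name is invented, the inputs are the selection sort (100 inputs) numbers from the measurement table: ISS instrumented time 42984 ticks, raw time 21187 ticks):

```c
#include <assert.h>

/* Percent overhead of an instrumented run over the raw run:
 * (instrumented - raw) / raw, expressed as a percentage. */
double overhead_pct(double instrumented, double raw) {
    return (instrumented - raw) / raw * 100.0;
}
```

For selection sort (100 inputs) under ISS this evaluates to about 102.88%, matching the chart.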

SLIDE 45

Experiment 2

• Reverse execute each benchmark from the end until the beginning (by executing the reverse versions).
• Measure reverse execution times.

SLIDE 46

Reverse Execution Time

    Benchmark                     ISS      ISSDI     RCG       ISS/RCG   ISSDI/RCG
    Selection sort (100 inputs)   28724    27137     36719     0.78X     0.74X
    Matrix multiply (4x4)         880      784       1325      0.66X     0.60X
    Matrix multiply (40x40)       660189   578556    1088827   0.61X     0.53X
    ADPCM (32KB input data)       656702   628958    765036    0.86X     0.82X
    ADPCM (64KB input data)       —        1257770   1528807   —         0.82X

(Only single entries survive for the remaining rows: Selection sort (1000 inputs): 3516414; Matrix multiply (400x400): 1070219421; ADPCM (128KB input data): 3057176; LZW (1KB input data): 2619106; LZW (4KB input data): 30596864; LZW (16KB input data): 371045637 — their column assignments are not recoverable.)
SLIDE 47

Experiment 3

• Forward execute each instrumented benchmark from the beginning until the end.
• Measure memory usage for state saving.

SLIDE 48

Memory Usage for State Saving

    Benchmark                      ISS (KB)   ISSDI (KB)   RCG (KB)   ISS/RCG   ISSDI/RCG
    Selection sort (100 inputs)    68.2       46.9         7.5        9X        6.3X
    Selection sort (1000 inputs)   6032       4065         151        40X       27X
    Selection sort (10000 inputs)  593389     397913       7237       82X       55X
    Matrix multiply (4x4)          3.6        2.35         0.17       21X       14X
    Matrix multiply (40x40)        2820       1801         12.6       224X      143X
    Matrix multiply (400x400)      2756883    1755006      1250       2206X     1404X
    ADPCM (32KB input data)        1544       1192         616        2.5X      2X
    ADPCM (64KB input data)        3088       2384         1232       2.5X      2X
    ADPCM (128KB input data)       6175       4767         2464       2.5X      2X
    LZW (1KB input data)           5630       3425         98.4       57X       35X
    LZW (4KB input data)           64970      39163        351        185X      112X
    LZW (16KB input data)          784336     471140       1331       589X      354X

SLIDE 49

Experiment 4

• Forward execute the original 400x400 matrix multiply from the beginning to various intermediate points and measure the execution times.
• Reverse execute the 400x400 matrix multiply using RCG from the end to various intermediate points and measure the reverse execution times.

SLIDE 50

Program Re-execute Approach vs. RCG

[Plot for the 400x400 matrix multiply: time (seconds) vs. outermost loop iteration count, comparing forward execution (starting point: the beginning of the program) with reverse execution via RCG (starting point: the end of the program).]

SLIDE 51

Experiment 5

• Extract three slices for each benchmark.
• Reverse execute each benchmark fully, starting from the end of each slice until the beginning of each slice.
• Reverse execute each benchmark along the computed slices only.
• Measure the reverse execution times.

SLIDE 52

Full-scale Reverse Execution vs. Reverse Execution Along a Slice

    Matrix Multiply (4x4), time (microseconds):
        full reverse execution:              slice1 356.5   slice2 189.5   slice3 522.5
        reverse execution along the slice:   slice1 12      slice2 10      slice3 7

    Selection Sort (10 inputs), time (microseconds):
        full reverse execution:              slice1 141.5   slice2 83      slice3 202.5
        reverse execution along the slice:   slice1 40.5    slice2 72      slice3 106

    ADPCM Encoder (128KB input), time (seconds):
        full reverse execution:              slice1 1.44    slice2 1.13    slice3 0.83
        reverse execution along the slice:   slice1 1.1     slice2 0.9     slice3 0.6

    LZW (128KB input), time (seconds):
        full reverse execution:              slice1 1.96    slice2 54.22   slice3 114.34
        reverse execution along the slice:   slice1 0.7     slice2 19.8    slice3 40.5

SLIDE 53

Full-scale Reverse Execution vs. Reverse Execution Along a Slice

    Matrix Multiply (log scale), time (microseconds):
        full reverse execution:              4x4: 3.56E+02    40x40: 2.72E+05    400x400: 2.68E+08
        reverse execution along the slice:   4x4: 9.67E+00    40x40: 1.63E+01    400x400: 1.41E+02

    Selection Sort (log scale), time (microseconds):
        full reverse execution:              10 integers: 1.42E+02    100: 1.33E+04    1000: 1.30E+06
        reverse execution along the slice:   10 integers: 7.28E+01    100: 6.71E+03    1000: 6.54E+05

SLIDE 54

Experiment 6

• Extract three slices for each benchmark.
• Measure the average runtime memory requirement for reverse execution along the three slices with RCGS.
• Measure the average runtime memory requirement for reverse execution along the three slices with the ISS plus execution trajectory (ET) approach.

SLIDE 55

Runtime Memory Requirements

[Bar charts: average runtime memory requirement of RCGS vs. ISS+ET for Selection Sort (10 inputs, bytes), Matrix Multiply (4x4, bytes), ADPCM Encoder (128KB input, kilobytes) and LZW (128KB input, megabytes).]

RCGS: RCG with Slicing; ET: Execution Trajectory

SLIDE 56

Reverse Debugger

Debugger GUI features: execute forward, step forward, execute backward, step backward; memory window, breakpoint window, register window, source window.

SLIDE 57

Reverse Debugger

    Reverse Debugger Code Specs
    Number of C lines    ~7000
    Number of files      19

SLIDE 58

Conclusion

• Reduced debugging time with localized re-executions.
• Very low time and memory overheads in forward execution by using reverse code.
• Reverse execution up to an assembly instruction level granularity.
• Dynamic slicing support to speed up reverse execution without an execution trajectory requirement.

SLIDE 59

Publications

• T. Akgul, V. J. Mooney and S. Pande, “A Fast Assembly Level Reverse Execution Method via Dynamic Slicing,” accepted for publication in Proceedings of the 26th International Conference on Software Engineering (ICSE'04), May 2004.
• T. Akgul and V. J. Mooney, “Assembly Instruction Level Reverse Execution for Debugging,” submitted to Transactions on Software Engineering and Methodology (TOSEM) in December 2002, accepted with minor revision.
• T. Akgul and V. J. Mooney, “Instruction-level Reverse Execution for Debugging,” Proceedings of the Workshop on Program Analysis for Software Tools and Engineering (PASTE'02), pp. 18-25, November 2002.
• T. Akgul and V. J. Mooney, “Instruction-level Reverse Execution for Debugging,” Technical Report GIT-CC-02-49, September 2002. http://codesign.ece.gatech.edu/publications/index.htm
• T. Akgul, P. Kuacharoen, V. Mooney and V. Madisetti, “A Debugger RTOS for Embedded Systems,” Proceedings of the 27th EUROMICRO Conference (EUROMICRO'01), pp. 264-269, September 2001.
• P. Kuacharoen, T. Akgul, V. Mooney and V. Madisetti, “Adaptability, Extensibility, and Flexibility in Real-Time Operating Systems,” Proceedings of the EUROMICRO Symposium on Digital Systems Design (EUROMICRO'01), pp. 400-405, September 2001.
• T. Akgul, P. Kuacharoen, V. J. Mooney and V. K. Madisetti, “A Debugger Operating System for Embedded Systems,” U.S. Patent Application, no. 20030074650, April 17, 2003.
• P. Kuacharoen, T. Akgul, V. J. Mooney and V. K. Madisetti, “A Dynamic Operating System,” U.S. Patent Application, no. 20030074487, April 17, 2003.

SLIDE 60

Thank you!