SLIDE 1

Horizon

Runtime Efficient Event Scheduling in Multi-threaded Network Simulation

Georg Kunz, Mirko Stoffers, James Gross, Klaus Wehrle

http://www.comsys.rwth-aachen.de/

Communication and Distributed Systems

OMNeT++ Workshop, SimuTools, March 2011

SLIDE 3

Motivation

 Need for Complex Network Simulation Models
   Detailed channel and PHY characteristics
   Large-scale P2P and Internet backbone models
   High processing and runtime demand

 Proliferation of Multi-processor Systems
   Desktop: 4-8 cores; servers: 24 cores
   "Desktop Cluster": cheap, powerful commodity hardware

 Utilize Parallelization to Cut Runtimes?

SLIDE 6

Motivation: Downside of Parallelization

 Parallelization Introduces Overhead
   Thread synchronization, management of shared data
   Increased management overhead per event
   Negative impact on events of low complexity

 Dilemma / Tradeoff: Performance vs. Overhead

 Minimize Parallelization Overhead

SLIDE 7

Horizon: Approach

 Horizon
   Focus on multi-processor systems
   Centralized architecture
   Conservative synchronization
   Determines independent events in the simulation model

 Expanded Events
   Modeling paradigm
   Per-event lookahead
   Identifies independent events

[Figure: the simulation model mapped onto a computing cluster / CPUs]

SLIDE 15

Horizon: Expanded Events

 Expanded Events
   Model processes that span a period of time
   Augment discrete events with durations
   Discrete events thus span a period of simulated time

[Figure: an expanded event spans tstart to tend; processing is triggered at tstart and results are fetched at tend. This interval forms the parallelization window, which a second expanded event may overlap.]

 Independent Events
   Events starting between tstart and tend
   Do not depend on results generated by the overlapping event
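The window test above can be sketched in a few lines; a minimal illustration under our own naming (`ExpandedEvent` and `startsInWindow` are not Horizon's actual API):

```cpp
#include <cassert>

// An expanded event: a discrete event augmented with a duration.
struct ExpandedEvent {
    double tStart;  // discrete start time (simulated seconds)
    double tEnd;    // tStart + modeled processing duration
};

// Events whose start time falls inside the running event's
// [tStart, tEnd) window are candidates for parallel execution --
// provided the model guarantees they do not consume its results.
inline bool startsInWindow(const ExpandedEvent& running, double candidateStart) {
    return candidateStart >= running.tStart && candidateStart < running.tEnd;
}
```

The second condition (not consuming results) is a modeling obligation; the window test alone is only a necessary condition for independence.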

SLIDE 16

Challenges

How to reduce parallelization overhead?

SLIDE 17

Challenges and Solutions

 We Address Two Challenges
   Thread synchronization overhead
   Event scheduling overhead

SLIDE 23

Thread Synchronization Overhead: Challenge

 Master/Worker Architecture
   Master coordinates simulation progress
   Workers do the actual processing
   Synchronization involves:
     Workers waiting for incoming jobs
     Access to shared data structures (future event set, event scheduler, work queue)

 Straightforward Implementation
   Locks, condition variables
   Workers pull jobs from the shared work queue
   If the lock is occupied or no job is available: suspend the thread, free up CPU resources

 This Increases Threading Overhead
   • Sys-calls into the kernel
   • Context switches
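The pull-based design criticized here can be sketched as a conventional condition-variable work queue: every empty-queue wait suspends the worker thread in the kernel, so each hand-off costs sys-calls and context switches. Hypothetical names, not the paper's code:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A minimal pull-based work queue of the kind the slide criticizes.
class PullQueue {
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    bool done_ = false;
public:
    void push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();  // wakes a suspended worker (kernel involvement)
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
    // Worker loop: the thread is suspended whenever no job is available.
    void workerLoop() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return done_ || !jobs_.empty(); });  // suspend here
            if (jobs_.empty()) return;  // done and drained
            auto job = std::move(jobs_.front());
            jobs_.pop();
            lk.unlock();
            job();
        }
    }
};
```

The functional behavior is correct; the overhead argument is about what `cv_.wait` costs per event when events are cheap.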

SLIDE 24

Thread Synchronization Overhead: Approach

 Challenge
   Suspending threads increases overhead

 Observation
   Simulations run on dedicated hardware
   Freeing up CPUs is needless
   Crucial to minimize the offloading delay

 Approach
   Use busy waiting for synchronization
   Master actively pushes jobs to workers

SLIDE 25

Thread Synchronization Overhead: Solution

 Push-based Event Offloading
   Eliminate the shared work queue
   Introduce a local synchronization buffer per thread
   Spinlock for the future event set

 Synchronization Buffer
   Master assigns jobs to an empty buffer
   Workers spin on an empty buffer

 Additional Benefit
   Master can identify busy threads
   Master then handles the event itself instead of offloading it
   Makes use of the scheduler CPU
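A minimal sketch of the push-based hand-off, assuming a one-slot buffer per worker and a single master. Integer job ids stand in for simulation events, and all names are ours, not Horizon's:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// One-slot synchronization buffer per worker. The master publishes a job id
// into an empty slot; the worker busy waits on the slot instead of
// suspending, so the hand-off needs no sys-call and no context switch.
struct Worker {
    std::atomic<int> slot{0};        // 0 = empty, >0 = job id, <0 = stop
    std::atomic<int> processed{0};

    void loop() {
        for (;;) {
            int job = slot.load(std::memory_order_acquire);
            if (job == 0) continue;                    // spin: buffer empty
            slot.store(0, std::memory_order_release);  // free the slot early
            if (job < 0) return;                       // stop sentinel
            processed.fetch_add(1, std::memory_order_relaxed);  // "execute" job
        }
    }
};

// Master side: spin until the worker's slot is empty, then publish the job.
inline void push(Worker& w, int job) {
    while (w.slot.load(std::memory_order_acquire) != 0) { /* spin */ }
    w.slot.store(job, std::memory_order_release);
}
```

Because the master observes which slots are still occupied, it can also decide to execute an event on its own CPU rather than wait, which is the "additional benefit" above.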

SLIDE 28

Challenges and Solutions

 We Address Two Challenges
   Thread synchronization overhead
   Event scheduling overhead

SLIDE 33

Event Scheduling Overhead: Challenge

 Integrate Expanded Events
   One discrete event marks the start
   Another discrete event marks the end
   Overlapping events start before the barrier event

 Straightforward Implementation
   Insert a barrier event upon offloading
   Wait at the barrier event until execution has finished

 This Doubles the Overhead per Event
   • Creation, deletion of events
   • Insertion, removal from the FES
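The straightforward barrier-event integration can be sketched against a toy future event set (hypothetical types, not OMNeT++'s actual message classes):

```cpp
#include <cassert>
#include <queue>
#include <vector>

enum class Kind { Start, Barrier };

struct Ev {
    double t;
    Kind kind;
    bool operator>(const Ev& o) const { return t > o.t; }
};

// Min-heap ordered by timestamp: a toy future event set (FES).
using FES = std::priority_queue<Ev, std::vector<Ev>, std::greater<Ev>>;

// Offloading an expanded event creates and inserts an EXTRA barrier event
// at tEnd; reaching it later forces the scheduler to wait for the job and
// then remove/delete the barrier -- roughly twice the FES bookkeeping that
// a plain discrete event would need.
inline void offloadWithBarrier(FES& fes, double tStart, double tEnd) {
    fes.push({tStart, Kind::Start});
    fes.push({tEnd, Kind::Barrier});
}
```

This is exactly the cost the next slide's approach removes: with at most #CPUs concurrent barriers, the barrier times can live outside the FES entirely.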

SLIDE 34

Event Scheduling Overhead: Approach

 Observations
   With push-based synchronization:
     Upper bound for simultaneously offloaded events: #CPUs
     Upper bound for simultaneously existing barriers: #CPUs

 Approach
   Avoid insertion into the locked(!) message queue
   Each thread maintains the barrier time of its current event
   A pointer to the earliest barrier enables fast lookup

[Figure: each worker's job slot stores "tend: barrier time" next to the OMNeT++ message]
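The per-thread barrier bookkeeping might look as follows. The slide's "pointer to the earliest barrier" caches the minimum that this sketch recomputes by scanning the (at most #CPUs) slots; names and the dispatch rule's exact boundary handling are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// One slot per worker thread: the barrier time (tend) of its current job.
struct JobSlot {
    bool busy = false;
    double barrier = 0.0;  // tend of the event this worker is executing
};

// Earliest barrier among all running jobs (infinity if all slots are idle).
inline double earliestBarrier(const std::vector<JobSlot>& slots) {
    double b = std::numeric_limits<double>::infinity();
    for (const auto& s : slots)
        if (s.busy) b = std::min(b, s.barrier);
    return b;
}

// The next FES event may be offloaded only if it starts before every
// running event's tend, i.e. inside all current parallelization windows.
inline bool mayDispatch(double nextEventTime, const std::vector<JobSlot>& slots) {
    return nextEventTime < earliestBarrier(slots);
}
```

No barrier events are created, inserted, or deleted: the scheduler just compares the FES head against this earliest barrier time.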

SLIDE 35

Event Scheduling Overhead: Solution

Example Schedule (simulated time vs. future event set, animated over the following frames):

 The FES holds expanded events at (tstart 0.0 s, tend 1.0 s), (tstart 0.5 s, tend 0.8 s), and (tstart 1.2 s, tend 1.5 s); all worker job slots are empty (tend: -).
 Offloading the 0.0 s event records barrier time 1.0 s in its slot.
 The 0.5 s event starts inside that window and runs in parallel; its slot records barrier time 0.8 s.
 A newly scheduled event (tstart 0.9 s, tend 1.1 s) is offloaded once the 0.8 s job has completed and its slot is cleared; its slot records barrier time 1.1 s.
 When all running jobs finish, the slots return to "-" and the 1.2 s event is offloaded with barrier time 1.5 s.
 Throughout, no barrier events are inserted into the FES; the scheduler only consults the slots' barrier times.

SLIDE 48

Evaluation

How does it perform?

SLIDE 49

Evaluation: Model

 Design Goal
   Measure event-handling overhead

 "Null" Simulation Model
   110 independent modules
   A null module only re-schedules self-messages; no other computations

 Execute 5.5 Million Events
   Execution time == overhead
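The measurement idea can be mimicked with a toy event loop in which every handler merely re-schedules a self-message, so elapsed wall-clock time is almost pure event-handling overhead. This is a sketch of the principle, not the actual OMNeT++ null module:

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

struct SelfMsg {
    double t;
    int module;
    bool operator>(const SelfMsg& o) const { return t > o.t; }
};

// Run numEvents "null" events across numModules independent modules.
// Each handler does no work at all; it only re-schedules its self-message,
// so timing this loop measures scheduler/FES overhead, not model work.
inline std::uint64_t runNullModel(int numModules, std::uint64_t numEvents) {
    std::priority_queue<SelfMsg, std::vector<SelfMsg>, std::greater<SelfMsg>> fes;
    for (int m = 0; m < numModules; ++m) fes.push({0.0, m});
    std::uint64_t handled = 0;
    while (handled < numEvents) {
        SelfMsg msg = fes.top();
        fes.pop();
        ++handled;                            // "handle": no computation
        fes.push({msg.t + 1.0, msg.module});  // re-schedule self-message
    }
    return handled;
}
```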

SLIDE 50

Evaluation: Thread Synchronization Overhead

[Chart: ~9.5x reduction; ~1000x reduction]

SLIDE 51

Evaluation: Analysis of Context Switches

[Charts: pull-based vs. push-based thread synchronization]

SLIDE 52

Evaluation: Event Scheduling Overhead

[Chart: ~1.5x reduction]

SLIDE 53

Conclusions

The take-away (barrier) message…

SLIDE 54

Conclusions

 Parallelization Increases Overhead
   Thread synchronization
   Event scheduling

 Two Approaches to Mitigate Overhead
   Push-based thread synchronization minimizes context switches
   Local barrier information replaces barrier messages

 Overhead Reduction
   Push-based synchronization: ~9.5x reduction
   Barrier algorithm: ~1.5x reduction
   Combined: ~14x reduction

SLIDE 55

Thank You

Questions?

SLIDE 56

Backup Slides

Just in case someone asks…

SLIDE 57

Time Calibration

 How to Obtain Accurate Timing Information? Utilize existing techniques:

 Emulation
   Accurate profiling on emulated hardware
   Automatic simulation calibration
   Applicable to simple hardware platforms only

 Protocol Specifications
   Independent of hardware platform

 Expert Knowledge
   Requires experience and careful judgment

[Figure: techniques plotted along axes of accuracy vs. development speed]

SLIDE 58

Parallel Scheduling

 Parallel Scheduling
   Offload independent events to worker CPUs

[Figure: processing units executing events en-2 … en+4 over simulated time; independent events lie between tstart and tend]

 Causal Correctness
   Increasing timestamp order among dependent events

 Data Integrity
   Compose the model of self-contained functional units
   Functional units correspond to the concept of logical processes
