


  1. Discrete-Event Systems and Generalized Semi-Markov Processes
     Reading: Section 1.4 in Shedler or Section 4.1 in Haas
     Peter J. Haas, CS 590M: Simulation, Spring Semester 2020

  2. Discrete-Event Systems and Generalized Semi-Markov Processes (outline)
     - Discrete-Event Stochastic Systems
     - The GSMP Model
     - Simulating GSMPs
     - Generating Clock Readings: Inversion Method
     - Markovian and Semi-Markovian GSMPs

  3. Discrete-Event Stochastic Systems
     - Stochastic state transitions occur at an increasing sequence of random times
     - How to model the underlying process (X(t) : t ≥ 0)?
       - Generalized semi-Markov processes (GSMPs)
       - The basic model of a discrete-event system

  4. GSMP Overview
     - Events associated with a state "compete" to trigger the next state transition
     - Each event has its own distribution for determining the next state
     - New events: associated with the new state but not the old state, or associated with the new state after just triggering the state transition
       - Clock is set with the time until the event occurs (runs down to 0)
     - Old events: associated with the old and new states, did not trigger the transition
       - Clock continues to run down
     - Canceled events: associated with the old state, but not the new state
       - Clock reading is discarded
     - Clocks can run down at state-dependent speeds

  5. Clock-Reading Plot
     [figure: clock readings for a given event plotted against time, showing a clock being set, a cancellation, and the event occurring when its clock reaches 0]

  6. GSMP Building Blocks
     - S: a (finite or countably infinite) set of states
     - E = {e1, e2, ..., eM}: a finite set of events
     - E(s) ⊆ E: the set of active events in state s ∈ S
     - p(s′; s, E*): probability that the new state = s′ when the events in E* simultaneously occur in s
       - Write p(s′; s, e*) if E* = {e*} (unique trigger event)
     - r(s, e): the nonnegative finite speed at which the clock for e runs down in state s
       - Typically r(s, e) = 1
       - Set r(s, e) = 0 to model a "preempt-resume" service discipline
     - F(·; s′, e′, s, E*): cdf of the new clock reading for e′ after an s → s′ transition triggered by E*
     - µ: initial distribution for state and clock readings
       - Assume initial state s ∼ ν and clock readings ∼ F0(·; e, s)

  7. New and Old Events
     [figure only]

  8. Example: GI/G/1 Queue
     - Assume that the interarrival-time dist'n Fa and the service-time dist'n Fs are continuous (no simultaneous event occurrences)
     - Assume that at time t = 0 a job arrives to an empty system
     - X(t) = # of jobs in service or waiting in queue at time t
     Can define (X(t) : t ≥ 0) as a GSMP:
     - S = {0, 1, 2, ...}
     - E = {e1, e2}, where e1 = "arrival" and e2 = "completion of service"
     - E(s) = {e1, e2} if s > 0, and E(s) = {e1} if s = 0
     - p: p(s + 1; s, e1) = 1 and p(s − 1; s, e2) = 1
     - F(x; s′, e′, s, e*) = Fa(x) if e′ = e1, and Fs(x) if e′ = e2
     - r(s, e) = 1 for all s, e
     - Initial dist'n: ν(1) = 1, with initial clock readings ∼ Fa and Fs
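The building blocks above translate almost directly into code. The sketch below encodes E(s), p, F, and r for the GI/G/1 queue; the function names are illustrative, and exponential distributions are substituted for Fa and Fs purely for concreteness (the M/M/1 special case).

```python
import random

# GI/G/1 GSMP building blocks (e1 = "arrival", e2 = "completion of service").
# The rates below are assumptions for illustration only.
ARRIVAL_RATE, SERVICE_RATE = 1.0, 1.5

def active_events(s):
    """E(s): the arrival event e1 is always active; the service-completion
    event e2 is active only when a job is present (s > 0)."""
    return {"e1", "e2"} if s > 0 else {"e1"}

def next_state(s, trigger):
    """p(s'; s, e*): arrivals increment the queue length, completions
    decrement it, both deterministically."""
    return s + 1 if trigger == "e1" else s - 1

def new_clock(event):
    """F(x; s', e', s, e*): depends only on the event being set --
    a draw from Fa for arrivals, from Fs for service completions."""
    if event == "e1":
        return random.expovariate(ARRIVAL_RATE)   # draw from Fa
    return random.expovariate(SERVICE_RATE)       # draw from Fs

def speed(s, event):
    """r(s, e) = 1 for all states and events."""
    return 1.0
```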

  9. A More Complex Example: Patrolling Repairman
     - See handout for details
       - Provides an example of how to concisely express GSMP building blocks
     Specifying a GSMP can be complex and time-consuming, so why do it?
     - Direct guidance for coding (helps catch "corner cases")
     - Communicates the model at a high level (vs. poring through code)
     - Theory for GSMPs can help in establishing important properties of the simulation
       - Stability (i.e., convergence to steady state), so that steady-state estimation problems are well defined
       - Validity of specific simulation output-analysis methods, so that estimates are correct

  10. GSMPs and GSSMCs
     A GSMP is formally defined in terms of a general state-space Markov chain (GSSMC) ((Sn, Cn) : n ≥ 0):
     - Sn = state just after the nth transition
     - Cn = (Cn,1, Cn,2, ..., Cn,M) = clock readings just after the nth transition
     - See the Haas or Shedler books for the definitions of the transition kernel P((s, c), A) and the initial distribution µ

  11. GSMP Definition
     - Holding time: t*(s, c) = min_{i : ei ∈ E(s)} ci / r(s, ei)
     - nth state-transition time: ζn = Σ_{k=0}^{n−1} t*(Sk, Ck)
     - # of state transitions in [0, t]: N(t) = max{n ≥ 0 : ζn ≤ t}
     - Let Δ ∉ S and set X(t) = S_N(t) if N(t) < ∞, and X(t) = Δ if N(t) = ∞

  12. GSMP Definition in a Picture
     [figure: timeline starting at t = 0 with transition times ζ0 < ζ1 < ζ2 < ζ3 < ζ4; state Sn holds on [ζn, ζn+1), with holding time t*(Sn, Cn) and N(t) = n there; e.g., X(t) = S3 when ζ3 ≤ t < ζ4]

  13. Discrete-Event Systems and Generalized Semi-Markov Processes (outline)
     - Discrete-Event Stochastic Systems
     - The GSMP Model
     - Simulating GSMPs
     - Generating Clock Readings: Inversion Method
     - Markovian and Semi-Markovian GSMPs

  14. Sample Path Generation
     GSMP Simulation Algorithm (Variable Time-Advance)
     1. (Initialization) Select s ∼ ν. For each ei ∈ E(s) generate a clock reading ci ∼ F0(·; ei, s). Set ci = 0 for ei ∉ E(s).
     2. Determine the holding time t*(s, c) and the set of trigger events E* = E*(s, c) = {ei : ci / r(s, ei) = t*(s, c)}.
     3. Generate the next state s′ ∼ p(·; s, E*).
     4. For each ei ∈ N(s′; s, E*), generate c′i ∼ F(·; s′, ei, s, E*).
     5. For each ei ∈ O(s′; s, E*), set c′i = ci − t*(s, c) r(s, ei).
     6. For each ei ∈ (E(s) − E*) − E(s′), set c′i = 0 (i.e., cancel event ei).
     7. Set s = s′ and c = c′, and go to Step 2.
     (Here c = (c1, c2, ..., cM) and similarly for c′.)
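Steps 1–7 can be sketched for the GI/G/1 example. The code below is a minimal illustration, not the books' code: exponential Fa and Fs are assumed for concreteness, and the event names and the `simulate_gig1` signature are invented.

```python
import random

def simulate_gig1(lam=1.0, mu=1.5, horizon=100.0, seed=42):
    """Variable time-advance simulation of the GI/G/1 GSMP (with exponential
    Fa and Fs for concreteness). Returns transition times and states."""
    rng = random.Random(seed)
    # Step 1: at t = 0 a job arrives to an empty system, so s = 1 and both
    # clocks are set; r(s, e) = 1 throughout.
    s = 1
    clocks = {"arrival": rng.expovariate(lam),
              "service": rng.expovariate(mu)}
    t, times, states = 0.0, [0.0], [1]
    while t < horizon:  # the final recorded event may slightly overshoot
        active = {"arrival"} | ({"service"} if s > 0 else set())
        # Step 2: holding time and trigger event (ties occur with prob. 0).
        trigger = min(active, key=lambda e: clocks[e])
        hold = clocks[trigger]
        t += hold
        # Step 3: deterministic state transition.
        s = s + 1 if trigger == "arrival" else s - 1
        # Step 5: old events keep running down.
        for e in active - {trigger}:
            clocks[e] -= hold
        # Step 4: set fresh clocks for new events.
        if trigger == "arrival":
            clocks["arrival"] = rng.expovariate(lam)
            if s == 1:  # service becomes newly active
                clocks["service"] = rng.expovariate(mu)
        elif s > 0:     # next service starts immediately
            clocks["service"] = rng.expovariate(mu)
        # Step 6 (cancellation) never occurs in this model: an event active
        # in the old state is active or newly set in the new state.
        times.append(t)
        states.append(s)
    return times, states
```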

  15. Sample Path Generation, Continued
     - Algorithm generates the sequence of states (Sn : n ≥ 0), clock-reading vectors (Cn : n ≥ 0), and holding times (t*(Sn, Cn) : n ≥ 0)
     - Transition times (ζn : n ≥ 0) and the continuous-time process (X(t) : t ≥ 0) are computed as described previously
     - Use the usual techniques to estimate quantities like E[f(X(t))], or even

       α = E[(1/t) ∫0^t f(X(u)) du]
         = E[(1/t) (Σ_{n=0}^{N(t)−1} f(Sn) t*(Sn, Cn) + f(S_N(t)) (t − ζ_N(t)))]

     - Flow charts and diagrams can be helpful (see Law, pp. 30–32 for an example)
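The bracketed estimator above (full holding intervals plus a partial term for the interval containing t) can be computed directly from a piecewise-constant sample path. The helper below is a hypothetical sketch; the name and signature are invented.

```python
def time_average(times, states, t, f=lambda s: s):
    """Estimate (1/t) * integral_0^t f(X(u)) du for a piecewise-constant
    path where states[n] holds on [times[n], times[n+1])."""
    total, n = 0.0, 0
    # Sum f(S_n) * t*(S_n, C_n) over intervals fully contained in [0, t].
    while n + 1 < len(times) and times[n + 1] <= t:
        total += f(states[n]) * (times[n + 1] - times[n])
        n += 1
    # Partial term f(S_N(t)) * (t - zeta_N(t)) for the last interval.
    total += f(states[n]) * (t - times[n])
    return total / t

# Hypothetical sample path: X(u) = 1 on [0,2), 3 on [2,5), 0 from 5 onward.
print(time_average([0.0, 2.0, 5.0], [1, 3, 0], t=10.0))  # (1*2 + 3*3 + 0*5)/10 = 1.1
```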

  16. Discrete-Event Systems and Generalized Semi-Markov Processes (outline)
     - Discrete-Event Stochastic Systems
     - The GSMP Model
     - Simulating GSMPs
     - Generating Clock Readings: Inversion Method
     - Markovian and Semi-Markovian GSMPs

  17. Generating Clock Readings: Example
     Exponential distribution with rate (intensity) λ:
     - pdf: f(x; λ) = λ e^(−λx) if x ≥ 0, and 0 if x < 0
     - cdf: F(x; λ) = 1 − e^(−λx) if x ≥ 0, and 0 if x < 0
     - Mean = 1/λ
     Claim: If U ∼ Uniform(0, 1) and V = −(ln U)/λ, then V ∼ exp(λ)
     Proof: P(V > x) = P(−(ln U)/λ > x) = P(U < e^(−λx)) = e^(−λx)
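The claim is easy to check numerically (the helper name is illustrative): generate many values of V = −(ln U)/λ and compare the sample mean with 1/λ.

```python
import math
import random

def exp_via_inversion(lam, rng):
    """V = -ln(U)/lam has the exp(lam) distribution when U ~ Uniform(0,1)."""
    return -math.log(rng.random()) / lam

rng = random.Random(0)
lam = 2.0
n = 200_000
sample_mean = sum(exp_via_inversion(lam, rng) for _ in range(n)) / n
print(sample_mean)  # should be close to 1/lam = 0.5
```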

  18. The Inversion Method: Special Case
     Suppose that the cdf F(x) = P(V ≤ x) is increasing and continuous
     Claim: If U ∼ Uniform(0, 1) and V = F⁻¹(U), then V ∼ F
     Proof: P(V ≤ x) = P(F⁻¹(U) ≤ x) = P(U ≤ F(x)) = F(x)

  19. Example: Exponential Distribution
     F(x) = 1 − e^(−λx)
     Solve u = F(x): e^(−λx) = 1 − u, so x = −ln(1 − u)/λ, i.e., F⁻¹(u) = −ln(1 − u)/λ
     Since 1 − U ∼ Uniform(0, 1), taking V = −(ln U)/λ is a special case of the inversion method

  20. The Inversion Method: General Case
     Generalized inverse: F⁻¹(u) = min{x : F(x) ≥ u}
     [figure: a cdf F(x) with flat stretches and jumps; F⁻¹(u) is the smallest x at which F reaches u]
     Claim still holds: F⁻¹(u) ≤ x ⇔ u ≤ F(x) by definition
     Exercise: Show that the inversion method = the naive method for discrete RVs
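A sketch for the exercise (the helper name is invented): for a discrete RV on {0, 1, ..., M−1}, the generalized inverse min{x : F(x) ≥ u} found by a sorted-cdf search coincides with the naive method of scanning cumulative probabilities until they reach u.

```python
import bisect

def discrete_inverse(probs, u):
    """Generalized inverse F^{-1}(u) = min{x : F(x) >= u} for a discrete RV
    taking values 0, 1, ..., len(probs)-1 with the given probabilities."""
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    # bisect_left returns the first index x with cdf[x] >= u, i.e.,
    # min{x : F(x) >= u} -- identical to the naive sequential scan.
    return bisect.bisect_left(cdf, u)

probs = [0.2, 0.5, 0.3]               # hypothetical pmf
print(discrete_inverse(probs, 0.1))   # 0.1 <= F(0) = 0.2, so x = 0
print(discrete_inverse(probs, 0.65))  # F(0) < 0.65 <= F(1) = 0.7, so x = 1
```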

  21. Discrete-Event Systems and Generalized Semi-Markov Processes (outline)
     - Discrete-Event Stochastic Systems
     - The GSMP Model
     - Simulating GSMPs
     - Generating Clock Readings: Inversion Method
     - Markovian and Semi-Markovian GSMPs

  22. Markovian GSMPs
     Properties of the Exponential Distribution: if X ∼ exp(λ) and Y ∼ exp(µ), then
     1. min(X, Y) ∼ exp(λ + µ) [independent of whether the min is X or Y]
     2. P(X < Y) = λ/(λ + µ)
     3. P(X > a + b | X > a) = e^(−λb) [memoryless property]
     Properties 1 and 2 generalize to multiple exponential RVs
     A GSMP event e′ is simple if F(·; s′, e′, s, E*) ≡ F(·; e′) and F0(·; e′, s) ≡ F(·; e′)
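Properties 1 and 2 are easy to check by Monte Carlo. An illustrative sketch, with assumed rates λ = 1 and µ = 2:

```python
import random

rng = random.Random(1)
lam, mu = 1.0, 2.0
n = 200_000
min_sum, x_wins = 0.0, 0
for _ in range(n):
    x = rng.expovariate(lam)  # X ~ exp(lam)
    y = rng.expovariate(mu)   # Y ~ exp(mu)
    min_sum += min(x, y)
    x_wins += x < y

# Property 1: min(X, Y) ~ exp(lam + mu), so E[min(X, Y)] = 1/(lam + mu).
print(min_sum / n)  # should be close to 1/3
# Property 2: P(X < Y) = lam / (lam + mu).
print(x_wins / n)   # should be close to 1/3
```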

  23. Markovian GSMPs, Continued
     Suppose that all events in a GSMP are simple with exponential clock-setting dist'ns
     Key observation: By the memoryless property, whenever the GSMP jumps into a state s, the clock readings for events in E(s) are mutually independent and exponentially distributed
     Simplified Simulation Algorithm (no clock readings needed)
     1. (Initialization) Select s ∼ ν
     2. Generate holding time t* ∼ exp(λ), where λ = λ(s) = Σ_{ei ∈ E(s)} λi
     3. Select ei ∈ E(s) as the trigger event with probability λi/λ
     4. Generate the next state s′ ∼ p(·; s, ei)
     5. Set s = s′ and go to Step 2
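For an M/M/1 queue (every event simple and exponential), the simplified algorithm becomes the familiar competing-rates simulation. A minimal sketch with invented names:

```python
import random

def simulate_mm1(lam=1.0, mu=1.5, horizon=50.0, seed=7):
    """Simplified Markovian-GSMP simulation of an M/M/1 queue:
    only the total rate and trigger probabilities are needed, no clocks."""
    rng = random.Random(seed)
    s, t = 0, 0.0
    times, states = [0.0], [0]
    while True:
        # Active events and their rates: arrival always, service iff s > 0.
        rates = {"arrival": lam}
        if s > 0:
            rates["service"] = mu
        total = sum(rates.values())
        t += rng.expovariate(total)  # Step 2: holding time ~ exp(total rate)
        if t >= horizon:
            return times, states
        # Step 3: trigger event chosen with probability rate_i / total.
        trigger = ("arrival"
                   if rng.random() * total < rates["arrival"]
                   else "service")
        # Step 4: deterministic transition for this model.
        s = s + 1 if trigger == "arrival" else s - 1
        times.append(t)
        states.append(s)
```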
