

  1. Continuous-time Markov Chains
     Gonzalo Mateos
     Dept. of ECE and Goergen Institute for Data Science, University of Rochester
     gmateosb@ece.rochester.edu — http://www.ece.rochester.edu/~gmateosb/
     October 31, 2016
     Introduction to Random Processes

  2. Outline
     ◮ Continuous-time Markov chains
     ◮ Transition probability function
     ◮ Determination of transition probability function
     ◮ Limit probabilities and ergodicity

  3. Definition
     ◮ Continuous-time positive variable t ∈ [0, ∞)
     ◮ Time-dependent random state X(t) takes values on a countable set
     ◮ In general denote states as i = 0, 1, 2, ..., i.e., here the state space is N
     ◮ If X(t) = i we say "the process is in state i at time t"
     ◮ Def: Process X(t) is a continuous-time Markov chain (CTMC) if
           P(X(t+s) = j | X(s) = i, X(u) = x(u), u < s) = P(X(t+s) = j | X(s) = i)
     ◮ Markov property ⇒ Given the present state X(s)
       ⇒ Future X(t+s) is independent of the past X(u) = x(u), u < s
     ◮ In principle need to specify the functions P(X(t+s) = j | X(s) = i)
       ⇒ For all times t and s, and for all pairs of states (i, j)

  4. Notation and homogeneity
     ◮ Notation
       ◮ X[s:t] state values for all times s ≤ u ≤ t, includes borders
       ◮ X(s:t) values for all times s < u < t, borders excluded
       ◮ X(s:t] values for all times s < u ≤ t, exclude left, include right
       ◮ X[s:t) values for all times s ≤ u < t, include left, exclude right
     ◮ Homogeneous CTMC if P(X(t+s) = j | X(s) = i) is invariant for all s
       ⇒ We restrict consideration to homogeneous CTMCs
     ◮ Still need P_ij(t) := P(X(t+s) = j | X(s) = i) for all t and pairs (i, j)
       ⇒ P_ij(t) is known as the transition probability function. More later
     ◮ Markov property and homogeneity make the description somewhat simpler

  5. Transition times
     ◮ T_i = time until transition out of state i into any other state j
     ◮ Def: T_i is a random variable called the transition time, with ccdf
           P(T_i > t) = P(X(0:t] = i | X(0) = i)
     ◮ Probability of T_i > t+s given that T_i > s? Use the ccdf expression
           P(T_i > t+s | T_i > s) = P(X(0:t+s] = i | X[0:s] = i)
                                  = P(X(s:t+s] = i | X[0:s] = i)
                                  = P(X(s:t+s] = i | X(s) = i)
                                  = P(X(0:t] = i | X(0) = i)
     ◮ Used that X[0:s] = i is given, the Markov property, and homogeneity
     ◮ From the definition of T_i ⇒ P(T_i > t+s | T_i > s) = P(T_i > t)
       ⇒ This is the memoryless property, so transition times are exponential random variables
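The memoryless identity derived above can be checked numerically. The sketch below draws exponential transition times and compares the conditional ccdf P(T > t+s | T > s) against P(T > t); the rate ν and the times s, t are arbitrary choices, not values from the slides.

```python
import numpy as np

# Numerical check of the memoryless property:
# P(T > t + s | T > s) = P(T > t) for an exponential transition time.
rng = np.random.default_rng(7)
nu, s, t = 2.0, 0.4, 0.3
T = rng.exponential(1.0 / nu, size=1_000_000)

lhs = (T > t + s).mean() / (T > s).mean()   # conditional ccdf estimate
rhs = (T > t).mean()

print(lhs, rhs)  # both ≈ exp(-nu * t) ≈ 0.5488
```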

  6. Alternative definition
     ◮ Exponential transition times are a fundamental property of CTMCs
       ⇒ Can be used as an "algorithmic" definition of CTMCs
     ◮ A continuous-time random process X(t) is a CTMC if
       (a) Transition times T_i are exponential random variables with mean 1/ν_i
       (b) When they occur, transitions go from state i to j with probability P_ij, where
               Σ_{j=1}^∞ P_ij = 1,   P_ii = 0
       (c) Transition times T_i and the transitioned state j are independent
     ◮ Define the matrix P grouping the transition probabilities P_ij
     ◮ CTMC states evolve as in a discrete-time Markov chain
       ⇒ State transitions occur at exponential intervals T_i ~ exp(ν_i)
       ⇒ As opposed to occurring at fixed intervals
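The algorithmic definition translates directly into a simulator: hold an exponential time with mean 1/ν_i, then jump according to row i of P. The function name and the two-state example below are illustrative choices, not from the slides.

```python
import numpy as np

def simulate_ctmc(nu, P, x0, t_max, rng=None):
    """Simulate a CTMC path from rates nu[i] and embedded transition
    matrix P (zero diagonal). Returns jump times and states held."""
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        # (a) hold an exponential time with mean 1 / nu[x]
        t += rng.exponential(1.0 / nu[x])
        if t >= t_max:
            break
        # (b) jump to j with probability P[x, j]; (c) independently of the hold
        x = rng.choice(len(nu), p=P[x])
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

# Two-state example: rates nu = (1, 2); the embedded chain always swaps states
nu = np.array([1.0, 2.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
times, states = simulate_ctmc(nu, P, x0=0, t_max=50.0,
                              rng=np.random.default_rng(0))
```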

  7. Embedded discrete-time Markov chain
     ◮ Consider a CTMC with transition matrix P and rates ν_i
     ◮ Def: The CTMC's embedded discrete-time MC has transition matrix P
     ◮ The transition probabilities P describe a discrete-time MC
       ⇒ No self-transitions (P_ii = 0, P's diagonal is null)
       ⇒ Can use the underlying discrete-time MC to study the CTMC
     ◮ Def: State j is accessible from i if it is accessible in the embedded MC
     ◮ Def: States i and j communicate if they do so in the embedded MC
       ⇒ Communication is a class property
     ◮ Recurrence, transience, ergodicity. Class properties ... More later

  8. Transition rates
     ◮ Expected value of the transition time T_i is E[T_i] = 1/ν_i
       ⇒ Can interpret ν_i as the rate of transition out of state i
       ⇒ Of these transitions, a fraction P_ij are into state j
     ◮ Def: The transition rate from i to j is q_ij := ν_i P_ij
     ◮ Transition rates offer yet another specification of CTMCs
     ◮ If the q_ij are given, can recover ν_i as
           ν_i = ν_i Σ_{j=1}^∞ P_ij = Σ_{j=1}^∞ ν_i P_ij = Σ_{j=1}^∞ q_ij
     ◮ Can also recover P_ij as
           P_ij = q_ij / ν_i = q_ij ( Σ_{k=1}^∞ q_ik )^{-1}
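Recovering ν_i and P_ij from a table of rates q_ij is a two-line computation. The 3-state rate matrix below is a made-up example chosen so the numbers come out cleanly.

```python
import numpy as np

# Recover nu_i and P_ij from given transition rates q_ij, using
# nu_i = sum_j q_ij and P_ij = q_ij / nu_i from the slide.
Q = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])   # q_ii = 0: no self-transitions

nu = Q.sum(axis=1)       # total rate out of each state
P = Q / nu[:, None]      # embedded-chain transition probabilities

print(nu)  # [3. 1. 4.]
print(P)   # rows sum to 1, zero diagonal
```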

  9. Birth and death process example
     ◮ State X(t) = 0, 1, ... Interpret as the number of individuals
     ◮ Births and deaths occur at state-dependent rates. When X(t) = i:
       ◮ Births ⇒ Individuals added at exponential times with mean 1/λ_i
         ⇒ Birth or arrival rate = λ_i births per unit of time
       ◮ Deaths ⇒ Individuals removed at exponential times with mean 1/µ_i
         ⇒ Death or departure rate = µ_i deaths per unit of time
     ◮ Birth and death times are independent
     ◮ Birth and death (BD) processes are then CTMCs

 10. Transition times and probabilities
     ◮ Q: Transition times T_i? Leave state i ≠ 0 when a birth or death occurs
     ◮ If T_B and T_D are the times to the next birth and death, T_i = min(T_B, T_D)
       ⇒ Since T_B and T_D are exponential, so is T_i, with rate ν_i = λ_i + µ_i
     ◮ When leaving state i, can go to i+1 (birth first) or i−1 (death first)
       ⇒ Birth occurs before death with probability P_{i,i+1} = λ_i / (λ_i + µ_i)
       ⇒ Death occurs before birth with probability P_{i,i−1} = µ_i / (λ_i + µ_i)
     ◮ Leave state 0 only if a birth occurs, so ν_0 = λ_0 and P_01 = 1
       ⇒ If the CTMC leaves 0, it goes to 1 with probability 1
       ⇒ Might not leave 0 if λ_0 = 0 (e.g., to model extinction)
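Both claims above — the minimum of two independent exponentials is exponential with the summed rate, and the birth wins with probability λ_i/(λ_i + µ_i) — can be verified by simulation. The rates below are arbitrary example values.

```python
import numpy as np

# Monte Carlo check for a single state i of a BD process:
# T_i = min(T_B, T_D) has mean 1/(lam + mu), and the birth occurs
# first with probability lam / (lam + mu).
rng = np.random.default_rng(42)
lam, mu, n = 2.0, 3.0, 200_000

t_birth = rng.exponential(1.0 / lam, size=n)
t_death = rng.exponential(1.0 / mu, size=n)
t_leave = np.minimum(t_birth, t_death)

print(t_leave.mean())              # ≈ 1 / (lam + mu) = 0.2
print((t_birth < t_death).mean())  # ≈ lam / (lam + mu) = 0.4
```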

 11. Transition rates
     ◮ Rate of transition from i to i+1 (recall the definition q_ij = ν_i P_ij):
           q_{i,i+1} = ν_i P_{i,i+1} = (λ_i + µ_i) · λ_i / (λ_i + µ_i) = λ_i
     ◮ Likewise, the rate of transition from i to i−1 is
           q_{i,i−1} = ν_i P_{i,i−1} = (λ_i + µ_i) · µ_i / (λ_i + µ_i) = µ_i
     ◮ For i = 0 ⇒ q_01 = ν_0 P_01 = λ_0
     ◮ [Chain diagram: states 0, ..., i−1, i, i+1, ... with birth rates λ_0, ..., λ_{i−1}, λ_i, λ_{i+1} on the rightward transitions and death rates µ_1, ..., µ_i, µ_{i+1} on the leftward transitions]
     ◮ A somewhat more natural representation. Similar to discrete-time MCs

 12. Poisson process example
     ◮ A Poisson process is a BD process with constant λ_i = λ and µ_i = 0
     ◮ State N(t) counts the total number of events (arrivals) by time t
       ⇒ Arrivals occur at a rate of λ per unit time
       ⇒ Transition times are the i.i.d. exponential interarrival times
     ◮ [Chain diagram: states 0, ..., i−1, i, i+1, ... with rate λ on every rightward transition and no leftward transitions]
     ◮ The Poisson process is a CTMC
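Since the interarrival times are i.i.d. exponential with rate λ, the count N(t) is Poisson with mean λt. A minimal simulation sketch (parameter values are made up) accumulates interarrival times until time t and checks both the mean and the variance of the count:

```python
import numpy as np

# The Poisson process as a pure-birth CTMC: N(t) ~ Poisson(lam * t),
# so the sample mean and variance of the counts should both be ~ lam * t.
rng = np.random.default_rng(0)
lam, t, trials = 1.5, 10.0, 50_000

counts = np.empty(trials, dtype=int)
for k in range(trials):
    total, n = 0.0, 0
    while True:
        total += rng.exponential(1.0 / lam)  # next interarrival time
        if total > t:
            break
        n += 1
    counts[k] = n

print(counts.mean())  # ≈ lam * t = 15
print(counts.var())   # ≈ lam * t = 15 (Poisson: mean equals variance)
```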

 13. M/M/1 queue example
     ◮ An M/M/1 queue is a BD process with constant λ_i = λ and µ_i = µ
     ◮ State Q(t) is the number of customers in the system at time t
       ⇒ Customers arrive for service at a rate of λ per unit time
       ⇒ They are serviced at a rate of µ customers per unit time
     ◮ [Chain diagram: states 0, ..., i−1, i, i+1, ... with rate λ on every rightward transition and rate µ on every leftward transition]
     ◮ The M/M stands for Markov arrivals / Markov departures
       ⇒ Implies a Poisson arrival process and exponential service times
       ⇒ The 1 is because there is only one server
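The M/M/1 dynamics follow directly from the BD transition rules above: hold an exponential time with rate λ in state 0 (or λ + µ otherwise), then add a customer with probability λ/(λ + µ) or remove one. The sketch below uses made-up rates and checks a standard fact not derived until the limit-probabilities section: for ρ = λ/µ < 1, the long-run fraction of time the system is empty approaches 1 − ρ.

```python
import numpy as np

# Simulate an M/M/1 queue as a birth-death CTMC and measure the
# fraction of time the system is empty (should approach 1 - rho).
rng = np.random.default_rng(1)
lam, mu, t_max = 1.0, 2.0, 50_000.0

t, q, time_empty = 0.0, 0, 0.0
while t < t_max:
    rate = lam if q == 0 else lam + mu   # nu_0 = lam, nu_i = lam + mu
    hold = rng.exponential(1.0 / rate)
    if q == 0:
        time_empty += hold
    t += hold
    # birth with probability lam / rate, death otherwise
    q += 1 if rng.random() < lam / rate else -1

print(time_empty / t_max)  # ≈ 1 - lam / mu = 0.5
```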

 14. Transition probability function (section outline)
     ◮ Continuous-time Markov chains
     ◮ Transition probability function
     ◮ Determination of transition probability function
     ◮ Limit probabilities and ergodicity

 15. Transition probability function
     ◮ Two equivalent ways of specifying a CTMC
       1) Transition time means 1/ν_i + transition probabilities P_ij
          ⇒ Easier description
          ⇒ Typical starting point for CTMC modeling
       2) Transition probability function P_ij(t) := P(X(t+s) = j | X(s) = i)
          ⇒ More complete description, for all t ≥ 0
          ⇒ Similar in spirit to the n-step probabilities P^n_ij of discrete-time Markov chains
     ◮ Goal: compute P_ij(t) from the transition times and probabilities
       ⇒ Notice two obvious properties: P_ij(0) = 0 for j ≠ i, and P_ii(0) = 1

 16. Roadmap to determine P_ij(t)
     ◮ Goal is to obtain a differential equation whose solution is P_ij(t)
       ⇒ Study the change in P_ij(t) when time changes slightly
     ◮ Separate into two subproblems (divide and conquer)
       ⇒ Transition probabilities for a small time h, P_ij(h)
       ⇒ Transition probabilities at t + h as a function of those at t and h
     ◮ We can combine both results in two different ways
       1) Jump from 0 to t, then to t + h ⇒ Process runs a little longer
          ⇒ Changes where the process is going to ⇒ Forward equations
       2) Jump from 0 to h, then to t + h ⇒ Process starts a little later
          ⇒ Changes where the process comes from ⇒ Backward equations
     ◮ [Chain diagram omitted]
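As a preview of where this roadmap leads (not derived in the slides above): for a finite-state CTMC, collecting the rates into a generator matrix Q with off-diagonal entries q_ij and diagonal entries −ν_i, the forward equations read P′(t) = P(t)Q with P(0) = I. A minimal sketch, assuming a made-up 3-state generator, integrates them with Euler steps:

```python
import numpy as np

# Euler integration of the forward equations P'(t) = P(t) Q, P(0) = I.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  3.0, -4.0]])   # rows sum to 0: diagonal is -nu_i

def transition_function(Q, t, steps=50_000):
    """Approximate the transition probability function P(t)."""
    P = np.eye(Q.shape[0])
    h = t / steps
    for _ in range(steps):
        P = P + h * (P @ Q)   # one forward-Euler step of P' = P Q
    return P

P1 = transition_function(Q, t=1.0)
print(P1.sum(axis=1))  # each row sums to 1: P(t) is a stochastic matrix
```

A useful sanity check on the result is the semigroup property P(s)P(t) = P(s + t), which mirrors the Chapman-Kolmogorov equations of the discrete-time case.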
