

  1. Chapter 4. Markov Chains
     Prof. Shun-Ren Yang
     Department of Computer Science, National Tsing Hua University, Taiwan

  2. Introduction
     A Markov chain:
     • Consider a stochastic process {X_n, n = 0, 1, 2, ...} that takes on a finite or countable number of possible values, denoted by the set of nonnegative integers {0, 1, 2, ...}.
     • If X_n = i, then the process is said to be in state i at time n.
     • Suppose that whenever the process is in state i, there is a fixed probability P_ij that it will next be in state j. That is,
          P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_1 = i_1, X_0 = i_0} = P_ij
       for all states i_0, i_1, ..., i_{n-1}, i, j and all n ≥ 0.

  3. Introduction
     • Markovian property: for a Markov chain, the conditional distribution of any future state X_{n+1}, given the past states X_0, X_1, ..., X_{n-1} and the present state X_n, is independent of the past states and depends only on the present state.
     • The value P_ij represents the probability that the process will, when in state i, next make a transition into state j.
     • Since probabilities are nonnegative and since the process must make a transition into some state, we have
          P_ij ≥ 0,  i, j ≥ 0;      Σ_{j=0}^∞ P_ij = 1,  i = 0, 1, ....
     • Let P denote the matrix of one-step transition probabilities P_ij, so

  4. Introduction
     that
              | P_00  P_01  P_02  ··· |
              | P_10  P_11  P_12  ··· |
          P = |  ⋮     ⋮     ⋮        |
              | P_i0  P_i1  P_i2  ··· |
              |  ⋮     ⋮     ⋮        |
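     As a minimal illustration of these definitions (a sketch assuming NumPy and an arbitrary three-state matrix, not taken from the slides), the row-sum condition can be checked and a sample path simulated as follows:

        import numpy as np

        rng = np.random.default_rng(0)

        # One-step transition matrix P for a 3-state chain (illustrative values).
        P = np.array([[0.5, 0.3, 0.2],
                      [0.1, 0.6, 0.3],
                      [0.2, 0.2, 0.6]])

        # Each row must be a probability distribution: P_ij >= 0, sum_j P_ij = 1.
        assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

        def simulate(P, x0, n_steps):
            # The next state is drawn from row X_n of P:
            # it depends only on the present state, not on the past.
            states = [x0]
            for _ in range(n_steps):
                states.append(rng.choice(len(P), p=P[states[-1]]))
            return states

        print(simulate(P, x0=0, n_steps=10))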

  5. Example 1. The M/G/1 Queue
     • The M/G/1 queue.
       – Customers arrive at a service center in accordance with a Poisson process with rate λ.
       – There is a single server, and those arrivals finding the server free go immediately into service; all others wait in line until their service turn.
       – The service times of successive customers are assumed to be independent random variables having a common distribution G.
       – The service times are also assumed to be independent of the arrival process.
     • If we let X(t) denote the number of customers in the system at time t, then {X(t), t ≥ 0} would not possess the Markovian property that the conditional distribution of the future depends only on the present and not on the past.

  6. Example 1. The M/G/1 Queue
     • For if we knew the number in the system at time t, then to predict future behavior,
       – we would not care how much time had elapsed since the last arrival (since the arrival process is memoryless), but
       – we would care how long the person in service had already been there (since the service distribution G is arbitrary and therefore not memoryless).
     • Let us only look at the system at the moments when customers depart:
       – let X_n denote the number of customers left behind by the nth departure, n ≥ 1;
       – let Y_n denote the number of customers arriving during the service period of the (n+1)st customer.

  7. Example 1. The M/G/1 Queue
     • When X_n > 0, the nth departure leaves behind X_n customers, of which one enters service and the other X_n − 1 wait in line.
     • Hence, at the next departure the system will contain the X_n − 1 customers that were in line, in addition to any arrivals during the service time of the (n+1)st customer. Since a similar argument holds when X_n = 0, we see that
          X_{n+1} = X_n − 1 + Y_n   if X_n > 0,
          X_{n+1} = Y_n             if X_n = 0.
     • Since the Y_n, n ≥ 1, represent the numbers of arrivals in nonoverlapping service intervals, it follows, the arrival process being a Poisson process,

  8. Example 1. The M/G/1 Queue
     that they are independent and
          P{Y_n = j} = ∫_0^∞ e^{−λx} (λx)^j / j!  dG(x),  j = 0, 1, ....
     • From the above, it follows that {X_n, n = 1, 2, ...} is a Markov chain with transition probabilities given by
          P_{0j} = a_j,         j ≥ 0,
          P_{ij} = a_{j−i+1},   i ≥ 1, j ≥ i − 1,
          P_{ij} = 0            otherwise,
       where a_j = ∫_0^∞ e^{−λx} (λx)^j / j!  dG(x).
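     To see these transition probabilities numerically, here is a small sketch under the assumption that G is deterministic (every service takes exactly D time units, so a_j reduces to a Poisson probability); the state space is truncated for display, so the rows sum to slightly less than 1:

        import math

        lam = 0.5   # Poisson arrival rate (assumed value)
        D = 1.0     # deterministic service time: G is a point mass at D

        # a_j = P{j arrivals during one service} = e^{-lam*D} (lam*D)^j / j!
        def a(j):
            return math.exp(-lam * D) * (lam * D) ** j / math.factorial(j)

        N = 6  # truncation of the infinite state space (an approximation)
        P = [[0.0] * N for _ in range(N)]
        for j in range(N):
            P[0][j] = a(j)              # empty system: next state = arrivals in one service
        for i in range(1, N):
            for j in range(i - 1, N):
                P[i][j] = a(j - i + 1)  # from X_{n+1} = X_n - 1 + Y_n

        for row in P:
            print([round(x, 3) for x in row])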

  9. Example 2. The G/M/1 Queue
     • The G/M/1 queue.
       – Customers arrive at a single-server service center in accordance with an arbitrary renewal process having interarrival distribution G.
       – The service distribution is exponential with rate μ.
     • If we let X_n denote the number of customers in the system as seen by the nth arrival, it is easy to see that the process {X_n, n ≥ 1} is a Markov chain.
     • Note that as long as there are customers to be served, the number of services in any length of time t is a Poisson random variable with mean μt. Therefore,
          P_{i,i+1−j} = ∫_0^∞ e^{−μt} (μt)^j / j!  dG(t),  j = 0, 1, ..., i.

  10. Example 2. The G/M/1 Queue
     • The above equation follows since if an arrival finds i in the system, then the next arrival will find i + 1 minus the number served, and the probability that j will be served is easily seen (by conditioning on the time between the successive arrivals) to equal the right-hand side.
     • The formula for P_{i0} is a little different (it is the probability that at least i + 1 Poisson events occur in a random length of time having distribution G) and thus is given by
          P_{i0} = ∫_0^∞ Σ_{k=i+1}^∞ e^{−μt} (μt)^k / k!  dG(t).
     • Remark: note that in the previous two examples we were able to discover an embedded Markov chain by looking at the process only at certain time points, and by choosing these time points so as to exploit the lack of memory of the exponential distribution.
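     A quick numeric sanity check of one row of this transition matrix (a sketch assuming a deterministic interarrival time T, so dG is a point mass; μ, T, and i are illustrative values): the probabilities P_{i,i+1−j} for j = 0, ..., i plus P_{i0} must sum to 1.

        import math

        mu = 2.0   # exponential service rate (assumed value)
        T = 1.0    # deterministic interarrival time: G is a point mass at T

        def pois(k, m):
            # P{Poisson(m) = k}
            return math.exp(-m) * m ** k / math.factorial(k)

        i = 3  # state seen by the current arrival (illustrative)
        # The next arrival finds i + 1 minus the number served in between.
        row = {i + 1 - j: pois(j, mu * T) for j in range(i + 1)}
        # P_i0 = P{at least i+1 potential service completions} = remaining mass.
        row[0] = 1.0 - sum(row.values())

        print(row)
        assert abs(sum(row.values()) - 1.0) < 1e-12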

  11. Example 3. The General Random Walk
     The general random walk: sums of independent, identically distributed random variables.
     • Let X_i, i ≥ 1, be independent and identically distributed with
          P{X_i = j} = a_j,  j = 0, ±1, ....
     • If we let
          S_0 = 0   and   S_n = Σ_{i=1}^n X_i,
       then {S_n, n ≥ 0} is a Markov chain for which
          P_ij = a_{j−i}.
     • {S_n, n ≥ 0} is called the general random walk.
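     For instance, a general random walk can be simulated directly from its step distribution (a sketch assuming NumPy and an arbitrary step distribution on {−1, 0, 1, 2}):

        import numpy as np

        rng = np.random.default_rng(1)

        # Step distribution P{X_i = j} = a_j (illustrative values).
        steps = np.array([-1, 0, 1, 2])
        a = np.array([0.3, 0.2, 0.4, 0.1])

        X = rng.choice(steps, size=1000, p=a)
        S = np.concatenate(([0], np.cumsum(X)))   # S_0 = 0, S_n = X_1 + ... + X_n
        print(S[:10])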

  12. Example 4. The Simple Random Walk
     • The random walk {S_n, n ≥ 1}, where S_n = Σ_{i=1}^n X_i, is said to be a simple random walk if for some p, 0 < p < 1,
          P{X_i = 1} = p,
          P{X_i = −1} = q ≡ 1 − p.
       Thus in the simple random walk the process always either goes up one step (with probability p) or down one step (with probability q).
     • Consider |S_n|, the absolute value of the simple random walk. The process {|S_n|, n ≥ 1} measures at each time unit the absolute distance of the simple random walk from the origin.
     • To prove that {|S_n|} is itself a Markov chain, we first show that if |S_n| = i, then no matter what its previous values, the probability that S_n equals i (as opposed to −i) is p^i / (p^i + q^i).

  13. Example 4. The Simple Random Walk
     Proposition. If {S_n, n ≥ 1} is a simple random walk, then
          P{S_n = i | |S_n| = i, |S_{n−1}| = i_{n−1}, ..., |S_1| = i_1} = p^i / (p^i + q^i).
     Proof. Any path of n steps with S_n = i has (n + i)/2 up steps and (n − i)/2 down steps, hence probability p^{(n+i)/2} q^{(n−i)/2}. Reflecting every step of such a path yields a path with the same sequence of absolute values |S_1|, ..., |S_n| but with S_n = −i, which has probability p^{(n−i)/2} q^{(n+i)/2}. Since this reflection pairs off the compatible paths one-to-one, the desired conditional probability is
          p^{(n+i)/2} q^{(n−i)/2} / (p^{(n+i)/2} q^{(n−i)/2} + p^{(n−i)/2} q^{(n+i)/2}) = p^i / (p^i + q^i).

  14. Example 4. The Simple Random Walk
     • From the proposition, it follows upon conditioning on whether S_n = +i or −i that
          P{|S_{n+1}| = i + 1 | |S_n| = i, ..., |S_1|}
              = p · p^i / (p^i + q^i) + q · q^i / (p^i + q^i)
              = (p^{i+1} + q^{i+1}) / (p^i + q^i).
     • Hence, {|S_n|, n ≥ 1} is a Markov chain with transition probabilities
          P_{i,i+1} = (p^{i+1} + q^{i+1}) / (p^i + q^i) = 1 − P_{i,i−1},  i > 0,
          P_{01} = 1.
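     The proposition is easy to check by simulation (a sketch assuming NumPy; p, n, i, and the number of trials are illustrative values, and n and i must have the same parity for the event |S_n| = i to be possible):

        import numpy as np

        rng = np.random.default_rng(2)
        p, n, i, trials = 0.6, 20, 4, 200_000
        q = 1 - p

        # Simulate many simple random walks of length n.
        steps = rng.choice([1, -1], size=(trials, n), p=[p, q])
        S_n = steps.sum(axis=1)

        # Among paths with |S_n| = i, estimate the fraction that have S_n = +i.
        mask = np.abs(S_n) == i
        print("empirical:     ", (S_n[mask] == i).mean())
        print("p^i/(p^i+q^i): ", p**i / (p**i + q**i))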

  15. Chapman-Kolmogorov Equations
     • P_ij: the one-step transition probabilities.
     • Define the n-step transition probabilities P^n_ij to be the probability that a process in state i will be in state j after n additional transitions. That is,
          P^n_ij = P{X_{n+m} = j | X_m = i},  n ≥ 0, i, j ≥ 0,
       where, of course, P^1_ij = P_ij.
     • The Chapman-Kolmogorov equations provide a method for computing these n-step transition probabilities. These equations are
          P^{n+m}_ij = Σ_{k=0}^∞ P^n_ik P^m_kj   for all n, m ≥ 0 and all i, j,
       and are established by observing that

  16. Chapman-Kolmogorov Equations
          P^{n+m}_ij = P{X_{n+m} = j | X_0 = i}
                     = Σ_{k=0}^∞ P{X_{n+m} = j, X_n = k | X_0 = i}
                     = Σ_{k=0}^∞ P{X_{n+m} = j | X_n = k, X_0 = i} · P{X_n = k | X_0 = i}
                     = Σ_{k=0}^∞ P^m_kj P^n_ik.
     • Let P^(n) denote the matrix of n-step transition probabilities P^n_ij; then the Chapman-Kolmogorov equations assert that
          P^(n+m) = P^(n) · P^(m),
       where the dot represents matrix multiplication.

  17. Chapman-Kolmogorov Equations
     • Hence,
          P^(n) = P · P^(n−1) = P · P · P^(n−2) = ··· = P^n,
       and thus P^(n) may be calculated by multiplying the matrix P by itself n times.
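     In code, the Chapman-Kolmogorov identity is just associativity of matrix multiplication (a sketch assuming NumPy, reusing the illustrative three-state matrix from above):

        import numpy as np

        P = np.array([[0.5, 0.3, 0.2],
                      [0.1, 0.6, 0.3],
                      [0.2, 0.2, 0.6]])

        # P^(n+m) = P^(n) . P^(m): check it for n = 2, m = 3.
        P5 = np.linalg.matrix_power(P, 5)
        assert np.allclose(P5, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3))

        # P5[i, j] is the probability of going from state i to state j in 5 steps.
        print(P5[0, 2])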

  18. Classification of States
     • State j is said to be accessible from state i if P^n_ij > 0 for some n ≥ 0.
     • Two states i and j that are accessible to each other are said to communicate, denoted by i ↔ j.
     • Proposition. Communication is an equivalence relation. That is:
       1. i ↔ i;
       2. if i ↔ j, then j ↔ i;
       3. if i ↔ j and j ↔ k, then i ↔ k.
     Proof. The first two parts follow trivially from the definition of communication. To prove 3, suppose that i ↔ j and j ↔ k; then there exist m, n such that P^m_ij > 0 and P^n_jk > 0. Hence,
          P^{m+n}_ik = Σ_{r=0}^∞ P^m_ir P^n_rk ≥ P^m_ij P^n_jk > 0.
     Similarly, we may show there exists an s for which P^s_ki > 0.
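     Communicating classes can be computed mechanically from the transition matrix by taking the transitive closure of the one-step accessibility relation (a sketch assuming NumPy and an illustrative four-state chain whose states split into the two classes {0, 1} and {2, 3}):

        import numpy as np

        P = np.array([[0.5, 0.5, 0.0, 0.0],
                      [0.3, 0.7, 0.0, 0.0],
                      [0.0, 0.0, 0.2, 0.8],
                      [0.0, 0.0, 0.6, 0.4]])

        n = len(P)
        reach = np.eye(n, dtype=bool) | (P > 0)   # accessible in 0 or 1 steps
        for _ in range(n):                        # iterate to the transitive closure
            reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

        communicate = reach & reach.T             # i <-> j: accessible both ways
        classes = {frozenset(map(int, np.flatnonzero(row))) for row in communicate}
        print(classes)                            # {frozenset({0, 1}), frozenset({2, 3})}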
