Randomized Algorithms: Markov Chains and Random Walks


  1. Randomized Algorithms: Markov Chains and Random Walks. Speaker: Chuang-Chieh Lin. Advisor: Professor Maw-Shang Chang. National Chung Cheng University, 2007/4/25.

  2. References
     - Professor S. C. Tsai's slides.
     - Randomized Algorithms, Rajeev Motwani and Prabhakar Raghavan.
     - Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Michael Mitzenmacher and Eli Upfal.
     - Wikipedia: Markov chains.

  3. Outline
     - Introduction to Markov chains
     - Classification of states
     - Stationary distribution
     - Random walks on undirected graphs
     - Connectivity problem

  4. Introduction to Markov Chains
     - Markov chains provide a simple but powerful framework for modeling random processes.
     - Markov chains can be used to analyze simple randomized algorithms that apply random walks.

  5. Introduction to Markov Chains (cont'd)
     Definition: A stochastic process X = {X(t), t ∈ T} is a collection of random variables.
     - If T is a countable set, say T = {0, 1, 2, ...}, we say that X is a discrete time stochastic process.
     - Otherwise it is called a continuous time stochastic process.
     - Here we consider a discrete time stochastic process X_n, for n = 0, 1, 2, ....

  6. Introduction to Markov Chains (cont'd)
     - If X_n = i, then the process is said to be in state i at time n.
     - Denote Pr[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0] = P_{i,j} for all states i_0, i_1, ..., i_{n-1}, i, j and all n ≥ 0.
     - X_{n+1} depends only on X_n.

  7. Introduction to Markov Chains (cont'd)
     That is, P_{i,j} = Pr[X_{n+1} = j | X_n = i], for all states i, j and all n ≥ 0. Such a stochastic process is known as a Markov chain.
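This definition translates directly into simulation: the next state is drawn from the row of the transition matrix indexed by the current state, and nothing else. Below is a minimal Python sketch; the 3-state matrix and the helper name simulate are made up for illustration and are not from the slides.

```python
import random

# Hypothetical 3-state transition matrix: P[i][j] = Pr[X_{n+1} = j | X_n = i].
# The numbers are made up for illustration; each row must sum to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def simulate(P, start, steps):
    """Generate a trajectory X_0, X_1, ..., X_steps of the chain."""
    states = list(range(len(P)))
    x = start
    trajectory = [x]
    for _ in range(steps):
        # The next state depends only on the current state x (Markov property).
        x = random.choices(states, weights=P[x])[0]
        trajectory.append(x)
    return trajectory

print(simulate(P, start=0, steps=10))
```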

  8. Formal definitions.

  9. Markov property
     - In probability theory, a stochastic process has the Markov property if the conditional probability distribution of future states of the process, given the present state and all past states, depends only upon the current state and not on any past states.
     - Mathematically, if X(t), t > 0, is a stochastic process, the Markov property states that
       Pr[X(t+h) = y | X(s) = x(s), ∀ s ≤ t] = Pr[X(t+h) = y | X(t) = x(t)], ∀ h > 0.

  10. Markov chain
      - In mathematics, a Markov chain, named after Andrey Markov (June 14, 1856 – July 20, 1922), is a discrete-time stochastic process with the Markov property.

  11. Homogeneous
      - Markov processes are typically termed (time-)homogeneous if
        Pr[X(t+h) = y | X(t) = x(t)] = Pr[X(h) = y | X(0) = x(t)], ∀ t, h > 0.

  12. Transition matrix
      - P_{i,j} ≥ 0.
      - Σ_j P_{i,j} = 1 for every state i.
      (Diagram omitted: state i with outgoing transitions labeled P_{i,1}, P_{i,2}, ..., P_{i,j}.)

      transition matrix P =
        [ P_{0,0}  P_{0,1}  ...  P_{0,j}  ...
          P_{1,0}  P_{1,1}  ...  P_{1,j}  ...
            ...      ...           ...
          P_{i,0}  P_{i,1}  ...  P_{i,j}  ...
            ...      ...           ...    ]
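A quick way to verify the two conditions above (non-negative entries, each row summing to 1) is a small helper such as the sketch below; the function name and tolerance are my own choices rather than anything from the deck.

```python
def is_stochastic(P, tol=1e-9):
    """Check that P is a valid transition matrix:
    every entry is non-negative and every row sums to 1."""
    for row in P:
        if any(p < 0 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# Example: the 4-state matrix used later in the deck passes this check.
P = [[0.7, 0.0, 0.3, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.4, 0.0, 0.6],
     [0.0, 0.2, 0.0, 0.8]]
assert is_stochastic(P)
```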

  13. Transition probability
      The m-step transition probability P^m_{i,j} of the Markov chain is defined as the conditional probability, given that the chain is currently in state i, that it will be in state j after m additional transitions. That is,
        P^m_{i,j} = Pr[X_{n+m} = j | X_n = i], for m ≥ 0, i, j ≥ 0.
      Conditioning on the first transition from i, we have the following equation:
        P^m_{i,j} = Σ_{k≥0} P_{i,k} P^{m-1}_{k,j}.
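The recursion on this slide can be coded almost verbatim. A rough sketch follows, with the base case P^0 = I (the identity) filled in as an assumption implied by the definition; since it recomputes subproblems, the matrix-power form of slide 18 is the better choice for large m.

```python
def m_step_prob(P, i, j, m):
    """P^m_{i,j}: probability of moving from state i to state j in exactly m
    transitions, via the recursion P^m_{i,j} = sum_k P_{i,k} * P^{m-1}_{k,j}."""
    if m == 0:
        return 1.0 if i == j else 0.0   # P^0 is the identity matrix
    return sum(P[i][k] * m_step_prob(P, k, j, m - 1) for k in range(len(P)))
```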

  14. Chapman-Kolmogorov equation
      - Generalizing the previous result, we have the Chapman-Kolmogorov equation:
        P^{n+m}_{i,j} = Σ_{k≥0} P^n_{i,k} P^m_{k,j}.

  15. Chapman-Kolmogorov equation (cont'd)
      Proof: By the definition of the n-step transition probability,
        P^{n+m}_{i,j} = Σ_{k≥0} Pr[X_{n+m} = j, X_n = k | X_0 = i]
                      = Σ_{k≥0} Pr[X_n = k | X_0 = i] · Pr[X_{n+m} = j | X_n = k, X_0 = i].
      By the Markov property, Pr[X_{n+m} = j | X_n = k, X_0 = i] = Pr[X_{n+m} = j | X_n = k] = P^m_{k,j}. With the additional observation that Pr[X_n = k | X_0 = i] = P^n_{i,k}, the theorem immediately follows. ∎
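Written with the matrices of n-step probabilities (anticipating slide 18, where P^(n) = P^n), the Chapman-Kolmogorov equation is just matrix multiplication, which is easy to sanity-check numerically. The matrix and the choice of n and m below are arbitrary illustrations.

```python
import numpy as np

# Any row-stochastic matrix works; this one is made up for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

n, m = 3, 4
lhs = np.linalg.matrix_power(P, n + m)                              # P^{n+m}
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)   # P^n P^m
assert np.allclose(lhs, rhs)   # entrywise: P^{n+m}_{i,j} = Σ_k P^n_{i,k} P^m_{k,j}
```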

  16. Recall: Transition matrix
      - P_{i,j} ≥ 0.
      - Σ_j P_{i,j} = 1 for every state i.

      transition matrix P =
        [ P_{0,0}  P_{0,1}  ...  P_{0,j}  ...
          P_{1,0}  P_{1,1}  ...  P_{1,j}  ...
            ...      ...           ...
          P_{i,0}  P_{i,1}  ...  P_{i,j}  ...
            ...      ...           ...    ]

  17. Recall: Homogeneous
      - Markov processes are typically termed (time-)homogeneous if
        Pr[X(t+h) = y | X(t) = x(t)] = Pr[X(h) = y | X(0) = x(t)], ∀ t, h > 0.

  18. Let P^(n) denote the matrix of n-step transition probabilities P^n_{i,j}; then the Chapman-Kolmogorov equations imply that P^(n) = P^n. For example:

  19. (State-transition diagram of a 4-state chain {0, 1, 2, 3} omitted; edge probabilities match the matrix below.)

      P   = [ 0.7   0     0.3   0
              0.5   0     0.5   0
              0     0.4   0     0.6
              0     0.2   0     0.8  ]

      P^2 = [ 0.49  0.12  0.21  0.18
              0.35  0.2   0.15  0.3
              0.2   0.12  0.2   0.48
              0.1   0.16  0.1   0.64 ]
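The two-step matrix shown on this slide can be reproduced in a few lines of NumPy; the code is only a check of the slide's numbers, not part of the original material.

```python
import numpy as np

# Transition matrix of the 4-state example chain above.
P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])

P2 = np.linalg.matrix_power(P, 2)   # P^(2) = P^2, the two-step probabilities
print(P2)
# Expected (as on the slide):
# [[0.49 0.12 0.21 0.18]
#  [0.35 0.2  0.15 0.3 ]
#  [0.2  0.12 0.2  0.48]
#  [0.1  0.16 0.1  0.64]]
```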

  20. Classification of states
      - A first step in analyzing the long-term behavior of a Markov chain is to classify its states.
      - In the case of a finite Markov chain, this is equivalent to analyzing the connectivity structure of the directed graph representing the Markov chain.

  21. Basic definitions
      - State j is said to be accessible from state i if P^n_{i,j} > 0 for some n ≥ 0.
      - We say states i and j communicate (i ↔ j) if they are both accessible from each other.
      - The Markov chain is said to be irreducible if all states communicate with each other.
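For a finite chain these definitions are purely graph-theoretic: treat every positive entry P_{i,j} > 0 as a directed edge i → j and test reachability. A possible sketch (the function names are mine, not from the slides):

```python
from collections import deque

def reachable(P, i):
    """States j accessible from i, i.e. P^n_{i,j} > 0 for some n ≥ 0 (BFS)."""
    seen = {i}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(P):
    """Irreducible iff every state is accessible from every other state."""
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))
```

States i and j communicate exactly when each appears in the other's reachable set.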

  22. Basic definitions (cont'd)
      - Let r^t_{i,j} denote the probability that, starting at state i, the first transition to state j occurs at time t. That is,
        r^t_{i,j} = Pr[X_t = j and X_s ≠ j for 1 ≤ s ≤ t-1 | X_0 = i].
      - State i is said to be recurrent if Σ_{t≥1} r^t_{i,i} = 1, and transient if Σ_{t≥1} r^t_{i,i} < 1.
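One way to build intuition for the recurrent/transient distinction is to estimate the return probability Σ_{t≥1} r^t_{i,i} by simulation. The sketch below truncates at a finite horizon, so it only approximates the infinite sum; the function name and its parameters are illustrative assumptions.

```python
import random

def estimate_return_prob(P, i, horizon=1000, trials=10000):
    """Monte Carlo estimate of Σ_{t≥1} r^t_{i,i}: the probability that the
    chain, started at state i, returns to i within `horizon` steps.
    Values near 1 suggest i is recurrent; values clearly below 1, transient."""
    states = list(range(len(P)))
    returns = 0
    for _ in range(trials):
        x = i
        for _ in range(horizon):
            x = random.choices(states, weights=P[x])[0]
            if x == i:
                returns += 1
                break
    return returns / trials
```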
