  1. Randomized Algorithms Lecture 7: “Random Walks - II”. Sotiris Nikoletseas, Associate Professor, CEID - ETY Course 2013 - 2014. Sotiris Nikoletseas, Associate Professor, Randomized Algorithms - Lecture 7, 1 / 43

  2. Overview A. Markov Chains B. Random Walks on Graphs

  3. A. Markov Chains - Stochastic Processes. Stochastic Process: a set of random variables {X_t, t ∈ T} defined on a set D, where: - T: a set of indices representing time - X_t: the state of the process at time t - D: the set of states. The process is discrete/continuous when D is discrete/continuous, and it is a discrete/continuous time process depending on whether T is discrete or continuous. In other words, a stochastic process abstracts a random phenomenon (or experiment) evolving with time, such as: - the number of certain events that have occurred (discrete) - the temperature in some place (continuous)

  4. Markov Chains - transition matrix. Let S be a state space (finite or countable). A Markov Chain (MC) is at any given time at one of the states. Say it is currently at state i; with probability P_ij it moves to state j. So:
0 ≤ P_ij ≤ 1 and ∑_j P_ij = 1
The matrix P = {P_ij} is the transition probabilities matrix. The MC starts at an initial state X_0, and at each point in time it moves to a new state (possibly the current one) according to the transition matrix P. The resulting sequence of states {X_t} is called the history of the MC.
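As a concrete illustration, a transition matrix and a few steps of the resulting chain can be sketched in Python. The 3-state matrix below is made up for the example, not taken from the slides:

```python
import numpy as np

# A hypothetical transition matrix P for a 3-state chain.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.4, 0.6],
])

# Every row must be a probability distribution over the next state:
# 0 <= P_ij <= 1 and each row sums to 1.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

def step(state: int, rng: np.random.Generator) -> int:
    """Sample the next state j with probability P[state, j]."""
    return int(rng.choice(len(P), p=P[state]))

# Generate a short history {X_t} starting from X_0 = 0.
rng = np.random.default_rng(0)
history = [0]
for _ in range(10):
    history.append(step(history[-1], rng))
```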

  5. The memorylessness property. Clearly, the MC is a stochastic process, i.e. a random process in time. The defining property of a MC is its memorylessness: the random process “forgets” its past (or “history”), while its “future” (next state) depends only on the “present” (its current state). Formally:
Pr{X_{t+1} = j | X_0 = i_0, X_1 = i_1, ..., X_{t-1} = i_{t-1}, X_t = i} = Pr{X_{t+1} = j | X_t = i} = P_ij
The initial state of the MC can be arbitrary.

  6. t-step transitions. For states i, j ∈ S, the t-step transition probability from i to j is:
P^(t)_ij = Pr{X_t = j | X_0 = i}
i.e. we compute the (i, j)-entry of the t-th power of the transition matrix P.
Chapman - Kolmogorov equations:
P^(t)_ij = ∑_{i_1, i_2, ..., i_{t-1} ∈ S} Pr{X_t = j, ∩_{k=1}^{t-1} X_k = i_k | X_0 = i} = ∑_{i_1, i_2, ..., i_{t-1} ∈ S} P_{i i_1} P_{i_1 i_2} · · · P_{i_{t-1} j}
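A quick numerical check of this identity, using a hypothetical 2-state matrix: the t-step matrix is the t-th power of P, and the Chapman - Kolmogorov equations say it factors as P^(s) · P^(t-s) for any intermediate time s.

```python
import numpy as np

# Hypothetical 2-state chain used only to check the identity
# P^(t)_ij = (P^t)_ij and the Chapman-Kolmogorov factorization.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def t_step(P, t):
    """t-step transition matrix: the t-th power of P."""
    return np.linalg.matrix_power(P, t)

# Chapman-Kolmogorov: P^(t) = P^(s) @ P^(t-s) for any 0 < s < t.
t, s = 5, 2
lhs = t_step(P, t)
rhs = t_step(P, s) @ t_step(P, t - s)
assert np.allclose(lhs, rhs)
```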

  7. First visits. The probability of a first visit at state j after t steps, starting from state i, is:
r^(t)_ij = Pr{X_t = j, X_1 ≠ j, X_2 ≠ j, ..., X_{t-1} ≠ j | X_0 = i}
The expected number of steps to arrive for the first time at state j starting from i is:
h_ij = ∑_{t>0} t · r^(t)_ij

  8. Visits/State categories. The probability of a visit (not necessarily for the first time) at state j, starting from state i, is:
f_ij = ∑_{t>0} r^(t)_ij
Clearly, if f_ij < 1 then there is a positive probability that the MC never arrives at state j, so in this case h_ij = ∞. A state i for which f_ii < 1 (i.e. the chain has positive probability of never visiting state i again) is a transient state. If f_ii = 1 then the state is persistent (also called recurrent). If state i is persistent but h_ii = ∞, it is null persistent; if it is persistent and h_ii ≠ ∞, it is non null persistent. Note. In finite Markov Chains, there are no null persistent states.

  9. Example (I). A Markov Chain with transition matrix P:
P =
| 1/3  2/3   0    0  |
| 1/2  1/8  1/4  1/8 |
|  0    0    1    0  |
|  0    0    0    1  |
The probability of starting from v_1, moving to v_2, staying there for 1 time step and then moving back to v_1 is:
Pr{X_3 = v_1, X_2 = v_2, X_1 = v_2 | X_0 = v_1} = P_{v_1 v_2} P_{v_2 v_2} P_{v_2 v_1} = 2/3 · 1/8 · 1/2 = 1/24

  10. Example (II). The probability of moving from v_1 to v_1 in 2 steps is:
P^(2)_{v_1 v_1} = P_{v_1 v_1} · P_{v_1 v_1} + P_{v_1 v_2} · P_{v_2 v_1} = 1/3 · 1/3 + 2/3 · 1/2 = 4/9
Alternatively, we calculate P^2 and take its (1, 1) entry.
The first-visit probability from v_1 to v_2 in 2 steps is:
r^(2)_{v_1 v_2} = P_{v_1 v_1} P_{v_1 v_2} = 1/3 · 2/3 = 2/9
while r^(7)_{v_1 v_2} = (P_{v_1 v_1})^6 P_{v_1 v_2} = (1/3)^6 · 2/3 = 2/3^7
and r^(t)_{v_2 v_1} = (P_{v_2 v_2})^{t-1} P_{v_2 v_1} = (1/8)^{t-1} · 1/2 = 1/2^{3t-2} for t ≥ 1 (since r^(0)_{v_2 v_1} = 0)

  11. Example (III). The probability of (eventually) visiting state v_1 starting from v_2 is:
f_{v_2 v_1} = ∑_{t≥1} 1/2^{3t-2} = 4/7
The expected number of steps to move from v_1 to v_2 is:
h_{v_1 v_2} = ∑_{t≥1} t · r^(t)_{v_1 v_2} = ∑_{t≥1} t · (P_{v_1 v_1})^{t-1} P_{v_1 v_2} = 3/2
(actually, this is the mean of a geometric distribution with parameter 2/3)
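These two values can be verified numerically. The sketch below computes r^(t)_ij with a "taboo" matrix (column j of P zeroed out, so the walk cannot enter j before step t); truncating the series at t = 200 is an assumption justified by the geometric decay of the terms:

```python
import numpy as np

# The 4-state example chain from these slides.
P = np.array([
    [1/3, 2/3, 0,   0  ],
    [1/2, 1/8, 1/4, 1/8],
    [0,   0,   1,   0  ],
    [0,   0,   0,   1  ],
])

def first_visit(P, i, j, t):
    """r^(t)_ij: probability of first visiting j at step t from i.

    Zeroing column j of P gives the probability of walking t-1 steps
    without ever entering j; one more P-step then lands in j.
    """
    Q = P.copy()
    Q[:, j] = 0.0
    return np.linalg.matrix_power(Q, t - 1)[i] @ P[:, j]

ts = range(1, 200)  # truncated series; the terms decay geometrically
f_21 = sum(first_visit(P, 1, 0, t) for t in ts)      # f_{v2 v1}
h_12 = sum(t * first_visit(P, 0, 1, t) for t in ts)  # h_{v1 v2}

assert abs(f_21 - 4/7) < 1e-9
assert abs(h_12 - 3/2) < 1e-9
```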

  12. Irreducibility. Note: A MC can naturally be represented via a directed, weighted graph whose vertices correspond to states, with the transition probability P_ij as the weight assigned to edge (i, j); we include only edges (i, j) with P_ij > 0. A state u is reachable from a state v (we write v → u) iff there is a path P of states from v to u with Pr{P} > 0. A state u communicates with state v (we write u ↔ v) iff u → v and v → u. A MC is called irreducible iff every state can be reached from every other state (equivalently, the directed graph of the MC is strongly connected).
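Irreducibility can be checked directly on the directed graph of the chain. A minimal sketch, using plain depth-first search and the slides' 4-state example:

```python
# Edge (i, j) exists in the directed graph whenever P[i][j] > 0.
def reachable(P, src):
    """Set of states reachable from src via positive-probability paths."""
    n = len(P)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_irreducible(P):
    """True iff the directed graph of the chain is strongly connected."""
    n = len(P)
    return all(reachable(P, s) == set(range(n)) for s in range(n))

# The 4-state chain from slide 9 is not irreducible: v3 and v4 are
# absorbing, so v1 cannot be reached from them.
P = [
    [1/3, 2/3, 0,   0  ],
    [1/2, 1/8, 1/4, 1/8],
    [0,   0,   1,   0  ],
    [0,   0,   0,   1  ],
]
assert not is_irreducible(P)
```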

  13. Irreducibility (II). In our example, v_1 can be reached only from v_2 (and the directed graph is not strongly connected), so the MC is not irreducible. Note: In a finite MC, either all states are transient or all states are (non null) persistent. Note: In a finite MC which is irreducible, all states are persistent.

  14. Absorbing states. Another type of state: a state i is absorbing iff P_ii = 1 (e.g. in our example, the states v_3 and v_4 are absorbing). Another example: the states v_0 and v_n are absorbing.
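A small sketch for detecting absorbing states in the example chain. The limiting absorption probabilities asserted at the end (2/3 into v_3 and 1/3 into v_4, starting from v_1) come from a standard absorption-probability computation and are not stated on the slides:

```python
import numpy as np

P = np.array([
    [1/3, 2/3, 0,   0  ],
    [1/2, 1/8, 1/4, 1/8],
    [0,   0,   1,   0  ],
    [0,   0,   0,   1  ],
])

# A state i is absorbing iff P[i, i] == 1.
absorbing = [i for i in range(len(P)) if P[i, i] == 1.0]
assert absorbing == [2, 3]  # v3 and v4

# Iterating P drives all probability mass into the absorbing states:
# the transient part of P^t vanishes geometrically.
limit = np.linalg.matrix_power(P, 500)
# From v1 the chain is absorbed at v3 w.p. 2/3 and at v4 w.p. 1/3
# (solve a1 = 1/3 a1 + 2/3 a2, a2 = 1/2 a1 + 1/8 a2 + 1/4).
assert np.allclose(limit[0], [0, 0, 2/3, 1/3])
```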

  15. State probability vector. Definition. Let q^(t) = (q^(t)_1, q^(t)_2, ..., q^(t)_n) be the row vector whose i-th component q^(t)_i is the probability that the MC is in state i at time t. We call this vector the state probability vector (alternatively, the distribution of the MC at time t).
Main property. Clearly q^(t) = q^(t-1) · P = q^(0) · P^t, where P is the transition probability matrix.
Importance: rather than focusing on the probabilities of transitions between the states, this vector focuses on the probability of being in each state.
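The main property can be checked by propagating the distribution step by step; the 2-state matrix below is hypothetical:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
q0 = np.array([1.0, 0.0])   # start deterministically in state 0

# Propagate q(t) = q(t-1) @ P for five steps...
q = q0.copy()
for _ in range(5):
    q = q @ P

# ...and compare with the closed form q(t) = q(0) @ P^t.
assert np.allclose(q, q0 @ np.linalg.matrix_power(P, 5))
# q(t) remains a probability distribution.
assert abs(q.sum() - 1.0) < 1e-12
```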

  16. Periodicity. Definition. A state i is called periodic iff the largest integer T satisfying the property
q^(t)_i > 0 ⇒ t ∈ {a + kT | k ≥ 0}
(for some positive integer a > 0) is larger than 1; otherwise it is called aperiodic. We call T the periodicity of the state. In other words, the MC visits a periodic state only at times which are terms of an arithmetic progression with common difference T.

  17. Periodicity (II). Example: a random walk on a bipartite graph clearly represents a MC in which all states have periodicity 2. In fact, a random walk on a graph is aperiodic iff the graph is not bipartite. Definition: We call a MC aperiodic iff all of its states are aperiodic. Equivalently, the chain is aperiodic iff (gcd: greatest common divisor):
∀ x, y: gcd{t : P^(t)_xy > 0} = 1
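The gcd characterization suggests a direct (truncated, hence approximate) way to compute the period of a state. The two small chains below are illustrative, not from the slides: a 2-cycle (a random walk on a single edge, i.e. a bipartite graph) has period 2, while adding a self-loop makes the state aperiodic.

```python
import math
from functools import reduce

import numpy as np

def period(P, x, t_max=50):
    """gcd{t : P^(t)_xx > 0}, estimated from the first t_max powers."""
    ts = [t for t in range(1, t_max + 1)
          if np.linalg.matrix_power(P, t)[x, x] > 1e-12]
    return reduce(math.gcd, ts)

two_cycle = np.array([[0.0, 1.0],   # deterministic 2-cycle: bipartite
                      [1.0, 0.0]])
lazy = np.array([[0.5, 0.5],        # self-loop breaks the periodicity
                 [1.0, 0.0]])

assert period(two_cycle, 0) == 2
assert period(lazy, 0) == 1
```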

  18. Ergodicity. Note: the existence of periodic states introduces significant complications, since the MC “oscillates” and does not “converge”: the state of the chain at any time depends on the initial state, e.g. in a bipartite graph the walk is in the same “part” of the graph at even times and in the other part at odd times. Similar complications arise from null persistent states. Definition. A state which is non null persistent and aperiodic is called ergodic. A MC whose states are all ergodic is called ergodic. Note: As we have seen, a finite, irreducible MC has only non null persistent states.
