Markov Chains II
CS70 Summer 2016 - Lecture 6C
David Dinh 27 July 2016
UC Berkeley
Agenda: classification of MC states; aperiodicity, irreducibility, ergodicity; convergence, limiting and stationary distributions.
Reference for this lecture:
Recurrence and transience. Let r^t_{i,j} denote the probability that, starting from state i, we first hit state j in exactly t steps. State i is recurrent if ∑_t r^t_{i,i} = 1, i.e. a walk started at i returns to i with probability 1, and transient otherwise.
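As a quick numeric sketch (my own example, not from the slides), the return probability ∑_t r^t_{i,i} can be estimated by simulation. Here a small chain with an absorbing state 2 makes state 0 transient: its return probability is 0.5 · 0.5 = 0.25, well below 1.

```python
import random

# Transition lists for a 3-state chain; state 2 is absorbing,
# so states 0 and 1 are transient.
P = {
    0: [(1, 0.5), (2, 0.5)],
    1: [(0, 0.5), (2, 0.5)],
    2: [(2, 1.0)],
}

def step(s):
    """Sample one transition out of state s."""
    r, acc = random.random(), 0.0
    for nxt, p in P[s]:
        acc += p
        if r < acc:
            return nxt
    return P[s][-1][0]

def returns(start):
    """True if the walk started at `start` ever comes back to it
    (state 2 is absorbing, so hitting it means no return)."""
    s = step(start)
    while s != start:
        if s == 2:  # absorbed: can never return
            return False
        s = step(s)
    return True

random.seed(0)
trials = 100_000
est = sum(returns(0) for _ in range(trials)) / trials
print(f"estimated return probability to state 0: {est:.3f}")  # near 0.25
```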
Periodicity. Fix a state j and suppose returns to j can happen only at multiples of some ∆ > 1: then P^s_{j,j} = Pr[X_{t+s} = j | X_t = j] = 0 unless ∆ divides s. Formally, define the period of j as d(j) = gcd¹{t : P^t_{j,j} > 0}. State j is periodic if the gcd of the times t at which P^t_{j,j} is nonzero is greater than 1, and aperiodic if that gcd is 1.

¹gcd = greatest common divisor.
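To make the definition concrete (my own illustration, not from the lecture), the period d(j) = gcd{t : P^t_{j,j} > 0} can be read off the powers of the transition matrix. A deterministic 2-cycle has period 2; adding a self-loop makes the state aperiodic:

```python
from math import gcd

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, j, t_max=50):
    """gcd of all t <= t_max with P^t[j][j] > 0 (0 if no return seen)."""
    d, M = 0, P
    for t in range(1, t_max + 1):
        if M[j][j] > 0:
            d = gcd(d, t)
        M = mat_mul(M, P)
    return d

# Deterministic 2-cycle 0 <-> 1: returns only at even times.
cycle = [[0.0, 1.0], [1.0, 0.0]]
print(period(cycle, 0))  # 2

# A self-loop at state 0 allows a return at t = 1, so the gcd drops to 1.
lazy = [[0.5, 0.5], [1.0, 0.0]]
print(period(lazy, 0))  # 1
```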
Example: a two-state chain. Take states 1 and 2 with Pr[1 → 2] = a and Pr[2 → 1] = b (so Pr[1 → 1] = 1 − a and Pr[2 → 2] = 1 − b). A stationary distribution must satisfy the balance condition π_1 a = π_2 b together with π_1 + π_2 = 1, which gives π_1 = b/(a + b) and π_2 = a/(a + b).
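A quick numerical check of the two-state example (my own sketch; I take Pr[1 → 2] = a and Pr[2 → 1] = b): iterating the distribution update converges to π_1 = b/(a + b) and π_2 = a/(a + b).

```python
a, b = 0.3, 0.1  # illustrative values: Pr[1 -> 2] = a, Pr[2 -> 1] = b

pi1, pi2 = 1.0, 0.0  # start with all mass on state 1
for _ in range(200):
    # One step of the chain applied to the current distribution.
    pi1, pi2 = pi1 * (1 - a) + pi2 * b, pi1 * a + pi2 * (1 - b)

print(pi1, pi2)  # converges to b/(a+b) = 0.25 and a/(a+b) = 0.75
```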
Convergence. For a finite, irreducible, aperiodic chain, lim_{t→∞} P^t_{j,i} exists and is independent of j. Moreover, the limit is the stationary probability: π_i = lim_{t→∞} P^t_{j,i} = 1/h_{i,i}, where h_{i,i} is the expected time to return to state i when starting from i.
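To see π_i = 1/h_{i,i} concretely (my own sketch on a small two-state chain with Pr[1 → 2] = a and Pr[2 → 1] = b): the expected return time to state 1, estimated by simulation, should match 1/π_1 = (a + b)/b.

```python
import random

a, b = 0.3, 0.1  # illustrative two-state chain: Pr[1->2] = a, Pr[2->1] = b

def step(s):
    """Sample one transition of the two-state chain."""
    if s == 1:
        return 2 if random.random() < a else 1
    return 1 if random.random() < b else 2

random.seed(0)
trials, total = 50_000, 0
for _ in range(trials):
    s, t = step(1), 1        # take one step out of state 1...
    while s != 1:            # ...then walk until the first return
        s, t = step(s), t + 1
    total += t

h11 = total / trials
# Predicted: 1/pi_1 = (a + b)/b = 4.
print(f"estimated h_1,1 = {h11:.2f}, predicted 1/pi_1 = {(a + b) / b:.2f}")
```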
Gambler's ruin. Play a fair game: at each step win $1 with probability 1/2, lose $1 otherwise, and stop upon reaching −l1 (ruin) or l2 (win). Let π^t_i be the probability that you're at state i after t timesteps. What is lim_{t→∞} π^t_i for i ∈ [−l1 + 1, l2 − 1]? It is 0 (since these states are transient). Let q = lim_{t→∞} π^t_{l2}: the probability that you win (the state is l2). Because ∑_{i∈[−l1,l2]} π^t_i = 1 and the interior probabilities vanish in the limit, lim_{t→∞} π^t_{−l1} = 1 − q. The game is fair, so the expected winnings satisfy lim_{t→∞} E[Wt] = l2 q − l1(1 − q) = 0, which gives q = l1/(l1 + l2).
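As a sanity check (my own simulation, not part of the slides): with fair coin flips and barriers at −l1 and l2, the fraction of walks absorbed at l2 should be close to l1/(l1 + l2), and the average winnings close to 0.

```python
import random

l1, l2 = 3, 7  # illustrative barriers: ruin at -3, win at +7

def play():
    """Run one fair gambler's-ruin walk from 0; True if absorbed at +l2."""
    x = 0
    while -l1 < x < l2:
        x += 1 if random.random() < 0.5 else -1
    return x == l2

random.seed(0)
trials = 100_000
q = sum(play() for _ in range(trials)) / trials
print(f"win probability ~ {q:.3f}, predicted {l1 / (l1 + l2):.3f}")
print(f"expected winnings ~ {l2 * q - l1 * (1 - q):.3f}")  # near 0
```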
Random walk on an undirected graph G = (V, E): from vertex v, move to a neighbor chosen uniformly at random, so P_{v,u} = 1/d(v) for u ∈ N(v). Claim: π_v = d(v)/2|E| is a stationary distribution. First, ∑_v d(v) = 2|E|, so ∑_v π_v = ∑_v d(v)/(2|E|) = 1. It's a distribution. Second, it satisfies the stationarity equations: for every v, ∑_{u∈N(v)} π_u P_{u,v} = ∑_{u∈N(v)} (d(u)/(2|E|)) · (1/d(u)) = ∑_{u∈N(v)} 1/(2|E|) = d(v)/(2|E|) = π_v.
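To verify π_v = d(v)/2|E| numerically (my own small example): iterate the walk's distribution on a graph containing an odd cycle (so the chain is aperiodic) and compare with the degree formula.

```python
# Undirected graph: a triangle {0, 1, 2} with a pendant vertex 3 on 0.
# The triangle gives an odd cycle, so the walk is aperiodic and converges.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
E = sum(len(nbrs) for nbrs in adj.values()) // 2  # |E| = 4
n = len(adj)

# Start with all mass on vertex 3 and iterate the distribution update.
pi = [0.0, 0.0, 0.0, 1.0]
for _ in range(500):
    nxt = [0.0] * n
    for v, nbrs in adj.items():
        for u in nbrs:
            nxt[u] += pi[v] / len(nbrs)  # move to a uniform neighbor
    pi = nxt

predicted = [len(adj[v]) / (2 * E) for v in range(n)]
print(pi)         # close to predicted
print(predicted)  # [0.375, 0.25, 0.25, 0.125]
```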