Hamid R. Rabiee
Stochastic Processes
Markov Processes
Overview
o Markov Property
o Markov Chains
  o Definition
  o Stationary Property
o Paths in Markov Chains
o Classification of States
o Steady States in MCs
Markov Property: Given the value at time t, the value at time t+1 is independent of the values at times before t. That is:

\Pr(X_{t+1} = x_{t+1} \mid X_t = x_t, X_{t-1} = x_{t-1}, \dots, X_1 = x_1) = \Pr(X_{t+1} = x_{t+1} \mid X_t = x_t)

for all t and all values x_1, \dots, x_{t+1}.

Stationary Property: A Markov process is stationary if its transition probabilities do not change over time:

\Pr(X_{t+1} = v \mid X_t = u) = \Pr(X_1 = v \mid X_0 = u) for all t.

Hence, to fully specify a stationary Markov process it is sufficient to define \Pr(X_1 = v \mid X_0 = u) and \Pr(X_0 = u) for all u and v. Only stationary Markov processes are considered here.
Example (a game): In each round a biased coin is tossed and, according to the result, the score is increased or decreased by one point. The game ends when the score reaches +100 (winning) or -100 (losing). The score at each step is a random variable, and the sequence of scores as the game progresses forms a random process X_0, X_1, \dots, X_t.
At the end of each step the score depends solely on the current score t_0 and the result of tossing the coin (which is independent of time and of previous tosses). Hence the process is a stationary Markov process:

\Pr(X_{t+1} = t \mid X_t = t_0, X_{t-1} = t_{t-1}, \dots, X_0 = 0) =
  p        if t = t_0 + 1
  1 - p    if t = t_0 - 1
  0        otherwise
= \Pr(X_{t+1} = t \mid X_t = t_0) = \Pr(X_1 = t \mid X_0 = t_0),

independent of t and of the history t_0, \dots, t_{t-1}. (If the bias of the coin changed over time, the process would still be Markov but would not be stationary.)
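As a sanity check, the game above can be simulated step by step. This is a minimal sketch in Python (not part of the original slides); the helper name `play_game` and the chosen bias are illustrative.

```python
import random

def play_game(p, target=100, seed=0):
    """Simulate the coin-toss game: start at score 0, gain a point with
    probability p and lose a point otherwise, and stop as soon as the
    score reaches +target (winning) or -target (losing)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    score = 0
    while abs(score) < target:
        score += 1 if rng.random() < p else -1
    return score

# The game always terminates at one of the two absorbing scores.
final = play_game(p=0.55, target=100, seed=42)
```

Because each step depends only on the current score, the trajectory is generated entirely from the transition probabilities p and 1-p, which is exactly the Markov property at work.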
Example (tracking a moving object): Suppose we observe an object moving in the plane and want to model its movement. Since the current position and velocity capture everything about its future movements, we can consider the 4-dimensional state:

S_t = (x_t, y_t, \dot{x}_t, \dot{y}_t)

A common model takes the next state to be a linear function of the current state plus Gaussian additive noise:

S_t = B_t S_{t-1} + w_t, \quad w_t \sim N(0, \Sigma)

or equivalently:

P_{S_t \mid S_{t-1}}(s_t \mid s_{t-1}) = N(s_t;\, B_t s_{t-1}, \Sigma)

Such processes are called linear Gaussian models.
For motion with constant velocity over time steps of length \Delta u we can define the linear relation as:

B_t = \begin{pmatrix} 1 & 0 & \Delta u & 0 \\ 0 & 1 & 0 & \Delta u \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

Acceleration can be modeled by setting the 3rd and 4th rows of the matrix (the dimensions containing \dot{x}_t and \dot{y}_t) at each time step. When B_t = B is fixed, the model is independent of t (stationary).
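A minimal simulation of this constant-velocity model, assuming isotropic noise (\Sigma = \sigma^2 I) to keep the sketch dependency-free; the function name `step` is illustrative.

```python
import random

def step(state, dt, sigma, rng):
    """One step of S_t = B S_{t-1} + w_t for the constant-velocity model:
    positions advance by dt times the velocities, velocities are kept,
    and independent N(0, sigma^2) noise is added to every coordinate."""
    x, y, vx, vy = state
    mean = (x + dt * vx, y + dt * vy, vx, vy)   # B applied to the state
    return tuple(m + rng.gauss(0.0, sigma) for m in mean)

rng = random.Random(0)
s = (0.0, 0.0, 1.0, 0.5)                  # start at origin, constant velocity
for _ in range(10):
    s = step(s, dt=1.0, sigma=0.0, rng=rng)   # sigma=0: noise-free check
# With no noise the object moves on a straight line to (10, 5).
```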
Example (autoregressive processes): An AR(d) model defines the value at time t to be a linear combination of its d previous values with a Gaussian additive noise:

X_t = \sum_{i=1}^{d} \beta_i X_{t-i} + w_t, \quad w_t \sim N(0, \Sigma)

For d > 1 this is not Markov in X_t alone, but it becomes Markov on the d-dimensional state V_t = (X_t, X_{t-1}, \dots, X_{t-d+1})^T:

V_t = B V_{t-1} + w_t C

where

B = \begin{pmatrix} \beta_1 & \beta_2 & \cdots & \beta_d \\ 1 & & & \\ & \ddots & & \\ & & 1 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
The transition density of this state is

P_{V_t \mid V_{t-1}}(v_t \mid v_{t-1}) = N\big(v_t^{(1)};\, (B v_{t-1})^{(1)}, \Sigma\big) if v_t^{(i)} = v_{t-1}^{(i-1)} for all 2 \le i \le d, and 0 otherwise.

Only the first coordinate is random; the remaining coordinates are shifted copies of the previous state (guaranteeing consistency), so the density is zero whenever there is any inconsistency between those values.
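The equivalence between the scalar AR(d) recursion and the companion-form state update can be checked numerically. This is a sketch with made-up coefficients; `ar_step` and `companion_step` are illustrative names.

```python
def ar_step(history, betas, w):
    """Direct AR(d) update: X_t = sum_i beta_i * X_{t-i} + w_t,
    where history = [X_{t-1}, X_{t-2}, ..., X_{t-d}]."""
    return sum(b * x for b, x in zip(betas, history)) + w

def companion_step(v, betas, w):
    """Same update via V_t = B V_{t-1} + w_t C with B the companion
    matrix: new first entry, remaining entries are shifted copies."""
    new_top = sum(b * x for b, x in zip(betas, v)) + w
    return [new_top] + v[:-1]

betas = [0.5, 0.3, -0.2]        # illustrative AR(3) coefficients
v = [1.0, 2.0, 3.0]             # state (X_2, X_1, X_0)
for w in [0.1, -0.05, 0.2]:     # a few fixed noise samples
    assert ar_step(v, betas, w) == companion_step(v, betas, w)[0]
    v = companion_step(v, betas, w)
```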
Markov Chains: A Markov process whose state space is discrete is called a "Markov chain" (MC). For a stationary chain, at each time step the process moves according to a fixed transition matrix:

\Pr(X_{t+1} = j \mid X_t = i) = p_{ij}

The matrix P = (p_{ij}) is a stochastic matrix: every row sums up to 1.
Doubly stochastic matrix: rows and columns both sum up to 1.
A Markov chain can be visualized with a transition diagram: a directed graph with one node per state and an edge of weight p_{ij} from state i to state j (we omit edges with zero weight). Together with the initial distribution \pi = (\pi_1, \dots, \pi_n), where \pi_i = \Pr(X_0 = i), this fully specifies the chain. Intuitively, the process behaves like a person walking randomly on a graph (each step depends only on the state he is currently in).
Some states may be absorbing: once we get into them, we never get out. When the state space is large (as in the coin-toss game, with scores from -100 to +100) we are not able to visualize the full diagram, so we draw it schematically:
[Transition diagram of the coin-toss game: states -100, …, 99, 100; each intermediate score moves up with probability p and down with probability 1-p; the boundary states +100 and -100 have self-loops of probability 1 (absorbing).]
Example (weather): If a day is rainy, the next day is rainy with probability \beta (and sunny with probability 1-\beta). If the day is sunny, the next day is rainy with probability \gamma (and sunny with probability 1-\gamma).

S = \{rainy, sunny\}, \quad P = \begin{pmatrix} \beta & 1-\beta \\ \gamma & 1-\gamma \end{pmatrix}
[Transition diagram: states R and S with edge weights \beta, 1-\beta, \gamma, 1-\gamma.]
Paths in Markov chains: the probability of reaching a state after several steps sums over all intermediate paths. For example:

\Pr(y_2 = 0 \mid y_0 = 0) = \Pr(y_1 = 1 \mid y_0 = 0) \cdot \Pr(y_2 = 0 \mid y_1 = 1) + \Pr(y_1 = 0 \mid y_0 = 0) \cdot \Pr(y_2 = 0 \mid y_1 = 0)
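The path sum above is exactly a matrix-product entry, which can be verified numerically. The values of \beta and \gamma below are made up.

```python
# States: 0 = rainy, 1 = sunny; P[i][j] = Pr(next = j | current = i).
beta, gamma = 0.7, 0.4           # illustrative transition probabilities
P = [[beta, 1 - beta],
     [gamma, 1 - gamma]]

# Two-step probability by summing over the intermediate state:
p2_00 = P[0][1] * P[1][0] + P[0][0] * P[0][0]

# The same number is the (0, 0) entry of the matrix square P^2:
p2_matrix = sum(P[0][k] * P[k][0] for k in range(2))
assert abs(p2_00 - p2_matrix) < 1e-12
```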
In general, define p_{ij}^{(n)} as the probability that, starting from state i, the process is at state j after n time steps. For two steps:

p_{ij}^{(2)} = \sum_{k=1}^{N} p_{ik}\, p_{kj}

where N is the number of states.
By induction:

p_{ij}^{(n)} = \sum_{k=1}^{N} p_{ik}^{(n-1)}\, p_{kj}

so the n-step transition probabilities are given by the matrix power P^n: the value at the i-th row and j-th column of P^n is p_{ij}^{(n)}. Consequently, if the initial distribution is \pi on the states, then after n steps the distribution is \pi P^n.
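A short sketch computing P^n and the distribution \pi P^n with plain lists (no libraries); the helper names are illustrative.

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(P, n):
    """P^n by repeated multiplication (fine for small chains)."""
    size = len(P)
    R = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

P = [[0.7, 0.3],
     [0.4, 0.6]]                # illustrative 2-state chain
pi0 = [[1.0, 0.0]]              # start in state 0 with certainty
pi5 = mat_mul(pi0, mat_pow(P, 5))   # distribution after 5 steps
```

Each row of P^n remains a probability distribution, so \pi P^n always sums to 1.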
Absorbing states: A state i is absorbing if the probability that the process remains in that state once it enters the state is 1 (i.e., p_{ii} = 1). A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step). In an absorbing chain the process eventually gets to one of the absorbing states and never leaves it.
[The coin-toss chain again: states -100, …, 100 with up-probability p and down-probability 1-p; +100 and -100 are absorbing (self-loop probability 1).]
Theorem: In an absorbing Markov chain, the probability that the process will be absorbed is 1.

Proof sketch: From every state it is possible to reach an absorbing state in a bounded number of steps, so there exist p < 1 and m such that the probability of not being absorbed after m steps is at most p, after 2m steps at most p^2, etc. Since the probability of not being absorbed is monotonically decreasing, we have:

\lim_{n \to \infty} \Pr(\text{not absorbed after } n \text{ steps}) = 0
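The theorem can be illustrated numerically on a small version of the coin-toss game (boundaries at ±3 instead of ±100) by propagating the whole distribution forward; everything below is an illustrative sketch.

```python
# Coin-toss game truncated to scores -3..+3; -3 and +3 are absorbing.
p = 0.5
states = list(range(-3, 4))

def step_dist(dist):
    """Push a distribution over scores through one coin toss."""
    new = {s: 0.0 for s in states}
    for s, mass in dist.items():
        if abs(s) == 3:
            new[s] += mass               # absorbed: stays forever
        else:
            new[s + 1] += mass * p
            new[s - 1] += mass * (1 - p)
    return new

dist = {s: (1.0 if s == 0 else 0.0) for s in states}
survival = []                            # Pr(not absorbed after n steps)
for _ in range(50):
    dist = step_dist(dist)
    survival.append(sum(m for s, m in dist.items() if abs(s) < 3))
```

The survival probabilities are non-increasing and decay geometrically toward 0, as the proof sketch predicts.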
Accessibility and communication: State j is accessible from state i if, starting in i, it is possible that the process will ever enter state j: (P^n)_{ij} > 0 for some n \ge 0. States i and j communicate if each is accessible from the other. In particular, every state communicates with itself, since p_{ii}^{(0)} = \Pr(X_0 = i \mid X_0 = i) = 1. Communication is an equivalence relation, so it breaks the state space up into a number of separate classes in which every pair of states communicate. (why?) A chain is irreducible if it consists of a single class.
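Communicating classes can be computed directly from the definition via mutual reachability over the positive entries of P. A sketch; the small example matrix is made up.

```python
def reachable(P, i):
    """All j with (P^n)_{ij} > 0 for some n >= 0 (graph search)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j, p in enumerate(P[s]):
            if p > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def classes(P):
    """Group states: i and j share a class iff each reaches the other."""
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    found = []
    for i in range(n):
        c = frozenset(j for j in range(n) if j in reach[i] and i in reach[j])
        if c not in found:
            found.append(c)
    return found

# 0 and 1 communicate; 2 is reachable from 1, but 1 is not reachable
# from 2 (state 2 is absorbing), so {2} is its own class.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.25, 0.25],
     [0.0, 0.0, 1.0]]
```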
Recurrent and transient states: Let f_i denote the probability that, starting in state i, the process will ever reenter state i:

f_i = \Pr(\exists n \ge 1 : X_n = i \mid X_0 = i)

If f_i = 1, state i is recurrent: the process returns to i over and over again, so the expected number of time periods that the process is in state i is infinite:

E[\#\{n : X_n = i\} \mid X_0 = i] = \infty

If f_i < 1, state i is transient, and the expected number of visits is finite. In that case:

E[\#\{n : X_n = i\} \mid X_0 = i] = \sum_{k=1}^{\infty} k \cdot \Pr(\#\{n : X_n = i\} = k \mid X_0 = i) = \dots + \infty \cdot \Pr(\#\{n : X_n = i\} = \infty \mid X_0 = i) < \infty
\implies \Pr(\#\{n : X_n = i\} = \infty \mid X_0 = i) = 0
Equivalently, state i is recurrent if and only if \sum_{n=1}^{\infty} (P^n)_{ii} = \infty. A finite Markov chain must contain at least one recurrent state: if all states were transient, there would be a finite number of steps after which the process should not be in any state (which is a contradiction).
Further classification:
o State i is positive recurrent if, starting in i, the expected time until the process returns to state i is finite; in a finite Markov chain every recurrent state is positive recurrent.
o State i has period d if (P^n)_{ii} = 0 whenever n is not divisible by d, and d is the largest integer with this property. A state with period 1 is aperiodic.
o A state that is aperiodic and positive recurrent is called ergodic.
These are all class properties: they are shared between states of a class.
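The period of a state can be computed straight from the definition as the gcd of the lengths of possible return paths. A small sketch with a bounded search horizon (enough for small chains); names are illustrative.

```python
from math import gcd

def period(P, i, max_n=50):
    """gcd of all n <= max_n with (P^n)_{ii} > 0; for small finite
    chains this horizon is enough to recover the true period."""
    d = 0
    current = {i}            # states reachable in exactly n steps from i
    for n in range(1, max_n + 1):
        current = {j for s in current for j, p in enumerate(P[s]) if p > 0}
        if i in current:
            d = gcd(d, n)
    return d

# Two states that deterministically swap have period 2;
# a chain with self-loops is aperiodic (period 1).
P_swap = [[0.0, 1.0], [1.0, 0.0]]
P_loop = [[0.5, 0.5], [0.5, 0.5]]
```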
[Transition diagram over states 1–8 with edge weights 0.25, 0.5, 0.75, and 1; the resulting classification is listed below.]
Classes: {1}, {2,3}, {4,5}, {6}, {7,8}
Recurrent states: 6, 7, 8 (all positive recurrent)
Periodic states: 2, 3, 7, 8 (with period 2)
Absorbing state(s): 6
Ergodic state(s): 6
Question: As time goes to infinity, what is the probability of being in each class?
Answer: The process will be in the transient classes {1}, {2,3}, {4,5} with probability 0. The problem is symmetric for entering the classes {6} and {7,8}, as their only input edges come from state 5 with equal probabilities 0.25, so the probability of being in each of these two classes is 0.5.
Question: If the process is absorbed in {7,8} (which can be considered as an absorbing super-state), what will happen after that?
Answer: It will alternate between 7 and 8 forever, so as t \to \infty the probability of being in 7 (or in 8) depends on the parity of t. In general, finding the exact behavior of non-ergodic states as t \to \infty is not easy. (try it!) We will talk about the ergodic case in the next slides.
Theorem: For an irreducible ergodic Markov chain, \lim_{n \to \infty} (P^n)_{ij} exists and is independent of i. Furthermore, letting

\pi_j^* = \lim_{n \to \infty} (P^n)_{ij}

the vector \pi^* = (\pi_1^*, \dots, \pi_d^*)^T is the unique nonnegative solution of

\pi^* = \pi^* P, \quad \sum_{j=1}^{d} \pi_j^* = 1
Remark: For an irreducible positive recurrent chain that is periodic, \lim_{n \to \infty} (P^n)_{ij} does not exist in general, but the given equations still have a unique solution \pi^* = (\pi_1^*, \dots, \pi_d^*)^T, in which \pi_j^* equals the long-run proportion of time that the Markov chain is in state j.
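\pi^* can be approximated by simply iterating \pi \leftarrow \pi P (power iteration); in the ergodic case of the theorem this converges to the limit. A sketch with an illustrative 2-state chain.

```python
def stationary(P, iters=500):
    """Approximate pi* by repeatedly applying pi <- pi P, starting
    from the uniform distribution (a linear solve would also work)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.7, 0.3],
     [0.4, 0.6]]              # illustrative ergodic chain
pi = stationary(P)            # should satisfy pi = pi P and sum to 1
```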
Back to the weather example: we want to see how the weather will behave when time goes to infinity.

P = \begin{pmatrix} \beta & 1-\beta \\ \gamma & 1-\gamma \end{pmatrix} \implies
\pi_0^* = \beta \pi_0^* + \gamma \pi_1^*
\pi_1^* = (1-\beta) \pi_0^* + (1-\gamma) \pi_1^*
\pi_0^* + \pi_1^* = 1

Solving these gives \pi_0^* = \frac{\gamma}{1+\gamma-\beta} and \pi_1^* = \frac{1-\beta}{1+\gamma-\beta}.
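The closed form can be checked against the balance equations for any concrete \beta and \gamma (the values below are made up).

```python
beta, gamma = 0.7, 0.4        # illustrative transition probabilities

# Closed form from solving pi* = pi* P with pi_0* + pi_1* = 1:
pi0 = gamma / (1 + gamma - beta)
pi1 = (1 - beta) / (1 + gamma - beta)

# Balance equation pi_0* = beta*pi_0* + gamma*pi_1* and normalization:
assert abs(pi0 - (beta * pi0 + gamma * pi1)) < 1e-12
assert abs(pi0 + pi1 - 1.0) < 1e-12
```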
Notes on the existence of the solution and its meaning:
o One of the two balance equations is redundant, which is why the normalization constraint is needed. (why?)