
Intro to Markov Chains

CS 70, Summer 2019 Lecture 26, 8/7/19

1 / 23

Applications of Markov Chains

- Models systems of states and transitions
- PageRank – Google's search algorithm. States are webpages, transitions are links.
- Tons of applications outside of CS: statistical physics, speech recognition, bioinformatics, sabermetrics (baseball statistics!)...

2 / 23

Markov Chain Definition

Three key components (and one assumption):
- Set S of states. Think of these as vertices in a graph.
- Transition probabilities P[i → j]. Think of these as directed edges in a graph. Transitions out of a node should sum to 1.
- Initial distribution µ(0). Gives the probability that we start at a state.
- Memorylessness (aka Markov property).

3 / 23

Example: Gambling

I start with $2. If I guess a coin flip correctly, I get $1, and if I am incorrect, I lose $1. I stop gambling when I either hit $0 or $4.

4 / 23

[Diagram: chain on states $0, $1, $2, $3, $4; from each of $1–$3, move up $1 w.p. 1/2 and down $1 w.p. 1/2; $0 and $4 are absorbing. Initial distribution: start at $2 w.p. 1.]
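Not part of the slides, but a minimal Python sketch of traversing this chain; the function name run_chain and all details here are made up for illustration.

```python
import random

def run_chain(start=2):
    """Traverse the gambling chain from `start` until we hit $0 or $4."""
    path = [start]
    state = start
    while state not in (0, 4):
        state += 1 if random.random() < 0.5 else -1  # fair coin flip
        path.append(state)
    return path

# Estimate P[end at $4]; by symmetry of the fair coin it should be ~1/2.
trials = 100_000
wins = sum(run_chain()[-1] == 4 for _ in range(trials))
print(wins / trials)
```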

Traversing the Chain

X0 is the initial state, drawn according to µ(0). Choose each transition according to its transition probability. Xi is the state you're on at time i; each Xi is a RV. Markov Property: only the current state matters for the next. "Knowing the entire history of the chain is equivalent to just knowing the current state."

5 / 23

Markov property ("Memoryless"):

P[Xn+1 = sn+1 | X0 = s0, X1 = s1, ..., Xn = sn] = P[Xn+1 = sn+1 | Xn = sn]

Gambling II

Same chain as before: What is P[X1 = 3|X0 = 2]? What is P[X100 = 3|X99 = 2, X0 = 2]? What is P[X1 = 3, X2 = 2, X3 = 3, X4 = 4]?

6 / 23

Each step of this chain moves up $1 w.p. 1/2 and down $1 w.p. 1/2, so:

P[X1 = 3 | X0 = 2] = 1/2.

P[X100 = 3 | X99 = 2, X0 = 2] = P[X100 = 3 | X99 = 2] = P[X1 = 3 | X0 = 2] = 1/2, by the Markov property (and because the transition probabilities don't depend on time).

P[X1 = 3, X2 = 2, X3 = 3, X4 = 4] = 1/2 × 1/2 × 1/2 × 1/2 = 1/16, multiplying the transition probabilities along the path 2 → 3 → 2 → 3 → 4.


Gambling II

What is P[X4 = 4]?

7 / 23

Same setup: X0 = 2. Enumerate the paths (X0, X1, X2, X3, X4) that end at 4:

Path 1: 2, 3, 4, 4, 4
Path 2: 2, 3, 2, 3, 4
Path 3: 2, 1, 2, 3, 4

P[X4 = 4] = P[Path 1] + P[Path 2] + P[Path 3]. A good idea, but it doesn't scale.
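To see both the idea and why it doesn't scale, here is a sketch of mine (not from the slides) that brute-forces P[X4 = 4] by summing the probability of every length-4 path of this chain.

```python
from itertools import product

# Transition probabilities of the $0-$4 gambling chain ($0, $4 absorbing).
P = {
    (0, 0): 1.0, (4, 4): 1.0,
    (1, 0): 0.5, (1, 2): 0.5,
    (2, 1): 0.5, (2, 3): 0.5,
    (3, 2): 0.5, (3, 4): 0.5,
}

total = 0.0
for path in product(range(5), repeat=4):  # all candidate (X1, X2, X3, X4)
    if path[-1] != 4:
        continue
    prob, state = 1.0, 2                  # X0 = 2 with probability 1
    for nxt in path:
        prob *= P.get((state, nxt), 0.0)  # impossible steps contribute 0
        state = nxt
    total += prob

print(total)  # 1/4 + 1/16 + 1/16 = 3/8
```

Even four steps mean 5^4 candidate paths; the transition matrix on the next slide is the scalable alternative.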

The Transition Matrix

Calculations are easier to do when we stick the transition probabilities in a matrix. Transition matrix P. The (i, j) entry is P[X1 = j|X0 = i], or the transition from i to j.

8 / 23

P[i → j]: the row index i and the column index j both range over the states.

Row entries: non-negative, and each row sums to 1 (all transitions out of a state).
Column sums: entries are non-negative, but otherwise there is no restriction.
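A minimal numpy sketch (an illustration, not from the slides) of the transition matrix for the gambling example, checking the row and column facts above.

```python
import numpy as np

# Transition matrix of the gambling chain, states ordered $0..$4.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],  # $0 absorbing
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],  # $4 absorbing
])

assert np.allclose(P.sum(axis=1), 1.0)  # each row is a distribution
print(P.sum(axis=0))  # column sums: non-negative, otherwise unrestricted
```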

The Distribution Vectors

So far: saw initial distribution µ(0). Can represent it as a row vector: We can also define a distribution at time n:

9 / 23

µ(0) = [ P[X0 = 1]  P[X0 = 2]  ... ], with one entry per state (index = states). Entries sum to 1.

[ P[Xn = 1]  P[Xn = 2]  ... ], indexed by the states the same way: call this µ(n).

Distribution at Time 1

We’ll prove that µ(0)P = µ(1). If we know µ(1), how do we get µ(2)?

10 / 23

ith entry of µ(0)P = µ(0) × (ith column of P)
  = P[X0 = 1] × P[X1 = i | X0 = 1] + P[X0 = 2] × P[X1 = i | X0 = 2] + ...
  = P[X1 = i], by casework on X0 (Total Probability Rule).

So µ(0)P = µ(1). And µ(2) = µ(1)P = µ(0)P².
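To see this numerically, a small sketch (illustration only, not from the slides) computing µ(1) and µ(2) for the gambling chain by row-vector multiplication:

```python
import numpy as np

P = np.array([  # gambling chain, states $0..$4
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
mu0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # start at $2 w.p. 1

mu1 = mu0 @ P  # [0, 0.5, 0, 0.5, 0]
mu2 = mu1 @ P  # = mu0 @ matrix_power(P, 2): [0.25, 0, 0.5, 0, 0.25]
print(mu1, mu2)
```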

Distribution at Time n

In general: µ(n) = µ(0)P^n. (Proof optional.)

Example: Two State Markov Chain

11 / 23

For a ∈ (0, 1):

P = [ 1 − a    a   ]
    [   a    1 − a ]

P^n = [ 1/2 + (1/2)(1 − 2a)^n    1/2 − (1/2)(1 − 2a)^n ]
      [ 1/2 − (1/2)(1 − 2a)^n    1/2 + (1/2)(1 − 2a)^n ]

(See Notes; provable using induction.)

Aside: n → ∞

For the two state Markov chain, as n → ∞ (note |1 − 2a| < 1):

P^n → [ 1/2  1/2 ]
      [ 1/2  1/2 ]

No matter what µ(0) = [ p  1 − p ] is: µ(0)P^n → [ 1/2  1/2 ]. Tomorrow: we'll study this in greater detail!

12 / 23
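A quick numerical check (my sketch, not from the slides) of both the closed form for P^n and the limit:

```python
import numpy as np

a, n = 0.3, 10
P = np.array([[1 - a, a],
              [a, 1 - a]])

# Closed form: 1/2 +- (1/2)(1 - 2a)^n on the diagonal/off-diagonal.
closed = 0.5 + 0.5 * (1 - 2 * a) ** n * np.array([[1.0, -1.0],
                                                  [-1.0, 1.0]])
assert np.allclose(np.linalg.matrix_power(P, n), closed)

print(np.linalg.matrix_power(P, 100))  # ~[[0.5, 0.5], [0.5, 0.5]]
```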


Break

What’s the weirdest thing you’ve ever eaten?

13 / 23

First Step Analysis: Two Heads

I repeatedly flip a coin, and stop when I get two heads in a row. What is the expected number of flips I need before stopping?

14 / 23

First Step Analysis: Two Heads

For state S, let τ(S) be the expected time to two heads, starting from state S. Analyze a single transition out of each state to get the first step equations:

15 / 23

4 variables: τ(Start), τ(H), τ(T), τ(End).

Start: τ(Start) = 1 + (1/2)τ(H) + (1/2)τ(T)
T: τ(T) = 1 + (1/2)τ(H) + (1/2)τ(T)
H: τ(H) = 1 + (1/2)τ(T) + (1/2)τ(End)
HH: τ(End) = 0

Goal: τ(Start). (See Notes.)
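These equations are linear, so (a sketch of mine, not from the slides) we can substitute τ(End) = 0 and hand the rest to a linear solver:

```python
import numpy as np

# Unknowns ordered t = [tau(Start), tau(T), tau(H)]; tau(End) = 0 plugged in:
#   tau(Start) - 0.5 tau(T) - 0.5 tau(H) = 1
#   0.5 tau(T) - 0.5 tau(H)             = 1
#  -0.5 tau(T) + tau(H)                 = 1
A = np.array([
    [1.0, -0.5, -0.5],
    [0.0,  0.5, -0.5],
    [0.0, -0.5,  1.0],
])
b = np.ones(3)

print(np.linalg.solve(A, b))  # [6, 6, 4]: expected flips tau(Start) = 6
```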

Max of Two Geometrics

Let X, Y ∼ Geometric(p). X, Y are independent. Say X, Y model time until a success. max(X, Y) is the first time that both X, Y have succeeded at least once. What is E[max(X, Y)]?

16 / 23

States = # of successes so far. From state 0: stay w.p. (1 − p)² (both geos fail), move to 1 w.p. 2p(1 − p) (one succeeds), move to 2 w.p. p² (both succeed). From state 1: stay w.p. 1 − p, move to 2 w.p. p.

Max of Two Geometrics

Set up the first step equations, and solve:

17 / 23

τ(i) = expected time until both have succeeded, starting from state i.

0: τ(0) = 1 + (1 − p)²τ(0) + 2p(1 − p)τ(1) + p²τ(2)
1: τ(1) = 1 + (1 − p)τ(1) + pτ(2)
2: τ(2) = 0

Exercise: finish up.
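Finishing up (my own solution sketch, not printed on the slides): the equations give τ(2) = 0, τ(1) = 1/p, and then τ(0) = (3 − 2p)/(2p − p²). A quick simulation check:

```python
import random

def simulate_max(p, trials=200_000):
    """Average of max(X, Y) for independent X, Y ~ Geometric(p)."""
    total = 0
    for _ in range(trials):
        t, done = 0, [False, False]
        while not all(done):
            t += 1
            done = [d or (random.random() < p) for d in done]
        total += t
    return total / trials

p = 0.5
print((3 - 2 * p) / (2 * p - p * p))  # 8/3 ~ 2.667
print(simulate_max(p))                # should be close
```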

Coupon Collector: A Markov Chain?

Can we reformulate Coupon Collector (with n distinct coupons) as a Markov chain? How do we recover the expected number of coupons needed to get all n distinct ones?

18 / 23

States = # of distinct coupons collected so far (0, 1, ..., n). From state i: stay w.p. i/n (duplicate coupon), move to i + 1 w.p. (n − i)/n (new coupon).

Let τ(i) = expected time to reach "n" from "i". Goal: τ(0).
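The first step equations telescope here (a sketch of mine, not from the slides): τ(i) = 1 + (i/n)τ(i) + ((n − i)/n)τ(i + 1) rearranges to τ(i) = n/(n − i) + τ(i + 1), so τ(0) = n(1 + 1/2 + ... + 1/n), the familiar n·H_n.

```python
def expected_coupons(n):
    """tau(0) via first step analysis: sum n/(n-i) backwards from state n-1."""
    tau = 0.0                       # tau(n) = 0
    for i in range(n - 1, -1, -1):
        tau += n / (n - i)          # tau(i) = n/(n-i) + tau(i+1)
    return tau

print(expected_coupons(52))  # ~236: n * H_n for n = 52
```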

Probability of A Before B

Let A and B be two disjoint subsets of the states S of a Markov chain. Let α(i) be the probability that we enter A before entering B, if we start at state i.

19 / 23

"

Probability of A Before B

Can also run first step analysis!

20 / 23

If i ∈ A: α(i) = 1. (Already in A!)
If i ∈ B: α(i) = 0. (Impossible to get to A before B.)
Else: α(i) = Σ over neighbors j of i of P[i → j] α(j). (Casework based on taking 1 step from i.)
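These three cases translate directly into a linear system. A generic sketch (the function name and structure are my own, not from the slides), assuming the chain eventually enters A ∪ B:

```python
import numpy as np

def prob_A_before_B(P, A, B):
    """alpha(i) = P[enter A before B | start at i] by first step analysis."""
    n = P.shape[0]
    unknown = [i for i in range(n) if i not in A and i not in B]
    Q = np.array([[P[i, j] for j in unknown] for i in unknown])
    r = np.array([sum(P[i, j] for j in A) for i in unknown])
    x = np.linalg.solve(np.eye(len(unknown)) - Q, r)  # (I - Q) x = r
    alpha = np.zeros(n)
    alpha[list(A)] = 1.0
    alpha[unknown] = x
    return alpha

# Example: $0-$4 gambling chain, A = {$4}, B = {$0}.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
print(prob_A_before_B(P, A={4}, B={0}))  # [0, 0.25, 0.5, 0.75, 1]
```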

Gambling III

I start with $100. In each round, I win $100 with probability p and lose $100 with probability (1 − p). I end when I either have $0 or $300. What is the probability I end the game with $300?

21 / 23

[Diagram: chain on $0, $100, $200, $300; up $100 w.p. p, down $100 w.p. 1 − p; $0 and $300 absorbing.]

Gambling III

Let A = {0}, B = {300}. First step equations:

22 / 23

Here α(i) = P[reach B ($300) before A ($0) | start at state i], i.e., the probability of ending with $300.

$0: α(0) = 0
$100: α(100) = p α(200) + (1 − p) α(0)
$200: α(200) = p α(300) + (1 − p) α(100)
$300: α(300) = 1

Since α(0) = 0, we get α(100) = p α(200); substituting into the $200 equation gives α(200) = p + (1 − p) p α(200), so

α(200) = p / (1 − p(1 − p)),   α(100) = p² / (1 − p + p²).
Summary

- Markov chains let you model real-world problems with states and transition probabilities.
- The Markov property tells you that where you go next depends only on the current state, not on any previous history.
- First step analysis is a simple way of analyzing expected hitting times and the probabilities of hitting certain states before others.

23 / 23