Markov Chains and Pandemics. Caleb Dedmore and Brad Smith. December 12, 2016. PowerPoint presentation.



SLIDE 1

Markov Chains and Pandemics

Caleb Dedmore and Brad Smith December 12, 2016

SLIDE 2

Markov Chain Basics

A displayed formula:

    (InitialConditionVector) · (TransitionMatrix)^t    (1)

or, in symbols,

    x^(0) T^t = x^(t)

Markov chains have to meet the requirements of two rules to exist:

A) Each trial leads to one of a finite set of outcomes.
B) The outcome of any trial depends at most on the outcomes of the immediately preceding conditions.

Theorem. For t, the number of trials, our Markov equation (1) is the predicted outcome from the initial condition.

Caleb Dedmore and Brad Smith Markov Chains and Pandemics

SLIDE 3

Initial Condition Vector

The first factor to consider in a Markov chain is the initial condition vector:

    (x_1^(0), x_2^(0), ..., x_i^(0))

The values of the vector must add up to the whole of the targeted subject, i.e. to one.
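This normalization requirement is easy to check in code. A minimal sketch assuming NumPy; the values are the ones used later in equation (3):

```python
import numpy as np

# Candidate initial condition vector: 90% susceptible, 7% infected, 3% recovered.
x0 = np.array([0.90, 0.07, 0.03])

# The entries must account for the whole population, i.e. sum to one.
assert np.isclose(x0.sum(), 1.0)
```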


SLIDE 4

[Transition diagram over three states labeled 1, 2, 3]

Figure: A Basic Transition Diagram.


SLIDE 5

Transition Matrix

    (x_1^(0), x_2^(0), ..., x_i^(0)) ·

        | T_11  T_12  ...  T_1j |
        | T_21  T_22  ...  T_2j |
        | ...   ...   ...  ...  |
        | T_i1  T_i2  ...  T_ij |

    = (x_1^(1), x_2^(1), ..., x_i^(1))


SLIDE 6

Markov Notation

    T_11 x_1^(0) + T_21 x_2^(0) + ... + T_i1 x_i^(0) = x_1^(1)
    T_12 x_1^(0) + T_22 x_2^(0) + ... + T_i2 x_i^(0) = x_2^(1)
    ...
    T_1j x_1^(0) + T_2j x_2^(0) + ... + T_ij x_i^(0) = x_i^(1)

    (x_1^(1), x_2^(1), ..., x_i^(1)) = x^(1)

We can use the same method of calculation to find the next condition vector: x^(1) T = x^(2).


SLIDE 7

Markov Notation

    T_11 x_1^(k-1) + T_21 x_2^(k-1) + ... + T_i1 x_i^(k-1) = x_1^(k)
    T_12 x_1^(k-1) + T_22 x_2^(k-1) + ... + T_i2 x_i^(k-1) = x_2^(k)
    ...
    T_1j x_1^(k-1) + T_2j x_2^(k-1) + ... + T_ij x_i^(k-1) = x_i^(k)

We now have a feasible equation for one-trial increments:

    x^(k) = x^(k-1) T    (2)
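Equation (2) is a single vector-matrix product per trial, so it iterates naturally in code. A minimal sketch in Python with NumPy, using the SIR values introduced in the later slides:

```python
import numpy as np

# Row-stochastic transition matrix (states S, I, R), from equation (4).
T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])

def step(x, T):
    """One trial of equation (2): x^(k) = x^(k-1) T."""
    return x @ T

x = np.array([0.90, 0.07, 0.03])   # x^(0), from equation (3)
for k in range(1, 4):
    x = step(x, T)
    print(f"x^({k}) =", np.round(x, 4))
```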


SLIDE 8

Determining Initial Condition Values

So now, for our starting initial condition vector, we have the notation below:

    (x_1^(0), x_2^(0), 1 - x_1^(0) - x_2^(0)) = (x_1^(0), x_2^(0), x_3^(0))

Let's place our values as:

    x^(0) = (.90, .07, .03)    (3)


SLIDE 9

SIR Diagram

[SIR transition diagram: S -> S with probability 0.85, S -> I with 0.15, I -> I with 0.12, I -> R with 0.88, and R -> R with 1.0]

Figure: A Complete SIR Diagram.


SLIDE 10

SIR Matrix

Using our values from diagram 2, we can create a transition matrix for calculations of change within the population in our scenario. S represents susceptible, I represents infected, and R represents recovered:

          S     I     R
    S    .85   .15   .0
    I    .0    .12   .88
    R    .0    .0    1.0       (4)


SLIDE 11

Beginning Trials

We can now plug in our ICV (3) and SIR matrix (4) for our disease scenario to calculate one trial: x^(0) T = x^(1).

    (.90, .07, .03) ·
        | .85  .15  .0  |
        | .0   .12  .88 |
        | .0   .0   1.0 |
    = x^(1)


SLIDE 12

Beginning Trials

Calculating out our vector-matrix multiplication, we can find our condition vector for one trial, or one week of our scenario:

    0.85(.90) + 0(.07) + 0(.03) = 0.765
    0.15(.90) + 0.12(.07) + 0(.03) = 0.1434
    0(.90) + 0.88(.07) + 1.0(.03) = 0.0916

    x^(1) = (0.765, 0.143, 0.092)


SLIDE 13

The Second Trial

Using equation (2), we can find our formula for two trials:

    x^(2) = x^(2-1) (TransitionMatrix) = x^(1) (TransitionMatrix)

    (0.765, 0.143, 0.092) ·
        | .85  .15  .0  |
        | .0   .12  .88 |
        | .0   .0   1.0 |
    = (0.650, 0.132, 0.218)
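Both trials can be verified numerically; a sketch (NumPy assumed):

```python
import numpy as np

x0 = np.array([0.90, 0.07, 0.03])           # ICV, equation (3)
T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])           # SIR matrix, equation (4)

x1 = x0 @ T   # after one week
x2 = x1 @ T   # after two weeks
print(np.round(x1, 3))   # approximately (0.765, 0.143, 0.092)
print(np.round(x2, 3))   # approximately (0.650, 0.132, 0.218)
```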


SLIDE 14

Matrix Power

    (InitialConditionVector) · (TransitionMatrix)^t

It is important to note that t is not the symbol T for transpose; it is the number of trials. For t = 5:

    (.90, .07, .03) ·
        | .85  .15  .0  | ^5
        | .0   .12  .88 |
        | .0   .0   1.0 |
    = (.399, .082, .519)
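The t-trial prediction can be computed directly with a matrix power; a sketch (NumPy assumed):

```python
import numpy as np
from numpy.linalg import matrix_power

x0 = np.array([0.90, 0.07, 0.03])
T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])

# x^(5) = x^(0) T^5: five trials in one product.
x5 = x0 @ matrix_power(T, 5)
print(np.round(x5, 3))   # approximately (0.399, 0.082, 0.519)
```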


SLIDE 15

Matrix Power

For t = 10:

    (.90, .07, .03) ·
        | .85  .15  .0  | ^10
        | .0   .12  .88 |
        | .0   .0   1.0 |
    = (.177, .036, .786)

or, expanding T^10:

    (.90, .07, .03) ·
        | 0.197  0.040     0.763 |
        | .0     6.19e-10  1.0   |
        | .0     .0        1.0   |
    = (.177, .036, .786)


SLIDE 16

Steady State Vector

    (InitialConditionVector) · (MarkovMatrix)^t = (InitialConditionVector)

or:

    M p = p

where M is the transpose of our matrix, and p is the immediately preceding vector.


SLIDE 17

Steady State Vector

    M p = p
    M p - p = 0
    (M - I) p = 0

Now, simply finding the nullspace gives us the value of p.


SLIDE 18

Steady State Vector

    M - I =
        | .85  .0   .0  |     | 1  0  0 |     | -.15   .0   .0 |
        | .15  .12  .0  |  -  | 0  1  0 |  =  |  .15  -.88  .0 |
        | .0   .88  1.0 |     | 0  0  1 |     |  .0    .88  .0 |

By row reducing, we can solve for our steady state vector:

    p = (0, 0, 1)
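Numerically, the nullspace of M - I is the eigenvector of M for eigenvalue 1; a sketch (NumPy assumed):

```python
import numpy as np

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
M = T.T   # column-stochastic form used in Mp = p

# (M - I)p = 0 means p is an eigenvector of M with eigenvalue 1.
vals, vecs = np.linalg.eig(M)
p = vecs[:, np.argmin(np.abs(vals - 1.0))].real
p = p / p.sum()          # normalize so the entries sum to one
print(np.round(p, 6))    # (0, 0, 1): everyone eventually recovers
```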


SLIDE 19

Taking the Limit

    (.90, .07, .03) · lim_{t -> infinity} T^t

    = (.90, .07, .03) ·
        | .0  .0  1.0 |
        | .0  .0  1.0 |
        | .0  .0  1.0 |

    = (0, 0, 1)
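A large power of T stands in for the limit; a sketch (NumPy assumed):

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
x0 = np.array([0.90, 0.07, 0.03])

T_inf = matrix_power(T, 1000)    # t = 1000 is effectively the limit here
print(np.round(T_inf, 6))        # every row tends to (0, 0, 1)
print(np.round(x0 @ T_inf, 6))   # (0, 0, 1), the steady state
```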


SLIDE 20

SIRD

          S     I     R     D
    S    .85   .15   .0    .0
    I    .0    .20   .70   .10
    R    .0    .0    1.0   .0
    D    .0    .0    .0    1.0

By adding a new state, we can manipulate our transition matrix into an IODA matrix. The form is shown below:

    | I  O |
    | D  A |


SLIDE 21

IODA

Through basic row operations, our matrix is now in IODA form:

          R     D     S     I
    R    1.0   .0    .0    .0
    D    .0    1.0   .0    .0
    S    .0    .0    .85   .15
    I    .70   .10   .0    .20

Partitioned into IODA sections:

    | 1.0  .0  | .0   .0  |
    | .0   1.0 | .0   .0  |
    |----------+----------|
    | .0   .0  | .85  .15 |
    | .70  .10 | .0   .20 |
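The reordering into IODA form is an index permutation; a sketch (NumPy assumed; the block variable names follow the I, O, D, A labels above):

```python
import numpy as np

# SIRD matrix, states ordered S, I, R, D.
P = np.array([[0.85, 0.15, 0.00, 0.00],
              [0.00, 0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

order = [2, 3, 0, 1]           # reorder states as R, D, S, I
C = P[np.ix_(order, order)]    # canonical (IODA) form

I_blk = C[:2, :2]   # identity: absorbing states stay put
O_blk = C[:2, 2:]   # zeros: no escape from absorbing states
D_blk = C[2:, :2]   # transient -> absorbing probabilities
A_blk = C[2:, 2:]   # transient -> transient probabilities
```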


SLIDE 22

Average

Now, to further manipulate our findings, we will solve N = (I - A)^(-1):

    I - A  =  | 1.0  .0  |  -  | .85  .15 |  =  | .15  -.15 |
              | .0   1.0 |     | .0   .20 |     | .0    .80 |

    N = (I - A)^(-1) = (1/.12) | .80  .15 |  =  | 6.67  1.25 |
                               | .0   .15 |     | 0.    1.25 |
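The fundamental matrix can be checked with a direct inverse; a sketch (NumPy assumed):

```python
import numpy as np

# Transient block A from the IODA partition (states S, I).
A = np.array([[0.85, 0.15],
              [0.00, 0.20]])

N = np.linalg.inv(np.eye(2) - A)   # fundamental matrix N = (I - A)^(-1)
print(np.round(N, 2))              # [[6.67, 1.25], [0., 1.25]]
```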


SLIDE 23

By reducing N, we get the matrix:

    | 5.34  1 |
    | 0.    1 |

The average number of trials before a member of the population enters an absorbing state, either recovered or deceased, is 5.34 trials.


SLIDE 24

Sources

David Arnold. Writing Scientific Papers in LaTeX.

David Arnold. The Leslie Matrix.

Rose-Hulman Institute of Technology. Markov Chains. https://www.rose-hulman.edu/~bryan/lottamath/transmat.pdf

Bernadette H. Perham and Arnold E. Perham. Topics in Discrete Mathematics: Markov Chain Theory.
