Markov Chains and Pandemics
Caleb Dedmore and Brad Smith
December 12, 2016
Markov Chain Basics
A displayed formula:

    (InitialConditionVector) · (TransitionMatrix)^t    (1)

- or -

    x^(0) T^t = x^(t)

Markov chains have to meet the requirements of two rules to exist:
A) Each trial leads to one of a finite set of outcomes.
B) The outcome of any trial depends at most on the outcome of the immediately preceding trial.

Theorem
For t, the number of trials, our Markov equation (1) gives the predicted outcome after t trials from the initial condition.
Initial Condition Vector
The first factor to consider in a Markov chain is the initial condition vector:

    x^(0) = (x^(0)_1, x^(0)_2, ..., x^(0)_i)

The values of the vector must add up to the whole of the targeted population, or one.
Figure: A basic transition diagram with three states, 1, 2, and 3.
Transition Matrix
                                     | T_11  T_12  ...  T_1j |
    (x^(0)_1, x^(0)_2, ..., x^(0)_i) | T_21  T_22  ...  T_2j | = (x^(1)_1, x^(1)_2, ..., x^(1)_i)
                                     |  ...   ...  ...   ... |
                                     | T_i1  T_i2  ...  T_ij |
Markov Notation
    T_11 x^(0)_1 + T_21 x^(0)_2 + ... + T_i1 x^(0)_i = x^(1)_1
    T_12 x^(0)_1 + T_22 x^(0)_2 + ... + T_i2 x^(0)_i = x^(1)_2
    ...
    T_1j x^(0)_1 + T_2j x^(0)_2 + ... + T_ij x^(0)_i = x^(1)_i

    (x^(1)_1, x^(1)_2, ..., x^(1)_i) = x^(1)

We can use the same method of calculation to find the next condition vector:

    x^(1) T = x^(2)
Markov Notation
    T_11 x^(k-1)_1 + T_21 x^(k-1)_2 + ... + T_i1 x^(k-1)_i = x^(k)_1
    T_12 x^(k-1)_1 + T_22 x^(k-1)_2 + ... + T_i2 x^(k-1)_i = x^(k)_2
    ...
    T_1j x^(k-1)_1 + T_2j x^(k-1)_2 + ... + T_ij x^(k-1)_i = x^(k)_i

We now have a feasible equation for one-trial increments:

    x^(k) = x^(k-1) T    (2)
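The one-trial update (2) is a single vector-matrix product. A minimal NumPy sketch, using a small two-state chain whose values are made up purely for illustration (they are not from the slides):

```python
import numpy as np

def step(x_prev, T):
    """One Markov trial: x^(k) = x^(k-1) T, with x a row vector."""
    return x_prev @ T

# Illustrative 2-state chain (example values only).
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])
x0 = np.array([1.0, 0.0])

x1 = step(x0, T)   # x1 = (0.9, 0.1)
x2 = step(x1, T)   # x2 = (0.85, 0.15)
```

Because each row of T sums to one, every condition vector produced by `step` also sums to one.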
Determining Initial Condition Values
So now, for our starting initial condition vector, we have the notation below:

    (x^(0)_1, x^(0)_2, 1 - x^(0)_1 - x^(0)_2) = (x^(0)_1, x^(0)_2, x^(0)_3)

Let's place our values as:

    x^(0) = (.90, .07, .03)    (3)
SIR Diagram
Figure: A complete SIR diagram, with transition probabilities S→S = 0.85, S→I = 0.15, I→I = 0.12, I→R = 0.88, and R→R = 1.0.
SIR Matrix
Using the values from our SIR diagram, we can create a transition matrix for calculating change within the population in our scenario. S represents susceptible, I represents infected, and R represents recovered:

           S    I    R
    S    .85  .15   .0
    I     .0  .12  .88    (4)
    R     .0   .0  1.0
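Matrix (4) can be entered directly for computation. A minimal sketch checking the defining property that each row sums to one (every member of the population goes somewhere each trial):

```python
import numpy as np

# SIR transition matrix (4): rows are the "from" states S, I, R.
T = np.array([[0.85, 0.15, 0.00],   # S: stay susceptible or become infected
              [0.00, 0.12, 0.88],   # I: stay infected or recover
              [0.00, 0.00, 1.00]])  # R: recovered is absorbing

# Sanity check: every row of a transition matrix must sum to one.
assert np.allclose(T.sum(axis=1), 1.0)
```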
Beginning Trials
We can now plug in our ICV (3) and SIR matrix (4) for our disease scenario to calculate one trial:

    x^(0) T = x^(1)

                    | .85  .15   .0 |
    (.90, .07, .03) |  .0  .12  .88 | = x^(1)
                    |  .0   .0  1.0 |
Beginning Trials
Calculating out our vector-matrix multiplication, we can find our condition vector for one trial, or one week of our scenario:

    0.85(.90) +    0(.07) +   0(.03) = 0.765
    0.15(.90) + 0.12(.07) +   0(.03) = 0.1434
       0(.90) + 0.88(.07) + 1.0(.03) = 0.0916

    x^(1) = (0.765, 0.143, 0.092)
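The same one-week step can be checked numerically; a minimal sketch using the slides' values:

```python
import numpy as np

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
x0 = np.array([0.90, 0.07, 0.03])   # initial condition vector (3)

x1 = x0 @ T                          # one trial = one week
# x1 = (0.765, 0.1434, 0.0916), matching the hand computation above
```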
The Second Trial
Using equation (2), we can find our formula for two trials:

    x^(2) = x^(2-1) (TransitionMatrix) = x^(1) (TransitionMatrix)

                          | .85  .15   .0 |
    (0.765, 0.143, 0.092) |  .0  .12  .88 | = (0.650, 0.132, 0.218)
                          |  .0   .0  1.0 |
Matrix Power
    (InitialConditionVector) · (TransitionMatrix)^t

It is important to note that t is not the symbol T for transpose, but is instead the number of trials. For t = 5:

                    | .85  .15   .0 |^5
    (.90, .07, .03) |  .0  .12  .88 |    → (.399, .082, .519)
                    |  .0   .0  1.0 |
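The matrix power has a direct NumPy counterpart; a minimal sketch reproducing the t = 5 result:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
x0 = np.array([0.90, 0.07, 0.03])

# Five trials at once: x^(5) = x^(0) T^5.
x5 = x0 @ matrix_power(T, 5)
# x5 ≈ (0.399, 0.082, 0.519) after rounding to three decimals
```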
Matrix Power
For t = 10:

                    | .85  .15   .0 |^10
    (.90, .07, .03) |  .0  .12  .88 |     → (.177, .036, .786)
                    |  .0   .0  1.0 |

- or -, computing the matrix power first:

                    | 0.197  0.040        0.763 |
    (.90, .07, .03) |   .0   6.19 × 10^-10  1.0 | → (.177, .036, .786)
                    |   .0     .0           1.0 |
Steady State Vector
    (InitialConditionVector) · (MarkovMatrix)^t = (InitialConditionVector)

- or -

    Mp = p

where M is the transpose of our transition matrix and p is the steady-state vector, written as a column.
Steady State Vector
    Mp = p
    Mp − p = 0
    (M − I)p = 0

Now, simply finding the nullspace of (M − I) gives us the value of p.
Steady State Vector
            | .85   .0   .0 |   | 1  0  0 |   | -.15    .0   .0 |
    M − I = | .15  .12   .0 | − | 0  1  0 | = |  .15  -.88   .0 |
            |  .0  .88  1.0 |   | 0  0  1 |   |  .0    .88   .0 |

By row reducing, we can solve for our steady-state vector:

    p = (0, 0, 1)
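The nullspace computation can be done numerically. One standard route (a sketch, not the only way) is to take the eigenvector of M for eigenvalue 1, which spans the nullspace of (M − I), and normalize it to sum to one:

```python
import numpy as np

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])
M = T.T                              # Mp = p uses the transpose

# Eigenvector for eigenvalue 1 = basis vector of the nullspace of (M - I).
vals, vecs = np.linalg.eig(M)
p = vecs[:, np.argmin(np.abs(vals - 1.0))].real
p = p / p.sum()                      # normalize so the entries sum to one
# p = (0, 0, 1): everyone eventually recovers
```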
Taking the Limit
                         | .85  .15   .0 |^t
    (.90, .07, .03) lim  |  .0  .12  .88 |
                    t→∞  |  .0   .0  1.0 |

                      | .0  .0  1.0 |
    → (.90, .07, .03) | .0  .0  1.0 | → (0, 0, 1)
                      | .0  .0  1.0 |
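The limit can be approximated by raising T to a large power; every row converges to the steady state (0, 0, 1), so the starting distribution no longer matters. A minimal sketch:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.12, 0.88],
              [0.00, 0.00, 1.00]])

# T^500 is numerically indistinguishable from the limiting matrix,
# whose rows are all the steady-state vector (0, 0, 1).
T_inf = matrix_power(T, 500)
```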
SIRD
           S    I    R    D
    S    .85  .15   .0   .0
    I     .0  .20  .70  .10
    R     .0   .0  1.0   .0
    D     .0   .0   .0  1.0

By adding a new state, D for deceased, we can manipulate our transition matrix into an IODA matrix. The form is shown below:

    | I  O |
    | D  A |
IODA
By reordering the states so the absorbing ones come first (R, D, S, I), our matrix is now in IODA form:

           R    D    S    I
    R    1.0   .0   .0   .0
    D     .0  1.0   .0   .0
    S     .0   .0  .85  .15
    I    .70  .10   .0  .20

Partitioned into IODA sections:

    I = | 1.0   .0 |    O = | .0  .0 |
        |  .0  1.0 |        | .0  .0 |

    D = |  .0   .0 |    A = | .85  .15 |
        | .70  .10 |        |  .0  .20 |
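The reordering and partitioning can be sketched with an index permutation (the `order` list below is just the R, D, S, I reordering described above):

```python
import numpy as np

# SIRD transition matrix, rows/columns ordered S, I, R, D.
T = np.array([[0.85, 0.15, 0.00, 0.00],
              [0.00, 0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

# Put the absorbing states (R, D) first: new order is R, D, S, I.
order = [2, 3, 0, 1]
P = T[np.ix_(order, order)]

A = P[2:, 2:]        # transient -> transient block
D_block = P[2:, :2]  # transient -> absorbing block
```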
Average
Now, to further manipulate our findings, we will solve N = (I − A)^(−1):

            | 1.0   .0 |   | .85  .15 |   | .15  -.15 |
    I − A = |  .0  1.0 | − |  .0  .20 | = |  .0   .80 |

    N = (I − A)^(−1) = (1/.12) | .80  .15 | = | 6.67  1.25 |
                               |  .0  .15 |   |   .0  1.25 |
The row sums of N give the average number of trials before a member of the population enters an absorbing state, either recovered or deceased: about 6.67 + 1.25 ≈ 7.92 trials starting from the susceptible state, and 1.25 trials starting from the infected state.
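The fundamental-matrix computation can be checked numerically; in the standard absorbing-chain theory, the row sums of N = (I − A)^(−1) give the expected number of trials before absorption. A minimal sketch:

```python
import numpy as np

# Transient block (states S, I) from the IODA form.
A = np.array([[0.85, 0.15],
              [0.00, 0.20]])

N = np.linalg.inv(np.eye(2) - A)   # fundamental matrix
# N ≈ [[6.67, 1.25], [0, 1.25]]

# Row sums of N = expected trials before absorption from each transient state.
expected = N.sum(axis=1)
# expected ≈ (7.92, 1.25): about 7.9 trials starting susceptible,
# 1.25 trials starting infected
```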
Sources
David Arnold. Writing Scientific Papers in LaTeX.
David Arnold. The Leslie Matrix.
Rose-Hulman Institute of Technology. Markov Chains. https://www.rose-hulman.edu/~bryan/lottamath/transmat.pdf
Bernadette H. Perham and Arnold E. Perham. Topics in Discrete Mathematics: Markov Chain Theory.