SLIDE 36 Markov chains
◮ Consider discrete-time index n = 0, 1, 2, . . .
◮ Time-dependent random state Xn takes values on a countable set
◮ In general denote states as i = 0, 1, 2, . . ., i.e., here the state space is N
◮ If Xn = i we say “the process is in state i at time n”
◮ Random process is X_N, its history up to n is X^n = [Xn, Xn−1, . . . , X0]^T
◮ Def: process X_N is a Markov chain (MC) if for all n ≥ 1, i, j ∈ N, x ∈ N^n

      P [ Xn+1 = j | Xn = i, X^{n−1} = x ] = P [ Xn+1 = j | Xn = i ] = Pij
◮ Future depends only on current state Xn (memoryless, Markov property)
⇒ Future conditionally independent of the past, given the present
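A minimal simulation sketch of this definition (not from the slides): given a row-stochastic transition matrix P with entries Pij, the next state is drawn using only the current state, which is exactly the memoryless property above. The 3-state matrix P, the function name simulate_mc, and NumPy as the tool are illustrative assumptions.

    import numpy as np

    def simulate_mc(P, x0, n_steps, rng=None):
        """Simulate a Markov chain with transition matrix P starting from state x0.

        P[i, j] = P[X_{n+1} = j | X_n = i]; each row of P must sum to 1.
        Returns the trajectory [X_0, X_1, ..., X_{n_steps}].
        """
        rng = np.random.default_rng() if rng is None else rng
        states = np.arange(P.shape[0])
        traj = [x0]
        for _ in range(n_steps):
            # Markov property: the draw depends only on the current state traj[-1]
            traj.append(rng.choice(states, p=P[traj[-1]]))
        return traj

    # Hypothetical 3-state transition matrix (rows sum to 1)
    P = np.array([[0.5, 0.4, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.3, 0.6]])

    print(simulate_mc(P, x0=0, n_steps=10))

Because the next-state draw reads only the current state, conditioning on the full history X^{n−1} would not change the simulated transition probabilities.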