Markov Chains, Mixing Time, and Gerrymandering (PowerPoint PPT Presentation)


SLIDE 1

Markov Chains, Mixing Time, and Gerrymandering

SLIDE 2

What is a Markov chain?

Definition (Event Space)

An event space is a collection Σ of events, where each event is a subset of the sample space Ω of possible outcomes.

Definition (Random Variable)

A random variable X is some mapping X : Ω → R, where X(ω), for ω ∈ Ω, represents the value assigned to the outcome ω.

Example

Say we have a fair coin c. Then, we can define X to be

X(ω) = 0 if ω = heads, 1 if ω = tails
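The coin example can be sketched in code. A minimal sketch in Python (the language is an assumption; the slides give none):

```python
import random

# Sample space Ω for a fair coin.
OMEGA = ["heads", "tails"]

def X(omega):
    """Random variable X : Ω → R mapping each outcome to a real value."""
    return 0 if omega == "heads" else 1

# Draw an outcome uniformly (fair coin) and evaluate X on it.
omega = random.choice(OMEGA)
value = X(omega)
```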

SLIDE 3

What is a Markov chain?

Example (Weather)

SLIDE 4

What is a Markov chain?

Definition (Markov property)

P(Xn+1 = σ | X1 = σ1, . . . , Xn = σn) = P(Xn+1 = σ | Xn = σn)

Definition (Markov chain)

Suppose we have some random process R = (X0, X1, . . . , Xn). Then, a Markov chain is R equipped with the Markov property.
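The Markov property means each next state is sampled from a distribution that depends only on the current state, not on the earlier history. A simulation sketch, assuming a hypothetical two-state chain with made-up transition probabilities:

```python
import random

# Hypothetical two-state chain; the probabilities are illustrative only.
# transitions[i][j] plays the role of P(X_{n+1} = j | X_n = i).
transitions = {
    "A": {"A": 0.7, "B": 0.3},
    "B": {"A": 0.4, "B": 0.6},
}

def step(state):
    """Sample X_{n+1} given X_n = state; only the current state matters."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

def run_chain(x0, n):
    """Generate the random process R = (X0, X1, ..., Xn)."""
    states = [x0]
    for _ in range(n):
        states.append(step(states[-1]))
    return states
```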

SLIDE 5

What is a Markov chain?

Definition (Transition Matrix)

Given some Markov chain M = (X0, X1, . . . , Xn), its transition matrix P can be defined as Pi,j = P(Xn+1 = j | Xn = i)

Definition (Reversibility)

A Markov chain M = (X0, X1, . . . , Xn) is considered reversible if, given some probability distribution π, the following holds: πi · P(Xn+1 = j | Xn = i) = πj · P(Xn+1 = i | Xn = j)
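The reversibility condition (detailed balance) can be checked numerically by testing πi · Pi,j = πj · Pj,i over all pairs of states. A sketch, using the two-state weather chain that appears on a later slide (any two-state chain satisfies detailed balance with respect to its stationary distribution):

```python
def is_reversible(P, pi, tol=1e-9):
    """Check detailed balance: pi[i] * P[i][j] == pi[j] * P[j][i] for all i, j."""
    n = len(P)
    return all(
        abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
        for i in range(n)
        for j in range(n)
    )

# Weather chain from the slides; (5/6, 1/6) is its stationary distribution.
P = [[0.9, 0.1], [0.5, 0.5]]
pi = [5 / 6, 1 / 6]
```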

SLIDE 6

What is a Markov chain?

Definition (Stationary Distribution)

A Markov chain M has reached a stationary distribution π if, for transition matrix P, π = π · P
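A stationary distribution satisfying π = π · P can be approximated by power iteration: start from any distribution and multiply by P until it stops changing. A sketch, with an illustrative matrix (not from the slides):

```python
def stationary(P, iters=1000):
    """Approximate pi with pi = pi * P by repeated multiplication,
    starting from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative two-state transition matrix; its stationary distribution
# is (4/7, 3/7) in closed form.
P = [[0.7, 0.3], [0.4, 0.6]]
pi = stationary(P)
```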

SLIDE 7

What is a Markov chain?

Example (Weather)

P =
[ 0.9  0.1 ]
[ 0.5  0.5 ]
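For a two-state chain with P = [[1−a, a], [b, 1−b]], the stationary distribution has the closed form π = (b/(a+b), a/(a+b)). A quick check for the weather matrix above (Python assumed):

```python
# Weather transition matrix from the slide.
P = [[0.9, 0.1],
     [0.5, 0.5]]

a, b = P[0][1], P[1][0]          # a = 0.1, b = 0.5
pi = [b / (a + b), a / (a + b)]  # closed form gives (5/6, 1/6)

# Verify pi = pi * P componentwise.
pi_P = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]
```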

SLIDE 8

Pegden et al.

Theorem (1.1)

Let M = (X0, X1, . . . ) be a reversible Markov chain with stationary distribution π, and suppose the states of M have real-valued labels. If X0 ∼ π, then for any fixed k, the probability that the label of X0 is an ε-outlier from among the list of labels observed in the trajectory X0, X1, X2, . . . , Xk is at most √(2ε).

SLIDE 9

Pegden et al.

A bit of clarity...

Assume that M has some stationary distribution π, and that the starting state X0 is drawn from π (written X0 ∼ π). Then, pick some ε. If we have a labeling function ω : Ω → R, so that each state of M has a real-valued label, then the probability that X0's label ω(X0) is among the most extreme ε-fraction of the observed labels ω(X0), ω(X1), . . . , ω(Xk) is at most √(2ε).
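The outlier test amounts to a rank computation over the k + 1 labels observed along the trajectory. A sketch of the one-sided version, assuming an ε-outlier means being among the smallest ⌊ε(k+1)⌋ labels (the trajectory labels below are made up for illustration):

```python
def is_epsilon_outlier(labels, eps):
    """True if labels[0] (the label of X_0) is among the floor(eps * (k+1))
    smallest of the k+1 labels observed along the trajectory X_0, ..., X_k."""
    cutoff = int(eps * len(labels))
    rank = sum(1 for v in labels if v < labels[0])  # strict rank from below
    return rank < cutoff

# Hypothetical trajectory labels: X_0's label is the most extreme of ten.
labels = [0.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]
outlier = is_epsilon_outlier(labels, 0.1)
```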

SLIDE 10

Pegden et al.

A few conclusions arise: (1) If ω(X0) is an ε-outlier among the trajectory's labels, then by Theorem 1.1 this had probability at most √(2ε) of occurring under the assumption X0 ∼ π, so that assumption becomes suspect. (2) Given (1), consider Ω to be a set of districting plans. Then, define some scoring metric for these plans, R : Ω → R, which acts as our labeling function. Next, consider some districting plan D = X0. If R(D) is an outlier along the trajectory, then D is unlikely to be a typical draw from the stationary distribution. As such, D may be considered a "gerrymander."

SLIDE 11

Conclusion

References: (1) Assessing significance in a Markov chain without mixing, Pegden et al. (2) VRDI Intro. (3) Finite Markov Chains and Algorithmic Applications, Olle Häggström (London Mathematical Society).