SLIDE 1

Stat 451 Lecture Notes 07: Markov Chain Monte Carlo

Ryan Martin, UIC
www.math.uic.edu/~rgmartin

Based on Chapters 8–9 in Givens & Hoeting and Chapters 25–27 in Lange. Updated: April 4, 2016.

SLIDE 2

Outline

1. Introduction
2. Crash course on Markov chains
3. Motivation, revisited
4. Metropolis–Hastings algorithm
5. Gibbs sampler
6. Some MCMC diagnostics
7. Conclusion

SLIDE 3

Motivation

We know how to sample independent random variables from the target distribution f(x), at least approximately. Monte Carlo uses these simulated random variables to approximate integrals. But the random variables don't need to be independent in order to accurately approximate integrals!

Markov Chain Monte Carlo (MCMC) constructs a dependent sequence of random variables that can be used to approximate integrals just like ordinary Monte Carlo. The advantage of introducing this dependence is that very general "black-box" algorithms (and corresponding theory) are available to perform the required simulations.

SLIDE 4

Initial remarks

In some sense, MCMC is basically a black-box approach that works for almost all problems. The previous remark is dangerous: it's always a bad idea to use tools without knowing that they will work. Here we will discuss some basics of Markov chains and MCMC, but know that there are very important unanswered questions about how and when MCMC works. (MCMC is an active area of research; despite the many developments in MCMC, according to Diaconis (~2008), still "almost nothing" is known!)


SLIDE 6

Markov chains

A Markov chain is just a sequence of random variables {X1, X2, ...} with a specific type of dependence structure. In particular, a Markov chain satisfies

$$P(X_{n+1} \in B \mid X_1, \dots, X_n) = P(X_{n+1} \in B \mid X_n), \qquad (\star)$$

i.e., the future, given the past and present, depends only on the present. Independence is a trivial Markov chain. From $(\star)$ we can argue that the probabilistic properties of the chain are completely determined by:

- the initial distribution for X0, and
- the transition distribution, i.e., the distribution of X_{n+1}, given X_n. (Assume the Markov chain is homogeneous, so that the transition distribution does not depend on n.)

SLIDE 7

Example – simple random walk

Let $U_1, U_2, \dots \overset{\text{iid}}{\sim} \mathrm{Unif}(\{-1, 1\})$. Set $X_0 = 0$ and let

$$X_n = \sum_{i=1}^n U_i = X_{n-1} + U_n.$$

The initial distribution is $P\{X_0 = 0\} = 1$. The transition distribution is determined by

$$X_n = \begin{cases} X_{n-1} - 1 & \text{with prob } 1/2, \\ X_{n-1} + 1 & \text{with prob } 1/2. \end{cases}$$

While very simple, the random walk is an important example in probability, having connections to advanced things like Brownian motion.
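As a quick illustration, here is a minimal R sketch (my own, not from the course materials) that simulates and plots one sample path of this walk; the step count n = 1000 is arbitrary:

```r
# Simulate one path of the simple random walk X_n = X_{n-1} + U_n
set.seed(42)
n <- 1000
U <- sample(c(-1, 1), n, replace = TRUE)  # iid Unif({-1, 1}) steps
X <- c(0, cumsum(U))                      # X_0 = 0
plot(0:n, X, type = "l", xlab = "n", ylab = "X_n")
```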


SLIDE 8

Keywords (not mathematically precise!)

- A state A is recurrent if a chain starting in A will eventually return to A with probability 1. A recurrent state is nonnull if the expected time to return is finite. A chain is called recurrent if each state is recurrent.
- A Markov chain is irreducible if there is positive probability that a chain starting in a state A can reach any other state B.
- A Markov chain is aperiodic if, for a starting state A, there is no constraint on the times at which the chain can return to A.
- An irreducible, aperiodic Markov chain with all states being nonnull recurrent is called ergodic.

SLIDE 9

Limit theory (again, not mathematically precise!)

- f is a stationary distribution if X0 ∼ f implies Xn ∼ f for all n.
- An ergodic Markov chain has at most one stationary distribution.
- Furthermore, if the chain is ergodic, then

$$\lim_{n \to \infty} P(X_{m+n} \in B \mid X_m \in A) = \int_B f(x)\,dx, \quad \text{for all } A, B, m.$$

- Even further, if $\varphi(x)$ is integrable, then

$$\frac{1}{n} \sum_{t=1}^n \varphi(X_t) \to \int \varphi(x) f(x)\,dx, \quad \text{with prob 1}.$$

This is a version of the famous ergodic theorem. There are also central limit theorems for Markov chains, but I won't say anything about this.
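To make the ergodic theorem concrete, here is a small hedged R demo (my own, not from the notes): a Gaussian AR(1) chain X_t = ρ X_{t−1} + (standard normal innovation) has stationary distribution N(0, 1/(1 − ρ²)), so the time average of φ(x) = x² along one chain should approach 1/(1 − ρ²):

```r
# Ergodic-theorem demo: time average along one chain vs. stationary expectation
set.seed(1)
rho <- 0.8
n <- 1e5
X <- numeric(n)
for (t in 2:n) X[t] <- rho * X[t - 1] + rnorm(1)
mean(X^2)        # time average of phi(x) = x^2 along the chain
1 / (1 - rho^2)  # stationary expectation, approx 2.78
```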



SLIDE 11

Why MCMC?

In Monte Carlo applications, we want to generate random variables with distribution f. This could be difficult or impossible to do exactly. MCMC is designed to construct an ergodic Markov chain with f as its stationary distribution. Asymptotically, the chain will resemble samples from f; in particular, by the ergodic theorem, expectations with respect to f can be approximated by averages. Somewhat surprising is that it is actually quite easy to construct and simulate a suitable Markov chain, which explains why MCMC methods have become so popular. But, of course, there are practical and theoretical challenges...


SLIDE 13

Details

Let f(x) denote the target distribution pdf. Let q(x | y) denote a conditional pdf for X, given Y = y; this pdf should be easy to sample from. Given X0, the Metropolis–Hastings (MH) algorithm produces a sequence of random variables as follows:

1. Sample $X_t^\star \sim q(x \mid X_{t-1})$.
2. Compute
$$R = \min\left\{ 1,\ \frac{f(X_t^\star)}{f(X_{t-1})} \, \frac{q(X_{t-1} \mid X_t^\star)}{q(X_t^\star \mid X_{t-1})} \right\}.$$
3. Set $X_t = X_t^\star$ with probability R; otherwise, set $X_t = X_{t-1}$.

General R code to implement MH is on the course website.
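For concreteness, here is a minimal, hedged sketch of a generic MH routine (my own illustration, not the course-website code); the names `mh`, `rq`, and `dq` are my own. Note that `f` need only be known up to a normalizing constant:

```r
# Generic Metropolis–Hastings sketch.
#   f:  target density (up to proportionality)
#   rq: function(x) returning one draw from q(. | x)
#   dq: function(x, y) evaluating q(x | y)
mh <- function(x0, f, rq, dq, M, burn = 0) {
  X <- numeric(M + burn)
  X[1] <- x0
  for (t in 2:(M + burn)) {
    xs <- rq(X[t - 1])                    # proposal X*_t ~ q(. | X_{t-1})
    R <- min(1, f(xs) / f(X[t - 1]) *
                dq(X[t - 1], xs) / dq(xs, X[t - 1]))
    X[t] <- if (runif(1) <= R) xs else X[t - 1]  # accept with probability R
  }
  if (burn > 0) X[-(1:burn)] else X
}
```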


SLIDE 14

Details (cont.)

The proposal distribution is not easy to choose, and the performance of the algorithm depends on this choice. Two general strategies are:

- Take the proposal q(x | y) = q(x); i.e., at each stage of the MH algorithm, $X_t^\star$ does not depend on $X_{t-1}$.
- Take q(x | y) = q0(x − y), for a symmetric distribution with pdf q0, which amounts to a random walk proposal.

This is one aspect of the MCMC implementation that requires a lot of care from the user; deeper understanding is needed to really see how the proposal affects the performance. In my examples, I will just pick a proposal that seems to work reasonably well...

SLIDE 15

Details (cont.) (again, not mathematically precise!)

Assuming the proposal is not too bad, a number of things can be shown about the sequence {X_t : t ≥ 1}:

- the chain is ergodic, and
- the target f is the stationary distribution.

Consequently, the sequence converges to the stationary distribution and, for any integrable function φ(x), we can approximate integrals with sample averages. So, provided that we run the simulation "long enough," we should be able to get arbitrarily good approximations. This is an interesting scenario where we, as statisticians, are able to control the sample size!

SLIDE 16

Example: “cosine model”

Consider the problem from an old homework, where the likelihood function is

$$L(\theta) \propto \prod_{i=1}^n \{1 - \cos(X_i - \theta)\}, \quad -\pi \le \theta \le \pi.$$

Observed data (X1, ..., Xn) are given in the code. Assume that θ is given a Unif(−π, π) prior distribution. Use MH to sample from the posterior:

- Proposal: q(θ′ | θ) = Unif(θ′ | θ ± 0.5), i.e., uniform on (θ − 0.5, θ + 0.5).
- Burn-in: B = 5000. Sample size: M = 10000.
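Here is a hedged sketch of this run, reusing the `mh()` function above. The actual homework data are in the course code and are not reproduced here; the `X` below is simulated placeholder data:

```r
# Cosine-model posterior via MH; X is placeholder data, NOT the course data
set.seed(451)
X <- runif(20, -pi, pi)  # placeholder observations
post <- function(theta)  # posterior, up to proportionality (uniform prior)
  if (abs(theta) <= pi) prod(1 - cos(X - theta)) else 0
out <- mh(x0 = 0, f = post,
          rq = function(x) runif(1, x - 0.5, x + 0.5),
          dq = function(x, y) dunif(x, y - 0.5, y + 0.5),
          M = 10000, burn = 5000)
hist(out, freq = FALSE, xlab = expression(theta), main = "")
```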


SLIDE 17

Example: “cosine model” (cont.)

Left figure shows histogram of the MCMC sample with posterior density overlaid. Right figure shows a “trace plot” of the chain.


SLIDE 18

Example: Weibull model

Data X1, ..., Xn has Weibull likelihood

$$L(\alpha, \eta) \propto \alpha^n \eta^n \exp\Big\{ \alpha \sum_{i=1}^n \log X_i - \eta \sum_{i=1}^n X_i^\alpha \Big\}.$$

Prior: $\pi(\alpha, \eta) \propto e^{-\alpha} \eta^{b-1} e^{-c\eta}$, for some (b, c). The posterior density is proportional to

$$\alpha^n \eta^{n+b-1} \exp\Big\{ \alpha \Big( \sum_{i=1}^n \log X_i - 1 \Big) - \eta \Big( c + \sum_{i=1}^n X_i^\alpha \Big) \Big\}.$$

Exponential is a special case of Weibull, when α = 1. Goal: informal Bayesian test of H0 : α = 1.

SLIDE 19

Example: Weibull model (cont.)

Data from Problem 7.11 in Ghosh et al (2006). Use MH to sample from the posterior of (α, η).

- Proposal: (α′, η′) | (α, η) ∼ Exp(α) × Exp(η).
- b = 2 and c = 1; B = 1000 and M = 10000.

Histogram shows the marginal posterior of α. Is an exponential model (α = 1) reasonable?
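Here is a hedged 2-d MH sketch for this posterior. The Ghosh et al data are not reproduced, so `x` is simulated placeholder data, and I read "Exp(α)" as an exponential proposal with mean equal to the current α (an assumption about the parameterization):

```r
# 2-d MH for the Weibull posterior; x is placeholder data, NOT Ghosh et al's
set.seed(1)
x <- rweibull(20, shape = 1.5, scale = 1)
n <- length(x); b <- 2; c0 <- 1
lpost <- function(a, e)  # log posterior, up to an additive constant
  n * log(a) + (n + b - 1) * log(e) + a * (sum(log(x)) - 1) - e * (c0 + sum(x^a))
B <- 1000; M <- 10000
th <- matrix(1, B + M, 2)
for (t in 2:(B + M)) {
  cur <- th[t - 1, ]
  prop <- c(rexp(1, 1 / cur[1]), rexp(1, 1 / cur[2]))  # means = current values
  logR <- lpost(prop[1], prop[2]) - lpost(cur[1], cur[2]) +
    sum(dexp(cur, 1 / prop, log = TRUE)) - sum(dexp(prop, 1 / cur, log = TRUE))
  th[t, ] <- if (log(runif(1)) <= logR) prop else cur
}
hist(th[-(1:B), 1], freq = FALSE, xlab = expression(alpha), main = "")
```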


SLIDE 20

Example: logistic regression

Based on Examples 1.13 and 7.11 in Robert & Casella's book. In 1986, the Challenger space shuttle exploded during take-off, the result of an "o-ring" failure. The failure may have been due to the cold temperature (31°F). Goal: analyze the relationship between temperature and o-ring failure; in particular, fit a logistic regression model.

SLIDE 21

Example: logistic regression (cont.)

Model: Y | x ∼ Ber(p(x)), where x = temperature. The failure probability, p(x), is of the form

$$p(x) = \frac{\exp(\alpha + \beta x)}{1 + \exp(\alpha + \beta x)}.$$

Using the available data, fit the logistic regression using glm:

    Coefficients:
                Estimate Std. Error z value Pr(>|z|)
    (Intercept)  15.0429     7.3786   2.039   0.0415 *
    x            -0.2322     0.1082  -2.145   0.0320 *
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Note that p(31) ≈ 0.999!!!
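For completeness, here is a hedged sketch of the glm fit; the real o-ring data are in the course code, so `temp` and `fail` below are simulated placeholders with a similar negative temperature effect:

```r
# Logistic regression sketch; placeholder data, NOT the o-ring data
set.seed(1986)
temp <- runif(23, 53, 81)                        # launch temperatures (made up)
fail <- rbinom(23, 1, plogis(15 - 0.23 * temp))  # failure indicators (made up)
fit <- glm(fail ~ temp, family = binomial)
summary(fit)$coefficients
predict(fit, newdata = data.frame(temp = 31), type = "response")  # p(31)
```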


SLIDE 22

Example: logistic regression (cont.)

Can also do a Bayesian analysis of this logistic model. Use MH to obtain samples from the posterior of (α, β). Samples can be used to approximate the posterior distribution of p(x0) for any fixed x0, e.g., x0 = 65, 31; see below. Details about the prior and proposal construction are given in the R code and a short write-up posted on the course website.

[Figure: posterior densities of p(65) (left) and p(31) (right).]


SLIDE 24

Setup

Suppose we have a multivariate target distribution f. MH can be applied to such a problem, but there are challenges in constructing a good proposal over multiple dimensions. Idea: sample one dimension at a time. Question: how to carry out the sampling so that it will approximate the target, at least in a limit? The Gibbs sampler is the right tool for the job.

SLIDE 25

Details

Suppose we have a trivariate target f(x) = f(x1, x2, x3). Suppose we can write down the set of full conditionals

$$f(x_1 \mid x_2, x_3), \quad f(x_2 \mid x_1, x_3), \quad f(x_3 \mid x_1, x_2),$$

and that these can be sampled from. The Gibbs sampler generates a sequence {X^{(t)} : t ≥ 0} by iteratively sampling from the conditionals:

$$X_1^{(t)} \sim f(x_1 \mid X_2^{(t-1)}, X_3^{(t-1)}),$$
$$X_2^{(t)} \sim f(x_2 \mid X_1^{(t)}, X_3^{(t-1)}),$$
$$X_3^{(t)} \sim f(x_3 \mid X_1^{(t)}, X_2^{(t)}).$$

SLIDE 26

Details (cont.)

The Gibbs sequence forms a Markov chain. In fact, the Gibbs sampler is a special case of MH! The connection to MH is made by viewing Gibbs as a sequence of updates of one component of X at a time. The acceptance probability for this form of MH update is exactly 1, which explains why the Gibbs sampler has no accept/reject step. Since Gibbs is a special kind of MH, the convergence theory for MH applies to Gibbs as well.

SLIDE 27

Example: bivariate normal

A super-simple Gibbs example: the bivariate normal. Suppose X = (X1, X2) is standard bivariate normal with correlation ρ. The full conditionals are easy to write down here. Gibbs steps:

$$X_1^{(t)} \sim N(\rho X_2^{(t-1)}, 1 - \rho^2),$$
$$X_2^{(t)} \sim N(\rho X_1^{(t)}, 1 - \rho^2).$$

Not as efficient as direct sampling, but works fine.
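A minimal R sketch of these two steps (my own illustration; ρ = 0.9 is arbitrary):

```r
# Gibbs sampler for a standard bivariate normal with correlation rho
set.seed(1)
rho <- 0.9; M <- 10000
X <- matrix(0, M, 2)
for (t in 2:M) {
  X[t, 1] <- rnorm(1, rho * X[t - 1, 2], sqrt(1 - rho^2))
  X[t, 2] <- rnorm(1, rho * X[t, 1],     sqrt(1 - rho^2))
}
cor(X[, 1], X[, 2])  # should be close to rho
```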


SLIDE 28

Example: many-normal-means

Model: $X_i \overset{\text{ind}}{\sim} N(\theta_i, 1)$, i = 1, ..., n. Hierarchical prior distribution:

$$(\theta_1, \dots, \theta_n) \mid \psi \overset{\text{iid}}{\sim} N(0, \psi^{-1}), \qquad \psi \sim \mathrm{Gamma}(a, b).$$

It takes some work, but it can be shown (the easiest argument is based on standard conjugate priors) that the full conditionals are

$$\theta_i \mid (X_i, \psi) \overset{\text{ind}}{\sim} N\Big( \frac{X_i}{1 + \psi},\ \frac{1}{1 + \psi} \Big), \quad i = 1, \dots, n,$$

$$\psi \mid (\theta, X) \sim \mathrm{Gamma}\Big( a + \frac{n}{2},\ b + \frac{1}{2} \sum_{i=1}^n \theta_i^2 \Big).$$

So the Gibbs sampler is pretty easy to implement...
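Here is a hedged sketch of that Gibbs sampler on simulated data; the settings mirror the simulation on the next slide (n = 10, θ = (1, ..., 1), a = b = 1):

```r
# Gibbs sampler for the many-normal-means model
set.seed(1)
n <- 10
X <- rnorm(n, mean = rep(1, n), sd = 1)  # data with true theta = (1, ..., 1)
a <- 1; b <- 1; M <- 5000
theta <- matrix(0, M, n); psi <- numeric(M); psi[1] <- 1
for (t in 2:M) {
  theta[t, ] <- rnorm(n, X / (1 + psi[t - 1]), sqrt(1 / (1 + psi[t - 1])))
  psi[t] <- rgamma(1, a + n / 2, b + sum(theta[t, ]^2) / 2)
}
mean(rowSums(theta^2))  # Monte Carlo estimate of E(||theta||^2 | X)
```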


SLIDE 29

Example: many-normal-means (cont.)

Suppose the goal is to estimate $\|\theta\|^2 = \sum_{i=1}^n \theta_i^2$. In general, the MLE $\|X\|^2$ is lousy. However, the Bayes estimator, $E(\|\theta\|^2 \mid X)$, is better and can be evaluated by running the Gibbs sampler. Can use the Rao–Blackwellized estimator of $E(\theta_i^2 \mid X)$ to reduce the variance. Simulation study to compare the Bayes estimator with the MLE:

- n = 10, θ = (1, 1, ..., 1); 1000 reps, 5000 Monte Carlo samples, a = b = 1.

      mle.mse  bayes.mse
     180.1721   32.93027

SLIDE 30

Example: capture–recapture

Example 26.3.3 in Lange. Consider a lake that contains N fish, where N is unknown. Capture–recapture study:

- On n occasions, fish are caught, marked, and returned.
- At occasion i = 1, ..., n, record C_i = number of fish caught at time i and R_i = number of "recaptures" at time i.
- C_i − R_i is the number of new fish caught at time i. Set $U_i = \sum_{j=1}^i (C_j - R_j)$.
- The model assumes independent binomial sampling.

SLIDE 31

Example: capture–recapture (cont.)

Introduce binomial success probabilities (ω1, ..., ωn). The likelihood for (N, ω) is

$$L(N, \omega) = \prod_{i=1}^n \binom{U_{i-1}}{R_i} \omega_i^{R_i} (1 - \omega_i)^{U_{i-1} - R_i} \binom{N - U_{i-1}}{C_i - R_i} \omega_i^{C_i - R_i} (1 - \omega_i)^{N - U_{i-1} - C_i + R_i}$$

$$= \prod_{i=1}^n \binom{U_{i-1}}{R_i} \binom{N - U_{i-1}}{C_i - R_i} \omega_i^{C_i} (1 - \omega_i)^{N - C_i}$$

$$\propto \frac{N!}{(N - U_n)!} \prod_{i=1}^n \binom{U_{i-1}}{R_i} \omega_i^{C_i} (1 - \omega_i)^{N - C_i}.$$

Priors: N ∼ Pois(m) and $\omega_i \overset{\text{ind}}{\sim} \mathrm{Beta}(a, b)$.

SLIDE 32

Example: capture–recapture (cont.)

The posterior distribution for (N, ω) is proportional to

$$\frac{N!}{(N - U_n)!} \, \frac{m^N}{N!} \prod_{i=1}^n \binom{U_{i-1}}{R_i} \omega_i^{C_i + a - 1} (1 - \omega_i)^{N - C_i + b - 1}.$$

To run a Gibbs sampler, we need the full conditionals.

- The distribution of (ω1, ..., ωn), given N and the data, is clearly
$$\omega_i \mid (N, \text{data}) \overset{\text{ind}}{\sim} \mathrm{Beta}(a + C_i, b + N - C_i), \quad i = 1, \dots, n.$$
- The distribution of N, given ω and the data, is
$$N \mid (\omega, \text{data}) \sim U_n + \mathrm{Pois}\Big( m \prod_{i=1}^n (1 - \omega_i) \Big).$$

Now the Gibbs sampler is easy to run...
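Here is a hedged sketch of that sampler; the counts `C` and `R` below are made-up placeholders, not Lange's data, and the prior settings m, a, b are arbitrary:

```r
# Gibbs sampler for the capture–recapture posterior; placeholder data
set.seed(1)
C <- c(30, 22, 29, 26, 31)     # catches (made up)
R <- c(0, 8, 11, 14, 20)       # recaptures (made up)
n <- length(C); U <- cumsum(C - R)
m <- 200; a <- 1; b <- 1; M <- 10000
N <- numeric(M); N[1] <- U[n]  # start at the number of distinct fish seen
omega <- matrix(0, M, n)
for (t in 2:M) {
  omega[t, ] <- rbeta(n, a + C, b + N[t - 1] - C)
  N[t] <- U[n] + rpois(1, m * prod(1 - omega[t, ]))
}
mean(N[-(1:1000)])             # posterior mean of the population size
```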


SLIDE 33

Example: probit regression

Model: $Y_i \overset{\text{ind}}{\sim} \mathrm{Ber}(\Phi(x_i^\top \beta))$, i = 1, ..., n. Suppose β has a normal prior. It is not directly obvious how to implement Gibbs to get a sample from the posterior distribution of β. Recall, from the EM notes, that this model can be simplified by introducing some "missing data." The conditional distribution of the missing data, given the observed data and β, makes up one part of the full conditionals. The other part of the full conditionals is simple since the model for the complete data is, by construction, nice.

SLIDE 34

Example: probit regression (cont.)

Missing data: Z1, ..., Zn, where $Z_i \sim N(x_i^\top \beta, 1)$ and $Y_i = I_{Z_i > 0}$, i = 1, ..., n. Full conditionals:

- The distribution of β, given (Y, Z), only depends on Z and is easy because the normal prior for β is conjugate.
- The distribution of Z, given (Y, β), is a truncated normal...

Though I've not given the precise details, the steps for constructing a Gibbs sampler are not too difficult; the only potential difficulty is simulating from a truncated normal when the truncation point is extreme, but remember we have talked about extreme normal tail probabilities before. See Section 8.3.2 in Ghosh et al (2006) for details.
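Here is a hedged sketch of this data-augmentation Gibbs sampler, assuming (my choice, not from the slides) a N(0, 100·I) prior on β; the data are simulated placeholders:

```r
# Data-augmentation Gibbs for probit regression; placeholder data
set.seed(1)
n <- 200
x <- cbind(1, rnorm(n))             # design matrix with intercept
beta_true <- c(-0.5, 1)
y <- rbinom(n, 1, pnorm(drop(x %*% beta_true)))
B0inv <- diag(2) / 100              # prior precision: beta ~ N(0, 100 I)
M <- 5000
beta <- matrix(0, M, 2)
V <- solve(B0inv + crossprod(x))    # conditional covariance of beta given Z
for (t in 2:M) {
  mu <- drop(x %*% beta[t - 1, ])
  # Z_i | (y_i, beta) ~ N(mu_i, 1), truncated to (0, Inf) if y_i = 1,
  # to (-Inf, 0) if y_i = 0; sample by inverting the truncated normal cdf
  u <- runif(n)
  p0 <- pnorm(0, mu)
  z <- qnorm(ifelse(y == 1, p0 + u * (1 - p0), u * p0), mu)
  beta[t, ] <- drop(V %*% crossprod(x, z)) + drop(t(chol(V)) %*% rnorm(2))
}
colMeans(beta[-(1:1000), ])         # compare with beta_true
```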


SLIDE 35

Example: Dirichlet process mixture

In Bayesian nonparametrics, the Dirichlet process mixture (DPM) model is probably the most widely used. It is a flexible model for density estimation: a normal mixture density with unspecified component means (and variances) that also doesn't specify the number of components. The main challenge with using mixture models is choosing how many components to use; the DPM selects the number of components automatically. Interestingly, despite being a "nonparametric" model, the computations are not too hard, just a Gibbs sampler. The simplest algorithm is in Escobar & West (JASA 1995); a nice slice sampler is proposed in Kalli et al (Stat Comp 2011).

SLIDE 36

Example: Dirichlet process mixture (cont.)

Using the slice sampler from Kalli et al to fit the same normal mixture model to the galaxy data from the homework. R code for this is on my research page.

[Figure: posterior mean density and kernel estimate for the galaxy data (left); posterior distribution of the number of mixture components (right).]


SLIDE 38

Diagnostic plots

Sample path plot, or trace plot:

- Can reveal any residual dependence after burn-in.
- The idea is that a sample path of iid samples should show no trend; so, if there is minimal trend in our sample path plot, then we can be comfortable treating the samples as independent.

Autocorrelation plot:

- Plot the sample correlation of {(X_t, X_{t+r}) : t = 1, 2, ...} as a function of the "lag" r.
- We want to see the autocorrelation plot decay rapidly, suggesting that the dependence along the chain is not too strong.

If these plots indicate that the chain has not yet converged to stationarity, then you can run the chain longer or make some other modifications, e.g., transformations or "thinning."
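For example, assuming a vector of posterior draws `out` (e.g., from the cosine-model sketch earlier), the two plots are one line each:

```r
# Basic MCMC diagnostics for a vector of draws `out`
plot(out, type = "l", xlab = "Iteration", ylab = expression(theta))  # trace plot
acf(out, main = "")                                # autocorrelation plot
```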


SLIDE 39

Other considerations

The practical/theoretical rate of convergence can depend on the parametrization; see homework. There is no agreement in the stat community about how many chains to run, how long the burn-in should be, etc. Charles Geyer (Univ Minnesota) strongly supports running only one long chain; check out his "rants." Gelman & Rubin suggest running several shorter chains with different starting points; the textbook gives their diagnostic test.


SLIDE 41

Remarks

MCMC methods are powerful because they give fairly general procedures able to solve a variety of important problems. There are "black-box" software implementations:

- the mcmc package in R will do random walk MH;
- in SAS, PROC MCMC does similar things;
- BUGS ("Bayesian inference Using Gibbs Sampling").

However, it's a bad idea to blindly use these things without fully understanding what they're doing and whether or not they will actually work in your problem. It is also important to look at convergence diagnostics before the simulation results are used for inference.

SLIDE 42

Remarks (cont.)

Our focus here was on relatively simple MCMC methods. Don't think of MH, Gibbs, etc., as entirely separate methods; they can be combined. For example, if one of the full conditionals is difficult to sample from, one might consider an MH step within Gibbs to sample from this conditional (one could also use accept–reject for this). The book by Robert & Casella has some details about more advanced MCMC methods, including various combinations of these standard techniques.