SLIDE 1

Introduction to Bayesian Statistics

Lecture 4: Multiparameter models (I)

Rung-Ching Tsai

Department of Mathematics National Taiwan Normal University

March 18, 2015

SLIDE 2

Noninformative prior distributions

  • Proper and improper prior distributions
  • Unnormalized densities
  • Uniform prior distributions on different scales
  • Some examples
  • Probability parameter θ ∈ (0, 1)
    • One possibility: p(θ) = 1 [proper]
    • Another possibility: p(logit θ) ∝ 1, which corresponds to p(θ) ∝ θ⁻¹(1 − θ)⁻¹ [improper] (see the check below)
  • Location parameter θ unconstrained
    • One possibility: p(θ) ∝ 1 [improper] ⇒ p(θ|y) ≈ normal(θ | ȳ, σ²/n)
  • Scale parameter σ > 0
    • One possibility: p(σ) ∝ 1 [improper]
    • Another possibility: p(log σ²) ∝ 1, which corresponds to p(σ²) ∝ σ⁻² [improper]
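The logit correspondence above is just a change of variables. A minimal symbolic check of the Jacobian (sympy is our choice here, not part of the slides):

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
phi = sp.log(theta / (1 - theta))      # phi = logit(theta)

# If p(phi) is flat, the induced density is p(theta) ∝ |d phi / d theta|
jacobian = sp.simplify(sp.diff(phi, theta))
print(jacobian)   # equivalent to 1/(theta*(1 - theta)), i.e. theta^-1 (1 - theta)^-1
```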

SLIDE 3

Noninformative prior distributions: Jeffreys' principle

  • Under a transformation φ = h(θ), a density transforms as
    p(φ) = p(θ) |dθ/dφ| = p(θ) |h′(θ)|⁻¹
  • Jeffreys' principle leads to a noninformative prior density
    p(θ) ∝ [J(θ)]^(1/2), where J(θ) is the Fisher information for θ:
    J(θ) = E[(d log p(y|θ)/dθ)² | θ] = −E[d² log p(y|θ)/dθ² | θ]
  • Jeffreys' prior model is invariant to parameterization: evaluating J(φ) at θ = h⁻¹(φ),
    J(φ) = −E[d² log p(y|φ)/dφ²]
         = −E[d² log p(y|θ = h⁻¹(φ))/dθ²] · |dθ/dφ|²
         = J(θ) |dθ/dφ|²;
    thus, J(φ)^(1/2) = J(θ)^(1/2) |dθ/dφ|.

SLIDE 4

Examples: Various noninformative prior distributions

  • y|θ ∼ binomial(n, θ), p(y|θ) = (n choose y) θ^y (1 − θ)^(n−y)
  • Jeffreys' prior density p(θ) ∝ [J(θ)]^(1/2) (verified symbolically below):
    log p(y|θ) = constant + y log θ + (n − y) log(1 − θ)
    J(θ) = −E[d² log p(y|θ)/dθ² | θ] = n/(θ(1 − θ))
    Jeffreys' prior ⇒ p(θ) ∝ θ^(−1/2)(1 − θ)^(−1/2)
  • Three alternative priors
    • Jeffreys' prior: θ ∼ Beta(1/2, 1/2)
    • uniform prior: θ ∼ Beta(1, 1), i.e., p(θ) = 1
    • improper prior: θ ∼ Beta(0, 0), i.e., p(logit θ) ∝ 1
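A quick symbolic check of the Fisher-information computation above; the use of sympy and the substitution E[y|θ] = nθ are ours:

```python
import sympy as sp

theta, n, y = sp.symbols('theta n y', positive=True)

# Binomial log-likelihood up to a constant
loglik = y * sp.log(theta) + (n - y) * sp.log(1 - theta)

# Fisher information: J(theta) = -E[d^2 log p(y|theta)/d theta^2], using E[y|theta] = n*theta
d2 = sp.diff(loglik, theta, 2)
J = sp.simplify(-d2.subs(y, n * theta))
print(J)   # expected: n/(theta*(1 - theta)), so p(theta) ∝ theta^(-1/2)*(1 - theta)^(-1/2)
```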

SLIDE 5

From single-parameter to multiparameter models

  • The reality of applied statistics: there are always several (maybe many) unknown parameters!
  • BUT the interest usually lies in only a few of these (the parameters of interest), while the others are regarded as nuisance parameters: we have no interest in making inferences about them, but they are required to construct a realistic model.
  • At this point the simple conceptual framework of the Bayesian approach reveals its principal advantage over other forms of inference.

SLIDE 6

Bayesian approach to multiparameter models

  • The Bayesian approach is clear: obtain the joint posterior distribution of all unknowns, then integrate over the nuisance parameters to leave the marginal posterior distribution for the parameters of interest.
  • Alternatively, using simulation: draw samples from the entire joint posterior distribution (even this may be computationally difficult), look at the parameters of interest, and ignore the rest.

SLIDE 7

Parameter of interest and nuisance parameter

  • Suppose the model parameter θ has two parts: θ = (θ₁, θ₂)
    • Parameter of interest: θ₁
    • Nuisance parameter: θ₂
  • For example, y|µ, σ² ∼ normal(µ, σ²)
    • Unknown: µ and σ²
    • Parameter of interest (usually, not always): µ
    • Nuisance parameter: σ²
  • Approaches to obtain p(θ₁|y)
    • Averaging over nuisance parameters
    • Factoring the joint posterior
    • A strategy for computation: conditional simulation via the Gibbs sampler

SLIDE 8

Posterior distribution of θ = (θ1, θ2)

  • Prior of θ: p(θ) = p(θ₁, θ₂)
  • Likelihood of θ: p(y|θ) = p(y|θ₁, θ₂)
  • Posterior of θ = (θ₁, θ₂) given y: p(θ₁, θ₂|y) ∝ p(θ₁, θ₂) p(y|θ₁, θ₂)

SLIDE 9

Approaches to obtain marginal posterior of θ1, p(θ1|y)

  • Joint posterior of θ₁ and θ₂: p(θ₁, θ₂|y) ∝ p(θ₁, θ₂) p(y|θ₁, θ₂)
  • Approaches to obtain the marginal posterior density p(θ₁|y)
    • By averaging or integrating over the nuisance parameter θ₂:
      p(θ₁|y) = ∫ p(θ₁, θ₂|y) dθ₂.
    • By factoring the joint posterior:
      p(θ₁|y) = ∫ p(θ₁, θ₂|y) dθ₂ = ∫ p(θ₁|θ₂, y) p(θ₂|y) dθ₂.    (1)
  • p(θ₁|y) is a mixture of the conditional posterior distributions given the nuisance parameter θ₂, p(θ₁|θ₂, y).
    • The weighting function p(θ₂|y) combines evidence from the data and the prior.
    • θ₂ can be categorical (discrete) and may take only a few possible values, representing, for example, different sub-models.
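To make the discrete case concrete, here is a small numerical sketch of equation (1) as a finite mixture; the weights and conditional distributions are made up purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical: theta2 indexes two sub-models with posterior weights p(theta2|y)
weights = np.array([0.3, 0.7])
conditionals = [stats.norm(0.0, 1.0), stats.norm(2.0, 0.5)]   # p(theta1|theta2, y)

grid = np.linspace(-4.0, 5.0, 400)
# Mixture density p(theta1|y) = sum over theta2 of p(theta1|theta2, y) * p(theta2|y)
marginal = sum(w * d.pdf(grid) for w, d in zip(weights, conditionals))
```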

SLIDE 10

A strategy for computation: Simulations instead of integration

We rarely evaluate integral (1) explicitly, but it suggests an important strategy for constructing and computing with multiparameter models, using simulations.

  • Successive conditional simulations
    • Draw θ₂ from its marginal posterior distribution, p(θ₂|y).
    • Draw θ₁ from its conditional posterior distribution given the drawn value of θ₂, p(θ₁|θ₂, y).
  • All-others conditional simulations (Gibbs sampler; see the sketch below)
    • Draw θ₁^(t+1) from its conditional posterior distribution given the previously drawn value θ₂^(t), i.e., from p(θ₁|θ₂^(t), y).
    • Draw θ₂^(t+1) from its conditional posterior distribution given the drawn value θ₁^(t+1), i.e., from p(θ₂|θ₁^(t+1), y).
  • Iterating the procedure ultimately generates samples from the joint posterior distribution p(θ₁, θ₂|y).
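A minimal Gibbs-sampler sketch for the normal model treated on the following slides (µ and σ² unknown, prior p(µ, σ²) ∝ σ⁻²), where both full conditionals are available in closed form; the data are synthetic and numpy is our choice:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=50)      # synthetic data, for illustration only
n, ybar = len(y), y.mean()

mu, sigma2 = ybar, y.var(ddof=1)       # starting values
draws = []
for t in range(2000):
    # mu | sigma2, y ~ normal(ybar, sigma2/n)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # sigma2 | mu, y ~ scaled Inv-chi2(n, v) with v = mean((y - mu)^2);
    # drawn as n*v divided by a chi-square(n) variate
    v = np.mean((y - mu) ** 2)
    sigma2 = n * v / rng.chisquare(n)
    draws.append((mu, sigma2))
```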

SLIDE 11

Multiparameter model: the normal model (I)

  • y₁, …, yₙ iid ∼ normal(µ, σ²), both µ and σ² unknown; use the Bayesian approach to estimate µ.
  • choose a prior for (µ, σ²), take noninformative priors:
    p(µ, σ²) = p(µ) p(σ²) ∝ 1 · (σ²)⁻¹ = σ⁻²
    • prior independence of location and scale
    • p(µ) ∝ 1: noninformative or uniform, but improper, prior
    • p(log σ²) ∝ 1 ⇒ p(σ²) ∝ (σ²)⁻¹: noninformative or uniform on log σ²
  • likelihood:
    p(y|µ, σ²) = ∏_{i=1}^n (1/(√(2π) σ)) exp(−(yᵢ − µ)²/(2σ²))
               ∝ σ⁻ⁿ exp(−(1/(2σ²)) Σ_{i=1}^n (yᵢ − µ)²)
SLIDE 12

Joint posterior distribution, p(µ, σ2|y)

  • y₁, …, yₙ iid ∼ normal(µ, σ²)
  • prior of (µ, σ²): p(µ, σ²) = p(µ) p(σ²) ∝ 1 · (σ²)⁻¹ = σ⁻²
  • find the joint posterior distribution of (µ, σ²):
    p(µ, σ²|y) ∝ p(µ, σ²) p(y|µ, σ²)
               ∝ σ⁻ⁿ⁻² exp(−(1/(2σ²)) Σ_{i=1}^n (yᵢ − µ)²)
               = σ⁻ⁿ⁻² exp(−(1/(2σ²)) [Σ_{i=1}^n (yᵢ − ȳ)² + n(ȳ − µ)²])
               = σ⁻ⁿ⁻² exp(−(1/(2σ²)) [(n − 1)s² + n(ȳ − µ)²]),
    where s² = (1/(n − 1)) Σ_{i=1}^n (yᵢ − ȳ)² is the sample variance. The sufficient statistics are ȳ and s².
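The key algebraic step is the decomposition Σ(yᵢ − µ)² = (n − 1)s² + n(ȳ − µ)²; a quick numerical sanity check with arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=10)
mu = 0.3                                  # arbitrary value of mu

lhs = np.sum((y - mu) ** 2)
rhs = (len(y) - 1) * y.var(ddof=1) + len(y) * (y.mean() - mu) ** 2
assert np.isclose(lhs, rhs)               # the decomposition holds exactly
```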

SLIDE 13

Conditional posterior distribution, p(µ|σ2, y)

  • p(µ, σ²|y) = p(µ|σ², y) p(σ²|y)
  • Using the single-parameter case (µ with σ² known and noninformative prior p(µ) ∝ 1), we have µ|σ², y ∼ normal(ȳ, σ²/n).

SLIDE 14

Marginal posterior distribution, p(σ2|y)

  • p(µ, σ²|y) = p(µ|σ², y) p(σ²|y)
  • p(σ²|y) requires averaging the joint posterior
    p(µ, σ²|y) ∝ σ⁻ⁿ⁻² exp(−(1/(2σ²)) [(n − 1)s² + n(ȳ − µ)²])
    over µ, that is, evaluating the simple normal integral (checked numerically below)
    ∫ exp(−(1/(2σ²)) n(ȳ − µ)²) dµ = √(2πσ²/n);
    thus,
    p(σ²|y) ∝ (σ²)^(−(n+1)/2) exp(−(n − 1)s²/(2σ²)),
    i.e., σ²|y ∼ Inv-χ²(n − 1, s²), which is a scaled inverse-χ² distribution.
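A small numerical check of the normal integral above, with arbitrary values of n and σ² (the value does not depend on ȳ, which is set to 0 here for convenience):

```python
import numpy as np
from scipy import integrate

n, ybar, sigma2 = 30, 0.0, 4.0            # hypothetical values
val, _ = integrate.quad(
    lambda mu: np.exp(-n * (ybar - mu) ** 2 / (2 * sigma2)),
    -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi * sigma2 / n))   # the two numbers should agree
```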

SLIDE 15

Analytic form of marginal posterior distribution of µ

  • µ is typically the estimand of interest, so the ultimate objective of the Bayesian analysis is the marginal posterior distribution of µ. This can be obtained by integrating σ² out of the joint posterior distribution.
  • Easily done by simulation: first draw σ² from p(σ²|y), then draw µ from p(µ|σ², y), as in the sketch below.
  • The posterior distribution of µ, p(µ|y), can be thought of as a mixture of normal distributions, mixed over the scaled inverse-χ² distribution for the variance: a rare case where analytic results are available.
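A minimal sketch of that two-step simulation (synthetic data; the scaled inverse-χ² draw is obtained by drawing a χ² variate with n − 1 degrees of freedom and inverting):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=30)                 # synthetic data
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

# sigma2 | y ~ scaled Inv-chi2(n-1, s2)
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=5000)
# mu | sigma2, y ~ normal(ybar, sigma2/n), one draw per sigma2 draw
mu = rng.normal(ybar, np.sqrt(sigma2 / n))
```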

SLIDE 16

Performing the integration

  • We start by integrating the joint posterior density over σ²:
    p(µ|y) = ∫₀^∞ p(µ, σ²|y) dσ²
  • With the substitution z = A/(2σ²), where A = (n − 1)s² + n(µ − ȳ)², the result is an unnormalized gamma integral:
    p(µ|y) ∝ A^(−n/2) ∫₀^∞ z^((n−2)/2) exp(−z) dz
           ∝ [(n − 1)s² + n(µ − ȳ)²]^(−n/2)
           ∝ [1 + n(µ − ȳ)²/((n − 1)s²)]^(−n/2),
    i.e., µ|y ∼ t_{n−1}(ȳ, s²/n).
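The analytic result can be checked against simulation: standardized posterior draws of µ should follow a t distribution with n − 1 degrees of freedom. A self-contained sketch with synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=30)                 # synthetic data
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=20000)
mu = rng.normal(ybar, np.sqrt(sigma2 / n))

# (mu - ybar)/(s/sqrt(n)) should be t with n-1 degrees of freedom
z = (mu - ybar) / np.sqrt(s2 / n)
print(stats.kstest(z, stats.t(df=n - 1).cdf))     # p-value should typically be large
```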

SLIDE 17

Parallel between Bayesian & Frequentist results

  • σ²: Bayes (under the noninformative prior on log σ², p(σ²) ∝ (σ²)⁻¹) versus frequentist:
    (n − 1)s²/σ² | y ∼ χ²_{n−1}  vs.  (n − 1)s²/σ² | µ, σ² ∼ χ²_{n−1}
  • µ: Bayes (under the noninformative prior on (µ, log σ²), p(µ, σ²) ∝ (σ²)⁻¹) versus frequentist:
    (µ − ȳ)/(s/√n) | y ∼ t_{n−1}  vs.  (ȳ − µ)/(s/√n) | µ, σ² ∼ t_{n−1},
    where the ratio (ȳ − µ)/(s/√n) is called a pivotal quantity: its sampling distribution does not depend on the nuisance parameter σ².
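A quick simulation of the frequentist side of this statement; the values of µ and σ are arbitrary, which is exactly what pivotality means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu_true, sigma_true, n = 1.0, 3.0, 15             # arbitrary: the pivot's law is free of them

samples = rng.normal(mu_true, sigma_true, size=(20000, n))
t_stat = (samples.mean(axis=1) - mu_true) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
print(stats.kstest(t_stat, stats.t(df=n - 1).cdf))   # consistent with t_{n-1}
```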
