

slide-1
SLIDE 1

Challenges on the use of Imprecise Prior
for Imprecise Inference
on Poisson Sampling Models

Chel Hee Lee1, Miķelis Bickis2

1Clinical Research Support Unit, Community Health and Epidemiology, College of Medicine, University of Saskatchewan

2Department of Mathematics and Statistics, University of Saskatchewan

9th WPMSIIP, Durham, England, 4 September 2016

Chel Hee Lee, Miķelis Bickis (USASK) · Imprecise Inference · 2016-SEP-04 · 1 / 37

slide-2
SLIDE 2

1. Motivation
2. Imprecise Inferential Framework
3. Illustration: Scenario I · Scenario II · Scenario III-1 · Scenario III-2
4. Discussion
5. References

slide-3
SLIDE 3

Motivation

General Framework

(Table of patient identification numbers (P.I.N.) and observed counts; the table did not survive extraction.)

Sampling Model: Yi

Prior Distribution: θ

Hyperparameters: ξ, with candidate values ξ1, ξ2, ξ3

Lower Bound / Upper Bound

slide-12
SLIDE 12

Motivation

Robust Bayesian Analysis (Berger et al., 1994, pp. 24–25)

A prior distribution should be easy to elicit and interpret, easy to handle computationally, able to reasonably reflect uncertainty, extensible to higher dimensions, and adaptable to incorporate constraints.


slide-13
SLIDE 13

Motivation

HOW?


slide-14
SLIDE 14

Imprecise Inferential Framework

Imprecise Inferential Framework

Canonical Representation

Consider a family of probability measures Pθ whose density with respect to µ is

dPθ(y) = exp{θ · t(y) − A(θ)} dµ(y),

where t : Rm → Rk is a measurable function of y and the cumulant transform

A(θ) = ln ∫ exp{θ · t(y)} dµ(y)

serves to normalize the measure Pθ.
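As a concrete check, the Poisson model fits this canonical form with t(y) = y, θ = log µ, A(θ) = e^θ, and base measure dµ(y) = 1/y! on the non-negative integers. A minimal sketch (standard library only; the numerical values are illustrative, not from the slides):

```python
import math

def poisson_pmf_canonical(y, theta):
    """Poisson pmf in canonical exponential-family form:
    t(y) = y, A(theta) = exp(theta), base measure 1/y!."""
    return math.exp(theta * y - math.exp(theta)) / math.factorial(y)

# Agrees with the familiar parametrization mu^y e^{-mu} / y! when theta = log(mu).
mu = 3.0
theta = math.log(mu)
for y in range(10):
    classic = mu ** y * math.exp(-mu) / math.factorial(y)
    assert abs(poisson_pmf_canonical(y, theta) - classic) < 1e-12
```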


slide-15
SLIDE 15

Imprecise Inferential Framework

Imprecise Inferential Framework

Conjugate Prior Formulation

We consider the following family of prior measures for Pθ with respect to Lebesgue measure:

dπξ2,ξ1,ξ0(θ) ∝ exp{−ξ2θ² + ξ1θ − ξ0A(θ) − M(ξ2, ξ1, ξ0)} dθ,

where ξ = (ξ2, ξ1, ξ0) are hyperparameters and

M(ξ2, ξ1, ξ0) = ln ∫_{−∞}^{+∞} exp{−ξ2θ² + ξ1θ − ξ0A(θ)} dθ < +∞

is the cumulant transform of ξ producing the densities πξ(θ).
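For the Poisson case A(θ) = e^θ, and M(ξ) together with prior moments can be computed by one-dimensional quadrature. A sketch (the grid endpoints and step are arbitrary choices, not from the slides):

```python
import numpy as np

def prior_moments(xi2, xi1, xi0, lo=-30.0, hi=10.0, n=200001):
    """Normalizer M(xi2, xi1, xi0) and prior mean E[exp(theta)] for the
    conjugate family with A(theta) = exp(theta) (Poisson sampling)."""
    theta = np.linspace(lo, hi, n)
    logw = -xi2 * theta**2 + xi1 * theta - xi0 * np.exp(theta)
    w = np.exp(logw - logw.max())                     # stabilized density
    dt = np.diff(theta)
    Z = np.sum((w[1:] + w[:-1]) / 2 * dt)             # trapezoid rule
    M = np.log(Z) + logw.max()                        # log normalizer
    f = np.exp(theta) * w
    mean_mu = np.sum((f[1:] + f[:-1]) / 2 * dt) / Z   # prior E[Y]
    return M, mean_mu

# Sanity check against the gamma case (xi2 = 0): the induced prior on
# mu = exp(theta) is Gamma(xi1, xi0), so M = ln(Gamma(xi1)/xi0**xi1)
# and E[Y] = xi1/xi0.
M, m = prior_moments(0.0, 3.0, 2.0)
assert abs(M - np.log(2.0 / 8.0)) < 1e-6   # Gamma(3)/2^3 = 0.25
assert abs(m - 1.5) < 1e-6
```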


slide-16
SLIDE 16

Imprecise Inferential Framework

Illustration (based on Poisson samples)

Consider the problem of parameter estimation:

(Scenario I) when the prior is conjugate to the likelihood: a log-gamma prior distribution

(Scenario II) when the prior is not conjugate to the likelihood: a normal prior distribution

(Scenario III) under the generalized linear model setting: (III-1) with only an intercept; (III-2) incorporating a single predictor (with an intercept)


slide-17
SLIDE 17

Illustration Scenario I

Scenario I

Using Log-Gamma Priors

If µ ∼ Gamma(α, β) and θ = log(µ), then

πα,β(θ) ∝ exp{αθ − βe^θ}.

For a Poisson observation y we see

p(θ|y) ∝ exp{yθ − e^θ} · exp{αθ − βe^θ} = exp{(α + y)θ − (β + 1)e^θ},

which has the form p(θ|y) ∝ exp{−ξ2θ² + ξ1θ − ξ0e^θ} with hyperparameters

ξ2 = 0, ξ1 = α + y, ξ0 = β + 1. (1)
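Under this conjugacy, Bayesian updating is pure bookkeeping: each Poisson observation y adds y to ξ1 (= α) and 1 to ξ0 (= β). A minimal sketch with hypothetical data:

```python
def update_loggamma(alpha, beta, observations):
    """Conjugate update for the log-gamma/Poisson pair: the posterior is
    again log-gamma, with alpha + sum(y) and beta + n."""
    return alpha + sum(observations), beta + len(observations)

alpha, beta = 1, 1                       # assumed starting prior
alpha, beta = update_loggamma(alpha, beta, [2, 0, 1])
assert (alpha, beta) == (4, 4)
posterior_mean = alpha / beta            # E[mu | y] under Gamma(alpha, beta)
assert posterior_mean == 1.0
```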


slide-18
SLIDE 18

Illustration Scenario I

Set Basic Analytical Frame

(Figure: 10 × 10 grid of hyperparameter values, ξ1 (sum) against ξ0 (number of observations).)

slide-19
SLIDE 19

Illustration Scenario I

Assume Poisson Mean Parameter (µ = 1)

(Figure: the same hyperparameter grid.)

slide-20
SLIDE 20

Illustration Scenario I

Before Seeing Data

(Figure: the same hyperparameter grid.)

slide-21
SLIDE 21

Illustration Scenario I

After Seeing One Observation

(Figure: the grid with the first observed count, 1, marked.)

slide-22
SLIDE 22

Illustration Scenario I

One More Observation

(Figure: the grid with observed counts 1, 1 marked.)

slide-23
SLIDE 23

Illustration Scenario I

See What Data Tell Us

(Figure: the grid with observed counts 1, 1, 1 marked.)

slide-24
SLIDE 24

Illustration Scenario I

Continue to Watch!

(Figure: the grid with observed counts 1, 1, 1 marked.)

slide-25
SLIDE 25

Illustration Scenario I

Learning Process from Ten Observations

(Figure: the grid after ten observations; visible counts 1, 1, 1, 2, 1, 2.)

slide-26
SLIDE 26

Illustration Scenario I

Contour Levels of Prior Expectation E[Y]

(Figure: contour levels 0.05, 0.3, 0.5, 0.7, 1, 1.5, 2.5, 5, 10, 21 of E[Y] over the ξ1-ξ0 grid.)

slide-27
SLIDE 27

Illustration Scenario I

Computational Efficiency

(Figure: the same contours with (min, max) expectation pairs along the observed path: (0.5, 2), (0.667, 1.5), (0.75, 1.333), (0.6, 1), (0.5, 0.8), (0.714, 1), (0.75, 1), (0.667, 0.875), (0.8, 1), (0.727, 0.9).)

slide-28
SLIDE 28

Illustration Scenario I

Elicitation Under Almost Complete Prior Ignorance

(Figure: the same grid and contour display.)

slide-29
SLIDE 29

Illustration Scenario I

Elicitation – Improper Priors

  • Uniform
  • Jeffreys'

(Figure: the same grid and contour display.)

slide-30
SLIDE 30

Illustration Scenario I

Meaningful Interpretation for Communication

ξ1: number of cases; ξ0: number of patients that you have seen.

(Figure: the same grid and contour display.)

slide-31
SLIDE 31

Illustration Scenario I

All Priors Are Informative In Some Way

(Figure: prior classes C0, C1, C2, C3 on the ξ1-ξ0 grid; ξ1 read as the number of cases, ξ0 as the number of patients that you have seen.)

slide-32
SLIDE 32

Illustration Scenario I

Learning Curve of Imprecise Prior Capturing True

(Figure: upper and lower bounds of E(Y) from C0 and from C1 narrowing around the sample mean as the observations 1, 1, 1, 0, 0, 2, 1, 0, 2, 0, 1, 2, 0, 2, 1, 1, 2, 1, 0, 0 accrue.)
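Bounds of this kind can be reproduced exactly for a box-shaped prior class, because the posterior expectation (ξ1 + Σy)/(ξ0 + n) is monotone in each hyperparameter. A sketch using the observation sequence from the slide and an assumed class ξ1, ξ0 ∈ [1, 2] (the actual corners of C0 and C1 are not recoverable from the slide):

```python
# Observation sequence shown on the slide.
obs = [1, 1, 1, 0, 0, 2, 1, 0, 2, 0, 1, 2, 0, 2, 1, 1, 2, 1, 0, 0]

# Hypothetical prior class: xi1 in [1, 2], xi0 in [1, 2].
XI1 = (1.0, 2.0)
XI0 = (1.0, 2.0)

def bounds_after(k):
    """Lower/upper posterior mean of mu after the first k observations.
    (xi1 + s)/(xi0 + k) is increasing in xi1 and decreasing in xi0."""
    s = sum(obs[:k])
    return (XI1[0] + s) / (XI0[1] + k), (XI1[1] + s) / (XI0[0] + k)

lo, hi = bounds_after(len(obs))
assert lo <= sum(obs) / len(obs) <= hi        # interval covers the sample mean
widths = [b - a for a, b in (bounds_after(k) for k in range(1, 21))]
assert widths[-1] < widths[0]                 # bounds tighten with data
```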

slide-33
SLIDE 33

Illustration Scenario I

Learning Curve of Imprecise Prior Not Capturing True

(Figure: upper and lower bounds of E(Y) from C2 and from C3 against the sample mean, for the same observation sequence.)

slide-34
SLIDE 34

Illustration Scenario I

Some Priors Are More Informative Than Others?

(Figure: classes C0-C3 on the grid, annotated "You are surprised! (by the unexpected)", "You are experienced!", "Balanced (?)", and "Highly Informative (?)".)

slide-35
SLIDE 35

Illustration Scenario II

Scenario II

Using Normal Priors

If µ ∼ LN(log(ν), τ²) and θ = log(µ), then θ ∼ N(log(ν), τ²) and

πν,τ(θ) ∝ exp{−θ²/(2τ²) + (log(ν)/τ²)θ}. (2)

We see the following:

p(θ|y) ∝ exp{yθ − e^θ} · exp{−θ²/(2τ²) + (log(ν)/τ²)θ}, (3)

which has the form p(θ|y) ∝ exp{−ξ2θ² + ξ1θ − ξ0e^θ} with hyperparameters

ξ2 = 1/(2τ²), ξ1 = log(ν)/τ² + y, ξ0 = 1. (4)
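Because the normal prior is not conjugate, the posterior mean has no closed form, but the posterior remains one-dimensional in θ, so simple quadrature suffices. A sketch with assumed values τ = 1, ν = 1, and a single observation y = 2 (with ν entering through log(ν), matching θ ∼ N(log ν, τ²)):

```python
import numpy as np

tau, nu, y = 1.0, 1.0, 2                 # assumed prior scale/median and datum
xi2 = 1 / (2 * tau**2)
xi1 = np.log(nu) / tau**2 + y
xi0 = 1

theta = np.linspace(-20.0, 8.0, 100001)
# Unnormalized posterior exp(-xi2 theta^2 + xi1 theta - xi0 e^theta).
w = np.exp(-xi2 * theta**2 + xi1 * theta - xi0 * np.exp(theta))

def trap(f):
    # trapezoid rule on the common theta grid
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(theta))

post_mean_mu = trap(np.exp(theta) * w) / trap(w)   # E[mu | y]
assert 0.5 < post_mean_mu < 3.0
```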


slide-36
SLIDE 36

Illustration Scenario II

Level Sets of Prior Expectation E[Y]

(Figure: level sets of E[Y] at 0.25, 0.5, 0.75, …, 4 in the (ξ0, ξ2, ξ1) space.)

slide-37
SLIDE 37

Illustration Scenario II

Level Sets of Prior Expectation E[Y]

(Figure: level sets in the ξ1-ξ2 plane after n = 0, 1, 2, and 5 observations.)

slide-38
SLIDE 38

Illustration Scenario III-1

Level Sets of Prior Expectation E[θ] (Intercept Model)

(Figure: level sets of E[θ] at −4, −3, …, 3, 4 in the (ξ0, ξ2, ξ1) space.)

slide-39
SLIDE 39

Illustration Scenario III-1

Level Sets of Prior Expectation E[θ] (Intercept Model)

(Figure: level sets in the ξ1-ξ2 plane after n = 0, 1, 2, and 5 observations.)

slide-40
SLIDE 40

Illustration Scenario III-1

Soft-Linearity

(Figure: surface of E[θ] over ξ1 and ξ2 for n = 0.)

slide-41
SLIDE 41

Illustration Scenario III-1

Soft-Linearity

(Figure: surface of E[θ] over ξ1 and ξ2 for n = 1.)

slide-42
SLIDE 42

Illustration Scenario III-1

Soft-Linearity

(Figure: surface of E[θ] over ξ1 and ξ2 for n = 2.)

slide-43
SLIDE 43

Illustration Scenario III-1

Soft-Linearity

(Figure: surface of E[θ] over ξ1 and ξ2 for n = 5.)
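The soft-linearity seen in these surfaces can be checked numerically: with ξ0 fixed, E[θ] is strictly increasing and nearly linear in ξ1. A sketch with assumed values ξ2 = 0.5, ξ0 = 1 (not taken from the slides):

```python
import numpy as np

def post_mean_theta(xi2, xi1, xi0):
    """E[theta] for a density proportional to exp(-xi2 t^2 + xi1 t - xi0 e^t)."""
    t = np.linspace(-25.0, 8.0, 100001)
    w = np.exp(-xi2 * t**2 + xi1 * t - xi0 * np.exp(t))
    dt = np.diff(t)
    z = np.sum((w[1:] + w[:-1]) / 2 * dt)                  # normalizer
    m = np.sum(((t * w)[1:] + (t * w)[:-1]) / 2 * dt)      # first moment
    return m / z

xi1_grid = [0.0, 0.5, 1.0, 1.5, 2.0]
means = [post_mean_theta(0.5, x, 1.0) for x in xi1_grid]
assert all(b > a for a, b in zip(means, means[1:]))        # monotone in xi1
second_diffs = [means[i + 1] - 2 * means[i] + means[i - 1] for i in range(1, 4)]
assert max(abs(d) for d in second_diffs) < 0.1             # "soft" linearity
```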

slide-44
SLIDE 44

Illustration Scenario III-1

Focusing Feature

(Figure, n = 0: envelope of CDFs of θ; ∆[E] = 5, area = 5, sup at (0.2, 1, 0), inf at (0.2, −1, 0).)

slide-45
SLIDE 45

Illustration Scenario III-1

Focusing Feature

(Figure, n = 1: ∆[E] = 1.6255, area = 1.6255, sup at (0.2, 2, 1), inf at (0.2, 0, 1).)

slide-46
SLIDE 46

Illustration Scenario III-1

Focusing Feature

(Figure, n = 2: ∆[E] = 0.9554, area = 0.9554, sup at (0.2, 3, 2), inf at (0.2, 1, 2).)

slide-47
SLIDE 47

Illustration Scenario III-1

Focusing Feature

(Figure, n = 5: ∆[E] = 0.7264, area = 0.7264, sup at (1, 4, 5), inf at (0.2, 2, 5).)

slide-48
SLIDE 48

Illustration Scenario III-2

Graphical Demonstration

Simulation Design

E(β|y, X) = ∫ β f(y|X, β) π(β) dβ / ∫ f(y|X, β) π(β) dβ,

where X ∼ MVN2(·, ·) (the mean vector and covariance did not survive extraction) and π(β) ∼ MVN2(b, B), with

b = (µ0, µ1)ᵀ, B = D R D, D = diag(σ0, σ1), R = [[1, ρ], [ρ, 1]].

The Metropolis–Hastings algorithm, Laplace approximation, and importance sampling are used for approximation.
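Of the three approximation methods listed, importance sampling with the prior as proposal is the simplest to sketch. Everything below (data sizes, true coefficients, prior values) is assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Poisson GLM data: intercept plus one predictor, log link.
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, -1.0])
y = rng.poisson(np.exp(X @ beta_true))

# MVN2 prior pi(beta) = N(b, B) with assumed b, sigma0, sigma1, rho.
b = np.zeros(2)
s0, s1, rho = 1.0, 1.0, 0.3
B = np.diag([s0, s1]) @ np.array([[1, rho], [rho, 1]]) @ np.diag([s0, s1])

def log_lik(beta):
    eta = X @ beta
    return float(np.sum(y * eta - np.exp(eta)))  # Poisson log-likelihood, up to a constant

# Importance sampling: draw from the prior, weight by the likelihood.
draws = rng.multivariate_normal(b, B, size=20000)
logw = np.array([log_lik(beta) for beta in draws])
w = np.exp(logw - logw.max())                     # stabilized weights
w /= w.sum()
post_mean = w @ draws                             # approximates E(beta | y, X)
assert np.all(np.abs(post_mean - beta_true) < 0.6)
```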


slide-49
SLIDE 49

Illustration Scenario III-2

Focusing Feature

(Figure: scatter of E(β0) against E(β1), n = 5.)

slide-50
SLIDE 50

Illustration Scenario III-2

Focusing Feature

(Figure: scatter of E(β0) against E(β1), n = 10.)

slide-51
SLIDE 51

Illustration Scenario III-2

Focusing Feature

(Figure: scatter of E(β0) against E(β1), n = 20.)

slide-52
SLIDE 52

Illustration Scenario III-2

Focusing Feature

(Figure: scatter of E(β0) against E(β1), n = 50.)

slide-53
SLIDE 53

Illustration Scenario III-2

Agreement Between Intentional Units

(Figure: β0 against β1, n = 5.)

slide-54
SLIDE 54

Illustration Scenario III-2

Agreement Between Intentional Units

(Figure: β0 against β1, n = 10.)

slide-55
SLIDE 55

Illustration Scenario III-2

Agreement Between Intentional Units

(Figure: β0 against β1, n = 20.)

slide-56
SLIDE 56

Illustration Scenario III-2

Agreement Between Intentional Units

(Figure: β0 against β1, n = 50.)

slide-57
SLIDE 57

Illustration Scenario III-2

Work in Progress

Extension to other sampling models:
Binomial distribution, Geometric distribution, Exponential distribution, Normal distribution (with known variance), Normal distribution (with known mean)

Incorporation of the Kullback–Leibler divergence measure
Imprecise inference for comparing two groups
Decision problems in practice
Software development


slide-58
SLIDE 58

Discussion

Discussion

Advantages of this approach: extension to the generalized linear model; easy implementation in software.


slide-59
SLIDE 59

Discussion

Study Design

Imprecision in data (systematic): incomplete data, misclassified data, missing data, partially informative data.
Model imprecision: misspecified model, partially correct model.


slide-60
SLIDE 60

References

References I

Berger, J., Moreno, E., Pericchi, L., Bayarri, M., Bernardo, J., Cano, J., De la Horra, J., Martín, J., Ríos-Insúa, D., Betrò, B., Dasgupta, A., Gustafson, P., Wasserman, L., Kadane, J., Srinivasan, C., Lavine, M., O'Hagan, A., Polasek, W., Robert, C., Goutis, C., Ruggeri, F., Salinetti, G., and Sivaganesan, S. (1994). An overview of robust Bayesian analysis. Test, 3(1):5–124.

Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist., 7(2):269–281.

Irony, T. and Singpurwalla, N. (1997). Noninformative priors do not exist: a discussion with José M. Bernardo. Journal of Statistical Planning and Inference, 65:159–189.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London.
