SLIDE 1

How many X’s do you know?

An overdispersion model with covariates

Chun-Yip Yau and Li Song
December 10, 2007

SLIDE 2

Overview

Outline of the presentation

◮ Some background of the problem.
◮ The model setup.
  1. Original overdispersion model (T. Zheng et al., JASA 2006).
  2. Our modification of the model.
  3. Renormalization.
◮ Fitting algorithm.
◮ Model selection.
◮ Some results.
◮ Future developments and conclusions.

SLIDE 11

Background

Some background

◮ The interviewees were asked questions such as "How many Chun-Yips do you know?"
◮ The dataset consists of
  1. Count data for 32 groups of people.
  2. 1375 respondents.
  3. Personal information on the respondents (24 covariates).
◮ Difficulties of the problem.

SLIDE 17

Model specification

The original model

◮ Parameter specification:
  1. $p_{ij}$ = probability that person $i$ knows person $j$,
  2. $a_i = \sum_{j=1}^{N} p_{ij}$ = gregariousness parameter,
  3. $b_k$ = prevalence parameter of group $k$,
  4. $g_{ik}$ = individual $i$'s relative propensity to know a person in group $k$.
◮ Observations and covariates:
  1. $y_{ik}$ = the number of people the $i$th person knows in group $k$,
  2. $x_{ip}$ = the $p$th covariate of the $i$th person (categorical).

SLIDE 25

Model specification: Original model specification

Poisson model with overdispersion

◮ $y_{ik} \sim \text{Poisson}(\lambda_{ik})$
◮ $\lambda_{ik} = a_i b_k g_{ik} = e^{\alpha_i + \beta_k + \gamma_{ik}}$
◮ $g_{ik} = e^{\gamma_{ik}} \sim \Gamma\!\left(\frac{1}{\omega_k - 1}, \frac{1}{\omega_k - 1}\right)$
◮ Integrating out the $\gamma$'s, $y_{ik} \sim \text{Neg-binomial}(e^{\alpha_i + \beta_k}, \omega_k)$
◮ Prior assumptions:
  1. $\alpha_i \sim N(\mu_\alpha, \sigma^2_\alpha)$
  2. $\beta_k \sim N(\mu_\beta, \sigma^2_\beta)$
  3. $p(1/\omega_k) \propto 1$
◮ Renormalization during fitting.
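
To make the Poisson-gamma construction concrete, here is a minimal simulation sketch (Python/numpy; the sizes and parameter values are hypothetical, chosen only to exercise the code): a mean-one gamma multiplier is drawn per cell and fed into a Poisson draw, which marginally gives the negative binomial with mean $e^{\alpha_i + \beta_k}$ and overdispersion $\omega_k$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(alpha, beta, omega, rng):
    """Draw y_ik from the overdispersed Poisson model:
    lambda_ik = exp(alpha_i + beta_k) * g_ik, where the multiplier
    g_ik = exp(gamma_ik) ~ Gamma(shape 1/(omega_k-1), rate 1/(omega_k-1))
    has mean 1 and variance omega_k - 1."""
    mu = np.exp(alpha[:, None] + beta[None, :])   # n x K matrix of means
    shape = 1.0 / (omega - 1.0)                   # per-group gamma shape
    g = rng.gamma(shape[None, :], 1.0 / shape[None, :], size=mu.shape)  # numpy scale = 1/rate
    return rng.poisson(mu * g)

# Hypothetical dimensions matching the data: 1375 respondents, 32 groups.
alpha = rng.normal(0.0, 1.0, size=1375)    # gregariousness
beta = rng.normal(-5.0, 1.0, size=32)      # prevalence
omega = rng.uniform(1.5, 5.0, size=32)     # overdispersion (> 1)
y = simulate_counts(alpha, beta, omega, rng)
```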

SLIDE 34

Model specification: Model modification

Putting covariates in the original model

◮ Put covariates in the individual-level parameters. We also want to make our model as "flexible" as possible. Meanwhile, we want to explain some of the overdispersion.
◮ $\alpha_i = C_\alpha + \sum_{j=1}^{p} \phi_j x_{ij} + \tilde{\alpha}_i$
◮ $\gamma_{ik} = C_\psi + \sum_{l=1}^{q_k} \psi_{kl} x_{il} + \tilde{\gamma}_{ik}$
◮ $e^{\tilde{\gamma}_{ik}} \sim \Gamma\!\left(\frac{1}{\omega_k - 1}, \frac{1}{\omega_k - 1}\right)$
◮ Integrating out the $\tilde{\gamma}$'s,
  $y_{ik} \sim \text{Neg-binomial}\!\left(\exp\!\Big(C_\alpha + \sum_{j=1}^{p} \phi_j x_{ij} + \tilde{\alpha}_i + \beta_k + C_\psi + \sum_{l=1}^{q_k} \psi_{kl} x_{il}\Big),\ \omega_k\right)$
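
As a small illustration of the modified mean, a sketch under simplifying assumptions (Python/numpy; X is a hypothetical 0/1 indicator matrix for the categorical covariates, and ψ is stored as a dense K x p matrix with zeros in the entries for covariates that do not enter group k's part):

```python
import numpy as np

def nb_mean(X, phi, alpha_tilde, beta, psi, C_alpha, C_psi):
    """Mean of y_ik under the modified model:
    exp(C_alpha + sum_j phi_j x_ij + alpha_tilde_i
        + beta_k + C_psi + sum_l psi_kl x_il)."""
    eta_i = C_alpha + X @ phi + alpha_tilde                       # individual part, shape (n,)
    eta_ik = eta_i[:, None] + beta[None, :] + C_psi + X @ psi.T   # add group part, shape (n, K)
    return np.exp(eta_ik)
```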

SLIDE 39

Model specification: Model modification

Prior Assumptions

◮ $y_{ik} \sim \text{Neg-binomial}\!\left(\exp\!\Big(C_\alpha + \sum_{j=1}^{p} \phi_j x_{ij} + \tilde{\alpha}_i + \beta_k + C_\psi + \sum_{l=1}^{q_k} \psi_{kl} x_{il}\Big),\ \omega_k\right)$
◮ Prior assumptions:
  1. $\tilde{\alpha}_i \sim N(\mu_\alpha, \sigma^2_\alpha)$
  2. $\beta_k \sim N(\mu_\beta, \sigma^2_\beta)$
  3. $p(1/\omega_k) \propto 1$
  4. $p(\psi_{lk}) \propto 1$
  5. $\phi_j \sim N(\mu_\phi, \sigma^2_\phi)$
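
For evaluating this likelihood numerically, a sketch (not the authors' code): the (mean $\mu$, overdispersion $\omega$) negative binomial, with $\text{Var}(y) = \omega\mu$, maps to scipy's (n, p) parametrization via $n = \mu/(\omega - 1)$ and $p = 1/\omega$.

```python
import numpy as np
from scipy.stats import nbinom

def negbin_loglik(y, mu, omega):
    """Log-likelihood of counts y under Neg-binomial(mean mu,
    overdispersion omega): n = mu / (omega - 1), p = 1 / omega
    gives E(y) = mu and Var(y) = omega * mu."""
    n = mu / (omega - 1.0)
    p = 1.0 / omega
    return nbinom.logpmf(y, n, p).sum()
```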

SLIDE 46

Model specification: Model modification

Normalization step

◮ Renormalizing:
  1. Renormalize between the $\tilde{\alpha}_i$'s and the $\beta_k$'s (see T. Zheng et al., JASA 2006).
  2. Renormalize the $\phi_j$'s using the constant $C_\alpha$, making $E\!\left[e^{C_\alpha + \sum_{j=1}^{p} \phi_j x_{ij}}\right] = 1$.
  3. Renormalize the $\psi$'s using the constant $C_\psi$, making $E\!\left[e^{C_\psi + \sum_{l=1}^{q_k} \psi_{kl} x_{il}}\right] = 1$.

SLIDE 50

Fitting Algorithm

Gibbs-Metropolis Algorithm

◮ Fitting procedure:
  1. For each $i$, update $\alpha_i$ using Metropolis with jumping distribution $\alpha_i^* \sim N(\alpha_i^{(t-1)}, \text{jump}^2_{\alpha_i})$.
  2. For each $k$, update $\beta_k$ using Metropolis with jumping distribution $\beta_k^* \sim N(\beta_k^{(t-1)}, \text{jump}^2_{\beta_k})$.
  3. For each $j$, update $\phi_j$ using Metropolis with jumping distribution $\phi_j^* \sim N(\phi_j^{(t-1)}, \text{jump}^2_{\phi_j})$.
  4. For each $l, k$, update $\psi_{lk}$ using Metropolis with jumping distribution $\psi_{lk}^* \sim N(\psi_{lk}^{(t-1)}, \text{jump}^2_{\psi_{lk}})$.
  5. Use Gibbs to update $\mu_\alpha, \sigma^2_\alpha, \mu_\beta, \sigma^2_\beta, \mu_\phi, \sigma^2_\phi$ respectively.
  6. For each $k$, update $\omega_k$ using Metropolis with jumping distribution $\omega_k^* \sim N(\omega_k^{(t-1)}, \text{jump}^2_{\omega_k})$.
  7. Renormalize at the end of each iteration.
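
Steps 1-4 and 6 are all the same scalar random-walk Metropolis move; a generic sketch (Python/numpy; the log_post callable and the jump scale stand in for the conditional log-posteriors and tuned jumping scales above):

```python
import numpy as np

def metropolis_step(x, log_post, jump_sd, rng):
    """One random-walk Metropolis update: propose x* ~ N(x, jump_sd^2)
    and accept with probability min(1, exp(log_post(x*) - log_post(x)))."""
    x_star = rng.normal(x, jump_sd)
    if np.log(rng.uniform()) < log_post(x_star) - log_post(x):
        return x_star
    return x

# e.g. step 1, with hypothetical names for the conditional and jump scale:
# alpha[i] = metropolis_step(alpha[i], lambda a: log_cond_alpha(a, i), jump_alpha[i], rng)
```
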
SLIDE 58

Model selection: Full model

Full model

◮ Put all the covariates in without any pre-selection.
◮ Advantages:
  1. Easy to start with.
  2. More appropriate for backward selection.
◮ Disadvantages:
  1. The likelihood may go wild during the fitting.
  2. It takes a long time to converge.
  3. It is more complicated to renormalize.
  4. The fitting results exhibit more variability.

SLIDE 67

Model selection: Pre-selected model

Pre-selected model

◮ Choose covariates to put into the model using empirical methods.
◮ Advantages:
  1. Speeds up the program.
  2. In most cases, it is efficient.
◮ Disadvantages:
  1. Important covariates may be excluded by the pre-selection.

SLIDE 73

Model selection: Pre-selected model

Pre-selection

◮ Fit a model without any covariates; compute individual degrees and residuals:
  1. $\tilde{y}_i = \sum_{k=1}^{K} y_{ik}$
  2. $r_{ik} = \sqrt{y_{ik}} - \sqrt{\hat{a}_i \hat{b}_k}$
◮ Choose the φ's:
  1. Regress the $\tilde{y}_i$'s (or the $\log \tilde{y}_i$'s) against all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the φ part of the model.
◮ Choose the ψ's:
  1. For each $k$, regress the $r_{ik}$'s against all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the ψ part of the model.
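
A rough sketch of the φ-part pre-selection (Python/statsmodels), using greedy backward elimination by AIC as a stand-in for a full stepwise AIC search (in R, stepAIC would be the usual tool; Y, X, and names here are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def backward_aic(y, X, names):
    """Greedy backward elimination on an OLS fit: repeatedly drop the
    covariate whose removal lowers the AIC most, until no drop helps."""
    keep = list(range(X.shape[1]))

    def aic_of(cols):
        exog = sm.add_constant(X[:, cols]) if cols else np.ones((len(y), 1))
        return sm.OLS(y, exog).fit().aic

    current = aic_of(keep)
    while keep:
        best, drop = min((aic_of([c for c in keep if c != d]), d) for d in keep)
        if best >= current:
            break
        current, keep = best, [c for c in keep if c != drop]
    return [names[c] for c in keep]

# phi part: regress (log) degrees on all covariates and keep the survivors.
# y_deg = np.log(Y.sum(axis=1) + 1)              # Y is the n x K count matrix
# phi_covariates = backward_aic(y_deg, X, names)
# psi part: repeat for each group k, with the residuals r[:, k] as the response.
```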

SLIDE 82

Model selection: Model comparison

Model selection using DIC

◮ The usual AIC and BIC criteria are not easily applied in this case: the "number of parameters" is not so clearly defined.
◮ The DIC criterion (Deviance Information Criterion; A. Gelman et al., 2007):
  1. An analog of AIC:
     $\text{DIC} = \text{mean(deviance)} + 2 p_D$
  2. $\text{deviance} = -2 \log(\text{likelihood})$; $p_D = \text{var(deviance)}/2$
◮ The DIC is suggestive rather than definitive.
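
With the deviance saved at each posterior draw, the slide's criterion is a one-liner; a sketch (dev is a hypothetical array of deviance values across MCMC iterations):

```python
import numpy as np

def dic(dev):
    """DIC = mean(deviance) + 2 * pD with pD = var(deviance) / 2,
    as defined on the slide; lower DIC indicates a preferred model."""
    dev = np.asarray(dev, dtype=float)
    p_d = dev.var() / 2.0          # effective number of parameters
    return dev.mean() + 2.0 * p_d
```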

SLIDE 87

Results

Overdispersion of pre-selected model

[Figure: estimated overdispersion by group, in two panels. Left panel (overdispersion roughly 1 to 2.4): the name groups Michael, Christina, Christopher, Jacqueline, James, Jennifer, Anthony, Kimberly, Robert, Stephanie, David, Nicole, plus Adoption, Twin, Suicide, and Auto Accident. Right panel (overdispersion roughly 2 to 12): Amer. Indians, New Birth, Widow(er), Dialysis, Postal Worker, Pilot, Jaycees, Diabetic, Business, Gun Dealer, HIV positive, AIDS, Homeless, Rape, Prison, Homicide.]

SLIDE 88

Results

Reduction of overdispersion

[Figure: scatterplot of overdispersion with covariates against overdispersion without covariates (both axes roughly 2 to 10), comparing the full model with all covariates, the reduced model with pre-selected covariates, and the reduced model with post-selected covariates.]

SLIDE 89

Results

DIC of different models

Model                                      Relative DIC   Ranking
All covariates model                           2072.8        5
Post-selected model from all covariates         716.05       3
Pre-selected model                                           1
Post-selected model from pre-selected           422.46       2
Original model                                  958.67       4

SLIDE 94

Further developments and conclusions

Further developments

◮ Future work:
  1. Interpretation of the covariate estimates. (To appear in the report.)
  2. Alternative methods: generalized linear mixed models. The likelihood is hard to deal with.
  3. Apply the model to the censored data.
  4. Other ways to renormalize (e.g., set $\sum_{j=1}^{p} \phi_j = 0$, or set $\sum_{i=1}^{n} \tilde{\alpha}_i = 0$).
◮ Conclusions:
  1. The pre-selection we use is a very good method for choosing the right covariates.
  2. DIC can be a good criterion for model comparison. It also shows that the pre-selected model is the best among the simple models we considered.
  3. Putting covariates into the model in the right way can reduce the overdispersion.

SLIDE 103

Thank you

We have not succeeded in answering all our problems. The answers we have found only serve to raise a whole set of new questions. In some ways we feel we are as confused as ever, but we believe we are confused on a higher level and about more important things. —Bernt Oksendal