How many X’s do you know?
An overdispersion model with covariates
Chun-Yip Yau and Li Song
December 10, 2007
How many X’s do you know? Overview
Outline of the presentation
◮ Some background of the problem.
◮ The model set-up:
  1. the original overdispersion model (T. Zheng et al., JASA 2006);
  2. our modification of the model;
  3. renormalization.
◮ The fitting algorithm.
◮ Model selection.
◮ Some results.
◮ Future developments and conclusions.
How many X’s do you know? Background
Some background
◮ Interviewees were asked questions such as "How many Chun-Yips do you know?"
◮ The dataset consists of:
  1. count data for 32 groups of people;
  2. 1,375 respondents;
  3. personal information on each respondent (24 covariates).
◮ Difficulties of the problem.
How many X’s do you know? Model specification
The original model
◮ Parameter specification:
  1. p_ij = probability that person i knows person j;
  2. a_i = Σ_{j=1}^N p_ij = gregariousness parameter;
  3. b_k = prevalence parameter of group k;
  4. g_ik = individual i's relative propensity to know a person in group k.
◮ Observations and covariates:
  1. y_ik = the number of people the ith person knows in group k;
  2. x_ip = the pth covariate of the ith person (categorical).
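For concreteness, the data can be pictured as a count matrix plus a covariate matrix. A minimal sketch using the dimensions quoted earlier (1,375 respondents, 32 groups, 24 covariates); the array names are illustrative, not from the talk:

```python
import numpy as np

N, K, P = 1375, 32, 24           # respondents, name groups, covariates
y = np.zeros((N, K), dtype=int)  # y[i, k] = people respondent i knows in group k
x = np.zeros((N, P), dtype=int)  # x[i, p] = pth (categorical) covariate of respondent i
```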
How many X’s do you know? Model specification Original model specification
Poisson model with overdispersion
◮ y_ik ~ Poisson(λ_ik)
◮ λ_ik = a_i b_k g_ik = e^{α_i + β_k + γ_ik}
◮ g_ik = e^{γ_ik} ~ Gamma(1/(ω_k − 1), 1/(ω_k − 1))
◮ Integrating out the γ's: y_ik ~ Neg-binomial(e^{α_i + β_k}, ω_k)
◮ Prior assumptions:
  1. α_i ~ N(μ_α, σ²_α)
  2. β_k ~ N(μ_β, σ²_β)
  3. p(1/ω_k) ∝ 1
◮ Renormalization during fitting.
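The Poisson-gamma mixture can be checked by simulation. A minimal NumPy sketch for a single cell (i, k); the values of α_i, β_k, and ω_k are toy assumptions, not estimates from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha_i, beta_k = -1.0, 0.5    # assumed log-gregariousness and log-prevalence
omega_k = 2.0                  # assumed overdispersion parameter (> 1)
mu = np.exp(alpha_i + beta_k)  # rate before multiplying in the gamma noise

# e^{γ_ik} ~ Gamma(1/(ω_k − 1), 1/(ω_k − 1)): mean 1, variance ω_k − 1
shape = 1.0 / (omega_k - 1.0)
g = rng.gamma(shape, scale=1.0 / shape, size=200_000)

# Poisson counts mixed over the gamma noise are marginally negative binomial
y = rng.poisson(mu * g)

print(y.mean())            # close to mu
print(y.var() / y.mean())  # > 1, i.e. overdispersed relative to a Poisson
```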
How many X’s do you know? Model specification Model modification
Putting covariates in the original model
◮ Put covariates into the individual-level parameters. We want the model to be as flexible as possible, while explaining some of the overdispersion.
◮ α_i = C_α + Σ_{j=1}^p φ_j x_ij + α̃_i
◮ γ_ik = C_ψ + Σ_{l=1}^{q_k} ψ_kl x_il + γ̃_ik
◮ e^{γ̃_ik} ~ Gamma(1/(ω_k − 1), 1/(ω_k − 1))
◮ Integrating out the γ̃'s:
  y_ik ~ Neg-binomial(e^{C_α + Σ_{j=1}^p φ_j x_ij + α̃_i + β_k + C_ψ + Σ_{l=1}^{q_k} ψ_kl x_il}, ω_k)
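In code, the negative-binomial mean is just an exponentiated linear predictor. A sketch of a hypothetical helper (the names and the (N, K) layout are assumptions; here every covariate gets a ψ coefficient in every group, with zeros standing in for the covariates dropped from group k's subset of size q_k):

```python
import numpy as np

def negbin_mean(x, alpha_tilde, beta, phi, psi, C_alpha, C_psi):
    """Mean of y_ik after integrating out the gamma noise.

    x: (N, P) covariates; alpha_tilde: (N,); beta: (K,);
    phi: (P,); psi: (K, P) group-specific coefficients.
    """
    eta = (C_alpha + x @ phi + alpha_tilde)[:, None]  # (N, 1) individual part
    eta = eta + C_psi + beta[None, :] + x @ psi.T     # (N, K) plus group part
    return np.exp(eta)
```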
How many X’s do you know? Model specification Model modification
Prior Assumptions
◮ y_ik ~ Neg-binomial(e^{C_α + Σ_{j=1}^p φ_j x_ij + α̃_i + β_k + C_ψ + Σ_{l=1}^{q_k} ψ_kl x_il}, ω_k)
◮ Prior assumptions:
  1. α̃_i ~ N(μ_α, σ²_α)
  2. β_k ~ N(μ_β, σ²_β)
  3. p(1/ω_k) ∝ 1
  4. p(ψ_kl) ∝ 1
  5. φ_j ~ N(μ_φ, σ²_φ)
How many X’s do you know? Model specification Model modification
Normalization step
◮ Renormalization:
  1. Renormalize between the α̃_i's and the β_k's (see T. Zheng et al., JASA 2006).
  2. Renormalize the φ_j's using the constant C_α so that E[e^{C_α + Σ_{j=1}^p φ_j x_ij}] = 1.
  3. Renormalize the ψ_kl's using the constant C_ψ so that E[e^{C_ψ + Σ_{l=1}^{q_k} ψ_kl x_il}] = 1.
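One way to impose the E[·] = 1 constraints is to set the constant from the empirical mean over respondents; whether the talk uses exactly this estimate is an assumption. A minimal sketch for the φ part (the ψ part is analogous, per group):

```python
import numpy as np

def renormalize_constant(x, phi):
    """Choose C so that mean_i exp(C + x_i @ phi) = 1 in the sample."""
    C = -np.log(np.mean(np.exp(x @ phi)))
    assert np.isclose(np.mean(np.exp(C + x @ phi)), 1.0)
    return C
```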
How many X’s do you know? Fitting Algorithm
Gibbs-Metropolis Algorithm
◮ Fitting procedure:
  1. For each i, update α_i by Metropolis with jumping distribution α_i* ~ N(α_i^(t−1), jump²_{α_i}).
  2. For each k, update β_k by Metropolis with jumping distribution β_k* ~ N(β_k^(t−1), jump²_{β_k}).
  3. For each j, update φ_j by Metropolis with jumping distribution φ_j* ~ N(φ_j^(t−1), jump²_{φ_j}).
  4. For each l, k, update ψ_kl by Metropolis with jumping distribution ψ_kl* ~ N(ψ_kl^(t−1), jump²_{ψ_kl}).
  5. Update μ_α, σ²_α, μ_β, σ²_β, μ_φ, and σ²_φ by Gibbs steps.
  6. For each k, update ω_k by Metropolis with jumping distribution ω_k* ~ N(ω_k^(t−1), jump²_{ω_k}).
  7. Renormalize at the end of each iteration.
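Steps 1-4 and 6 share the same scalar random-walk Metropolis move. A minimal sketch; the log_post closure (the log posterior as a function of the one scalar being updated, all other parameters held fixed) and the jump_sd tuning are left abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_step(theta, log_post, jump_sd):
    """One random-walk Metropolis update for a single scalar parameter."""
    proposal = rng.normal(theta, jump_sd)            # θ* ~ N(θ^(t−1), jump²)
    log_ratio = log_post(proposal) - log_post(theta)
    if np.log(rng.uniform()) < log_ratio:            # accept w.p. min(1, ratio)
        return proposal
    return theta                                     # otherwise keep θ^(t−1)
```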
How many X’s do you know? Model selection Full model
Full model
◮ Put all the covariates in without any pre-selection.
◮ Advantages:
  1. Easy to start with.
  2. More appropriate for backward selection.
◮ Disadvantages:
  1. The likelihood may go wild during fitting.
  2. It takes a long time to converge.
  3. Renormalization is more complicated.
  4. The fitting results exhibit more variability.
How many X’s do you know? Model selection Pre-selected model
Pre-selected model
◮ Choose the covariates to put into the model using empirical methods.
◮ Advantages:
  1. Speeds up the program.
  2. Efficient in most cases.
◮ Disadvantages:
  1. Important covariates may be excluded during pre-selection.
How many X’s do you know? Model selection Pre-selected model
Pre-selection
◮ Fit a model without any covariates; compute individual degrees and residuals:
  1. ỹ_i = Σ_{k=1}^K y_ik
  2. r_ik = √y_ik − √(â_i b̂_k)
◮ Choose the φ's:
  1. Regress the ỹ_i's (or the log(ỹ_i)'s) against all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the φ part of the model.
◮ Choose the ψ's:
  1. For each k, regress the r_ik's against all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the ψ part of the model.
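The talk does not say which software runs the stepwise search; a minimal backward-elimination sketch by AIC, assuming the OLS screening regressions described above (statsmodels; the function name is illustrative):

```python
import statsmodels.api as sm

def backward_aic(y, X):
    """Greedily drop columns of the DataFrame X while the OLS AIC improves."""
    cols = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[cols])).fit().aic
    improved = True
    while improved and cols:
        improved = False
        for c in list(cols):
            trial = [v for v in cols if v != c]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:        # dropping c lowers AIC, so drop it
                best_aic, cols, improved = aic, trial, True
    return cols                       # the retained covariates
```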
How many X’s do you know? Model selection Model comparison
Model selection using DIC
◮ The usual AIC and BIC criteria are not easily applied here: the "number of parameters" is not clearly defined.
◮ The DIC criterion (Deviance Information Criterion; A. Gelman et al., 2007):
  1. An analogue of AIC:
     DIC = mean(deviance) + 2 p_D
  2. deviance = −2 × log-likelihood; p_D = var(deviance)/2
◮ DIC is suggestive rather than definitive.
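Given posterior draws of the deviance from the Gibbs-Metropolis run, DIC is one line. This sketch follows the slide's formulas as written (some references instead use DIC = mean(deviance) + p_D):

```python
import numpy as np

def dic(deviance_draws):
    """DIC from MCMC draws of the deviance (−2 × log-likelihood)."""
    d = np.asarray(deviance_draws)
    p_d = d.var() / 2.0        # effective number of parameters
    return d.mean() + 2.0 * p_d
```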
How many X’s do you know? Results
Overdispersion of pre-selected model
[Figure: estimated overdispersion ω_k by group. Left panel (names such as Michael, Christina, Christopher, Jacqueline, James, Jennifer, Anthony, Kimberly, Robert, Stephanie, David, Nicole, plus the adoption, twin, suicide, and auto-accident groups): ω roughly 1 to 2.4. Right panel (Amer. Indians, new birth, widow(er), dialysis, postal worker, pilot, Jaycees, diabetic, business, gun dealer, HIV positive, AIDS, homeless, rape, prison, homicide): ω roughly 2 to 12.]
How many X’s do you know? Results
Reduction of overdispersion
[Figure: overdispersion without covariates (x-axis, 2 to 10) vs. overdispersion with covariates (y-axis, 2 to 10), comparing the full model with all covariates, the reduced model with pre-selected covariates, and the reduced model with post-selected covariates.]
How many X’s do you know? Results
DIC of different models
Model                                       Relative DIC   Ranking
All-covariates model                        2072.8         5
Post-selected model from all covariates      716.05        3
Pre-selected model                                         1
Post-selected model from pre-selected        422.46        2
Original model                               958.67        4
How many X’s do you know? Further developments and conclusions
Further developments
◮ Future work:
  1. Interpretation of the covariate estimates (to appear in the report).
  2. Alternative methods: generalized linear mixed models; the likelihood is hard to deal with.
  3. Apply the model to the censored data.
  4. Other ways to renormalize (e.g. set Σ_{j=1}^p φ_j = 0, or set Σ_{i=1}^n α̃_i = 0).
◮ Conclusions:
  1. Our pre-selection is a very good method for choosing the right covariates.
  2. DIC can be a good criterion for model comparison; it also shows that the pre-selected model is the best among the simple models we considered.
  3. Putting covariates into the model in the right way can reduce the overdispersion.
How many X’s do you know? Thank you