

SLIDE 1

CCP Estimation of Dynamic Discrete Choice Models With Unobserved Heterogeneity

Yitian (Sky) LIANG

Department of Marketing Sauder School of Business

March 7, 2013

SLIDE 2

Roadmap

◮ Summary of the paper (5 mins)

◮ Motivating example: bus engine replacement model (Rust, 1987) (10 mins)

◮ Estimator and algorithm (10 mins)

◮ Application result in the motivating example (5 mins)

SLIDE 3

Summary

◮ Motivation: unobserved heterogeneity (unobserved correlated state variables)

◮ Without accounting for it, the first-stage estimates of the CCPs are inconsistent
◮ Violation of conditional independence (CI)

◮ Develop a modified EM algorithm to estimate the structural parameters and the distribution of the unobserved state variables

◮ Develop the concept of “finite dependence” (will not be covered)

◮ Identification?
◮ Does it facilitate estimation?

SLIDE 4

Motivating Example (Setup): Our Friend - Harold Zurcher

◮ Infinite horizon (in the application, they later set the horizon to be finite)

◮ Choice space {d1t, d2t}, i.e. replace the engine vs. keep it

◮ State space {xt, s, εt}, i.e. accumulated mileage since the last replacement, the brand of the bus, and transitory shocks (not observed by the econometrician)

◮ Controlled transition rule:

◮ xt+1 = xt + 1 if d2t = 1
◮ xt+1 = 0 if d1t = 1

◮ Per-period payoff:

u(d1t, xt, s) = d1t · ε1t + (1 − d1t) · (θ0 + θ1 xt + θ2 s + ε2t).
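The transition rule and payoff above can be written down directly. A minimal Python sketch for illustration; the parameter values below are placeholders, not estimates from the paper:

```python
# Bus engine replacement primitives (Rust 1987), as stated on this slide.
# theta0, theta1, theta2 are illustrative placeholder values.
theta0, theta1, theta2 = 2.0, -0.15, 0.5

def next_mileage(x, d1):
    """Controlled transition: replacing (d1 = 1) resets mileage to 0;
    keeping the engine (d2 = 1, i.e. d1 = 0) adds one period of mileage."""
    return 0 if d1 == 1 else x + 1

def flow_payoff(d1, x, s, eps1=0.0, eps2=0.0):
    """Per-period payoff u(d1, x, s) = d1*eps1 + (1-d1)*(th0 + th1*x + th2*s + eps2)."""
    return d1 * eps1 + (1 - d1) * (theta0 + theta1 * x + theta2 * s + eps2)
```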

SLIDE 5

Harold Zurcher Cont.

◮ Hotz and Miller (1993): the difference between conditional value functions can be represented by the flow payoff and the CCPs, i.e.

v2(x, s) − v1(x, s) = θ0 + θ1 x + θ2 s + β log[p1(0, s)] − β log[p1(x + 1, s)].

◮ Then we have: p1(x, s) = 1 / (1 + exp[v2(x, s) − v1(x, s)]).
◮ Let πs be the probability that a bus is of brand s.
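Given any candidate CCP function p1, the representation above delivers a replacement probability through the logit formula. A Python sketch; the discount factor value and the flat p1 used in the usage note are assumptions for illustration:

```python
import math

beta = 0.9  # discount factor (assumed value for illustration)

def value_difference(x, s, theta, p1):
    """v2(x,s) - v1(x,s) from the Hotz-Miller representation on this slide:
    flow payoff of keeping plus the beta-weighted log-CCP correction terms."""
    th0, th1, th2 = theta
    return (th0 + th1 * x + th2 * s
            + beta * math.log(p1(0, s)) - beta * math.log(p1(x + 1, s)))

def replace_prob(x, s, theta, p1):
    """Logit CCP of replacement: p1(x,s) = 1 / (1 + exp[v2 - v1])."""
    return 1.0 / (1.0 + math.exp(value_difference(x, s, theta, p1)))
```

With a flat guess p1 ≡ 0.5, the two log terms cancel and `replace_prob` reduces to a static logit in the flow payoff of keeping.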

SLIDE 6

Harold Zurcher Cont. (Suppose p̂ Is Known)

◮ MLE:

(θ̂, π̂) = argmax_{θ,π} Σn log[Σs πs Πt l(dnt | xnt, s, p̂1, θ)].

◮ EM algorithm

◮ Expectation step:

q̂ns = Pr(sn = s | dn, xn; θ̂, π̂, p̂1) = π̂s Πt l(dnt | xnt, s, p̂1, θ) / Σs′ π̂s′ Πt l(dnt | xnt, s′, p̂1, θ),

π̂s = (1/N) Σn q̂ns.

◮ Maximization step:

θ̂ = argmax_θ Σn log[Σs π̂s Πt l(dnt | xnt, s, p̂1, θ)].
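For a fixed p̂1, the expectation step reduces to Bayes-rule reweighting of per-bus likelihoods. A Python sketch, where `lik[n, s]` stands for Πt l(dnt | xnt, s, p̂1, θ), which would come from the model; array names and shapes are assumptions:

```python
import numpy as np

def e_step(pi, lik):
    """One expectation step of the EM algorithm on this slide.
    pi[s]: current brand shares pi_s; lik[n, s]: per-bus likelihood by brand.
    Returns posterior weights q[n, s] and updated shares pi_s = (1/N) sum_n q_ns."""
    w = pi[None, :] * lik                   # pi_s * Prod_t l(...)
    q = w / w.sum(axis=1, keepdims=True)    # normalize over brands s
    return q, q.mean(axis=0)
```

The maximization step would then re-estimate θ by maximizing the mixture log-likelihood with π̂ held fixed at the value returned here.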

SLIDE 7

Harold Zurcher Cont. (Updating p̂)

◮ Two ways to update the CCPs: model-based vs. non-model-based
◮ Non-model-based update:

p1(x, s) = Pr{d1nt = 1 | sn = s, xnt = x} = E[d1nt qns | xnt = x] / E[qns | xnt = x].

◮ Sample analogue:

p̂1(x, s) = Σn Σt d1nt q̂ns I(xnt = x) / Σn Σt q̂ns I(xnt = x).

◮ Model-based update:

p1^(m+1)(xnt, s) = l(d1nt = 1 | xnt, s, p1^(m), θ^(m)).
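The sample analogue of the non-model-based update is a q̂-weighted frequency estimator. A Python sketch; array names and shapes are assumptions:

```python
import numpy as np

def update_ccp(d1, x, q, x_grid):
    """Non-model-based CCP update: weighted replacement frequency at each mileage.
    d1[n, t]: replacement indicator; x[n, t]: mileage; q[n, s]: E-step weights.
    Returns p1[x, s] = sum_nt d1 * q * I(x_nt = x) / sum_nt q * I(x_nt = x)."""
    p1 = np.empty((len(x_grid), q.shape[1]))
    for i, xv in enumerate(x_grid):
        at_x = (x == xv)                                          # I(x_nt = x)
        num = ((d1 * at_x).sum(axis=1)[:, None] * q).sum(axis=0)  # weighted replacements
        den = (at_x.sum(axis=1)[:, None] * q).sum(axis=0)         # weighted visits
        p1[i] = num / den
    return p1
```

If some (x, s) cell is never visited, the denominator is zero; a real implementation would need to handle such empty cells.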

SLIDE 8

General Model

◮ Larger choice space, non-stationarity (i.e. a finite horizon)
◮ Unobserved heterogeneity changes over time: need to estimate its transition π(st+1 | st)

◮ Initial value problem: need to estimate π(s1 | x1)
◮ Sketch of the algorithm

◮ Expectation step: sequential updates

qns → π(s1 | x1), π(st+1 | st) → pjt(x, s).

◮ Maximization step: maximize the conditional likelihood w.r.t. the structural parameters

SLIDE 9

General Model - Likelihood

L(dn, xn | xn1; θ, π, p) = Σ_{s1} Σ_{s2} ··· Σ_{sT} [ π(s1 | xn1) L1(dn1, xn2 | xn1, s1; θ, π, p) × Π_{t=2}^{T} π(st | st−1) Lt(dnt, xn,t+1 | xnt, st; θ, π, p) ],

where

Lt(dnt, xn,t+1 | xnt, st; θ, π, p) = Π_{j=1}^{J} [ ljt(xnt, snt, θ, π, p) fjt(xn,t+1 | xnt, snt, θ) ]^{djnt}.
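The S^T-term sum over unobserved state sequences need not be enumerated: because s follows a Markov chain, the likelihood can be accumulated by a forward recursion, as in hidden Markov models. A Python sketch, where `Lt[t, s]` stands for the period-t factor Lt(dnt, xn,t+1 | xnt, st = s; θ, π, p):

```python
import numpy as np

def panel_likelihood(pi1, P, Lt):
    """L = sum over s1..sT of pi(s1|x1) * prod_t pi(s_t|s_{t-1}) * L_t(s_t),
    computed by a forward recursion instead of the explicit S^T-term sum.
    pi1[s]: initial distribution pi(s1|x_n1); P[s, s']: transition pi(s'|s);
    Lt[t, s]: period-t likelihood factor given s_t = s."""
    alpha = pi1 * Lt[0]                   # forward weights at t = 1
    for t in range(1, Lt.shape[0]):
        alpha = (alpha @ P) * Lt[t]       # propagate through the chain
    return alpha.sum()
```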

SLIDE 10

The Algorithm - Expectation Step

Update q(m)_{nst}:

q(m+1)_{nst} = L(m)_n(snt = s) / L(m)_n,

where

Ln(snt = s) = Σ_{s1} ··· Σ_{st−1} Σ_{st+1} ··· Σ_{sT} π(s1 | xn1) Ln1(s1) [ Π_{t′=2}^{t−1} π(st′ | st′−1) Lnt′(st′) ] × π(s | st−1) Lnt(s) π(st+1 | s) Ln,t+1(st+1) [ Π_{t′=t+2}^{T} π(st′ | st′−1) Lnt′(st′) ].
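The weights q(m+1)_{nst} marginalize over every other period's state, so a forward-backward recursion computes them without the explicit sum. A Python sketch, where `pi1`, `P`, and `Lt` are π(s1 | xn1), the transition matrix π(s′ | s), and the per-period factors Lnt(s):

```python
import numpy as np

def smoothing_weights(pi1, P, Lt):
    """q[t, s] = L_n(s_nt = s) / L_n via forward-backward recursions,
    avoiding the explicit sum over all state sequences.
    pi1[s]: pi(s1|x_n1); P[s, s']: pi(s'|s); Lt[t, s]: L_nt(s)."""
    T, S = Lt.shape
    alpha = np.empty((T, S))
    beta = np.empty((T, S))
    alpha[0] = pi1 * Lt[0]                     # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ P) * Lt[t]
    beta[T - 1] = 1.0                          # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = P @ (beta[t + 1] * Lt[t + 1])
    L = alpha[-1].sum()                        # total likelihood L_n
    return alpha * beta / L
```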

SLIDE 11

The Algorithm - Expectation Step Cont.

Update π(m)(s | x):

π(m+1)(s | x) = Σn q(m+1)_{ns1} I(xn1 = x) / Σn I(xn1 = x).

Update π(m)(s′ | s):

π(m+1)(s′ | s) = Σn Σ_{t=2}^{T} q(m+1)_{ns′t|s} q(m+1)_{ns,t−1} / Σn Σ_{t=2}^{T} q(m+1)_{ns,t−1},

where the definition of q(m+1)_{ns′t|s} is on page 1847.
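Both updates are weighted averages. A Python sketch that takes the joint weights q(m+1)_{ns′t|s} (defined on page 1847 of the paper) as a precomputed input rather than deriving them; array names and shapes are assumptions:

```python
import numpy as np

def update_initial_pi(q1, x1, x_grid):
    """pi(s|x) update: average of the period-1 weights q_{ns1} over buses
    with x_n1 = x. q1[n, s]: weights; x1[n]: initial state."""
    return np.stack([q1[x1 == xv].mean(axis=0) for xv in x_grid])

def update_transition_pi(q_cond, q_lag):
    """pi(s'|s) update as a weighted average over n and t = 2..T.
    q_cond[n, t, s', s]: the joint weights q_{ns't|s} (taken as given);
    q_lag[n, t, s]: the lagged weights q_{ns,t-1}."""
    num = (q_cond * q_lag[:, :, None, :]).sum(axis=(0, 1))
    den = q_lag.sum(axis=(0, 1))
    return num / den[None, :]      # columns are conditioning states s
```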

SLIDE 12

The Algorithm - Expectation Step Cont. & Maximization Step

Update p(m)_{jt}(x, s):

p(m+1)_{jt}(x, s) = Σn dnjt q(m+1)_{nst} I(xnt = x) / Σn q(m+1)_{nst} I(xnt = x).

Maximization step:

θ(m+1) = argmax_θ Σn Σt Σs q(m+1)_{nst} log Lt(dnt, xn,t+1 | xnt, snt = s; θ, π(m+1), p(m+1)).
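The CCP update is again a q-weighted choice frequency, here across all choices j for a fixed period t. A Python sketch; array names and shapes are assumptions:

```python
import numpy as np

def update_p(d, x, q, x_grid):
    """p_jt(x, s) update at a fixed period t.
    d[n, j]: choice indicators d_njt; x[n]: state x_nt; q[n, s]: weights q_nst.
    Returns p[x, j, s] = sum_n d q I(x_nt = x) / sum_n q I(x_nt = x)."""
    p = np.empty((len(x_grid), d.shape[1], q.shape[1]))
    for i, xv in enumerate(x_grid):
        at_x = (x == xv).astype(float)              # I(x_nt = x)
        num = np.einsum('n,nj,ns->js', at_x, d, q)  # weighted choice counts
        den = at_x @ q                              # weighted visit counts
        p[i] = num / den[None, :]
    return p
```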

SLIDE 13

Alternative Algorithm - Two Stage Estimator

◮ Stage 1: recover θ1, π (s1|x1), π (s′|s), pjt (xt, st) by using the

EM algorithm.

◮ Stage 2: recover θ2. ◮ Key idea: non-parametric representation of the likelihood (free

  • f structural parameters):

Lt (dnt, xn,t+1|xnt, snt; θ1, π, p) = ΠJ

j=1 [ljt (xnt, snt, θ, π, p) fjt (xn,t+1|xnt, snt, θ1)]djnt

= ΠJ

j=1 [pjt (xnt, snt) fjt (xn,t+1|xnt, snt, θ1)]djnt .

SLIDE 14

Alternative Algorithm - Two Stage Estimator Cont.

◮ Stage 1 expectation step: update q and π

◮ Stage 1 maximization step: maximize the conditional likelihood w.r.t. p and θ1

◮ Stage 2: given the stage 1 estimates, apply any CCP-based method to recover θ2, e.g. Hotz and Miller (1993) or BBL (2007).

SLIDE 15

Back to Harold Zurcher