

SLIDE 1

Identification analysis and higher-order approximation of DSGE models

Willi Mutschler

SLIDE 2

Introduction

SLIDE 3

Introduction

Identification Problem

  • distinct parameter values do not lead to distinct probability distributions of the data (in particular moments and spectra): p(Y|θ) = p(Y|θ0) for some θ ≠ θ0
  • lack of identification leads to wrong conclusions from calibration, estimation and inference
  • in practice, many caveats are due to identifiability issues and/or an unfortunate choice of observables
  • difficult to maximize the likelihood/posterior or minimize some (moment) objective function
  • estimators often lie on the boundary of the theoretically admissible space
  • Gaussian asymptotics yield poor approximations
  • BUT: identifiability is a model property and can be analyzed without actually estimating the model
  • lack of vs. strength of identification

SLIDE 6

Example (1): ARMA(1,1)

SLIDE 7

Example (1): ARMA(1,1) (I)

  • consider the following ARMA(1,1) process

xt − φ1 xt−1 = εt − φ2 εt−1, with εt iid ∼ N(0, σ²)

with parameter vector θ = (φ1, φ2, σ)′:

[Figure: simulated ARMA(1,1) path with θ = (0.4, 0.4, 1); x plotted against time]

SLIDE 8

Example (1): ARMA(1,1) (I)

  • consider the following ARMA(1,1) process

xt − φ1 xt−1 = εt − φ2 εt−1, with εt iid ∼ N(0, σ²)

with parameter vector θ = (φ1, φ2, σ)′:

[Figure: simulated ARMA(1,1) path with θ = (0, 0, 1); x plotted against time]
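The observational equivalence the two figures illustrate (for φ1 = φ2 the AR and MA terms cancel, so xt is white noise regardless of the common value) can be checked by simulation. A minimal sketch, not code from the slides:

```python
import numpy as np

def simulate_arma11(phi1, phi2, sigma, T, seed=0):
    """Simulate x_t - phi1*x_{t-1} = eps_t - phi2*eps_{t-1}, eps_t ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, T + 1)
    x = np.zeros(T + 1)
    for t in range(1, T + 1):
        x[t] = phi1 * x[t - 1] + eps[t] - phi2 * eps[t - 1]
    return x[1:]

def sample_autocov(x, h):
    """Sample autocovariances gamma_0, ..., gamma_h."""
    x = x - x.mean()
    n = len(x)
    return np.array([x[: n - k] @ x[k:] / n for k in range(h + 1)])

# theta = (0.4, 0.4, 1) and theta = (0, 0, 1) generate the same distribution:
g_a = sample_autocov(simulate_arma11(0.4, 0.4, 1.0, 100_000), 3)
g_b = sample_autocov(simulate_arma11(0.0, 0.0, 1.0, 100_000), 3)
print(np.round(g_a, 2))  # ~ [1, 0, 0, 0]
print(np.round(g_b, 2))  # ~ [1, 0, 0, 0]
```

Both parameterizations produce (up to sampling error) the autocovariances of unit-variance white noise, so no moment- or likelihood-based objective can tell them apart.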

SLIDE 10

Example (1): ARMA(1,1) (II)

  • autocovariance function: Γ = (γ0, γ1, . . . , γh) with

γ0 = (1 + φ2² − 2φ1φ2)σ² / (1 − φ1²), γ1 = (φ1 − φ2)(1 − φ1φ2)σ² / (1 − φ1²), γh = φ1 γh−1

  • rank of the Jacobian of Γ w.r.t. θ is not full at φ1 = φ2
  • a similar argument applies to the spectral density matrix
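This rank condition can be checked numerically; the following is an illustrative sketch using a finite-difference Jacobian of (γ0, γ1, γ2) and a tolerance-based rank, with the evaluation points chosen for illustration:

```python
import numpy as np

def gammas(theta):
    """Autocovariances (gamma_0, gamma_1, gamma_2) of the ARMA(1,1) process."""
    phi1, phi2, s = theta
    g0 = (1 + phi2**2 - 2 * phi1 * phi2) * s**2 / (1 - phi1**2)
    g1 = (phi1 - phi2) * (1 - phi1 * phi2) * s**2 / (1 - phi1**2)
    return np.array([g0, g1, phi1 * g1])

def num_jacobian(f, theta, eps=1e-6):
    """Forward-difference Jacobian of f at theta."""
    theta = np.asarray(theta, dtype=float)
    f0 = f(theta)
    J = np.zeros((f0.size, theta.size))
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        J[:, i] = (f(tp) - f0) / eps
    return J

# Rank deficient exactly on the non-identification set phi1 == phi2:
r_equal = np.linalg.matrix_rank(num_jacobian(gammas, [0.4, 0.4, 1.0]), tol=1e-4)
r_generic = np.linalg.matrix_rank(num_jacobian(gammas, [0.5, 0.2, 1.0]), tol=1e-4)
print(r_equal, r_generic)  # 2 3
```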

SLIDE 12

Example (2): Simple DSGE Model

SLIDE 15

Example (2): Simple DSGE Model (I)

  • consider a simple, purely forward-looking log-linearized model

rt = ψ πt + εM,t (TR)
xt = Et[xt+1] − (1/τ)(rt − Et[πt+1]) + εD,t (IS)
πt = β Et[πt+1] + κ xt + εS,t (PC)

  • in matrix form, A0 yt = A1 Et[yt+1] + εt with yt = (rt, xt, πt)′, εt = (εM,t, εD,t, εS,t)′ and

A0 = [1, 0, −ψ; 1/τ, 1, 0; 0, −κ, 1], A1 = [0, 0, 0; 0, 1, 1/τ; 0, 0, β]

  • a stationary solution of the model, i.e. all eigenvalues of A0⁻¹A1 lie within the unit circle, implies:

yt = A0⁻¹A1 Et[yt+1] + A0⁻¹εt = Σ_{j=0}^{∞} (A0⁻¹A1)^j A0⁻¹ Et[εt+j] = A0⁻¹εt
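The eigenvalue condition can be checked numerically. A minimal sketch; the calibration values below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Illustrative calibration (assumed values, chosen so the Taylor principle holds)
tau, kappa, psi, beta = 2.0, 0.3, 1.5, 0.99

# A0 * y_t = A1 * E_t[y_{t+1}] + eps_t with y_t = (r_t, x_t, pi_t)'
A0 = np.array([[1.0,       0.0,   -psi],
               [1.0 / tau, 1.0,    0.0],
               [0.0,      -kappa,  1.0]])
A1 = np.array([[0.0, 0.0, 0.0],
               [0.0, 1.0, 1.0 / tau],
               [0.0, 0.0, beta]])

F = np.linalg.solve(A0, A1)   # A0^{-1} A1
stable = np.max(np.abs(np.linalg.eigvals(F))) < 1.0
print(stable)  # True: unique stationary solution y_t = A0^{-1} eps_t
```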

SLIDE 21

Example (2): Simple DSGE Model (II)

  • solution / data-generating process / reduced form:

(rt, xt, πt)′ = 1/(1 + κψ/τ) · [1, κψ, ψ; −1/τ, 1, −ψ/τ; −κ/τ, κ, 1] · (εM,t, εD,t, εS,t)′ = A0⁻¹εt

  • some insights:
  • some parameters (β) do not enter the solution, and thus do not enter the likelihood (or any other objective)
  • if we fix (or calibrate) β, all parameters are identifiable
  • some parameters enter only as products, e.g. κψ/τ; identifying them separately may be impossible (depending on the choice of observables)
  • κ is itself already the product of several other structural parameters (Calvo or Rotemberg)
  • the restrictions necessary to ensure regularity (eigenvalues inside the unit circle) imply bounds involving all structural parameters, i.e. the parameter space is not variation-free
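The β result can be verified mechanically: since β enters only A1 and the solution is yt = A0⁻¹εt, the Jacobian of the reduced-form coefficients with respect to θ = (τ, κ, ψ, β) has a zero column for β. A hedged numerical sketch (the evaluation point is an illustrative assumption):

```python
import numpy as np

def reduced_form(theta):
    """vec of the impact matrix A0^{-1}; beta is deliberately unused (it cancels)."""
    tau, kappa, psi, beta = theta
    A0 = np.array([[1.0,       0.0,   -psi],
                   [1.0 / tau, 1.0,    0.0],
                   [0.0,      -kappa,  1.0]])
    return np.linalg.inv(A0).ravel()

def num_jacobian(f, theta, eps=1e-6):
    """Forward-difference Jacobian of f at theta."""
    theta = np.asarray(theta, dtype=float)
    f0 = f(theta)
    J = np.zeros((f0.size, theta.size))
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        J[:, i] = (f(tp) - f0) / eps
    return J

J = num_jacobian(reduced_form, [2.0, 0.3, 1.5, 0.99])
rank_full  = np.linalg.matrix_rank(J, tol=1e-4)         # beta column is zero: 3, not 4
rank_fixed = np.linalg.matrix_rank(J[:, :3], tol=1e-4)  # fix beta: remaining params identified
print(rank_full, rank_fixed)  # 3 3
```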

SLIDE 23

Example (2): Simple DSGE Model (I)

concerned with the injectivity of two mappings:

  • uniqueness of solution
    → from the structural parameters to the reduced-form parameters
  • uniqueness of probability distribution
    → from the solution to observable data

SLIDE 24

Example (2): Simple DSGE Model (II)

Local Identification for Linearized Gaussian DSGE Models

  • autocovariogram (Iskrev, 2010)
  • spectral density (Qu and Tkachenko, 2012)
  • control theory for minimal systems (Komunjer and Ng, 2011)

Global Identification for Linearized Gaussian DSGE Models

  • Kullback-Leibler discrepancy (Qu and Tkachenko, 2017)

Weak Identification for Linearized Gaussian DSGE Models

  • Bayesian indicators (Koop, Pesaran and Smith, 2013)
  • indirect inference on VAR approximation (Le, Meenagh, Minford and Wickens, 2017)
  • score test on Gaussian likelihood (Qu, 2014)

SLIDE 26

Example (2): Simple DSGE Model (III)

VERY SIMILAR TO EXAMPLE (1)

Iskrev (2010)'s approach

  • check whether the derivative of the theoretical mean, variance and autocovariogram of the observables w.r.t. the structural parameters has full rank
    → time domain approach

Qu and Tkachenko (2012)'s approach

  • check whether the derivative of the theoretical mean and spectrum of the observables w.r.t. the structural parameters has full rank
    → frequency domain approach

SLIDE 27

Example (2): Simple DSGE Model (IV)

Koop, Pesaran and Smith (2013)'s approach

  • suppose θ2 is identified, whereas θ1 is weakly identified, such that the information about θ1 in the reduced-form parameters depends on the sample size T
  • for growing T, the posterior precision of θ1 divided by the sample size will go to zero
  • for growing T, the posterior precision of θ2 divided by the sample size will go to a constant
    → Bayesian simulation approach

SLIDE 28

Example (3): An and Schorfheide (2007)

SLIDE 29

Example (3): An and Schorfheide (2007) (I)

prototypical small-scale business cycle model:

(Ct/At)^(−τ) (1/At) = β Et[ (Rt/πt+1) (Ct+1/At+1)^(−τ) (1/At+1) ]   (1)

1 = (1/ν)[1 − (Ct/At)^τ] + φ(πt − π)[(1 − 1/(2ν))πt + π/(2ν)]
    − φβ Et[ ((Ct+1/At+1)/(Ct/At))^(−τ) ((Yt+1/At+1)/(Yt/At)) (πt+1 − π)πt+1 ]   (2)

Yt = Ct + Gt + (φ/2)(πt − π)² Yt   (3)

Y*t = (1 − ν)^(1/τ) At gt, with gt = Yt/(Yt − Gt)   (4)
SLIDE 30

Example (3): An and Schorfheide (2007) (II)

  • stochastic processes

ln At = ln γ + ln At−1 + ln zt, ln zt = ρz ln zt−1 + σz εz,t   (5)
ln gt = (1 − ρg) ln g + ρg ln gt−1 + σg εg,t   (6)

  • monetary policy modeled by a Taylor rule

Rt = R*t^(1−ρr) Rt−1^(ρr) e^(σr εr,t)   (7)

  • two specifications for R*t are considered

R*t = r π* (πt/π*)^(ψ1) (Yt/Y*t)^(ψ2)   (TR1, flex-price rule)
R*t = r π* (πt/π*)^(ψ1) (Yt/(γYt−1))^(ψ2)   (TR2, output-growth rule)

  • εz,t ∼ N(0, 1), εg,t ∼ N(0, 1) and εr,t ∼ N(0, 1)

SLIDE 32

Example (3): An and Schorfheide (2007) (III)

SIMILAR TO EXAMPLE (2)

  • log-linearized model with ŷt, π̂t and r̂t observable

ŷt = Et[ŷt+1] + ĝt − Et[ĝt+1] − (1/τ)(r̂t − Et[π̂t+1] − Et[ẑt+1])
π̂t = β Et[π̂t+1] + κ (ŷt − ĝt), with κ = τ(1 − ν)/(ν π*² φ)
ĝt = ρg ĝt−1 + σg εg,t
ẑt = ρz ẑt−1 + σz εz,t
r̂t = ρr r̂t−1 + (1 − ρr)ψ1 π̂t + (1 − ρr)ψ2 (ŷt − ĝt) [or (ŷt − ŷt−1 + ẑt) under TR2] + σr εr,t

  • θ = (τ, φ, ν, ψ1, ψ2, ρr, ρg, ρz, β, π*, γ, σr, σg, σz)

Obvious identification failure
→ the elasticity of demand 1/ν and the price stickiness φ are not jointly identifiable, but only the composite parameter κ

SLIDE 33

Example (3): Identification Analysis

SLIDE 34

Example (3): Identification Analysis (I)

Figure 1: Non-Identification Sets: Linearized Model TR1

[Figure: shares of 100 prior draws (in %) for which various parameter subsets, all involving ν and φ (e.g. {ν, φ}, {ψ1, ψ2, ρr, σr, ν, φ}, {ψ2, ν, φ}, {ρr, ν, φ}, {ρz, ν, φ}, {σr, ν, φ}, {τ, ν, φ}), are found non-identified by the Iskrev (2010) and Qu and Tkachenko (2012) criteria]

Notes: Identification results for Iskrev (2010) and Qu and Tkachenko (2012) for 100 draws from the prior domain, using analytical derivatives with robust tolerance level, 30 lags and 10000 subintervals. Sets found by a brute-force method. Source: Mutschler (2015)

TR1 corresponds to the output-gap specification of the Taylor rule.

SLIDE 35

Example (3): Identification Analysis (II)

Figure 2: Non-Identification Sets: Linearized Model TR2

[Figure: shares of 100 prior draws (in %) for which various parameter subsets, all involving ν and φ (e.g. {ν, φ}, {ψ1, ν, φ}, {σg, ν, φ}, {ψ1, ψ2, ρr, ν, φ}, {ψ1, ψ2, σr, ν, φ}, {τ, ρz, σr, ν, φ}, {τ, ρz, σz, ν, φ}), are found non-identified by the Iskrev (2010) and Qu and Tkachenko (2012) criteria]

Notes: Identification results for Iskrev (2010) and Qu and Tkachenko (2012) for 100 draws from the prior domain, using analytical derivatives with robust tolerance level, 30 lags and 10000 subintervals. Sets found by a brute-force method. Source: Mutschler (2015)

TR2 corresponds to the output-growth specification of the Taylor rule.

SLIDE 36

Example (3): Identification Analysis (III)

Table 1: Average Posterior Precisions: Linearized Model TR1

Obs     τ      ψ1     ψ2     r(A)   ρg      ρz      100σr   100σg   100σz   ν
HESSIAN METHOD - GAUSSIAN PRIORS
20      5.07   5.09   8.58   5.01   3.80    98.85   36.40   9.61    17.40   368.01
50      2.04   2.04   4.12   2.11   4.88    46.96   36.25   7.70    10.68   307.86
100     1.03   1.03   3.05   1.15   3.33    56.86   43.00   7.13    11.71   264.77
1000    0.16   0.20   5.49   0.17   7.96    73.15   47.64   5.97    14.07   100.05
10000   0.03   0.15   5.04   0.19   5.71    28.69   46.59   4.87    9.36    29.69
MCMC METHOD - GAUSSIAN PRIORS
20      5.28   5.03   10.84  5.38   10.69   113.62  30.75   8.70    19.30   364.50
50      2.03   2.07   5.15   2.18   10.68   51.34   33.28   6.87    11.07   329.11
100     1.01   1.06   3.10   1.11   3.71    50.30   39.19   6.36    10.34   283.05
1000    0.14   0.10   1.40   0.15   7.54    65.27   43.20   5.55    12.93   84.67
10000   0.06   0.01   0.29   0.03   8.17    74.32   46.40   5.35    15.91   43.79

Notes: φ and ρr are fixed at true values; β = exp(−r(A)/400). Nelder-Mead simplex optimization routine for posterior mode and Hessian. The MCMC method uses variances from draws of the marginal posteriors, i.e. a Random-Walk Metropolis-Hastings algorithm with 3 chains of 20000 draws each. Gaussian proposal density initialized at mode and Hessian with scale parameter equal to 0.6; acceptance ratios lie between 20% and 35%. Gaussian priors correspond to truncated independent normal distributions with mean set to the true value and standard deviation equal to 0.1.

slide-38
SLIDE 38

Computational and numerical issues

  • Identifjcation criteria generally come to the same conclusion,

yet there are computational and numerical issues

  • Numerical instability of solution algorithm
  • Size of matrices: use different or robust tolerance level for ranks
  • If feasible: Use analytical rather than numerical derivatives and

robust tolerance levels

  • Lag length (ISK), subintervals for integral of spectra (QT), fjltering

and speed (KPS)

17
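The tolerance point matters in practice: `numpy.linalg.matrix_rank` uses a machine-precision default cutoff, so a Jacobian that is analytically rank deficient but numerically perturbed can pass as full rank. A toy illustration:

```python
import numpy as np

# A 2x2 "Jacobian" that is rank 1 up to a tiny numerical perturbation
J = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-9]])

# Default tolerance is of order machine epsilon: the tiny singular
# value (~5e-10) still counts, and the matrix looks full rank
print(np.linalg.matrix_rank(J))            # 2

# A robust tolerance treats the tiny singular value as zero
print(np.linalg.matrix_rank(J, tol=1e-6))  # 1
```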

SLIDE 39

Identification of nonlinear and non-Gaussian DSGE models

SLIDE 41

General model framework and solution

General DSGE model: Et[f(zt+1, zt, zt−1, ut|θ)] = 0 with solution zt = g(zt−1, ut|θ) and yt = g̃(xt, ut|θ), where yt are observables, xt states, ut shocks and zt all endogenous variables

Solution method: perturbation

  • Taylor approximation around the non-stochastic steady state:

zt = z̄ + gx(xt−1 − x̄) + gu ut + (1/2)[gxx ((xt−1 − x̄) ⊗ (xt−1 − x̄)) + 2gxu (xt−1 ⊗ ut) + guu (ut ⊗ ut) + gσσ σ²] + (1/6)[. . . ] + . . .

SLIDE 42

Pruning

Problem of higher-order Taylor approximations

  • Possibility of explosive behavior in higher-order approximations
  • Model may not be stationary or may not have an ergodic probability distribution

Solution: Pruning

  • Idea: leave out terms in the solution that have higher-order effects than the approximation order
  • Kim, Kim, Schaumburg and Sims (2008) and Andreasen, Fernández-Villaverde and Rubio-Ramírez (2016) show that the pruned state space is stationary and ergodic
  • Lombardo and Uhlig (2014) and Lan and Meyer-Gohde (2013) provide theoretical foundations for this seemingly ad-hoc procedure

slide-54
SLIDE 54

Example with simple univariate model

  • x_t = g_x x_{t−1} + g_xx x_{t−1}² + g_u u_t, with |g_x| < 1, g_xx > 0
  • Two fixed points: x̄ = 0 and x̄ = (1 − g_x)/g_xx
  • Decompose the state vector into 1st- and 2nd-order effects:
    x_t = x^f_t + x^s_t = g_x x^f_{t−1} + g_x x^s_{t−1} + g_xx (x^f_{t−1})² + 2 g_xx x^f_{t−1} x^s_{t−1} + g_xx (x^s_{t−1})² + g_u u_t
  • Stable solution: prune the terms that contain x^f_t x^s_t and (x^s_t)²:
    x^f_t = g_x x^f_{t−1} + g_u u_t
    x^s_t = g_x x^s_{t−1} + g_xx (x^f_{t−1})²
    (x^f_t)² = g_x² (x^f_{t−1})² + 2 g_x g_u x^f_{t−1} u_t + g_u² u_t²
  • Pruned solution can be rewritten as a stable state-space system z_t = A z_{t−1} + B ξ_t + c with
    z_t = ( x^f_t , x^s_t , (x^f_t)² )′
    A = [ g_x, 0, 0 ; 0, g_x, g_xx ; 0, 0, g_x² ]
    B = [ g_u, 0, 0 ; 0, 0, 0 ; 0, 2 g_x g_u, g_u² ]
    ξ_t = ( u_t , x^f_{t−1} u_t , u_t² − σ_u² )′
    c = ( 0 , 0 , g_u² σ_u² )′

20
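To make the stability claim concrete, the pruned system above can be simulated directly, since it is linear in the extended state z_t. This is a minimal sketch; the parameter values are illustrative placeholders, not taken from the slides.

```python
import numpy as np

# Simulate the pruned second-order solution of the univariate example via the
# extended state z_t = (x^f_t, x^s_t, (x^f_t)^2)'. Parameter values are
# illustrative placeholders, not from the slides.
g_x, g_xx, g_u, s_u = 0.9, 0.05, 0.1, 1.0

A = np.array([[g_x, 0.0, 0.0],
              [0.0, g_x, g_xx],
              [0.0, 0.0, g_x**2]])
B = np.array([[g_u, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 2.0 * g_x * g_u, g_u**2]])
c = np.array([0.0, 0.0, g_u**2 * s_u**2])

rng = np.random.default_rng(0)
T = 100_000
z = np.zeros(3)
x = np.empty(T)
for t in range(T):
    u = rng.normal(scale=s_u)
    xi = np.array([u, z[0] * u, u**2 - s_u**2])  # uses x^f_{t-1} stored in z[0]
    z = c + A @ z + B @ xi
    x[t] = z[0] + z[1]  # pruned decision rule: x_t = x^f_t + x^s_t

# The pruned path inherits stability from A (eigenvalues g_x, g_x, g_x^2);
# the second-order term pushes the mean above the steady state x̄ = 0.
print(x.mean())
```

Unlike the raw quadratic recursion, which can diverge once a shock pushes x_t past the second fixed point, the pruned system is stable for any shock sequence whenever |g_x| < 1.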

slide-59
SLIDE 59

Pruned state-space

Pruned state-space: Given an extended state vector z_t and an extended vector of innovations ξ_t, the pruned solution of a DSGE model can be rewritten as a linear, time-invariant state-space system:
z_t = c + A z_{t−1} + B ξ_t
y_t = ȳ + d + C z_{t−1} + D ξ_t

  • Same procedure for higher-order approximations
  • Straightforward to compute moments, cumulants and polyspectra
  • Note: Even if u_t is Gaussian, ξ_t is not!

↪ Higher-order statistics (HOS) may contain additional information for estimation and identification

21
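Because the pruned system is linear in z_t, its unconditional moments follow from textbook state-space formulas. The sketch below does this for the univariate example; Σ_ξ is derived by hand under Gaussian u_t (so all cross moments of ξ_t vanish), and the parameter values are illustrative.

```python
import numpy as np

# First two unconditional moments of z_t = c + A z_{t-1} + B xi_t for the
# univariate example, assuming Gaussian u_t (so E[xi_t] = 0, E[z_{t-1} xi_t'] = 0,
# and Sigma_xi is diagonal). Parameter values are illustrative placeholders.
g_x, g_xx, g_u, s_u = 0.9, 0.05, 0.1, 1.0

A = np.array([[g_x, 0.0, 0.0],
              [0.0, g_x, g_xx],
              [0.0, 0.0, g_x**2]])
B = np.array([[g_u, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 2.0 * g_x * g_u, g_u**2]])
c = np.array([0.0, 0.0, g_u**2 * s_u**2])

mu_z = np.linalg.solve(np.eye(3) - A, c)  # E[z] = (I - A)^{-1} c

# For Gaussian u: Var(u^2 - s^2) = 2 s^4, Var(x^f u) = Var(x^f) s^2
var_xf = g_u**2 * s_u**2 / (1.0 - g_x**2)
Sigma_xi = np.diag([s_u**2, var_xf * s_u**2, 2.0 * s_u**4])

# Sigma_z solves Sigma = A Sigma A' + B Sigma_xi B':
# vec(Sigma_z) = (I - A kron A)^{-1} vec(B Sigma_xi B')
Q = B @ Sigma_xi @ B.T
Sigma_z = np.linalg.solve(np.eye(9) - np.kron(A, A), Q.ravel()).reshape(3, 3)

# internal consistency: E[(x^f)^2] shows up both as mu_z[2] and as Sigma_z[0, 0]
print(mu_z[2], Sigma_z[0, 0])
```

The same two formulas (a linear solve for the mean, a Lyapunov equation for the variance) carry over to the full pruned DSGE state-space; only the construction of Σ_ξ gets more involved.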

slide-60
SLIDE 60

Analytical derivatives

slide-64
SLIDE 64

Analytical derivatives

  • We view f as a function of θ and of the steady-state vector z(θ),

which is also a function of θ.

  • Thus, implicitly we have f(z(θ), θ) = 0. Differentiating yields

df := ∂f(z(θ), θ)/∂θ′ = (∂f/∂z′)(∂z/∂θ′) + ∂f/∂θ′ = 0  ⇔  ∂z/∂θ′ = −(∂f/∂z′)⁻¹ (∂f/∂θ′)

  • The same holds for the Jacobian

dDf := ∂vec(Df(z(θ), θ))/∂θ′ = (∂vec(Df)/∂z′)(∂z/∂θ′) + ∂vec(Df)/∂θ′

  • And the Hessian

dH := ∂vec(Hf(z(θ), θ))/∂θ′ = (∂vec(Hf)/∂z′)(∂z/∂θ′) + ∂vec(Hf)/∂θ′.

22
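The implicit-function step can be checked numerically on a toy steady-state condition. The scalar f below is a hypothetical example (not a DSGE model), chosen so that z(θ) has a closed form to differentiate against.

```python
import numpy as np

# Hypothetical scalar steady-state condition (illustration only):
# f(z, theta) = z - theta1*z^2 - theta2 = 0, smaller root taken as z(theta).
def solve_z(th):
    return (1.0 - np.sqrt(1.0 - 4.0 * th[0] * th[1])) / (2.0 * th[0])

def dz_dtheta(th):
    z = solve_z(th)
    df_dz = 1.0 - 2.0 * th[0] * z      # df/dz (scalar here)
    df_dth = np.array([-z**2, -1.0])   # df/dtheta'
    return -df_dth / df_dz             # implicit function theorem

th = np.array([0.2, 0.5])
analytic = dz_dtheta(th)

# finite-difference check of the implicit derivative
h = 1e-6
fd = np.array([(solve_z(th + h * np.eye(2)[i]) - solve_z(th - h * np.eye(2)[i])) / (2 * h)
               for i in range(2)])
print(np.max(np.abs(analytic - fd)))  # tiny: the two derivatives agree
```

In the DSGE case z is a vector and ∂f/∂z′ a matrix, so the division becomes a linear solve, but the logic is identical.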

slide-65
SLIDE 65

Analytical derivatives

  • Now it is straightforward (but very tedious) to derive the

analytical derivatives of the first-, second- and third-order solution matrices with respect to the parameters

  • Once we have those, the pruned state-space can be used to

compute analytical derivatives of the first four moments or the corresponding polyspectra

  • On paper, Kronecker products provide closed-form expressions
  • Computationally, generalized Sylvester equations are more

efficient

23
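The Kronecker-vs-Sylvester point can be seen on the plain Sylvester equation AX + XB = C: the Kronecker closed form builds an n²×n² linear system, while a decomposition-based solver only touches n×n objects. The eigendecomposition solver below is a simple NumPy-only stand-in for a production Bartels-Stewart routine; both routes give the same X.

```python
import numpy as np

# Two ways to solve the Sylvester equation A X + X B = C.
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shift so lam(A)+lam(B) != 0
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# (1) Kronecker closed form: vec(AX + XB) = (I kron A + B' kron I) vec(X),
# using column-major vec; the system matrix is n^2 x n^2.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
X_kron = np.linalg.solve(K, C.ravel(order="F")).reshape(n, n, order="F")

# (2) Diagonalize B = V L V^{-1}; with Xt = X V, Ct = C V the equation
# decouples columnwise into (A + l_j I) xt_j = ct_j  -- only n x n solves.
L, V = np.linalg.eig(B)
Ct = C @ V
Xt = np.column_stack([np.linalg.solve(A + L[j] * np.eye(n), Ct[:, j])
                      for j in range(n)])
X_eig = np.real(Xt @ np.linalg.inv(V))

print(np.allclose(X_kron, X_eig))  # True: same solution, very different cost
```

For the derivative computations on the previous slide, the Sylvester route replaces an O(n⁶) Kronecker solve with O(n³) work, which is what makes analytical derivatives practical for larger models.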

slide-66
SLIDE 66

Identification Criteria: Time Domain

Proposition (Time Domain)
Consider the pruned state-space of a nonlinear DSGE model. Let q ≤ T and assume that m(θ, q) := ( μ_y′ , m2(θ, q)′ , m3(θ, q)′ , m4(θ, q)′ )′ is a continuously differentiable function of θ ∈ Θ. Let θ0 ∈ Θ be a regular point. Then θ is locally identifiable at θ0 from the first four cumulants (or moments) of y_t if and only if M(q) := ∂m(θ0, q)/∂θ′ has full column rank equal to the number of parameters for q ≤ T.

24
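A sketch of the rank check behind the proposition, on a hypothetical moment mapping rather than actual DSGE moments: when two parameters enter the stacked moments only through their product, the Jacobian M loses column rank and local identification fails.

```python
import numpy as np

# Hypothetical moment mapping m(theta) standing in for the stacked moment
# vector: local identification <=> M = dm/dtheta' has full column rank
# at theta0 (checked numerically with a rank tolerance).
def jacobian(fun, theta0, h=1e-6):
    theta0 = np.asarray(theta0, dtype=float)
    cols = []
    for i in range(theta0.size):
        e = np.zeros_like(theta0)
        e[i] = h
        cols.append((fun(theta0 + e) - fun(theta0 - e)) / (2 * h))
    return np.column_stack(cols)

def locally_identified(fun, theta0, tol=1e-8):
    M = jacobian(fun, theta0)
    return np.linalg.matrix_rank(M, tol=tol) == np.asarray(theta0).size

# theta enters only through theta1*theta2 -> rank failure
m_bad = lambda th: np.array([th[0] * th[1], (th[0] * th[1]) ** 2])
# product and sum enter separately -> full rank (generically)
m_good = lambda th: np.array([th[0] * th[1], th[0] + th[1]])

theta0 = [0.9, 0.36]
print(locally_identified(m_bad, theta0), locally_identified(m_good, theta0))  # False True
```

In practice the tolerance choice matters: too tight and numerical noise masquerades as identification, too loose and weakly identified directions are flagged as rank failures.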

slide-67
SLIDE 67

Identification Criteria: Frequency Domain

Proposition (Frequency Domain)

Consider the pruned state-space of a nonlinear DSGE model. Assume that the power spectrum, bispectrum and trispectrum are continuous in ω ∈ [−π, π] and continuous and differentiable in θ ∈ Θ. Let

G(θ) = d(μ_y(θ))′ d(μ_y(θ))
  + ∫_{−π}^{π} d(S_{2,y}(ω1; θ))* d(S_{2,y}(ω1; θ)) dω1
  + ∫_{−π}^{π} ∫_{−π}^{π} d(S_{3,y}(ω1, ω2; θ))* d(S_{3,y}(ω1, ω2; θ)) dω1 dω2
  + ∫_{−π}^{π} ∫_{−π}^{π} ∫_{−π}^{π} d(S_{4,y}(ω1, ω2, ω3; θ))* d(S_{4,y}(ω1, ω2, ω3; θ)) dω1 dω2 dω3

and let θ0 ∈ Θ be a regular point. Furthermore, assume there is an open neighborhood of θ0 in which G(θ) has constant rank. Then θ is locally identifiable at θ0 from the mean, power spectrum, bispectrum and trispectrum of y_t if and only if G(θ0) is nonsingular, i.e. its rank is equal to the number of parameters.

25
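A sketch of the criterion restricted to the power-spectrum term, for a scalar AR(1) rather than a DSGE model: when two parameters enter the spectrum only through their product, G(θ) is singular; a parameter pair that shapes the spectrum in genuinely different ways yields a nonsingular G(θ). The derivatives d(·) are approximated by finite differences and the integral by a Riemann sum.

```python
import numpy as np

# Scalar AR(1) illustration (not a DSGE model): x_t = g_x x_{t-1} + g_u s_u e_t
# has power spectrum S(w) = (g_u*s_u)^2 / (2*pi*(1 - 2*g_x*cos(w) + g_x^2)).
def spectrum(w, g_x, scale):
    return scale**2 / (2 * np.pi * (1 - 2 * g_x * np.cos(w) + g_x**2))

w = np.linspace(-np.pi, np.pi, 2001)
dw = w[1] - w[0]
g_x, g_u, s_u, h = 0.9, 0.1, 1.0, 1e-6

def G_of(dS):  # G = integral over frequencies of dS' dS (real case here)
    return dS.T @ dS * dw

# theta = (g_u, s_u): only the product g_u*s_u enters -> G singular
dS_bad = np.column_stack([
    (spectrum(w, g_x, (g_u + h) * s_u) - spectrum(w, g_x, (g_u - h) * s_u)) / (2 * h),
    (spectrum(w, g_x, g_u * (s_u + h)) - spectrum(w, g_x, g_u * (s_u - h))) / (2 * h),
])

# theta = (g_x, g_u): the two derivatives are not proportional -> full rank
dS_good = np.column_stack([
    (spectrum(w, g_x + h, g_u * s_u) - spectrum(w, g_x - h, g_u * s_u)) / (2 * h),
    (spectrum(w, g_x, (g_u + h) * s_u) - spectrum(w, g_x, (g_u - h) * s_u)) / (2 * h),
])

print(np.linalg.matrix_rank(G_of(dS_bad), tol=1e-10),
      np.linalg.matrix_rank(G_of(dS_good), tol=1e-10))
```

In the Gaussian AR(1) case the bispectrum and trispectrum are zero, so nothing can rescue the (g_u, σ_u) pair; for the pruned non-Gaussian state-space the higher-order spectral terms in G(θ) can add exactly the information that the power spectrum lacks.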

slide-68
SLIDE 68

Strength of Identification: Bayesian Learning Rate Indicator

Koop, Pesaran and Smith (2013)'s approach

  • suppose θ2 is identified, whereas θ1 is only weakly identified such

that the rank of the reduced-form parameters depends on the sample size T

  • for growing T the posterior precision of
    • θ1 divided by the sample size will go to zero
    • θ2 divided by the sample size will go to a constant

↪ Bayesian simulation approach using a nonlinear Kalman filter or a particle filter to evaluate the likelihood

26
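The learning-rate logic can be seen in a conjugate toy regression (my own illustration, not the nonlinear-filter setting of the slide): with a Gaussian prior and likelihood the posterior precision is available in closed form, and dividing it by the sample size separates weakly identified from identified parameters.

```python
import numpy as np

# Conjugate toy model: y_t = x_t' theta + e_t, e_t ~ N(0,1), prior theta ~ N(0,I),
# so the posterior precision matrix is I + X'X (illustration only).
def marginal_precisions(X):
    """Marginal posterior precision of each coefficient."""
    k = X.shape[1]
    post_var = np.linalg.inv(np.eye(k) + X.T @ X)
    return 1.0 / np.diag(post_var)

for T in (100, 10_000):
    X_weak = np.tile([1.0, 1.0], (T, 1))    # only theta1 + theta2 is identified
    X_id = np.tile(np.eye(2), (T // 2, 1))  # both parameters identified
    print(T, marginal_precisions(X_weak) / T, marginal_precisions(X_id) / T)

# Weakly identified case: precision/T -> 0. Identified case: precision/T -> 1/2.
```

The DSGE version replaces the closed-form posterior with simulation (nonlinear Kalman or particle filter), but the diagnostic is the same: track how fast the posterior precision of each parameter grows with T.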

slide-69
SLIDE 69

Identification analysis of An and Schorfheide (2007)

Figure 1: Non-identified sets. [Bar chart: for each parameter, the % of total draws belonging to sets responsible for rank failure, using M2 and using G2.] Notes: Identification results for 100 draws from the prior domain using analytical derivatives with robust tolerance level, T = 30 and N = 10 000. Sets found by brute-force method.

27

slide-70
SLIDE 70

Concluding Remarks

slide-71
SLIDE 71

Concluding Remarks

  • identifiability is a model property that depends on functional

specifications and on the choice of observables

  • how to "solve" lack of identification:
    • change your model (slightly), e.g. add shocks (preference,

investment-specific technology), capital utilization, the monetary policy rule, the utility function, habit formation, ...

    • a nonlinear or non-Gaussian approach might enrich identifiability

and model dynamics

  • my own toolbox works for the framework of Schmitt-Grohé and

Uribe (2004) and Andreasen, Fernández-Villaverde and Rubio-Ramírez (2018)

  • the extension of DYNARE's

identification(order=2|3,pruning) is still work in progress

28

slide-72
SLIDE 72

Appendix

slide-73
SLIDE 73

Global Identification

  • Global identification is, however, more difficult to verify than

local identification.

  • A recent approach proposed by Qu and Tkachenko (2017) is

to search for solutions of the Kullback-Leibler discrepancy

∆KL(θ|θ0) = −∫ log( p(Y|θ) / p(Y|θ0) ) p(Y|θ0) dY = 0

by using a frequency-domain transformation.

  • If θ0 is the unique solution, then the DSGE model is globally

identified.

  • However, finding the solutions of this objective function is

computationally challenging and has only been demonstrated for a stylized small-scale linear DSGE model.
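A toy illustration of the KL criterion (not Qu and Tkachenko's frequency-domain algorithm): if the data density depends on θ only through θ1·θ2, the discrepancy vanishes along an entire curve, so θ0 is not the unique solution and global identification fails.

```python
import numpy as np

# Hypothetical example: Y ~ N(mu(theta), 1) with mu(theta) = theta1 * theta2.
# For two unit-variance Gaussians, the KL discrepancy above reduces to
# 0.5 * (mu(theta) - mu(theta0))**2, so it is zero iff the means coincide.
def delta_kl(theta, theta0):
    mu = lambda th: th[0] * th[1]
    return 0.5 * (mu(theta) - mu(theta0)) ** 2

theta0 = np.array([2.0, 3.0])
other = np.array([1.0, 6.0])  # different parameter point, same product

print(delta_kl(theta0, theta0))  # 0.0
print(delta_kl(other, theta0))   # 0.0 -> theta0 not unique: no global identification
```

Here the non-identified set {θ : θ1·θ2 = 6} is obvious by inspection; the hard part in the DSGE case is that such solution sets must be found numerically over the whole admissible parameter space.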