Estimation of DSGE models

  1. Estimation of DSGE models
Stéphane Adjemian
Université du Maine, GAINS & CEPREMAP
stephane.adjemian@ens.fr
July 2, 2007

  2. DSGE models (I, structural form)
• Our model is given by:
$$E_t\left[F_\theta(y_{t+1}, y_t, y_{t-1}, \varepsilon_t)\right] = 0 \qquad (1)$$
where $\varepsilon_t \sim \mathrm{iid}(0, \Sigma)$ is an $(r \times 1)$ random vector of structural innovations, $y_t \in \Lambda \subseteq \mathbb{R}^n$ is a vector of endogenous variables, and $F_\theta : \Lambda^3 \times \mathbb{R}^r \rightarrow \Lambda$ is a real $C^2$ function parameterized by a real vector $\theta \in \Theta \subseteq \mathbb{R}^q$ gathering the deep parameters of the model.
• The model is stochastic, forward looking and nonlinear.
• We want to estimate (a subset of) $\theta$. Whatever the estimation approach (indirect inference, simulated moments, maximum likelihood, ...), we first need to solve the model.
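As a concrete illustration of this notation (a hypothetical example, not from the original slides), consider the scalar linear model $y_t = a\,E_t[y_{t+1}] + b\,y_{t-1} + \varepsilon_t$. It fits the form (1) with $n = r = 1$, $\theta = (a, b)$ and
$$F_\theta(y_{t+1}, y_t, y_{t-1}, \varepsilon_t) = y_t - a\,y_{t+1} - b\,y_{t-1} - \varepsilon_t.$$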

  3. DSGE models (II, reduced form)
• We assume that a unique, stable and invariant solution exists. This solution is a nonlinear stochastic difference equation:
$$y_t = H_\theta(y_{t-1}, \varepsilon_t) \qquad (2)$$
The endogenous variables are written as a function of their past levels and the contemporaneous structural shocks. $H_\theta$ collects the policy rules and transition functions.
• Generally, it is not possible to get a closed form solution, and we have to consider an approximation (local or global) of the true solution (2).
• Dynare uses a local approximation around the deterministic steady state. Global approximations are not yet implemented in Dynare.

  4. DSGE models (III, reduced form)
• Substituting (2) into (1) for $y_t$ and $y_{t+1}$, we obtain:
$$E_t\left[F_\theta\big(H_\theta(y_t, \varepsilon_{t+1}),\, H_\theta(y_{t-1}, \varepsilon_t),\, y_{t-1},\, \varepsilon_t\big)\right] = 0$$
• Substituting again (for $y_t$ in $y_{t+1}$) and dropping the time indices, we get:
$$E_t\left[F_\theta\big(H_\theta(H_\theta(y, \varepsilon), \varepsilon'),\, H_\theta(y, \varepsilon),\, y,\, \varepsilon\big)\right] = 0 \qquad (3)$$
where $y$ and $\varepsilon$ are in the time-$t$ information set, but not $\varepsilon'$, which is assumed to be $\mathrm{iid}(0, \Sigma)$. $F_\theta$ is known and $H_\theta$ is the unknown. We are looking for a function $H_\theta$ satisfying this equation for all possible states $(y, \varepsilon)$...
• This task is far easier if we "solve" this functional equation only locally (around the deterministic steady state).

  5. Local approximation of the reduced form (I)
• The deterministic steady state is defined by the following system of $n$ equations:
$$F_\theta\big(y^*(\theta), y^*(\theta), y^*(\theta), 0\big) = 0$$
• The steady state depends on the deep parameters $\theta$. Even for medium-scale models, as in Smets and Wouters, it is often possible to obtain a closed form solution for the steady state ⇒ it must be supplied to Dynare (a numerical fallback is sketched below).
• Obviously, the function $H_\theta$ must satisfy the following equality:
$$y^* = H_\theta(y^*, 0)$$
• Once the steady state is known, we can compute the Jacobian matrix associated with $F_\theta$...
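When no closed form is available, the steady state can be computed numerically before being supplied to Dynare. A minimal sketch with SciPy, assuming the user provides the residual function (the signature of `F_theta` and the initial guess are illustrative placeholders, not Dynare's API):

```python
import numpy as np
from scipy.optimize import fsolve

def steady_state(F_theta, theta, y_guess):
    """Solve F_theta(y*, y*, y*, 0; theta) = 0 for y*.

    F_theta(y_plus, y_curr, y_minus, eps, theta) is a user-supplied
    callable returning the n structural residuals (hypothetical API)."""
    residuals = lambda y: F_theta(y, y, y, np.zeros_like(y), theta)
    ystar, info, flag, msg = fsolve(residuals, y_guess, full_output=True)
    if flag != 1:
        raise RuntimeError("Steady state not found: " + msg)
    return ystar
```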

  6. Local approximation of the reduced form (II)
• Let $\hat{y} = y_{t-1} - y^*$, $F_{y^+} = \frac{\partial F_\theta}{\partial y_{t+1}}$, $F_y = \frac{\partial F_\theta}{\partial y_t}$, $F_{y^-} = \frac{\partial F_\theta}{\partial y_{t-1}}$, $F_\varepsilon = \frac{\partial F_\theta}{\partial \varepsilon_t}$, and $H_y = \frac{\partial H_\theta}{\partial y_{t-1}}$, $H_\varepsilon = \frac{\partial H_\theta}{\partial \varepsilon_t}$.
• $F_{y^+}$, $F_y$, $F_{y^-}$, $F_\varepsilon$ are known and $H_y$, $H_\varepsilon$ are the unknowns.
• With a first order Taylor expansion of the functional equation (3) around $y^*$:
$$0 \simeq F_\theta(y^*, y^*, y^*, 0) + F_{y^+}\big(H_y(H_y\hat{y} + H_\varepsilon\varepsilon) + H_\varepsilon\varepsilon'\big) + F_y(H_y\hat{y} + H_\varepsilon\varepsilon) + F_{y^-}\hat{y} + F_\varepsilon\varepsilon$$
where all the derivatives are evaluated at the deterministic steady state and $F_\theta(y^*, y^*, y^*, 0) = 0$.

  7. Local approximation of the reduced form (III)
• Applying the conditional expectation operator, we obtain:
$$0 \simeq F_{y^+}\big(H_y(H_y\hat{y} + H_\varepsilon\varepsilon)\big) + F_y(H_y\hat{y} + H_\varepsilon\varepsilon) + F_{y^-}\hat{y} + F_\varepsilon\varepsilon$$
or equivalently:
$$0 \simeq \big(F_{y^+}H_yH_y + F_yH_y + F_{y^-}\big)\hat{y} + \big(F_{y^+}H_yH_\varepsilon + F_yH_\varepsilon + F_\varepsilon\big)\varepsilon$$
• This equation must hold for any state $(\hat{y}, \varepsilon)$, so the unknowns $H_y$ and $H_\varepsilon$ must satisfy:
$$\begin{cases} 0 = F_{y^+}H_yH_y + F_yH_y + F_{y^-} \\ 0 = F_{y^+}H_yH_\varepsilon + F_yH_\varepsilon + F_\varepsilon \end{cases}$$
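Continuing the hypothetical scalar example introduced after slide 2: there $F_{y^+} = -a$, $F_y = 1$, $F_{y^-} = -b$ and $F_\varepsilon = -1$, so the first equation reads $-a h_y^2 + h_y - b = 0$. When $0 < 4ab < 1$, the stable (backward) root is $h_y = \frac{1 - \sqrt{1 - 4ab}}{2a}$, and the second equation then gives $h_\varepsilon = \frac{1}{1 - a h_y}$.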

  8. Local approximation of the reduced form (IV)
• This system is triangular ($H_\varepsilon$ does not appear in the first equation) ⇒ "easy" to solve.
• The first equation is a quadratic equation... but the unknown is a square matrix ($H_y$). This equation may be solved with any spectral method; Dynare uses a generalized Schur decomposition. A unique stable solution exists iff the Blanchard-Kahn conditions are satisfied (see the sketch below).
• The second equation is linear in the unknown $H_\varepsilon$; a unique solution exists iff $F_{y^+}H_y + F_y$ is an invertible matrix (e.g., if $F_y$ and $F_{y^+}$ are diagonal matrices, each endogenous variable has to appear at time $t$ or with a lead).
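A minimal sketch of this step with SciPy's QZ routine (an illustration of the generalized Schur approach under the slides' notation, not Dynare's actual implementation):

```python
import numpy as np
from scipy.linalg import ordqz, solve

def solve_first_order(F_yp, F_y0, F_ym, F_eps):
    """Solve F_yp Hy^2 + F_y0 Hy + F_ym = 0 and
    (F_yp Hy + F_y0) Heps + F_eps = 0 via a QZ decomposition."""
    n = F_y0.shape[0]
    # Companion linearization of the quadratic matrix equation:
    # A x_t = B x_{t+1} with x_t = (yhat_{t-1}, yhat_t)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-F_ym,            -F_y0   ]])
    B = np.block([[np.eye(n),        np.zeros((n, n))],
                  [np.zeros((n, n)), F_yp            ]])
    # QZ with the stable eigenvalues (inside the unit circle) ordered first
    AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc')
    # Blanchard-Kahn: exactly n stable generalized eigenvalues
    if np.sum(np.abs(alpha) < np.abs(beta)) != n:
        raise ValueError("Blanchard-Kahn conditions not satisfied")
    # The stable deflating subspace is spanned by the first n columns of Z;
    # yhat_t = Z21 Z11^{-1} yhat_{t-1} delivers Hy
    Z11, Z21 = Z[:n, :n], Z[n:, :n]
    Hy = Z21 @ np.linalg.inv(Z11)
    # The second equation is linear given Hy
    Heps = -solve(F_yp @ Hy + F_y0, F_eps)
    return Hy, Heps
```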

  9. Local approximation of the reduced form (V)
• Finally, the local dynamics are given by:
$$y_t = y^* + H_y(\theta)(y_{t-1} - y^*) + H_\varepsilon(\theta)\varepsilon_t$$
where $y^*$, $H_y(\theta)$ and $H_\varepsilon(\theta)$ are nonlinear functions of the deep parameters.
• This result can be used to approximate the theoretical moments:
$$\mathbb{E}_\infty[y_t] = y^*(\theta)$$
$$\mathbb{V}_\infty[y_t] = H_y(\theta)\,\mathbb{V}_\infty[y_t]\,H_y(\theta)' + H_\varepsilon(\theta)\,\Sigma\,H_\varepsilon(\theta)'$$
The second equation is a kind of Sylvester equation and may be solved using the vec operator and the Kronecker product (see the sketch below).
• This result can also be used to approximate the likelihood.
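A sketch of the moment computation under the same illustrative naming conventions; applying $\mathrm{vec}(AXB') = (B \otimes A)\,\mathrm{vec}(X)$ turns the variance equation into a linear system (for large $n$, `scipy.linalg.solve_discrete_lyapunov` would do the same job without forming the $n^2 \times n^2$ matrix):

```python
import numpy as np

def theoretical_moments(ystar, Hy, Heps, Sigma):
    """Unconditional mean and variance implied by the linearized solution.

    Solves V = Hy V Hy' + Heps Sigma Heps' with the vec/Kronecker trick;
    requires the spectral radius of Hy to be below one."""
    n = Hy.shape[0]
    Q = Heps @ Sigma @ Heps.T
    # vec(V) = (I - Hy kron Hy)^{-1} vec(Q), with vec stacking columns
    vecV = np.linalg.solve(np.eye(n * n) - np.kron(Hy, Hy),
                           Q.flatten(order='F'))
    V = vecV.reshape((n, n), order='F')
    return ystar, V
```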

  10. Estimation (I, Likelihood)
• A direct estimation approach is to maximize the likelihood with respect to $\theta$ and $\mathrm{vech}(\Sigma)$.
• Not all the endogenous variables are observed! Let $y^\star_t$ be a subset of $y_t$ gathering all the observed variables.
• To bring the model to the data, we use a state-space representation:
$$y^\star_t = Zy_t + \eta_t \qquad (4a)$$
$$y_t = H_\theta(y_{t-1}, \varepsilon_t) \qquad (4b)$$
Equation (4b) is the reduced form of the DSGE model ⇒ the state equation. Equation (4a) selects a subset of the endogenous variables ($Z$ is an $m \times n$ matrix), and a non-structural error may be added ⇒ the measurement equation.
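For instance (a hypothetical ordering, not from the slides), with $n = 3$ endogenous variables stacked as $y_t = (\text{output}, \text{consumption}, \text{capital})'$ and only output and consumption observed ($m = 2$), the selection matrix is
$$Z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$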

  11. Estimation (II, Likelihood)
• Let $\mathcal{Y}^\star_T = \{y^\star_1, y^\star_2, \ldots, y^\star_T\}$ be the sample.
• Let $\psi$ be the vector of parameters to be estimated ($\theta$, $\mathrm{vech}(\Sigma)$ and the covariance matrix of $\eta$).
• The likelihood, that is, the density of $\mathcal{Y}^\star_T$ conditional on the parameters, is given by:
$$\mathcal{L}(\psi; \mathcal{Y}^\star_T) = p(\mathcal{Y}^\star_T | \psi) = p(y^\star_0 | \psi) \prod_{t=1}^{T} p(y^\star_t | \mathcal{Y}^\star_{t-1}, \psi) \qquad (5)$$
• To evaluate the likelihood we need to specify the marginal density $p(y^\star_0 | \psi)$ (or $p(y_0 | \psi)$) and the conditional density $p(y^\star_t | \mathcal{Y}^\star_{t-1}, \psi)$.

  12. Estimation (III, Likelihood)
• The state-space model (4), or the reduced form (2), describes the evolution of the distribution of the endogenous variables.
• The distribution of the initial condition ($y_0$) is set equal to the ergodic distribution of the stochastic difference equation (so that the distribution of $y_t$ is time invariant ⇒ example with an AR(1) below).
• If the reduced form is linear (or linearized) and the disturbances are Gaussian (say $\varepsilon \sim \mathcal{N}(0, \Sigma)$), then the initial (ergodic) distribution is Gaussian:
$$y_0 \sim \mathcal{N}\big(\mathbb{E}_\infty[y_t], \mathbb{V}_\infty[y_t]\big)$$
• Unit roots ⇒ diffuse Kalman filter.
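The AR(1) example: for $y_t = \rho y_{t-1} + \varepsilon_t$ with $|\rho| < 1$ and $\varepsilon_t \sim \mathcal{N}(0, \sigma^2)$, time invariance of the first two moments requires $\mathbb{E}[y_t] = \rho\,\mathbb{E}[y_t]$ and $\mathbb{V}[y_t] = \rho^2\,\mathbb{V}[y_t] + \sigma^2$, so the ergodic (initial) distribution is $y_0 \sim \mathcal{N}\big(0, \frac{\sigma^2}{1 - \rho^2}\big)$.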

  13. Estimation (IV, Likelihood)
• The density of $y^\star_t | \mathcal{Y}^\star_{t-1}$ is not directly available, because $y^\star_t$ depends on unobserved endogenous variables.
• The following identity can be used:
$$p(y^\star_t | \mathcal{Y}^\star_{t-1}, \psi) = \int_\Lambda p(y^\star_t | y_t, \psi)\, p(y_t | \mathcal{Y}^\star_{t-1}, \psi)\, \mathrm{d}y_t \qquad (6)$$
The density of $y^\star_t | \mathcal{Y}^\star_{t-1}$ is the mean of the density of $y^\star_t | y_t$ weighted by the density of $y_t | \mathcal{Y}^\star_{t-1}$.
• The first conditional density is given by the measurement equation (4a).
• A Kalman filter is used to evaluate the density of the latent variables ($y_t$) conditional on the sample up to time $t-1$ ($\mathcal{Y}^\star_{t-1}$) [⇒ predictive density].

  14. Estimation (V, Likelihood & Kalman Filter)
• The Kalman filter can be seen as a Bayesian recursive estimation device:
$$p(y_t | \mathcal{Y}^\star_{t-1}, \psi) = \int_\Lambda p(y_t | y_{t-1}, \psi)\, p(y_{t-1} | \mathcal{Y}^\star_{t-1}, \psi)\, \mathrm{d}y_{t-1} \qquad (7a)$$
$$p(y_t | \mathcal{Y}^\star_t, \psi) = \frac{p(y^\star_t | y_t, \psi)\, p(y_t | \mathcal{Y}^\star_{t-1}, \psi)}{\int_\Lambda p(y^\star_t | y_t, \psi)\, p(y_t | \mathcal{Y}^\star_{t-1}, \psi)\, \mathrm{d}y_t} \qquad (7b)$$
• Equation (7a) says that the predictive density of the latent variables is the mean of the density of $y_t | y_{t-1}$, given by the state equation (4b), weighted by the density of $y_{t-1}$ conditional on $\mathcal{Y}^\star_{t-1}$ (given by (7b)).
• The update equation (7b) is a direct application of Bayes' theorem and tells us how to update our knowledge about the latent variables when new information (data) becomes available.
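In the linear-Gaussian case the recursions (7a)-(7b) reduce to the standard Kalman filter, and the likelihood (5) is obtained as a by-product through the prediction-error decomposition. A minimal sketch, assuming the linearized solution of slide 9 and the ergodic initialization of slide 12 (argument names are illustrative, not Dynare's):

```python
import numpy as np

def kalman_loglik(data, Z, Hy, Heps, Sigma, V_eta, ystar, V_inf):
    """Gaussian log-likelihood of the state-space model (4) via the
    Kalman filter: (7a) is the predict step, (7b) the update step.

    data  : T x m array, the sample {y*_1, ..., y*_T}
    V_eta : m x m covariance of the measurement error eta
    V_inf : ergodic variance of y_t (initialization, cf. slide 12)"""
    T, m = data.shape
    Q = Heps @ Sigma @ Heps.T            # variance of the state innovation
    a, P = ystar.copy(), V_inf.copy()    # ergodic moments of y_0
    loglik = 0.0
    for t in range(T):
        # Predict (7a): moments of y_t | Y*_{t-1}
        a = ystar + Hy @ (a - ystar)
        P = Hy @ P @ Hy.T + Q
        # Prediction error of the observables and its variance
        v = data[t] - Z @ a
        F = Z @ P @ Z.T + V_eta
        Finv = np.linalg.inv(F)
        sign, logdet = np.linalg.slogdet(F)
        loglik -= 0.5 * (m * np.log(2.0 * np.pi) + logdet + v @ Finv @ v)
        # Update (7b): moments of y_t | Y*_t (Bayes theorem)
        K = P @ Z.T @ Finv               # Kalman gain
        a = a + K @ v
        P = P - K @ Z @ P
    return loglik
```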
