
Priors from General Equilibrium Models for VARs, by Marco Del Negro and Frank Schorfheide (presentation slides)



  1. Priors from General Equilibrium Models for VARs, by Marco Del Negro and Frank Schorfheide. Presenter: Keith O'Hara. March 10, 2014.

  2. The Main Idea of the Paper
  Use the implied moments of a DSGE model as the prior for a Bayesian VAR (a 'DSGE-VAR($\lambda$)').
  ◮ This is similar to a 'dummy observation' approach.
  We can view the policy functions of a DSGE model, $S_t = F S_{t-1} + G \varepsilon_t$, as a VAR(1) with tight cross-equation restrictions. The parameter $\lambda \in (0, \infty)$ controls how 'close' the DSGE-VAR matches the DSGE model dynamics; it corresponds to the ratio of 'dummy observations' to actual data. As $\lambda \to \infty$, we approach the DSGE model.

  3. Solving and Estimating a DSGE Model

  4. Solving a DSGE Model
  We can express a log-linearized DSGE model as a system of linear expectational difference equations:
  $$A_\theta S_t = B_\theta E_t[S_{t+1}] + C_\theta S_{t-1} + D_\theta \varepsilon_t$$
  which we 'solve' to get a first-order VAR for the state's transition: $S_t = F S_{t-1} + G \varepsilon_t$. Here $F$ solves the following matrix quadratic, which then implies a solution for $G$:
  $$0 = B_\theta F^2 - A_\theta F + C_\theta, \qquad G = (A_\theta - B_\theta F)^{-1} D_\theta$$
  We solve the matrix polynomial as a generalized eigenvalue problem, as sketched below.
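  A minimal sketch of this step in Python, using an Uhlig-style companion linearization of the matrix quadratic. The function name, the use of scipy.linalg.eig, and the simple count-based uniqueness check are my own choices, not the authors' routine:

```python
import numpy as np
from scipy.linalg import eig

def solve_lre(A, B, C, D):
    """Solve 0 = B F^2 - A F + C for the stable F, then back out G."""
    m = A.shape[0]
    Z, I = np.zeros((m, m)), np.eye(m)
    # Companion pencil: s = [lam*x; x] satisfies Xi s = lam * Delta s
    # exactly when (B lam^2 - A lam + C) x = 0.
    Xi = np.block([[A, -C], [I, Z]])
    Delta = np.block([[B, Z], [Z, I]])
    lam, V = eig(Xi, Delta)          # infinite eigenvalues count as explosive
    stable = np.abs(lam) < 1.0       # NaN/inf compare False, so they drop out
    if stable.sum() != m:
        raise ValueError("no unique stable solution (see slide 6)")
    X = V[m:, stable]                # lower blocks of the stable eigenvectors
    F = np.real(X @ np.diag(lam[stable]) @ np.linalg.inv(X))
    G = np.linalg.solve(A - B @ F, D)
    return F, G
```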

  5. The Log-Linearized Model
  For the log-linearized model in this paper, this corresponds to
  $$x_t = E_t[x_{t+1}] - \tau^{-1}\left(R_t - E_t[\pi_{t+1}]\right) + (1 - \rho_g) g_t + \frac{\rho_z}{\tau} z_t$$
  $$\pi_t = \frac{\gamma}{r^*} E_t[\pi_{t+1}] + \kappa (x_t - g_t)$$
  $$R_t = \rho_R R_{t-1} + (1 - \rho_R)(\psi_1 \pi_t + \psi_2 x_t) + \nu_t$$
  with shock processes
  $$z_t = \rho_z z_{t-1} + \sigma_z \varepsilon_{z,t}, \qquad g_t = \rho_g g_{t-1} + \sigma_g \varepsilon_{g,t}, \qquad \nu_t = \sigma_R \varepsilon_{R,t}$$
  So $S_t = [x_t, \pi_t, R_t, z_t, g_t, \nu_t]^\top$.

  6. Solving a DSGE Model
  Three cases to consider:
  (1) If there are as many explosive eigenvalues ($\lambda_e$) as forward-looking equations, we have a unique solution to the problem.
  (2) If there are more stable eigenvalues ($\lambda_s$) than forward-looking equations, there are many stable solutions for $F$, one for each block-partition of $\lambda_s$. Here, equilibrium is indeterminate, and we face the issue of equilibrium selection.
  (3) If there are fewer stable eigenvalues than forward-looking equations, then there are no non-explosive solutions, as there is no block-partition of $\lambda_s$ such that all $\lambda$ are stable.

  7. Estimating a DSGE Model
  With our solution $S_t = F S_{t-1} + G \varepsilon_t$, we have the state equation of a filtering problem. Assuming Gaussian disturbances, the linearity of the problem implies a Kalman filter approach, with measurement equation
  $$Y_t = H^\top S_t + \varepsilon_{Y,t}$$
  We proceed in three short steps, repeated for all $t \in \{1, \ldots, T\}$:
  ◮ predict the state at time $t+1$ given information available at time $t$;
  ◮ update the state with new $Y_{t+1}$ information; and
  ◮ calculate the likelihood at $t+1$ based on the forecast errors of $Y_{t+1}$ and the covariance matrix of those forecasts.
  Classical ML and Bayesian estimation procedures are standard, with the latter being particularly popular, likely due to identification issues. A sketch of the filter follows below.
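  As referenced above, a minimal Kalman-filter log-likelihood for the state-space form on this slide. It assumes time-invariant F, G, H with shock covariance Q, drops the measurement error for brevity, and initializes the state covariance at the identity; all of these are my simplifications.

```python
import numpy as np

def kalman_loglik(Y, F, G, H, Q):
    """Y is (T, n); S_t = F S_{t-1} + G eps_t, Y_t = H' S_t."""
    T, n = Y.shape
    s = np.zeros(F.shape[0])                 # state mean
    P = np.eye(F.shape[0])                   # state covariance (crude init)
    GQG = G @ Q @ G.T
    ll = 0.0
    for t in range(T):
        s = F @ s                            # predict
        P = F @ P @ F.T + GQG
        v = Y[t] - H.T @ s                   # forecast error
        Sig = H.T @ P @ H                    # forecast error covariance
        Sig_inv = np.linalg.inv(Sig)
        _, ld = np.linalg.slogdet(Sig)
        ll += -0.5 * (n * np.log(2 * np.pi) + ld + v @ Sig_inv @ v)
        K = P @ H @ Sig_inv                  # Kalman gain; update step
        s = s + K @ v
        P = P - K @ H.T @ P
    return ll
```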

  8. DSGE-VAR Details: Setup and Prior

  9. VAR Model
  The standard VAR($p$) model is denoted by
  $$y_t = \Phi_0 + \Phi_1 y_{t-1} + \cdots + \Phi_p y_{t-p} + u_t, \qquad u_t \mid y^{t-1} \sim N(0, \Sigma_u)$$
  Let $k = 1 + np$. In stacked form, we have $Y = X \Phi + U$, with likelihood function
  $$p(Y \mid \Phi, \Sigma_u) \propto |\Sigma_u|^{-T/2} \exp\left\{ -\tfrac{1}{2} \mathrm{tr}\left[ \Sigma_u^{-1} \left( Y^\top Y - \Phi^\top X^\top Y - Y^\top X \Phi + \Phi^\top X^\top X \Phi \right) \right] \right\}$$
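  A small helper for constructing the stacked form $Y = X\Phi + U$; the convention that the first $p$ rows of the data serve as initial lags is mine.

```python
import numpy as np

def stack_var(data, p):
    """data: (T + p, n) array of observations; returns Y (T, n), X (T, k)."""
    T = data.shape[0] - p
    Y = data[p:]
    cols = [np.ones((T, 1))]                     # intercept (the Phi_0 column)
    for j in range(1, p + 1):
        cols.append(data[p - j : p - j + T])     # the y_{t-j} block
    X = np.hstack(cols)                          # k = 1 + n*p columns
    return Y, X
```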

  10. The Prior
  We look at the prior in terms of a 'dummy observation' approach. The prior is of the form
  $$p(\Phi, \Sigma_u \mid \theta) = c^{-1}(\theta)\, |\Sigma_u|^{-\frac{\lambda T + n + 1}{2}} \exp\left\{ -\tfrac{1}{2} \mathrm{tr}\left[ \lambda T \Sigma_u^{-1} \left( \Gamma^*_{yy}(\theta) - \Phi^\top \Gamma^*_{xy}(\theta) - \Gamma^*_{yx}(\theta) \Phi + \Phi^\top \Gamma^*_{xx}(\theta) \Phi \right) \right] \right\}$$
  where $\Gamma^*_{yy}(\theta) := E_\theta[y_t y_t^\top]$, etc., are the population moments implied by the DSGE model. The form of the normalizing term $c^{-1}(\theta)$ is a little complicated.

  11. The Prior
  For a model solution of the form
  $$S_t = F S_{t-1} + G \varepsilon_t, \qquad Y_t = H^\top S_t$$
  we first compute the steady-state covariance matrix of the state, $\Omega_{ss} = E_\theta[S_t S_t^\top]$, by solving the discrete Lyapunov equation
  $$\Omega_{ss} = F \Omega_{ss} F^\top + G Q G^\top$$
  using a doubling algorithm. Then we compute the $\Gamma^*$ matrices with
  $$\Gamma^*_{yy}(\theta) = H^\top \Omega_{ss} H, \qquad \Gamma^*_{yx_h}(\theta) = H^\top F^h \Omega_{ss} H$$

  12. Computational Aside: Solving for $\Omega_{ss}$
  For completeness... we solve $\Omega_{ss} = F \Omega_{ss} F^\top + G Q G^\top$ by iteration. Let $\bar{Q} := G Q G^\top$. Set $\Omega_{ss}(0) = \bar{Q}$ and $B(0) = F$. Then, for $i = 1, 2, \ldots$,
  $$\Omega_{ss}(i+1) = \Omega_{ss}(i) + B(i)\, \Omega_{ss}(i)\, B(i)^\top, \qquad B(i+1) = B(i) B(i)$$
  Continue until the difference between $\Omega_{ss}(i+1)$ and $\Omega_{ss}(i)$ is 'small'. Note: $\Omega_{ss}$ is a symmetric positive-definite matrix, so the relevant matrix norm here is the largest singular value (from an SVD). One could also use the vec-Kronecker trick: $\mathrm{vec}(ABC) = (C^\top \otimes A)\, \mathrm{vec}(B)$.
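  The iteration transcribes directly into code; the tolerance, the iteration cap, and the function name are my choices.

```python
import numpy as np

def lyapunov_doubling(F, GQG, tol=1e-12, max_iter=100):
    """Solve Omega = F Omega F' + GQG by the doubling iteration above."""
    Omega, B = GQG.copy(), F.copy()
    for _ in range(max_iter):
        Omega_next = Omega + B @ Omega @ B.T
        # ord=2 gives the largest singular value, the norm suggested above
        if np.linalg.norm(Omega_next - Omega, 2) < tol:
            return Omega_next
        Omega = Omega_next
        B = B @ B                                # squaring step
    return Omega
```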

  13. The Prior
  Let
  $$\Phi^*(\theta) = [\Gamma^*_{xx}(\theta)]^{-1} \Gamma^*_{xy}(\theta)$$
  $$\Sigma^*_u(\theta) = \Gamma^*_{yy}(\theta) - \Gamma^*_{yx}(\theta) [\Gamma^*_{xx}(\theta)]^{-1} \Gamma^*_{xy}(\theta)$$
  ◮ Interpretation: if the data were generated by the DSGE model at hand, $\Phi^*(\theta)$ is the coefficient matrix of the VAR($p$) that minimizes the one-step-ahead quadratic forecast-error (QFE) loss.
  Given a $\theta$, the prior distribution is of the usual IW-N (inverse-Wishart, Normal) form:
  $$\Sigma_u \mid \theta \sim IW\!\left( \lambda T \Sigma^*_u(\theta),\; \lambda T - k,\; n \right)$$
  $$\Phi \mid \Sigma_u, \theta \sim N\!\left( \Phi^*(\theta),\; \Sigma_u \otimes (\lambda T \Gamma^*_{xx}(\theta))^{-1} \right)$$
  The joint prior is then given by $p(\Phi, \Sigma_u, \theta) = p(\Phi, \Sigma_u \mid \theta)\, p(\theta)$.
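  A hedged sketch of the mapping from the DSGE solution to $\Phi^*(\theta)$ and $\Sigma^*_u(\theta)$. For brevity it assumes mean-zero observables so the intercept can be dropped (the paper's version carries steady-state means); the autocovariances follow the $\Gamma^*$ formulas two slides back.

```python
import numpy as np

def dsge_prior_moments(F, H, Omega_ss, p):
    """Population VAR(p) coefficients implied by S_t = F S_{t-1} + G eps_t."""
    # Gamma_y(h) = E[y_t y_{t-h}'] = H' F^h Omega_ss H, for h = 0, ..., p
    Fh, Gam = np.eye(F.shape[0]), []
    for h in range(p + 1):
        Gam.append(H.T @ Fh @ Omega_ss @ H)
        Fh = Fh @ F
    # Gamma*_xx is block-Toeplitz in the lags; Gamma*_xy stacks E[y_{t-j} y_t']
    Gxx = np.block([[Gam[j - i] if i <= j else Gam[i - j].T
                     for j in range(p)] for i in range(p)])
    Gxy = np.vstack([Gam[j + 1].T for j in range(p)])
    Phi_star = np.linalg.solve(Gxx, Gxy)
    Sig_star = Gam[0] - Gxy.T @ Phi_star
    return Gxx, Gxy, Gam[0], Phi_star, Sig_star
```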

  14. DSGE-VAR Posterior

  15. The Posterior: Block 1
  The joint posterior distribution is factorized similarly:
  $$p(\Phi, \Sigma_u, \theta \mid Y) = p(\Phi, \Sigma_u \mid Y, \theta)\, p(\theta \mid Y)$$
  The ML estimates are
  $$\tilde{\Phi}(\theta) = \left[ \lambda T \Gamma^*_{xx}(\theta) + X^\top X \right]^{-1} \left[ \lambda T \Gamma^*_{xy}(\theta) + X^\top Y \right]$$
  $$\tilde{\Sigma}_u(\theta) = \frac{1}{(\lambda+1)T} \left[ \left( \lambda T \Gamma^*_{yy}(\theta) + Y^\top Y \right) - \left( \lambda T \Gamma^*_{yx}(\theta) + Y^\top X \right) \left( \lambda T \Gamma^*_{xx}(\theta) + X^\top X \right)^{-1} \left( \lambda T \Gamma^*_{xy}(\theta) + X^\top Y \right) \right]$$
  The prior and likelihood are conjugate, so
  $$\Sigma_u \mid Y, \theta \sim IW\!\left( (\lambda+1) T \tilde{\Sigma}_u(\theta),\; (1+\lambda)T - k,\; n \right)$$
  $$\Phi \mid Y, \Sigma_u, \theta \sim N\!\left( \tilde{\Phi}(\theta),\; \Sigma_u \otimes \left( \lambda T \Gamma^*_{xx}(\theta) + X^\top X \right)^{-1} \right)$$
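  Given these moments, one joint draw of $(\Sigma_u, \Phi)$ has a simple closed form. A sketch using SciPy's inverse-Wishart and a matric-variate normal draw via two Cholesky factors; the function name and the integer cast on the degrees of freedom are mine.

```python
import numpy as np
from scipy.stats import invwishart

def draw_phi_sigma(Y, X, Gxx, Gxy, Gyy, lam, rng):
    T, n = Y.shape
    k = X.shape[1]
    lT = lam * T
    Vxx = lT * Gxx + X.T @ X
    Vxy = lT * Gxy + X.T @ Y
    Phi_tilde = np.linalg.solve(Vxx, Vxy)
    S = lT * Gyy + Y.T @ Y - Vxy.T @ Phi_tilde   # = (lam+1) T Sigma_tilde
    Sigma_u = invwishart.rvs(df=int((1 + lam) * T) - k, scale=S,
                             random_state=rng)
    # vec(Phi) ~ N(vec(Phi_tilde), Sigma_u kron inv(Vxx))
    L_row = np.linalg.cholesky(np.linalg.inv(Vxx))
    L_col = np.linalg.cholesky(Sigma_u)
    Phi = Phi_tilde + L_row @ rng.standard_normal((k, n)) @ L_col.T
    return Phi, Sigma_u
```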

  16. The Posterior: Block 2
  The posterior distribution of the DSGE parameters is
  $$p(\theta \mid Y) \propto p(Y \mid \theta)\, p(\theta)$$
  where the marginal likelihood is
  $$p(Y \mid \theta) = \int p(Y \mid \Phi, \Sigma_u)\, p(\Phi, \Sigma_u \mid \theta)\, d(\Phi, \Sigma_u) \qquad (1)$$
  The authors show (in the appendix) that the closed form for (1) is
  $$p(Y \mid \theta) = \frac{p(Y \mid \Phi, \Sigma_u)\, p(\Phi, \Sigma_u \mid \theta)}{p(\Phi, \Sigma_u \mid Y)} = \frac{\left| \lambda T \Gamma^*_{xx}(\theta) + X^\top X \right|^{-n/2} \left| (\lambda+1) T \tilde{\Sigma}_u(\theta) \right|^{-\frac{(\lambda+1)T - k}{2}}}{\left| \lambda T \Gamma^*_{xx}(\theta) \right|^{-n/2} \left| \lambda T \Sigma^*_u(\theta) \right|^{-\frac{\lambda T - k}{2}}} \times (2\pi)^{-nT/2}\, \frac{2^{\frac{n((\lambda+1)T - k)}{2}} \prod_{i=1}^n \Gamma\!\left[ ((\lambda+1)T - k + 1 - i)/2 \right]}{2^{\frac{n(\lambda T - k)}{2}} \prod_{i=1}^n \Gamma\!\left[ (\lambda T - k + 1 - i)/2 \right]}$$
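  Numerically, the log of this expression is best evaluated with log-determinants and gammaln. A sketch under the same notation; Sig_star and Sig_tilde are the $\Sigma^*_u(\theta)$ and $\tilde{\Sigma}_u(\theta)$ computed earlier.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_lik(Y, X, Gxx, Sig_star, Sig_tilde, lam):
    T, n = Y.shape
    k = X.shape[1]
    d1 = (1 + lam) * T - k                     # posterior degrees of freedom
    d0 = lam * T - k                           # prior degrees of freedom
    _, ld_Vxx  = np.linalg.slogdet(lam * T * Gxx + X.T @ X)
    _, ld_Gxx  = np.linalg.slogdet(lam * T * Gxx)
    _, ld_post = np.linalg.slogdet((1 + lam) * T * Sig_tilde)
    _, ld_pri  = np.linalg.slogdet(lam * T * Sig_star)
    return (-0.5 * n * (ld_Vxx - ld_Gxx)
            - 0.5 * d1 * ld_post + 0.5 * d0 * ld_pri
            - 0.5 * n * T * np.log(2 * np.pi)
            + 0.5 * n * (d1 - d0) * np.log(2.0)
            + sum(gammaln(0.5 * (d1 + 1 - i)) - gammaln(0.5 * (d0 + 1 - i))
                  for i in range(1, n + 1)))
```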

  17. Sampling Algorithm
  Our previous discussion implies that a Metropolis-within-Gibbs MCMC algorithm is appropriate. Given some value for $\theta$, we sample $\Sigma_u$ from
  $$\Sigma_u \mid Y, \theta \sim IW\!\left( (\lambda+1) T \tilde{\Sigma}_u(\theta),\; (1+\lambda)T - k,\; n \right)$$
  Then, given $\theta$ and $\Sigma_u$, we sample
  $$\Phi \mid Y, \Sigma_u, \theta \sim N\!\left( \tilde{\Phi}(\theta),\; \Sigma_u \otimes \left( \lambda T \Gamma^*_{xx}(\theta) + X^\top X \right)^{-1} \right)$$
  Given $\Phi$ and $\Sigma_u$, we evaluate a new $\theta$ draw using a Random Walk Metropolis MCMC algorithm.
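  Putting the blocks together, the outer loop might look like the skeleton below; draw_phi_sigma is the earlier sketch, rwm_step is sketched after the next slide, and moments_from_theta stands in for the solve-and-Lyapunov pipeline of slides 4 to 12. All names are mine.

```python
import numpy as np

def dsge_var_sampler(Y, X, theta0, lam, n_draws, log_post,
                     moments_from_theta, chol_cov, c, rng=None):
    rng = rng or np.random.default_rng()
    theta, lp = theta0, log_post(theta0)     # log_post = ln p(Y|theta)p(theta)
    draws = []
    for _ in range(n_draws):
        Gxx, Gxy, Gyy = moments_from_theta(theta)     # Gamma*(theta) moments
        Phi, Sigma_u = draw_phi_sigma(Y, X, Gxx, Gxy, Gyy, lam, rng)
        theta, lp = rwm_step(theta, lp, log_post, chol_cov, c, rng)
        draws.append((Phi, Sigma_u, theta.copy()))
    return draws
```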

  18. Random Walk Metropolis Sampling Algorithm
  Given some initial $\theta$ (perhaps the posterior mode), draw a proposal $\theta^{(*)}$ from a jumping distribution
  $$\theta^{(*)} \sim N\!\left( \theta^{(h-1)},\; c \cdot \Sigma_m \right)$$
  where $\Sigma_m$ is the inverse of the Hessian computed at the posterior mode and $c$ is a scaling factor. Compute the acceptance ratio
  $$\nu = \frac{p(Y \mid \theta^{(*)})\, p(\theta^{(*)})}{p(Y \mid \theta^{(h-1)})\, p(\theta^{(h-1)})}$$
  Finally, we accept or reject the proposal according to
  $$\theta^{(h)} = \begin{cases} \theta^{(*)} & \text{with probability } \min\{\nu, 1\} \\ \theta^{(h-1)} & \text{otherwise} \end{cases}$$
  Given this $\theta^{(h)}$, draw a new $\Sigma_u$, and so on.
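  A single RWM step in the same hedged spirit; working in logs avoids overflow in $\nu$.

```python
import numpy as np

def rwm_step(theta, lp_theta, log_post, chol_cov, c, rng):
    """One Metropolis step with jumping distribution N(theta, c * Sigma_m)."""
    prop = theta + np.sqrt(c) * (chol_cov @ rng.standard_normal(theta.size))
    lp_prop = log_post(prop)
    # accept with probability min{nu, 1}, evaluated on the log scale
    if np.log(rng.uniform()) < lp_prop - lp_theta:
        return prop, lp_prop
    return theta, lp_theta
```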

