

  1. Discussion of “The Time-Varying Volatility of Macroeconomic Fluctuations” by Justiniano and Primiceri
  Marco Del Negro, Federal Reserve Bank of New York
  NYU Macroeconometrics Reading Group, March 31, 2014
  Disclaimer: The views expressed are mine and do not necessarily reflect those of the Federal Reserve Bank of New York or the Federal Reserve System.

  2. Motivation: Standardized policy shocks in a Gaussian DSGE
  [Figure: standardized policy shock innovations, measured in standard deviations, 1965-2009; excess kurtosis: 4.3.]

  3. The Smets and Wouters DSGE Model - DSSW variant
  • Christiano, Eichenbaum, and Evans (2005) + several shocks.
  • Stochastic growth model + real rigidities (investment adjustment costs, variable capital utilization, habit persistence) + nominal rigidities (price stickiness, wage stickiness, partial indexation to lagged inflation).
  • 7 shocks: neutral technology, investment-specific technology, labor supply, price mark-up, government spending, “discount rate”, policy.

  4. Estimating a DSGE model
  • Linearized DSGE = state-space model.
  • Transition equation: $s_t = T(\theta) s_{t-1} + R(\theta) \epsilon_t$
  • Measurement equation: $y_t = D(\theta) + Z(\theta) s_t$, where $y_t$ and $s_t$ are the vectors of observables and states, respectively, and $\theta$ is the vector of DSGE model parameters (so-called “deep” parameters).
  • Likelihood $p(Y_{1:T} \mid \theta)$ computed using the Kalman filter.
  • Random-walk Metropolis algorithm to obtain draws from the posterior $p(\theta \mid Y_{1:T})$; see Del Negro and Schorfheide, “Bayesian Macroeconometrics” (in Handbook of Bayesian Econometrics, Koop, Geweke, and van Dijk, eds.).
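
To make the mechanics concrete, here is a minimal Python sketch of the two ingredients on this slide: a Kalman-filter log-likelihood for a state space of the form above, and a random-walk Metropolis step. This is my own illustration under simplifying assumptions (no measurement error, known initial state moments); the function names and the proposal covariance `Sigma` are placeholders, not the authors' code.

```python
import numpy as np

def kalman_loglik(y, D, Z, Tmat, R, Q, s0, P0):
    """Log-likelihood of y_{1:T} for the state space
    s_t = Tmat s_{t-1} + R eps_t, eps_t ~ N(0, Q), and y_t = D + Z s_t."""
    s, P, ll = s0, P0, 0.0
    for yt in y:                                   # y has shape (T, n_obs)
        s = Tmat @ s                               # prediction step
        P = Tmat @ P @ Tmat.T + R @ Q @ R.T
        v = yt - (D + Z @ s)                       # one-step-ahead forecast error
        F = Z @ P @ Z.T                            # forecast error covariance
        ll += -0.5 * (len(yt) * np.log(2 * np.pi)
                      + np.linalg.slogdet(F)[1]
                      + v @ np.linalg.solve(F, v))
        K = P @ Z.T @ np.linalg.inv(F)             # Kalman gain
        s, P = s + K @ v, P - K @ Z @ P            # update step
    return ll

def rw_metropolis(log_posterior, theta0, Sigma, n_draws, seed=0):
    """Random-walk Metropolis: propose theta' = theta + N(0, Sigma),
    accept with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta, lp, draws = np.asarray(theta0, float), log_posterior(theta0), []
    for _ in range(n_draws):
        prop = theta + rng.multivariate_normal(np.zeros(len(theta)), Sigma)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept or keep current draw
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)
```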

  5. Measurement equations
  • $y_t = D(\theta) + Z(\theta) s_t$
  • Output growth = LN(GDPC / LNSINDEX) * 100
  • Consumption growth = LN(((PCEC - Durables) / GDPDEF) / LNSINDEX) * 100
  • Investment growth = LN(((FPI + Durables) / GDPDEF) / LNSINDEX) * 100
  • Real wage growth = LN(PRS85006103 / GDPDEF) * 100
  • Hours = LN((PRS85006023 * CE16OV / 100) / LNSINDEX) * 100
  • Inflation = LN(GDPDEF / GDPDEF(-1)) * 100
  • FFR = FEDERAL FUNDS RATE / 4
  • Sample: 1954:III to 2004:IV.
  • Same prior $p(\theta)$ as DSSW.
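
As an illustration only, here is a sketch of how these transformations might be coded from a quarterly DataFrame. The column names (including `Durables` and `FEDFUNDS`), the quarterly PeriodIndex, and the choice to take 100 times log first differences for the growth series are my assumptions; the slide lists the transformed levels and leaves the differencing implicit in the measurement equation.

```python
import numpy as np
import pandas as pd

def build_observables(df):
    """Map raw quarterly series (quarterly PeriodIndex assumed) into the seven
    observables on the slide; growth series taken as 100 * log first differences."""
    obs = pd.DataFrame(index=df.index)
    obs["Output growth"] = 100 * np.log(df["GDPC"] / df["LNSINDEX"]).diff()
    obs["Consumption growth"] = 100 * np.log(
        ((df["PCEC"] - df["Durables"]) / df["GDPDEF"]) / df["LNSINDEX"]).diff()
    obs["Investment growth"] = 100 * np.log(
        ((df["FPI"] + df["Durables"]) / df["GDPDEF"]) / df["LNSINDEX"]).diff()
    obs["Real wage growth"] = 100 * np.log(df["PRS85006103"] / df["GDPDEF"]).diff()
    obs["Hours"] = 100 * np.log((df["PRS85006023"] * df["CE16OV"] / 100) / df["LNSINDEX"])
    obs["Inflation"] = 100 * np.log(df["GDPDEF"] / df["GDPDEF"].shift(1))
    obs["FFR"] = df["FEDFUNDS"] / 4               # federal funds rate, quarterly units
    return obs.loc["1954Q3":"2004Q4"]             # sample period on the slide
```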

  6. Estimating linear DSGEs with SV
  • Measurement: $y_t = D(\theta) + Z(\theta) s_t$
  • Transition: $s_{t+1} = T(\theta) s_t + R(\theta) \varepsilon_t$, where $\theta$ are the DSGE parameters.
  • Shocks: $\varepsilon_{q,t} = \sigma_q \sigma_{q,t} \eta_{q,t}$, with $\eta_{q,t} \sim N(0,1)$, i.i.d. across $q, t$, and
    $\log \sigma_{q,t} = \log \sigma_{q,t-1} + \zeta_{q,t}$, $\sigma_{q,0} = 1$, $\zeta_{q,t} \sim N(0, \omega_q^2)$.
  • Nonlinear approach: Fernandez-Villaverde and Rubio-Ramirez (ReStud 2007, ...).
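
A minimal sketch (mine, not from the paper) of the volatility law of motion above, simulating one shock series $\varepsilon_{q,t}$ with a random-walk log volatility:

```python
import numpy as np

def simulate_sv_shock(n_periods, sigma_q, omega_q, seed=0):
    """Simulate eps_{q,t} = sigma_q * sigma_{q,t} * eta_{q,t}, eta_{q,t} ~ N(0,1),
    with log sigma_{q,t} = log sigma_{q,t-1} + zeta_{q,t}, zeta_{q,t} ~ N(0, omega_q^2),
    starting from sigma_{q,0} = 1 (so log sigma_{q,0} = 0)."""
    rng = np.random.default_rng(seed)
    log_sigma = np.cumsum(rng.normal(0.0, omega_q, size=n_periods))  # random-walk log volatility
    eps = sigma_q * np.exp(log_sigma) * rng.normal(size=n_periods)   # heteroskedastic shock
    return eps, np.exp(log_sigma)

eps, vol = simulate_sv_shock(n_periods=200, sigma_q=0.5, omega_q=0.1)
```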

  7. Inference
  • The joint distribution of data, states, shocks, volatilities, and parameters is
    $p(y_{1:T} \mid s_{1:T}, \theta) \, p(s_{1:T} \mid \varepsilon_{1:T}, \theta) \, p(\varepsilon_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) \, p(\tilde{\sigma}_{1:T} \mid \omega^2_{1:\bar{q}}) \, p(\omega^2_{1:\bar{q}}) \, p(\theta)$,
    where $\tilde{\sigma}_t = \log \sigma_t$.
  • Priors:
    - $p(\theta)$: the ‘usual’ one.
    - IG prior for $\omega_q^2$:
      $p(\omega_q^2 \mid \nu, \omega^2) = \frac{(\nu \omega^2 / 2)^{\nu/2}}{\Gamma(\nu/2)} \, (\omega_q^2)^{-\nu/2 - 1} \exp\!\left( -\frac{\nu \omega^2}{2 \omega_q^2} \right)$

  8. Gibbs Sampler
  • What’s the idea? Suppose you want to draw from $p(x, y)$ and you don’t know how ...
  • ... but you do know how to draw from $p(x \mid y) \propto p(x, y)$ and $p(y \mid x) \propto p(x, y)$.
  • Gibbs sampler: you obtain draws from $p(x, y)$ by drawing repeatedly from $p(x \mid y)$ and $p(y \mid x)$.
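
A textbook illustration of this idea (not from the slides): Gibbs sampling from a bivariate normal with correlation rho, where both full conditionals are known univariate normals.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_draws, seed=0):
    """Draw from a bivariate N(0, [[1, rho], [rho, 1]]) by alternating
    x | y ~ N(rho * y, 1 - rho^2) and y | x ~ N(rho * x, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    x, y, draws = 0.0, 0.0, []
    sd = np.sqrt(1 - rho**2)
    for _ in range(n_draws):
        x = rng.normal(rho * y, sd)   # draw from p(x | y)
        y = rng.normal(rho * x, sd)   # draw from p(y | x)
        draws.append((x, y))
    return np.array(draws)

draws = gibbs_bivariate_normal(rho=0.9, n_draws=5000)
print(np.corrcoef(draws[1000:].T))    # close to 0.9 after discarding a burn-in
```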

  9. Why does it work?
  • Some theory of Markov chains.
  • Say you want to draw from the marginal $p(x)$ (note: by Bayes’ law, if you have draws from the marginal you also have draws from the joint $p(x, y)$).
  • If you find a Markov transition kernel $K(x, x')$ that solves the fixed-point integral equation
    $p(x) = \int K(x, x') \, p(x') \, dx'$
    (and that is $\pi^*$-irreducible and aperiodic) ...
  • ... then, generating draws $x_i$, $i = 1, \dots, m$, from the kernel starting from any $x'$:
    $|K^m(A, x') - p(A)| \to 0$ for any set $A$ and any $x'$, and
    $\frac{1}{m} \sum_i h(x_i) \to \int h(x) \, p(x) \, dx$.

  10. Why does it work?
  • But wait... the Gibbs sampler does provide a Markov transition kernel:
    $K(x, x') = \int p(x \mid y) \, p(y \mid x') \, dy$
  • ... and that kernel solves the fixed-point integral equation:
    $\int K(x, x') \, p(x') \, dx' = \int \left[ \int p(x \mid y) \, p(y \mid x') \, dy \right] p(x') \, dx' = \int p(x \mid y) \left[ \int p(y \mid x') \, p(x') \, dx' \right] dy = \int p(x \mid y) \, p(y) \, dy = p(x)$
  • (Sufficient conditions for $\pi^*$-irreducibility and aperiodicity are usually met; see Chib and Greenberg 1996.)
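
A quick numerical check of the invariance property (my own toy example, again the bivariate normal): start from exact draws of the target, apply one pass of the Gibbs kernel, and verify that the marginal of $x$ is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
cov = np.array([[1.0, rho], [rho, 1.0]])
sd = np.sqrt(1 - rho**2)

# Start from exact draws of the target p(x, y) ...
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
# ... and apply one pass of the Gibbs kernel K(x, x') = \int p(x|y) p(y|x') dy
y_new = rng.normal(rho * x, sd)       # y | x'
x_new = rng.normal(rho * y_new, sd)   # x | y
# Invariance: the marginal of x_new should still be N(0, 1)
print(x_new.mean(), x_new.var())      # approximately 0 and 1
```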

  11. Gibbs Sampler
  1) Draw from $p(\theta, s_{1:T}, \varepsilon_{1:T} \mid \tilde{\sigma}_{1:T}, \omega^2_{1:\bar{q}}, y_{1:T})$:
     1.a) [Metropolis-Hastings] Draw from the marginal
          $p(\theta \mid \tilde{\sigma}_{1:T}, y_{1:T}) \propto p(y_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) \, p(\theta)$,
          where
          $p(y_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) = \int p(y_{1:T} \mid s_{1:T}, \theta) \, p(s_{1:T} \mid \varepsilon_{1:T}, \theta) \, p(\varepsilon_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) \, d(s_{1:T}, \varepsilon_{1:T})$,
          with $\varepsilon_t \mid \tilde{\sigma}_{1:T} \sim N(0, \Delta_t)$.
     1.b) [Simulation smoother] Draw from the conditional $p(s_{1:T}, \varepsilon_{1:T} \mid \theta, \tilde{\sigma}_{1:T}, y_{1:T})$.

  12. Gibbs Sampler (continued)
  2) [Kim-Shephard-Chib] Draw from $p(\tilde{\sigma}_{1:T} \mid \varepsilon_{1:T}, \omega^2_{1:\bar{q}}, \dots)$ by drawing from
     $p(\varepsilon_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) \, p(\tilde{\sigma}_{1:T} \mid \omega^2_{1:\bar{q}})$.
  3) Draw from $p(\omega^2_{1:\bar{q}} \mid \tilde{\sigma}_{1:T}, \dots) \propto p(\tilde{\sigma}_{1:T} \mid \omega^2_{1:\bar{q}}) \, p(\omega^2_{1:\bar{q}})$:
     $\omega_q^2 \mid \tilde{\sigma}_{1:T}, \dots \sim IG\!\left( \frac{\nu + T}{2}, \; \frac{\nu \omega^2 + \sum_{t=1}^{T} (\tilde{\sigma}_{q,t} - \tilde{\sigma}_{q,t-1})^2}{2} \right)$
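
Step 3 is a conjugate update, so it can be coded in a few lines. A sketch (my own notation) using scipy's `invgamma`, with the shape and scale implied by the IG posterior above:

```python
import numpy as np
from scipy.stats import invgamma

def draw_omega2(sigma_tilde, nu, omega2_bar, seed=0):
    """Draw omega_q^2 | sigma_tilde_{1:T} ~ IG((nu + T)/2, (nu*omega2_bar + sum dsig^2)/2),
    where sigma_tilde is the log-volatility path of shock q (length T+1, t = 0, ..., T)."""
    rng = np.random.default_rng(seed)
    dsig = np.diff(sigma_tilde)                      # sigma_tilde_t - sigma_tilde_{t-1}
    shape = (nu + len(dsig)) / 2.0
    scale = (nu * omega2_bar + np.sum(dsig**2)) / 2.0
    return invgamma.rvs(shape, scale=scale, random_state=rng)
```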

  13. Step 1a: Draw from $p(\theta \mid \tilde{\sigma}_{1:T}, y_{1:T})$
  • Usual MH step on $p(y_{1:T} \mid \tilde{\sigma}_{1:T}, \theta) \, p(\theta)$.

  14. Step 1b (Simulation smoother), Option 1: Carter and Kohn
  • Since
    $p(s_{0:T} \mid y_{1:T}) = \left[ \prod_{t=0}^{T-1} p(s_t \mid s_{t+1}, y_{1:t}) \right] p(s_T \mid y_{1:T})$,
    the sequence $s_{0:T}$, conditional on $y_{1:T}$, can be drawn recursively:
    1. Draw $s_T$ from $p(s_T \mid y_{1:T})$.
    2. For $t = T-1, \dots, 0$, draw $s_t$ from $p(s_t \mid s_{t+1}, y_{1:t})$.
  • How do I draw from $p(s_T \mid y_{1:T})$? (i) I know that $s_T \mid y_{1:T}$ is Gaussian; (ii) I have $s_{T|T} = E[s_T \mid y_{1:T}]$ and $P_{T|T} = \mathrm{Var}[s_T \mid y_{1:T}]$ from the filtering procedure $\Rightarrow$ $s_T \mid y_{1:T} \sim N(s_{T|T}, P_{T|T})$.

  15. • How do we draw from $p(s_t \mid s_{t+1}, y_{1:t})$? We know that
    $\begin{pmatrix} s_{t+1} \\ s_t \end{pmatrix} \Big| \, y_{1:t} \sim N\!\left( \begin{pmatrix} s_{t+1|t} \\ s_{t|t} \end{pmatrix}, \begin{pmatrix} P_{t+1|t} & T P_{t|t} \\ P_{t|t} T' & P_{t|t} \end{pmatrix} \right)$
    Note: 1) it is easy to show that $E\left[(s_{t+1} - s_{t+1|t})(s_t - s_{t|t})'\right] = T P_{t|t}$; 2) we know all these matrices from the Kalman filter.
  • Then
    $E[s_t \mid s_{t+1}, y_{1:t}] = s_{t|t} + P_{t|t}' T' P_{t+1|t}^{-1} (s_{t+1} - s_{t+1|t})$
    $\mathrm{Var}[s_t \mid s_{t+1}, y_{1:t}] = P_{t|t} - P_{t|t}' T' P_{t+1|t}^{-1} T P_{t|t}$
  • ... and $s_t \mid s_{t+1}, y_{1:t} \sim N\left(E[s_t \mid s_{t+1}, y_{1:t}], \mathrm{Var}[s_t \mid s_{t+1}, y_{1:t}]\right)$.
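
Slides 14-15 together give the Carter-Kohn backward sampler. A minimal Python sketch follows; `filt_s`, `filt_P` (the filtered moments $s_{t|t}$, $P_{t|t}$) and `pred_s`, `pred_P` (the predicted moments $s_{t|t-1}$, $P_{t|t-1}$) are assumed to have been stored while running the Kalman filter, and `T_mat` is the transition matrix $T$; these names are mine, not the slides'.

```python
import numpy as np

def carter_kohn(filt_s, filt_P, pred_s, pred_P, T_mat, seed=0):
    """Backward sampling of s_{0:T} | y_{1:T} from filtered moments s_{t|t}, P_{t|t}
    (filt_s[t], filt_P[t], t = 0..T) and predicted moments s_{t|t-1}, P_{t|t-1}
    (pred_s[t], pred_P[t])."""
    rng = np.random.default_rng(seed)
    n = len(filt_s) - 1
    draws = [None] * (n + 1)
    draws[n] = rng.multivariate_normal(filt_s[n], filt_P[n])       # s_T | y_{1:T}
    for t in range(n - 1, -1, -1):
        J = filt_P[t] @ T_mat.T @ np.linalg.inv(pred_P[t + 1])     # P_{t|t} T' P_{t+1|t}^{-1}
        mean = filt_s[t] + J @ (draws[t + 1] - pred_s[t + 1])      # E[s_t | s_{t+1}, y_{1:t}]
        var = filt_P[t] - J @ T_mat @ filt_P[t]                    # Var[s_t | s_{t+1}, y_{1:t}]
        var = 0.5 * (var + var.T)                                  # symmetrize for numerical stability
        draws[t] = rng.multivariate_normal(mean, var)
    return np.array(draws)
```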

  16. Step 1b, Option 2: Durbin and Koopman (Biometrika, 2002)
  The idea:
  • Say you have two normally distributed random variables, $x$ and $y$. You know how to (i) draw from the joint $p(x, y)$ and (ii) compute $E[x \mid y]$.
  • You want to generate a draw from $x \mid y^0 \sim N(E[x \mid y^0], W)$ for some $y^0$. Proceed as follows:
    1. Generate a draw $(x^+, y^+)$ from $p(x, y)$. By definition, $x^+$ is also a draw from $p(x \mid y^+) = N(E[x \mid y^+], W)$ or, alternatively, $x^+ - E[x \mid y^+]$ is a draw from $N(0, W)$.
    2. Use $E[x \mid y^0] + x^+ - E[x \mid y^+]$, which is a draw from $N(E[x \mid y^0], W)$.
  • Since the variables are normally distributed, the conditional variance $W$ does not depend on the conditioning value $y$ (draw a two-dimensional normal, or review the formulas for normal updating, to convince yourself that this is the case). Hence $p(x \mid y^+)$ and $p(x \mid y^0)$ have the same variance $W$, which means that $E[x \mid y^0] + x^+ - E[x \mid y^+]$ is indeed a draw from $N(E[x \mid y^0], W)$.
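
A numerical version of this two-variable recipe (my own toy example with correlation 0.8), showing steps 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])        # joint normal for (x, y), mean zero

def cond_mean(y):
    return rho * y                              # E[x | y] for this joint normal

y0 = 1.5                                        # the conditioning value we care about
x_plus, y_plus = rng.multivariate_normal([0.0, 0.0], cov)   # step 1: (x+, y+) ~ p(x, y)
x_draw = cond_mean(y0) + x_plus - cond_mean(y_plus)         # step 2: a draw from p(x | y0)
# x_draw ~ N(E[x | y0], W), with W = 1 - rho^2 the (y-independent) conditional variance
```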

  17. Durbin and Koopman
  • Imagine you know how to compute the smoothed estimates of the shocks, $E[\varepsilon_{1:T} \mid y_{1:T}]$ (see Koopman, “Disturbance smoother for state space models”, Biometrika, 1993) ...
  • ... and want to obtain draws from $p(\varepsilon_{1:T} \mid y_{1:T})$ (again, we omit $\theta$ for notational simplicity). Proceed as follows:
    1. Generate a new draw $(\varepsilon^+_{1:T}, s^+_{1:T}, y^+_{1:T})$ from $p(\varepsilon_{1:T}, s_{1:T}, y_{1:T})$ by drawing $s_{0|0}$ and $\varepsilon_{1:T}$ from their respective distributions, and then using the transition and measurement equations.
    2. Compute $E[\varepsilon_{1:T} \mid y_{1:T}]$ and $E[\varepsilon_{1:T} \mid y^+_{1:T}]$ (and $E[s_{1:T} \mid y_{1:T}]$ and $E[s_{1:T} \mid y^+_{1:T}]$ if you need the states).
    3. Compute $E[\varepsilon_{1:T} \mid y_{1:T}] + \varepsilon^+_{1:T} - E[\varepsilon_{1:T} \mid y^+_{1:T}]$ (and $E[s_{1:T} \mid y_{1:T}] + s^+_{1:T} - E[s_{1:T} \mid y^+_{1:T}]$).
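
A sketch of this recipe in Python; `smooth_eps(data)` stands for any routine that returns $E[\varepsilon_{1:T} \mid \text{data}]$ (for example, a disturbance smoother), and the simulation of $(\varepsilon^+, s^+, y^+)$ uses the transition and measurement equations directly. The function and argument names are mine, not the authors'.

```python
import numpy as np

def dk_simulation_smoother(y, D, Z, Tmat, R, Q, s0_mean, s0_var, smooth_eps, seed=0):
    """Durbin-Koopman (2002) recipe: return a draw from p(eps_{1:T} | y_{1:T}).
    `smooth_eps(data)` must return E[eps_{1:T} | data], e.g. via a disturbance smoother."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n_periods, n_eps = y.shape[0], Q.shape[0]
    # Step 1: simulate (eps+, s+, y+) from the model itself
    eps_plus = rng.multivariate_normal(np.zeros(n_eps), Q, size=n_periods)
    s = rng.multivariate_normal(s0_mean, s0_var)        # draw the initial state s_{0|0}
    y_plus = np.empty_like(y)
    for t in range(n_periods):
        s = Tmat @ s + R @ eps_plus[t]                  # transition equation
        y_plus[t] = D + Z @ s                           # measurement equation
    # Step 2: smooth the actual and the simulated data
    eps_hat = smooth_eps(y)                             # E[eps_{1:T} | y_{1:T}]
    eps_hat_plus = smooth_eps(y_plus)                   # E[eps_{1:T} | y+_{1:T}]
    # Step 3: combine into a draw from p(eps_{1:T} | y_{1:T})
    return eps_hat + eps_plus - eps_hat_plus
```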
