Perturbation methods for DSGE models
Stéphane Adjemian (stephane.adjemian@univ-lemans.fr), March 2016


  1. Perturbation methods for DSGE models — Stéphane Adjemian, stephane.adjemian@univ-lemans.fr, March 2016 (cc-by-sa)

  2. Introduction ◮ In this chapter we show how to solve DSGE models using perturbation techniques. ◮ Basically, the idea is to replace the original problem with a simpler one, without losing the properties of interest in the original model (if possible). ◮ This auxiliary model is obtained by perturbing the original model in the vicinity of its deterministic steady state. ◮ We will show how the auxiliary model can easily be solved. ◮ It is important to understand that we do not approximate the solution of the DSGE model. We rather compute the exact solution of an approximation of the original DSGE model, hoping it provides an accurate approximation of the solution of the original DSGE model.

  3. Outline: Introduction; The perturbation approach; The RBC model; First order approximation; Higher order approximation; Perturbation methods with Dynare

  4. Perturbation approach — Square root function ◮ Suppose that we need to compute √(1 + ε) for small values of ε... ◮ But that the computational burden of such an operation is very high. ◮ We approximate this task using a famous result from Newton:

  Generalized binomial theorem. For all (x, y) ∈ R² such that |x/y| > 1 and for all r ∈ R we have:

    (x + y)^r = Σ_{k=0}^∞ C(r, k) x^{r−k} y^k

  where the generalized binomial coefficient is defined as follows:

    C(r, k) = ( ∏_{i=0}^{k−1} (r − i) ) / k!

  See Graham, Knuth and Patashnik (1994).

  5. Perturbation approach — Square root function approximation ◮ Applying this theorem with r = 1/2, x = 1 and y = ε, we find the following expression:

    √(1 + ε) = Σ_{k=0}^∞ C(1/2, k) ε^k = 1 + (1/2)ε − (1/8)ε² + (1/16)ε³ − (5/128)ε⁴ + (7/256)ε⁵ + ···

  ◮ The power function with integer exponent is much easier to evaluate than the square root function. ◮ But the theorem states that we should evaluate an infinite number of power functions! ◮ Noting that the terms of the infinite series converge rapidly to zero, provided |ε| < 1, we can truncate this expression. For instance:

    √(1 + ε) = 1 + (1/2)ε − (1/8)ε² + O(ε³)  ⇒  √(1 + ε) ≃ 1 + (1/2)ε − (1/8)ε²
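The truncated series is straightforward to evaluate numerically. A minimal Python sketch (the helper names `gbinom` and `sqrt_approx` are ours, not from the slides):

```python
import math

def gbinom(r, k):
    """Generalized binomial coefficient: prod_{i=0}^{k-1} (r - i) / k!."""
    num = 1.0
    for i in range(k):
        num *= r - i
    return num / math.factorial(k)

def sqrt_approx(eps, order):
    """Truncated binomial series for sqrt(1 + eps), valid for |eps| < 1."""
    return sum(gbinom(0.5, k) * eps**k for k in range(order + 1))

# second order approximation at eps = 0.01: the error is of order eps^3
err2 = abs(sqrt_approx(0.01, 2) - math.sqrt(1.01))
```

The coefficients C(1/2, 0), ..., C(1/2, 4) produced by `gbinom` are exactly 1, 1/2, −1/8, 1/16, −5/128, matching the series above.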

  6. Perturbation approach — Square root function approximation error ◮ The symbol O(ε³), to be read big 'O' of ε cubed, hides the rest of the infinite series. ◮ This symbol means that for sufficiently small values of ε there exists a positive constant Γ, independent of ε, such that the absolute value of O(ε³) is less than Γ|ε|³. ◮ More generally, when we approximate a function f(ε) by a truncated Taylor series,

    f(ε) = Σ_{i=0}^{p−1} ( f^(i)(0) / i! ) ε^i + O(ε^p)

  O(ε^p) means that the accuracy error does not grow faster than ε to the power p when ε is small.

  7. Perturbation approach — Square root function approximation error [Figure: five approximations to √(1 + ε), orders one through five. The bold curve is the graphical representation of the true square root function between 0 and 2. The other curves represent the approximations of the square root function around x = 1 for ε ranging from −1 to 1.]

  8. Perturbation approach — Square root function approximation error [Figure: approximation errors for orders one through five and order twenty. Each curve represents the absolute value of the difference between the true function and its approximation, for values of ε between −1 and 1.]

  9. Perturbation approach — Square root function approximation error ◮ The higher the approximation (truncation) order, the closer the approximation is to the true function. ◮ A striking feature is that the approximation errors are smaller for positive values of ε than for negative values. ◮ The square root function is much more curved near zero (where its slope is infinite) than above one. ◮ Obviously these approximations are not valid for all values of ε. ◮ The perturbations ε have to be small. But what is a small ε? ◮ The generalized binomial theorem assumes that ε is less than one in absolute value, so that the infinite series converges. ◮ If |ε| > 1, the infinite series cannot converge because lim_{p→∞} |ε|^p = ∞. ◮ In this context, a small ε is any ε ∈ (−1, 1); we call r = 1 the radius of convergence. ◮ Put differently, one can expect the approximation to behave very poorly if |ε| > 1. ◮ The determination of the radius of convergence is generally not obvious (it is unknown in the case of DSGE models).
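The divergence outside the radius of convergence is easy to verify numerically. A small self-contained sketch (our own code, not from the slides) showing that at ε = 1.5 the truncation error grows with the order instead of shrinking:

```python
import math

def gbinom(r, k):
    # generalized binomial coefficient: prod_{i=0}^{k-1} (r - i) / k!
    num = 1.0
    for i in range(k):
        num *= r - i
    return num / math.factorial(k)

def sqrt_approx(eps, order):
    # truncated binomial series for sqrt(1 + eps)
    return sum(gbinom(0.5, k) * eps**k for k in range(order + 1))

eps = 1.5  # outside the radius of convergence r = 1
err5 = abs(sqrt_approx(eps, 5) - math.sqrt(1 + eps))
err20 = abs(sqrt_approx(eps, 20) - math.sqrt(1 + eps))
# err20 is larger than err5: beyond the radius of convergence,
# adding terms makes the approximation worse, not better
```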

  10. Perturbation approach — Square root function and its approximations (timing) ◮ In the following table we report the relative execution time (smaller is better) and the approximation error for three approximations of √(1 + ε) with ε = 0.01. ◮ The execution time is relative to the direct computation of the square root. ◮ Polynomials (approximation order greater than one) are evaluated with the Horner scheme. ◮ Matlab code is available here.

  Approx. order | Relative time | Approx. error
  1             | 0.2502        | 1.2438 × 10⁻⁵
  2             | 0.5220        | −6.2112 × 10⁻⁸
  3             | 0.7947        | 3.8791 × 10⁻¹⁰
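The Horner scheme mentioned above evaluates a degree-n polynomial with only n multiplications and n additions. A minimal sketch in Python (our own helper, not the Matlab code referenced on the slide):

```python
import math

def horner(coeffs, x):
    # evaluate a polynomial given coefficients ordered from highest degree
    # down to the constant term: ((c_n * x + c_{n-1}) * x + ...) * x + c_0
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# third-order approximation of sqrt(1 + e): (1/16)e^3 - (1/8)e^2 + (1/2)e + 1
coeffs = [1 / 16, -1 / 8, 1 / 2, 1.0]
err = abs(horner(coeffs, 0.01) - math.sqrt(1.01))  # about 3.9e-10, as in the table
```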

  11. Stochastic RBC model — Equations As an example, consider the RBC model, where the dynamics of consumption, physical capital and productivity are given by:

    1/c_t = β E_t [ ( α e^{a_{t+1}} k_{t+1}^{α−1} + 1 − δ ) / c_{t+1} ]   (1)
    k_{t+1} = e^{a_t} k_t^α + (1 − δ) k_t − c_t   (2)
    a_t = ϕ a_{t−1} + ε_t   (3)

  ◮ {ε_t} ∼ iid(0, σ²_ε); usually the distribution of the innovations is Gaussian. ◮ E_t[X_{t+1}] is the expectation conditional on the information available at time t. ◮ The information set at time t contains the past realizations of the endogenous variables, the contemporaneous innovations and the variables decided at time t.
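The exogenous law of motion (3) is easy to simulate. A minimal sketch with Gaussian innovations; the parameter values (ϕ = 0.95, σ_ε = 0.01) are our assumptions, not taken from the slides:

```python
import random

def simulate_a(phi, sigma, T, seed=0):
    # a_t = phi * a_{t-1} + eps_t, Gaussian innovations, a_0 = 0
    rng = random.Random(seed)
    a = [0.0]
    for _ in range(T):
        a.append(phi * a[-1] + rng.gauss(0.0, sigma))
    return a

path = simulate_a(phi=0.95, sigma=0.01, T=200)
```

For |ϕ| < 1 the process is stationary with unconditional variance σ²_ε / (1 − ϕ²).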

  12. Log linearization ◮ Suppose that we have the following recurrence: x_t = f(x_{t−1}), with steady state x⋆ such that x⋆ = f(x⋆), which is assumed to be non zero. ◮ Define x̃_t such that x_t = x⋆ e^{x̃_t}, or equivalently x̃_t = log x_t − log x⋆, the percentage deviation from the steady state. ◮ We can rewrite the recurrence in terms of x̃_t:

    x⋆ e^{x̃_t} = f( x⋆ e^{x̃_{t−1}} )

  ◮ A first order Taylor approximation of both sides around x̃ = 0 gives:

    x⋆ + x⋆ x̃_t ≈ f(x⋆) + x⋆ f′(x⋆) x̃_{t−1}  ⇔  x̃_t ≈ f′(x⋆) x̃_{t−1}
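The quality of this first order approximation can be checked numerically. In the sketch below, the mapping f is a hypothetical example of ours, chosen so that fixed-point iteration converges to the steady state; the slides do not commit to any particular f:

```python
import math

def f(x):
    # hypothetical mapping with a stable, non zero steady state
    return 0.3 * x**0.33 + 0.6 * x

# steady state x* = f(x*), found by fixed-point iteration
xstar = 1.0
for _ in range(500):
    xstar = f(xstar)

# numerical derivative f'(x*) by central differences
h = 1e-6
fprime = (f(xstar + h) - f(xstar - h)) / (2 * h)

# compare the exact log-deviation dynamics with the linear approximation
xtilde_prev = 0.01  # a 1% deviation from the steady state
exact = math.log(f(xstar * math.exp(xtilde_prev))) - math.log(xstar)
approx = fprime * xtilde_prev
# exact and approx agree up to a second order (O(xtilde^2)) error
```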

  13. Stochastic RBC model — Log linearization ◮ The exogenous variable a_t is already in logarithms and its law of motion is linear, so we only log-linearize with respect to c_t and k_t.

  Exercise 1. Show that the log linearized version of (1)-(2) is given by:

    E_t [ c̃_{t+1} − c̃_t − ((ρ + δ)/(1 + ρ)) ( ã_{t+1} − (1 − α) k̃_{t+1} ) ] = 0   (4)
    k̃_{t+1} = (y⋆/k⋆) ã_t + β^{−1} k̃_t − (c⋆/k⋆) c̃_t   (5)

  with ã_t = a_t and β = 1/(1 + ρ). ◮ We do not need to compute explicitly the deterministic steady state to approximate the model around the deterministic steady state! ⇒ Steady state ratios. ◮ We do not even need to specify functional forms...
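The steady state ratios appearing in (5) follow from the steady state Euler equation, α k⋆^(α−1) = ρ + δ, and the resource constraint. A quick numerical check; the calibration values (α = 0.33, β = 0.99, δ = 0.025) are our assumptions, not from the slides:

```python
alpha, beta, delta = 0.33, 0.99, 0.025  # hypothetical calibration
rho = 1 / beta - 1                      # discount rate, beta = 1/(1+rho)

# steady state Euler equation: alpha * k*^(alpha-1) = rho + delta
yk = (rho + delta) / alpha              # y*/k* = k*^(alpha-1)
ck = yk - delta                         # c*/k*, from the resource constraint

# consistency check: beta * (alpha * y*/k* + 1 - delta) must equal 1
residual = beta * (alpha * yk + 1 - delta) - 1
```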

  14. Stochastic RBC model — Log linearization (without explicit functions) Exercise 2. Suppose that the Euler and transition equations are given by:

    u′(c_t) = β E_t [ u′(c_{t+1}) ( e^{a_{t+1}} f′(k_{t+1}) + 1 − δ ) ]
    k_{t+1} = e^{a_t} f(k_t) + (1 − δ) k_t − c_t

  where y_t = e^{a_t} f(k_t) is the level of production, f(k) is a neoclassical production function, and u(c) is the instantaneous utility function. Let α be the elasticity of output with respect to capital at the steady state and γ be the absolute value of the elasticity of the marginal utility with respect to consumption at the steady state. (1) Characterize the steady state. (2) Compute the steady state ratios c⋆/k⋆ and y⋆/k⋆. (3) Show that the log-linearized Euler and transition equations are:

    E_t [ γ ( c̃_{t+1} − c̃_t ) − ((ρ + δ)/(1 + ρ)) ( ã_{t+1} − (1 − α) k̃_{t+1} ) ] = 0
    (y⋆/k⋆) ã_t + β^{−1} k̃_t − (c⋆/k⋆) c̃_t − k̃_{t+1} = 0

  15. Stochastic RBC model — Solution of the log linearized model ◮ A solution is a time invariant mapping between the states (a_t and k_t) and the controls (c_t, k_{t+1}). ◮ If c_t = ψ(k_t, a_t) is known, one can build time series for all the endogenous variables by iterating over (2)-(3). ◮ Except on rare occasions, it is generally not possible to obtain a closed form solution for this mapping.

  Exercise 3. Show that it is possible to solve the previous RBC model analytically if δ = 1.

  ◮ If the model is linear (or linearized) one can show that the solution is linear (provided that a solution exists). ◮ We postulate a linear solution:

    c̃_t = η_ck k̃_t + η_ca ã_t   (6)
    k̃_{t+1} = η_kk k̃_t + η_ka ã_t

  A unique solution exists iff there exists a unique vector (η_ck, η_ca, η_kk, η_ka) such that (6) is consistent with (4), (5) and (3).
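In the log utility case (γ = 1), the undetermined coefficients can be found by hand: substituting (6) into (4)-(5) gives a quadratic equation in η_kk whose roots multiply to β^{−1} > 1, so exactly one root is stable (inside the unit circle); that root pins down the other coefficients. A minimal sketch; the calibration values (α = 0.33, β = 0.99, δ = 0.025, ϕ = 0.95) are our assumptions, not from the slides:

```python
import math

def solve_rbc(alpha, beta, delta, phi):
    """Method of undetermined coefficients for the log-linearized RBC
    model with log utility. Returns (eta_ck, eta_ca, eta_kk, eta_ka)."""
    rho = 1 / beta - 1
    kappa = (rho + delta) / (1 + rho)   # coefficient on (a - (1-alpha)k) in (4)
    Y = (rho + delta) / alpha           # y*/k*
    C = Y - delta                       # c*/k*
    # eta_kk solves x^2 - b*x + 1/beta = 0; the roots multiply to
    # 1/beta > 1, so the smaller root is the unique stable one
    b = 1 + 1 / beta + C * kappa * (1 - alpha)
    eta_kk = (b - math.sqrt(b * b - 4 / beta)) / 2
    # remaining coefficients follow from matching terms in (4) and (5)
    eta_ck = kappa * (1 - alpha) * eta_kk / (1 - eta_kk)
    eta_ca = (kappa * phi - (kappa * (1 - alpha) + eta_ck) * Y) / \
             (phi - 1 - C * (eta_ck + kappa * (1 - alpha)))
    eta_ka = Y - C * eta_ca
    return eta_ck, eta_ca, eta_kk, eta_ka

eta = solve_rbc(0.33, 0.99, 0.025, 0.95)  # hypothetical calibration
```

A useful sanity check is Exercise 3: with full depreciation (δ = 1) the known closed form implies η_ck = η_kk = α and η_ca = η_ka = 1, which the sketch reproduces.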
