Analysing identification issues in DSGE models


  1. Analysing identification issues in DSGE models
     Nikolai Iskrev (Bank of Portugal, Lisbon, Portugal)
     Marco Ratto (European Commission, Joint Research Centre, Ispra, Italy)
     Preliminary and incomplete. March 10, 2010

  2. Aims
     There is growing interest in identification issues in economic modelling (Canova and Sala, 2009; Komunjer and Ng, 2009; Iskrev, 2010).
     1. We present a new method for computing derivatives with respect to the deep parameters in linearized DSGE models.
     2. We present the ongoing development of the identification toolbox within the DYNARE framework. The toolbox includes the identification tests recently proposed by Iskrev and aims to integrate them with global sensitivity analysis methodologies (Ratto, 2008), to gain useful insight into global identification properties.

  3. Derivatives
     1. Derivatives are useful for the quantitative analysis of models and of identification.
     2. Closed-form expressions for computing analytical derivatives are presented in Iskrev (2010), with extensive use of sparse Kronecker-product matrices: (i) computationally inefficient, (ii) large memory allocation, (iii) unsuitable for large-scale models.
     3. Our approach leads to a dramatic increase in the speed of computations at virtually no cost in terms of accuracy.

  4. DSGE models: structural model and reduced form
     A DSGE model is summarized by a system $g$ of $m$ non-linear equations:

     $$E_t\, g(\hat z_t, \hat z_{t+1}, \hat z_{t-1}, u_t \mid \theta) = 0 \qquad (1)$$

     Most studies use linear approximations of the original models:

     $$\Gamma_0(\theta)\, z_t = \Gamma_1(\theta)\, E_t z_{t+1} + \Gamma_2(\theta)\, z_{t-1} + \Gamma_3(\theta)\, u_t \qquad (2)$$

     where $z_t = \hat z_t - \hat z^*$. The elements of the matrices $\Gamma_0$, $\Gamma_1$, $\Gamma_2$ and $\Gamma_3$ are functions of $\theta$.

  5. Depending on the value of $\theta$, there may exist zero, one, or many stable solutions. Assuming that a unique solution exists, it can be cast in the following form:

     $$z_t = A(\theta)\, z_{t-1} + B(\theta)\, u_t \qquad (3)$$

     In most applications the model in (3) cannot be taken to the data directly, since some of the variables in $z_t$ are not observed. Instead, the solution of the DSGE model is expressed in state-space form, with transition equation given by (3) and measurement equation

     $$x_t = C z_t + D u_t + \nu_t \qquad (4)$$
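The state-space form (3)-(4) can be sketched in a few lines of code. This is a minimal illustration with hypothetical matrices A, B, C, D (in a real DSGE model these would be functions of the deep parameters θ, produced by the solution algorithm):

```python
import numpy as np

# Minimal sketch of the state-space form (3)-(4). The matrices below are
# illustrative placeholders, not a calibrated model.
rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.0, 0.5]])   # transition matrix; eigenvalues inside the unit circle
B = np.eye(2)                # shock loadings
C = np.array([[1.0, 0.0]])   # one observable: the first state
D = np.zeros((1, 2))

T = 200
z = np.zeros(2)
x = np.empty(T)
for t in range(T):
    u = rng.standard_normal(2)        # structural shocks u_t
    z = A @ z + B @ u                 # transition equation (3)
    x[t] = (C @ z + D @ u).item()     # measurement equation (4), no measurement error

print(x.shape)  # (200,)
```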

  6. We define $\tau$ as the vector collecting the non-constant elements of $\hat z^*$, $A$, and $\Omega$, i.e. $\tau := [\tau_z', \tau_A', \tau_\Omega']'$.

  7. Theoretical first and second moments
     From (3)-(4) it follows that the unconditional first and second moments of $x_t$ are given by

     $$\mu_x := E\, x_t \qquad (5)$$

     $$\Sigma_x(i) := \mathrm{cov}(x_{t+i}, x_t') = \begin{cases} C\, \Sigma_z(0)\, C' & \text{if } i = 0 \\ C A^i\, \Sigma_z(0)\, C' & \text{if } i > 0 \end{cases} \qquad (6)$$

     where $\Sigma_z(0) := E\, z_t z_t'$ solves the matrix equation

     $$\Sigma_z(0) = A\, \Sigma_z(0)\, A' + \Omega \qquad (7)$$
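As a sketch, the Lyapunov equation (7) can be solved by simple fixed-point iteration (it converges when A is stable), after which the autocovariances of equation (6) follow directly. A and Ω below are illustrative placeholders:

```python
import numpy as np

# Sketch: compute Sigma_z(0) from the Lyapunov equation (7) by fixed-point
# iteration, then the autocovariances Sigma_x(i) from (6). Hypothetical inputs.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
C = np.eye(2)
Omega = np.eye(2)                     # covariance of the shock term in (3)

Sigma_z = np.zeros_like(Omega)
for _ in range(1000):                 # converges since A is stable
    Sigma_z = A @ Sigma_z @ A.T + Omega

def Sigma_x(i):
    """Autocovariance of the observables, equation (6)."""
    return C @ np.linalg.matrix_power(A, i) @ Sigma_z @ C.T

# Verify the Lyapunov fixed point
resid = np.max(np.abs(Sigma_z - (A @ Sigma_z @ A.T + Omega)))
print(resid < 1e-10)   # True
```

Production code would typically use a doubling algorithm or `scipy.linalg.solve_discrete_lyapunov` instead of plain iteration, but the fixed point is the same.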

  8. Denote the observed data by $X_T := [x_1', \ldots, x_T']'$, and let $\Sigma_T$ be its covariance matrix, i.e.

     $$\Sigma_T := E\, X_T X_T' = \begin{bmatrix} \Sigma_x(0) & \Sigma_x(1)' & \cdots & \Sigma_x(T-1)' \\ \Sigma_x(1) & \Sigma_x(0) & \cdots & \Sigma_x(T-2)' \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_x(T-1) & \Sigma_x(T-2) & \cdots & \Sigma_x(0) \end{bmatrix} \qquad (8)$$
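The block-Toeplitz structure of equation (8) is easy to assemble once the autocovariance function is available. A toy sketch with a hypothetical scalar AR(1) autocovariance (so each block is 1x1):

```python
import numpy as np

# Sketch: assemble the block-Toeplitz covariance Sigma_T of equation (8).
# Sigma_x here is a hypothetical scalar AR(1) autocovariance.
def Sigma_x(i, rho=0.8, sigma2=1.0):
    return np.array([[rho**i * sigma2 / (1.0 - rho**2)]])

T, l = 4, 1                           # T periods, l observables per period
Sigma_T = np.empty((T * l, T * l))
for r in range(T):
    for c in range(T):
        blk = Sigma_x(abs(r - c))
        # Sigma_x(i)' sits above the diagonal, Sigma_x(i) on and below it
        Sigma_T[r*l:(r+1)*l, c*l:(c+1)*l] = blk.T if c > r else blk

print(np.allclose(Sigma_T, Sigma_T.T))  # True: Sigma_T is symmetric
```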

  9. Let $\sigma_T$ be a vector collecting the unique elements of $\Sigma_T$, i.e.

     $$\sigma_T := [\mathrm{vech}(\Sigma_x(0))', \mathrm{vec}(\Sigma_x(1))', \ldots, \mathrm{vec}(\Sigma_x(T-1))']'$$

     and let $m_T := [\mu', \sigma_T']'$ be the $\big((T-1)\,l^2 + l(l+3)/2\big)$-dimensional vector collecting the parameters that determine the first two moments of the data; $m_T$ is a function of $\theta$. If either $u_t$ is Gaussian, or there are no distributional assumptions about the structural shocks, the model-implied restrictions on $m_T$ contain all information that can be used for the estimation of $\theta$. The identifiability of $\theta$ depends on whether that information is sufficient.
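The stacking of moments into m_T, and the dimension count above, can be sketched as follows (mu and the Sigma_x(i) below are hypothetical placeholders):

```python
import numpy as np

# Sketch: build m_T = [mu', sigma_T']' using vech for Sigma_x(0) and
# vec for the higher autocovariances. Toy inputs, not a real model.
def vech(M):
    """Collect the unique (lower-triangle) elements of a symmetric matrix."""
    r, c = np.tril_indices(M.shape[0])
    return M[r, c]

l, T = 2, 3
mu = np.zeros(l)
Sig0 = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigs = [0.5**i * Sig0 for i in range(1, T)]          # toy Sigma_x(i), i >= 1

m_T = np.concatenate([mu, vech(Sig0)] +
                     [S.flatten(order="F") for S in Sigs])  # column-wise vec
# Dimension check: (T-1)*l^2 + l*(l+3)/2 = 2*4 + 2*5/2 = 13
print(m_T.size)  # 13
```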

  10. Identification: the rank condition
     Definition 1. Let $\theta \in \Theta \subset \mathbb{R}^k$ be the parameter vector of interest, and suppose that inference about $\theta$ is made on the basis of $T$ observations of a random vector $x$ with a known joint probability density function $f(X; \theta)$, where $X = [x_1, \ldots, x_T]$. A point $\theta_0 \in \Theta$ is said to be globally identified if

     $$f(X; \tilde\theta) = f(X; \theta_0) \text{ with probability } 1 \;\Rightarrow\; \tilde\theta = \theta_0 \qquad (9)$$

     for any $\tilde\theta \in \Theta$. If (9) holds only for values $\tilde\theta$ in an open neighborhood of $\theta_0$, then $\theta_0$ is said to be locally identified.

  11. The Gaussian case
     Theorem 1. Suppose that the data $X_T$ are generated by the model (3)-(4) with parameter vector $\theta_0$. Then $\theta_0$ is globally identified if

     $$m_T(\tilde\theta) = m_T(\theta_0) \;\Leftrightarrow\; \tilde\theta = \theta_0 \qquad (10)$$

     for any $\tilde\theta \in \Theta$. If (10) holds only for values $\tilde\theta$ in an open neighborhood of $\theta_0$, the identification of $\theta_0$ is local. If the structural shocks are normally distributed, then the condition in (10) is also necessary for identification.

  12. If the data are not normally distributed, higher-order moments may provide additional information about $\theta$ not contained in the first two moments. Therefore, identification based on the mean and variance of $X$ is sufficient, but not necessary, for identification with the complete distribution. The condition in (10) requires that the mapping from the population moments $m_T(\theta)$ to $\theta$ be unique. In general, there are no known global conditions for unique solutions of systems of non-linear equations, and it is therefore difficult to establish the global identifiability of $\theta$.

  13. Local identification: the rank condition
     Theorem 2. Suppose that $m_T$ is a continuously differentiable function of $\theta$. Then $\theta_0$ is locally identifiable if the Jacobian matrix $J(q) := \partial m_q / \partial \theta'$ has full column rank at $\theta_0$ for some $q \leq T$. This condition is both necessary and sufficient when $q = T$ if $u_t$ is normally distributed.
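The rank condition of Theorem 2 can be illustrated on a toy scalar AR(1) model, x_t = ρ x_{t-1} + σ e_t, with θ = (ρ, σ). This sketch stacks the mean and the first few autocovariances into m_q, differentiates numerically, and checks the column rank (the model and moment map here are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the rank condition in Theorem 2 for a toy AR(1) model.
def m_q(theta, q=3):
    rho, sigma = theta
    var = sigma**2 / (1.0 - rho**2)          # unconditional variance
    mu = 0.0
    return np.array([mu] + [rho**i * var for i in range(q)])

def num_jacobian(f, theta, h=1e-6):
    """Central-difference Jacobian of f at theta."""
    theta = np.asarray(theta, float)
    cols = []
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = h
        cols.append((f(theta + e) - f(theta - e)) / (2.0 * h))
    return np.column_stack(cols)

theta0 = np.array([0.8, 1.0])
J = num_jacobian(m_q, theta0)
rank = np.linalg.matrix_rank(J)
print(rank == theta0.size)  # True: (rho, sigma) locally identified here
```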

  14. Given

     $$J(T) = \frac{\partial m_T}{\partial \tau'} \frac{\partial \tau}{\partial \theta'} \qquad (11)$$

     another necessary condition is discussed in Iskrev (2010):
     Corollary 1. The point $\theta_0$ is locally identifiable only if the rank of $J_2 = \partial \tau / \partial \theta'$ at $\theta_0$ is equal to $k$.
     The condition is necessary because the distribution of $X_T$ depends on $\theta$ only through $\tau$, irrespective of the distribution of $u_t$. It is not sufficient since, unless all state variables are observed, $\tau$ may itself be unidentifiable.

  15. Local identification: comments
     The local identifiability of a point $\theta_0$ can be established by verifying that the Jacobian matrix $J(T)$ has full column rank when evaluated at $\theta_0$. Local identification at one point in $\Theta$, however, does not guarantee that the model is locally identified everywhere in the parameter space: there may be points where the model is locally identified, and others where it is not.

  16. Local identifiability everywhere in $\Theta$ is necessary but not sufficient for global identification. However:
     1. Local identification makes consistent estimation of $\theta$ possible, and is sufficient for the estimator to have the usual asymptotic properties (see Florens et al., 2008).
     2. With the help of the Jacobian matrix we can detect problems that are a common cause of identification failures in DSGE models:
     (a) A deep parameter $\theta_j$ does not affect the solution of the model. Consequently, $\partial m_T / \partial \theta_j = 0$ for any $T$, and the rank condition for identification fails (e.g. the unidentifiability of the Taylor rule coefficients in a simple New Keynesian model, Cochrane (2007)).

  17. (b) Two or more parameters enter the solution in a manner that makes them indistinguishable, e.g. as a product or a ratio. As a result it is impossible to identify the parameters separately, and some of the columns of the Jacobian matrix are linearly dependent (e.g. the equivalence between the intertemporal and multisectoral investment adjustment cost parameters in Kim (2003)).
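Failure pattern (b) has a simple signature in the Jacobian. If two parameters α and β enter the moments only through the product p = αβ, the chain rule gives ∂m/∂α = β g'(p) and ∂m/∂β = α g'(p), so the two columns are proportional. A hypothetical moment map makes this concrete:

```python
import numpy as np

# Sketch of failure pattern (b): parameters entering only as a product.
# Hypothetical moment map m(theta) = (p, p^2, 3p) with p = alpha*beta.
alpha, beta = 0.5, 2.0
p = alpha * beta
g_prime = np.array([1.0, 2.0 * p, 3.0])   # d(p, p^2, 3p)/dp

# Chain rule: columns of J are beta*g'(p) and alpha*g'(p), i.e. proportional
J = np.column_stack([beta * g_prime, alpha * g_prime])
print(np.linalg.matrix_rank(J))  # 1: alpha and beta not separately identified
```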

  18. In these papers the problems are discovered by solving the models explicitly in terms of the deep parameters. That approach, however, is not feasible for larger models, which can only be solved numerically. We can instead exploit the fact that the Jacobian matrix in Theorem 2 can be computed analytically for linearized models of any size or complexity.

  19. Computing the Jacobian matrix
     The simplest method for computing the Jacobian matrix of the mapping from $\theta$ to $m_T$ is numerical differentiation. The problem with this approach is that numerical derivatives tend to be inaccurate for highly non-linear functions. In the present context this may lead to wrong conclusions concerning the rank of the Jacobian matrix and the identifiability of the model parameters. For this reason, Iskrev (2010) applied analytical derivatives, employing implicit derivation. As shown in Iskrev (2010), it helps to consider the mapping from $\theta$ to $m_T$ as comprising two steps: (1) a transformation from $\theta$ to $\tau$; (2) a transformation from $\tau$ to $m_T$.

  20. Thus, the Jacobian matrix can be expressed as

     $$J(T) = \frac{\partial m_T}{\partial \tau'} \frac{\partial \tau}{\partial \theta'} \qquad (12)$$

     The derivation of the first term on the right-hand side is straightforward, since the function mapping $\tau$ into $m_T$ is available explicitly (see the definition of $\tau$ and equations (5)-(7)); thus the Jacobian matrix $J_1(T) := \partial m_T / \partial \tau'$ may be obtained by direct differentiation. The elements of the second term $J_2(T) := \partial \tau / \partial \theta'$, the Jacobian of the transformation from $\theta$ to $\tau$, can be divided into three groups corresponding to the three blocks of $\tau$: $\tau_z$, $\tau_A$ and $\tau_\Omega$. In Iskrev (2010) it is assumed that $\hat z^*$ is a known function of $\theta$, implied
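The two-step structure of (12) can be checked numerically on toy mappings: compute J1 and J2 separately and verify that their product matches the Jacobian of the composed map. The mappings below are purely illustrative placeholders for the θ → τ and τ → m_T transformations:

```python
import numpy as np

# Sketch of the chain rule in (12) on hypothetical toy mappings.
def tau_of_theta(theta):             # theta -> tau (reduced-form parameters)
    a, b = theta
    return np.array([a * b, a + b, b**2])

def m_of_tau(tau):                   # tau -> moments
    return np.array([tau[0] + tau[2], tau[1] * tau[2]])

def num_jacobian(f, x, h=1e-6):
    x = np.asarray(x, float)
    cols = []
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = h
        cols.append((f(x + e) - f(x - e)) / (2.0 * h))
    return np.column_stack(cols)

theta0 = np.array([0.3, 0.7])
J2 = num_jacobian(tau_of_theta, theta0)            # d tau / d theta'
J1 = num_jacobian(m_of_tau, tau_of_theta(theta0))  # d m / d tau'
J_direct = num_jacobian(lambda th: m_of_tau(tau_of_theta(th)), theta0)
print(np.allclose(J1 @ J2, J_direct, atol=1e-6))   # True: J = J1 * J2
```

In the toolbox, J1 has a closed form derived from (5)-(7), so only J2 requires model-specific differentiation.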
