Learning and Technology Growth Regimes


  1. Learning and Technology Growth Regimes
  Andrew Foerster (FRB Kansas City), Christian Matthes (FRB Richmond)
  January 2018
  The views expressed are solely those of the authors and do not necessarily reflect the views of the Federal Reserve Banks of Kansas City and Richmond or the Federal Reserve System.

  2-5. Introduction
  • Markov-Switching used in a Variety of Economic Environments
    • Monetary or Fiscal Policy
    • Variances of Shocks
    • Persistence of Exogenous Processes
  • Typically Models Assume Full Information about State Variables, Regime Perfectly Observed

  6. Do We Always Know The Regime? Empirical Evidence for Technology Growth Regimes
  • Utilization-adjusted TFP (z_t)
  • Investment-specific technology (u_t), measured as the PCE deflator divided by the nonresidential fixed investment deflator
  • Estimate regime-switching processes (see the simulation sketch after this slide):

  $$z_t = \mu_z(s_t^{\mu_z}) + z_{t-1} + \sigma_z(s_t^{\sigma_z})\,\varepsilon_{z,t}$$
  $$u_t = \mu_u(s_t^{\mu_u}) + u_{t-1} + \sigma_u(s_t^{\sigma_u})\,\varepsilon_{u,t}$$

  • Two Regimes Each
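  To make the estimated process concrete, here is a minimal simulation sketch in Python: it draws the two independent two-state chains (one for the mean regime, one for the volatility regime) and iterates the z_t equation. The point estimates come from the "Parameter Estimates" slide below; the horizon and starting regimes are arbitrary choices for illustration.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Point estimates from the "Parameter Estimates" slide (TFP, j = z).
  mu = np.array([0.6872, 1.8928])        # mu_z(L), mu_z(H)
  sigma = np.array([2.5591, 3.8949])     # sigma_z(L), sigma_z(H)
  P_mu = np.array([[0.9855, 0.0145],     # transition matrix for the mean chain
                   [0.0167, 0.9833]])
  P_sigma = np.array([[0.9803, 0.0197],  # transition matrix for the volatility chain
                      [0.0204, 0.9796]])

  T = 280
  z = np.zeros(T)
  s_mu, s_sigma = 0, 0  # start both chains in the low (L) regime
  for t in range(1, T):
      s_mu = rng.choice(2, p=P_mu[s_mu])
      s_sigma = rng.choice(2, p=P_sigma[s_sigma])
      # z_t = mu_z(s_t^{mu_z}) + z_{t-1} + sigma_z(s_t^{sigma_z}) * eps_{z,t}
      z[t] = mu[s_mu] + z[t - 1] + sigma[s_sigma] * rng.standard_normal()
  ```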

  7. Data
  [Figure: log TFP and log IST, index 1947:Q1 = 100, quarterly, 1950-2010]

  8. Table: Evidence for TFP Regimes from Fernald

  Period       Mean   Std Dev
  1947-1973    2.00   3.68
  1974-1995    0.60   3.44
  1996-2004    1.99   2.66
  2005-2017    0.39   2.35

  9. Estimated (Filtered) Regimes
  [Figure: four panels of filtered regime probabilities (0 to 1) over the postwar sample: Low TFP Growth, Low IST Growth, Low TFP Volatility, Low IST Volatility]

  10. Table: Parameter Estimates (standard errors in parentheses)

                μ_j(L)    μ_j(H)    σ_j(L)    σ_j(H)    P^{μ_j}_{LL}  P^{μ_j}_{HH}  P^{σ_j}_{LL}  P^{σ_j}_{HH}
  TFP (j = z)   0.6872    1.8928    2.5591    3.8949    0.9855        0.9833        0.9803        0.9796
                (0.6823)  (0.6476)  (0.2805)  (0.3744)  (0.0335)      (0.0295)      (0.0215)      (0.0232)
  IST (j = u)   0.4128    2.0612    1.1782    3.9969    0.9948        0.9839        0.9627        0.9026
                (0.1112)  (0.1694)  (0.0745)  (0.4091)  (0.0049)      (0.0162)      (0.0197)      (0.0463)
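  These persistence estimates imply very long-lived regimes. For a two-state chain the expected duration of regime i is 1/(1 - p_ii); assuming the chains are estimated at the data's quarterly frequency, a quick check gives

  $$\frac{1}{1-P^{\mu_z}_{LL}} = \frac{1}{1-0.9855} \approx 69 \text{ quarters} \approx 17 \text{ years}, \qquad \frac{1}{1-P^{\mu_u}_{LL}} = \frac{1}{1-0.9948} \approx 192 \text{ quarters} \approx 48 \text{ years}.$$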

  11-14. This Paper
  • Provide General Methodology for Perturbing MS Models where Agents Infer the Regime by Bayesian Updating
  • Agents are Fully Rational
  • Extension of Work on Full Information MS Models (Foerster, Rubio, Waggoner & Zha)
  • Key Issue: How do we Define the Steady State? What Point do we Approximate Around?
  • Joint Approximations to Both Learning Process and Decision Rules
  • Second- or Higher-Order Approximations, Often Not Considered in the Learning Literature
  • Use an RBC Model as a Laboratory

  15-16. The Model

  $$\tilde{E}_0 \sum_{t=0}^{\infty} \beta^t \left[\log c_t + \xi \log(1 - l_t)\right]$$

  $$c_t + x_t = y_t$$
  $$y_t = \exp(z_t)\, k_{t-1}^{\alpha}\, l_t^{1-\alpha}$$
  $$k_t = (1-\delta)\, k_{t-1} + \exp(u_t)\, x_t$$
  $$z_t = \mu_z(s_t^{\mu_z}) + z_{t-1} + \sigma_z(s_t^{\sigma_z})\,\varepsilon_{z,t}$$
  $$u_t = \mu_u(s_t^{\mu_u}) + u_{t-1} + \sigma_u(s_t^{\sigma_u})\,\varepsilon_{u,t}$$
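  As a sanity check on the constraints, a minimal one-period sketch; the function name is hypothetical and α, δ are illustrative values, not the paper's calibration:

  ```python
  import numpy as np

  alpha, delta = 0.33, 0.025  # illustrative, not the paper's calibration

  def step(k_prev, z_t, u_t, c_t, l_t):
      """One period of the model's constraints, given choices c_t and l_t."""
      y_t = np.exp(z_t) * k_prev**alpha * l_t**(1 - alpha)  # production
      x_t = y_t - c_t                                       # resource constraint: c + x = y
      k_t = (1 - delta) * k_prev + np.exp(u_t) * x_t        # IST scales new investment
      return y_t, x_t, k_t
  ```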

  17-19. Bayesian Learning
  • Observables and the shocks they imply, conditional on the regime:

  $$\tilde{y}_t = [z_t \;\; u_t]' = \tilde{\lambda}_{s_t}(x_{t-1}, \varepsilon_t), \qquad \varepsilon_t = \lambda_{s_t}(\tilde{y}_t, x_{t-1})$$

  • Updated regime probabilities combine likelihood and prior (a filtering sketch follows this slide):

  $$\psi_{i,t} = \frac{\overbrace{J_{s_t=i}(\tilde{y}_t, x_{t-1})\,\varphi_\varepsilon\!\left(\lambda_{s_t=i}(\tilde{y}_t, x_{t-1})\right)}^{\text{likelihood}}\;\overbrace{\textstyle\sum_{s=1}^{n_s} p_{s,i}\,\psi_{s,t-1}}^{\text{prior}}}{\sum_{j=1}^{n_s} J_{s_t=j}(\tilde{y}_t, x_{t-1})\,\varphi_\varepsilon\!\left(\lambda_{s_t=j}(\tilde{y}_t, x_{t-1})\right)\sum_{s=1}^{n_s} p_{s,j}\,\psi_{s,t-1}}$$
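  A minimal sketch of one step of this updating formula, assuming i.i.d. standard normal structural shocks so that φ_ε is a product of univariate normal densities; the function and argument names are hypothetical, and the implied shocks and Jacobian terms are taken as inputs:

  ```python
  import numpy as np
  from scipy.stats import norm

  def update_beliefs(psi_prev, eps_implied, jac, P):
      """One step of the Bayesian updating formula.

      psi_prev    : (n_s,) prior-period beliefs psi_{s,t-1}
      eps_implied : (n_s, n_eps) implied shocks lambda_{s_t=i}(y_t, x_{t-1}), one row per regime i
      jac         : (n_s,) Jacobian terms J_{s_t=i}(y_t, x_{t-1})
      P           : (n_s, n_s) transition matrix with P[s, i] = p_{s,i}
      """
      prior = P.T @ psi_prev                                 # sum_s p_{s,i} psi_{s,t-1}
      likelihood = jac * norm.pdf(eps_implied).prod(axis=1)  # J_i * phi_eps(lambda_i)
      post = likelihood * prior
      return post / post.sum()                               # normalize to get psi_{i,t}
  ```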

  20. Bayesian Learning
  • To Keep Probabilities between 0 and 1, Define (sketched below)

  $$\eta_{i,t} = \log\left(\frac{\psi_{i,t}}{1-\psi_{i,t}}\right)$$

  • Expectations then integrate over future shocks and weight regimes by the implied probabilities:

  $$\tilde{E}_t f(\ldots) = \sum_{s=1}^{n_s} \sum_{s'=1}^{n_s} \frac{p_{s,s'}}{1+\exp(-\eta_{s,t})} \int f(\ldots)\,\varphi_\varepsilon(\varepsilon')\, d\varepsilon'$$
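  The transform and its inverse, as a two-line sketch in the same notation:

  ```python
  import numpy as np

  def to_log_odds(psi):
      # eta_{i,t} = log(psi_{i,t} / (1 - psi_{i,t}))
      return np.log(psi / (1.0 - psi))

  def to_probability(eta):
      # the weight appearing in the expectation: 1 / (1 + exp(-eta)) = psi
      return 1.0 / (1.0 + np.exp(-eta))
  ```

  By construction to_probability(to_log_odds(psi)) returns psi, while eta itself is unbounded, which is what makes it a convenient state variable for perturbation.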

  21. Issues When Applying Perturbation to MS Models
  • What Point Should we Approximate Around?
  • What Markov-Switching Parameters Should be Perturbed?
  • Best Understood in an Example: Assume we are Only Interested in Approximating a (Stationary) TFP Process

  22. Equilibrium Conditions
  • Equilibrium Conditions (a numeric check follows this slide):

  $$\mu(s_t) + \sigma(s_t)\,\varepsilon_t - z_t = 0$$

  $$\log\frac{\frac{1}{\sigma(1)}\varphi_\varepsilon\!\left(\frac{z_t-\mu(1)}{\sigma(1)}\right)\left(\frac{p_{1,1}}{1+\exp(-\eta_{1,t-1})}+\frac{p_{2,1}}{1+\exp(-\eta_{2,t-1})}\right)}{\frac{1}{\sigma(2)}\varphi_\varepsilon\!\left(\frac{z_t-\mu(2)}{\sigma(2)}\right)\left(\frac{p_{1,2}}{1+\exp(-\eta_{1,t-1})}+\frac{p_{2,2}}{1+\exp(-\eta_{2,t-1})}\right)} - \eta_{1,t} = 0$$

  $$\log\frac{\frac{1}{\sigma(2)}\varphi_\varepsilon\!\left(\frac{z_t-\mu(2)}{\sigma(2)}\right)\left(\frac{p_{1,2}}{1+\exp(-\eta_{1,t-1})}+\frac{p_{2,2}}{1+\exp(-\eta_{2,t-1})}\right)}{\frac{1}{\sigma(1)}\varphi_\varepsilon\!\left(\frac{z_t-\mu(1)}{\sigma(1)}\right)\left(\frac{p_{1,1}}{1+\exp(-\eta_{1,t-1})}+\frac{p_{2,1}}{1+\exp(-\eta_{2,t-1})}\right)} - \eta_{2,t} = 0$$
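  To verify the mechanics of this system numerically, a small sketch with illustrative two-regime parameter values (not the paper's estimates); the function name is hypothetical:

  ```python
  import numpy as np
  from scipy.stats import norm

  # Illustrative two-regime values, not the paper's estimates.
  mu = np.array([0.5, 2.0])       # mu(1), mu(2)
  sigma = np.array([2.5, 4.0])    # sigma(1), sigma(2)
  P = np.array([[0.98, 0.02],     # rows: p_{1,1} p_{1,2}
                [0.02, 0.98]])    #       p_{2,1} p_{2,2}

  def eta_update(z_t, eta_prev):
      """Evaluate the two log-belief conditions for given z_t and eta_{t-1}."""
      weights = 1.0 / (1.0 + np.exp(-eta_prev))    # 1 / (1 + exp(-eta_{i,t-1}))
      prior = P.T @ weights                        # p_{1,i} w_1 + p_{2,i} w_2
      like = norm.pdf((z_t - mu) / sigma) / sigma  # (1/sigma(i)) phi((z_t - mu(i)) / sigma(i))
      w = like * prior
      return np.array([np.log(w[0] / w[1]),        # eta_{1,t}
                       np.log(w[1] / w[0])])       # eta_{2,t}
  ```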

  23. Steady State
  • Steady State of TFP is Independent of σ_z
  • (In the RBC Model We Rescale all Variables to Make Everything Stationary)
  • The First Equation Would Also Appear in a Full Information Version of the Model
  • So Under Full Information, Perturbing σ_z is not Necessary and Leads to a Loss of Information: the Partition Principle of Foerster, Rubio, Waggoner & Zha
  • What About Learning?

  24. Steady State
  • Steady State with Naive Perturbation (all switching parameters perturbed, so μ(s_t) and σ(s_t) collapse to regime-independent μ̄ and σ̄):

  $$\bar{\mu} - z_{ss} = 0$$

  $$\log\frac{\frac{1}{\bar{\sigma}}\varphi_\varepsilon\!\left(\frac{z_{ss}-\bar{\mu}}{\bar{\sigma}}\right)\left(\frac{p_{1,1}}{1+\exp(-\eta_{1,ss})}+\frac{p_{2,1}}{1+\exp(-\eta_{2,ss})}\right)}{\frac{1}{\bar{\sigma}}\varphi_\varepsilon\!\left(\frac{z_{ss}-\bar{\mu}}{\bar{\sigma}}\right)\left(\frac{p_{1,2}}{1+\exp(-\eta_{1,ss})}+\frac{p_{2,2}}{1+\exp(-\eta_{2,ss})}\right)} - \eta_{1,ss} = 0$$

  $$\log\frac{\frac{1}{\bar{\sigma}}\varphi_\varepsilon\!\left(\frac{z_{ss}-\bar{\mu}}{\bar{\sigma}}\right)\left(\frac{p_{1,2}}{1+\exp(-\eta_{1,ss})}+\frac{p_{2,2}}{1+\exp(-\eta_{2,ss})}\right)}{\frac{1}{\bar{\sigma}}\varphi_\varepsilon\!\left(\frac{z_{ss}-\bar{\mu}}{\bar{\sigma}}\right)\left(\frac{p_{1,1}}{1+\exp(-\eta_{1,ss})}+\frac{p_{2,1}}{1+\exp(-\eta_{2,ss})}\right)} - \eta_{2,ss} = 0$$
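  A worked observation on this system (an inference from the equations as reconstructed, not stated on the slide): because μ̄ and σ̄ no longer depend on the regime, the density terms in numerator and denominator are identical and cancel. The first equation then gives z_ss = μ̄, and with ψ_{i,ss} = 1/(1 + exp(−η_{i,ss})) the belief equations reduce to

  $$\eta_{1,ss} = \log\frac{p_{1,1}\,\psi_{1,ss} + p_{2,1}\,\psi_{2,ss}}{p_{1,2}\,\psi_{1,ss} + p_{2,2}\,\psi_{2,ss}},$$

  which is satisfied when (ψ_{1,ss}, ψ_{2,ss}) is the ergodic distribution of the transition matrix.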
