
Learning in Macroeconomic Models, Wouter J. Den Haan, London School of Economics - PowerPoint PPT Presentation



  1. Learning in Macroeconomic Models. Wouter J. Den Haan, London School of Economics. © 2011 by Wouter J. Den Haan. August 28, 2011

  2. Overview • A bit of history of economic thought • How expectations are formed can matter in the long run • Seignorage model • Learning without feedback • Learning with feedback • Adaptive learning • Least-squares learning • Bayesian versus least-squares learning • Decision-theoretic foundation of Adam & Marcet

  3. Overview continued • Learning & the PEA (parameterized expectations algorithm) • Learning & sunspots

  4. Why are expectations important? • Most economic problems have intertemporal consequences ⇒ the future matters • Moreover, the future is uncertain • Characteristics/behavior of other agents can also be uncertain ⇒ expectations can also matter in one-period problems

  5. History of economic thought • adaptive expectations: $\hat{E}_t[x_{t+1}] = \hat{E}_{t-1}[x_t] + \omega \left( x_t - \hat{E}_{t-1}[x_t] \right)$ • very popular until the 1970s
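A minimal sketch of this updating rule in Python (the gain value, the data, and the function name are illustrative, not from the slides):

    # Adaptive expectations: E_t[x_{t+1}] = E_{t-1}[x_t] + omega*(x_t - E_{t-1}[x_t]).
    # omega in (0,1] is the gain: larger omega puts more weight on the latest observation.
    def adaptive_forecast(x_history, omega=0.3, initial_forecast=0.0):
        forecast = initial_forecast                  # E_{t-1}[x_t] before any data
        for x_t in x_history:
            forecast += omega * (x_t - forecast)     # revise after observing x_t
        return forecast                              # E_T[x_{T+1}]

Note that the forecast for every horizon $j \ge 1$ moves only when a new $x_t$ is observed, which is exactly the passivity criticized on the next slide.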

  6. History of economic thought. Problematic features of adaptive expectations: • agents can be systematically wrong • agents are completely passive: $\hat{E}_t[x_{t+j}]$, $j \ge 1$, only changes (at best) when $x_t$ changes • ⇒ Pigou cycles are not possible • ⇒ model predictions understate the speed of adjustment (e.g., for disinflation policies)

  7. History of economic thought. Problematic features of adaptive expectations: • adaptive expectations about $x_{t+1}$ $\neq$ adaptive expectations about $\Delta x_{t+1}$ (e.g., the price level versus inflation) • why wouldn't (some) agents use existing models to form expectations? • expectations matter, but there is still no role for randomness (of future realizations), so no reason for buffer-stock savings • no role for (model) uncertainty either

  8. History of economic thought. Rational expectations became popular because: • agents are no longer passive machines, but forward looking • i.e., agents think through the possible consequences of their own actions and those of others (in particular the government) • there is consistency between the model's predictions and the expectations of the agents the model describes • randomness of future events becomes important, e.g., $E_t[c_{t+1}^{-\gamma}] \neq (E_t[c_{t+1}])^{-\gamma}$
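The last point is Jensen's inequality applied to the convex function $c^{-\gamma}$; a quick numeric check in Python (the two-point distribution is mine, for illustration):

    # E[c^(-gamma)] vs (E[c])^(-gamma) for a two-point distribution of c_{t+1}.
    gamma = 2.0
    c_low, c_high, p = 0.5, 1.5, 0.5                # illustrative outcomes, equal odds
    e_of_marginal = p * c_low**(-gamma) + (1 - p) * c_high**(-gamma)  # approx 2.22
    marginal_of_e = (p * c_low + (1 - p) * c_high) ** (-gamma)        # exactly 1.0
    assert e_of_marginal > marginal_of_e            # strict, since c^(-gamma) is convex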

  9. History of economic thought. Problematic features of rational expectations: • agents have to know the complete model • agents must make correct predictions for all possible realizations, on and off the equilibrium path • the costs of forming expectations are ignored • how agents come to have rational expectations is not explained

  10. History of economic thought. Problematic features of rational expectations: • makes the analysis more complex • behavior this period depends on behavior tomorrow for all possible realizations ⇒ we have to solve for policy functions, not just simulate the economy

  11. Expectations matter • A simple example to show that how expectations are formed can matter in the long run • See Adam, Evans & Honkapohja (2006) for a more elaborate analysis

  12. Model • Overlapping generations • Agents live for 2 periods • Agents save by holding money • No random shocks

  13. Model $$\max_{c_{1,t},\, c_{2,t}} \; \ln c_{1,t} + \ln c_{2,t} \quad \text{s.t.} \quad c_{2,t} \le 1 + (2 - c_{1,t}) \frac{P_t}{P^e_{t+1}}$$ No randomness ⇒ we can work with expected values of variables instead of expected utility

  14. Agent's behavior. First-order condition: $$\frac{1}{c_{1,t}} = \frac{P_t}{P^e_{t+1}} \frac{1}{c_{2,t}} = \frac{1}{\pi^e_{t+1} c_{2,t}}, \qquad \pi^e_{t+1} \equiv \frac{P^e_{t+1}}{P_t}$$ Solution for consumption: $c_{1,t} = 1 + \pi^e_{t+1}/2$. Solution for real money balances (= savings): $m_t = 2 - c_{1,t} = 1 - \pi^e_{t+1}/2$
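A small Python sketch of these closed-form decisions (the function name is mine); it also verifies the first-order condition numerically:

    def household(pi_e):
        # Young agent's choices given expected gross inflation pi_e = P^e_{t+1}/P_t.
        c1 = 1 + pi_e / 2        # consumption when young
        m = 2 - c1               # real money balances = saving = 1 - pi_e/2
        c2 = 1 + m / pi_e        # old-age consumption: endowment 1 plus real balances
        return c1, c2, m

    c1, c2, m = household(pi_e=1.0)                # zero expected inflation
    assert abs(1 / c1 - 1 / (1.0 * c2)) < 1e-12   # FOC: 1/c1 = 1/(pi_e * c2)

With $\pi^e = 1$ the agent smooths perfectly ($c_1 = c_2 = 1.5$); higher expected inflation lowers money demand.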

  15. Money supply: $M^s_t = M$

  16. Equilibrium. Equilibrium in period $t$ implies $M_t = M$: $$M = P_t \left( 1 - \pi^e_{t+1}/2 \right) \quad \Longrightarrow \quad P_t = \frac{M}{1 - \pi^e_{t+1}/2}$$

  17. Equilibrium. Combining with equilibrium in period $t-1$ gives $$\pi_t = \frac{P_t}{P_{t-1}} = \frac{1 - \pi^e_t/2}{1 - \pi^e_{t+1}/2}$$ Thus: $\pi^e_t$ & $\pi^e_{t+1}$ ⇒ money demand ⇒ actual inflation $\pi_t$

  18. Rational expectations solution. Optimizing behavior & equilibrium: $$\frac{P_t}{P_{t-1}} = T(\pi^e_t, \pi^e_{t+1})$$ Rational expectations equilibrium (REE): $$\pi^e_t = \pi_t \;\Longrightarrow\; \pi_t = T(\pi_t, \pi_{t+1}) \;\Longrightarrow\; \pi_{t+1} = 3 - \frac{2}{\pi_t} = R(\pi_t)$$

  19. Multiple steady states • There are two solutions to $\pi = 3 - 2/\pi$ ⇒ there are two steady states • $\pi = 1$ (no inflation) and perfect consumption smoothing • $\pi = 2$ (high inflation) and no consumption smoothing at all • The initial value for $\pi_t$ is not given, but given an initial condition the time path is fully determined

  20. Rational expectations and stability [Figure: $\pi_{t+1} = R(\pi_t)$ plotted against the 45° line for $\pi_t \in [0.8, 2]$; the curve crosses the 45° line at the two steady states $\pi = 1$ and $\pi = 2$]

  21. Rational expectations and stability. With $\pi_1$ the value in period 1: • $\pi_1 < 1$: divergence • $\pi_1 = 1$: economy stays at the low-inflation steady state • $\pi_1 > 1$: convergence to the high-inflation steady state
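A short sketch iterating the REE map $\pi_{t+1} = 3 - 2/\pi_t$ from different starting values reproduces these three cases (Python; starting values are mine):

    def ree_path(pi1, T=20):
        # Iterate pi_{t+1} = 3 - 2/pi_t starting from pi1.
        path = [pi1]
        for _ in range(T - 1):
            path.append(3 - 2 / path[-1])
        return path

    print(ree_path(1.0)[:4])   # stays at the low-inflation steady state: 1, 1, 1, 1
    print(ree_path(1.1)[:4])   # converges to 2: 1.1, 1.18, 1.31, 1.47, ...
    print(ree_path(0.9)[:4])   # diverges: 0.9, 0.78, 0.43, -1.67, ...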

  22. Alternative expectations • Suppose that $\pi^e_{t+1} = \frac{1}{2} \pi_{t-1} + \frac{1}{2} \pi^e_t$ • still the same two steady states, but • $\pi = 1$ is stable • $\pi = 2$ is not stable (a simulation sketch follows)
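A minimal simulation of these dynamics, combining the belief rule above with the inflation equation from slide 17 (Python; initial beliefs match the next slide's figure):

    def adaptive_path(pi_e1=1.5, pi_e2=1.5, T=15):
        # pi_e[t+1] = 0.5*pi[t-1] + 0.5*pi_e[t];  pi[t] = (1 - pi_e[t]/2)/(1 - pi_e[t+1]/2)
        pi_e = [pi_e1, pi_e2]   # initial beliefs pi^e_1, pi^e_2
        pi = []                 # realized inflation
        for t in range(T):
            if t >= 1:          # revise beliefs using last observed inflation
                pi_e.append(0.5 * pi[t - 1] + 0.5 * pi_e[t])
            pi.append((1 - pi_e[t] / 2) / (1 - pi_e[t + 1] / 2))
        return pi               # path converges to the stable steady state pi = 1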

  23. Adaptive expectations and stability [Figure: simulated path of $\pi_t$ over 15 periods with initial conditions $\pi^e_1 = 1.5$, $\pi^e_2 = 1.5$; inflation oscillates between roughly 0.65 and 1.1 and converges to the stable steady state $\pi = 1$]

  24. Learning without feedback. Setup: 1. Agents know the complete model, except that they do not know the dgp (data-generating process) of the exogenous processes 2. Agents use observations to update beliefs 3. Exogenous processes do not depend on beliefs ⇒ no feedback from learning to the behavior of the variable being forecasted

  25. Learning without feedback & convergence • If agents can learn the dgp of the exogenous processes, then beliefs typically converge to the REE • They may not learn the correct dgp if • agents use a limited amount of data • agents use a misspecified time-series process

  26. Learning without feedback - Example • Consider the following asset-pricing model: $$P_t = E_t \left[ \beta (P_{t+1} + D_{t+1}) \right]$$ • If $\lim_{j \to \infty} E_t [\beta^j D_{t+j}] = 0$, then $$P_t = E_t \left[ \sum_{j=1}^{\infty} \beta^j D_{t+j} \right]$$

  27. Learning without feedback - Example • Suppose that $$D_t = \rho D_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma^2) \qquad (1)$$ • REE: $$P_t = \frac{\beta \rho}{1 - \beta \rho} D_t$$ (note that $P_t$ can be negative, so $P_t$ is like a deviation from the steady-state level)
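The REE coefficient follows from a guess-and-verify step (a sketch of the standard algebra):

    % Guess P_t = a D_t and impose P_t = E_t[ beta (P_{t+1} + D_{t+1}) ]:
    \[
      a D_t = \beta \, E_t\left[ (a + 1) D_{t+1} \right] = \beta (a + 1) \rho D_t
      \quad \Longrightarrow \quad
      a = \frac{\beta \rho}{1 - \beta \rho}.
    \]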

  28. Learning without feedback - Example • Suppose that agents do not know the value of $\rho$ • Approach here: if the period-$t$ belief equals $\hat{\rho}_t$, then $$P_t = \frac{\beta \hat{\rho}_t}{1 - \beta \hat{\rho}_t} D_t$$ • Agents ignore that their beliefs may change, i.e., $\hat{E}_t [P_{t+j}]$ is assumed to equal $\frac{\beta \hat{\rho}_t}{1 - \beta \hat{\rho}_t} \hat{E}_t [D_{t+j}]$, with the current belief $\hat{\rho}_t$ in place of the future belief $\hat{\rho}_{t+j}$

  29. Learning without feedback - Example. How to learn about $\rho$? • Least-squares learning using $\{D_t\}_{t=1}^{T}$ & the correct dgp • Least-squares learning using $\{D_t\}_{t=1}^{T}$ & an incorrect dgp • Least-squares learning using the rolling sample $\{D_t\}_{t=T-\bar{T}}^{T}$ & the correct dgp • Least-squares learning using $\{D_t\}_{t=T-\bar{T}}^{T}$ & an incorrect dgp • Bayesian updating (also called rational learning) • Lots of other possibilities (a least-squares sketch follows)
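A minimal sketch of the first option, least-squares learning with the correct dgp: simulate (1) and re-estimate $\rho$ by OLS on all data through period $t$ (Python; parameter values are illustrative, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    rho_true, sigma, T = 0.9, 0.1, 500     # illustrative values

    # Simulate the dgp D_t = rho*D_{t-1} + eps_t.
    D = np.zeros(T)
    for t in range(1, T):
        D[t] = rho_true * D[t - 1] + sigma * rng.standard_normal()

    # Least-squares learning: OLS slope of D_t on D_{t-1}, using data through period t.
    rho_hat = [np.dot(D[:t - 1], D[1:t]) / np.dot(D[:t - 1], D[:t - 1])
               for t in range(3, T + 1)]
    print(rho_hat[-1])   # close to 0.9: with the correct dgp, beliefs converge to the REE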

  30. Convergence again • Suppose that the true dgp is given by $$D_t = \rho_t D_{t-1} + \varepsilon_t, \quad \rho_t \in \{\rho_{low}, \rho_{high}\}, \quad \rho_{t+1} = \begin{cases} \rho_{high} & \text{w.p. } p(\rho_t) \\ \rho_{low} & \text{w.p. } 1 - p(\rho_t) \end{cases}$$ • Suppose that agents think the true dgp is given by $D_t = \rho D_{t-1} + \varepsilon_t$ • ⇒ agents will never learn the true dgp (see homework for the importance of the sample used to estimate $\rho$)
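A sketch of this misspecified case (the $\rho$ values and switching probability are mine, and I use a symmetric staying probability in place of $p(\rho_t)$ for simplicity): the full-sample OLS estimate settles strictly between $\rho_{low}$ and $\rho_{high}$, so the agents' constant-$\rho$ belief never matches either regime:

    import numpy as np

    rng = np.random.default_rng(1)
    rho_low, rho_high, sigma, T = 0.3, 0.9, 0.1, 20000   # illustrative values
    p_stay = 0.98            # probability of keeping the current regime (assumption)

    D = np.zeros(T)
    rho_t = rho_high
    for t in range(1, T):
        if rng.random() > p_stay:                        # occasional regime switch
            rho_t = rho_low if rho_t == rho_high else rho_high
        D[t] = rho_t * D[t - 1] + sigma * rng.standard_normal()

    # Misspecified belief: one constant rho, estimated by OLS on the full sample.
    rho_hat = np.dot(D[:-1], D[1:]) / np.dot(D[:-1], D[:-1])
    print(rho_hat)           # lands between 0.3 and 0.9, never at either true value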
