SLIDE 1

Learning in Macroeconomic Models

Wouter J. Den Haan London School of Economics

c by Wouter J. Den Haan

SLIDE 2

Intro Simple No Feedback Recursive LS With Feedback Topics

Overview

  • A bit of history of economic thought
  • How expectations are formed can matter in the long run
  • Seignorage model
  • Learning without feedback
  • Learning with feedback
  • Simple adaptive learning
  • Least-squares learning
  • Bayesian versus least-squares learning
  • Decision theoretic foundation of Adam & Marcet

2 / 95

SLIDE 3

Overview continued

Topics

  • Learning & PEA
  • Learning & sunspots

SLIDE 4

Why are expectations important?

  • Most economic problems have intertemporal consequences
    ⇒ the future matters
  • Moreover, the future is uncertain
  • Characteristics/behavior of other agents can also be uncertain
    ⇒ expectations can also matter in one-period problems

SLIDE 5

History of economic thought

  • adaptive expectations:

    E_t[x_{t+1}] = E_{t−1}[x_t] + ω (x_t − E_{t−1}[x_t])

  • very popular until the 1970s

SLIDE 6

History of economic thought

problematic features of adaptive expectations:

  • agents can be systematically wrong
  • agents are completely passive: E_t[x_{t+j}], j ≥ 1, only changes (at best) when x_t changes
    ⇒ Pigou cycles are not possible
    ⇒ model predictions underestimate the speed of adjustment (e.g. for disinflation policies)

SLIDE 7

History of economic thought

problematic features of adaptive expectations:

  • adaptive expectations about x_{t+1} ≠ adaptive expectations about ∆x_{t+1} (e.g. price level versus inflation)
  • why wouldn't (some) agents use existing models to form expectations?
  • expectations matter, but there is still no role for randomness (of future realizations)
    • so no reason for buffer-stock savings
  • no role for (model) uncertainty either

SLIDE 8

History of economic thought

rational expectations became popular because:

  • agents are no longer passive machines, but forward looking
    • i.e., agents think through the possible consequences of their own actions and those of others (in particular the government)
  • consistency between the predictions of the model and those of the agents being described
  • randomness of future events becomes important
    • e.g., E_t[c_{t+1}^{−γ}] ≠ (E_t[c_{t+1}])^{−γ}

SLIDE 9

History of economic thought

problematic features of rational expectations

  • agents have to know complete model
  • make correct predictions about all possible realizations
  • on and off the equilibrium path
  • costs of forming expectations are ignored
  • how agents get rational expectations is not explained

SLIDE 10

History of economic thought

problematic features of rational expectations

  • makes analysis more complex
  • behavior this period depends on behavior tomorrow for all possible realizations
    ⇒ we have to solve for policy functions, not just simulate the economy

SLIDE 11

Expectations matter

  • Simple example to show that how expectations are formed can

matter in the long run

  • See Adam, Evans, & Honkapohja (2006) for a more elaborate

analysis

SLIDE 12

Model

  • Overlapping generations
  • Agents live for 2 periods
  • Agents save by holding money
  • No random shocks

SLIDE 13

Model

max_{c_{1,t}, c_{2,t}}  ln c_{1,t} + ln c_{2,t}

s.t.  c_{2,t} ≤ 1 + (P_t / P^e_{t+1}) (2 − c_{1,t})

No randomness ⇒ we can work with expected values of variables instead of expected utility.

SLIDE 14

Agent’s behavior

First-order condition:

1/c_{1,t} = (P_t / P^e_{t+1}) (1/c_{2,t}) = (1/π^e_{t+1}) (1/c_{2,t})

Solution for consumption:

c_{1,t} = 1 + π^e_{t+1}/2

Solution for real money balances (= savings):

m_t = 2 − c_{1,t} = 1 − π^e_{t+1}/2

SLIDE 15

Money supply

Money supply is constant:  M^s_t = M

SLIDE 16

Equilibrium

Equilibrium in period t implies

M_t = M,   M/P_t = m_t = 1 − π^e_{t+1}/2

⇒ P_t = M / (1 − π^e_{t+1}/2)

SLIDE 17

Equilibrium

Combining with equilibrium in period t − 1 gives

π_t = P_t / P_{t−1} = (1 − π^e_t/2) / (1 − π^e_{t+1}/2)

Thus: higher π^e_{t+1} ⇒ lower money demand ⇒ higher actual inflation π_t

SLIDE 18

Rational expectations solution

Optimizing behavior & equilibrium:

P_t / P_{t−1} = T(π^e_t, π^e_{t+1})

Rational expectations equilibrium (REE): π_t = π^e_t

⇒ π_t = T(π_t, π_{t+1})

⇒ π_{t+1} = 3 − 2/π_t = R(π_t)

SLIDE 19

Multiple steady states

  • There are two solutions to

    π = 3 − 2/π

    ⇒ there are two steady states

  • π = 1 (no inflation) and perfect consumption smoothing
  • π = 2 (high inflation), money has no value & no consumption smoothing at all

SLIDE 20

Unique solution

  • The initial value of π_t is not given by the model, but given an initial condition the time path is fully determined
  • π_t converging to 2 means m_t converging to zero and P_t converging to ∞

SLIDE 21

Rational expectations and stability

[Figure: the map π_{t+1} = R(π_t) plotted against the 45° line, for π between 1 and 2]

SLIDE 22

Rational expectations and stability

π_1 : value in period 1

π_1 < 1 : divergence
π_1 = 1 : economy stays at the low-inflation steady state
π_1 > 1 : convergence to the high-inflation steady state
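The stability pattern above can be verified numerically by iterating the RE law of motion from slide 18. A minimal Python sketch (the map itself is from the slides; the specific starting values are illustrative):

```python
def R(pi):
    """RE law of motion for inflation: pi_{t+1} = 3 - 2/pi_t (slide 18)."""
    return 3.0 - 2.0 / pi

def iterate(pi1, periods):
    path = [pi1]
    for _ in range(periods):
        path.append(R(path[-1]))
    return path

# pi_1 = 1: the economy stays at the low-inflation steady state
assert abs(iterate(1.0, 200)[-1] - 1.0) < 1e-9

# pi_1 > 1: convergence to the high-inflation steady state pi = 2
assert abs(iterate(1.1, 200)[-1] - 2.0) < 1e-6

# pi_1 < 1: the path falls and quickly leaves the economically meaningful region
assert iterate(0.9, 3)[-1] < 0.0
```

The instability of π = 1 and stability of π = 2 follow from R'(π) = 2/π²: R'(1) = 2 > 1 and R'(2) = 1/2 < 1.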

SLIDE 23

Alternative expectations

  • Suppose that

    π^e_{t+1} = (1/2) π_{t−1} + (1/2) π^e_t

  • still the same two steady states, but
    • π = 1 is stable
    • π = 2 is not stable

SLIDE 24

Adaptive expectations and stability

[Figure: time path of π_t under adaptive expectations; initial conditions π^e_1 = π^e_2 = 1.5]
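The experiment in the figure can be reproduced by combining the adaptive rule with the equilibrium condition from slide 17. A sketch of that simulation; the exact recursion layout (which beliefs enter which period) is my assumption:

```python
# Adaptive expectations: pi^e_{t+1} = 0.5*pi_{t-1} + 0.5*pi^e_t
# Equilibrium:           pi_t = (1 - pi^e_t/2) / (1 - pi^e_{t+1}/2)
pe = [1.5, 1.5]          # pi^e_1, pi^e_2 as in the figure
pi = []                  # realized inflation pi_1, pi_2, ...
for t in range(200):
    pi_t = (1.0 - pe[t] / 2.0) / (1.0 - pe[t + 1] / 2.0)
    pi.append(pi_t)
    # adaptive updating: pi^e_{t+2} = 0.5*pi_t + 0.5*pi^e_{t+1}
    pe.append(0.5 * pi_t + 0.5 * pe[t + 1])

# under this rule the low-inflation steady state pi = 1 is stable
assert abs(pi[-1] - 1.0) < 1e-3
```

The path oscillates (the linearized system has complex eigenvalues with modulus below one) and settles at π = 1, in contrast with the RE dynamics where π = 1 is unstable.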

SLIDE 25

Learning without feedback

Setup:

1. Agents know the complete model, except they do not know the dgp of the exogenous processes
2. Agents use observations to update beliefs
3. Exogenous processes do not depend on beliefs

⇒ no feedback from learning to the behavior of the variable being forecasted

SLIDE 26

Learning without feedback & convergence

  • If agents can learn the dgp of the exogenous processes, then they typically converge to the REE
  • They may not learn the correct dgp if
    • agents use a limited amount of data
    • agents use a misspecified time-series process

SLIDE 27

Learning without feedback - Example

  • Consider the following asset pricing model:

    P_t = E_t[β (P_{t+1} + D_{t+1})]

  • If

    lim_{j→∞} β^j E_t[D_{t+j}] = 0

    then

    P_t = E_t[ Σ_{j=1}^∞ β^j D_{t+j} ]
SLIDE 28

Learning without feedback - Example

  • Suppose that

    D_t = ρ D_{t−1} + ε_t,  ε_t ~ N(0, σ²)   (1)

  • REE:

    P_t = βρ D_t / (1 − βρ)

    (note that P_t could be negative, so P_t is like a deviation from the steady state level)

SLIDE 29

Learning without feedback - Example

  • Suppose that agents do not know the value of ρ
  • Approach here: if the period-t belief is ρ̂_t, then

    P_t = β ρ̂_t D_t / (1 − β ρ̂_t)

  • Agents ignore that their beliefs may change, i.e.,

    E_t[P_{t+j}] = E_t[ β ρ̂_{t+j} D_{t+j} / (1 − β ρ̂_{t+j}) ]

    is assumed to equal

    (β ρ̂_t / (1 − β ρ̂_t)) E_t[D_{t+j}]
SLIDE 30

Learning without feedback - Example

How to learn about ρ?

  • Least-squares learning using {D_t}_{t=1}^T & the correct dgp
  • Least-squares learning using {D_t}_{t=1}^T & an incorrect dgp
  • Least-squares learning using {D_t}_{t=T−T̄}^T & the correct dgp
  • Least-squares learning using {D_t}_{t=T−T̄}^T & an incorrect dgp
  • Bayesian updating (also called rational learning)
  • Lots of other possibilities
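The first option (full sample, correct dgp) can be sketched in a few lines of Python: simulate (1), estimate ρ by OLS, and price with the learned belief. All parameter values are illustrative:

```python
import random

random.seed(0)
rho, beta, sigma, T = 0.9, 0.96, 0.1, 20000

# simulate D_t = rho*D_{t-1} + eps_t
D = [0.0]
for _ in range(T):
    D.append(rho * D[-1] + random.gauss(0.0, sigma))

# least-squares estimate of rho: regress D_t on D_{t-1}
num = sum(D[t - 1] * D[t] for t in range(1, len(D)))
den = sum(D[t - 1] ** 2 for t in range(1, len(D)))
rho_hat = num / den

# with the correct dgp and a long sample, beliefs are close to the truth ...
assert abs(rho_hat - rho) < 0.02

# ... and the implied price coefficient is close to its REE value
a_hat = beta * rho_hat / (1.0 - beta * rho_hat)
a_ree = beta * rho / (1.0 - beta * rho)
assert abs(a_hat - a_ree) < 1.0
```

With a short or rolling sample, or a misspecified regression, `rho_hat` can stay persistently away from ρ, which is the point of the other items in the list.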

SLIDE 31

Convergence again

  • Suppose that the true dgp is given by

    D_t = ρ_t D_{t−1} + ε_t,   ρ_t ∈ {ρ_low, ρ_high}

    ρ_{t+1} = ρ_high with probability p(ρ_t), and ρ_{t+1} = ρ_low with probability 1 − p(ρ_t)

  • Suppose that agents think the true dgp is given by

    D_t = ρ D_{t−1} + ε_t

  ⇒ Agents will never learn the true dgp

  (see homework for the importance of the sample used to estimate ρ)

SLIDE 32

Recursive least-squares

  • time-series model:

    y_t = x_t' γ + u_t

  • least-squares estimator:

    γ̂_T = R_T^{−1} X_T' Y_T / T

    where

    X_T' = [x_1 x_2 · · · x_T],   Y_T' = [y_1 y_2 · · · y_T],   R_T = X_T' X_T / T
SLIDE 33

Recursive least-squares

R_T = R_{T−1} + (x_T x_T' − R_{T−1}) / T

γ̂_T = γ̂_{T−1} + R_T^{−1} x_T (y_T − x_T' γ̂_{T−1}) / T
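The recursions can be checked directly against batch OLS, since they are exact identities once initialized with the first observation. A scalar Python sketch with illustrative data:

```python
import random

random.seed(1)
gamma_true, T = 2.0, 500
xs, ys = [], []
for _ in range(T):
    x = random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(gamma_true * x + random.gauss(0.0, 0.5))

# recursive formulation, initialized with the first observation
R = xs[0] ** 2                      # R_1 = x_1^2
g = ys[0] / xs[0]                   # gamma_1 = OLS on one observation
for t in range(2, T + 1):
    x, y = xs[t - 1], ys[t - 1]
    R = R + (x ** 2 - R) / t
    g = g + (x * (y - x * g) / R) / t

# batch OLS on the full sample gives the same answer
g_batch = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
assert abs(g - g_batch) < 1e-6
```

This is exactly the "standard LS gives the same answer" point on slide 36; the recursion only saves computation and makes proofs easier.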

SLIDE 34

Proof for R

Claim:  R_T = R_{T−1} + (x_T x_T' − R_{T−1}) / T

With R_T = X_T' X_T / T, the claim reads

X_T' X_T / T  =?  X_{T−1}' X_{T−1} / (T−1) + [ x_T x_T' − X_{T−1}' X_{T−1} / (T−1) ] / T

Multiplying by T:

X_T' X_T  =?  (T/(T−1)) X_{T−1}' X_{T−1} + x_T x_T' − X_{T−1}' X_{T−1} / (T−1)
          =  X_{T−1}' X_{T−1} + x_T x_T'

which holds, since X_T' X_T = Σ_{t=1}^T x_t x_t' = X_{T−1}' X_{T−1} + x_T x_T'.
SLIDE 35

Proof for gamma

Claim:  γ̂_T = γ̂_{T−1} + R_T^{−1} x_T (y_T − x_T' γ̂_{T−1}) / T

Using R_T^{−1}/T = (X_T' X_T)^{−1} and γ̂_T = (X_T' X_T)^{−1} X_T' Y_T, the right-hand side equals

γ̂_{T−1} + (X_T' X_T)^{−1} [ x_T y_T − x_T x_T' γ̂_{T−1} ]
  = (X_T' X_T)^{−1} [ X_T' X_T γ̂_{T−1} + x_T y_T − x_T x_T' γ̂_{T−1} ]
  = (X_T' X_T)^{−1} [ (X_{T−1}' X_{T−1} + x_T x_T') γ̂_{T−1} + x_T y_T − x_T x_T' γ̂_{T−1} ]
  = (X_T' X_T)^{−1} [ X_{T−1}' Y_{T−1} + x_T y_T ]
  = (X_T' X_T)^{−1} X_T' Y_T = γ̂_T

where the next-to-last step uses X_{T−1}' X_{T−1} γ̂_{T−1} = X_{T−1}' Y_{T−1}.
SLIDE 36

Reasons to adopt recursive formulation

  • makes proving analytical results easier
  • less computer intensive, but standard LS gives the same answer
  • there are intuitive generalizations:

    R_T = R_{T−1} + ω(T) (x_T x_T' − R_{T−1})

    γ̂_T = γ̂_{T−1} + ω(T) R_T^{−1} x_T (y_T − x_T' γ̂_{T−1})

  • ω(T) is the "gain"; standard recursive LS has ω(T) = 1/T

SLIDE 37

Learning with feedback

1. Explanation of the idea
2. Simple adaptive learning
3. Least-squares learning
   • E-stability and convergence
4. Bayesian versus least-squares learning
5. Decision theoretic foundation of Adam & Marcet

SLIDE 38

Learning with feedback - basic setup

Model:  p_t = ρ E_{t−1}[p_t] + δ x_{t−1} + ε_t

RE solution:  p_t = (δ/(1−ρ)) x_{t−1} + ε_t = a_re x_{t−1} + ε_t

SLIDE 39

What is behind the model

Model:  p_t = ρ E_{t−1}[p_t] + δ x_{t−1} + ε_t

Stories:

  • Lucas aggregate supply model
  • Muth market model

See Evans and Honkapohja (2009) for details

SLIDE 40

Learning with feedback - basic setup

Perceived law of motion (PLM) at t − 1:

p_t = â_{t−1} x_{t−1} + ε_t   (2)

Actual law of motion (ALM):

p_t = ρ â_{t−1} x_{t−1} + δ x_{t−1} + ε_t = (ρ â_{t−1} + δ) x_{t−1} + ε_t   (3)

SLIDE 41

Updating beliefs I: Simple adaptive

ALM:  p_t = (ρ â_{t−1} + δ) x_{t−1} + ε_t

Simple adaptive learning:

â_t = ρ â_{t−1} + δ

  • could be rationalized if
    • agents observe x_{t−1} and ε_t, or
    • t is more like an iteration, and in each iteration agents get to observe a long time series to update

SLIDE 42

Simple adaptive learning: Convergence

  • â_t = ρ â_{t−1} + δ
  • or, in general,

    â_t − â_{t−1} = T(â_{t−1})

SLIDE 43

Simple adaptive learning: Convergence

Key questions:

1. Does â_t converge?
2. If yes, does it converge to a_re?

Answers: If |ρ| < 1, then the answer to both is yes.
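The contraction argument behind this answer is immediate to check numerically; a minimal sketch with illustrative parameters:

```python
# Simple adaptive map a_t = rho*a_{t-1} + delta; for |rho| < 1 it is a
# contraction, so a_t converges to the RE coefficient a_re = delta/(1-rho).
rho, delta = 0.5, 1.0
a = 0.0
for _ in range(100):
    a = rho * a + delta

a_re = delta / (1.0 - rho)
assert abs(a - a_re) < 1e-12
```

The distance to a_re shrinks by the factor |ρ| every iteration, which is why |ρ| < 1 is the relevant condition here.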

SLIDE 44

Updating beliefs: LS learning

Suppose agents use least-squares learning:

â_t = â_{t−1} + R_t^{−1} x_{t−1} (p_t − x_{t−1} â_{t−1}) / t
    = â_{t−1} + R_t^{−1} x_{t−1} ((ρ â_{t−1} + δ) x_{t−1} + ε_t − x_{t−1} â_{t−1}) / t

R_t = R_{t−1} + (x_{t−1} x_{t−1} − R_{t−1}) / t

SLIDE 45

Updating beliefs: LS learning

â_t = â_{t−1} + (1/t) R_t^{−1} x_{t−1} (p_t − x_{t−1} â_{t−1})
    = â_{t−1} + (1/t) R_t^{−1} x_{t−1} ((ρ â_{t−1} + δ) x_{t−1} + ε_t − x_{t−1} â_{t−1})

R_t = R_{t−1} + (1/t) (x_{t−1} x_{t−1} − R_{t−1})

To get a system with only lags on the right-hand side, let R_t = S_{t−1}:

â_t = â_{t−1} + (1/t) S_{t−1}^{−1} x_{t−1} ((ρ â_{t−1} + δ) x_{t−1} + ε_t − x_{t−1} â_{t−1})

S_t = S_{t−1} + (1/t) (x_t x_t − S_{t−1}) · t/(t+1)
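The whole feedback loop (beliefs generate prices, prices re-estimate beliefs) can be simulated directly. A Python sketch under stated assumptions: x_t is taken to be i.i.d. standard normal, and the gain is 1/(t+1) rather than 1/t so that the first update is not degenerate; all parameter values are illustrative:

```python
import random

random.seed(2)
rho, delta, T = 0.5, 1.0, 50000

a, R = 0.0, 1.0                     # initial belief and second-moment estimate
x_lag = random.gauss(0.0, 1.0)
for t in range(1, T + 1):
    eps = random.gauss(0.0, 0.1)
    p = (rho * a + delta) * x_lag + eps           # ALM given the current belief
    R = R + (x_lag ** 2 - R) / (t + 1)            # recursive second moment
    a = a + (x_lag * (p - x_lag * a) / R) / (t + 1)
    x_lag = random.gauss(0.0, 1.0)                # next period's state (i.i.d. assumption)

a_re = delta / (1.0 - rho)
assert abs(a - a_re) < 0.1
```

With ρ = 0.5 (so ρ − 1 < 0) the belief drifts toward a_re = δ/(1−ρ) = 2, as the E-stability analysis on the next slides predicts.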

SLIDE 46

Updating beliefs: LS learning

Let θ_t = (â_t, S_t).

Then the system can be written as

θ_t = θ_{t−1} + (1/t) Q(θ_{t−1}, x_t, x_{t−1}, ε_t)

or

∆θ_t = T(θ_{t−1}, x_t, x_{t−1}, ε_t, t)

Note that T(·) = (1/t) Q(·).

SLIDE 47

Key question

  • If

    ∆θ_t = (1/t) Q(θ_{t−1}, x_t, x_{t−1}, ε_t)

    then what can we "expect" about θ_t?

  • In particular, can we "expect" that

    lim_{t→∞} â_t = a_re ?
SLIDE 48

Corresponding differential equation

Much can be learned from the following differential equation:

dθ/dτ = h(θ(τ))   where   h(θ) = lim_{t→∞} E[Q(θ, x_t, x_{t−1}, ε_t)]

SLIDE 49

Corresponding differential equation

In our example:

h(θ) = lim_{t→∞} E[Q(θ, x_t, x_{t−1}, ε_t)]

     = lim_{t→∞} E[ S^{−1} x_{t−1} ((ρa + δ) x_{t−1} + ε_t − x_{t−1} a) ,  (x_t x_t − S) t/(t+1) ]

     = [ M S^{−1} ((ρ − 1) a + δ) ,  M − S ]

where M = lim_{t→∞} E[x_t²]
SLIDE 50

Analyze the differential equation

dθ/dτ = h(θ(τ)) = [ M S^{−1} ((ρ − 1) a + δ) ,  M − S ]

dθ/dτ = 0  if  M = S  and  a = δ/(1 − ρ)

Thus, the (unique) rest point of h(θ) is the rational expectations solution.

SLIDE 51

E-stability

  • θ_t − θ_{t−1} = (1/t) Q(θ_{t−1}, x_t, x_{t−1}, ε_t, t)
  • Limiting behavior can be analyzed using

    dθ/dτ = h(θ(τ)) = lim_{t→∞} E[Q(θ, x_t, x_{t−1}, ε_t)]

  • A solution θ*, e.g. [a_re, M], is "E-stable" if h(θ) is stable at θ*

SLIDE 52

E-stability

  • h(θ) is stable if the real parts of the eigenvalues (of its Jacobian at the rest point) are negative
  • Here:

    h(θ) = [ (ρ − 1) a + δ ,  M − S ]

    ⇒ convergence of the differential system if ρ − 1 < 0
    ⇒ convergence even if ρ < −1!
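The last point can be illustrated by integrating the ODE with a value of ρ well below −1; a minimal Euler-integration sketch (step size and starting values are illustrative):

```python
# Mean-dynamics ODE: da/dtau = (M/S)*((rho-1)*a + delta), dS/dtau = M - S.
# With rho = -1.5 the simple adaptive map would diverge, yet the ODE still
# converges to the RE rest point because rho - 1 < 0.
rho, delta, M = -1.5, 1.0, 1.0
a, S = 0.0, 2.0                     # start away from the rest point
h = 0.01                            # Euler step size
for _ in range(5000):               # integrate tau from 0 to 50
    da = (M / S) * ((rho - 1.0) * a + delta)
    dS = M - S
    a += h * da
    S += h * dS

assert abs(a - delta / (1.0 - rho)) < 1e-6      # rest point a = 0.4
assert abs(S - M) < 1e-6
```

The positive scalar M S^{−1} rescales speed but not stability, which is why the sign of ρ − 1 alone decides convergence.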

SLIDE 53

Implications of E-stability?

  • Recursive least-squares: stochastics enter the T(·) mapping
    ⇒ what will happen is less certain, even with E-stability

SLIDE 54

General implications of E-stability?

  • If a solution is not E-stable:
    ⇒ non-convergence is a probability-1 event
  • If a solution is E-stable:
    • the presence of stochastics makes the theorems non-trivial
    • in general only information about mean dynamics

SLIDE 55

Mean dynamics

See the Evans and Honkapohja textbook for formal results.

  • Theorems are a bit tricky, but are of the following kind:

    If a solution θ* is E-stable, then the time path under learning will either leave a neighborhood of θ* in finite time or converge towards θ*. Moreover, the longer the path stays in this neighborhood, the smaller the probability that it will leave.

  • So there are two parts:
    • mean dynamics: convergence towards the fixed point
    • escape dynamics: (large) shocks may push you away from the fixed point

SLIDE 56

Importance of Gain

  • γ̂_T = γ̂_{T−1} + ω(T) R_T^{−1} x_T (y_T − x_T' γ̂_{T−1})
  • The gain in the least-squares updating formula, ω(T), plays a key role in the theorems
  • ω(T) → 0 too fast: you may end up at something that is not an equilibrium
  • ω(T) → 0 too slowly: you may not converge towards it
  • So depending on the application, you may need conditions like

    Σ_{t=1}^∞ ω(t)² < ∞   and   Σ_{t=1}^∞ ω(t) = ∞

SLIDE 57

Special cases

  • In simple cases, stronger results can be obtained
  • Evans (1989) shows that the system of equations (2) and (3) with standard recursive least squares (gain of 1/t) converges to the rational expectations solution if ρ < 1 (so also if ρ < −1)

SLIDE 58

Bayesian learning

  • LS learning has some disadvantages:
    • why "least-squares" and not something else?
    • how to choose the gain?
    • why don't agents incorporate that beliefs change?
    • beliefs are updated each period

⇒ Bayesian learning is an obvious thing to consider

SLIDE 59

Bayesian versus LS learning

  • LS learning ≠ Bayesian learning with an uninformative prior (at least not always)
  • Bullard and Suda (2009) provide the following nice example

SLIDE 60

Bayesian versus LS learning

Model:  p_t = ρ_L p_{t−1} + ρ_0 E_{t−1}[p_t] + ρ_1 E_{t−1}[p_{t+1}] + ε_t   (4)

  • Key difference with the earlier model: two extra terms

SLIDE 61

Bayesian versus LS learning

The RE solution:

p_t = b p_{t−1} + ε_t

where b is a solution to b = ρ_L + ρ_0 b + ρ_1 b²

SLIDE 62

Bayesian learning - setup

  • PLM:

    p_t = b̂_{t−1} p_{t−1} + ε_t,  and ε_t has a known distribution

  • plugging the PLM into (4) ⇒ ALM
  • but a Bayesian learner is a bit more careful

SLIDE 63

Bayesian learner understands he is learning

  • E_{t−1}[p_{t+1}] = E_{t−1}[ ρ_L p_t + ρ_0 E_t[p_{t+1}] + ρ_1 E_t[p_{t+2}] ]
                    = ρ_L E_{t−1}[p_t] + E_{t−1}[ ρ_0 E_t[p_{t+1}] + ρ_1 E_t[p_{t+2}] ]
                    = ρ_L E_{t−1}[p_t] + E_{t−1}[ ρ_0 b̂_t p_t + ρ_1 b̂_t p_{t+1} ]

  • and he realizes, for example, that b̂_t and p_t are both affected by ε_t!

SLIDE 64

Bayesian learner understands he is learning

  • A Bayesian learner realizes that

    E_{t−1}[b̂_t p_{t+1}] ≠ E_{t−1}[b̂_t] E_{t−1}[p_{t+1}]

    and calculates E_{t−1}[b̂_t p_{t+1}] explicitly

  • An LS learner forms expectations thinking that

    E_{t−1}[b̂_t p_{t+1}] = E_{t−1}[b̂_{t−1} p_{t+1}] = b̂_{t−1} E_{t−1}[ (ρ_L + ρ_0 b̂_{t−1} + ρ_1 b̂²_{t−1}) p_t ]
SLIDE 65

Bayesian versus LS learning

  • The Bayesian learner cares about a covariance term
  • Bullard and Suda (2009) show that Bayesian learning is similar to LS learning in terms of E-stability
  • Such covariance terms are more important in nonlinear frameworks
  • Unfortunately, not much has been done with nonlinear models

SLIDE 66

Learning what?

Model:  P_t = β E_t[P_{t+1} + D_{t+1}]

  • Learning can be incorporated in many ways. Obvious choices here:

    1. learn about the dgp of D_t and use the true mapping P_t = P(D_t)
    2. know the dgp of D_t and learn about the mapping P_t = P(D_t)
    3. learn about both

SLIDE 67

Learning what?

1. Adam, Marcet, Nicolini (2009): one can solve several asset pricing puzzles using a simple model if learning is about E_t[P_{t+1}] (instead of about the dgp of D_t)
2. Adam and Marcet (2011): provide micro foundations showing that this is a sensible choice

SLIDE 68

Simple model

Model:  P_t = β E_t[P_{t+1} + D_{t+1}]

D_{t+1}/D_t = a ε_{t+1},   with E_t[ε_{t+1}] = 1,  ε_t i.i.d.

SLIDE 69

Model properties REE

  • Solution:

    P_t = (βa / (1 − βa)) D_t

  • P_t/D_t is constant
  • P_t/P_{t−1} is i.i.d.

SLIDE 70

Adam, Marcet, & Nicolini 2009

PLM:  Ê_t[P_{t+1}/P_t] = γ̂_t

ALM:  P_t/P_{t−1} = [(1 − βγ̂_{t−1}) / (1 − βγ̂_t)] a ε_t = [a + aβ∆γ̂_t / (1 − βγ̂_t)] ε_t

and γ̂_{t+1} is updated using the observed growth rate [a + aβ∆γ̂_t / (1 − βγ̂_t)] ε_t
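A sketch of this learning scheme in Python. The price each period follows from the belief via P_t = βaD_t/(1 − βγ̂_t); the specific decreasing-gain update and the projection facility (which keeps γ̂ below β^{−1}) are my assumptions, and all parameter values are illustrative:

```python
import random

random.seed(3)
beta, a, T = 0.95, 1.0, 2000
gamma = 0.9                          # initial belief about E_t[P_{t+1}/P_t]
D = 1.0
P_prev = beta * a * D / (1.0 - beta * gamma)
P_over_D = []
for t in range(1, T + 1):
    eps = 1.0 + random.uniform(-0.1, 0.1)        # dividend growth shock, E[eps] = 1
    D = a * eps * D                              # D_{t+1}/D_t = a*eps
    P = beta * a * D / (1.0 - beta * gamma)      # price implied by the current belief
    gamma += (P / P_prev - gamma) / (t + 1)      # decreasing-gain update (assumed)
    gamma = min(gamma, 1.0 / beta - 0.05)        # projection facility (assumed)
    P_over_D.append(P / D)
    P_prev = P

# unlike in the REE, the price-dividend ratio is time varying under learning
assert max(P_over_D) - min(P_over_D) > 1e-6
assert gamma < 1.0 / beta            # beliefs stay in the admissible region
```

Because P/D = βa/(1 − βγ̂_t), any movement in beliefs translates one-for-one into movements in the price-dividend ratio, the key mechanism on the next slide.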

SLIDE 71

Model properties with learning

  • Solution is quite nonlinear, especially if γ̂_t is close to β^{−1}
  • Serial correlation; in fact there is momentum. For example:

    γ̂_t = a & ∆γ̂_t > 0 ⇒ ∆γ̂_{t+1} > 0
    γ̂_t = a & ∆γ̂_t < 0 ⇒ ∆γ̂_{t+1} < 0

  • P_t/D_t is time varying

SLIDE 72

Adam, Marcet, & Nicolini 2011

Agent i solves the following optimization problem:  max Ê_{i,t}[·]

  • Ê_{i,t} is based on a sensible probability measure
  • Ê_{i,t} is not necessarily the true conditional expectation

SLIDE 73

Adam, Marcet, & Nicolini 2011

  • The setup leads to standard first-order conditions, but with Ê_{i,t} instead of E_t
  • For example,

    P_t = β Ê_{i,t}[P_{t+1} + D_{t+1}]   if agent i is not constrained

  • Key idea:
    • price determination is difficult
    • agents do not know this mapping
    ⇒ they forecast Ê_{i,t}[P_{t+1}] directly
    ⇒ the law of iterated expectations cannot be used, because next period agent i may be constrained, in which case the equality above does not hold

SLIDE 74

Topics - Overview

1. E-stability and sunspots
2. Learning and nonlinearities
   • Parameterized expectations
3. Two representations of sunspots

SLIDE 75

E-stability and sunspots

Model:  x_t = ρ E_t[x_{t+1}],  x_t cannot explode,  no initial condition

Solution:

|ρ| < 1 :  x_t = 0 ∀t
|ρ| ≥ 1 :  x_t = ρ^{−1} x_{t−1} + e_t ∀t,  where e_t is the sunspot (which has E_{t−1}[e_t] = 0)

SLIDE 76

Adaptive learning

PLM:  x_t = â_t x_{t−1} + e_t

ALM:  x_t = â_t ρ x_{t−1}

⇒ â_{t+1} = â_t ρ

  • thus divergence when |ρ| > 1 (sunspot solutions)

SLIDE 78

Stability puzzle

  • There are few counterexamples, and it is not clear why sunspots are not learnable in RBC-type models
  • sunspot solutions are learnable in some New Keynesian models (Evans and McGough 2005)
  • McGough, Meng, and Xue (2011) provide a counterexample and show that an RBC model with negative externalities has learnable sunspot solutions

SLIDE 79

PEA and learning

  • Learning is usually done in linear frameworks
  • PEA parameterizes the conditional expectation in nonlinear frameworks
    ⇒ PEA is a natural setting in which to do
    • adaptive learning, as well as
    • recursive learning

SLIDE 80

Model

P_t = E_t[ β (D_{t+1}/D_t)^{−ν} (P_{t+1} + D_{t+1}) ] = G(X_t)

X_t : state variables

SLIDE 81

Conventional PEA in a nutshell

  • Start with a guess for G(X_t), say g(x_t; η_0)
    • g(·) may have the wrong functional form
    • x_t may be only a subset of X_t
    • η_0 are the coefficients of g(·)

SLIDE 82

Conventional PEA in a nutshell

  • Iterate to find a fixed point for η_i:

    1. use η_i to generate a time path {P_t}_{t=1}^T
    2. let

       η̂_i = arg min_η Σ_t (y_{t+1} − g(x_t; η))²   where   y_{t+1} = β (D_{t+1}/D_t)^{−ν} (P_{t+1} + D_{t+1})

    3. dampen if necessary:

       η_{i+1} = ω η̂_i + (1 − ω) η_i
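The three steps above can be sketched in Python on a stripped-down version of the model (ν = 0 and an AR(1) dividend, so the REE coefficient βρ/(1−βρ) is known in closed form). The linear parameterization g(D_t; η) = η D_t and all parameter values are illustrative assumptions:

```python
import random

beta, rho, sigma, T = 0.95, 0.8, 0.1, 20000
omega = 1.0                          # dampening not needed in this example

# simulate the exogenous dividend path once
random.seed(4)
D = [0.0]
for _ in range(T + 1):
    D.append(rho * D[-1] + random.gauss(0.0, sigma))

eta = 0.0
for _ in range(100):
    # 1. generate prices given the current belief: P_t = g(D_t; eta)
    P = [eta * d for d in D]
    # 2. regress y_{t+1} = beta*(P_{t+1} + D_{t+1}) on D_t
    num = sum(D[t] * beta * (P[t + 1] + D[t + 1]) for t in range(1, T))
    den = sum(D[t] ** 2 for t in range(1, T))
    eta_hat = num / den
    # 3. dampen
    eta = omega * eta_hat + (1.0 - omega) * eta

eta_ree = beta * rho / (1.0 - beta * rho)
assert abs(eta - eta_ree) < 0.5
```

Each iteration maps η to roughly βρ(η + 1), a contraction here, so the fixed point of the PEA loop is (up to sampling error) the REE coefficient.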

SLIDE 83

Interpretation of conventional PEA

  • Agents have beliefs
  • Agents get to observe long sample generated with these beliefs
  • Agents update beliefs
  • Corresponds to adaptive expectations
  • no stochastics if T is large enough

SLIDE 84

Recursive PEA

  • Agents form expectations using g (xt; ηt)
  • Solve for Pt using

Pt = g (xt; ηt)

  • Update beliefs using this one additional observation
  • Go to the next period using ηt+1

SLIDE 85

Recursive methods and convergence

Look at the recursive formulation of LS:

γ̂_t = γ̂_{t−1} + (1/t) R_t^{−1} x_t (y_t − x_t' γ̂_{t−1})

!!! ∆γ̂_t gets smaller as t gets bigger

SLIDE 86

General form versus common factor representation

The sunspot literature distinguishes between:

1. General form representation of a sunspot
2. Common factor representation of a sunspot

SLIDE 87

First consider non-sunspot indeterminacy

Model:

k_{t+1} + a_1 k_t + a_2 k_{t−1} = 0,   or   (1 − λ_1 L)(1 − λ_2 L) k_{t+1} = 0

Also:

  • k_0 given
  • k_t has to remain finite

SLIDE 88

Multiplicity

Solution:

k_t = b_1 λ_1^t + b_2 λ_2^t

k_0 = b_1 + b_2

Thus many possible choices for b_1 and b_2 if |λ_1| < 1 and |λ_2| < 1

SLIDE 89

Multiplicity

  • What if we impose recursivity?

    k_t = d̄ k_{t−1}

  • Does that get rid of multiplicity? No, but it does reduce the number of solutions from ∞ to 2:

    (d̄² + a_1 d̄ + a_2) k_{t−1} = 0 ∀t  ⇒  d̄² + a_1 d̄ + a_2 = 0

    the 2 solutions are d̄ = λ_1 or d̄ = λ_2, i.e. setting either b_2 or b_1 equal to 0

SLIDE 90

Back to sunspots

Doing the same trick with sunspots gives a solution with the following two properties:

1. it has a serially correlated sunspot component with the same factor as the endogenous variable (i.e. the common factor)
2. there are two of these

SLIDE 91

General form representation

Model:  E_t[k_{t+1} + a_1 k_t + a_2 k_{t−1}] = 0,   or   E_t[(1 − λ_1 L)(1 − λ_2 L) k_{t+1}] = 0

General form representation:

k_t = b_1 λ_1^t + b_2 λ_2^t + e_t

k_0 = b_1 + b_2 + e_0

where e_t is serially uncorrelated

SLIDE 92

Common factor representation

Model:  E_t[k_{t+1} + a_1 k_t + a_2 k_{t−1}] = 0,   or   E_t[(1 − λ_1 L)(1 − λ_2 L) k_{t+1}] = 0

Common factor representation:

k_t = b_i λ_i^t + ζ_t
ζ_t = λ_i ζ_{t−1} + e_t
k_0 = b_i + ζ_0
λ_i ∈ {λ_1, λ_2}

where e_t is serially uncorrelated

SLIDE 93

References

  • Adam, K., G. Evans, and S. Honkapohja, 2006, Are hyperinflation paths learnable?, Journal of Economic Dynamics and Control.
    • Paper shows that the high-inflation steady state is stable under learning in the seignorage model as discussed in the slides.
  • Adam, K., A. Marcet, and J.P. Nicolini, 2009, Stock market volatility and learning, manuscript.
    • Paper shows that learning about endogenous variables like prices gives you much more "action" than learning about exogenous processes (i.e. they show that learning with feedback is more interesting than learning without feedback).

SLIDE 94

References

  • Adam, K., and A. Marcet, 2011, Internal rationality, imperfect market knowledge and asset prices, Journal of Economic Theory.
    • Paper motivates that the thing to be learned is the conditional expectation in the Euler equation.
  • Bullard, J., and J. Suda, 2009, The stability of macroeconomic systems with Bayesian learners, manuscript.
    • Paper gives a nice example of the difference between Bayesian and least-squares learning.

SLIDE 95

References

  • Evans, G.W., and S. Honkapohja, 2001, Learning and Expectations in Macroeconomics.
    • Textbook on learning, dealing with all the technical material very carefully.
  • Evans, G.W., and S. Honkapohja, 2009, Learning and macroeconomics, Annual Review of Economics.
    • Survey paper.
  • Evans, G.W., and B. McGough, 2005, Indeterminacy and the stability puzzle in non-convex economies, Contributions to Macroeconomics.
    • Paper argues that sunspot solutions (or indeterminate solutions more generally) are not stable (not learnable) in RBC-type models.
  • Evans, G.W., and B. McGough, 2005, Monetary policy, indeterminacy, and learning, Journal of Economic Dynamics and Control.
    • Paper shows that the NK model with some Taylor rules has learnable sunspot solutions.
  • McGough, B., Q. Meng, and J. Xue, 2011, Indeterminacy and E-stability in real business cycle models with factor-generated externalities, manuscript.
    • Paper provides a nice example of a sunspot in a linearized RBC-type model that is stable under learning.