slide-1
SLIDE 1

‘When Inequality Matters for Macro and Macro Matters for Inequality’

Discussion by Christopher Carroll¹ (with help from Edmund Crawley¹)

¹Johns Hopkins University

ccarroll@jhu.edu edmundcrawley@gmail.com

NBER Macro Annual Meeting, April 7, 2017

slide-5
SLIDE 5

What’s Wrong With Macroeconomics?

◮ Since 2007: Lots of criticism of benchmark macro models
◮ Larry Summers (2011)’s (in)famous quote:

◮ “Almost nothing from the academic macroeconomics literature over the prior 30 years was useful in understanding what to do in the crisis”

◮ Many others have said similar things (if usually more politely)

slide-9
SLIDE 9

Can This Paper Fix It?

After thinking it over, a number of policymakers in recent years have said that an important part of solving the problem will be to take heterogeneity much more seriously:

◮ Fed Chair Janet Yellen (2016)
◮ IMF Chief Economist Olivier Blanchard (2016)
◮ ECB Governing Board Member Benoit Coeure (2013)
◮ Bank of England Chief Economist Andy Haldane (2016)

slide-13
SLIDE 13

This Paper Is a Major Step in the Right Direction

1. (Almost) Nobody objects in principle to adding heterogeneity
2. Obstacle has been math/computational difficulty in practice
3. Methods here promise to enormously reduce that barrier

◮ But: Methods work only in continuous time

slide-27
SLIDE 27

We Left the Continuum For Good Reasons

Should We Return? Depends whether it is:

◮ Magic ...
◮ Black Magic

History makes me wary:

1. Finance: Many disasters
2. Computational Modeling:

◮ Computers are Digital Devices
◮ Ultimately they discretize everything anyway
◮ Errors are more easily ...
  ◮ Avoided
  ◮ Detected
  ◮ Understood
  ◮ Fixed
◮ ... if you don’t rely on ‘magic’

slide-29
SLIDE 29

What Is a ‘Shock’ In Continuous Time?

Two issues:

1. Nature
2. Timing
slide-30
SLIDE 30

Definition: ‘Friedman’ Income Process

Friedman (1957) got it right (almost):1 yt = pt + z1,t pt+1 = pt + z2,t Et[z•,t+n] = In logs, still an amazingly good description of annual HH y dynamics (pace time aggregation issues):

◮ DeBacker, Heim, Panousi, Ramnath, and Vidangos (2013)

1He thought in levels; today we interpret y and p as logs.

slide-31
SLIDE 31

Definition: ‘Friedman’ Income Process

Friedman (1957) got it right (almost):1 yt = pt + z1,t pt+1 = pt + z2,t Et[z•,t+n] = In logs, still an amazingly good description of annual HH y dynamics (pace time aggregation issues):

◮ DeBacker, Heim, Panousi, Ramnath, and Vidangos (2013)

◮ Millions of IRS records 1He thought in levels; today we interpret y and p as logs.

slide-32
SLIDE 32

Definition: ‘Friedman’ Income Process

Friedman (1957) got it right (almost):¹

yt = pt + z1,t
pt+1 = pt + z2,t
Et[z•,t+n] = 0

In logs, this is still an amazingly good description of annual HH y dynamics (pace time aggregation issues):

◮ DeBacker, Heim, Panousi, Ramnath, and Vidangos (2013)

◮ Millions of IRS records
◮ Improvement from more complexity is minimal

¹He thought in levels; today we interpret y and p as logs.
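A minimal simulation sketch of this process in logs (the shock standard deviations here are illustrative, not Friedman’s or the paper’s calibration):

```python
import random

def simulate_friedman(T=40, sigma_trans=0.2, sigma_perm=0.1, seed=0):
    """Simulate the 'Friedman' log-income process:
    y_t = p_t + z_{1,t} (transitory), p_{t+1} = p_t + z_{2,t} (permanent)."""
    rng = random.Random(seed)
    p, ys = 0.0, []
    for _ in range(T):
        z1 = rng.gauss(0.0, sigma_trans)  # transitory shock, mean zero, no persistence
        ys.append(p + z1)                 # log income this period
        p += rng.gauss(0.0, sigma_perm)   # permanent component follows a random walk
    return ys

path = simulate_friedman()
```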

slide-39
SLIDE 39
1. Nature of Their Shocks

zt = z1,t + z2,t, where

◮ z1,t supposed to capture ‘transitory’
◮ z2,t supposed to capture ‘persistent’

◮ Let the data say whether ‘permanent’

So far, looks good

slide-49
SLIDE 49
1. Both Shocks Work The Same Way

‘Jump-drift’ process:

◮ Income flow rate might jump (up or down)
◮ Decays toward zero like an AR(1)
◮ Call the two ‘decay’ parameters ρ1 and ρ2

Calibration of ρ1 and ρ2?

◮ Optimize to best match

◮ Guvenen, Karahan, Ozkan, and Song (2015); ‘GKOS’

Results:

◮ ‘Transitory’: Half life is a quarter
◮ ‘Persistent’: ρ2 ≈ 0.99 (quarterly)

◮ It’s (nearly) a permanent shock
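To see why ρ2 ≈ 0.99 at a quarterly frequency counts as ‘nearly permanent’, a quick half-life calculation:

```python
import math

def ar1_half_life(rho):
    """Number of periods until an AR(1) shock with persistence rho
    decays to half its initial size."""
    return math.log(0.5) / math.log(rho)

hl_quarters = ar1_half_life(0.99)  # roughly 69 quarters
hl_years = hl_quarters / 4         # roughly 17 years
```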

slide-53
SLIDE 53
1. Assessment?

‘Persistent’ shock:

◮ Yay!
◮ Common interpretation of GKOS (including by GKOS):

◮ ‘Friedman’ permanent shocks wrong way to think about it
◮ Looks pretty good to me!

slide-61
SLIDE 61
1. Transitory Shock

Possibly more problematic. Take 2008 stimulus checks:

◮ Arrived as a lump sum of $600 (or $1200) at instant of time
◮ AKMWW: $600 shock (for someone with weekly paycheck) is:

Week   Amount
1      $35.07
2      $33.09
...    ...
13     $17.39
...    ...
∞      Sum: $600

Is this equivalent to a $600 lump sum?

◮ For nondurables spending – see next slides
◮ For durables, surely not:

◮ Several papers find lump sums are used as car down payments
◮ $35.07 (or $70.14) would not suffice!
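A sketch of what this decaying payout looks like, assuming an exponentially decaying flow with a one-quarter (≈13-week) half-life; the slide’s exact figures ($35.07, $33.09, ...) imply a slightly faster decay, so these numbers are close to but not identical to the table’s:

```python
import math

def weekly_payouts(total=600.0, half_life_weeks=13.0, n_weeks=52):
    """Spread a one-time $total shock over time as an exponentially decaying
    income flow, integrated week by week (so the infinite sum equals $total)."""
    rho = math.log(2) / half_life_weeks  # weekly decay rate of the flow
    # flow is rho*total*exp(-rho*t); its integral over week n is
    # total*(exp(-rho*(n-1)) - exp(-rho*n))
    return [total * (math.exp(-rho * (n - 1)) - math.exp(-rho * n))
            for n in range(1, n_weeks + 1)]

pays = weekly_payouts()  # first weeks around $31, steadily shrinking
```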

slide-68
SLIDE 68
1. ‘Deaton’ Permanent Income

In a certainty equivalent model:

Dt = Et Σ∞n=0 R−n yt+n   (1)

Ct = (r/R) Dt   (2)

◮ AKMWW ‘transitory’ shock makes Dt ↑ by $600
◮ Exactly equivalent to lump sum of $600 if

◮ Perfect foresight
◮ No liquidity constraints
◮ Perfect capital markets

That is, if there is no reason to do any of the incredibly hard and impressive work they do to deal with liquidity constraints, uncertainty, time-varying interest rates, etc etc etc
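Under the certainty equivalent formulas (1)-(2), the decaying stream and the lump sum have (almost) the same present value; a sketch, with an illustrative weekly interest rate of 0.05% and the one-quarter half-life assumed above:

```python
import math

def deaton_pv(stream, weekly_r=0.0005):
    """Present discounted value of a weekly income stream, first payment now:
    D = sum_n y_n / R^n with gross return R = 1 + r."""
    R = 1.0 + weekly_r
    return sum(y / R ** n for n, y in enumerate(stream))

# AKMWW-style decaying stream paying out $600 in total, half-life one quarter
rho = math.log(2) / 13.0
stream = [600.0 * (math.exp(-rho * (n - 1)) - math.exp(-rho * n))
          for n in range(1, 1041)]

pv = deaton_pv(stream)                 # slightly below $600 because of discounting
lump = 600.0                           # the lump sum arrives at once: PV is $600
c_response = (0.0005 / 1.0005) * pv    # consumption response per equation (2)
```

At realistic weekly rates the gap between the two PVs is about a percent, which is why the equivalence holds only under the frictionless assumptions listed above.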

slide-79
SLIDE 79
1. The Good News

As ρ2 ↓ 0

◮ In theory:

◮ AKMWW shock approaches a ‘Friedman’ transitory shock

◮ In practice, numerics must break down somewhere
◮ Interesting to know half-life mark where it breaks down
◮ A week? Indistinguishable from ‘Friedman’ shock
◮ A quarter? Starts to be problematic

◮ Say, for analyzing 2008 stimulus
◮ Summers would not be impressed

Am pretty confident that it can be made to work OK ...

◮ ... with recalibration
◮ ... and for some Q’s (like stimulus) maybe need three z’s, not two

slide-90
SLIDE 90
2. Timing of Shocks

Frequency of arrival of shocks:

Transitory: arrives once every 3 years
Permanent: arrives once every 38 years

Yikes!

◮ Low, Meghir, and Pistaferri (2010): z2 shocks are mostly:

◮ Promotions
◮ Job Changes

which happen maybe once every 5 years, not 38

◮ Vast lit: Trans shocks large at annual frequency
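Treating arrivals as Poisson shows why ‘once every 38 years’ is hard to square with the micro evidence: many households would go a whole career without a single permanent shock (the 40-year horizon is illustrative):

```python
import math

def prob_at_least_one(arrival_rate_per_year, years):
    """With Poisson arrivals, the probability of at least one shock
    over the horizon is 1 - exp(-rate * years)."""
    return 1.0 - math.exp(-arrival_rate_per_year * years)

p_calibrated = prob_at_least_one(1 / 38, 40)  # about 0.65: a third never get one
p_lmp = prob_at_least_one(1 / 5, 40)          # near 1 under 'once every 5 years'
```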

slide-99
SLIDE 99

Micro: Not Quite There Yet ...

Is their description of HH income ‘good enough’?

◮ In short: Not yet.

◮ Sim process at HH level, compare results to data
◮ Am sure it would make huge miss

◮ Good news: With recalibration, it could be serious
◮ Many ‘millions of data points’ papers out there besides GKOS

◮ Most of them estimate Friedmanesque process
◮ GKOS specialized in measuring leptokurtosis, recessions, tails
◮ They aren’t aiming at any of these targets anyway
◮ ⇒ Match rest of literature, not GKOS

slide-104
SLIDE 104

Macro: Aggregate C and Y Dynamics

Claim to solve two (related) puzzles:

1. Campbell and Deaton (1989): ‘Excess smoothness’:

◮ var(∆ log C) < var(∆ log Y )

2. Campbell and Mankiw (1989): ‘Excess sensitivity’:

◮ Flavin (1981)
◮ ∆ log C inappropriately ‘sensitive’ to predictable ∆ log Y

slide-105
SLIDE 105
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y

slide-106
SLIDE 106
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)
slide-107
SLIDE 107
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1

slide-108
SLIDE 108
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

slide-109
SLIDE 109
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1
slide-110
SLIDE 110
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1

slide-111
SLIDE 111
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1 ◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

slide-112
SLIDE 112
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1 ◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

  • 3. Growth is serially correlated: ∆Zt+1,2 = γ∆Zt,2 + ǫt+1
slide-113
SLIDE 113
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1 ◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

  • 3. Growth is serially correlated: ∆Zt+1,2 = γ∆Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = (1/(1 − γ))∆ log Yt+1

slide-114
SLIDE 114
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1 ◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

  • 3. Growth is serially correlated: ∆Zt+1,2 = γ∆Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = (1/(1 − γ))∆ log Yt+1 ◮ var(∆ log Ct+1) = (1/(1 − γ))2var(∆ log Yt+1)

slide-115
SLIDE 115
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of shock to Yt depend critically on time series process for Y Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1 ◮ C vastly smoother than Y

  • 2. log Y is random walk: Zt+1,2 = Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1 ◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

  • 3. Growth is serially correlated: ∆Zt+1,2 = γ∆Zt,2 + ǫt+1

◮ Then ∆ log Ct+1 = (1/(1 − γ))∆ log Yt+1 ◮ var(∆ log Ct+1) = (1/(1 − γ))2var(∆ log Yt+1)

slide-116
SLIDE 116
  • 1. Excess Smoothness: Campbell and Deaton (1989)

Consequences for ‘Deaton’ permanent income Dt of a shock to Yt depend critically on the time series process for Y. Consider three options:

  • 1. No permanent shocks to log Y : (Zt,2 = 0 ∀ t)

◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1
◮ C vastly smoother than Y

  • 2. log Y is a random walk: Zt+1,2 = Zt,2 + εt+1

◮ Then ∆ log Ct+1 = ∆ log Yt+1
◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal

  • 3. Growth is serially correlated: ∆Zt+1,2 = γ∆Zt,2 + εt+1

◮ Then ∆ log Ct+1 = (1/(1 − γ))∆ log Yt+1
◮ var(∆ log Ct+1) = (1/(1 − γ))² var(∆ log Yt+1)

They assume the last.
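The three ratios above can be tabulated directly. A minimal sketch (my own, with illustrative r = 0.04 and γ = 0.3; neither value is from the paper):

```python
# Implied sd(dlogC)/sd(dlogY) under each income process, using the
# mappings on this slide. r and gamma are illustrative assumptions.
def smoothness_ratio(process, r=0.04, gamma=0.3):
    R = 1 + r
    if process == "transitory":      # 1. no permanent shocks: C far smoother
        return r / R
    if process == "random_walk":     # 2. log Y a random walk: equal variances
        return 1.0
    if process == "correlated":      # 3. serially correlated growth: C rougher
        return 1.0 / (1.0 - gamma)
    raise ValueError(process)

for p in ("transitory", "random_walk", "correlated"):
    print(p, round(smoothness_ratio(p), 3))   # 0.038, 1.0, 1.429
```

With serially correlated growth, consumption growth is more volatile than income growth, which is why Campbell and Deaton's assumed process makes observed smoothness look "excess".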

slide-125
SLIDE 125
  • 1. Problem: Who Knows the Right Income Process?

◮ Huge literature in the 1980s and 1990s on this exact subject
◮ Stock (1991): Confidence bands for the largest root are large
◮ 90% interval: 0.88 to 1.01
◮ Much of the vast DSGE literature assumes an AR(1) in levels:
◮ log Yt+1 = 0.95 log Yt
◮ Corr(∆ log Yt, ∆ log Yt−1) = 0.37 need not imply unexpected shocks have long-lasting effects
◮ e.g.: (1 − aL)(1 − ρL) log Yt = ut
◮ Their model is a = 0.37, ρ = 1
◮ But a = 0.44, ρ = 0.9 also has 0.37 autocorrelation
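The observational-equivalence claim is easy to check by simulation. A sketch (my own code, not the authors'), simulating (1 − aL)(1 − ρL) log Yt = ut and computing the first autocorrelation of ∆ log Yt:

```python
import numpy as np

def diff_autocorr(a, rho, T=200_000, seed=0):
    """First autocorrelation of dlogY when (1-aL)(1-rhoL) logY = u."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T)
    y = np.zeros(T)
    phi1, phi2 = a + rho, -a * rho        # equivalent AR(2) coefficients
    for t in range(2, T):
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + u[t]
    dy = np.diff(y[T // 10:])             # drop burn-in, then difference
    return np.corrcoef(dy[1:], dy[:-1])[0, 1]

# The point: both parameterizations deliver (about) the same 0.37
print(diff_autocorr(a=0.37, rho=1.0))     # ~0.37 (permanent component)
print(diff_autocorr(a=0.44, rho=0.9))     # ~0.37 (largest root only 0.9)
```

Analytically, with ρ = 1 the differenced series is an AR(1) with coefficient a, so the autocorrelation is exactly 0.37; the (a = 0.44, ρ = 0.9) case matches it to about two decimal places despite having no unit root.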

slide-126
SLIDE 126
  • 1. Relative Volatility is Highly Sensitive to ρ

One asset, small open economy model (CSTW)

[Figure: σ(∆ log Ct)/σ(∆ log Yt) (roughly 0.4 to 0.8) plotted against ρ from 0.90 to 1.00, where log Yt = ρ log Yt−1 + εt]

◮ Basically, any number between 0.4 and maybe 2 is consistent with some statistically unrejectable description
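The sensitivity to ρ shows up even in a frictionless permanent-income benchmark, which is all this sketch computes (it is not CSTW's one-asset model, and the quarterly r = 0.01 is my assumption). For AR(1) income, the Flavin (1981) response to an innovation is ∆C = rε/(R − ρ), while sd(∆Y) = σ√(2/(1 + ρ)):

```python
import numpy as np

def vol_ratio(rho, r=0.01):
    """sd(dC)/sd(dY) for AR(1) income under the frictionless PIH."""
    R = 1 + r
    sd_dC = r / (R - rho)                  # consumption response per unit shock sd
    sd_dY = np.sqrt(2.0 / (1.0 + rho))    # sd of dY per unit shock sd
    return sd_dC / sd_dY

for rho in (0.90, 0.95, 0.99, 1.00):
    print(f"rho={rho:.2f}: sd(dC)/sd(dY) = {vol_ratio(rho):.2f}")
```

The ratio runs from about 0.09 at ρ = 0.90 to exactly 1 at ρ = 1: a shift in ρ that the data cannot reject changes the model's relative volatility by an order of magnitude, which is the slide's point.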

slide-130
SLIDE 130
  • 2. Do They Explain Campbell and Mankiw (1989)?

In principle, it might:

◮ Shock arrives at t that has important info about Et[∆ log Yt+1]
◮ But lots of AKMWW consumers are ‘nearly constrained’
◮ So, don’t respond until they get cash in hand
◮ N.B.: This is exactly the circumstance in which the z1 shock is NOT equivalent to a Friedman shock

slide-133
SLIDE 133
  • 2. The Test

Campbell and Mankiw (1989) say C can be modeled by

∆ log Ct+1 = (1 − λ)σ(rt+1 − ϑ) + λ∆Et[∆ log Yt+1]

λ is the share of income going to ‘rule-of-thumb’ (c = y) consumers.

Right test?

∆ log Ct+1 = γ0 + γ1Et−1[rt+1] + γ2∆Et−1[∆ log Yt+1]

Hall (1978): γ2 = 0 in the standard representative agent model
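A minimal version of the Campbell–Mankiw IV test can be simulated (my sketch, not AKMWW's regression): give a fraction λ of income to rule-of-thumb consumers, make the PIH consumers' consumption growth pure news, and instrument ∆ log Yt with its lag. The IV coefficient recovers λ, and is 0 in Hall's representative-agent case:

```python
import numpy as np

def iv_lambda(lam, phi=0.37, T=100_000, seed=1):
    """IV estimate of the rule-of-thumb share lam from simulated data."""
    rng = np.random.default_rng(seed)
    dy = np.zeros(T)
    for t in range(1, T):                 # predictable income growth (AR(1))
        dy[t] = phi * dy[t - 1] + rng.standard_normal()
    news = rng.standard_normal(T)         # PIH consumers react only to news
    dc = lam * dy + (1 - lam) * news
    # Just-identified IV: instrument dy_t with dy_{t-1}
    return np.cov(dc[1:], dy[:-1])[0, 1] / np.cov(dy[1:], dy[:-1])[0, 1]

print(iv_lambda(lam=0.0))   # ~0:   Hall's RA benchmark
print(iv_lambda(lam=0.5))   # ~0.5: half of income to rule-of-thumb consumers
```

Since the PIH consumers' innovation is orthogonal to t−1 information, the lagged instrument isolates the rule-of-thumb response; a nonzero estimate like AKMWW's 0.247 is what makes the RA discrepancy puzzling.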

slide-138
SLIDE 138
  • 2. We Can’t Figure Out Their Results

From their paper:

Hall: Coefficient in the IV regression for the RA model is 0
AKMWW: They find 0.247
Why?

◮ rt+1 omitted from RHS of regression? They say no.
◮ Time aggregation? (Their notation says t − 1 ...)
◮ Something else?
◮ If we can’t understand the RA model, there is no hope for the 2-asset model

slide-147
SLIDE 147

The Real Excess Smoothness/Sensitivity Problem

◮ First ‘structural’ model of monetary policy I know of is Rotemberg and Woodford (1997)
◮ One big problem they faced:
◮ ∆ log Ct+1 = σ(rt+1 − ϑ)
◮ Says that when r ↑, Ct+1 ↓ instantly
◮ Data: Half-life of the response of C to MP shocks looks like a year
◮ Their (embarrassed) solution: Assume C is just stuck for a year
◮ (Embarrassed) motivation: You plan your budget a year in advance and can’t change it

Vast subsequent literature captures the same fact using ‘habits’

slide-155
SLIDE 155

Problems With Habits

◮ No micro evidence for habits
◮ Deaton (1992)
◮ Dynan (2000)
◮ Subsequent literature: No micro evidence for habits
◮ Habits deepen the problem the RA model has with κ
◮ Standard habits parameterization might imply κ̄ = 0.01
◮ Gap between ‘rule-of-thumb’ and ‘rational’ widens to a chasm
◮ Hard to maintain a straight face and say this is ‘microfounded’
slide-159
SLIDE 159

Macro Consumption Puzzles Solved?

Summary:

  • 1. Campbell and Deaton (1989): It’s not really a puzzle
  • 2. Campbell and Mankiw (1989): We’re deeply confused
  • 3. Real ‘excess smoothness’ problem: They don’t examine it

◮ Though they could; maybe should do this instead

slide-176
SLIDE 176

What Should They Do Instead?

◮ Their huge contribution is methodological
◮ They need to convince skeptics it’s right
◮ Wrong way to do that:
◮ Solve a model so innovative and complicated that the authors themselves are confused about how the findings relate to the literature
◮ Right approach:
◮ Solve the cleanest, clearest, simplest models possible
◮ Where everyone can see their answers are right
◮ They do this with the 20-year-old KS model
◮ Should do it with some others

My preference: A model where

  • 1. Both micro and macro have pure permanent shocks
  • 2. Without the liquid and illiquid assets
  • 3. In partial and general equilibrium
  • 4. They examine how close they can get to Friedman z1

Then write 3 or 4 other papers containing the (remaining) substantive (vs. methodological) contributions of the present paper

slide-188
SLIDE 188

A Final Concern: It Will Be a Black Box

The new DYNARE?

◮ Unlimited Theory, Little Understanding, and Almost No Data

Differences:

◮ For key macro questions (fiscal, monetary policy):
◮ Distribution of MPC may be close to a ‘sufficient statistic’
◮ Results robust to alternative treatments of aggregate stuff
◮ Microeconomic data abundant, becoming more so (‘big data’)
◮ Both Household-Level and Regional
◮ Microeconomic theory is often more intuitive, transparent
◮ It’s about the behavior of individuals
◮ Most macroeconomists are individuals

slide-190
SLIDE 190

Conclusion

This is a hugely important paper

◮ For its methodological contribution
◮ More work is needed before that will become obvious

slide-191
SLIDE 191

References I

Blanchard, Olivier (2016): “Do DSGE Models Have a Future?,” Discussion paper, Peterson Institute for International Economics. Available at https://piie.com/system/files/documents/pb16-11.pdf.

Campbell, John, and Angus Deaton (1989): “Why is Consumption So Smooth?,” The Review of Economic Studies, 56(3), 357–373. http://www.jstor.org/stable/2297552.

Campbell, John Y., and N. Gregory Mankiw (1989): “Consumption, Income, and Interest Rates: Reinterpreting the Time-Series Evidence,” in NBER Macroeconomics Annual, 1989, ed. by Olivier J. Blanchard and Stanley Fischer, pp. 185–216. MIT Press, Cambridge, MA. http://www.nber.org/papers/w2924.pdf.

Coeure, Benoit (2013): “The relevance of household-level data for monetary policy and financial stability analysis.”

Deaton, Angus S. (1992): Understanding Consumption. Oxford University Press, New York.

DeBacker, Jason, Bradley Heim, Vasia Panousi, Shanthi Ramnath, and Ivan Vidangos (2013): “Rising Inequality: Transitory or Persistent? New Evidence from a Panel of U.S. Tax Returns,” Brookings Papers on Economic Activity, Spring, 67–122.

Dynan, Karen E. (2000): “Habit Formation in Consumer Preferences: Evidence from Panel Data,” American Economic Review, 90(3). http://www.jstor.org/stable/117335.

Flavin, Marjorie B. (1981): “The Adjustment of Consumption to Changing Expectations About Future Income,” Journal of Political Economy, 89, 974–1009. http://www.jstor.org/stable/1830816.

Friedman, Milton A. (1957): A Theory of the Consumption Function. Princeton University Press.

Guvenen, Fatih, Fatih Karahan, Serdar Ozkan, and Jae Song (2015): “What do data on millions of US workers reveal about life-cycle earnings risk?,” Discussion paper, National Bureau of Economic Research.

Haldane, Andy (2016): “The Dappled World,” Discussion paper, Bank of England. Available at http://www.bankofengland.co.uk/publications/Pages/speeches/2016/937.aspx.

Hall, Robert E. (1978): “Stochastic Implications of the Life-Cycle/Permanent Income Hypothesis: Theory and Evidence,” Journal of Political Economy, 86, 971–87. Available at http://www.stanford.edu/~rehall/Stochastic-JPE-Dec-1978.pdf.

Low, Hamish, Costas Meghir, and Luigi Pistaferri (2010): “Wage risk and employment risk over the life cycle,” The American Economic Review, 100(4), 1432–1467.

slide-192
SLIDE 192

References II

Rotemberg, Julio J., and Michael Woodford (1997): “An Optimization-Based Econometric Model for the Evaluation of Monetary Policy,” in NBER Macroeconomics Annual, 1997, ed. by Benjamin S. Bernanke and Julio J. Rotemberg, vol. 12, pp. 297–346. MIT Press, Cambridge, MA.

Summers, Lawrence H. (2011): “Larry Summers and Martin Wolf on New Economic Thinking,” Financial Times video interview. http://tinyurl.com/dl201108a.

Yellen, Janet (2016): “Macroeconomic Research After the Crisis.” Available at https://www.federalreserve.gov/newsevents/speech/yellen20161014a.htm.