‘When Inequality Matters for Macro and Macro Matters for Inequality’
Discussion by Christopher Carroll1 (with help from Edmund Crawley1)
1Johns Hopkins University
ccarroll@jhu.edu, edmundcrawley@gmail.com
NBER Macro Annual Meeting, April
◮ “Almost nothing from the academic macroeconomics literature
◮ But: Methods work only in continuous time
◮ Computers are Digital Devices
◮ Ultimately they discretize everything anyway
◮ Errors are more easily ...
  ◮ Avoided
  ◮ Detected
  ◮ Understood
  ◮ Fixed
◮ ... if you don’t rely on ‘magic’
◮ Millions of IRS records
◮ Improvement from more complexity is minimal
1He thought in levels; today we interpret y and p as logs.
◮ Let the data say whether ‘permanent’
◮ Guvenen, Karahan, Ozkan, and Song (2015); ‘GKOS’
◮ It’s (nearly) a permanent shock
◮ ‘Friedman’ permanent shocks wrong way to think about it
◮ Looks pretty good to me!
◮ Several papers find lump sums are used as car down payments
◮ $35.07 (or $70.14) would not suffice!
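The arithmetic behind this point: under Friedman's permanent income hypothesis, consumption out of a one-off transitory windfall rises only by the windfall's annuity value, (r/R) times the check. A minimal sketch; the check size and interest rate below are illustrative assumptions, not the parameters behind the slide's $35.07/$70.14 figures.

```python
# Sketch: PIH spending response to a one-time transitory windfall.
# Parameters are illustrative assumptions, not the slide's calibration.

def annuity_value(windfall, r):
    """Permanent-income consumption response to a one-time windfall."""
    R = 1.0 + r
    return (r / R) * windfall

resp_single = annuity_value(600.0, 0.04)   # e.g. a $600 stimulus check, 4% rate
resp_couple = annuity_value(1200.0, 0.04)  # a $1200 check for a couple
print(f"${resp_single:.2f} per year")      # ~$23: nowhere near a car down payment
```

Whatever the exact rate assumed, the response is two orders of magnitude smaller than the check, which is the slide's point.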
◮ Perfect foresight
◮ No liquidity constraints
◮ Perfect capital markets
◮ AKMWW shock approaches a ‘Friedman’ transitory shock
◮ Say, for analyzing 2008 stimulus
◮ Summers would not be impressed
◮ Promotions
◮ Job Changes
◮ Simulate process at HH level, compare results to data
◮ Am sure it would make a huge miss
◮ Most of them estimate Friedmanesque process
◮ GKOS specialized in measuring leptokurtosis, recessions, tails
◮ They aren’t aiming at any of these targets anyway
◮ ⇒ Match rest of literature, not GKOS
◮ var(∆ log C) < var(∆ log Y)
◮ Flavin (1981)
◮ ∆ log C inappropriately ‘sensitive’ to predictable ∆ log Y
◮ Then ∆ log Ct+1 = (r/R)∆ log Yt+1
◮ C vastly smoother than Y
◮ Then ∆ log Ct+1 = ∆ log Yt+1
◮ Variances of ∆ log Ct+1 and ∆ log Yt+1 are equal
◮ Then ∆ log Ct+1 = (1/(1 − γ))∆ log Yt+1
◮ var(∆ log Ct+1) = (1/(1 − γ))² var(∆ log Yt+1)
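The three cases above can be checked numerically: the PIH-implied ratio of consumption-growth to income-growth variability under purely transitory shocks, random-walk income, and persistent income growth. Parameter values (r, γ) are illustrative assumptions.

```python
# Numerical sketch of the three PIH cases above; r and gamma are
# illustrative assumptions, not the paper's calibration.

r = 0.04                 # interest rate (assumption)
R = 1.0 + r
gamma = 0.5              # persistence of income *growth* (assumption)

# Case 1: purely transitory shocks -> consume only the annuity value
ratio_transitory = r / R                      # ~0.04: C vastly smoother than Y

# Case 2: random-walk (permanent) income -> C moves one-for-one with Y
ratio_random_walk = 1.0

# Case 3: persistent income growth -> C responds MORE than one-for-one
ratio_persistent_growth = 1.0 / (1.0 - gamma)  # 2.0: C *less* smooth than Y

print(ratio_transitory, ratio_random_walk, ratio_persistent_growth)
```

The point of case 3 is Campbell and Deaton's: with persistent income growth, the PIH predicts consumption should be more volatile than income, not less.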
◮ Stock (1991): Confidence bands for the largest root are large
◮ 90% interval: 0.88 to 1.01
◮ log Yt+1 = 0.95 log Yt
◮ Their model is a = 0.37, ρ = 1
◮ But a = 0.44, ρ = 0.9 also has 0.37 autocorrelation
[Figure: σ(∆ log Ct)/σ(∆ log Yt) (y-axis, 0.4 to 0.8) plotted against values from 0.90 to 1.00 (x-axis)]
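Stock's point about imprecise largest roots is easy to illustrate by Monte Carlo: with a few decades of data, OLS estimates of a near-unit autoregressive root are widely dispersed (and biased downward). The sample size, true root, and replication count below are assumptions for illustration.

```python
# Monte Carlo sketch: dispersion of the OLS-estimated AR(1) root with a
# short sample. T, rho, and reps are illustrative assumptions.
import random

def simulate_ar1_ols(rho=0.95, T=40, reps=2000, seed=0):
    """Simulate AR(1) paths and return sorted OLS estimates of the root."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        y = [0.0]
        for _t in range(T):
            y.append(rho * y[-1] + rng.gauss(0.0, 1.0))
        x, z = y[:-1], y[1:]
        # OLS slope through the origin: sum(x*z) / sum(x*x)
        estimates.append(sum(a * b for a, b in zip(x, z)) /
                         sum(a * a for a in x))
    return sorted(estimates)

est = simulate_ar1_ols()
lo_, hi_ = est[len(est) // 20], est[-(len(est) // 20)]  # ~5th, 95th percentiles
print(f"5%-95% range of estimated root: [{lo_:.2f}, {hi_:.2f}]")
```

The resulting interval spans roughly a tenth of a point of persistence, in the spirit of Stock's 0.88-to-1.01 band: the data cannot cleanly distinguish ρ = 0.9 from ρ = 1.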
◮ N.B.: This is exactly the circumstance in which z1 shock is
◮ ∆ log Ct+1 = σ(rt+1 − ϑ)
◮ Says that when r ↑, Ct+1 ↓ instantly
◮ Data: Half life of response of C to MP shocks looks like a year
◮ Their (embarrassed) solution: Assume C is just stuck for a year
◮ (Embarrassed) motivation: You plan your budget a year in advance
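For reference, the Euler equation on the slide is the standard log-linearization under CRRA utility and perfect foresight, with σ the intertemporal elasticity of substitution and ϑ the discount rate (a textbook derivation, stated here under those assumptions):

```latex
% CRRA utility and the first-order condition, with \beta = e^{-\vartheta}:
u(C) = \frac{C^{1-1/\sigma}}{1 - 1/\sigma},
\qquad u'(C_t) = \beta R_{t+1}\, u'(C_{t+1})
% Substituting u'(C) = C^{-1/\sigma} and taking logs:
-\tfrac{1}{\sigma}\log C_t = -\vartheta + \log R_{t+1} - \tfrac{1}{\sigma}\log C_{t+1}
% Using \log R_{t+1} \approx r_{t+1} and rearranging:
\Delta \log C_{t+1} = \sigma\,(r_{t+1} - \vartheta)
```

Hence a rise in r raises planned consumption growth, which in this frictionless setting means the level of consumption must jump down immediately.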
◮ Deaton (1992)
◮ Dynan (2000)
◮ Subsequent literature: No micro evidence for habits
◮ Standard habits parameterization might imply ¯
◮ Hard to maintain straight face and say this is ‘microfounded’
◮ Though they could; maybe should do this instead
◮ Solve model so innovative and complicated that authors
◮ Solve cleanest, clearest, simplest models possible
◮ Where everyone can see their answers are right
◮ They do this with the 20-year-old KS model
◮ Should do it with some others
◮ Distribution of MPC may be close to ‘sufficient statistic’
◮ Results robust to alternative treatments of aggregate stuff
◮ Both Household Level and Regional
◮ It’s about behavior of individuals
◮ Most macroeconomists are individuals
References

Blanchard, Olivier (2016): “Do DSGE Models Have a Future?,” Discussion paper, Peterson Institute for International Economics, https://piie.com/system/files/documents/pb16-11.pdf.
Campbell, John, and Angus Deaton (1989): “Why is Consumption So Smooth?,” The Review of Economic Studies, 56(3), 357–373, http://www.jstor.org/stable/2297552.
Campbell, John Y., and N. Gregory Mankiw (1989): “Consumption, Income, and Interest Rates: Reinterpreting the Time-Series Evidence,” in NBER Macroeconomics Annual 1989, ed. by Olivier J. Blanchard and Stanley Fischer, pp. 185–216. MIT Press, Cambridge, MA, http://www.nber.org/papers/w2924.pdf.
Coeure, Benoit (2013): “The Relevance of Household-Level Data for Monetary Policy and Financial Stability Analysis.”
Deaton, Angus S. (1992): Understanding Consumption. Oxford University Press, New York.
DeBacker, Jason, Bradley Heim, Vasia Panousi, Shanthi Ramnath, and Ivan Vidangos (2013): “Rising Inequality: Transitory or Persistent? New Evidence from a Panel of U.S. Tax Returns,” Brookings Papers on Economic Activity, Spring, 67–122.
Dynan, Karen E. (2000): “Habit Formation in Consumer Preferences: Evidence from Panel Data,” American Economic Review, 90(3), http://www.jstor.org/stable/117335.
Flavin, Marjorie B. (1981): “The Adjustment of Consumption to Changing Expectations About Future Income,” Journal of Political Economy, 89, 974–1009, http://www.jstor.org/stable/1830816.
Friedman, Milton A. (1957): A Theory of the Consumption Function. Princeton University Press.
Guvenen, Fatih, Fatih Karahan, Serdar Ozkan, and Jae Song (2015): “What Do Data on Millions of US Workers Reveal about Life-Cycle Earnings Risk?,” Discussion paper, National Bureau of Economic Research.
Haldane, Andy (2016): “The Dappled World,” Discussion paper, Bank of England, http://www.bankofengland.co.uk/publications/Pages/speeches/2016/937.aspx.
Hall, Robert E. (1978): “Stochastic Implications of the Life-Cycle/Permanent Income Hypothesis: Theory and Evidence,” Journal of Political Economy, 86(6), 971–987, http://www.stanford.edu/~rehall/Stochastic-JPE-Dec-1978.pdf.
Low, Hamish, Costas Meghir, and Luigi Pistaferri (2010): “Wage Risk and Employment Risk over the Life Cycle,” American Economic Review, 100(4), 1432–1467.
Rotemberg, Julio J., and Michael Woodford (1997): “An Optimization-Based Econometric Model for the Evaluation of Monetary Policy,” in NBER Macroeconomics Annual 1997, ed. by Benjamin S. Bernanke and Julio J. Rotemberg, vol. 12, pp. 297–346. MIT Press, Cambridge, MA.
Summers, Lawrence H. (2011): “Larry Summers and Martin Wolf on New Economic Thinking,” Financial Times video interview, http://tinyurl.com/dl201108a.
Yellen, Janet (2016): “Macroeconomic Research After the Crisis,” https://www.federalreserve.gov/newsevents/speech/yellen20161014a.htm.