

  1. Reading the Tea Leaves: Model Uncertainty, Robust Forecasts, and the Autocorrelation of Analysts’ Forecast Errors. J.T. Linnainmaa, W. Torous and J. Yae. December 1, 2016.

  2. Table of Contents: Introduction; Autocorrelation Puzzle; Hansen-Sargent Robust Forecasting; Empirical Methodology; Autocorrelation Decomposition.

  3. Autocorrelation Puzzle. For a one-period forecast, if analysts know the process and seek to minimize mean squared error, forecast errors will have mean zero and be serially uncorrelated. Empirically, however, forecast errors tend to be positive and autocorrelated, which would imply that analysts do not learn from past mistakes. Why?
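The benchmark in this slide can be checked with a small simulation. This is a minimal sketch, not from the paper: the AR(1) coefficient, noise scale, and sample size are illustrative assumptions.

```python
import numpy as np

# Sketch: when the analyst knows the process and uses the MSE-optimal
# one-step forecast, forecast errors are mean zero and serially
# uncorrelated. Parameter values below are assumptions for illustration.
rng = np.random.default_rng(0)
a, T = 0.7, 100_000

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal()

forecast = a * x[:-1]        # MSE-optimal one-step forecast E[x_t | x_{t-1}]
errors = x[1:] - forecast    # forecast errors equal the innovations

mean_err = errors.mean()
rho1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
print(round(mean_err, 3), round(rho1, 3))
```

Both the mean error and the lag-1 autocorrelation come out near zero, which is the benchmark against which the empirical puzzle is measured.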

  4. Motivating Example. Parameter uncertainty: x_t = (a + u_t) x_{t−1} + ε_t, where u_t ∼ N(0, σ²). Under parameter uncertainty, the error dissipates as analysts learn. Model (Knightian) uncertainty: the analyst does not know the underlying model; she only has an approximating model.

  5. Robust Forecasting. In a robust forecast, analysts overestimate. The authors find that variation in mean forecast errors contributes one-fifth of the measured autocorrelation; estimation errors of earnings growth shocks contribute another one-fifth; and model uncertainty contributes the remaining 60% of the measured autocorrelation.

  6. Why is this Important? It contributes to the literature on analyst behavior and asset-pricing anomalies, and it matters for questions about the efficient distribution of information and welfare.

  7. Relation to Other Literature. Uppal and Wang (2003), Maenhout (2004), and Epstein and Schneider (2008) suggest that model uncertainty is of first-order importance for portfolio choice and asset pricing. Hilary and Hsu (2013) find that analyst consistency, rather than accuracy, determines analysts’ rankings.

  8. Earnings and Signal Processes. Earnings process: y_{t+1} = µ + x_{t+1} + a_{t+1}, where y_{t+1} is reported earnings, x_{t+1} is the persistent (permanent) component of the earnings process, and a_{t+1} is noise. Signal process (private signal): s_t = e_{t+1} + n_t, where e_{t+1} is the permanent earnings shock and n_t ∼ N(0, σ_n²). All shocks have zero cross-correlations, autocorrelations, and cross-autocorrelations.
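The two processes above can be simulated directly. A minimal sketch, assuming (for illustration only) that the persistent component x follows a random walk driven by the permanent shocks e; all parameter values are assumptions, not the paper’s estimates.

```python
import numpy as np

# Sketch of the earnings process y_{t+1} = mu + x_{t+1} + a_{t+1} and the
# private signal s_t = e_{t+1} + n_t. The random-walk law of motion for x
# and every parameter value here are illustrative assumptions.
rng = np.random.default_rng(1)
T, mu = 10_000, 0.5
sigma_e, sigma_a, sigma_n = 1.0, 0.5, 0.8

e = rng.normal(0, sigma_e, T + 1)   # permanent earnings shocks
a = rng.normal(0, sigma_a, T + 1)   # transitory noise in reported earnings
n = rng.normal(0, sigma_n, T)       # noise in the private signal

x = np.cumsum(e)                    # persistent component (assumed random walk)
y = mu + x + a                      # reported earnings
s = e[1:] + n                       # signal about next period's permanent shock

# the signal is informative about the next permanent shock
corr = np.corrcoef(s, e[1:])[0, 1]
print(round(corr, 2))
```

The signal-to-shock correlation is governed by σ_e and σ_n, which is what makes the private signal valuable for the one-period-ahead forecast.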

  9. Earnings and Signal Processes (continued). The analyst’s objective in period t is to estimate y_{t+1} given the history of earnings and signals: E[y_{t+1} | s_t, s_{t−1}, …, s_1, y_t, y_{t−1}, …, y_1] = E[y_{t+1} | F_t]. This is the linear part of the model.

  10. The Uncertainty Environment. a_t^w = κ0 + κ1 a_t*, where a^w is the worst-case realization and a* is the noise under the analyst’s approximating model. Analysts do not know the true distribution of the noise, but we assume they approximate it as a_t* ∼ i.i.d. N(0, σ̂_a²). The authors assume the approximated variance σ̂_a² equals the true variance σ_a² in order to ensure the approximating model is good. The actual realization is a_t^w ∼ N(κ0, κ1² σ_a²), where κ0 is a real number and κ1 is a non-negative number. The realization is a function of a random draw from this distribution.
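The distortion a_t^w = κ0 + κ1 a_t* is easy to illustrate by simulation: shifting and scaling draws from the approximating model N(0, σ_a²) yields the worst-case distribution N(κ0, κ1² σ_a²). The values of κ0, κ1, and σ_a below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the "evil agent's" distortion: a_w = kappa0 + kappa1 * a_star,
# so draws from the approximating model N(0, sigma_a^2) become draws from
# N(kappa0, kappa1^2 * sigma_a^2). Parameter values are assumptions.
rng = np.random.default_rng(2)
kappa0, kappa1, sigma_a = 0.3, 1.2, 1.0

a_star = rng.normal(0, sigma_a, 100_000)   # approximating-model draws
a_w = kappa0 + kappa1 * a_star             # worst-case realizations

print(round(a_w.mean(), 2), round(a_w.std(), 2))
```

The sample mean and standard deviation land near κ0 and κ1·σ_a, matching the stated worst-case distribution.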

  11. The Robust Forecasting Problem.
min over ŷ_{t|t−1}, max over (κ0, κ1) of E[{y_t^w − ŷ_{t|t−1}}² | F_{t−1}]
subject to E[{(a_t^w − a_t*) + (x̂_{t|t−1}^w − x̂_{t|t−1})}² | F_{t−1}] ≤ η² σ_a²,
where (a_t^w − a_t*) is the Deviation and (x̂_{t|t−1}^w − x̂_{t|t−1}) is the Perceived Bias. Here y^w is the worst-case ex ante outcome; ŷ_{t|t−1} is the analyst’s optimal forecast given information up to t − 1; x̂_{t|t−1}^w is the optimal forecast of x_t (using a Kalman filter) under the worst case; and x̂_{t|t−1} is the optimal forecast of x_t given the analyst’s expectations of the “evil agent’s” choice of κ0 and κ1. Finally, a_t^w is the worst-case realization of a_t, whereas a_t* is the approximating estimate.

  12. Direct and Indirect Effects. (a_t^w − a_t*) is the direct effect: it expresses the amount of distortion induced by the “evil agent.” (x̂_{t|t−1}^w − x̂_{t|t−1}) is the indirect effect, which arises from the analyst relying on inaccurate historical information in future estimates. η measures the agent’s concern for model misspecification, and σ_a² is the variance of the noise induced by the “evil agent”; thus ησ_a² is the degree of robustness in the model. As η → ∞, the entropy becomes so great that the analyst can no longer distinguish between models. When η = 0, we recover a standard rational expectations model.

  13. Minimax Optimization. The analyst solves a static optimization problem: her forecasts are independent of her previous forecasts, and the same solution applies at every date t. The analyst knows that the parameters of the true earnings process completely determine her current estimate ŷ_{t|t−1}. In other words, after choosing (κ̂0, κ̂1), her estimate of the “evil agent’s” noise process, the analyst obtains an optimal forecast using a Kalman filter.

  14. Intuition behind Lemma 2.1. The forecast is a function of the previous forecast ŷ_t, the forecast error (y_t − ŷ_t), and the additional signal s_t. The Kalman gain K captures how much the analyst uses previous forecast errors to revise her estimate of x_t. The weight w measures how much the analyst uses the extra signal s_t to estimate e_{t+1}, the permanent growth shock.
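The update described above can be sketched schematically. This is not the paper’s exact Lemma 2.1 formula, only its qualitative shape: the function name and the values of K and w are assumptions.

```python
# Schematic sketch of the forecast revision in Lemma 2.1 (not the paper's
# exact coefficients): the new forecast adjusts the previous one using the
# Kalman gain K on the last forecast error and a weight w on the signal.
def update_forecast(y_hat_prev, y_realized, s_t, K=0.4, w=0.6):
    """One-step forecast revision; K and w are illustrative values."""
    forecast_error = y_realized - y_hat_prev
    # K: how much past errors revise the estimate of the persistent
    # component; w: how much the private signal informs the next shock
    return y_hat_prev + K * forecast_error + w * s_t

y_hat = update_forecast(y_hat_prev=1.0, y_realized=1.5, s_t=0.2)
print(round(y_hat, 2))  # 1.32
```

A larger K means past mistakes are corrected more aggressively; a larger w means the private signal moves the forecast more.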

  15. Intuition behind Proposition 2. If θ̂ = θ, that is, if the analyst predicts the true values of the model’s parameters, the autocorrelation of forecast errors goes to zero. With robust forecasting, the analyst knows everything except the distribution of the noise a_t: the first term goes to zero, but the other two terms remain strictly positive.
