
1. Analysis of Multiple Time Series. Kevin Sheppard, http://www.kevinsheppard.com, Oxford MFE. This version: February 24, 2020. February – March, 2020.

2. This week's material
- Vector Autoregressions
- Basic examples
- Properties
  - Stationarity
- Revisiting univariate ARMA processes
- Forecasting
  - Granger Causality
  - Impulse Response functions
- Cointegration
  - Examining long-run relationships
  - Determining whether a VAR is cointegrated
  - Error Correction Models
  - Testing for Cointegration
    - Engle-Granger
Lots of revisiting univariate time series.

3. Why VAR analysis?
- Stationary VARs
  - Determine whether variables feed back into one another
  - Improve forecasts
  - Model the effect of a shock in one series on another
  - Differentiate between short-run and long-run dynamics
- Cointegration
  - Link random walks
  - Uncover long-run relationships
  - Can improve medium- to long-term forecasting a lot

4. VAR Defined
- $P$th order autoregression, AR(P): $y_t = \phi_0 + \phi_1 y_{t-1} + \ldots + \phi_P y_{t-P} + \epsilon_t$
- $P$th order vector autoregression, VAR(P): $y_t = \Phi_0 + \Phi_1 y_{t-1} + \ldots + \Phi_P y_{t-P} + \epsilon_t$, where $y_t$ and $\epsilon_t$ are $k$ by 1 vectors
- Bivariate VAR(1):
  $\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} \phi_{01} \\ \phi_{02} \end{bmatrix} + \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix}$
- Compactly expresses two linked models:
  $y_{1,t} = \phi_{01} + \phi_{11} y_{1,t-1} + \phi_{12} y_{2,t-1} + \epsilon_{1,t}$
  $y_{2,t} = \phi_{02} + \phi_{21} y_{1,t-1} + \phi_{22} y_{2,t-1} + \epsilon_{2,t}$
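
The definition is easy to make concrete by simulating a bivariate VAR(1). A minimal NumPy sketch follows; the parameter values and error covariance are illustrative choices, not numbers taken from the slides.

```python
import numpy as np

# Illustrative VAR(1) parameters (assumed values, not from the slides)
Phi0 = np.array([0.0, 0.0])
Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

rng = np.random.default_rng(0)
T = 1000
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
y = np.zeros((T, 2))
for t in range(1, T):
    # Each series depends on the lag of both series, which is the feedback a VAR captures
    y[t] = Phi0 + Phi1 @ y[t - 1] + eps[t]
```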

5. Stationarity Revisited
- Stationarity is a statistically meaningful form of regularity. A stochastic process $\{y_t\}$ is covariance stationary if
  $E[y_t] = \mu \;\forall t$
  $V[y_t] = \sigma^2 < \infty \;\forall t$
  $E[(y_t - \mu)(y_{t-s} - \mu)] = \gamma_s \;\forall t, s$
- AR(1) stationarity: $y_t = \phi y_{t-1} + \epsilon_t$
  - $|\phi| < 1$
  - $\epsilon_t$ is white noise
- AR(P) stationarity: $y_t = \phi_1 y_{t-1} + \ldots + \phi_P y_{t-P} + \epsilon_t$
  - Roots of $z^P - \phi_1 z^{P-1} - \phi_2 z^{P-2} - \ldots - \phi_{P-1} z - \phi_P$ are all less than 1 in absolute value
  - $\epsilon_t$ is white noise
- No dependence on $t$
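
The AR(P) root condition is easy to check numerically. A small sketch, using illustrative AR(2) coefficients rather than any estimates from the slides:

```python
import numpy as np

phi = [0.6, 0.25]                      # illustrative AR(2) coefficients
poly = np.r_[1.0, -np.asarray(phi)]    # coefficients of z^P - phi_1 z^{P-1} - ... - phi_P
roots = np.roots(poly)
print(roots)
print(np.all(np.abs(roots) < 1))       # True -> covariance stationary
```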

6. Relationship to AR
- AR(1), by backward substitution:
  $y_t = \phi_0 + \phi_1 y_{t-1} + \epsilon_t$
  $\quad = \phi_0 + \phi_1(\phi_0 + \phi_1 y_{t-2} + \epsilon_{t-1}) + \epsilon_t$
  $\quad = \phi_0 + \phi_1 \phi_0 + \phi_1^2 y_{t-2} + \phi_1 \epsilon_{t-1} + \epsilon_t$
  $\quad = \phi_0 + \phi_1 \phi_0 + \phi_1^2(\phi_0 + \phi_1 y_{t-3} + \epsilon_{t-2}) + \phi_1 \epsilon_{t-1} + \epsilon_t$
  $\quad = \sum_{i=0}^{\infty} \phi_1^i \phi_0 + \sum_{i=0}^{\infty} \phi_1^i \epsilon_{t-i}$
  $\quad = (1 - \phi_1)^{-1} \phi_0 + \sum_{i=0}^{\infty} \phi_1^i \epsilon_{t-i}$

7. Relationship to AR
- VAR(1), by the same substitution:
  $y_t = \Phi_0 + \Phi_1 y_{t-1} + \epsilon_t$
  $\quad = \Phi_0 + \Phi_1(\Phi_0 + \Phi_1 y_{t-2} + \epsilon_{t-1}) + \epsilon_t$
  $\quad = \Phi_0 + \Phi_1 \Phi_0 + \Phi_1^2 y_{t-2} + \Phi_1 \epsilon_{t-1} + \epsilon_t$
  $\quad = \Phi_0 + \Phi_1 \Phi_0 + \Phi_1^2(\Phi_0 + \Phi_1 y_{t-3} + \epsilon_{t-2}) + \Phi_1 \epsilon_{t-1} + \epsilon_t$
  $\quad = \sum_{i=0}^{\infty} \Phi_1^i \Phi_0 + \sum_{i=0}^{\infty} \Phi_1^i \epsilon_{t-i}$
  $\quad = (I_k - \Phi_1)^{-1} \Phi_0 + \sum_{i=0}^{\infty} \Phi_1^i \epsilon_{t-i}$
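
One way to see the role of the $(I_k - \Phi_1)^{-1}\Phi_0$ term is to verify numerically that the truncated sum $\sum_i \Phi_1^i \Phi_0$ converges to it when the process is stationary. A sketch with illustrative parameters (not from the slides):

```python
import numpy as np

Phi0 = np.array([0.1, 0.2])
Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.3]])

closed_form = np.linalg.solve(np.eye(2) - Phi1, Phi0)   # (I_k - Phi_1)^{-1} Phi_0
truncated = sum(np.linalg.matrix_power(Phi1, i) @ Phi0 for i in range(200))
print(np.allclose(closed_form, truncated))              # True when the eigenvalues of Phi_1 lie inside the unit circle
```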

8. Properties of a VAR(1) and AR(1)
AR(1): $y_t = \phi_0 + \phi_1 y_{t-1} + \epsilon_t$; VAR(1): $y_t = \Phi_0 + \Phi_1 y_{t-1} + \epsilon_t$

|                        | AR(1)                           | VAR(1)                                                  |
|------------------------|---------------------------------|---------------------------------------------------------|
| Mean                   | $\phi_0 / (1 - \phi_1)$         | $(I_k - \Phi_1)^{-1} \Phi_0$                            |
| Variance               | $\sigma^2 / (1 - \phi_1^2)$     | $(I - \Phi_1 \otimes \Phi_1)^{-1} \mathrm{vec}(\Sigma)$ |
| $s$th autocovariance   | $\gamma_s = \phi_1^s V[y_t]$    | $\Gamma_s = \Phi_1^s V[y_t]$                            |
| $-s$th autocovariance  | $\gamma_{-s} = \phi_1^s V[y_t]$ | $\Gamma_{-s} = V[y_t]\,(\Phi_1^s)'$                     |

Autocovariances of vector processes are not symmetric, but $\Gamma_s = \Gamma_{-s}'$.
- Stationarity
  - AR(1): $|\phi_1| < 1$
  - VAR(1): $|\lambda_i| < 1$, where the $\lambda_i$ are the eigenvalues of $\Phi_1$
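
These moments are straightforward to compute for a given parameterization. The sketch below uses illustrative values for $\Phi_0$, $\Phi_1$, and $\Sigma$ (not from the slides) and applies the formulas in the table, including the vec-Kronecker expression for the unconditional variance.

```python
import numpy as np

Phi0 = np.array([0.1, 0.2])
Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
k = 2

# Stationarity check: all eigenvalues of Phi_1 inside the unit circle
assert np.all(np.abs(np.linalg.eigvals(Phi1)) < 1)

mean = np.linalg.solve(np.eye(k) - Phi1, Phi0)                 # (I_k - Phi_1)^{-1} Phi_0
vecV = np.linalg.solve(np.eye(k * k) - np.kron(Phi1, Phi1),
                       Sigma.ravel(order="F"))                 # (I - Phi_1 (x) Phi_1)^{-1} vec(Sigma)
V = vecV.reshape(k, k, order="F")                              # unconditional V[y_t]
Gamma1 = Phi1 @ V                                              # first autocovariance, Gamma_1 = Phi_1 V[y_t]
```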

9. Stock and Bond VAR
- VWM from CRSP
- TERM constructed from the 10-year bond return minus the 1-year bond return, from FRED
- February 1962 until December 2018 (683 months)
  $\begin{bmatrix} \text{VWM}_t \\ \text{TERM}_t \end{bmatrix} = \begin{bmatrix} \phi_{01} \\ \phi_{02} \end{bmatrix} + \begin{bmatrix} \phi_{11,1} & \phi_{12,1} \\ \phi_{21,1} & \phi_{22,1} \end{bmatrix} \begin{bmatrix} \text{VWM}_{t-1} \\ \text{TERM}_{t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix}$
- Market model: $\text{VWM}_t = \phi_{01} + \phi_{11,1}\,\text{VWM}_{t-1} + \phi_{12,1}\,\text{TERM}_{t-1} + \epsilon_{1,t}$
- Long bond model: $\text{TERM}_t = \phi_{02} + \phi_{21,1}\,\text{VWM}_{t-1} + \phi_{22,1}\,\text{TERM}_{t-1} + \epsilon_{2,t}$
- Estimates (values in parentheses):
  $\begin{bmatrix} \text{VWM}_t \\ \text{TERM}_t \end{bmatrix} = \begin{bmatrix} \underset{(0.122)}{0.801} \\ \underset{(0.041)}{0.232} \end{bmatrix} + \begin{bmatrix} \underset{(0.004)}{0.059} & \underset{(0.000)}{0.166} \\ \underset{(0.002)}{-0.104} & \underset{(0.000)}{0.116} \end{bmatrix} \begin{bmatrix} \text{VWM}_{t-1} \\ \text{TERM}_{t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix}$

10. Stock and Bond VAR
- Estimates from the VAR (values in parentheses):
  $\text{VWM}_t = \underset{(0.000)}{0.816} + \underset{(0.117)}{0.060}\,\text{VWM}_{t-1} + \underset{(0.003)}{0.168}\,\text{TERM}_{t-1}$
  $\text{TERM}_t = \underset{(0.045)}{0.228} - \underset{(0.000)}{0.104}\,\text{VWM}_{t-1} + \underset{(0.002)}{0.115}\,\text{TERM}_{t-1}$
- Estimates from univariate ARs:
  $\text{VWM}_t = \underset{(0.000)}{0.830} + \underset{(0.057)}{0.073}\,\text{VWM}_{t-1}$
  $\text{TERM}_t = \underset{(0.224)}{0.137} + \underset{(0.011)}{0.098}\,\text{TERM}_{t-1}$
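
Estimates like these can be produced with any routine that fits a VAR equation by equation with OLS. A minimal sketch using statsmodels; the DataFrame and its VWM/TERM columns are placeholders, not the actual CRSP/FRED series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder data standing in for the 683 monthly observations of VWM and TERM
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((683, 2)), columns=["VWM", "TERM"])

model = VAR(data)
res = model.fit(1)                                   # VAR(1); res.params holds Phi_0 and Phi_1
print(res.summary())
fc = res.forecast(data.values[-1:], steps=12)        # 12-month-ahead iterated forecasts
```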

11. Comparing AR and VAR forecasts
[Figure: two panels of 1-month-ahead forecasts, the top panel for VWM returns and the bottom panel for 10-year bond returns.]

12. Monetary Policy VAR
- Standard tool in monetary policy analysis
  - Unemployment rate (differenced)
  - Federal Funds rate
  - Inflation rate (differenced)
  $\begin{bmatrix} \Delta \ln \text{UNEMP}_t \\ \text{FF}_t \\ \Delta \text{INF}_t \end{bmatrix} = \Phi_0 + \Phi_1 \begin{bmatrix} \Delta \ln \text{UNEMP}_{t-1} \\ \text{FF}_{t-1} \\ \Delta \text{INF}_{t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \\ \epsilon_{3,t} \end{bmatrix}$
- Estimates of $\Phi_1$ (values in parentheses):

|                              | $\Delta \ln \text{UNEMP}_{t-1}$ | $\text{FF}_{t-1}$ | $\Delta \text{INF}_{t-1}$ |
|------------------------------|---------------------------------|-------------------|---------------------------|
| $\Delta \ln \text{UNEMP}_t$  | 0.624 (0.000)                   | 0.015 (0.001)     | 0.016 (0.267)             |
| $\text{FF}_t$                | -0.816 (0.000)                  | 0.979 (0.000)     | -0.045 (0.317)            |
| $\Delta \text{INF}_t$        | -0.501 (0.010)                  | -0.009 (0.626)    | -0.401 (0.000)            |

13. Interpreting Estimates
- Variable scale affects cross-parameter estimates
  - Not an issue in ARMA analysis
- Standardizing the data can improve interpretation when scales differ
- Estimates of $\Phi_1$ using standardized data (values in parentheses):

|                              | $\Delta \ln \text{UNEMP}_{t-1}$ | $\text{FF}_{t-1}$ | $\Delta \text{INF}_{t-1}$ |
|------------------------------|---------------------------------|-------------------|---------------------------|
| $\Delta \ln \text{UNEMP}_t$  | 0.624 (0.000)                   | 0.153 (0.001)     | 0.053 (0.267)             |
| $\text{FF}_t$                | -0.080 (0.000)                  | 0.979 (0.000)     | -0.015 (0.317)            |
| $\Delta \text{INF}_t$        | -0.151 (0.010)                  | -0.028 (0.626)    | -0.401 (0.000)            |

- Other important measures – statistical significance, persistence, model selection – are unaffected by standardization
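
Standardization is just a column-wise z-score applied before estimation. A small sketch, again on placeholder data with hypothetical column names; it illustrates that own-lag coefficients and p-values are unchanged while cross-variable coefficients are rescaled by the ratio of column standard deviations.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((600, 3)),
                    columns=["dlnUNEMP", "FF", "dINF"])   # placeholder series

standardized = (data - data.mean()) / data.std()          # z-score each column
res_raw = VAR(data).fit(1)
res_std = VAR(standardized).fit(1)
# Own-lag (diagonal) coefficients and all p-values match across the two fits;
# only the cross-variable coefficients change scale.
```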

14. VAR(P) is really a VAR(1)
- Companion form: the VAR(P) $y_t = \Phi_0 + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \ldots + \Phi_P y_{t-P} + \epsilon_t$ can be rewritten as a single VAR(1),
  $z_t = \Upsilon z_{t-1} + \xi_t$
  where $\mu = E[y_t] = (I - \Phi_1 - \ldots - \Phi_P)^{-1} \Phi_0$ and
  $z_t = \begin{bmatrix} y_t - \mu \\ y_{t-1} - \mu \\ \vdots \\ y_{t-P+1} - \mu \end{bmatrix}, \quad
  \Upsilon = \begin{bmatrix} \Phi_1 & \Phi_2 & \Phi_3 & \ldots & \Phi_{P-1} & \Phi_P \\ I_k & 0 & 0 & \ldots & 0 & 0 \\ 0 & I_k & 0 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & I_k & 0 \end{bmatrix}$
  - All VAR(1) results can be directly applied to the companion form.
  - Can also be used to transform an AR(P) into a VAR(1)
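
The companion matrix can be assembled mechanically from the $\Phi_p$ matrices, and its eigenvalues give the stationarity check for the original VAR(P). A sketch with an illustrative bivariate VAR(2):

```python
import numpy as np

def companion(Phi_list):
    """Stack VAR(P) coefficient matrices Phi_1..Phi_P into the companion matrix Upsilon."""
    k = Phi_list[0].shape[0]
    P = len(Phi_list)
    top = np.hstack(Phi_list)                                        # [Phi_1 Phi_2 ... Phi_P]
    bottom = np.hstack([np.eye(k * (P - 1)), np.zeros((k * (P - 1), k))])
    return np.vstack([top, bottom])

# Illustrative bivariate VAR(2) coefficients (not from the slides)
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
Upsilon = companion([Phi1, Phi2])
stationary = np.all(np.abs(np.linalg.eigvals(Upsilon)) < 1)          # True -> the VAR(2) is stationary
```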

15. Revisiting Univariate Forecasting
- Consider a standard AR(1): $y_t = \phi_0 + \phi_1 y_{t-1} + \epsilon_t$
- Optimal 1-step-ahead forecast:
  $E_t[y_{t+1}] = E_t[\phi_0] + E_t[\phi_1 y_t] + E_t[\epsilon_{t+1}] = \phi_0 + \phi_1 y_t + 0$
- Optimal 2-step-ahead forecast:
  $E_t[y_{t+2}] = E_t[\phi_0] + E_t[\phi_1 y_{t+1}] + E_t[\epsilon_{t+2}] = \phi_0 + \phi_1 E_t[y_{t+1}] + 0 = \phi_0 + \phi_1(\phi_0 + \phi_1 y_t) = \phi_0 + \phi_1 \phi_0 + \phi_1^2 y_t$
- Optimal $h$-step-ahead forecast:
  $E_t[y_{t+h}] = \sum_{i=0}^{h-1} \phi_1^i \phi_0 + \phi_1^h y_t$

16. Forecasting with VARs
- Identical to the univariate case: $y_t = \Phi_0 + \Phi_1 y_{t-1} + \epsilon_t$
- Optimal 1-step-ahead forecast:
  $E_t[y_{t+1}] = E_t[\Phi_0] + E_t[\Phi_1 y_t] + E_t[\epsilon_{t+1}] = \Phi_0 + \Phi_1 y_t + 0$
- Optimal $h$-step-ahead forecast:
  $E_t[y_{t+h}] = \Phi_0 + \Phi_1 \Phi_0 + \ldots + \Phi_1^{h-1} \Phi_0 + \Phi_1^h y_t = \sum_{i=0}^{h-1} \Phi_1^i \Phi_0 + \Phi_1^h y_t$
- Higher-order forecasts can be computed recursively:
  $E_t[y_{t+h}] = \Phi_0 + \Phi_1 E_t[y_{t+h-1}] + \ldots + \Phi_P E_t[y_{t+h-P}]$
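
The recursion at the end of the slide translates directly into a short function. The sketch below uses illustrative parameters; for a VAR(1) the result matches the closed form $\sum_{i=0}^{h-1}\Phi_1^i\Phi_0 + \Phi_1^h y_t$.

```python
import numpy as np

def var_forecast(history, Phi0, Phi_list, h):
    """Recursive h-step forecast: E_t[y_{t+h}] = Phi_0 + sum_p Phi_p E_t[y_{t+h-p}]."""
    P = len(Phi_list)
    path = [np.asarray(y, dtype=float) for y in history[-P:]]   # last P observations, oldest first
    for _ in range(h):
        nxt = Phi0 + sum(Phi_list[p] @ path[-(p + 1)] for p in range(P))
        path.append(nxt)                                        # forecasts replace future observations
    return path[-1]

# Illustrative bivariate VAR(1)
Phi0 = np.array([0.1, 0.2])
Phi1 = np.array([[0.5, 0.2], [0.1, 0.3]])
y_T = np.array([0.3, -0.1])
print(var_forecast([y_T], Phi0, [Phi1], h=3))
```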

17. What makes a good forecast?
- Forecast residuals: $\hat{e}_{t+h|t} = y_{t+h} - \hat{y}_{t+h|t}$
- Residuals are not white noise
- Can contain an MA($h-1$) component
  - The forecast error $y_{t+1} - \hat{y}_{t+1|t-h+1}$ was not known at time $t$.
- Plot your residuals
- Residual ACF
- Mincer-Zarnowitz regressions
- Three-period procedure
  - Training sample: used to build the model
  - Validation sample: used to refine the model
  - Evaluation sample: the ultimate test, ideally one shot
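
A Mincer-Zarnowitz regression projects realized values on their forecasts and tests whether the intercept is 0 and the slope is 1. A minimal sketch on synthetic forecast/realization pairs; with overlapping $h$-step errors, a HAC covariance is one reasonable choice.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical realized values and h-step-ahead forecasts (synthetic, for illustration only)
rng = np.random.default_rng(0)
forecasts = rng.standard_normal(200)
realized = forecasts + 0.1 * rng.standard_normal(200)

X = sm.add_constant(forecasts)
mz = sm.OLS(realized, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # robust to MA(h-1) errors
print(mz.params)                        # unbiased forecasts imply intercept 0 and slope 1
print(mz.f_test("const = 0, x1 = 1"))   # joint test of the Mincer-Zarnowitz restrictions
```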

18. Multi-step Forecasting
- Two methods
- Iterative method
  - Build a model for 1-step-ahead forecasts: $y_t = \Phi_0 + \Phi_1 y_{t-1} + \epsilon_t$
  - Iterate the forecast out to period $h$: $\hat{y}_{t+h|t} = \sum_{i=0}^{h-1} \Phi_1^i \Phi_0 + \Phi_1^h y_t$
  - Makes efficient use of information
  - Imposes a lot of structure on the problem
- Direct method
  - Build a model for $h$-step-ahead forecasts: $y_t = \Phi_0 + \Phi_h y_{t-h} + \epsilon_t$
  - Directly forecast using a pseudo 1-step-ahead method: $\hat{y}_{t+h|t} = \Phi_0 + \Phi_h y_t$
  - Robust to some nonlinearities
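
The two approaches can be compared side by side. A sketch on placeholder data with hypothetical column names: the iterative forecast comes from a fitted VAR(1) iterated $h$ steps ahead, while the direct forecast regresses $y_t$ on $y_{t-h}$ and takes one pseudo step.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((500, 2)), columns=["y1", "y2"])  # placeholder series
h = 6

# Iterative: fit a one-step model, then iterate the forecast out h periods
iter_fc = VAR(data).fit(1).forecast(data.values[-1:], steps=h)[-1]

# Direct: regress y_t on y_{t-h} by OLS, then forecast one "pseudo step" ahead
Y = data.values[h:]
X = np.hstack([np.ones((len(Y), 1)), data.values[:-h]])
B = np.linalg.lstsq(X, Y, rcond=None)[0]            # stacked [Phi_0; Phi_h']
direct_fc = B[0] + data.values[-1] @ B[1:]
```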
