

1. Econometrics 2
Non-Stationary Time Series and Unit Root Tests
Heino Bohn Nielsen

2. Outline
(1) Deviations from stationarity:
    • Trends.
    • Level shifts.
    • Variance changes.
    • Unit roots.
(2) More on autoregressive unit root processes.
(3) Dickey-Fuller unit root test:
    • AR(1).
    • AR(p).
    • Deterministic terms.
(4) Further issues.

3. Stationarity
• The main assumption on the time series data so far has been stationarity. Recall the definition: a time series is called weakly stationary if
  $E[y_t] = \mu$,
  $V[y_t] = E[(y_t - \mu)^2] = \gamma_0$,
  $Cov[y_t, y_{t-k}] = E[(y_t - \mu)(y_{t-k} - \mu)] = \gamma_k$ for $k = 1, 2, \ldots$
• This can be violated in different ways. Examples of non-stationarity:
  (A) Deterministic trends (trend stationarity).
  (B) Level shifts.
  (C) Variance changes.
  (D) Unit roots (stochastic trends).

4. Four Non-Stationary Time Series
[Figure with four panels: (A) a stationary and a trend-stationary process, (B) a process with a level shift, (C) a process with a change in the variance, (D) a unit root process.]

5. (A) Trend-Stationarity
• Observation: many macro-economic variables are trending. We need a model for a trending variable, $y_t$.
• Assume that $x_t$ is stationary and that $y_t$ is $x_t$ plus a deterministic linear trend, e.g.
  $x_t = \theta x_{t-1} + \epsilon_t$, $|\theta| < 1$,
  $y_t = x_t + \mu_0 + \mu_1 t$.
• Remarks:
  (1) $y_t$ has a trending mean, $E[y_t] = \mu_0 + \mu_1 t$, and is non-stationary.
  (2) The de-trended variable, $\hat{x}_t = y_t - \hat{\mu}_0 - \hat{\mu}_1 t$, is stationary. We say that $y_t$ is trend-stationary.
  (3) The stochastic part is stationary, and standard asymptotics apply to $\hat{x}_t$.
  (4) Solution: extend the regression with a deterministic trend, e.g. $y_t = \beta_0 + \beta_1 z_t + \beta_3 t + \epsilon_t$.
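A minimal simulation sketch of de-trending (assuming numpy is available; the parameter values, seed, and sample size are illustrative, not taken from the slides):

```python
# Simulate a trend-stationary series and remove the deterministic trend
# by OLS on a constant and t; the residual estimates the stationary part.
import numpy as np

rng = np.random.default_rng(0)
T = 200
theta, mu0, mu1 = 0.8, 1.0, 0.05      # illustrative values

# Stationary AR(1) part: x_t = theta * x_{t-1} + eps_t
eps = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = theta * x[t - 1] + eps[t]

# Trend-stationary series: y_t = x_t + mu0 + mu1 * t
t_idx = np.arange(T)
y = x + mu0 + mu1 * t_idx

# De-trend by regressing y on (1, t)
Z = np.column_stack([np.ones(T), t_idx])
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
x_hat = y - Z @ beta_hat              # de-trended, approximately stationary
print(beta_hat)                       # roughly (mu0, mu1)
```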

6. (B) Level Shifts and Structural Breaks
• Another type of non-stationarity is due to changes in parameters, e.g. a level shift:
  $E[y_t] = \mu_1$ for $t = 1, 2, \ldots, T_0$ and $E[y_t] = \mu_2$ for $t = T_0 + 1, T_0 + 2, \ldots, T$.
• If each sub-sample is stationary, then there are two modelling approaches:
  (1) Include a dummy variable
      $D_t = 0$ for $t = 1, 2, \ldots, T_0$ and $D_t = 1$ for $t = T_0 + 1, T_0 + 2, \ldots, T$
      in the regression model, $y_t = \beta_0 + \beta_1 z_t + \beta_3 D_t + \epsilon_t$. If $y_t - \beta_3 D_t$ is stationary, standard asymptotics apply.
  (2) Analyze the two sub-samples separately. This is particularly relevant if we think that more parameters have changed.
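A sketch of approach (1), simplified to a regression on a constant and the step dummy only (numpy assumed; the break point and level values are illustrative):

```python
# Level-shift series and a regression with a step dummy D_t.
import numpy as np

rng = np.random.default_rng(1)
T, T0 = 200, 100
mu1, mu2 = 0.0, 3.0                    # illustrative levels

D = (np.arange(1, T + 1) > T0).astype(float)   # D_t = 0 up to T0, 1 after
y = np.where(D == 0.0, mu1, mu2) + rng.normal(size=T)

# Regress y on a constant and the dummy: y_t = b0 + b3 * D_t + e_t
Z = np.column_stack([np.ones(T), D])
b_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(b_hat)                           # roughly (mu1, mu2 - mu1)
```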

7. (C) Changing Variances
• A third type of non-stationarity is related to changes in the variance. An example is $y_t = 0.5\, y_{t-1} + \epsilon_t$, where
  $\epsilon_t \sim N(0, 1)$ for $t = 1, 2, \ldots, T_0$ and $\epsilon_t \sim N(0, 5)$ for $t = T_0 + 1, T_0 + 2, \ldots, T$.
  The interpretation is that the time series covers different regimes.
• A natural solution is to model the regimes separately.
• Alternatively we can try to model the variance. We return to so-called ARCH models for changing variance later.
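A small simulation sketch of the example (numpy assumed; N(0, 5) is read here as variance 5, and the break point is illustrative):

```python
# AR(1) with a variance change at T0; compare the sample variance of the
# shocks in the two regimes.
import numpy as np

rng = np.random.default_rng(2)
T, T0 = 200, 100
sigma = np.where(np.arange(1, T + 1) <= T0, 1.0, np.sqrt(5.0))
eps = rng.normal(size=T) * sigma

y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + eps[t]

print(eps[:T0].var(), eps[T0:].var())   # roughly 1 and 5
```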

8. (D) Unit Roots
• If there is a unit root in an autoregressive model, standard asymptotics no longer apply! Consider the DGP
  $y_t = \theta y_{t-1} + \epsilon_t$, $\epsilon_t \sim N(0, 1)$, for $t = 1, 2, \ldots, 500$, and $y_0 = 0$,
  and consider the distribution of $\hat{\theta}$.
• Note the shape, location and variance of the distributions.
[Figure: simulated distributions of $\hat{\theta}$ for $\theta = 0.5$ (approximately normal, s = 0.0389) and for $\theta = 1$ (strongly skewed, concentrated just below 1, s = 0.00588).]
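A Monte Carlo sketch of this experiment (numpy assumed; the number of replications and the seed are arbitrary):

```python
# Distribution of the OLS estimator of theta in y_t = theta*y_{t-1} + eps_t,
# for theta = 0.5 and theta = 1, with T = 500 as on the slide.
import numpy as np

rng = np.random.default_rng(3)
T, reps = 500, 2000

def ols_theta_hat(theta):
    eps = rng.normal(size=T + 1)
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        y[t] = theta * y[t - 1] + eps[t]
    ylag, ycur = y[:-1], y[1:]
    return (ylag @ ycur) / (ylag @ ylag)   # OLS slope, no intercept

for theta in (0.5, 1.0):
    draws = np.array([ols_theta_hat(theta) for _ in range(reps)])
    print(theta, draws.mean(), draws.std())
# For theta = 0.5 the draws look roughly normal around 0.5; for theta = 1
# they pile up just below 1 with a long left tail (downward bias, skewness).
```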

9. Properties of a Stationary AR(1)
• Consider the AR(1) model $y_t = \theta y_{t-1} + \epsilon_t$, $t = 1, 2, \ldots, T$. The characteristic polynomial is $\theta(z) = 1 - \theta z$, with characteristic root $z_1 = \theta^{-1}$ and inverse root $\phi_1 = z_1^{-1} = \theta$. The stationarity condition is that $|\theta| < 1$.
• Recall the solution
  $y_t = \epsilon_t + \theta \epsilon_{t-1} + \theta^2 \epsilon_{t-2} + \cdots + \theta^{t-1} \epsilon_1 + \theta^t y_0$,
  where $\theta^s \to 0$. Shocks have only transitory effects; $y_t$ has an attractor.
• Note the properties
  $E[y_t] = \theta^t y_0 \to 0$,
  $V[y_t] = \sigma^2 + \theta^2 \sigma^2 + \theta^4 \sigma^2 + \cdots + \theta^{2(t-1)} \sigma^2 \to \dfrac{\sigma^2}{1 - \theta^2}$,
  $\rho_s = Corr(y_t, y_{t-s}) = \theta^s$.

10. Simulated Example
[Figure with four panels: (A) shock to a stationary process, $\theta = 0.8$; (B) shock to a unit root process, $\theta = 1$; (C) ACF for the stationary process, $\theta = 0.8$; (D) ACF for the unit root process, $\theta = 1$.]
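A sketch reproducing the flavour of the figure (numpy assumed; the shock size and placement are illustrative):

```python
# Feed the same shock sequence through theta = 0.8 and theta = 1 and compute
# the sample autocorrelation function of each simulated series.
import numpy as np

def simulate_ar1(theta, eps):
    y = np.zeros(len(eps))
    for t in range(1, len(eps)):
        y[t] = theta * y[t - 1] + eps[t]
    return y

def sample_acf(y, max_lag=25):
    y = y - y.mean()
    denom = y @ y
    return np.array([(y[k:] @ y[:len(y) - k]) / denom for k in range(max_lag + 1)])

rng = np.random.default_rng(4)
eps = rng.normal(size=100)
eps[50] += 5.0                       # one large shock half-way through

for theta in (0.8, 1.0):
    y = simulate_ar1(theta, eps)
    print(theta, sample_acf(y)[:5].round(2))
# The ACF for theta = 0.8 decays quickly; for theta = 1 it dies out very slowly,
# and the effect of the large shock persists in the level of the series.
```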

11. Autoregression with a Unit Root
• Consider the AR(1) with $\theta = 1$, i.e. $y_t = y_{t-1} + \epsilon_t$. The characteristic polynomial is $\theta(z) = 1 - z$. There is a unit root, $\theta(1) = 0$.
• The solution is given by
  $y_t = y_0 + \Delta y_1 + \Delta y_2 + \cdots + \Delta y_t = y_0 + \epsilon_1 + \epsilon_2 + \cdots + \epsilon_t = y_0 + \sum_{i=1}^{t} \epsilon_i$.
• Note the remarkable differences between $\theta = 1$ and a stationary process, $|\theta| < 1$:
  (1) The effect of the initial value, $y_0$, stays in the process, and $E[y_t \mid y_0] = y_0$.
  (2) Shocks, $\epsilon_t$, have permanent effects. They accumulate to a random walk component, $\sum \epsilon_i$, called a stochastic trend.
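A tiny sketch of the solution as an accumulated sum of shocks (numpy assumed; $y_0$ and the seed are illustrative):

```python
# Random-walk solution: y_t = y_0 + eps_1 + ... + eps_t.
import numpy as np

rng = np.random.default_rng(5)
T, y0 = 100, 2.0
eps = rng.normal(size=T)

y = y0 + np.cumsum(eps)      # same as iterating y_t = y_{t-1} + eps_t
print(y[:5])
# y_0 never washes out, and each eps_t shifts every later observation:
# the cumulative sum of shocks is the stochastic trend.
```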

12. (3) The variance increases: $V[y_t] = V\left[\sum_{i=1}^{t} \epsilon_i\right] = t \cdot \sigma^2$. The process is clearly non-stationary.
  (4) The covariance, $Cov(y_t, y_{t-s})$, is given by
      $E[(y_t - y_0)(y_{t-s} - y_0)] = E[(\epsilon_1 + \epsilon_2 + \cdots + \epsilon_t)(\epsilon_1 + \epsilon_2 + \cdots + \epsilon_{t-s})] = (t - s)\sigma^2$.
      The autocorrelation is
      $Corr(y_t, y_{t-s}) = \dfrac{Cov(y_t, y_{t-s})}{\sqrt{V[y_t] \cdot V[y_{t-s}]}} = \dfrac{(t - s)\sigma^2}{\sqrt{t\sigma^2 \cdot (t - s)\sigma^2}} = \sqrt{\dfrac{t - s}{t}}$,
      which dies out very slowly with $s$.
  (5) The first difference, $\Delta y_t = \epsilon_t$, is stationary. $y_t$ is called integrated of first order, I(1).
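A Monte Carlo check of properties (3) and (4) (numpy assumed; $t$, $s$, and the number of replications are arbitrary choices):

```python
# Check V[y_t] = t*sigma^2 and Corr(y_t, y_{t-s}) = sqrt((t-s)/t) for a
# random walk started at y_0 = 0 with sigma^2 = 1.
import numpy as np

rng = np.random.default_rng(6)
reps, t, s = 20000, 100, 20
eps = rng.normal(size=(reps, t))
y = np.cumsum(eps, axis=1)                      # each row is one random walk

var_t = y[:, t - 1].var()                       # should be close to t = 100
corr = np.corrcoef(y[:, t - 1], y[:, t - s - 1])[0, 1]
print(var_t, corr, np.sqrt((t - s) / t))        # roughly 100, 0.894, 0.894
```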

13. Deterministic Terms
• To model actual time series we include deterministic terms, e.g. $y_t = \delta + \theta y_{t-1} + \epsilon_t$. With a unit root, the terms accumulate!
• If $|\theta| < 1$, the solution is
  $y_t = \theta^t y_0 + \sum_{i=0}^{t-1} \theta^i \epsilon_{t-i} + (1 + \theta + \theta^2 + \cdots)\delta$,
  where the mean converges to $(1 + \theta + \theta^2 + \cdots)\delta \to \delta / (1 - \theta)$.
• If $\theta = 1$, the solution is
  $y_t = y_0 + \sum_{i=1}^{t} (\delta + \epsilon_i) = y_0 + \delta t + \sum_{i=1}^{t} \epsilon_i$.
  The constant term produces a deterministic linear trend: a random walk with drift. Note the parallel between a deterministic and a stochastic trend.
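A sketch of the $\theta = 1$ case (numpy assumed; the drift $\delta$ and the seed are illustrative):

```python
# Random walk with drift versus the purely deterministic trend it follows.
import numpy as np

rng = np.random.default_rng(7)
T, delta, y0 = 200, 0.2, 0.0
eps = rng.normal(size=T)

t_idx = np.arange(1, T + 1)
rw_drift = y0 + delta * t_idx + np.cumsum(eps)   # y_0 + delta*t + sum of eps_i
det_trend = y0 + delta * t_idx                   # the deterministic part alone
print(rw_drift[-1], det_trend[-1])
# Both grow at rate delta, but the random walk wanders around the trend line
# with variance t*sigma^2, i.e. it never settles down to the deterministic path.
```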

14. Unit Root Testing
• Estimate an autoregressive model and test whether $\theta(1) = 0$, i.e. whether $z = 1$ is a root of the autoregressive polynomial.
• This is a straightforward hypothesis test! We compare two relevant models: $H_0$ and $H_A$. The complication is that the asymptotic distributions are not standard.
• Issues:
  (1) Remember to specify the statistical model carefully.
  (2) What kinds of deterministic components are relevant: constant or trend?
  (3) What are the properties of the model under the null and under the alternative? Are both $H_0$ and $H_A$ relevant?
  (4) What is the relevant distribution of the test statistic?

15. Dickey-Fuller Test in an AR(1)
• Consider an AR(1) model, $y_t = \theta y_{t-1} + \epsilon_t$. The unit root hypothesis is $\theta(1) = 1 - \theta = 0$. The one-sided test against stationarity is
  $H_0: \theta = 1$ against $H_A: -1 < \theta < 1$.
• An equivalent formulation is
  $\Delta y_t = \pi y_{t-1} + \epsilon_t$, where $\pi = \theta - 1 = -\theta(1)$.
  The hypothesis $\theta(1) = 0$ translates into
  $H_0: \pi = 0$ against $H_A: -2 < \pi < 0$.
• The Dickey-Fuller (DF) test statistic is simply the t-ratio, i.e.
  $\hat{\tau} = \dfrac{\hat{\pi}}{se(\hat{\pi})} = \dfrac{\hat{\theta} - 1}{se(\hat{\theta})}$.
  The asymptotic distribution is the Dickey-Fuller (DF) distribution, not $N(0, 1)$.
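A sketch of the t-ratio computed by hand (numpy assumed; the helper name df_tau, the seed, and the sample size are arbitrary; the regression has no deterministic terms, matching the model on this slide):

```python
# DF t-ratio in the AR(1) case: regress dy_t on y_{t-1} and form
# tau = pi_hat / se(pi_hat).
import numpy as np

def df_tau(y):
    dy, ylag = np.diff(y), y[:-1]
    pi_hat = (ylag @ dy) / (ylag @ ylag)
    resid = dy - pi_hat * ylag
    s2 = resid @ resid / (len(dy) - 1)        # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))          # standard error of pi_hat
    return pi_hat / se

rng = np.random.default_rng(8)
y = np.cumsum(rng.normal(size=500))           # a true unit-root process
print(df_tau(y))
# Compare with the DF critical values on the next slide, not with N(0,1).
```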

16. Quantiles of the Dickey-Fuller distributions:

  Quantile    1%      2.5%    5%      10%
  N(0,1)     -2.33   -1.96   -1.64   -1.28
  DF         -2.56   -2.23   -1.94   -1.62
  DF_c       -3.43   -3.12   -2.86   -2.57
  DF_l       -3.96   -3.66   -3.41   -3.13

[Figure (A): Dickey-Fuller distributions — densities of DF, DF_c, DF_l and N(0,1).]
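The table's critical values can be approximated by Monte Carlo. A sketch (numpy assumed; the sample size and number of replications are arbitrary) simulating the DF distribution without deterministic terms, which should roughly match the DF row above (about -1.94 at 5%):

```python
# Simulate the distribution of the DF t-ratio under the unit-root null.
import numpy as np

rng = np.random.default_rng(9)
T, reps = 500, 5000

taus = []
for _ in range(reps):
    y = np.cumsum(rng.normal(size=T))          # DGP with a unit root
    dy, ylag = np.diff(y), y[:-1]
    pi_hat = (ylag @ dy) / (ylag @ ylag)
    resid = dy - pi_hat * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    taus.append(pi_hat / se)

print(np.quantile(taus, [0.01, 0.025, 0.05, 0.10]))
```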

17. Dickey-Fuller Test in an AR(p)
• The DF distribution is derived under the assumption that $\epsilon_t$ is IID. For the AR(p) process we derive the Augmented Dickey-Fuller (ADF) test.
• Consider the case of p = 3 lags:
  $y_t = \theta_1 y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + \epsilon_t$.
  A unit root in $\theta(z) = 1 - \theta_1 z - \theta_2 z^2 - \theta_3 z^3$ corresponds to $\theta(1) = 0$. To avoid testing a restriction on $1 - \theta_1 - \theta_2 - \theta_3$, the model is rewritten as
  $y_t - y_{t-1} = (\theta_1 - 1) y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + \epsilon_t$
  $y_t - y_{t-1} = (\theta_1 - 1) y_{t-1} + (\theta_2 + \theta_3) y_{t-2} + \theta_3 (y_{t-3} - y_{t-2}) + \epsilon_t$
  $y_t - y_{t-1} = (\theta_1 + \theta_2 + \theta_3 - 1) y_{t-1} + (\theta_2 + \theta_3)(y_{t-2} - y_{t-1}) + \theta_3 (y_{t-3} - y_{t-2}) + \epsilon_t$
  $\Delta y_t = \pi y_{t-1} + c_1 \Delta y_{t-1} + c_2 \Delta y_{t-2} + \epsilon_t$,
  where $\pi = \theta_1 + \theta_2 + \theta_3 - 1 = -\theta(1)$, $c_1 = -(\theta_2 + \theta_3)$, $c_2 = -\theta_3$.
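In practice the ADF regression is rarely coded by hand. A sketch using statsmodels' adfuller (assuming statsmodels is installed; maxlag=2 fixes two lagged differences as in the p = 3 derivation above, and regression="c" adds a constant, so the reported critical values correspond to the DF_c case on the previous slide):

```python
# ADF test on a simulated unit-root series; adfuller estimates
# dy_t = (const) + pi*y_{t-1} + c_1*dy_{t-1} + c_2*dy_{t-2} + eps_t
# and returns the t-ratio on pi together with DF-type critical values.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(10)
y = np.cumsum(rng.normal(size=500))            # unit-root DGP

res = adfuller(y, maxlag=2, regression="c", autolag=None)
adf_stat, pvalue, usedlag, nobs, crit = res[:5]
print(adf_stat, pvalue, crit)
# With a true unit root, the statistic typically lies above the 5% critical
# value, so the null of a unit root is not rejected.
```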
