Time Domain Models


1. Time Domain Models
• Box & Jenkins popularized an approach to time series analysis based on Auto-Regressive, Integrated, Moving Average (ARIMA) models.

2. Autoregressive Models
• Autoregressive model of order p (AR(p)):
  x_t = φ_1 x_{t−1} + φ_2 x_{t−2} + · · · + φ_p x_{t−p} + w_t,
  where:
  – x_t is stationary with mean 0;
  – φ_1, φ_2, . . . , φ_p are constants with φ_p ≠ 0;
  – w_t is uncorrelated with x_{t−j}, j = 1, 2, . . .
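The AR(p) recursion above is easy to simulate directly. A minimal sketch in Python (the slides use R); the helper name simulate_ar, its defaults, and the burn-in length are my own choices, not from the slides:

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, burn=500, seed=0):
    """Simulate n values of x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + w_t,
    w_t ~ N(0, sigma^2).  A burn-in period is discarded so the start-up
    transient (discussed on later slides) has died out."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    total = burn + n + p
    w = rng.normal(0.0, sigma, size=total)
    x = np.zeros(total)
    for t in range(p, total):
        # x_t depends on the previous p values, most recent first
        x[t] = np.dot(phi, x[t - p:t][::-1]) + w[t]
    return x[-n:]

x = simulate_ar([0.5, 0.2], n=1000)
print(x.mean())  # near 0: the model has mean zero
```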

3. • To model a series with non-zero mean µ:
  (x_t − µ) = φ_1 (x_{t−1} − µ) + φ_2 (x_{t−2} − µ) + · · · + φ_p (x_{t−p} − µ) + w_t,
  or
  x_t = α + φ_1 x_{t−1} + φ_2 x_{t−2} + · · · + φ_p x_{t−p} + w_t,
  where α = µ (1 − φ_1 − φ_2 − · · · − φ_p).
• Note that the intercept α is not µ.
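A quick numeric check of the intercept relation; the values µ = 10 and φ = (0.5, 0.3) are illustrative, not from the slides:

```python
# alpha = mu * (1 - phi_1 - ... - phi_p): the intercept is not the mean.
mu = 10.0
phi = [0.5, 0.3]
alpha = mu * (1 - sum(phi))
print(alpha)  # close to 2.0, much smaller than mu = 10
```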

4. • Note also that
  w_t = x_t − (φ_1 x_{t−1} + φ_2 x_{t−2} + · · · + φ_p x_{t−p})
  and is therefore also stationary.
• Furthermore, for k > 0,
  w_{t−k} = x_{t−k} − (φ_1 x_{t−k−1} + φ_2 x_{t−k−2} + · · · + φ_p x_{t−k−p})
  and w_t is uncorrelated with all terms on the right hand side.
• So w_t is uncorrelated with w_{t−k}.
• That is, {w_t} is white noise.

5. The Autoregressive Operator
• Use the backshift operator:
  x_t = φ_1 B x_t + φ_2 B^2 x_t + · · · + φ_p B^p x_t + w_t,
  or
  (1 − φ_1 B − φ_2 B^2 − · · · − φ_p B^p) x_t = w_t.
• The autoregressive operator is
  φ(B) = 1 − φ_1 B − φ_2 B^2 − · · · − φ_p B^p.
• In operator form, the model equation is φ(B) x_t = w_t.
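The polynomial φ(z) can be checked numerically. A standard fact (implicit in the later slides, where AR(1) needs |φ| < 1) is that the process is causal and stationary when all roots of φ(z) lie outside the unit circle; the coefficients below are illustrative:

```python
import numpy as np

phi = [0.5, 0.2]                       # phi_1, phi_2
coeffs = [1.0] + [-c for c in phi]     # phi(z) coefficients, ascending powers
roots = np.polynomial.polynomial.polyroots(coeffs)
# All roots outside the unit circle -> causal (stationary) AR(2)
print(all(abs(r) > 1 for r in roots))
```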

6. Example: AR(1)
• For the first-order model: x_t = φ x_{t−1} + w_t.
• Also x_{t−1} = φ x_{t−2} + w_{t−1}, so
  x_t = φ (φ x_{t−2} + w_{t−1}) + w_t = φ^2 x_{t−2} + φ w_{t−1} + w_t.

7. • Now use x_{t−2} = φ x_{t−3} + w_{t−2}, so
  x_t = φ^2 (φ x_{t−3} + w_{t−2}) + φ w_{t−1} + w_t = φ^3 x_{t−3} + φ^2 w_{t−2} + φ w_{t−1} + w_t.
• Continuing:
  x_t = φ^k x_{t−k} + Σ_{j=0}^{k−1} φ^j w_{t−j}.

8. • We have shown:
  x_t = φ^k x_{t−k} + Σ_{j=0}^{k−1} φ^j w_{t−j}.
• Since x_t is stationary, if |φ| < 1 then φ^k x_{t−k} → 0 as k → ∞, so
  x_t = Σ_{j=0}^{∞} φ^j w_{t−j},
• an infinite moving average, or linear process.
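The finite version of this identity can be verified directly: with x_0 = 0, the recursion and the moving-average sum give exactly the same series. A small Python check (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.8, 200
w = rng.normal(size=n)

# Recursion x_t = phi x_{t-1} + w_t, started from x_0 = 0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t]

# Finite moving-average form: x_t = sum_{j=0}^{t-1} phi^j w_{t-j}
t = n - 1
ma = sum(phi**j * w[t - j] for j in range(t))
print(abs(x[t] - ma))  # essentially zero: the two forms agree
```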

9. Moments
• Mean: E(x_t) = 0.
• Autocovariances: for h ≥ 0,
  γ(h) = cov(x_{t+h}, x_t)
       = E[(Σ_j φ^j w_{t+h−j})(Σ_k φ^k w_{t−k})]
       = σ_w^2 φ^h / (1 − φ^2).

10. • Autocorrelations: for h ≥ 0,
  ρ(h) = γ(h) / γ(0) = φ^h.
• Note that ρ(h) = φ ρ(h − 1), h = 1, 2, . . .
• Compare with the original equation x_t = φ x_{t−1} + w_t.
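The geometric decay ρ(h) = φ^h shows up clearly in sample autocorrelations of a long simulated AR(1) path. A sketch in Python (the slides use R); the sample_acf helper is my own:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.7, 200_000
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t]
x = x[1000:]  # drop a burn-in so the series is effectively stationary

def sample_acf(x, h):
    # sample autocorrelation at lag h >= 1
    xc = x - x.mean()
    return float(np.dot(xc[h:], xc[:-h]) / np.dot(xc, xc))

for h in (1, 2, 3):
    # sample ACF vs. the theoretical value phi^h
    print(h, round(sample_acf(x, h), 3), round(phi**h, 3))
```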

11. Simulations
• plot(arima.sim(model = list(ar = .9), n = 100))

12. Causality
• What if |φ| > 1? Rewrite x_t = φ x_{t−1} + w_t as
  x_t = φ^{−1} x_{t+1} − φ^{−1} w_{t+1}.
• Now
  x_t = −Σ_{j=1}^{∞} φ^{−j} w_{t+j},
  a sum of future noise terms. This process is said to be not causal. If |φ| < 1 the process is causal.

13. The Autoregressive Operator Again
• Compare the original equation:
  x_t = φ x_{t−1} + w_t ⇐⇒ (1 − φB) x_t = w_t ⇐⇒ x_t = (1 − φB)^{−1} w_t,
• with the (infinite) moving average representation:
  x_t = Σ_{j=0}^{∞} φ^j w_{t−j} = (Σ_{j=0}^{∞} φ^j B^j) w_t.

14. • So
  (1 − φB)^{−1} = Σ_{j=0}^{∞} φ^j B^j.
• Compare with
  (1 − φz)^{−1} = 1 / (1 − φz) = Σ_{j=0}^{∞} (φz)^j = Σ_{j=0}^{∞} φ^j z^j,
  valid for |z| < 1 (because |φ| < 1).
• We can manipulate expressions in B as if it were a complex number z with |z| < 1.
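The geometric-series identity behind this operator manipulation is easy to confirm numerically; φ and z below are illustrative values with |φz| < 1:

```python
# (1 - phi z)^{-1} = sum_j (phi z)^j, checked with a 200-term truncation.
phi, z = 0.6, 0.8
partial = sum((phi * z) ** j for j in range(200))
closed = 1 / (1 - phi * z)
print(abs(partial - closed))  # essentially zero
```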

15. Stationary versus Transient
• E.g. AR(1): the stationary version, when |φ| < 1, is
  x_t = Σ_{j=0}^{∞} φ^j w_{t−j}.
• But suppose we want to simulate, using
  x_t = φ x_{t−1} + w_t, t = 1, 2, . . .
• What about x_0?

16. • One possibility: let x_0 = 0.
• Then x_1 = w_1, x_2 = w_2 + φ w_1, and generally
  x_t = Σ_{j=0}^{t−1} φ^j w_{t−j}.
• This means that
  var(x_t) = σ_w^2 (1 + φ^2 + φ^4 + · · · + φ^{2(t−1)}).
• var(x_t) depends on t ⇒ this version is not stationary.
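This time-dependent variance is visible in a Monte Carlo experiment: simulate many paths from x_0 = 0 and compare the sample variance at each t with the formula above. A sketch (σ_w = 1, values illustrative):

```python
import numpy as np

# Many independent AR(1) paths, all started at x_0 = 0.
rng = np.random.default_rng(3)
phi, reps, T = 0.9, 100_000, 5
w = rng.normal(size=(reps, T + 1))
x = np.zeros((reps, T + 1))
for t in range(1, T + 1):
    x[:, t] = phi * x[:, t - 1] + w[:, t]

for t in (1, 3, 5):
    # theory: var(x_t) = 1 + phi^2 + ... + phi^{2(t-1)}
    theory = sum(phi ** (2 * j) for j in range(t))
    print(t, round(float(x[:, t].var()), 3), round(theory, 3))
```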

17. • But, if |φ| < 1, then for large t,
  var(x_t) ≈ σ_w^2 (1 + φ^2 + φ^4 + · · ·) = σ_w^2 / (1 − φ^2).
• Also, under the same conditions (more work!),
  cov(x_{t+h}, x_t) ≈ σ_w^2 φ^{|h|} / (1 − φ^2).
• This version is called asymptotically stationary.
• The non-stationarity is only for small t, and is called transient. Simulations use a burn-in or spin-up period: discard the first few simulated values.

18. • But note: in the stationary version, x_0 ∼ N(0, σ_w^2 / (1 − φ^2)).
• If we simulate x_0 from this distribution, and for t > 0 use
  x_t = φ x_{t−1} + w_t, t = 1, 2, . . . ,
  then the result is exactly stationary.
• That is, we can use a simulation with no spin-up.
• This is harder for AR(p) when p > 1, so most simulators use a spin-up period.
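The exact-stationary start can be checked the same way: draw x_0 from N(0, σ_w^2 / (1 − φ^2)) and the sample variance should match the stationary variance at every t, with no spin-up. A sketch (σ_w = 1, values illustrative):

```python
import numpy as np

# x_0 drawn from the stationary distribution -> exactly stationary path.
rng = np.random.default_rng(4)
phi, reps, T = 0.9, 100_000, 5
stat_var = 1.0 / (1 - phi**2)          # sigma_w = 1

x = np.empty((reps, T + 1))
x[:, 0] = rng.normal(0.0, np.sqrt(stat_var), size=reps)
for t in range(1, T + 1):
    x[:, t] = phi * x[:, t - 1] + rng.normal(size=reps)

# var(x_t) matches the stationary variance at every t, including t = 0
print([round(float(x[:, t].var()), 2) for t in range(T + 1)])
```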
