Time Domain Models (PowerPoint PPT Presentation)
SLIDE 1

Time Domain Models

  • Box & Jenkins popularized an approach to time series analysis based on
    – Auto-Regressive
    – Integrated
    – Moving Average
  (ARIMA) models.

SLIDE 2

Autoregressive Models

  • Autoregressive model of order p (AR(p)):

x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + · · · + φ_p x_{t-p} + w_t,

where:
    – x_t is stationary with mean 0;
    – φ_1, φ_2, . . . , φ_p are constants with φ_p ≠ 0;
    – w_t is uncorrelated with x_{t-j}, j = 1, 2, . . .
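The definition above can be checked numerically. A quick numpy sketch (Python, not part of the slides; the coefficients, sample size, and helper name `simulate_ar` are illustrative choices) simulates an AR(2) by direct recursion and confirms the sample mean is near 0:

```python
import numpy as np

def simulate_ar(phi, n, burn=500, sigma=1.0, seed=0):
    """Simulate x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + w_t by recursion."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    w = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        # x[t-p:t][::-1] is (x_{t-1}, ..., x_{t-p}), matching (phi_1, ..., phi_p)
        x[t] = np.dot(phi, x[t - p:t][::-1]) + w[t]
    return x[burn:]  # discard the start-up transient

x = simulate_ar([0.5, 0.2], n=20000)
print(round(float(np.mean(x)), 3))  # sample mean should be near 0
```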

SLIDE 3
  • To model a series with non-zero mean µ:

(x_t − µ) = φ_1 (x_{t-1} − µ) + φ_2 (x_{t-2} − µ) + · · · + φ_p (x_{t-p} − µ) + w_t,

or

x_t = α + φ_1 x_{t-1} + φ_2 x_{t-2} + · · · + φ_p x_{t-p} + w_t,

where α = µ (1 − φ_1 − φ_2 − · · · − φ_p).

  • Note that the intercept α is not µ.
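The relation between α and µ is easy to verify by simulation. A minimal numpy sketch (Python, not part of the slides; µ and φ are arbitrary illustrative values) simulates an AR(1) with intercept α = µ(1 − φ) and checks that the sample mean is close to µ, not α:

```python
import numpy as np

mu, phi = 10.0, 0.6
alpha = mu * (1 - phi)           # intercept implied by the slide's formula

rng = np.random.default_rng(1)
n, burn = 50000, 500
w = rng.normal(size=n + burn)
x = np.zeros(n + burn)
x[0] = mu                        # start at the mean to shorten the transient
for t in range(1, n + burn):
    x[t] = alpha + phi * x[t - 1] + w[t]
x = x[burn:]

print(round(float(x.mean()), 2))  # close to mu = 10, not alpha = 4
```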

SLIDE 4
  • Note also that

w_t = x_t − (φ_1 x_{t-1} + φ_2 x_{t-2} + · · · + φ_p x_{t-p})

and is therefore also stationary.
  • Furthermore, for k > 0,

w_{t-k} = x_{t-k} − (φ_1 x_{t-k-1} + φ_2 x_{t-k-2} + · · · + φ_p x_{t-k-p}),

and w_t is uncorrelated with all terms on the right hand side.
  • So w_t is uncorrelated with w_{t-k}. That is, {w_t} is white noise.
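The whiteness of the recovered noise can be checked empirically. A numpy sketch (Python, not part of the slides; φ and the sample size are illustrative): simulate an AR(1), recover w_t = x_t − φ x_{t-1}, and confirm its lag-1 sample autocorrelation is near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n, burn = 0.8, 20000, 500
w = rng.normal(size=n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + w[t]
x = x[burn:]

# Recover the noise and check it is serially uncorrelated (white).
w_hat = x[1:] - phi * x[:-1]
r1 = np.corrcoef(w_hat[1:], w_hat[:-1])[0, 1]  # lag-1 sample autocorrelation
print(round(float(r1), 3))  # near 0 for white noise
```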

SLIDE 5

The Autoregressive Operator

  • Use the backshift operator:

x_t = φ_1 B x_t + φ_2 B² x_t + · · · + φ_p B^p x_t + w_t,

or

(1 − φ_1 B − φ_2 B² − · · · − φ_p B^p) x_t = w_t.

  • The autoregressive operator is

φ(B) = 1 − φ_1 B − φ_2 B² − · · · − φ_p B^p.

  • In operator form, the model equation is φ(B) x_t = w_t.

SLIDE 6

Example: AR(1)

  • For the first-order model:

x_t = φ x_{t-1} + w_t.

  • Also

x_{t-1} = φ x_{t-2} + w_{t-1},

so

x_t = φ (φ x_{t-2} + w_{t-1}) + w_t = φ² x_{t-2} + φ w_{t-1} + w_t.
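The two-step substitution is an exact identity, which a short numpy check (Python, not part of the slides; φ and the index t are illustrative) confirms on a simulated path:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = 0.7
w = rng.normal(size=200)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = phi * x[t - 1] + w[t]

# Check the two-step substitution x_t = phi^2 x_{t-2} + phi w_{t-1} + w_t.
t = 100
lhs = x[t]
rhs = phi**2 * x[t - 2] + phi * w[t - 1] + w[t]
print(abs(lhs - rhs) < 1e-12)  # True: the identity is exact (up to rounding)
```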

SLIDE 7
  • Now use

x_{t-2} = φ x_{t-3} + w_{t-2},

so

x_t = φ² (φ x_{t-3} + w_{t-2}) + φ w_{t-1} + w_t = φ³ x_{t-3} + φ² w_{t-2} + φ w_{t-1} + w_t.

  • Continuing:

x_t = φ^k x_{t-k} + Σ_{j=0}^{k-1} φ^j w_{t-j}.

SLIDE 8
  • We have shown:

x_t = φ^k x_{t-k} + Σ_{j=0}^{k-1} φ^j w_{t-j}.

  • Since x_t is stationary, if |φ| < 1 then φ^k x_{t-k} → 0 as k → ∞, so

x_t = Σ_{j=0}^{∞} φ^j w_{t-j},

an infinite moving average, or linear process.
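Both the k-step identity and the vanishing remainder can be seen numerically. A numpy sketch (Python, not part of the slides; φ, t, and k are illustrative): the truncated moving-average sum plus the remainder φ^k x_{t-k} reproduces x_t exactly, and for large k the remainder is negligible:

```python
import numpy as np

rng = np.random.default_rng(4)
phi = 0.9
w = rng.normal(size=1000)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = phi * x[t - 1] + w[t]

# k-step back-substitution: x_t = phi^k x_{t-k} + sum_{j=0}^{k-1} phi^j w_{t-j}
t, k = 999, 200
approx = sum(phi**j * w[t - j] for j in range(k))
residual = phi**k * x[t - k]
print(abs(x[t] - (approx + residual)) < 1e-9)   # True: exact identity
print(abs(residual) < 1e-6)                     # True: phi^k x_{t-k} is tiny
```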

SLIDE 9

Moments

  • Mean: E(x_t) = 0.
  • Autocovariances: for h ≥ 0,

γ(h) = cov(x_{t+h}, x_t) = E[(Σ_j φ^j w_{t+h-j})(Σ_k φ^k w_{t-k})] = σ_w² φ^h / (1 − φ²).
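The closed-form autocovariance can be compared against sample estimates. A numpy sketch (Python, not part of the slides; φ, σ, and the sample size are illustrative, and `sample_gamma` is a hypothetical helper):

```python
import numpy as np

rng = np.random.default_rng(5)
phi, sigma, n, burn = 0.6, 1.0, 200000, 500
w = rng.normal(0.0, sigma, size=n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + w[t]
x = x[burn:]

def sample_gamma(x, h):
    """Sample autocovariance at lag h >= 0."""
    if h == 0:
        return float(np.var(x))
    return float(np.mean((x[h:] - x.mean()) * (x[:-h] - x.mean())))

for h in range(4):
    theory = sigma**2 * phi**h / (1 - phi**2)
    print(h, round(sample_gamma(x, h), 3), round(theory, 3))  # estimates track theory
```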

SLIDE 10
  • Autocorrelations: for h ≥ 0,

ρ(h) = γ(h) / γ(0) = φ^h.

  • Note that

ρ(h) = φρ(h − 1), h = 1, 2, . . .

  • Compare with the original equation

xt = φxt−1 + wt.

SLIDE 11

Simulations

  • plot(arima.sim(model = list(ar = .9), 100))
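The R call above uses arima.sim, which includes a burn-in internally. A rough numpy analogue (Python, not part of the slides; the function name `ar1_sim` and its defaults are my own sketch, not a standard API):

```python
import numpy as np

def ar1_sim(phi, n, sigma=1.0, burn=500, seed=0):
    """Rough numpy analogue of R's arima.sim(model = list(ar = phi), n)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + w[t]
    return x[burn:]  # the burn-in discards the start-up transient

x = ar1_sim(0.9, 100)
print(len(x))  # 100 values, ready to plot with any plotting library
```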

SLIDE 12

Causality

  • What if |φ| > 1? Rewrite

x_t = φ x_{t-1} + w_t

as

x_t = φ^{-1} x_{t+1} − φ^{-1} w_{t+1}.

  • Iterating forward gives

x_t = − Σ_{j=1}^{∞} φ^{-j} w_{t+j},

a sum of future noise terms. This process is said to be not causal. If |φ| < 1 the process is causal.
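That the forward sum really solves the AR(1) equation can be checked numerically. A numpy sketch (Python, not part of the slides; φ = 2 and the truncation point K are illustrative): build x_t from future noise only and verify it satisfies x_t = φ x_{t-1} + w_t:

```python
import numpy as np

# With |phi| > 1, the stationary solution uses *future* noise:
# x_t = -sum_{j>=1} phi^{-j} w_{t+j}.  Check it satisfies x_t = phi x_{t-1} + w_t.
rng = np.random.default_rng(6)
phi, K = 2.0, 60                      # truncate the infinite sum at K terms
w = rng.normal(size=300)

def x_at(t):
    return -sum(phi**(-j) * w[t + j] for j in range(1, K + 1))

t = 100
lhs = x_at(t)
rhs = phi * x_at(t - 1) + w[t]
print(abs(lhs - rhs) < 1e-9)  # True, up to the tiny truncation error phi^-K
```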

SLIDE 13

The Autoregressive Operator Again

  • Compare the original equation:

x_t = φ x_{t-1} + w_t ⇔ (1 − φB) x_t = w_t ⇔ x_t = (1 − φB)^{-1} w_t

  • with the (infinite) moving average representation:

x_t = Σ_{j=0}^{∞} φ^j w_{t-j} = (Σ_{j=0}^{∞} φ^j B^j) w_t.

SLIDE 14
  • So

(1 − φB)^{-1} = Σ_{j=0}^{∞} φ^j B^j.

  • Compare with

(1 − φz)^{-1} = 1 / (1 − φz) = Σ_{j=0}^{∞} (φz)^j = Σ_{j=0}^{∞} φ^j z^j,

valid for |z| < 1 (because |φ| < 1).

  • We can manipulate expressions in B as if it were a complex number z with |z| < 1.
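Multiplying power series corresponds to convolving coefficient sequences, so the inverse relation can be verified directly. A numpy sketch (Python, not part of the slides; φ and the truncation order J are illustrative): convolving the coefficients of 1 − φB with those of the truncated series Σ φ^j B^j gives 1 plus a single small tail term:

```python
import numpy as np

phi = 0.5
J = 30                                    # truncation order (illustrative)
psi = phi ** np.arange(J + 1)             # coefficients of sum_j phi^j B^j
ar_poly = np.array([1.0, -phi])           # coefficients of 1 - phi B

# Polynomial multiplication = convolution of coefficient sequences.
product = np.convolve(ar_poly, psi)       # ~ [1, 0, ..., 0, -phi^(J+1)]
print(round(float(product[0]), 12), round(float(np.abs(product[1:-1]).max()), 12))
# → 1.0 0.0
```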

SLIDE 15

Stationary versus Transient

  • E.g. AR(1): stationary version, when |φ| < 1:

x_t = Σ_{j=0}^{∞} φ^j w_{t-j}.

  • But suppose we want to simulate, using

x_t = φ x_{t-1} + w_t, t = 1, 2, . . .

  • What about x_0?

SLIDE 16
  • One possibility: let x_0 = 0.
  • Then x_1 = w_1, x_2 = w_2 + φ w_1, and generally

x_t = Σ_{j=0}^{t-1} φ^j w_{t-j}.

  • This means that

var(x_t) = σ_w² (1 + φ² + φ⁴ + · · · + φ^{2(t-1)}).

  • var(x_t) depends on t ⇒ this version is not stationary.
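The t-dependence of the variance shows up clearly in a Monte Carlo check. A numpy sketch (Python, not part of the slides; φ, the number of replicates, and the horizon are illustrative): simulate many replicates started at x_0 = 0 and compare var(x_t) with σ_w² Σ_{j=0}^{t-1} φ^{2j}:

```python
import numpy as np

rng = np.random.default_rng(7)
phi, reps, T = 0.8, 100000, 6
w = rng.normal(size=(reps, T + 1))
x = np.zeros((reps, T + 1))       # x[:, 0] = 0: every replicate starts at zero
for t in range(1, T + 1):
    x[:, t] = phi * x[:, t - 1] + w[:, t]

for t in (1, 3, 6):
    theory = sum(phi ** (2 * j) for j in range(t))   # sigma_w^2 = 1 here
    print(t, round(float(x[:, t].var()), 3), round(theory, 3))  # grows with t
```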

SLIDE 17
  • But, if |φ| < 1, then for large t,

var(x_t) ≈ σ_w² (1 + φ² + φ⁴ + · · ·) = σ_w² / (1 − φ²).

  • Also, under the same conditions (more work!),

cov(x_{t+h}, x_t) ≈ σ_w² φ^{|h|} / (1 − φ²).

  • This version is called asymptotically stationary.
  • The non-stationarity is only for small t, and is called transient. Simulations use a burn-in or spin-up period: discard the first few simulated values.

SLIDE 18
  • But note: in the stationary version, x_0 ∼ N(0, σ_w² / (1 − φ²)).
  • If we simulate x_0 from this distribution, and for t > 0 use

x_t = φ x_{t-1} + w_t, t = 1, 2, . . . ,

then the result is exactly stationary.
  • That is, we can use a simulation with no spin-up.
  • This is harder for AR(p) when p > 1, so most simulators use a spin-up period.
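Drawing x_0 from the stationary distribution does remove the transient, as a Monte Carlo check confirms. A numpy sketch (Python, not part of the slides; φ, the replicate count, and the horizon are illustrative): with x_0 ∼ N(0, σ_w²/(1 − φ²)), the empirical var(x_t) stays flat across t instead of growing:

```python
import numpy as np

rng = np.random.default_rng(8)
phi, reps, T = 0.8, 100000, 6
stat_var = 1.0 / (1 - phi**2)          # sigma_w^2 / (1 - phi^2) with sigma_w = 1

x = np.zeros((reps, T + 1))
x[:, 0] = rng.normal(0.0, np.sqrt(stat_var), size=reps)  # stationary start
w = rng.normal(size=(reps, T + 1))
for t in range(1, T + 1):
    x[:, t] = phi * x[:, t - 1] + w[:, t]

# With the stationary start, var(x_t) no longer depends on t.
print([round(float(x[:, t].var()), 2) for t in (0, 3, 6)])  # all near 2.78
```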
