ARMA Autocorrelation Functions
slide-1
SLIDE 1

ARMA Autocorrelation Functions

  • For a moving average process, MA(q):

xt = wt + θ1wt−1 + θ2wt−2 + · · · + θqwt−q.

  • So (with θ0 = 1)

γ(h) = cov(xt+h, xt) = E[(∑_{j=0}^{q} θj wt+h−j)(∑_{k=0}^{q} θk wt−k)]

     = σw² ∑_{j=0}^{q−h} θj θj+h,  0 ≤ h ≤ q,

     = 0,  h > q.

slide-2
SLIDE 2
  • So the ACF is

ρ(h) = (∑_{j=0}^{q−h} θj θj+h) / (∑_{j=0}^{q} θj²),  0 ≤ h ≤ q,

     = 0,  h > q.

  • Notes:

– In these expressions, θ0 = 1 for convenience.
– γ(q) ≠ 0 but γ(h) = 0 for h > q. This characterizes MA(q).
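The MA(q) formulas above are easy to check numerically. The sketch below (numpy; the helper name `ma_acf` is mine, not from the slides) implements ρ(h) directly from the sums:

```python
import numpy as np

def ma_acf(theta, max_lag):
    """Theoretical ACF of an MA(q) process with coefficients theta = (θ1, ..., θq).

    Implements ρ(h) = Σ_{j=0}^{q−h} θj θj+h / Σ_{j=0}^{q} θj², with θ0 = 1,
    and ρ(h) = 0 for h > q.
    """
    th = np.r_[1.0, np.asarray(theta, dtype=float)]  # prepend θ0 = 1
    q = len(th) - 1
    denom = np.sum(th ** 2)
    rho = np.zeros(max_lag + 1)
    for h in range(max_lag + 1):
        if h <= q:
            rho[h] = np.sum(th[: q - h + 1] * th[h:]) / denom
    return rho

# MA(1) with θ1 = 0.5: ρ(1) = 0.5 / 1.25 = 0.4, and ρ(h) = 0 for h > 1
print(ma_acf([0.5], 3))  # → [1.   0.4  0.   0. ]
```

The cut-off after lag q is exactly the identification signature discussed on the later slides.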

slide-3
SLIDE 3
  • For an autoregressive process, AR(p):

xt = φ1xt−1 + φ2xt−2 + · · · + φpxt−p + wt.

  • So

γ(h) = cov(xt+h, xt) = E[(∑_{j=1}^{p} φj xt+h−j + wt+h) xt]

     = ∑_{j=1}^{p} φj γ(h − j) + cov(wt+h, xt).

slide-4
SLIDE 4
  • Because xt is causal, xt is wt plus a linear combination of wt−1, wt−2, . . . .
  • So

cov(wt+h, xt) = σw²,  h = 0,
              = 0,    h > 0.

  • Hence

γ(h) = ∑_{j=1}^{p} φj γ(h − j),  h > 0,  and  γ(0) = ∑_{j=1}^{p} φj γ(−j) + σw².

slide-5
SLIDE 5
  • If we know the parameters φ1, φ2, . . . , φp and σw², these equations for h = 0 and h = 1, 2, . . . , p form p + 1 linear equations in the p + 1 unknowns γ(0), γ(1), . . . , γ(p).
  • The other autocovariances can then be found recursively from the equation for h > p.
  • Alternatively, if we know (or have estimated) γ(0), γ(1), . . . , γ(p), they form p + 1 linear equations in the p + 1 parameters φ1, φ2, . . . , φp and σw².
  • These are the Yule-Walker equations.
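The p + 1 linear equations can be assembled and solved directly. A minimal sketch (numpy; the function name `ar_autocovariances` is mine), using γ(−j) = γ(j):

```python
import numpy as np

def ar_autocovariances(phi, sigma2, max_lag):
    """Solve the Yule-Walker equations for the autocovariances of a causal AR(p).

    For h = 0, 1, ..., p, γ(h) = Σ_{j=1}^{p} φj γ(h−j) + σ²·1{h=0}, with
    γ(−j) = γ(j), gives p+1 linear equations in γ(0), ..., γ(p); higher lags
    follow from the recursion γ(h) = Σ φj γ(h−j).
    """
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    A = np.zeros((p + 1, p + 1))
    b = np.zeros(p + 1)
    b[0] = sigma2
    for h in range(p + 1):
        A[h, h] += 1.0                      # γ(h) on the left-hand side
        for j in range(1, p + 1):
            A[h, abs(h - j)] -= phi[j - 1]  # −φj γ(|h−j|) moved to the left
    gamma = list(np.linalg.solve(A, b))
    for h in range(p + 1, max_lag + 1):     # recursion for h > p
        gamma.append(sum(phi[j] * gamma[h - j - 1] for j in range(p)))
    return np.array(gamma[: max_lag + 1])

# AR(1) with φ = 0.5, σw² = 1: γ(0) = 1/(1−φ²) = 4/3, γ(1) = φγ(0) = 2/3
print(ar_autocovariances([0.5], 1.0, 3))
```

Reading the same system the other way (known γ's, unknown φ's and σw²) is the Yule-Walker estimation step.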

slide-6
SLIDE 6
  • For the ARMA(p, q) model with p > 0 and q > 0:

xt = φ1xt−1 + φ2xt−2 + · · · + φpxt−p + wt + θ1wt−1 + θ2wt−2 + · · · + θqwt−q,

a generalized set of Yule-Walker equations must be used.

  • The moving average models ARMA(0, q) = MA(q) are the only ones with a closed-form expression for γ(h).
  • For AR(p) and ARMA(p, q) with p > 0, the recursive equation means that for h > max(p, q + 1), γ(h) is a sum of geometrically decaying terms, possibly damped oscillations.

slide-7
SLIDE 7
  • The recursive equation is

γ(h) = ∑_{j=1}^{p} φj γ(h − j),  h > q.

  • What kinds of sequences satisfy an equation like this?

– Try γ(h) = z^{−h} for some constant z.
– The equation becomes

0 = z^{−h} − ∑_{j=1}^{p} φj z^{−(h−j)} = z^{−h} (1 − ∑_{j=1}^{p} φj z^j) = z^{−h} φ(z).

slide-8
SLIDE 8
  • So if φ(z) = 0, then γ(h) = z^{−h} satisfies the equation.
  • Since φ(z) is a polynomial of degree p, there are p solutions, say z1, z2, . . . , zp.
  • So a more general solution is

γ(h) = ∑_{l=1}^{p} cl zl^{−h},  for any constants c1, c2, . . . , cp.

  • If z1, z2, . . . , zp are distinct, this is the most general solution; if some roots are repeated, the general form is a little more complicated.
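This root argument can be verified numerically. A small sketch (numpy; the AR(2) coefficients are illustrative, not from the slides) finds the zeros of φ(z) and checks that each z^{−h} satisfies the recursion:

```python
import numpy as np

# Illustrative AR(2) operator φ(z) = 1 − φ1 z − φ2 z² with φ1 = 1.1, φ2 = −0.3
phi1, phi2 = 1.1, -0.3

# numpy.roots wants coefficients from the highest degree down: −φ2 z² − φ1 z + 1
roots = np.roots([-phi2, -phi1, 1.0])
print("zeros of φ(z):", roots)  # roots are 2 and 5/3, both outside the unit circle

# γ(h) = z^{−h} satisfies γ(h) = φ1 γ(h−1) + φ2 γ(h−2) exactly when φ(z) = 0
for z in roots:
    h = 7
    lhs = z ** (-h)
    rhs = phi1 * z ** (-(h - 1)) + phi2 * z ** (-(h - 2))
    assert abs(lhs - rhs) < 1e-10
```

The decay rates of γ(h) are the reciprocals of these roots (here 1/2 and 3/5), which is why causality (roots outside the unit circle) forces the ACF to die out.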

slide-9
SLIDE 9
  • If all z1, z2, . . . , zp are real, this is a sum of geometrically decaying terms.
  • If any root is complex, its complex conjugate must also be a root, and these two terms may be combined into geometrically decaying sine-cosine terms.
  • The constants c1, c2, . . . , cp are determined by initial conditions; in the ARMA case, these are the Yule-Walker equations.
  • Note that the various rates of decay are determined by the zeros of φ(z), the autoregressive operator, and do not depend on θ(z), the moving average operator.

slide-10
SLIDE 10
  • Example: ARMA(1, 1)

xt = φxt−1 + θwt−1 + wt.

  • The recursion is

γ(h) = φγ(h − 1),  h = 2, 3, . . .

  • So ρ(h) = cφ^h for h = 1, 2, . . . , but in general c ≠ 1.
  • Graphically, the ACF decays geometrically, but with a different value at h = 0.

slide-11
SLIDE 11
[Figure: plot of the theoretical ACF from R’s ARMAacf(ar = 0.9, ma = −0.5, 24) against lag — a geometric decay from ρ(1) onward.]
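The curve in that plot can be reproduced from the standard ARMA(1,1) closed form, ρ(1) = (1+φθ)(φ+θ)/(1+2φθ+θ²) together with ρ(h) = φρ(h−1) for h ≥ 2. A sketch (numpy; the helper name `arma11_acf` is mine):

```python
import numpy as np

def arma11_acf(phi, theta, max_lag):
    """Theoretical ACF of the ARMA(1,1) model xt = φxt−1 + θwt−1 + wt."""
    rho = np.empty(max_lag + 1)
    rho[0] = 1.0
    # standard closed form for lag 1
    rho[1] = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta ** 2)
    for h in range(2, max_lag + 1):
        rho[h] = phi * rho[h - 1]       # the recursion γ(h) = φγ(h−1)
    return rho

# Same parameters as the plotted R call ARMAacf(ar = 0.9, ma = -0.5, 24)
rho = arma11_acf(0.9, -0.5, 24)
print(rho[1])            # ≈ 0.6286: the drop from ρ(0) = 1 reflects the MA part
print(rho[2] / rho[1])   # 0.9: geometric decay at rate φ thereafter
```

Note the decay rate is φ alone; θ only shifts the level c, matching the point that the rates of decay come from the autoregressive operator.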

slide-12
SLIDE 12

The Partial Autocorrelation Function

  • An MA(q) can be identified from its ACF: non-zero up to lag q, and zero afterwards.
  • We need a similar tool for AR(p).
  • The partial autocorrelation function (PACF) fills that role.

slide-13
SLIDE 13
  • Recall: for multivariate random variables X, Y, Z, the partial correlations of X and Y given Z are the correlations of:

– the residuals of X from its regression on Z; and
– the residuals of Y from its regression on Z.

  • Here “regression” means conditional expectation, or best linear prediction, based on population distributions, not a sample calculation.
  • In a time series, the partial autocorrelations are defined as

φh,h = partial correlation of xt+h and xt given xt+h−1, xt+h−2, . . . , xt+1.

slide-14
SLIDE 14
  • For an autoregressive process, AR(p):

xt = φ1xt−1 + φ2xt−2 + · · · + φpxt−p + wt.

  • If h > p, the regression of xt+h on xt+h−1, xt+h−2, . . . , xt+1 is

φ1xt+h−1 + φ2xt+h−2 + · · · + φpxt+h−p.

  • So the residual is just wt+h, which is uncorrelated with xt+h−1, xt+h−2, . . . , xt+1 and xt.

slide-15
SLIDE 15
  • So the partial autocorrelation is zero for h > p:

φh,h = 0,  h > p.

  • We can also show that φp,p = φp, which is non-zero by assumption.
  • So φp,p ≠ 0 but φh,h = 0 for h > p. This characterizes AR(p).
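The cut-off can be seen by computing the PACF from a known ACF via the Durbin-Levinson recursion. A sketch (numpy; `pacf_from_acf` is my name for the helper, not a library function):

```python
import numpy as np

def pacf_from_acf(rho):
    """Partial autocorrelations φ_{h,h} from an ACF via Durbin-Levinson.

    Uses φ_{h,h} = (ρ(h) − Σ_j φ_{h−1,j} ρ(h−j)) / (1 − Σ_j φ_{h−1,j} ρ(j))
    and φ_{h,j} = φ_{h−1,j} − φ_{h,h} φ_{h−1,h−j}.
    """
    n = len(rho) - 1
    pacf = np.zeros(n + 1)
    pacf[0] = 1.0
    pacf[1] = rho[1]
    phi_prev = np.array([rho[1]])           # coefficients φ_{h−1,1..h−1}
    for h in range(2, n + 1):
        num = rho[h] - np.sum(phi_prev * rho[h - 1:0:-1])
        den = 1.0 - np.sum(phi_prev * rho[1:h])
        phi_hh = num / den
        phi_prev = np.r_[phi_prev - phi_hh * phi_prev[::-1], phi_hh]
        pacf[h] = phi_hh
    return pacf

# AR(1) with φ = 0.6 has ρ(h) = 0.6**h; the PACF should cut off after lag 1
print(pacf_from_acf(0.6 ** np.arange(6)))  # → [1.  0.6 0.  0.  0.  0. ]
```

Here φ1,1 = φ1 = 0.6 and every later partial autocorrelation vanishes, mirroring the AR(p) characterization above.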

slide-16
SLIDE 16

The Inverse Autocorrelation Function

  • SAS’s proc arima also shows the Inverse Autocorrelation Function (IACF).
  • The IACF of the ARMA(p, q) model φ(B)xt = θ(B)wt is defined to be the ACF of the inverse (or dual) process

θ(B)xt(inverse) = φ(B)wt.

  • The IACF has the same property as the PACF: AR(p) is characterized by an IACF that is nonzero at lag p but zero for larger lags.

slide-17
SLIDE 17

Summary: Identification of ARMA processes

  • AR(p) is characterized by a PACF or IACF that is:

– nonzero at lag p;
– zero for lags larger than p.

  • MA(q) is characterized by an ACF that is:

– nonzero at lag q;
– zero for lags larger than q.

  • For anything else, try ARMA(p, q) with p > 0 and q > 0.

slide-18
SLIDE 18

For p > 0 and q > 0:

          AR(p)                  MA(q)                  ARMA(p, q)
  ACF     Tails off              Cuts off after lag q   Tails off
  PACF    Cuts off after lag p   Tails off              Tails off
  IACF    Cuts off after lag p   Tails off              Tails off

  • Note: these characteristics are used to guide the initial choice of a model; estimation and model-checking will often lead to a different model.
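The table's AR(p) row can be illustrated with a simulation. A sketch (numpy only; the AR(2) parameters and seed are illustrative): simulate the series, compute the sample ACF, then get each sample φ_{h,h} as the last coefficient of the lag-h Yule-Walker fit.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a causal AR(2): xt = 0.75xt−1 − 0.25xt−2 + wt (illustrative parameters)
phi1, phi2, n, burn = 0.75, -0.25, 20000, 500
w = rng.standard_normal(n + burn)
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + w[t]
x = x[burn:] - x[burn:].mean()

# Sample ACF up to lag 6 — it tails off rather than cutting off
gamma0 = np.mean(x * x)
rho = np.array([1.0] + [np.mean(x[h:] * x[:-h]) / gamma0 for h in range(1, 7)])

def pacf_at(rho, h):
    """φ_{h,h}: last coefficient of the regression of xt+h on the h previous values,
    i.e. the last entry of the lag-h Yule-Walker solution."""
    R = np.array([[rho[abs(i - j)] for j in range(h)] for i in range(h)])
    return np.linalg.solve(R, rho[1:h + 1])[-1]

pacf = [pacf_at(rho, h) for h in range(1, 7)]
print(np.round(pacf, 3))  # ≈ [0.6, −0.25, 0, 0, 0, 0] up to sampling noise
```

The sample PACF is close to φ1/(1−φ2) = 0.6 at lag 1 and φ2 = −0.25 at lag 2, then cuts off, exactly the "Cuts off after lag p" entry in the table.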

slide-19
SLIDE 19

Other ARMA Identification Techniques

  • SAS’s proc arima offers the MINIC option on the identify statement, which produces a table of SBC criteria for various values of p and q.
  • The identify statement has two other options: ESACF and SCAN.
  • Both produce tables in which the pattern of zero and non-zero values characterizes p and q.
  • See Section 3.4.10 in Brocklebank and Dickey.

slide-20
SLIDE 20
options linesize = 80;
ods html file = 'varve3.html';

data varve;
  infile '../data/varve.dat';
  input varve;
  lv = log(varve);
run;

proc arima data = varve;
  title 'Use identify options to identify a good model';
  identify var = lv(1) minic esacf scan;
  estimate q = 1 method = ml;
  estimate q = 2 method = ml;
  estimate p = 1 q = 1 method = ml;

slide-21
SLIDE 21

run;

  • proc arima output