Means — Recall: We model a time series as a collection of random variables (PowerPoint PPT Presentation)



slide-1
SLIDE 1

Means

  • Recall: We model a time series as a collection of random variables: x1, x2, x3, . . . , or more generally {xt, t ∈ T}.

  • The mean function is

µx,t = E(xt) = ∫_{−∞}^{∞} x ft(x) dx

where the expectation is for the given t, across all the possible values of xt. Here ft(·) is the pdf of xt.
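As a minimal numerical sketch of this definition, we can approximate the integral on a grid. The slide does not specify the pdf ft, so the Gaussian choice xt ~ N(0.1t, 1) below is purely an illustrative assumption:

```python
import numpy as np

# Approximate µ_{x,t} = ∫ x f_t(x) dx by a Riemann sum on a wide grid.
# Hypothetical pdf for illustration: x_t ~ N(0.1*t, 1).
def normal_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mean_function(t):
    grid = np.linspace(-50.0, 50.0, 200_001)   # wide grid, fine spacing
    dx = grid[1] - grid[0]
    mu_t = 0.1 * t                             # assumed mean for this sketch
    return float(np.sum(grid * normal_pdf(grid, mu_t)) * dx)

print(round(mean_function(10), 3))             # ≈ 0.1 * 10 = 1.0
```

The grid is wide enough that the truncated tails contribute essentially nothing, so the sum recovers the assumed mean to high accuracy.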


slide-2
SLIDE 2

Example: Moving Average

  • wt is white noise, with E (wt) = 0 for all t
  • the moving average is

vt = (1/3)(wt−1 + wt + wt+1)

  • so

µv,t = E(vt) = (1/3)[E(wt−1) + E(wt) + E(wt+1)] = 0.
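A quick simulation check of this result (a sketch, assuming σw = 1): the sample mean of the moving average should sit near the theoretical mean function µv,t = 0.

```python
import numpy as np

# Simulate Gaussian white noise and the 3-point moving average
# v_t = (w_{t-1} + w_t + w_{t+1}) / 3, then check the sample mean.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=100_000)          # white noise, sigma_w = 1 assumed
v = (w[:-2] + w[1:-1] + w[2:]) / 3.0            # v_t for interior times t

print(abs(v.mean()) < 0.02)                     # sample mean is close to 0
```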


slide-3
SLIDE 3

Moving Average Model with Mean Function

[Figure: simulated moving average vt with its mean function µv,t = 0 overlaid; x-axis Time, y-axis v]

slide-4
SLIDE 4

Example: Random Walk with Drift

  • The random walk with drift δ is

xt = δt + Σ_{j=1}^{t} wj

  • so

µx,t = E(xt) = δt + Σ_{j=1}^{t} E(wj) = δt,

a straight line with slope δ.
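Averaging many simulated paths shows the empirical mean hugging this line. The values of δ and σw below are illustrative choices, not values from the slides:

```python
import numpy as np

# Simulate many independent random-walk-with-drift paths
# x_t = delta*t + w_1 + ... + w_t and compare the average path
# against the mean function µ_{x,t} = delta*t.
rng = np.random.default_rng(1)
delta, n, reps = 0.2, 100, 5_000                # assumed values for the sketch
w = rng.normal(0.0, 1.0, size=(reps, n))        # white noise, one path per row
t = np.arange(1, n + 1)
x = delta * t + np.cumsum(w, axis=1)

gap = np.max(np.abs(x.mean(axis=0) - delta * t))
print(gap < 0.6)                                # empirical mean tracks delta*t
```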


slide-5
SLIDE 5

Random Walk Model with Mean Function

[Figure: random walk with drift xt and its mean function µx,t = δt; x-axis Time, y-axis x]

slide-6
SLIDE 6

Example: Signal Plus Noise

  • The “signal plus noise” model is

xt = 2 cos(2πt/50 + 0.6π) + wt

  • so

µx,t = E(xt) = 2 cos(2πt/50 + 0.6π) + E(wt) = 2 cos(2πt/50 + 0.6π),

the (cosine wave) signal.
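As a simulation sketch (assuming σw = 1), subtracting the mean function — the cosine signal — from a simulated path should leave a noise series centred at zero:

```python
import numpy as np

# Simulate x_t = 2 cos(2*pi*t/50 + 0.6*pi) + w_t and check that
# removing the cosine mean function leaves centred noise.
rng = np.random.default_rng(2)
t = np.arange(1, 501)
signal = 2.0 * np.cos(2.0 * np.pi * t / 50.0 + 0.6 * np.pi)   # µ_{x,t}
x = signal + rng.normal(0.0, 1.0, size=t.size)                # sigma_w = 1 assumed

residual = x - signal
print(abs(residual.mean()) < 0.15)              # residual is centred at zero
```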


slide-7
SLIDE 7

Signal-Plus-Noise Model with Mean Function

[Figure: signal-plus-noise series xt with the cosine mean function overlaid; x-axis Time, y-axis x]

slide-8
SLIDE 8

Covariances

  • The autocovariance function is, for all s and t,

γx(s, t) = E[(xs − µx,s)(xt − µx,t)].

  • Symmetry: γx(s, t) = γx(t, s).
  • Smoothness:

– if a series is smooth, nearby values will be very similar, hence the autocovariance will be large;
– conversely, for a “choppy” series, even nearby values may be nearly uncorrelated.

slide-9
SLIDE 9

Example: White Noise

  • If wt is white noise wn(0, σ²w), then

γw(s, t) = E(wswt) = σ²w if s = t, and 0 if s ≠ t.

  • definitely choppy!
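This can be checked by averaging ws·wt over many independent replications (a sketch assuming σw = 1): the estimate should be σ²w on the diagonal and near zero off it.

```python
import numpy as np

# Estimate gamma_w(s, t) for white noise at two (s, t) pairs by
# averaging w_s * w_t over many replications (rows).
rng = np.random.default_rng(3)
w = rng.normal(0.0, 1.0, size=(50_000, 10))     # sigma_w = 1 assumed

gamma_ss = np.mean(w[:, 4] * w[:, 4])           # s = t: should be near 1
gamma_st = np.mean(w[:, 4] * w[:, 7])           # s != t: should be near 0
print(abs(gamma_ss - 1.0) < 0.05, abs(gamma_st) < 0.05)
```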


slide-10
SLIDE 10

Autocovariances of White Noise

[Figure: autocovariance surface γw(s, t) of white noise; axes s, t, gamma]

slide-11
SLIDE 11

Example: Moving Average

  • The moving average is

vt = (1/3)(wt−1 + wt + wt+1)

  • and E(vt) = 0, so

γv(s, t) = E(vsvt) = (1/9) E[(ws−1 + ws + ws+1)(wt−1 + wt + wt+1)]

         = (3/9)σ²w,  s = t,
           (2/9)σ²w,  s = t ± 1,
           (1/9)σ²w,  s = t ± 2,
           0,         otherwise.
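A simulation sketch of these four cases (assuming σw = 1, so the targets at lags 0, 1, 2, 3 are 3/9, 2/9, 1/9 and 0):

```python
import numpy as np

# Replication-based estimates of gamma_v(s, s+h) for the 3-point
# moving average, compared against the derived values.
rng = np.random.default_rng(4)
w = rng.normal(0.0, 1.0, size=(100_000, 12))
v = (w[:, :-2] + w[:, 1:-1] + w[:, 2:]) / 3.0   # moving average per row

s = 4                                           # a fixed interior time
est = [np.mean(v[:, s] * v[:, s + h]) for h in range(4)]
print(np.allclose(est, [3/9, 2/9, 1/9, 0.0], atol=0.02))
```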


slide-12
SLIDE 12

Autocovariances of Moving Average

[Figure: autocovariance surface γv(s, t) of the moving average; axes s, t, gamma]

slide-13
SLIDE 13

Example: Random Walk

  • The random walk with zero drift is

xt = Σ_{j=1}^{t} wj  and  E(xt) = 0,

  • so

γx(s, t) = E(xsxt) = E[(Σ_{j=1}^{s} wj)(Σ_{j=1}^{t} wj)] = min{s, t} σ²w.
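A replication-based sketch of this formula (assuming σw = 1): the estimated E(xsxt) should land near min(s, t).

```python
import numpy as np

# Estimate gamma_x(s, t) = E(x_s x_t) for the zero-drift random walk
# from many replications and compare with min(s, t) * sigma_w^2.
rng = np.random.default_rng(5)
w = rng.normal(0.0, 1.0, size=(100_000, 30))    # sigma_w = 1 assumed
x = np.cumsum(w, axis=1)                        # x_t = w_1 + ... + w_t

s, t = 10, 25                                   # 1-based times
est = np.mean(x[:, s - 1] * x[:, t - 1])
print(abs(est - min(s, t)) < 0.3)               # target: min{s, t} = 10
```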


slide-14
SLIDE 14

Autocovariances of Random Walk

[Figure: autocovariance surface γx(s, t) of the random walk; axes s, t, gamma]

slide-15
SLIDE 15
  • Notes:

– For the first two models, γx(s, t) depends on s and t only through |s − t|, but for the random walk γx(s, t) depends on s and t separately.
– For the first two models, the variance γx(t, t) is constant, but for the random walk γx(t, t) = tσ²w increases indefinitely as t increases.

slide-16
SLIDE 16

Correlations

  • The autocorrelation function (ACF) is

ρ(s, t) = γ(s, t) / √(γ(s, s) γ(t, t)).
  • Measures the linear predictability of xt given only xs.
  • Like any correlation, −1 ≤ ρ(s, t) ≤ 1.
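A sketch of this formula applied to the random walk, using replication-based moment estimates; with σw = 1 (an assumed value), combining γx(s, t) = min{s, t}σ²w with the definition gives ρ(s, t) = min(s, t)/√(st).

```python
import numpy as np

# Estimate rho(s, t) = gamma(s, t) / sqrt(gamma(s,s) gamma(t,t))
# for a zero-drift random walk from replication-based moments.
rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(0.0, 1.0, size=(50_000, 20)), axis=1)

def rho_hat(s, t):                              # s, t are 1-based times
    g = lambda a, b: np.mean(x[:, a - 1] * x[:, b - 1])
    return g(s, t) / np.sqrt(g(s, s) * g(t, t))

r = rho_hat(5, 16)                              # theory: 5 / sqrt(5 * 16)
print(-1.0 <= r <= 1.0)
```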


slide-17
SLIDE 17

Across Series

  • For a pair of time series xt and yt, the cross covariance function is

γx,y(s, t) = E[(xs − µx,s)(yt − µy,t)].

  • The cross correlation function (CCF) is

ρx,y(s, t) = γx,y(s, t) / √(γx(s, s) γy(t, t)).
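A sketch of the CCF for a hypothetical linked pair yt = xt−1 + noise (this construction is illustrative, not from the slides); both means are zero, so the cross covariance is just E(xsyt).

```python
import numpy as np

# Estimate the CCF for y_t = x_{t-1} + noise from replication-based
# moments; y_4 carries x_3, so rho_{x,y}(3, 4) should be large.
rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, size=(50_000, 10))
noise = 0.5 * rng.normal(0.0, 1.0, size=x.shape)
y = np.roll(x, 1, axis=1) + noise               # column t holds x_{t-1} + noise

s, t = 3, 4
gxy = np.mean(x[:, s] * y[:, t])                # cross covariance (means are 0)
rxy = gxy / np.sqrt(np.mean(x[:, s] ** 2) * np.mean(y[:, t] ** 2))
print(rxy > 0.8)                                # theory: 1 / sqrt(1.25) ≈ 0.894
```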


slide-18
SLIDE 18

Stationary Time Series

  • Basic idea: the statistical properties of the observations do not change over time.
  • Two specific forms: strong (or strict) stationarity and weak stationarity.
  • A time series xt is strongly stationary if the joint distribution of every collection of values {xt1, xt2, . . . , xtk} is the same as that of the time-shifted values {xt1+h, xt2+h, . . . , xtk+h}, for every dimension k and shift h.
  • Strong stationarity is hard to verify.

slide-19
SLIDE 19

If {xt} is strongly stationary, then for instance:

  • k = 1: the distribution of xt is the same as that of xt+h, for any h;
    – in particular, if we take h = −t, the distribution of xt is the same as that of x0;
    – that is, every xt has the same distribution;

slide-20
SLIDE 20
  • k = 2: the joint (bivariate) distribution of (xs, xt) is the same as that of (xs+h, xt+h), for any h;
    – in particular, if we take h = −t, the joint distribution of (xs, xt) is the same as that of (xs−t, x0);
    – that is, the joint distribution of (xs, xt) depends on s and t only through s − t;
  • and so on...

slide-21
SLIDE 21
  • A time series xt is weakly stationary if:
    – the mean function µt is constant; that is, every xt has the same mean;
    – the autocovariance function γ(s, t) depends on s and t only through their difference |s − t|.
  • Weak stationarity depends only on the first and second moment functions, so is also called second-order stationarity.
  • Strongly stationary (plus finite variance) ⇒ weakly stationary.
  • Weakly stationary ⇏ strongly stationary (unless some other property implies it, like normality of all joint distributions).

slide-22
SLIDE 22

Simplifications

  • If xt is weakly stationary, cov(xt+h, xt) depends on h but not on t, so we write the autocovariances as

γ(h) = cov(xt+h, xt).

  • Similarly corr(xt+h, xt) depends only on h, and can be written

ρ(h) = γ(t + h, t) / √(γ(t + h, t + h) γ(t, t)) = γ(h) / γ(0).
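A practical consequence, sketched below: for a weakly stationary series (here the 3-point moving average, with σw = 1 assumed), γ and ρ depend only on the lag h, so a single long realisation suffices to estimate them; the earlier derivation gives ρ(1) = (2/9)/(3/9) = 2/3.

```python
import numpy as np

# Estimate gamma(h) and rho(h) for the 3-point moving average from
# one long realisation, exploiting weak stationarity.
rng = np.random.default_rng(8)
w = rng.normal(0.0, 1.0, size=200_000)
v = (w[:-2] + w[1:-1] + w[2:]) / 3.0

def gamma_hat(h):                               # the mean of v is 0
    return float(np.mean(v[:len(v) - h] * v[h:]))

rho1 = gamma_hat(1) / gamma_hat(0)
print(abs(rho1 - 2.0 / 3.0) < 0.02)             # theory: rho(1) = 2/3
```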


slide-23
SLIDE 23

Examples

  • White noise is weakly stationary.
  • A moving average is weakly stationary.
  • A random walk is not weakly stationary.
