

SLIDE 1

Lecture 2, Linear Time Series

Erik Lindström

FMS161/MASM18 Financial Statistics


SLIDE 2

Systems with discrete time

◮ Systems:
  ◮ Represented with polynomial or state space models,
    being linear / stable / causal / time-invariant / stationary (?)
  ◮ Difference equation
  ◮ Weight function / impulse response – h(s, t) or h(τ)
  ◮ Transfer function – H(z)
  ◮ Frequency function – H(e^{i2πf})
◮ Processes:
  ◮ Stationarity
  ◮ ARMAX processes

SLIDE 3

Impulse response

A causal, linear, stable system has a well-defined impulse response h. The impulse response is the output of the system when the input is 1 at time zero and zero at all other times. The output for a general input u is given as

  y(t) = Σ_{i=0}^{∞} h(i) u(t − i) = (h ∗ u)(t),

i.e. it is the convolution of the input u and the impulse response h.
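The convolution formula can be checked numerically. This is a minimal NumPy sketch (not from the lecture), using a hypothetical geometric impulse response h(i) = 0.5^i:

```python
import numpy as np

# Hypothetical causal, stable impulse response: h(i) = 0.5**i, truncated at 30 lags
h = 0.5 ** np.arange(30)

# Unit impulse input: u(0) = 1, zero otherwise
u = np.zeros(40)
u[0] = 1.0

# y = (h * u), the convolution of impulse response and input;
# for an impulse input the output reproduces h itself
y = np.convolve(h, u)[:30]
```

For a general input `u`, the same `np.convolve(h, u)` call gives the output y(t) = Σ h(i) u(t − i).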


SLIDE 4

Difference equations

Difference equation representation for an ARX/ARMA structure:

  y_t + a_1 y_{t−1} + · · · + a_p y_{t−p} = u_t + b_1 u_{t−1} + · · · + b_q u_{t−q}

Using the delay operator leads to the transfer function (which may be defined also for a system not following this linear difference equation):

  y_t = H(z) u_t = (1 + b_1 z^{−1} + · · · + b_q z^{−q}) / (1 + a_1 z^{−1} + · · · + a_p z^{−p}) u_t

with (the latter equation with a Z-transform interpretation of the operations)

  H(z) = Σ_τ h(τ) z^{−τ};   Y(z) = H(z) U(z)
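The link between the difference equation and the impulse response can be illustrated by driving the recursion with a unit impulse. A small sketch assuming hypothetical first-order coefficients a1 = 0.7 and b1 = 0.4:

```python
import numpy as np

a = [0.7]   # hypothetical AR coefficient a1
b = [0.4]   # hypothetical MA coefficient b1

def impulse_response(a, b, n=20):
    """h(t) for y_t + sum_i a_i y_{t-i} = u_t + sum_j b_j u_{t-j},
    obtained by driving the difference equation with a unit impulse."""
    u = np.zeros(n)
    u[0] = 1.0
    y = np.zeros(n)
    for t in range(n):
        y[t] = u[t]
        for j, bj in enumerate(b, start=1):   # MA side: + b_j u_{t-j}
            if t - j >= 0:
                y[t] += bj * u[t - j]
        for i, ai in enumerate(a, start=1):   # AR side: - a_i y_{t-i}
            if t - i >= 0:
                y[t] -= ai * y[t - i]
    return y

h = impulse_response(a, b)
```

Here h(0) = 1, h(1) = b1 − a1, and h(k) = −a1 h(k−1) thereafter, matching what expanding H(z) in powers of z^{−1} would give.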


SLIDE 5

Frequency representation

The frequency function is defined from the transfer function as H(e^{i2πf}) = H(f), giving the amplitude and phase shift of an input trigonometric signal, e.g.

  u_k = cos(2πfk)  ⟹  y_k = |H(f)| cos(2πfk + arg H(f)),   |f| ≤ 0.5
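As an illustration (with hypothetical first-order coefficients a1 = 0.7, b1 = 0.4), the gain and phase at a given frequency follow directly from evaluating H on the unit circle:

```python
import numpy as np

a1, b1 = 0.7, 0.4    # hypothetical first-order filter coefficients

def H(f):
    """Frequency function H(e^{i 2 pi f}) of y_t + a1 y_{t-1} = u_t + b1 u_{t-1}."""
    z = np.exp(1j * 2 * np.pi * f)
    return (1 + b1 / z) / (1 + a1 / z)

f = 0.1
gain, phase = np.abs(H(f)), np.angle(H(f))
# an input cos(2*pi*f*k) exits in steady state as gain * cos(2*pi*f*k + phase)
```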


SLIDE 6

Spectrum

If we filter standard white noise, i.e. a sequence of i.i.d. zero-mean random variables with variance one, through a linear system with frequency function H, then we get a signal with spectrum R(f) = |H(f)|². The spectrum at frequency f is the average energy in the output at frequency f. The spectrum is also the Fourier transform of the covariance function γ:

  R(f) = Σ_{k=−∞}^{∞} γ(k) e^{−i2πfk}  {γ(·) symmetric}  = Σ_{k=−∞}^{∞} γ(k) cos(2πfk).
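For an MA(1) filter the two routes to the spectrum, the squared gain |H(f)|² and the Fourier transform of γ, can be compared directly. A sketch with a hypothetical coefficient c = 0.5:

```python
import numpy as np

c = 0.5                                   # hypothetical MA(1) coefficient
f = np.linspace(-0.5, 0.5, 101)

# Route 1: spectrum as squared gain of H(z) = 1 + c z^{-1}
R_filter = np.abs(1 + c * np.exp(-1j * 2 * np.pi * f)) ** 2

# Route 2: Fourier transform of the ACF; for MA(1) gamma(0) = 1 + c^2,
# gamma(+-1) = c, and gamma(k) = 0 otherwise
R_acf = (1 + c**2) + 2 * c * np.cos(2 * np.pi * f)
```

Both routes give R(f) = 1 + c² + 2c cos(2πf), as the identity |1 + c e^{−i2πf}|² = 1 + c² + 2c cos(2πf) predicts.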


SLIDE 7

Inverse filtering in discrete time

AIM: Reconstruct the input (u) from the output (y) signal. Suppose that the system we are going to invert (h) is linear, stable, and time-invariant with an assumed inverse (g). Following the desired signal path,

  u_k = (g ∗ y)(k) = (g ∗ h ∗ u)(k),

so h being invertible is equivalent to the existence of a causal and stable g such that

  (g ∗ h)(k) = δ(k),   i.e.  G(z) H(z) = 1.

NOTE: Causality means that we reconstruct from old values only.
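For an invertible MA(1) filter h = (1, c) with |c| < 1, the inverse g(k) = (−c)^k is causal and stable, and (g ∗ h)(k) reproduces δ(k) up to truncation. A quick numerical check with a hypothetical c = 0.5:

```python
import numpy as np

c = 0.5                           # hypothetical MA(1) coefficient, |c| < 1
h = np.array([1.0, c])            # filter h = (1, c)
g = (-c) ** np.arange(50)         # inverse filter g(k) = (-c)^k, truncated at 50 lags

# (g * h)(k) should reproduce the unit impulse delta(k) up to truncation
conv = np.convolve(g, h)[:50]
```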


SLIDE 8

ARMA(p,q)-filter

Process: y_t + a_1 y_{t−1} + · · · + a_p y_{t−p} = x_t + c_1 x_{t−1} + · · · + c_q x_{t−q}

Transfer function:

  H(z) = (1 + c_1 z^{−1} + · · · + c_q z^{−q}) / (1 + a_1 z^{−1} + · · · + a_p z^{−p})
       = z^{−q}(z^q + c_1 z^{q−1} + · · · + c_q) / [z^{−p}(z^p + a_1 z^{p−1} + · · · + a_p)]

Frequency function: H(e^{i2πf}), |f| ≤ 0.5

Stability: the poles π_i, i.e. the roots of z^p + a_1 z^{p−1} + · · · + a_p = 0, satisfy |π_i| < 1, i = 1, …, p
Invertibility: the zeros η_i, i.e. the roots of z^q + c_1 z^{q−1} + · · · + c_q = 0, satisfy |η_i| < 1, i = 1, …, q
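The stability and invertibility conditions can be checked numerically from the polynomial roots. A sketch for a hypothetical ARMA(1,1) with a1 = 0.7 and c1 = −0.5:

```python
import numpy as np

# Hypothetical ARMA(1,1): y_t + 0.7 y_{t-1} = x_t - 0.5 x_{t-1}
a = [1.0, 0.7]    # coefficients of z^p + a_1 z^{p-1} + ... + a_p
c = [1.0, -0.5]   # coefficients of z^q + c_1 z^{q-1} + ... + c_q

poles = np.roots(a)   # stability: all poles strictly inside the unit circle
zeros = np.roots(c)   # invertibility: all zeros strictly inside the unit circle

stable = bool(np.all(np.abs(poles) < 1))
invertible = bool(np.all(np.abs(zeros) < 1))
```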


SLIDE 9

Auto correlation and cross-correlation

The autocovariance is defined as

  γ(k) = E[Y_t Y_{t+k}]   (1)

and the corresponding autocorrelation function is

  ρ(k) = γ(k) / γ(0).   (2)

The cross-covariance is defined as

  γ_XY(k) = E[X_t Y_{t+k}]   (3)

and the corresponding cross-correlation function is

  ρ_XY(k) = γ_XY(k) / √(γ_X(0) γ_Y(0)).   (4)
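The sample analogues of these definitions are straightforward. A sketch (not from the lecture) that estimates the autocorrelation of a simulated MA(1) series, whose theoretical ρ(1) = c/(1 + c²) = 0.4 for c = 0.5:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation rho_hat(k) = gamma_hat(k) / gamma_hat(0)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    gamma = np.array([np.sum(x[: n - k] * x[k:]) / n for k in range(max_lag + 1)])
    return gamma / gamma[0]

rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = e[1:] + 0.5 * e[:-1]          # MA(1) with c = 0.5: rho(1) = 0.4, rho(k) = 0 for k > 1
rho = sample_acf(x, 3)
```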


SLIDE 10

Auto covariance for ARMA

Consider the ARMA(p,q) process

  Y_t + a_1 Y_{t−1} + · · · + a_p Y_{t−p} = e_t + c_1 e_{t−1} + · · · + c_q e_{t−q}.   (5)

The autocovariance then satisfies

  γ(k) + a_1 γ(k−1) + · · · + a_p γ(k−p) = c_k γ_{eY}(0) + · · · + c_q γ_{eY}(q−k).   (6)

Proof: Multiply by Y_{t−k}, take expectations, and use that E[e_{t−l} Y_{t−k}] = 0 for k > l.
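For AR(1), where the autocovariance γ(k) = (−a1)^k σ²/(1 − a1²) is known in closed form, relation (6) reduces to γ(k) + a1 γ(k−1) = 0 for k ≥ 1 and γ(0) + a1 γ(1) = σ². A quick numeric check with a hypothetical a1 = 0.7:

```python
import numpy as np

a1, sigma2 = 0.7, 1.0                      # hypothetical AR(1): Y_t + a1 Y_{t-1} = e_t
gamma0 = sigma2 / (1 - a1**2)              # stationary variance gamma(0)
gamma = gamma0 * (-a1) ** np.arange(10)    # gamma(k) = (-a1)^k * gamma(0)

# Relation (6) with p = 1, q = 0: right-hand side vanishes for k >= 1 ...
lhs = gamma[1:] + a1 * gamma[:-1]
# ... and equals gamma_eY(0) = sigma^2 for k = 0
k0 = gamma[0] + a1 * gamma[1]
```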


SLIDE 11

Cointegration

◮ It is rather common that financial time series {X(t)} are non-stationary, often integrated.
◮ This means that ∇X(t) is stationary. We then say that X(t) is an integrated process, X(t) ∼ I(1).
◮ Assume that X(t) ∼ I(1) and Y(t) ∼ I(1) but X(t) − βY(t) ∼ I(0). We then say that X(t) and Y(t) are cointegrated.
◮ NOTE that the asymptotic theory for β is non-standard.
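A small simulation illustrates the idea (all numbers hypothetical): a random walk Y, a cointegrated X = βY + stationary noise, and the OLS estimate of β, which converges quickly even though its limit distribution is non-standard:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 2000, 2.0           # hypothetical sample size and coefficient

# Y is a random walk, hence I(1); X = beta*Y + stationary noise,
# so X - beta*Y is I(0): X and Y are cointegrated
Y = np.cumsum(rng.standard_normal(n))
X = beta_true * Y + rng.standard_normal(n)

# OLS estimate of the cointegrating coefficient; converges fast
# ("super-consistent"), but its asymptotic theory is non-standard
beta_hat = np.sum(X * Y) / np.sum(Y * Y)
```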


SLIDE 12

Log-real money and Bond rates 1974-1985

Figure: Log-real money and interest rates, shown in levels and differences (four panels: money levels, money differences, bond rate levels, bond rate differences).


SLIDE 13

Estimation

Two dominant approaches

◮ Optimization-based estimation (LS, WLS, ML, PEM)
◮ Matching properties (GMM, EF, ML, IV)

We focus on the optimization based estimators today.


SLIDE 14

General properties

◮ Denote the true parameter θ_0.
◮ Introduce an estimator θ̂ = T(X_1, …, X_N).
◮ Observation: The estimator is a function of data.

Implications...

◮ Bias: b = E[T(X)] − θ_0.
◮ Consistency: T(X) →^p θ_0 (convergence in probability).
◮ Efficiency: Var(T(X)) ≥ I_N(θ_0)^{−1}.

Note that

  I_N(θ_0)_{ij} = −E[ ∂²/∂θ_i ∂θ_j ℓ(X_1, …, X_N, θ) ]|_{θ=θ_0},

where ℓ is the log-likelihood function.


SLIDE 15

ML methods for Gaussian processes

Say that we have a sample Y = {y_t}, t = 1, 2, …, n, from a Gaussian process. We then have that Y ∈ N(µ(θ), Σ(θ)), where θ is a vector of parameters. The log-likelihood for Y can be written as

  ℓ(Y, θ) = −(1/2) log det(2πΣ(θ)) − (1/2)(Y − µ(θ))^T Σ(θ)^{−1} (Y − µ(θ)).

If we can just calculate the likelihood, we can use any standard optimization routine to maximize ℓ(Y, θ) and thereby estimate θ.
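A minimal sketch of this recipe for a zero-mean Gaussian AR(1), where Σ(θ)_{ij} = φ^{|i−j|}/(1 − φ²) is known in closed form; a crude grid search stands in for the standard optimization routine (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi_true = 200, 0.6

# Simulate a zero-mean Gaussian AR(1): x_t = phi x_{t-1} + e_t, Var(e_t) = 1,
# with x_0 drawn from the stationary distribution
x = np.zeros(n)
x[0] = rng.standard_normal() / np.sqrt(1 - phi_true**2)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

def loglik(phi):
    """Exact Gaussian log-likelihood with mu = 0 and Sigma_ij = phi^|i-j| / (1 - phi^2)."""
    idx = np.arange(n)
    Sigma = phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi**2)
    sign, logdet = np.linalg.slogdet(2 * np.pi * Sigma)
    return -0.5 * logdet - 0.5 * x @ np.linalg.solve(Sigma, x)

# Crude grid search standing in for a numerical optimizer
grid = np.linspace(-0.9, 0.9, 37)
phi_hat = grid[np.argmax([loglik(p) for p in grid])]
```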


SLIDE 16

AR(2)

  Y = [x_3, …, x_N]^T,   X = [x_2 x_1; x_3 x_2; …; x_{N−1} x_{N−2}]

Then θ̂ = (X^T X)^{−1} (X^T Y), with

  X^T X = [Σ x_{i−1}²   Σ x_{i−1} x_{i−2};  Σ x_{i−1} x_{i−2}   Σ x_{i−2}²]

and

  X^T Y = [Σ x_i x_{i−1};  Σ x_i x_{i−2}].

Solve! Explanation on the blackboard.
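The normal-equation solution above is what NumPy's least-squares routine computes. A sketch on simulated AR(2) data with hypothetical regression coefficients 0.5 and −0.3 (in the y_t + a_1 y_{t−1} + a_2 y_{t−2} = e_t convention, these are −a_1 and −a_2):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
# Simulated AR(2): x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t (hypothetical coefficients)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Regression form from the slide: Y = X theta + e, solved by least squares
Y = x[2:]
X = np.column_stack([x[1:-1], x[:-2]])
theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```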


SLIDE 17

An ARMA example

ARMA(1,1) model: x_t + 0.7 x_{t−1} = e_t − 0.5 e_{t−1}, {e_t}_{t=0,1,2,…} i.i.d. N(0, 1).

Figure: realisation (1000 observations), covariance function, and spectrum of the ARMA(1,1) process.


SLIDE 18

Explanation of the approximative method on the blackboard.


SLIDE 19

Comparison, LS2 and MLE

ARMA(1,1) model: x_t + 0.7 x_{t−1} = e_t − 0.5 e_{t−1}, {e_t}_{t=0,1,2,…} i.i.d. N(0, 1).

Figure: normal probability plots of the LS2 and MLE estimates of a_1 and c_1.

1000 estimations using 1000 observations each.
