Lecture 2: Linear Time Series

Erik Lindström, FMS161/MASM18 Financial Statistics



1. Lecture 2, Linear Time Series
Erik Lindström, FMS161/MASM18 Financial Statistics

2. Systems with discrete time
◮ Systems:
  ◮ Represented with polynomial or state space models, the models being linear / stable / causal / time-invariant / stationary (?)
  ◮ Difference equation
  ◮ Weight function / impulse response: $h(s, t)$ or $h(\tau)$
  ◮ Transfer function: $H(z)$
  ◮ Frequency function: $H(e^{i 2\pi f})$
◮ Processes:
  ◮ Stationarity
  ◮ ARMAX processes

3. Impulse response
A causal, linear, stable system has a well-defined impulse response $h$. The impulse response is the output of the system if we let the input be 1 at time zero and zero for the rest of the time. The output for a general input $u$ is given by
$$y(t) = \sum_{i=0}^{\infty} h(i)\, u(t - i) = (h * u)(t),$$
i.e. it is the convolution of the input $u$ and the impulse response $h$.
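
A minimal sketch of the convolution formula in Python (my own illustration, not from the course material); the geometric impulse response $h(i) = 0.8^i$ and the random input are assumptions made purely for the example:

```python
import numpy as np

# Truncated impulse response h(0), ..., h(49) of a stable system (stability
# lets us cut off the infinite sum); h(i) = 0.8**i is an arbitrary choice.
h = 0.8 ** np.arange(50)
u = np.random.default_rng(0).standard_normal(200)  # arbitrary input signal

# np.convolve computes sum_i h(i) u(t - i); keeping only the first len(u)
# samples makes y(t) depend on u(0), ..., u(t) alone, as for a causal system.
y = np.convolve(h, u)[: len(u)]
```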

4. Difference equations
Difference equation representation for the ARX/ARMA structure:
$$y_t + a_1 y_{t-1} + \dots + a_p y_{t-p} = u_t + b_1 u_{t-1} + \dots + b_q u_{t-q}.$$
Using the delay operator leads to the transfer function (which might be defined also for a system not following this linear difference equation)
$$y_t = H(z) u_t = \frac{1 + b_1 z^{-1} + \dots + b_q z^{-q}}{1 + a_1 z^{-1} + \dots + a_p z^{-p}}\, u_t,$$
with (the latter equation having a Z-transform interpretation of the operations)
$$H(z) = \sum_{\tau=0}^{\infty} h(\tau) z^{-\tau}; \qquad Y(z) = H(z) U(z).$$
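
A minimal sketch (my own illustration): scipy.signal.lfilter implements exactly this kind of rational transfer function, taking the numerator and denominator coefficients of $H(z)$ in powers of $z^{-1}$. The coefficient values are arbitrary examples, not from the lecture:

```python
import numpy as np
from scipy.signal import lfilter

b = [1.0, 0.4]   # numerator   1 + b1*z^{-1}  (input side)
a = [1.0, -0.6]  # denominator 1 + a1*z^{-1}  (output side)
u = np.random.default_rng(1).standard_normal(500)

# Solves the difference equation y_t - 0.6*y_{t-1} = u_t + 0.4*u_{t-1}.
y = lfilter(b, a, u)
```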

5. Frequency representation
The frequency function is defined from the transfer function as
$$\mathcal{H}(f) = H(e^{i 2\pi f}),$$
giving an amplitude and phase shift of an input trigonometric signal, e.g.
$$u_k = \cos(2\pi f k) \quad \Rightarrow \quad y_k = |\mathcal{H}(f)| \cos\big(2\pi f k + \arg \mathcal{H}(f)\big), \qquad |f| \le 0.5.$$
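
A minimal sketch (my own illustration): scipy.signal.freqz evaluates $H(e^{i 2\pi f})$ on a frequency grid, from which the gain $|\mathcal{H}(f)|$ and phase $\arg \mathcal{H}(f)$ follow. The filter is the same illustrative one as in the previous sketch:

```python
import numpy as np
from scipy.signal import freqz

b, a = [1.0, 0.4], [1.0, -0.6]
f = np.linspace(0, 0.5, 256)          # frequencies in cycles per sample

# freqz expects angular frequencies (radians/sample) when given an array.
_, H = freqz(b, a, worN=2 * np.pi * f)

gain, phase = np.abs(H), np.angle(H)  # |H(f)| and arg H(f)
```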

6. Spectrum
If we filter standard white noise, i.e. a sequence of i.i.d. zero-mean random variables with variance one, through a linear system with frequency function $\mathcal{H}$, then we get a signal with spectrum
$$R(f) = |\mathcal{H}(f)|^2.$$
The spectrum at frequency $f$ is the average energy in the output at frequency $f$. The spectrum is also the Fourier transform of the covariance function $\gamma$,
$$R(f) = \sum_{k=-\infty}^{\infty} \gamma(k) e^{-i 2\pi f k} \;\overset{\{\gamma(\cdot)\ \mathrm{sym.}\}}{=}\; \sum_{k=-\infty}^{\infty} \gamma(k) \cos(2\pi f k).$$
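
A minimal sketch (my own illustration): the periodogram of filtered white noise should scatter around the theoretical spectrum $R(f) = |\mathcal{H}(f)|^2$; the filter is again the illustrative one from the earlier sketches:

```python
import numpy as np
from scipy.signal import lfilter, freqz, periodogram

rng = np.random.default_rng(2)
b, a = [1.0, 0.4], [1.0, -0.6]
y = lfilter(b, a, rng.standard_normal(4096))  # white noise through the filter

# Two-sided periodogram (so its scaling matches |H(f)|^2 directly).
f, Pxx = periodogram(y, return_onesided=False)
_, H = freqz(b, a, worN=2 * np.pi * f)
R = np.abs(H) ** 2                            # theoretical spectrum
```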

7. Inverse filtering in discrete time
AIM: Reconstruct the input ($u$) from the output ($y$) signal, supposing that the system we are going to invert ($h$) is linear, stable, and time-invariant with an assumed inverse ($g$). Following the desired signal path we get
$$u_k = (g * y)(k) = (g * h * u)(k),$$
and hence $h$ being invertible is equivalent to the existence of a causal and stable $g$ such that
$$(g * h)(k) = \delta(k) \quad \Leftrightarrow \quad G(z) H(z) = 1.$$
NOTE: The causality means that we reconstruct from old values only.
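
A minimal sketch (my own illustration): for a rational $H(z) = B(z)/A(z)$ the inverse is $G(z) = A(z)/B(z)$, i.e. lfilter with the coefficient vectors swapped; this is stable precisely when the zeros of $B$ lie inside the unit circle:

```python
import numpy as np
from scipy.signal import lfilter

b, a = [1.0, 0.4], [1.0, -0.6]  # zero at z = -0.4: invertible
u = np.random.default_rng(3).standard_normal(300)

y = lfilter(b, a, u)       # forward filtering:  y = h * u
u_hat = lfilter(a, b, y)   # inverse filtering:  u_hat = g * y

print(np.max(np.abs(u - u_hat)))  # ~1e-15: the input is recovered
```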

8. ARMA(p,q) filter
Process:
$$y_t + a_1 y_{t-1} + \dots + a_p y_{t-p} = x_t + c_1 x_{t-1} + \dots + c_q x_{t-q}.$$
Transfer function:
$$H(z) = \frac{1 + c_1 z^{-1} + \dots + c_q z^{-q}}{1 + a_1 z^{-1} + \dots + a_p z^{-p}} = \frac{z^{-q}(z^q + c_1 z^{q-1} + \dots + c_q)}{z^{-p}(z^p + a_1 z^{p-1} + \dots + a_p)}.$$
Frequency function: $H(e^{i 2\pi f})$, $|f| \le 0.5$.
Stability: the poles, i.e. the roots $\pi_i$ of $A(z^{-1}) = 0$, satisfy $|\pi_i| < 1$, $i = 1, \dots, p$.
Invertibility: the zeros, i.e. the roots $\eta_i$ of $C(z^{-1}) = 0$, satisfy $|\eta_i| < 1$, $i = 1, \dots, q$.
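
A minimal sketch (my own illustration) of the stability and invertibility checks via polynomial roots, using the coefficients of the ARMA(1,1) example that appears later in the lecture:

```python
import numpy as np

a = [1.0, 0.7]   # monic AR polynomial z + a_1
c = [1.0, -0.5]  # monic MA polynomial z + c_1

poles = np.roots(a)  # [-0.7]
zeros = np.roots(c)  # [ 0.5]

print("stable:    ", np.all(np.abs(poles) < 1))  # True
print("invertible:", np.all(np.abs(zeros) < 1))  # True
```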

9. Autocorrelation and cross-correlation
The autocovariance (of a zero-mean stationary process) is defined as
$$\gamma(k) = E[Y_t Y_{t+k}] \tag{1}$$
with corresponding autocorrelation function
$$\rho(k) = \frac{\gamma(k)}{\gamma(0)}. \tag{2}$$
The cross-covariance is defined as
$$\gamma_{XY}(k) = E[X_t Y_{t+k}] \tag{3}$$
with corresponding cross-correlation function
$$\rho_{XY}(k) = \frac{\gamma_{XY}(k)}{\sqrt{\gamma_X(0)\,\gamma_Y(0)}}. \tag{4}$$
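
A minimal sketch (my own illustration) of the standard biased sample estimator $\hat{\gamma}(k) = \frac{1}{n}\sum_t (y_t - \bar{y})(y_{t+k} - \bar{y})$ and the autocorrelation obtained by normalising with $\hat{\gamma}(0)$:

```python
import numpy as np

def sample_acovf(y, max_lag):
    """Biased sample autocovariance at lags 0, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    n, d = len(y), y - np.mean(y)
    return np.array([np.sum(d[: n - k] * d[k:]) / n
                     for k in range(max_lag + 1)])

y = np.random.default_rng(4).standard_normal(1000)
gamma = sample_acovf(y, 20)
rho = gamma / gamma[0]  # sample autocorrelation function
```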

10. Autocovariance for ARMA
Consider the ARMA(p,q) process
$$Y_t + a_1 Y_{t-1} + \dots + a_p Y_{t-p} = e_t + \dots + c_q e_{t-q}. \tag{5}$$
The autocovariance then satisfies
$$\gamma(k) + a_1 \gamma(k-1) + \dots + a_p \gamma(k-p) = c_k \gamma_{eY}(0) + \dots + c_q \gamma_{eY}(q-k). \tag{6}$$
Proof: multiply by $Y_{t-k}$, take expectations, and use that $E[e_{t-l} Y_{t-k}] = 0$ for $k > l$. In particular, for $k > q$ the right-hand side vanishes, so the autocovariance follows the pure AR recursion.
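
A minimal sketch (my own illustration): statsmodels can compute the theoretical autocovariance of an ARMA model, which lets us check numerically that (6) reduces to $\gamma(k) + a_1 \gamma(k-1) = 0$ for $k > q$ in the ARMA(1,1) example:

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_acovf

a1, c1 = 0.7, -0.5  # x_t + 0.7 x_{t-1} = e_t - 0.5 e_{t-1}
gamma = arma_acovf([1.0, a1], [1.0, c1], nobs=10)

# For k > q = 1 the right-hand side of (6) is zero:
print(gamma[2:] + a1 * gamma[1:-1])  # ~0 up to rounding
```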

11. Cointegration
◮ It is rather common that financial time series $\{X(t)\}$ are non-stationary, often integrated.
◮ This means that $\nabla X(t)$ is stationary. We then say that $X(t)$ is an integrated process, $X(t) \sim I(1)$.
◮ Assume that $X(t) \sim I(1)$ and $Y(t) \sim I(1)$ but $X(t) - \beta Y(t) \sim I(0)$. We then say that $X(t)$ and $Y(t)$ are cointegrated (see the sketch after this list).
◮ NOTE that the asymptotic theory for $\beta$ is non-standard.
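
A minimal sketch (my own illustration, not from the lecture): two simulated $I(1)$ series sharing a common random-walk component are cointegrated, and the Engle-Granger test in statsmodels should reject "no cointegration":

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(5)
trend = np.cumsum(rng.standard_normal(1000))  # common I(1) component
x = trend + rng.standard_normal(1000)         # x ~ I(1)
y = 0.5 * trend + rng.standard_normal(1000)   # y ~ I(1), but x - 2y ~ I(0)

tstat, pvalue, _ = coint(x, y)
print(pvalue)  # small p-value: evidence of cointegration
```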

12. Log-real money and bond rates 1974-1985
[Figure: log-real money (top row) and the bond rate (bottom row), plotted in levels (left column) and first differences (right column), 1975-1985. Caption: Log-real money and interest rates.]

13. Estimation
Two dominant approaches:
◮ Optimization-based estimation (LS, WLS, ML, PEM)
◮ Matching properties (GMM, EF, ML, IV)
We focus on the optimization-based estimators today.

14. General properties
◮ Denote the true parameter $\theta_0$.
◮ Introduce an estimator $\hat{\theta} = T(X_1, \dots, X_N)$.
◮ Observation: the estimator is a function of the data. Implications...
◮ Bias: $b = E[T(X)] - \theta_0$ (see the sketch after this list).
◮ Consistency: $T(X) \overset{p}{\to} \theta_0$.
◮ Efficiency: $\mathrm{Var}(T(X)) \ge I_N(\theta_0)^{-1}$. Note that
$$[I_N(\theta_0)]_{ij} = -E\left[\frac{\partial^2}{\partial \theta_i \partial \theta_j} \ell(X_1, \dots, X_N, \theta)\right]\bigg|_{\theta = \theta_0},$$
where $\ell$ is the log-likelihood function.
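
A minimal sketch (my own illustration) of bias and consistency: the LS estimator of an AR(1) coefficient is biased in small samples, but the bias vanishes as $N$ grows. The true value 0.7 is my own choice:

```python
import numpy as np

rng = np.random.default_rng(6)
theta0 = 0.7  # true AR(1) parameter (example value)

def ls_ar1(n):
    """Simulate an AR(1) path and return the LS estimate of theta0."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta0 * x[t - 1] + rng.standard_normal()
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

for n in (25, 100, 1000):
    est = [ls_ar1(n) for _ in range(2000)]
    print(n, np.mean(est) - theta0)  # Monte Carlo bias, shrinking with n
```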

15. ML methods for Gaussian processes
Say that we have a sample $Y = \{y_t\}$, $t = 1, 2, \dots, n$, from a Gaussian process. We then have that $Y \in N(\mu(\theta), \Sigma(\theta))$, where $\theta$ is a vector of parameters. The log-likelihood for $Y$ can be written as
$$\ell(Y, \theta) = -\frac{1}{2} \log\det\big(2\pi \Sigma(\theta)\big) - \frac{1}{2}(Y - \mu(\theta))^T \Sigma(\theta)^{-1} (Y - \mu(\theta)).$$
If we can just calculate the likelihood, we can use any standard optimization routine to maximize $\ell(Y, \theta)$ and thereby estimate $\theta$.
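
A minimal sketch (my own illustration) of this recipe for a zero-mean AR(1) with unit innovation variance, where $\Sigma(\phi)_{st} = \phi^{|s-t|}/(1-\phi^2)$; the exact likelihood is maximized with a standard scipy optimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n, phi0 = 300, 0.7
x = np.zeros(n + 200)
for t in range(1, n + 200):  # simulate with a burn-in so the sample
    x[t] = phi0 * x[t - 1] + rng.standard_normal()  # is ~stationary
y = x[200:]

def negloglik(phi):
    idx = np.arange(n)
    Sigma = phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi ** 2)
    _, logdet = np.linalg.slogdet(2 * np.pi * Sigma)
    return 0.5 * logdet + 0.5 * y @ np.linalg.solve(Sigma, y)

res = minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded")
print(res.x)  # ML estimate of phi, close to 0.7
```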

16. AR(2)
$$Y = \begin{pmatrix} x_3 \\ \vdots \\ x_N \end{pmatrix}, \qquad X = \begin{pmatrix} x_2 & x_1 \\ \vdots & \vdots \\ x_{N-1} & x_{N-2} \end{pmatrix}.$$
Then $\hat{\theta} = (X^T X)^{-1}(X^T Y)$, where
$$X^T X = \begin{pmatrix} \sum x_{i-1}^2 & \sum x_{i-1} x_{i-2} \\ \sum x_{i-1} x_{i-2} & \sum x_{i-2}^2 \end{pmatrix} \quad \text{and} \quad X^T Y = \begin{pmatrix} \sum x_i x_{i-1} \\ \sum x_i x_{i-2} \end{pmatrix}.$$
Solve! (Explanation on the blackboard.)
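
A minimal sketch (my own illustration): the same least-squares solution computed with np.linalg.lstsq, which solves the normal equations in a numerically safer way. The true coefficients (0.5, -0.3) are example values:

```python
import numpy as np

rng = np.random.default_rng(8)
n, phi = 1000, (0.5, -0.3)  # x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.standard_normal()

Y = x[2:]                               # (x_3, ..., x_N)
X = np.column_stack([x[1:-1], x[:-2]])  # columns (x_{t-1}, x_{t-2})

theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(theta)  # close to (0.5, -0.3)
```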

17. An ARMA example
ARMA(1,1) model:
$$x_t + 0.7 x_{t-1} = e_t - 0.5 e_{t-1}, \qquad \{e_t\}_{t = 0, 1, 2, \dots} \ \text{i.i.d.} \in N(0, 1).$$
[Figure: a realisation of 1000 observations, the covariance function for lags -20 to 20, and the spectrum for $f \in (-0.5, 0.5]$.]
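
A minimal sketch (my own illustration) reproducing the ingredients of the figure with statsmodels; note that ArmaProcess takes the lag polynomials exactly as written on the slide:

```python
from statsmodels.tsa.arima_process import ArmaProcess

# x_t + 0.7 x_{t-1} = e_t - 0.5 e_{t-1}
proc = ArmaProcess([1.0, 0.7], [1.0, -0.5])

x = proc.generate_sample(1000)  # one realisation, as in the top panel
gamma = proc.acovf(21)          # covariance function at lags 0, ..., 20
```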

18. Explanation of the approximative method on the blackboard.

19. Comparison, LS2 and MLE
ARMA(1,1) model:
$$x_t + 0.7 x_{t-1} = e_t - 0.5 e_{t-1}, \qquad \{e_t\}_{t = 0, 1, 2, \dots} \ \text{i.i.d.} \in N(0, 1).$$
[Figure: normal probability plots of the LS2 (top row) and MLE (bottom row) estimates of $a_1$ and $c_1$; 1000 estimations using 1000 observations each.]
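
A minimal sketch (my own illustration) of a smaller Monte Carlo in the spirit of the figure, refitting the model by maximum likelihood on repeated simulations. Note the sign convention: statsmodels' ARIMA reports $\phi = -a_1$ and $\theta = c_1$ in its $x_t = \phi x_{t-1} + e_t + \theta e_{t-1}$ parameterisation:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

proc = ArmaProcess([1.0, 0.7], [1.0, -0.5])

estimates = []
for _ in range(100):  # 100 replications (the figure uses 1000)
    x = proc.generate_sample(1000)
    res = ARIMA(x, order=(1, 0, 1), trend="n").fit()
    estimates.append(res.params[:2])  # (ar.L1, ma.L1)

print(np.mean(estimates, axis=0))  # near (-0.7, -0.5)
```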
