Variance stabilization and simple GARCH models

Variance stabilization and simple GARCH models, Erik Lindström - PowerPoint PPT Presentation



  1. Variance stabilization and simple GARCH models Erik Lindström

  2. Simulation, GBM
     Standard model in math. finance, the GBM:
     $dS_t = \mu S_t \, dt + \sigma S_t \, dW_t$   (1)
     Solution:
     $S_t = S_0 \exp\!\big( (\mu - \tfrac{\sigma^2}{2}) t + \sigma W_t \big)$   (2)
     Problem: Estimate $\tilde{\mu} = \mu - \frac{\sigma^2}{2}$ or $\mu$.
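
A minimal simulation sketch of (1)-(2), using the exact solution on a time grid; the parameter values, grid, and seed are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# Sketch: simulate a GBM path via the exact solution (2).
# mu, sigma, S0, T, n and the seed are illustrative choices.
rng = np.random.default_rng(0)
mu, sigma, S0 = 0.1, 0.2, 10.0
T, n = 30.0, 3000
dt = T / n

# Cumulative Brownian increments give W_t on the grid.
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

# S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t)
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
```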

  3. Data
     Showing 5 independent realizations.
     [Figure: index level against time for the five simulated paths.]

  4. 4 alternatives
     Derive estimators on the blackboard (a Monte Carlo sketch follows below):
     ◮ OLS
     ◮ WLS
     ◮ OLS on transformed data
     ◮ MLE
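
A hedged Monte Carlo sketch of the four estimators on one simulated path. The OLS and WLS forms below come from the Euler discretization $\Delta S_t \approx \mu S_t \Delta t + \sigma S_t \Delta W_t$ and are my reading of the slide, to be checked against the blackboard derivation.

```python
import numpy as np

# Simulate one GBM path (illustrative parameters).
rng = np.random.default_rng(1)
mu, sigma, S0, dt, n = 0.1, 0.2, 10.0, 0.01, 3000
dlogS = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
S = S0 * np.exp(np.concatenate(([0.0], np.cumsum(dlogS))))

dS, Sp = np.diff(S), S[:-1]

mu_ols = np.sum(Sp * dS) / (dt * np.sum(Sp**2))  # OLS: regress dS on S dt
mu_wls = np.mean(dS / Sp) / dt                   # WLS: weights 1/(sigma S)^2
mu_t = np.mean(np.diff(np.log(S))) / dt          # OLS on log data: estimates mu~
# MLE: the log-increments are Gaussian, so the MLE of mu~ is mu_t;
# mu is recovered via mu = mu~ + sigma^2/2 with sigma^2 estimated from data.
s2 = np.var(np.diff(np.log(S)), ddof=1) / dt
mu_mle = mu_t + 0.5 * s2
print(mu_ols, mu_wls, mu_t, mu_mle)
```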

  5. Histograms
     [Figure: histograms of the four drift estimates: OLS, WLS, Transformed OLS, and MLE.]

  6. Finding a transformation
     Several strategies:
     ◮ Box-Cox transformations
     ◮ Doss transform (SDEs)
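
As an illustration of the first strategy, a minimal Box-Cox sketch with SciPy; the lognormal sample is a stand-in, chosen so that the estimated $\lambda \approx 0$ points to the log transform.

```python
import numpy as np
from scipy import stats

# Sketch: let the data pick a Box-Cox transformation y = (x^lam - 1)/lam.
rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # positive data required

y, lam = stats.boxcox(x)  # lam is chosen by maximum likelihood
print(f"estimated lambda: {lam:.3f}")  # near 0 => the log transform
```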

  7. Classical airline passenger data
     [Figure: monthly AirlinePassengers series, Jan 1949 to Jan 1959; the level grows
     from roughly 100 to 700 and the seasonal swings grow with the level.]

  8. Taking logarithms
     [Figure: the AirlinePassengers series on a logarithmic scale; the seasonal swings
     are now roughly constant in size.]

  9. Time series models
     Let $r_t$ be a stochastic process.
     ◮ $\mu_t = E[r_t \mid \mathcal{F}_{t-1}]$ is the conditional mean, modeled by an AR, ARMA, SETAR, STAR etc. model.
     ◮ Having a correctly specified model for the conditional mean allows us to model the conditional variance.
     ◮ I will for the rest of the lecture assume that $r_t$ is the zero mean returns.
     ◮ $\sigma^2_t = V[r_t \mid \mathcal{F}_{t-1}]$ is modeled using a dynamic variance model.

  10. Why are we interested in (conditional) variances?
     Several financial applications:
     ◮ Mean variance portfolio optimization
     ◮ VaR and ES calculations
     ◮ Conservative estimate of quantiles via the Chebyshev inequality:
       $P(|X - \mu| > a) \le \sigma^2 / a^2$   (3)
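
A quick numeric sketch of the third point: setting $\sigma^2 / a^2 = 0.05$ in (3) gives a distribution-free two-sided 5% bound, much wider than the Gaussian quantile. The numbers below are simple arithmetic, not from the slides.

```python
import numpy as np

# Chebyshev: P(|X - mu| > a) <= sigma^2 / a^2 = 0.05  =>  a = sigma / sqrt(0.05)
sigma = 1.0
a_cheb = sigma / np.sqrt(0.05)   # ~4.47 sigma, holds for any distribution
a_gauss = 1.96 * sigma           # two-sided 5% quantile under normality
print(a_cheb, a_gauss)
```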

  11. Dependence structures
     Dependence on the OMXS30.
     [Figure: sample autocorrelation of the returns and of the absolute returns
     (lags up to 30 and 300); the returns are close to uncorrelated while the
     absolute returns show slowly decaying dependence.]
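
A minimal sketch of how the two panels can be computed; `r` stands in for the OMXS30 log-returns (placeholder Gaussian noise here), so only real data would show the slow decay in $|r_t|$.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
r = rng.normal(size=2000)  # placeholder for OMXS30 log-returns
print(acf(r, 5))           # returns: near zero at all lags
print(acf(np.abs(r), 5))   # |returns|: slowly decaying on GARCH-type data
```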

  12. The ARCH family
     ◮ ARCH (1982), Bank of Sweden Prize (2003)
     ◮ GARCH (1986)
     ◮ EGARCH (1991)
     ◮ Special cases (IGARCH, A-GARCH, GJR-GARCH, EWMA)
     ◮ FIGARCH (1996)
     ◮ SW-GARCH
     ◮ GARCH in Mean (1987)

  13. ARCH model
     The ARCH (Auto Regressive Conditional Heteroscedasticity) model:
     ◮ The (mean free) model is given by $r_t = \sigma_t z_t$.
     ◮ The conditional variance is given by $\sigma^2_t = \omega + \sum_{i=1}^{p} \alpha_i r^2_{t-i}$.
     ◮ Easy to estimate as $\sigma^2_t \in \mathcal{F}_{t-1}$!
     ◮ Q: Compute $\mathrm{cov}(r_t, r_{t-h})$ and $\mathrm{cov}(r^2_t, r^2_{t-h})$ for this model.
     ◮ Hint: Use properties of expectations.
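
A minimal simulation sketch of the ARCH($p$) recursion; the parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: r_t = sigma_t z_t with sigma_t^2 = omega + sum_i alpha_i r_{t-i}^2.
rng = np.random.default_rng(4)
omega, alphas = 1e-5, np.array([0.3, 0.2])  # illustrative ARCH(2)
p, n = len(alphas), 5000

r = np.zeros(n)  # pre-sample returns set to zero
for t in range(p, n):
    sigma2 = omega + np.dot(alphas, r[t - p:t][::-1] ** 2)  # alphas[0] hits r_{t-1}
    r[t] = np.sqrt(sigma2) * rng.normal()
```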

  14.–16. ARCH, solution (slides 14–16 build up the same derivation)
     ◮ We have that $E[r_t] = E[E[\sigma_t z_t \mid \mathcal{F}_{t-1}]] = E[\sigma_t E[z_t \mid \mathcal{F}_{t-1}]] = 0$.
     ◮ Next, we compute $\mathrm{Cov}(r_t, r_{t-h})$ as
       $E[\sigma_t z_t \sigma_{t-h} z_{t-h}] = E[E[\sigma_t z_t \sigma_{t-h} z_{t-h} \mid \mathcal{F}_{t-1}]] = E[\sigma_t \sigma_{t-h} z_{t-h} E[z_t \mid \mathcal{F}_{t-1}]] = 0$.
     ◮ Computing $\mathrm{Cov}(r^2_t, r^2_{t-h})$ is harder. Introduce $\nu_t = r^2_t - \sigma^2_t$ (white noise!).
       It then follows that $r^2_t = \sigma^2_t + \nu_t = \omega + \sum_{i=1}^{p} \alpha_i r^2_{t-i} + \nu_t$.
       The $r^2_t$ is thus a ………process (with heteroscedastic noise).

  17. ARCH, limitations
     ◮ Large number of lags are needed to fit data.
     ◮ The model is rather restrictive, as the parameters must be bounded if moments should be finite.
     ◮ Exercise: Compute the restrictions for the ARCH(1) model to have finite variance
       (a worked sketch follows below).
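
A sketch of the standard argument for the exercise (the bound itself is a well-known result, not worked out on the slide):

```latex
% With z_t iid, E[z_t] = 0, V[z_t] = 1, the tower property gives
% E[r_t^2] = E[\sigma_t^2 E[z_t^2 \mid \mathcal{F}_{t-1}]] = E[\sigma_t^2].
% Taking expectations in \sigma_t^2 = \omega + \alpha_1 r_{t-1}^2 under
% stationarity, with m := E[r_t^2] = E[r_{t-1}^2]:
\[
  m = \omega + \alpha_1 m
  \quad\Longrightarrow\quad
  V[r_t] = m = \frac{\omega}{1 - \alpha_1},
\]
% which is finite and positive iff \omega > 0 and 0 \le \alpha_1 < 1.
```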

  18. GARCH (Generalized ARCH)
     ◮ Is the most common dynamic variance model.
     ◮ The conditional variance is given by
       $\sigma^2_t = \omega + \sum_{i=1}^{p} \alpha_i r^2_{t-i} + \sum_{j=1}^{q} \beta_j \sigma^2_{t-j}$.
     ◮ A GARCH(1,1) is often sufficient!
     ◮ Conditions on the parameters.
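
A minimal GARCH(1,1) simulation sketch; the parameter values echo the OMXS30 fit shown two slides below.

```python
import numpy as np

# Sketch: sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2.
rng = np.random.default_rng(5)
omega, alpha, beta = 1.9e-6, 0.0775, 0.9152  # from the OMXS30 fit below
n = 5000

r = np.zeros(n)
sigma2 = np.full(n, omega / (1.0 - alpha - beta))  # start at stationary variance
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.normal()
```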

  19. GARCH
     ◮ $\mathrm{Cov}(r_t, r_{t-h}) = 0$ as in the ARCH model.
     ◮ Computing $\mathrm{Cov}(r^2_t, r^2_{t-h})$ is similar to the ARCH model.
       Reintroducing $\nu_t = r^2_t - \sigma^2_t$ gives (assume $p = q$)
       $r^2_t = \sigma^2_t + \nu_t = \omega + \sum_{i=1}^{p} \alpha_i r^2_{t-i} + \sum_{j=1}^{p} \beta_j \sigma^2_{t-j} + \nu_t$
       $= \omega + \sum_{i=1}^{p} \alpha_i r^2_{t-i} + \sum_{j=1}^{p} \beta_j (r^2_{t-j} - \nu_{t-j}) + \nu_t$
       $= \omega + \sum_{i=1}^{p} (\alpha_i + \beta_i) r^2_{t-i} - \sum_{j=1}^{p} \beta_j \nu_{t-j} + \nu_t$.
     ◮ The $r^2_t$ is thus a ………process (with heteroscedastic noise).

  20. Estimation of GARCH(1,1) on OMXS30 logreturns
     Estimated parameters: $\omega = 1.9 \cdot 10^{-6}$, $\alpha_1 = 0.0775$, $\beta_1 = 0.9152$.
     [Figure: OMXS30 logreturns and estimated GARCH(1,1) volatility over 2000–2010,
     together with a normal probability plot of the normalised logreturns.]
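
A hedged sketch of how such a fit can be reproduced with the Python `arch` package (the slide's plots suggest another tool, likely MATLAB); `returns` should be the OMXS30 log-returns, replaced by placeholder noise here.

```python
import numpy as np
from arch import arch_model  # pip install arch

rng = np.random.default_rng(6)
returns = rng.normal(scale=0.01, size=3000)  # placeholder for OMXS30 logreturns

# Scaling by 100 keeps the optimizer well conditioned; the reported
# omega is then in (percent)^2 units, i.e. 1e4 times the raw-scale omega.
result = arch_model(100 * returns, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(result.params)  # omega, alpha[1], beta[1]
```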

  21. GARCH, special cases
     ◮ An IGARCH (integrated GARCH) is a GARCH where $\sum (\alpha_i + \beta_i) = 1$ and $\omega > 0$.
     ◮ The EWMA (exponentially weighted moving average) process is a process where
       $\alpha + \beta = 1$ and $\omega = 0$, i.e. the volatility is given by
       $\sigma^2_t = \alpha r^2_{t-1} + (1 - \alpha) \sigma^2_{t-1}$.
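
A minimal EWMA recursion sketch; $\alpha = 0.06$ is an illustrative choice (RiskMetrics famously uses $\lambda = 0.94$, i.e. $\alpha = 1 - \lambda = 0.06$, for daily data).

```python
import numpy as np

def ewma_variance(r, alpha=0.06):
    """EWMA: sigma_t^2 = alpha r_{t-1}^2 + (1 - alpha) sigma_{t-1}^2."""
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r)  # initialization is a modeling choice
    for t in range(1, len(r)):
        sigma2[t] = alpha * r[t - 1] ** 2 + (1 - alpha) * sigma2[t - 1]
    return sigma2
```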

  22. EGARCH (Exponential GARCH)
     ◮ The conditional variance is given by
       $\log \sigma^2_t = \omega + \sum_{i=1}^{p} \alpha_i f(r_{t-i}) + \sum_{j=1}^{q} \beta_j \log \sigma^2_{t-j}$.
     ◮ $\log \sigma^2$ may be negative!
     ◮ Thus no (fewer) restrictions on the parameters.
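
A minimal EGARCH(1,1) recursion sketch. The news-impact function below is Nelson's common choice applied to the standardized residual, stated here as an assumption (the slide writes $f(r_{t-i})$ without specifying $f$); all parameter values are illustrative.

```python
import numpy as np

# Sketch: log sigma_t^2 = omega + alpha f(z_{t-1}) + beta log sigma_{t-1}^2.
rng = np.random.default_rng(7)
omega, alpha, beta, theta, gamma = -0.1, 0.1, 0.95, -0.05, 0.2
n = 3000

def f(z):
    # Nelson-style news impact; E|z| = sqrt(2/pi) for z ~ N(0,1).
    return theta * z + gamma * (np.abs(z) - np.sqrt(2.0 / np.pi))

r, log_s2 = np.zeros(n), np.zeros(n)
log_s2[0] = omega / (1.0 - beta)  # unconditional level, ignoring the f term
for t in range(1, n):
    z_prev = r[t - 1] / np.exp(0.5 * log_s2[t - 1])
    log_s2[t] = omega + alpha * f(z_prev) + beta * log_s2[t - 1]
    r[t] = np.exp(0.5 * log_s2[t]) * rng.normal()
```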

  23. Variations
     Several improvements can be applied to any of the models.
     ◮ Bad news tends to increase the variance more than good news. We can replace $r^2_{t-i}$ by
       ◮ $(r_{t-i} + \gamma)^2$ (Type I)
       ◮ $(|r_{t-i}| + c r^2_{t-i})$ (Type II)
       ◮ or replace $\alpha_i$ with $(\alpha_i + \tilde{\alpha}_i 1\{r_{t-i} < 0\})$
         (GJR, Glosten-Jagannathan-Runkle; see the sketch below).
     ◮ Distributions
     ◮ Stationarity problems.
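
A minimal sketch of the GJR variant: $\alpha_1$ is boosted by $\tilde{\alpha}_1$ whenever the lagged return is negative. Parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: sigma_t^2 = omega + (alpha + alpha_neg 1{r_{t-1} < 0}) r_{t-1}^2
#                     + beta sigma_{t-1}^2
rng = np.random.default_rng(8)
omega, alpha, alpha_neg, beta = 1.9e-6, 0.03, 0.09, 0.91
n = 3000

# Stationary variance uses alpha + alpha_neg/2: the indicator fires half the time.
r = np.zeros(n)
sigma2 = np.full(n, omega / (1.0 - alpha - 0.5 * alpha_neg - beta))
for t in range(1, n):
    a = alpha + alpha_neg * (r[t - 1] < 0.0)
    sigma2[t] = omega + a * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.normal()
```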
