Variance stabilization and simple GARCH models, Erik Lindström


SLIDE 1

Variance stabilization and simple GARCH models

Erik Lindström

SLIDE 2

Simulation, GBM

Standard model in mathematical finance, the GBM:

dS_t = µ S_t dt + σ S_t dW_t   (1)

Solution:

S_t = S_0 exp((µ − σ²/2) t + σ W_t)   (2)

Problem: Estimate µ̃ = µ − σ²/2, or µ.

SLIDE 3

Data

Showing 5 independent realizations

Figure: Five independent realizations of the GBM (index level vs. time).

SLIDE 4

4 alternatives

◮ OLS
◮ WLS
◮ OLS on transformed data
◮ MLE

Derive the estimators on the blackboard.
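The "OLS on transformed data" route can be sketched as follows: log-returns of a GBM are i.i.d. Gaussian with mean µ̃Δt and variance σ²Δt, so their sample mean and variance give simple estimators. A minimal Python sketch on simulated data; the function name and parameter values are illustrative, not from the lecture.

```python
import math
import random

def estimate_gbm(prices, dt):
    """Estimate GBM parameters from one path via its log-returns.

    For a GBM, log(S_{t+dt}/S_t) is i.i.d. N(mu_tilde*dt, sigma^2*dt)
    with mu_tilde = mu - sigma^2/2, so sample moments give estimators.
    """
    logret = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(logret)
    mean = sum(logret) / n
    var = sum((x - mean) ** 2 for x in logret) / (n - 1)
    sigma_hat = math.sqrt(var / dt)
    mu_tilde_hat = mean / dt
    return mu_tilde_hat, sigma_hat, mu_tilde_hat + sigma_hat ** 2 / 2

# Simulate a path with the exact solution (2), then recover the parameters.
random.seed(1)
mu, sigma, dt = 0.08, 0.2, 1 / 252
s, path = 1.0, [1.0]
for _ in range(25_200):  # 100 "years" of daily data
    s *= math.exp((mu - sigma ** 2 / 2) * dt
                  + sigma * math.sqrt(dt) * random.gauss(0, 1))
    path.append(s)
mu_t_hat, sigma_hat, mu_hat = estimate_gbm(path, dt)
```

Note that σ is estimated far more precisely than the drift: the variance of the drift estimate shrinks only with the total time span, not the number of observations, which is what the histogram comparison below illustrates.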

SLIDE 5

Histograms

Figure: Histograms of the estimates from OLS, WLS, transformed OLS, and MLE.

SLIDE 6

Finding a transformation

Several strategies

◮ Box-Cox transformations
◮ Doss transform (SDEs)
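As a sketch of the Box-Cox idea: the transform is y ↦ (y^λ − 1)/λ, with the logarithm as the λ → 0 case. A minimal hand-rolled Python version (for real work one would use a library routine that also estimates λ):

```python
import math

def boxcox(y, lam):
    """Box-Cox transform: (y^lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

# Two groups with the same *relative* spread but different levels:
low = boxcox([90.0, 110.0], 0)    # spread around level 100
high = boxcox([360.0, 440.0], 0)  # same relative spread around level 400
```

The demo shows the point of the λ = 0 case: after the log transform both groups have the same absolute spread, i.e. the variance has been stabilized.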

SLIDE 7

Classical airline passenger data

Figure: Time series plot of AirlinePassengers (monthly, from Jan 1949).

SLIDE 8

Taking logarithms

Figure: The AirlinePassengers series after taking logarithms.

SLIDE 9

Time series models

Let r_t be a stochastic process.

◮ µ_t = E[r_t | F_{t−1}] is the conditional mean, modeled by an AR, ARMA, SETAR, STAR, etc. model.
◮ Having a correctly specified model for the conditional mean allows us to model the conditional variance.
◮ For the rest of the lecture I will assume that r_t is the zero-mean returns.
◮ σ²_t = V[r_t | F_{t−1}] is modeled using a dynamic variance model.

SLIDE 10

Why are we interested in (conditional) variances?

Several financial applications:

◮ Mean-variance portfolio optimization
◮ VaR and ES calculations
◮ Conservative estimates of quantiles via the Chebyshev inequality

P(|X − µ| > a) ≤ σ²/a²   (3)
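A quick numerical sanity check of the Chebyshev bound (3), on simulated standard normal data (illustrative, not financial data):

```python
import random

# Empirical check of the Chebyshev bound (3): P(|X - mu| > a) <= sigma^2 / a^2.
random.seed(0)
mu, sigma, a, n = 0.0, 1.0, 2.0, 100_000
xs = [random.gauss(mu, sigma) for _ in range(n)]
freq = sum(abs(x - mu) > a for x in xs) / n   # about 0.046 for normal data
bound = sigma ** 2 / a ** 2                   # 0.25: conservative, as expected
```

The bound holds for any distribution with finite variance, which is exactly why it is conservative: for Gaussian data the true tail probability is roughly five times smaller than the bound.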

SLIDE 11

Dependence structures

Dependence on the OMXS30.

Figure: Autocorrelation of OMXS30 returns (left) and of absolute returns (right).
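The OMXS30 data is not reproduced here, but the same pattern, near-zero autocorrelation in returns and clearly positive autocorrelation in absolute returns, appears in a simulated ARCH(1)-type series. A minimal sketch; parameter values are illustrative:

```python
import random

def acf(x, lag):
    """Sample autocorrelation of the sequence x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n
    return ck / c0

# ARCH(1)-type series: sigma_t^2 = omega + alpha * r_{t-1}^2, r_t = sigma_t z_t.
random.seed(2)
omega, alpha = 0.1, 0.5
r, rs = 0.0, []
for _ in range(20_000):
    sigma2 = omega + alpha * r * r
    r = sigma2 ** 0.5 * random.gauss(0, 1)
    rs.append(r)

rho_r = acf(rs, 1)                       # near zero: returns are uncorrelated
rho_abs = acf([abs(v) for v in rs], 1)   # clearly positive: magnitudes cluster
```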

SLIDE 12

The ARCH family

◮ ARCH (1982), Bank of Sweden … (2003)
◮ GARCH (1986)
◮ EGARCH (1991)
◮ Special cases (IGARCH, A-GARCH, GJR-GARCH, EWMA)
◮ FIGARCH (1996)
◮ SW-GARCH
◮ GARCH in Mean (1987)

SLIDE 13

ARCH

The ARCH (Auto Regressive Conditional Heteroscedasticity) model

◮ The (mean-free) model is given by

r_t = σ_t z_t,

◮ The conditional variance is given by

σ²_t = ω + Σ_{i=1}^p α_i r²_{t−i}

◮ Easy to estimate as σ²_t ∈ F_{t−1}!
◮ Q: Compute Cov(r_t, r_{t−h}) and Cov(r²_t, r²_{t−h}) for this model.
◮ Hint: Use properties of expectations
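The ARCH recursion can be simulated directly; under Σ α_i < 1 the unconditional variance is ω / (1 − Σ α_i), which the simulation should roughly reproduce. A minimal sketch with illustrative parameters:

```python
import random
import statistics

def simulate_arch(omega, alphas, n, seed=0):
    """Simulate r_t = sigma_t z_t, sigma_t^2 = omega + sum_i alpha_i r_{t-i}^2."""
    rng = random.Random(seed)
    r = [0.0] * len(alphas)   # zero pre-sample returns
    out = []
    for _ in range(n):
        sigma2 = omega + sum(a * r[-(i + 1)] ** 2 for i, a in enumerate(alphas))
        r.append(sigma2 ** 0.5 * rng.gauss(0, 1))
        out.append(r[-1])
    return out

# With sum(alphas) < 1 the unconditional variance is omega / (1 - sum(alphas)).
rs = simulate_arch(0.2, [0.3, 0.2], 50_000, seed=3)
var_hat = statistics.pvariance(rs)
var_theory = 0.2 / (1 - 0.3 - 0.2)   # = 0.4
```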


SLIDE 16

ARCH, solution

◮ We have that

E[rt] = E[E[σtzt|Ft−1]] = E[σtE[zt|Ft−1]] = 0.

◮ Next, we compute Cov(rt, rt−h)) as

E[σtztσt−hzt−h] = E[E[σtztσt−hzt−h|Ft−1]] = E[σtσt−hzt−hE[zt|Ft−1]] = 0.

◮ Computing Cov(r2 t , r2 t−h) is harder. Introduce

νt = r2

t − σ2 t (white noise!). It then follows that

r2

t = σ2 t + νt = ω + p

i=1

αir2

t−i + νt.

The r2

t is thus a ………process (with

heteroscedastic noise).

SLIDE 17

ARCH, limitations

◮ A large number of lags is needed to fit data.
◮ The model is rather restrictive, as the parameters must be bounded if moments are to be finite.
◮ Exercise: Compute the restrictions for the ARCH(1) model to have finite variance.

SLIDE 18

GARCH (Generalized ARCH)

◮ The most common dynamic variance model.
◮ The conditional variance is given by

σ²_t = ω + Σ_{i=1}^p α_i r²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j}

◮ A GARCH(1,1) is often sufficient!
◮ Conditions on the parameters.
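A minimal simulation sketch of the GARCH(1,1) recursion (illustrative parameters); when α + β < 1 the unconditional variance is ω / (1 − α − β):

```python
import random
import statistics

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate r_t = sigma_t z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    r, out = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * r * r + beta * sigma2
        r = sigma2 ** 0.5 * rng.gauss(0, 1)
        out.append(r)
    return out

rs = simulate_garch11(0.05, 0.1, 0.85, 100_000, seed=4)
var_hat = statistics.pvariance(rs)
var_theory = 0.05 / (1 - 0.1 - 0.85)   # = 1.0
```

Compared with a pure ARCH model, the β σ²_{t−1} term lets one lag carry the long memory that an ARCH fit would need many lags for.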

SLIDE 19

GARCH

◮ Cov(r_t, r_{t−h}) = 0 as in the ARCH model.
◮ Computing Cov(r²_t, r²_{t−h}) is similar to the ARCH model. Reintroducing ν_t = r²_t − σ²_t gives (assume p = q)

r²_t = σ²_t + ν_t
     = ω + Σ_{i=1}^p α_i r²_{t−i} + Σ_{j=1}^p β_j σ²_{t−j} + ν_t
     = ω + Σ_{i=1}^p α_i r²_{t−i} + Σ_{j=1}^p β_j (r²_{t−j} − ν_{t−j}) + ν_t
     = ω + Σ_{i=1}^p (α_i + β_i) r²_{t−i} − Σ_{j=1}^p β_j ν_{t−j} + ν_t.

The r²_t is thus a ……… process (with heteroscedastic noise).

SLIDE 20

Estimation of GARCH(1,1) on OMXS30 logreturns

ω = 1.9 · 10⁻⁶, α_1 = 0.0775, β_1 = 0.9152

Figure: OMXS30 log-returns, the estimated GARCH(1,1) volatility, the normalised log-returns, and a normal probability plot of the normalised log-returns.
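The fit is obtained by maximizing the Gaussian likelihood. Below is a sketch of the negative log-likelihood recursion in Python, evaluated at the parameters reported above; the data is a simulated placeholder (the OMXS30 series itself is not included here), and initialising σ² at the sample variance is one common convention among several:

```python
import math
import random

def garch11_nll(params, returns):
    """Gaussian negative log-likelihood of a zero-mean GARCH(1,1).
    Fitting the model means minimizing this over (omega, alpha, beta)."""
    omega, alpha, beta = params
    sigma2 = sum(r * r for r in returns) / len(returns)  # init: sample variance
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2 * math.pi) + math.log(sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return nll

# Evaluate at the parameters reported above, on placeholder simulated data.
random.seed(5)
fake_returns = [random.gauss(0, 0.01) for _ in range(1000)]
nll = garch11_nll((1.9e-6, 0.0775, 0.9152), fake_returns)
```

The recursion is feasible precisely because σ²_t ∈ F_{t−1}: the variance for observation t is known before r_t is seen.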

SLIDE 21

GARCH, special cases

◮ An IGARCH (integrated GARCH) is a GARCH where Σ α_i + Σ β_j = 1 and ω > 0.
◮ The EWMA (exponentially weighted moving average) process is a process where α + β = 1 and ω = 0, i.e. the volatility is given by

σ²_t = α r²_{t−1} + (1 − α) σ²_{t−1}
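The EWMA recursion is easy to implement directly. A minimal sketch; the initialisation by the first squared return is one convention among several (RiskMetrics-style daily volatility uses 1 − α = 0.94):

```python
def ewma_variance(returns, alpha):
    """EWMA recursion: sigma2_t = alpha * r_{t-1}^2 + (1 - alpha) * sigma2_{t-1}."""
    sigma2 = returns[0] ** 2   # one common initialisation choice
    path = [sigma2]
    for r in returns[1:]:
        sigma2 = alpha * r * r + (1 - alpha) * sigma2
        path.append(sigma2)
    return path

# A level shift in the returns: the variance estimate tracks it with a lag.
vols = ewma_variance([0.01] * 100 + [0.03] * 100, 0.06)
```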

SLIDE 22

EGARCH (Exponential GARCH)

◮ The conditional variance is given by

log σ²_t = ω + Σ_{i=1}^p α_i f(r_{t−i}) + Σ_{j=1}^q β_j log σ²_{t−j}

◮ log σ²_t may be negative!
◮ Thus no (fewer) restrictions on the parameters.

SLIDE 23

Variations

Several improvements can be applied to any of the models.

◮ Bad news tends to increase the variance more than good news. We can replace r²_{t−i} by
  ◮ (r_{t−i} + γ)² (Type I)
  ◮ (|r_{t−i}| + c r²_{t−i}) (Type II)
◮ Replace α_i with (α_i + α̃_i 1{r_{t−i} < 0}) (GJR, Glosten-Jagannathan-Runkle).
◮ Distributions
◮ Stationarity problems.
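The GJR idea can be sketched as a small simulation: the ARCH coefficient is increased by α̃ when the previous return is negative, so the conditional variance should on average be higher after bad news. Parameter values and function names here are illustrative, not from the lecture:

```python
import random

def simulate_gjr(omega, alpha, alpha_neg, beta, n, seed=0):
    """GJR-GARCH(1,1) sketch: the ARCH coefficient grows by alpha_neg
    when the previous return is negative."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - alpha_neg / 2 - beta)  # rough stationary level
    r = 0.0
    neg_s2, pos_s2 = [], []   # variance after negative / non-negative returns
    for _ in range(n):
        a = alpha + (alpha_neg if r < 0 else 0.0)
        sigma2 = omega + a * r * r + beta * sigma2
        (neg_s2 if r < 0 else pos_s2).append(sigma2)
        r = sigma2 ** 0.5 * rng.gauss(0, 1)
    return neg_s2, pos_s2

neg_s2, pos_s2 = simulate_gjr(0.05, 0.05, 0.1, 0.85, 50_000, seed=6)
```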
