Lecture on advanced volatility models, Erik Lindström. PowerPoint PPT Presentation.



SLIDE 1

Lecture on advanced volatility models

Erik Lindström

SLIDE 2

Stochastic Volatility (SV)

Let rt be a stochastic process.

◮ The log returns (observed) are given by

rt = exp(Vt/2)zt.

◮ The volatility Vt is a hidden AR process

Vt = α + βVt−1 + et.

◮ Or, more generally,

A(·)Vt = et.

◮ More flexible than e.g. EGARCH models!
◮ Multivariate extensions exist.

SLIDE 3

A simulation of Taylor (1982)

[Figure: two panels over 1000 time steps: the volatility exp(x/2) (top) and the returns (bottom)]

With α = −0.2, β = 0.95 and σ = 0.2.
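The simulated paths above can be reproduced in a few lines. A minimal sketch of the SV model with the slide's parameters (NumPy; starting the chain at its stationary mean and the seed are my choices):

```python
import numpy as np

def simulate_sv(alpha=-0.2, beta=0.95, sigma=0.2, n=1000, seed=0):
    """Simulate the SV model: V_t = alpha + beta*V_{t-1} + e_t,
    r_t = exp(V_t/2) * z_t, with e_t ~ N(0, sigma^2), z_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    v = np.empty(n)
    v[0] = alpha / (1 - beta)          # start at the stationary mean
    for t in range(1, n):
        v[t] = alpha + beta * v[t - 1] + sigma * rng.standard_normal()
    r = np.exp(v / 2) * rng.standard_normal(n)
    return v, r

v, r = simulate_sv()
```

Plotting `np.exp(v / 2)` and `r` reproduces the two panels of the slide.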

SLIDE 4

Long Memory Stochastic Volatility (LMSV)

The autocorrelation of volatility decays more slowly than at an exponential rate.

◮ The returns (observed) are given by

rt = exp(Vt/2)zt.

◮ The volatility Vt is a hidden, fractionally integrated AR process

A(·)(1 − q^{-1})^b Vt = et,  where b ∈ (0, 0.5).

◮ This gives long memory!

SLIDE 5

Long Memory Stochastic Volatility (LMSV)

◮ The long memory model can be approximated

by a large AR process.

◮ It can be shown that

(1 − q^{-1})^b = Σ_{j=0}^{∞} πj q^{-j},  where πj = Γ(j − b) / (Γ(j + 1) Γ(−b)).
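The coefficients πj are convenient to compute with the recursion π0 = 1, πj = πj−1 (j − 1 − b)/j, which follows directly from the Γ-ratio above. A minimal sketch (the function names are mine; `pi_gamma` cross-checks the recursion against the Γ formula on log scale):

```python
import math

def pi_coeffs(b, n):
    """First n coefficients of (1 - q^{-1})^b = sum_{j>=0} pi_j q^{-j},
    via the recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - b) / j."""
    pis = [1.0]
    for j in range(1, n):
        pis.append(pis[-1] * (j - 1 - b) / j)
    return pis

def pi_gamma(b, j):
    """Same coefficient from pi_j = Gamma(j - b) / (Gamma(j+1) Gamma(-b)),
    on log scale; Gamma(-b) < 0 for b in (0, 0.5), hence the sign flip."""
    if j == 0:
        return 1.0
    log_abs = math.lgamma(j - b) - math.lgamma(j + 1) - math.lgamma(-b)
    return -math.exp(log_abs)
```

Truncating the sum at a large J gives the "large AR process" approximation mentioned on the slide.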

SLIDE 8

Quasi Likelihood inference

◮ The parameters in the SV model can be found by

studying yt = log(r2

t ) and xt = Vt. ◮ This leads to (with ηt = log(z2 t ))

yt = log(r2

t ) = log(exp(Vt)) + log(z2 t ) = xt + ηt

xt = α + βxt−1 + et.

◮ Estimate volatility and parameters using a Kalman filter!
◮ Practical consideration: P(rt = 0) > 0 in the real world.
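The filtering step can be sketched as a scalar Kalman filter on yt = log(rt²). The quasi-likelihood approximation treats ηt = log(zt²) as Gaussian with its true first two moments (for Gaussian zt these are E[ηt] = ψ(1/2) + log 2 ≈ −1.2704 and V[ηt] = π²/2); parameter values and function names here are illustrative:

```python
import numpy as np

def sv_kalman_filter(y, alpha, beta, sigma_e):
    """Quasi-likelihood Kalman filter for the SV model in state-space form:
       x_t = alpha + beta*x_{t-1} + e_t   (state: x_t = V_t)
       y_t = x_t + eta_t                  (obs:   y_t = log r_t^2)
    eta_t = log z_t^2 is approximated as Gaussian with mean -1.2704
    and variance pi^2/2."""
    m_eta, r_var = -1.2704, np.pi**2 / 2
    x = alpha / (1 - beta)                 # stationary mean as prior
    p = sigma_e**2 / (1 - beta**2)         # stationary variance as prior
    x_filt = np.empty(len(y))
    for t, yt in enumerate(y):
        x_pred = alpha + beta * x          # predict
        p_pred = beta**2 * p + sigma_e**2
        s = p_pred + r_var                 # innovation variance
        k = p_pred / s                     # Kalman gain
        x = x_pred + k * (yt - x_pred - m_eta)   # update
        p = (1 - k) * p_pred
        x_filt[t] = x
    return x_filt

# Simulate the SV model and filter the hidden volatility back out.
rng = np.random.default_rng(1)
alpha, beta, sigma_e, n = -0.2, 0.95, 0.2, 5000
v = np.empty(n)
v[0] = alpha / (1 - beta)
for t in range(1, n):
    v[t] = alpha + beta * v[t - 1] + sigma_e * rng.standard_normal()
r = np.exp(v / 2) * rng.standard_normal(n)
y = np.log(r**2)                           # fails if some r_t == 0
v_hat = sv_kalman_filter(y, alpha, beta, sigma_e)
```

The `np.log(r**2)` line is exactly where the slide's practical consideration bites: with real (discretely quoted) prices some returns are exactly zero, so the transform is modified in practice.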

SLIDE 9

Stochastic Volatility in continuous time

A popular application of stochastic volatility models is option valuation.

◮ Several parameterizations exist.
◮ The Heston model is the most widely used, mainly due to its computational properties:

dSt = µSt dt + √Vt St dWt^(S)
dVt = κ(θ − Vt) dt + σ√Vt dWt^(V)
dWt^(S) dWt^(V) = ρ dt

◮ Note that the drift and squared diffusion have

affine form.

◮ This reduces the task of computing prices to

inversion of a Fourier integral.
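For simulation (as opposed to Fourier pricing), the Heston dynamics are often discretized with an Euler scheme. A minimal sketch, assuming full truncation to handle the variance going negative (one common discretization choice, not stated on the slide; all parameter values are illustrative):

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    sigma=0.3, rho=-0.7, T=1.0, n=252, seed=0):
    """Euler discretization of the Heston model with correlated drivers
    dW^S dW^V = rho dt; negative variance handled by full truncation
    (only v^+ = max(v, 0) enters the drift and diffusion terms)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s, v = np.empty(n + 1), np.empty(n + 1)
    s[0], v[0] = s0, v0
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
        vp = max(v[t], 0.0)
        s[t + 1] = s[t] * np.exp((mu - 0.5 * vp) * dt
                                 + np.sqrt(vp * dt) * z1)
        v[t + 1] = v[t] + kappa * (theta - vp) * dt \
                   + sigma * np.sqrt(vp * dt) * z2
    return s, v

s, v = simulate_heston()
```

Simulating log S (rather than S directly) keeps the price strictly positive regardless of the step size.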

SLIDE 10

Continuous time volatility

◮ We can compute the volatility in a continuous

time model.

◮ Advantage: A continuous time model can use

data from any time scale, and does not assume that data is equidistantly sampled.

◮ Can derive a limit theory when data is sampled

at high frequency.

◮ This is based on the general theory on quadratic

variation.

SLIDE 11

Quadratic variation

◮ Let {S} be a general semimartingale.
◮ Let πN = {0 = τ0 < τ1 < . . . < τN = T} be a partition of [0, T], and denote ∆ = τn − τn−1, where ∆ = T/N.
◮ Define

QN = Σ_{n=1}^{N} (S(τn) − S(τn−1))².

◮ What are the properties of QN?
◮ QN converges to the quadratic variation.

SLIDE 13

Quadratic variation, cont

Let St = σWt.

◮ Then

QN = Σ_{n=1}^{N} (S(τn) − S(τn−1))².

◮ Note that (S(τn) − S(τn−1))² ∼ σ²∆ χ²(1).
◮ Remember E[χ²(p)] = p, V[χ²(p)] = 2p.
◮ What are the properties of QN?
◮ E[QN] = σ²∆ E[χ²(N)] = σ²∆N = σ²T.
◮ V[QN] = (σ²∆)² V[χ²(N)] = (σ⁴T²/N²) · 2N = 2σ⁴T²/N → 0.
◮ Chebyshev's inequality then gives QN → σ²T in probability.
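The convergence QN → σ²T is easy to verify by simulation. A minimal check for St = σWt on an equidistant grid (illustrative σ = 0.2, T = 1):

```python
import numpy as np

def realized_qn(sigma=0.2, T=1.0, N=100_000, seed=0):
    """Q_N = sum over n of (S(tau_n) - S(tau_{n-1}))^2 for S_t = sigma*W_t,
    using that the Brownian increments are i.i.d. N(0, sigma^2 * T/N)."""
    rng = np.random.default_rng(seed)
    dS = sigma * np.sqrt(T / N) * rng.standard_normal(N)
    return np.sum(dS**2)

q = realized_qn()   # should be close to sigma^2 * T = 0.04
```

The standard deviation of QN is sqrt(2σ⁴T²/N), so with N = 100 000 the estimate sits within a fraction of a percent of σ²T.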

SLIDE 14

Quadratic variation of daily log returns for the Black-Scholes model

[Figure: quadratic variation of daily log returns over 500 trading days]

SLIDE 16

Quadratic variation, cont

◮ For a diffusion process

dXt = µ(t, Xt)dt + σ(t, Xt)dWt,

the quadratic variation converges to QN → ∫_0^T σ²(s, Xs) ds.

◮ For a jump diffusion

dXt = µ(t, Xt)dt + σ(t, Xt)dWt + dZt,

where {Z} is a compound Poisson process with NT jumps of random size Ji on [0, T], the quadratic variation converges to

QN → ∫_0^T σ²(s, Xs) ds + Σ_{i=1}^{NT} Ji².

SLIDE 17

Realized variation

◮ The quadratic (realized) variation is estimated as

QVN = Σ_{n=1}^{N} (S(τn) − S(τn−1))².

◮ The Bipower variation is estimated as

BPVN = (π/2) Σ_{n=1}^{N−1} |S(τn+1) − S(τn)| |S(τn) − S(τn−1)|.

◮ It can be shown that the Bipower variation converges to BPVN → ∫_0^T σ²(s, Xs) ds for a jump diffusion process (and even for a general semimartingale).

◮ The difference between the realized variation

and Bipower variation is used to estimate the size of the jump component.
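The jump-splitting idea can be illustrated numerically: add a few deterministic jumps to a Black-Scholes log-price path, and QV − BPV recovers the sum of squared jump sizes while BPV stays near σ²T. A minimal sketch (all parameter, jump, and placement values are illustrative):

```python
import numpy as np

def qv_bpv(x):
    """Realized variation and Bipower variation of a log-price path x."""
    dx = np.diff(x)
    qv = np.sum(dx**2)
    bpv = (np.pi / 2) * np.sum(np.abs(dx[1:]) * np.abs(dx[:-1]))
    return qv, bpv

rng = np.random.default_rng(0)
sigma, T, N = 0.2, 1.0, 100_000
x = np.cumsum(sigma * np.sqrt(T / N) * rng.standard_normal(N))
jumps = np.array([0.05, -0.04, 0.06])   # three jumps, sum J_i^2 = 0.0077
x_jump = x.copy()
for pos, j in zip([20_000, 50_000, 80_000], jumps):
    x_jump[pos:] += j                   # each jump shifts the path onward
qv, bpv = qv_bpv(x_jump)
# BPV stays near sigma^2 * T = 0.04; QV also picks up sum J_i^2.
```

Each jump enters QV once, squared, but enters BPV only through two products with ordinary (order √∆) increments, which vanish as ∆ → 0.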

SLIDE 18

Example: Realised variation for daily log return of Black-Scholes

[Figure: two panels over 900 trading days: QV and BPV (top), and QV − BPV, indicating possible jumps (bottom)]

SLIDE 19

Example: Realised variation for daily log return of OMXS30

[Figure: QV and BPV (top), and QV − BPV, indicating possible jumps (bottom), for OMXS30 daily log returns, 1995–2010]

SLIDE 22

Practical considerations

◮ Theory suggests that ∆ → 0 would be a good thing.
◮ Practice suggests otherwise, cf. stylized facts.
◮ The problem is microstructure noise.
◮ Several strategies exist for correcting for this.
