Option pricing: a new adaptive Monte Carlo method
Nadia Oudjane and Jean-Michel Marin


SLIDE 1

Option pricing: a new adaptive Monte Carlo method

Nadia Oudjane and Jean-Michel Marin

  • 1. Motivation: Option pricing
  • 2. Importance Sampling for variance reduction
  • 3. Particle methods to approximate the optimal importance law
  • 4. Simulation results

SLIDE 2
  • 1. Motivation: Option pricing

General framework

◮ We consider a Markov chain (X_n)_{n≥0} ∈ E = R^d with
  • Initial distribution: µ_0 = L(X_0)
  • Transition kernels: Q_k = L(X_k | X_{k−1})
  • Joint distributions: µ_{0:k} = L(X_0, …, X_k) = µ_{0:k−1} × Q_k
◮ The goal is to compute efficiently the expectation
  µ_{0:n}(H) = E[H(X_{0:n})] = E[H(X_0, …, X_n)]
  for a given H : E^{n+1} → R
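As a concrete instance of this framework, the sketch below takes X_k to be a Gaussian random walk and estimates µ_{0:n}(H) by crude Monte Carlo. The chain, the functional H, and all parameters are illustrative choices, not taken from the slides.

```python
import numpy as np

# Hypothetical instance of the general framework: X_k is a Gaussian
# random walk (mu_0 = N(0,1), Q_k adds an independent N(0,1) step),
# and H depends on the path only through the terminal value X_n.
rng = np.random.default_rng(0)

def simulate_paths(n, M):
    """Draw M i.i.d. paths (X_0, ..., X_n) from mu_{0:n}."""
    steps = rng.standard_normal((M, n + 1))  # X_0 plus n increments
    return np.cumsum(steps, axis=1)

def crude_mc(H, n, M):
    """Crude Monte Carlo estimate of mu_{0:n}(H) = E[H(X_{0:n})]."""
    paths = simulate_paths(n, M)
    return H(paths).mean()

# Example: H(x_{0:n}) = max(x_n, 0); since X_n ~ N(0, n+1), the exact
# value is sqrt((n+1) / (2*pi)).
n, M = 9, 200_000
est = crude_mc(lambda p: np.maximum(p[:, -1], 0.0), n, M)
```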

SLIDE 3
  • 1. Motivation: Option pricing

Example of application: pricing a European call

◮ We consider a price process (S_k)_{k≥0} modeled by
  S_k = V_k(X_k) = S_0 exp(M_k + σ X_k)
  where (X_k)_{k≥0} is a Markov chain and (M_k)_{k≥0} is chosen so that S is a martingale
  • Black-Scholes => no spikes: X_k = W_{t_k}, where W is a standard Brownian motion
  • NIG Lévy model => spikes: X_k = L_{t_k}, where L is the Normal Inverse Gaussian Lévy process

[Figure: sample price trajectories]

◮ The price of the European call with maturity t_n and strike K is given by
  E[H(X_{0:n})] = E[(V_n(X_n) − K)_+]
◮ Crude Monte Carlo is inefficient when K ≫ S_0 => variance reduction
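To see why crude Monte Carlo degrades when K ≫ S_0, the sketch below (with assumed parameters S_0 = 50, σ = 20%/year, T = 1 year, zero rate; these are illustrative, not the slides') compares an at-the-money and a deep out-of-the-money strike: almost every deep-OTM draw gives zero payoff, so the relative standard error blows up.

```python
import numpy as np

# Sketch (assumed parameters): crude Monte Carlo for a European call
# under Black-Scholes, S_T = S_0 * exp(-0.5*sigma^2*T + sigma*W_T),
# so that the (zero-rate) price process is a martingale.
rng = np.random.default_rng(1)
S0, sigma, T, M = 50.0, 0.2, 1.0, 100_000

def bs_call_mc(K):
    W_T = np.sqrt(T) * rng.standard_normal(M)
    S_T = S0 * np.exp(-0.5 * sigma**2 * T + sigma * W_T)
    payoff = np.maximum(S_T - K, 0.0)
    # For K >> S0 almost every draw gives payoff 0.
    hit_rate = (payoff > 0).mean()
    return payoff.mean(), payoff.std() / np.sqrt(M), hit_rate

price_atm, se_atm, hits_atm = bs_call_mc(K=50.0)  # at the money
price_otm, se_otm, hits_otm = bs_call_mc(K=80.0)  # deep out of the money
```

Only a fraction of a percent of the deep-OTM draws contribute, so the relative standard error is an order of magnitude worse than at the money.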

SLIDE 4
  • 2. Importance Sampling for variance reduction

Importance Sampling for variance reduction

◮ Change of measure µ → ν:
  µ(H) = E[H(X)] = E[H(X̃) (dµ/dν)(X̃)],
  where X ∼ µ and X̃ ∼ ν
◮ Monte Carlo approximation:
  E[H(X)] ≈ (1/M) Σ_{i=1}^M H(X̃^i) (dµ/dν)(X̃^i),
  where (X̃^1, …, X̃^M) are i.i.d. ∼ ν
◮ Optimal change of measure µ → ν*, achieving zero variance if H ≥ 0:
  ν* = Hµ / µ(H) = Hµ / E[H(X)] =: H · µ
◮ ν* depends on µ(H) ⇒ how to approximate ν*?
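A minimal one-dimensional sketch of the change of measure, assuming µ = N(0, 1), H(x) = 1{x > 3}, and a mean-shifted proposal ν = N(3, 1). The shifted Gaussian is a standard textbook choice, not the particle-based ν* of the talk; it illustrates how reweighting by dµ/dν keeps the estimator unbiased while shrinking its variance.

```python
import numpy as np

# Importance sampling for mu = N(0,1) and H(x) = 1{x > 3}:
# draw from the proposal nu = N(a,1) and reweight by dmu/dnu.
rng = np.random.default_rng(2)
M, a = 100_000, 3.0

def weight(x):
    """Likelihood ratio dmu/dnu(x) for mu = N(0,1), nu = N(a,1)."""
    return np.exp(-a * x + 0.5 * a**2)

x_mu = rng.standard_normal(M)        # X ~ mu (crude Monte Carlo)
x_nu = a + rng.standard_normal(M)    # X~ ~ nu (importance sampling)

h_crude = (x_mu > a).astype(float)
h_is = (x_nu > a) * weight(x_nu)

est_crude, est_is = h_crude.mean(), h_is.mean()
var_crude, var_is = h_crude.var(), h_is.var()
# Both estimators target P(X > 3) ~ 1.35e-3; the IS one has far
# smaller variance because nu puts most of its mass past the threshold.
```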

SLIDE 5
  • 2. Importance Sampling for variance reduction

Progressive correction

◮ We introduce functions H_k : E^{k+1} → R for all 0 ≤ k ≤ n, with
  H_0(x_0) = 1 and H_n(x_{0:n}) = H(x_{0:n}) for all x_{0:n} ∈ E^{n+1}
◮ We introduce potential functions G_k : E^{k+1} → R, with
  G_0(x_0) = 1 and G_k(x_{0:k}) = H_k(x_{0:k}) / H_{k−1}(x_{0:k−1}) for all x_{0:k} ∈ E^{k+1}
◮ We introduce the sequence of measures (ν_{0:k})_{0≤k≤n} on (E^{k+1})_{0≤k≤n}:
  ν_{0:k} = G_{0:k} · µ_{0:k} = G_{0:k} µ_{0:k} / µ_{0:k}(G_{0:k}), where G_{0:k} = Π_{p=0}^k G_p
◮ Since G_{0:n} = H, ν_{0:n} is the optimal importance distribution for µ_{0:n}(H):
  ν_{0:n} = H · µ_{0:n} = ν*_{0:n}
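Since H_0 = 1 and G_p = H_p / H_{p−1}, the product G_{0:k} = Π_{p=0}^k G_p telescopes to H_k, which is why G_{0:n} = H. A tiny numeric check, with an arbitrary positive sequence standing in for the values H_k(x_{0:k}) along one fixed path:

```python
import numpy as np

# Numeric check of the telescoping identity prod_{p<=k} G_p = H_k,
# for a hypothetical positive sequence of values H_k evaluated along
# one fixed path x_{0:n}.
rng = np.random.default_rng(3)
n = 5
H_vals = np.concatenate(([1.0], rng.uniform(0.5, 2.0, n)))  # H_0 = 1

G_vals = H_vals[1:] / H_vals[:-1]                   # G_k = H_k / H_{k-1}
G_0k = np.concatenate(([1.0], np.cumprod(G_vals)))  # G_{0:k}, G_0 = 1

# The running product of potentials recovers H_k exactly at every k.
```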

SLIDE 6
  • 2. Importance Sampling for variance reduction

Evolution of (ν_{0:k})_{0≤k≤n}

◮ Evolution of (ν_{0:k})_{0≤k≤n}:
  ν_{0:k−1} --(1) Mutation--> η_{0:k} = ν_{0:k−1} × Q_k --(2) Correction--> ν_{0:k} = G_k · η_{0:k}
◮ [Del Moral & Garnier 2005] consider the sequence of measures (γ_{0:k})_{0≤k≤n} on (E^{k+1})_{0≤k≤n} such that, for every test function φ on E^{k+1},
  γ_{0:k}(φ) = E[Π_{p=0}^k G_p(X_{0:p}) φ(X_{0:k})] => γ_{0:n}(1) = µ_{0:n}(H) = E[H(X_{0:n})]
◮ Link between γ_{0:n}(1) and (η_{0:k})_{0≤k≤n}:
  γ_{0:n}(1) = Π_{k=0}^n η_{0:k}(G_k)

SLIDE 7
  • 3. Particle methods to approximate the optimal importance law

Approximation of ν_{0:n} by particle methods

◮ The idea is to replace η_{0:k} = ν_{0:k−1} × Q_k by its empirical measure
  η^N_{0:k} = S^N(ν_{0:k−1} × Q_k) = (1/N) Σ_{i=1}^N δ_{X^i_{0:k}},
  where (X^1_{0:k}, …, X^N_{0:k}) are i.i.d. ∼ ν_{0:k−1} × Q_k
◮ Particle approximation (ν^N_{0:k})_{0≤k≤n}:
  ν^N_{0:k−1} --(1) Selection and mutation--> η^N_{0:k} = S^N(ν^N_{0:k−1} × Q_k) --(2) Correction--> ν^N_{0:k} = G_k · η^N_{0:k}
◮ Particle approximation (γ^N_{0:k})_{0≤k≤n}:
  γ^N_{0:k} = γ^N_{0:k−1}(1) G_k η^N_{0:k}, hence γ^N_{0:n}(1) = Π_{k=0}^n η^N_{0:k}(G_k)

SLIDE 8
  • 3. Particle methods to approximate the optimal importance law

Algorithm

◮ Initialization: set ν^N_0 = ν_0 = µ_0
◮ Selection: generate independently (X̃^1_{0:k}, …, X̃^N_{0:k}) i.i.d. ∼ ν^N_{0:k} = Σ_{i=1}^N ω^i_k δ_{X^i_{0:k}}
◮ Mutation: for each i ∈ {1, …, N}, generate independently X^i_{k+1} ∼ Q_{k+1}(X̃^i_k, ·), then set
  η^N_{0:k+1} = (1/N) Σ_{i=1}^N δ_{X^i_{0:k+1}}
◮ Weighting: for each particle i ∈ {1, …, N}, compute
  ω^i_{k+1} = G_{k+1}(X^i_{0:k+1}) / Σ_{j=1}^N G_{k+1}(X^j_{0:k+1}),
  then set ν^N_{0:k+1} = Σ_{i=1}^N ω^i_{k+1} δ_{X^i_{0:k+1}}
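The selection/mutation/weighting loop can be sketched on a toy model: a Gaussian random walk with the illustrative potentials G_k(x_{0:k}) = exp(β x_k), chosen only because γ_{0:n}(1) = E[Π_p G_p(X_{0:p})] then has a closed form to check against. (The slides set G_0 = 1; here G_0 is nonconstant, which the identity γ_{0:n}(1) = Π_k η_{0:k}(G_k) accommodates.) Since these potentials depend only on the terminal coordinate, each particle only needs to carry its current state.

```python
import numpy as np

# Toy particle system: X_k a Gaussian random walk, illustrative
# potentials G_k(x_{0:k}) = exp(beta * x_k). Assumed setup, not the
# option-pricing potentials of the slides.
rng = np.random.default_rng(4)
n, N, beta = 4, 100_000, 0.2

# Initialization: nu^N_0 = mu_0 = N(0,1); weight and correct with G_0.
X = rng.standard_normal(N)
g = np.exp(beta * X)
gamma_est = g.mean()                 # eta^N_0(G_0)
w = g / g.sum()

for k in range(1, n + 1):
    # Selection: resample N ancestors from the weighted cloud nu^N.
    idx = rng.choice(N, size=N, p=w)
    # Mutation: move each selected particle with the kernel Q_k.
    X = X[idx] + rng.standard_normal(N)
    # Weighting: evaluate the potential G_k and normalize.
    g = np.exp(beta * X)
    gamma_est *= g.mean()            # gamma^N_{0:n}(1) = prod eta^N(G_k)
    w = g / g.sum()

# Closed form: X_p = Z_0 + ... + Z_p with Z_j i.i.d. N(0,1), so
# E[exp(beta * sum_p X_p)] = exp(beta^2/2 * sum_j (n+1-j)^2).
exact = np.exp(0.5 * beta**2 * sum((n + 1 - j) ** 2 for j in range(n + 1)))
```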

SLIDE 9
  • 3. Particle methods to approximate the optimal importance law

Density estimation

◮ At the end of the algorithm we get ν^N_{0:n} ≈ ν*_{0:n}, but importance sampling requires a smooth approximation of ν*
◮ Kernel K of order 2:
  K ≥ 0 , ∫ K = 1 , ∫ x_i K(x) dx = 0 , ∫ |x_i x_j| K(x) dx < ∞
◮ Rescaled kernel K_h: K_h(x) = (1/h^d) K(x/h)
◮ Smoothing step:
  ν^N = Σ_i ω^i δ_{ξ^i} --(density estimation, convolution with K_h)--> ν^{N,h} = Σ_i ω^i K_h(· − ξ^i)
◮ Optimal choice of h => E‖ν^{N,h}_{0:n} − ν*_{0:n}‖_1 ≤ C N^{−4/(2(d+4))}

[Figure: weighted sample with weights ω^1, …, ω^5; kernels centered at the particles, and the resulting density estimate]
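The smoothing step can be sketched with a Gaussian kernel (a valid order-2 kernel) in d = 1. The particle locations and weights below are synthetic placeholders for the cloud produced by the algorithm.

```python
import numpy as np

# Smoothing a weighted particle cloud sum_i w_i * delta_{xi_i} into the
# mixture density nu^{N,h} = sum_i w_i * K_h(. - xi_i), with a Gaussian
# kernel and d = 1. Particles and weights here are synthetic.
rng = np.random.default_rng(5)
N, h = 500, 0.3
xi = rng.standard_normal(N)             # particle locations
w = rng.uniform(size=N); w /= w.sum()   # normalized weights

def kde(x):
    """Evaluate the smoothed density nu^{N,h} at the points x."""
    diff = (x[:, None] - xi[None, :]) / h
    K = np.exp(-0.5 * diff**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return (K * w[None, :]).sum(axis=1) / h          # K_h = K(./h)/h

def sample_kde(M):
    """Draw from nu^{N,h}: pick a particle by weight, add kernel noise."""
    idx = rng.choice(N, size=M, p=w)
    return xi[idx] + h * rng.standard_normal(M)

grid = np.linspace(-4, 4, 81)
dens = kde(grid)
draws = sample_kde(1_000)
# nu^{N,h} is a genuine density: nonnegative, mass (numerically) ~ 1.
```

Being able to both evaluate and sample `nu^{N,h}` is exactly what the importance-sampling step needs: draws come from `sample_kde`, and the likelihood ratio uses `kde`.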

SLIDE 10
  • 3. Particle methods to approximate the optimal importance law

Adaptive choice of the sequence (H_k)_{0≤k≤n}

[Cérou et al. 2006] [Homem-de-Mello & Rubinstein 2002] [Musso et al. 2001]

◮ In the case of European call pricing, H(x_{0:n}) = (V_n(x_n) − K)_+ and we set
  H_n(x_{0:n}) = (V_n(x_n) − K)_+ ,
  H_k(x_{0:k}) = max(V_k(x_k) − K_k, ε) for all 1 ≤ k ≤ n − 1 ,
where
  • ε > 0 ensures the positivity of H_k for 1 ≤ k ≤ n − 1
  • K_k is a random variable depending on (V^1_k = V_k(X^1_{0:k}), …, V^N_k = V_k(X^N_{0:k})) and on a parameter ρ ∈ (0, 1):
    K_k = V^([ρN])_k , where V^(1)_k ≤ … ≤ V^(N)_k
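A sketch of the adaptive threshold, assuming synthetic particle values V^i_k: K_k is the empirical ρ-quantile (the [ρN]-th order statistic) of the cloud, so at every step roughly a fraction ρ of the particles sits at the floor ε while the rest keep a payoff-like weight.

```python
import numpy as np

# Adaptive intermediate potential for the call example: K_k is the
# [rho*N]-th order statistic of the particle values V^i_k, and
# H_k = max(V_k - K_k, eps). Particle values here are synthetic.
rng = np.random.default_rng(6)
N, rho, eps = 200, 0.2, 1e-6

V = 50.0 * np.exp(0.2 * rng.standard_normal(N))  # hypothetical prices
V_sorted = np.sort(V)                  # V^(1) <= ... <= V^(N)
K_k = V_sorted[int(rho * N) - 1]       # K_k = V^([rho*N])

H_k = np.maximum(V - K_k, eps)         # intermediate potential values
# By construction about rho*N particles fall at or below K_k, so the
# selection step keeps roughly the top (1 - rho) fraction of the cloud.
```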

SLIDE 11
  • 3. Particle methods to approximate the optimal importance law

Variance of the estimator

◮ Importance sampling:
  µ(H) = E[H(X)] ≈ IS_{M,N} = (1/M) Σ_{i=1}^M H(X̃^i) (dµ_{0:n}/dν^{N,h}_{0:n})(X̃^i),
  where (X̃^1, …, X̃^M) are i.i.d. ∼ ν^{N,h}_{0:n}
◮ Variance bound:
  Var(IS_{M,N}) ≤ (µ(H)^2 / M) E[ ‖dν*/dν^{N,h}‖_∞ ‖ν* − ν^{N,h}‖_{L^1} ] ≤ C′ µ(H)^2 / (M N^{4/(2(d+4))})

SLIDE 12
  • 4. Simulation results

Some simulation results

◮ Pricing of a European call
  Maturity: 1 year ; volatility: 20%/year
  N = 200, M = 10N, ρ = 20%

  K  | Variance ratio (BS)
  60 | 45
  65 | 194
  70 | 690
  75 | 4862
  80 | 16190

SLIDE 13
  • 4. Simulation results

References

◮ [Del Moral & Garnier 05] Del Moral, P. and Garnier, J., Genealogical particle analysis of rare events, Annals of Applied Probability, 2005.

◮ [Musso et al. 01] Musso, C., Oudjane, N. and Le Gland, F., Improving regularized particle filters, in Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas and N. Gordon, editors, Statistics for Engineering and Information Science, 2001.

◮ [Cérou et al. 06] Cérou, F., Del Moral, P., Le Gland, F., Guyader, A., Lezaud, P. and Topart, H., Some recent improvements to importance splitting, Proceedings of the 6th International Workshop on Rare Event Simulation, Bamberg, October 9-10, 2006.

◮ [Homem-de-Mello & Rubinstein 02] Homem-de-Mello, T. and Rubinstein, R.Y., Estimation of rare event probabilities using cross-entropy, Proceedings of the Winter Simulation Conference, 2002.