Optimal Quantization for the Pricing of American Style Options




1. Optimal Quantization for the Pricing of American Style Options. Gilles Pagès, gpa@ccr.jussieu.fr, www.proba.jussieu.fr/pageperso/pages, Univ. PARIS 6 (Labo. Proba. et Modèles Aléatoires, UMR 7599). Linz, Special Semester on Stochastics, 18th November 2008.

2. 1 Introduction to optimal quadratic Vector Quantization
1.1 What is (quadratic) Vector Quantization?
⊲ Let $X : (\Omega,\mathcal{A},\mathbb{P}) \to (\mathbb{R}^d,\ \mathcal{B}(\mathbb{R})^{\otimes d})$, $|\,\cdot\,|$ the Euclidean norm, with $\mathbb{E}|X|^2 < +\infty$.
⊲ When $\mathbb{R}^d$ is replaced by a separable Hilbert space $(H, \langle\,\cdot\mid\cdot\,\rangle)$ ≡ Functional Quantization...
Example: if $H = L^2_T := L^2([0,T],dt)$, then $X = (X_t)_{t\in[0,T]}$ is a process.

3. Discretization of the state/path space $H = \mathbb{R}^d$ or $L^2([0,T],dt)$ using:
⊲ an $N$-quantizer (or $N$-codebook): $\Gamma := \{x_1,\dots,x_N\} \subset \mathbb{R}^d$;
⊲ discretization by $\Gamma$-quantization: $X \rightsquigarrow \widehat X^\Gamma := \mathrm{Proj}_\Gamma(X)$, a random vector $\widehat X^\Gamma : \Omega \to \Gamma = \{x_1,\dots,x_N\}$, where $\mathrm{Proj}_\Gamma$ denotes the projection on $\Gamma$ following the nearest neighbour rule.
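
As an illustration of the nearest neighbour rule, here is a minimal NumPy sketch of the projection $\mathrm{Proj}_\Gamma$ (the function name and array layout are illustrative choices, not from the talk):

```python
import numpy as np

def quantize(samples, codebook):
    """Voronoi (nearest-neighbour) projection of samples of X onto Gamma.

    samples  : (M, d) array of realizations of X
    codebook : (N, d) array, the N-quantizer Gamma = {x_1, ..., x_N}
    Returns the indices i(omega) of the nearest codeword and the quantized
    values hat-X^Gamma(omega) = x_{i(omega)}.
    """
    # squared Euclidean distances |X - x_i|^2, shape (M, N)
    d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)            # nearest-neighbour rule
    return idx, codebook[idx]          # hat-X^Gamma takes its values in Gamma

# Example: quantize N(0, I_2) samples on an arbitrary 10-point codebook
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 2))
Gamma = rng.standard_normal((10, 2))
idx, Xq = quantize(X, Gamma)
```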

4. Fig. 1: A 2-dimensional 10-quantizer $\Gamma = \{x_1,\dots,x_{10}\}$ and its Voronoi diagram.

5. 1.2 What do we know about $X - \widehat X^\Gamma$ and $\widehat X^\Gamma$?
⊲ Pointwise induced error: for every $\omega \in \Omega$,
$$|X(\omega) - \widehat X^\Gamma(\omega)| = \mathrm{dist}(X(\omega),\Gamma) = \min_{1\le i\le N}|X(\omega) - x_i|.$$
⊲ Mean quadratic induced error (or quadratic quantization error):
$$e_N(X,\Gamma) := \|X - \widehat X^\Gamma\|_2 = \Big(\mathbb{E}\min_{1\le i\le N}|X - x_i|^2\Big)^{1/2}.$$
⊲ Distribution of $\widehat X^\Gamma$: weights associated to each $x_i$:
$$\mathbb{P}(\widehat X^\Gamma = x_i) = \mathbb{P}(X \in C_i(\Gamma)),\qquad i = 1,\dots,N,$$
where $C_i(\Gamma)$ denotes the Voronoi cell of $x_i$ (w.r.t. $\Gamma$) defined by
$$C_i(\Gamma) := \Big\{\xi \in \mathbb{R}^d :\ |\xi - x_i| = \min_{1\le j\le N}|\xi - x_j|\Big\}.$$
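
Both the quadratic quantization error $e_N(X,\Gamma)$ and the weights $\mathbb{P}(\widehat X^\Gamma = x_i)$ can be estimated by plain Monte Carlo. A self-contained sketch (names and sample sizes are illustrative; this is not the method used to produce the optimized grids of Section 2):

```python
import numpy as np

def quantization_stats(samples, codebook):
    """Monte Carlo estimates of e_N(X, Gamma) and of the Voronoi weights."""
    d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)                            # nearest-neighbour rule
    err2 = d2[np.arange(len(samples)), idx]            # |X - hat-X^Gamma|^2
    e_N = np.sqrt(err2.mean())                         # quadratic quantization error
    weights = np.bincount(idx, minlength=len(codebook)) / len(samples)
    return e_N, weights                                # weights[i] ~ P(X in C_i(Gamma))

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 2))                  # X ~ N(0, I_2)
Gamma = rng.standard_normal((10, 2))                   # an arbitrary 10-quantizer
e_N, w = quantization_stats(X, Gamma)
print(e_N, w.sum())                                    # the weights sum to 1
```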

6. Fig. 2: Two $N$-quantizers related to $\mathcal{N}(0; I_2)$ of size $N = 500$. Which one is the best?

7. 1.3 Optimal (Quadratic) Quantization
The quadratic distortion (squared quadratic quantization error)
$$D^X_N : (\mathbb{R}^d)^N \to \mathbb{R}_+,\qquad \Gamma = (x_1,\dots,x_N) \longmapsto \|X - \widehat X^\Gamma\|_2^2 = \mathbb{E}\min_{1\le i\le N}|X - x_i|^2$$
is continuous [the quantization error is Lipschitz continuous!] for the product topology on $(\mathbb{R}^d)^N$. One derives (Cuesta-Albertos & Matran (88), Pärna (90), P. (93)) by induction on $N$ that $D^X_N$ reaches a minimum at an (optimal) quantizer $\Gamma^{(N,*)}$ of full size $N$ (provided $\mathrm{card}(\mathrm{supp}\,\mathbb{P}_X) \ge N$). One derives
$$e_N(X,\mathbb{R}^d) := \inf\big\{\|X - \widehat X^\Gamma\|_2,\ \mathrm{card}(\Gamma) \le N,\ \Gamma \subset \mathbb{R}^d\big\} = \|X - \widehat X^{\Gamma^{(N,*)}}\|_2$$

8. $$\|X - \widehat X^{\Gamma^{(N,*)}}\|_2 = \min\big\{\|X - Y\|_2,\ Y : \Omega \to \mathbb{R}^d,\ \mathrm{card}(Y(\Omega)) \le N\big\}.$$
Example ($N = 1$): optimal 1-quantizer $\Gamma = \{\mathbb{E}X\}$ and $e_1(X,\mathbb{R}^d) = \|X - \mathbb{E}X\|_2$.
1.4 Extensions to the $L^r(\mathbb{P})$-quantization of random variables, $0 < r \le \infty$
⊲ $X : (\Omega,\mathcal{A},\mathbb{P}) \to (\mathbb{R}^d,|\,\cdot\,|)$ (or a normed space $E$), $\mathbb{E}|X|^r < +\infty$ ($0 < r < +\infty$).
⊲ The $N$-level $(L^r(\mathbb{P}),|\,\cdot\,|)$-quantization problem for $X \in L^r_E(\mathbb{P})$:
$$e_{r,N}(X,E) := \inf\big\{\|X - \widehat X^\Gamma\|_r,\ \Gamma \subset E,\ \mathrm{card}(\Gamma) \le N\big\}.$$
Example ($N = 1$, $r = 1$): optimal 1-quantizer $\Gamma = \{\mathrm{med}(X)\}$ and $e_{1,1}(X,E) = \|X - \mathrm{med}(X)\|_1$.
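
A quick numerical illustration of the two $N = 1$ examples above: among one-point quantizers $\{c\}$, the mean minimizes the $L^2$-criterion and the median the $L^1$-criterion. The exponential test distribution and the grid search below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)           # any test distribution with finite moments

grid = np.linspace(0.0, 5.0, 2001)          # candidate one-point quantizers {c}
l2 = [np.sqrt(np.mean((x - c) ** 2)) for c in grid]
l1 = [np.mean(np.abs(x - c)) for c in grid]

print(grid[np.argmin(l2)], x.mean())        # L2-optimal c is close to E X   (= 1 here)
print(grid[np.argmin(l1)], np.median(x))    # L1-optimal c is close to med(X) (= ln 2 here)
```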

9. ⊲ Other examples:
– non-Euclidean norms on $E = \mathbb{R}^d$ like $\ell^p$-norms, $1 \le p \le \infty$, etc.;
– dispersion of compactly supported distributions: $r = \infty$.

10. 1.5 Stationary Quantizers
⊲ The distortion $D^X_N$ is $|\,\cdot\,|$-differentiable at $N$-quantizers $\Gamma \in (\mathbb{R}^d)^N$ of full size:
$$\nabla D^X_N(\Gamma) = 2\Big(\int_{C_i(\Gamma)} (x_i - \xi)\,\mathbb{P}_X(d\xi)\Big)_{1\le i\le N} = 2\Big(\mathbb{E}\big[(x_i - X)\,\mathbf{1}_{\{\widehat X^\Gamma = x_i\}}\big]\Big)_{1\le i\le N}.$$
⊲ Definition: if $\Gamma \in (\mathbb{R}^d)^N$ is a zero of $\nabla D^X_N$, then $\Gamma$ is called a stationary quantizer (or self-consistent quantizer). Then
$$\nabla D^X_N(\Gamma) = 0 \iff \widehat X^\Gamma = \mathbb{E}\big(X \mid \widehat X^\Gamma\big)$$
since $\sigma(\widehat X^\Gamma) = \sigma\big(\{X \in C_i(\Gamma)\},\ i = 1,\dots,N\big)$.
⊲ An optimal quadratic quantizer $\Gamma$ is stationary.
First by-product: $\mathbb{E}X = \mathbb{E}\widehat X^\Gamma$.
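
The fixed-point characterization $\widehat X^\Gamma = \mathbb{E}(X \mid \widehat X^\Gamma)$ suggests a Lloyd-type iteration: replace each codeword by the conditional mean of $X$ over its Voronoi cell. Here is a hedged Monte Carlo sketch of that iteration (essentially Lloyd's method on a large sample); it merely illustrates the stationarity condition and is not the procedure used to build the downloadable grids mentioned in Section 2:

```python
import numpy as np

def lloyd(samples, codebook, n_iter=50):
    """Monte Carlo Lloyd fixed-point iteration towards a stationary quantizer.

    At each step every codeword x_i is replaced by the empirical mean of the
    samples falling into its Voronoi cell, i.e. an estimate of E(X | X in C_i).
    """
    codebook = codebook.copy()
    for _ in range(n_iter):
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)                     # Voronoi cells
        for i in range(len(codebook)):
            cell = samples[idx == i]
            if len(cell):                           # leave empty cells unchanged
                codebook[i] = cell.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
X = rng.standard_normal((100_000, 2))               # X ~ N(0, I_2)
Gamma = lloyd(X, rng.standard_normal((20, 2)))      # ~ stationary 20-quantizer

# First by-product check: E hat-X^Gamma = E X (~ 0), using the cell weights
idx = ((X[:, None, :] - Gamma[None, :, :]) ** 2).sum(-1).argmin(1)
w = np.bincount(idx, minlength=len(Gamma)) / len(X)
print(w @ Gamma)
```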

11. 1.6 Numerical integration and conditional expectation (I): cubature formulae
Let $F : \mathbb{R}^d \to \mathbb{R}$ be a functional and let $\Gamma \subset \mathbb{R}^d$ be an $N$-quantizer.
⊲ If $F$ is Lipschitz continuous, then for every $r \in [1,+\infty)$,
$$\big\|\mathbb{E}\big(F(X)\mid \widehat X^\Gamma\big) - F(\widehat X^\Gamma)\big\|_r \le [F]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_r.$$
⊲ If $F$ is Lipschitz continuous, then (with $r = 1$)
$$\big|\mathbb{E}F(X) - \mathbb{E}F(\widehat X^\Gamma)\big| \le [F]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_1 \le [F]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_2.$$
Hence the cubature formula, since
$$\mathbb{E}F(\widehat X^\Gamma) = \sum_{i=1}^N F(x_i)\,\mathbb{P}(\widehat X^\Gamma = x_i).$$
In fact
$$\sup_{[F]_{\mathrm{Lip}}\le 1}\big|\mathbb{E}F(X) - \mathbb{E}F(\widehat X^\Gamma)\big| = \|X - \widehat X^\Gamma\|_1.$$
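
A sketch of the resulting cubature formula $\mathbb{E}F(\widehat X^\Gamma) = \sum_i F(x_i)\,\mathbb{P}(\widehat X^\Gamma = x_i)$, with Monte Carlo weights and an arbitrary Lipschitz test functional $F$ (the crude sub-sample quantizer and all names below are illustrative, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100_000, 2))                      # X ~ N(0, I_2)

Gamma = X[:50]                                             # a crude 50-point quantizer
idx = ((X[:, None, :] - Gamma[None, :, :]) ** 2).sum(-1).argmin(1)
w = np.bincount(idx, minlength=len(Gamma)) / len(X)        # P(hat-X^Gamma = x_i)

F = lambda x: np.linalg.norm(x)                            # a Lipschitz functional, [F]_Lip = 1
cubature = sum(wi * F(xi) for xi, wi in zip(Gamma, w))     # sum_i F(x_i) P(hat-X^Gamma = x_i)
print(cubature, np.linalg.norm(X, axis=1).mean())          # vs a direct Monte Carlo E F(X)
```

The gap between the two printed values is controlled by $[F]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_1$, as stated above.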

12. ⊲ Assume $F$ is $\mathcal{C}^1$ on $\mathbb{R}^d$, $DF$ is Lipschitz continuous and the quantizer $\Gamma$ is stationary. A Taylor expansion yields
$$F(X) = F(\widehat X^\Gamma) + DF(\widehat X^\Gamma).(X - \widehat X^\Gamma) + \big(DF(\zeta) - DF(\widehat X^\Gamma)\big).(X - \widehat X^\Gamma),\qquad \zeta \in (X,\widehat X^\Gamma),$$
so that
$$\Big|\mathbb{E}\big(F(X)\mid\widehat X^\Gamma\big) - F(\widehat X^\Gamma) - \mathbb{E}\big(DF(\widehat X^\Gamma).(X - \widehat X^\Gamma)\mid\widehat X^\Gamma\big)\Big| \le [DF]_{\mathrm{Lip}}\,\mathbb{E}\big(|X - \widehat X^\Gamma|^2 \mid \widehat X^\Gamma\big).$$

13. ...so that
$$\Big|\mathbb{E}\big(F(X)\mid\widehat X^\Gamma\big) - F(\widehat X^\Gamma) - \underbrace{\mathbb{E}\big(DF(\widehat X^\Gamma).(X - \widehat X^\Gamma)\mid\widehat X^\Gamma\big)}_{=0}\Big| \le [DF]_{\mathrm{Lip}}\,\mathbb{E}\big(|X - \widehat X^\Gamma|^2 \mid \widehat X^\Gamma\big)$$
since
$$\mathbb{E}\big(DF(\widehat X^\Gamma).(X - \widehat X^\Gamma)\mid\widehat X^\Gamma\big) = DF(\widehat X^\Gamma).\,\mathbb{E}\big(X - \widehat X^\Gamma \mid \widehat X^\Gamma\big) = 0,$$
so that
$$\Big|\mathbb{E}\big(F(X)\mid\widehat X^\Gamma\big) - F(\widehat X^\Gamma)\Big| \le [DF]_{\mathrm{Lip}}\,\mathbb{E}\big(|X - \widehat X^\Gamma|^2 \mid \widehat X^\Gamma\big).$$

14. ⊲ As a consequence, for conditional expectation:
$$\big\|\mathbb{E}\big(F(X)\mid\widehat X^\Gamma\big) - F(\widehat X^\Gamma)\big\|_r \le [DF]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_{2r}^2.$$
⊲ Hence the cubature formula for numerical integration:
$$\big|\mathbb{E}F(X) - \mathbb{E}F(\widehat X^\Gamma)\big| \le [DF]_{\mathrm{Lip}}\,\|X - \widehat X^\Gamma\|_2^2.$$

15. 1.7 Quantized approximation of $\mathbb{E}(F(X)\mid Y)$
⊲ Let $X, Y : (\Omega,\mathcal{A},\mathbb{P}) \to \mathbb{R}^d$ and let $F : \mathbb{R}^d \to \mathbb{R}$ be a Borel functional. Let $\widehat X = \widehat X^\Gamma$ and $\widehat Y = \widehat Y^{\Gamma'}$ be (Voronoi) quantizations.
⊲ Natural idea: $\mathbb{E}(F(X)\mid Y) \approx \mathbb{E}(F(\widehat X)\mid \widehat Y)$. To what extent?
⊲ In a Feller Markovian framework, $\mathbb{E}(F(X)\mid Y) = \varphi_F(Y)$ and regularity of $F$ yields regularity of $\varphi_F$. Write
$$\mathbb{E}(F(X)\mid Y) - \mathbb{E}(F(\widehat X)\mid \widehat Y) = \mathbb{E}(F(X)\mid Y) - \mathbb{E}(F(X)\mid \widehat Y) + \mathbb{E}\big(F(X) - F(\widehat X)\mid \widehat Y\big),$$
so that, using that conditional expectation is an $L^2$-contraction and that $\widehat Y$ is $\sigma(Y)$-measurable,
$$\big\|\mathbb{E}(F(X)\mid Y) - \mathbb{E}(F(\widehat X)\mid \widehat Y)\big\|_2 \le \big\|\varphi_F(Y) - \mathbb{E}(F(X)\mid \widehat Y)\big\|_2 + \big\|F(X) - F(\widehat X)\big\|_2$$
$$= \big\|\varphi_F(Y) - \mathbb{E}(\varphi_F(Y)\mid \widehat Y)\big\|_2 + \big\|F(X) - F(\widehat X)\big\|_2 \le \big\|\varphi_F(Y) - \varphi_F(\widehat Y)\big\|_2 + \big\|F(X) - F(\widehat X)\big\|_2.$$

16. The last inequality follows from the very definition of conditional expectation given $\widehat Y$ (best $L^2$-approximation by $\sigma(\widehat Y)$-measurable random variables). Hence
$$\big\|\mathbb{E}(F(X)\mid Y) - \mathbb{E}(F(\widehat X)\mid \widehat Y)\big\|_2 \le [F]_{\mathrm{Lip}}\,\|X - \widehat X\|_2 + [\varphi_F]_{\mathrm{Lip}}\,\|Y - \widehat Y\|_2.$$
⊲ Non-quadratic case: the above inequality remains valid provided $[\varphi_F]_{\mathrm{Lip}}$ is replaced by $2[\varphi_F]_{\mathrm{Lip}}$.
⊲ These are the ingredients for the proofs of both theorems for
– Bermuda options (orders 0 & 1),
– swing options.
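
In practice $\mathbb{E}(F(\widehat X)\mid \widehat Y)$ reduces to a finite weighted sum over the two grids via the joint weights $p_{ij} = \mathbb{P}(\widehat X = x_i,\ \widehat Y = y_j)$: $\mathbb{E}(F(\widehat X)\mid \widehat Y = y_j) = \sum_i p_{ij} F(x_i) / \sum_i p_{ij}$. A minimal Monte Carlo sketch in dimension 1, with an arbitrary Gaussian pair $(Y, X)$, crude quantile grids and a call-like payoff, all purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 200_000
Y = rng.standard_normal(M)                        # e.g. the state at time t_k
X = Y + 0.5 * rng.standard_normal(M)              # e.g. the state at time t_{k+1}

def nn(z, grid):                                  # 1-d nearest-neighbour projection
    return np.abs(z[:, None] - grid[None, :]).argmin(axis=1)

gY = np.quantile(Y, (np.arange(20) + 0.5) / 20)   # crude 20-point quantizer of Y
gX = np.quantile(X, (np.arange(20) + 0.5) / 20)   # crude 20-point quantizer of X
iY, iX = nn(Y, gY), nn(X, gX)

F = lambda x: np.maximum(x - 1.0, 0.0)            # a Lipschitz, call-like payoff

# joint weights p_ij = P(hat-X = x_i, hat-Y = y_j), then
# E(F(hat-X) | hat-Y = y_j) = sum_i p_ij F(x_i) / sum_i p_ij
p = np.zeros((len(gX), len(gY)))
np.add.at(p, (iX, iY), 1.0)
p /= M
cond = (F(gX)[:, None] * p).sum(axis=0) / p.sum(axis=0)
print(cond[:5])                                   # quantized version of phi_F(y_j)
```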

17. 1.8 Vector Quantization rate ($H = \mathbb{R}^d$)
⊲ Theorem.
(a) Asymptotic (Zador, Kiefer, Bucklew & Wise, Graf & Luschgy, from 1963 to 2000). Let $X \in L^{r+}(\mathbb{P})$ (i.e. $\mathbb{E}|X|^{r+\delta} < +\infty$ for some $\delta > 0$) and $\mathbb{P}_X(d\xi) = \varphi(\xi)\,d\xi + \nu(d\xi)$, $\nu \perp d\xi$. Then, as $N \to +\infty$,
$$e_{N,r}(X,\mathbb{R}^d) \sim \widetilde J_{r,d} \times \Big(\int_{\mathbb{R}^d} \varphi^{\frac{d}{d+r}}(u)\,du\Big)^{\frac1d + \frac1r} \times N^{-\frac1d}.$$
(b) Non-asymptotic (Pierce Lemma, Luschgy-P. (2005)). Let $d \ge 1$ and $r,\delta > 0$. There exists a universal constant $C_{d,r,\delta} \in (0,\infty)$ such that
$$\forall\, N \ge 1,\qquad e_{N,r}(X,\mathbb{R}^d) \le C_{d,r,\delta}\,\|X\|_{r+\delta}\,N^{-\frac1d}.$$
⊲ The true value of $\widetilde J_{r,d}$ is unknown for $d \ge 3$, but (Euclidean norm)
$$\widetilde J_{r,d} \sim \sqrt{\frac{d}{2\pi e}} \approx \sqrt{\frac{d}{17.08}}\qquad \text{as } d \to +\infty.$$

18. Conclusions:
• For every $N$, the same rate as with "naive" product grids for the $U([0,1]^d)$ distribution with $N = m^d$ points, but with the best constant.
• No escape from "the curse of dimensionality"...
• Equalization of the local inertia (see Comm. in Statist., S. Delattre, J.-C. Fort, G. P., 2004).
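
A small numerical check of the $N^{-1/d}$ rate of the theorem (and of the product-grid comparison above) for the $U([0,1]^d)$ distribution: on the naive midpoint product grid with $N = m^d$ points the nearest-neighbour projection acts coordinate-wise, and Monte Carlo recovers the closed-form error $\sqrt{d/12}\,N^{-1/d}$. This only illustrates the rate; optimal grids improve the constant.

```python
import numpy as np

rng = np.random.default_rng(4)

def product_grid_error(d, m, n_samples=200_000):
    """L2 quantization error of U([0,1]^d) on the naive m^d midpoint product grid."""
    U = rng.random((n_samples, d))
    Uq = (np.floor(U * m) + 0.5) / m           # coordinate-wise nearest midpoint
    return np.sqrt(((U - Uq) ** 2).sum(axis=1).mean())

for d in (1, 2, 3):
    for m in (5, 10, 20):
        N = m ** d
        e = product_grid_error(d, m)
        # closed-form value sqrt(d/12) * N^(-1/d) for the midpoint product grid
        print(d, N, round(e, 4), round(np.sqrt(d / 12) * N ** (-1 / d), 4))
```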

19. 2 Numerical optimization of the grids: Gaussian and non-Gaussian vectors
2.1 The case of the normal distribution $\mathcal{N}(0; I_d)$ on $\mathbb{R}^d$
⊲ As concerns the Gaussian distribution $\mathcal{N}(0; I_d)$: already quantized for you (see J. Printems-G. P., MCMA 2003).

20. ⊲ For $d = 1$ up to $10$ and $1 \le N \le 5\,000$, new grid files are available (including $L^1$- & $L^2$-distortion, local $L^1$- & $L^2$-pseudo-inertia, etc.).
Download at our website: www.quantize.maths-fi.com

21. 2.2 The 1-dimensional case...
⊲ Theorem (Kiefer (82), Lloyd (82), Lamberton-P. (90)). $H = \mathbb{R}$. If $\mathbb{P}_X(d\xi) = \varphi(\xi)\,d\xi$ with $\log\varphi$ concave, then there is exactly one stationary quantizer. Hence
$$\forall\, N \ge 1,\qquad \mathrm{argmin}\, D^X_N = \{\Gamma^{(N)}\}.$$
Examples: the normal distribution, the gamma distributions, etc.
⊲ Voronoi cells: $C_i(\Gamma) = [x_{i-\frac12}, x_{i+\frac12})$ with $x_{i+\frac12} = \frac{x_{i+1} + x_i}{2}$.
⊲ Gradient:
$$\nabla D^X_N(\Gamma) = 2\Big(\int_{x_{i-\frac12}}^{x_{i+\frac12}} (x_i - \xi)\,\varphi(\xi)\,d\xi\Big)_{1\le i\le N}.$$
⊲ Hessian: $D^2(D^X_N)(\Gamma)$ only involves $\int_0^x \varphi(\xi)\,d\xi$ and $\int_0^x \xi\,\varphi(\xi)\,d\xi$.
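
In this 1-dimensional log-concave setting the cells are intervals and the integrals above are available in closed form for the normal distribution, so the (unique) stationary quantizer can be computed deterministically. Below is a hedged sketch using the fixed-point (Lloyd) iteration $x_i \leftarrow \mathbb{E}(X \mid X \in C_i(\Gamma))$ for $\mathcal{N}(0,1)$; the Newton method on $\nabla D^X_N$ with the tridiagonal Hessian alluded to on the slide converges much faster but is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def lloyd_1d_gaussian(N, n_iter=500):
    """Stationary N-quantizer of N(0,1) (optimal here, the density being log-concave).

    Fixed point x_i <- E(X | X in C_i(Gamma)), using the exact Gaussian cell
    moments:  E(X 1_{a<X<b}) = pdf(a) - pdf(b),  P(a < X < b) = cdf(b) - cdf(a).
    """
    x = norm.ppf((np.arange(N) + 0.5) / N)            # starting grid (quantiles)
    for _ in range(n_iter):
        mid = 0.5 * (x[:-1] + x[1:])                  # Voronoi cell boundaries x_{i+1/2}
        a = np.concatenate(([-np.inf], mid))
        b = np.concatenate((mid, [np.inf]))
        x = (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))
    mid = 0.5 * (x[:-1] + x[1:])
    weights = np.diff(norm.cdf(np.concatenate(([-np.inf], mid, [np.inf]))))
    return x, weights                                 # grid and P(hat-X = x_i)

Gamma, w = lloyd_1d_gaussian(10)
print(np.round(Gamma, 4))
print(w @ Gamma)                                      # stationarity by-product: ~ E X = 0
```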
