Optimal Vector Quantization: from signal processing to clustering


Optimal Vector Quantization: from signal processing to clustering and numerical probability. Gilles Pagès (LPMA-UPMC), CEMRACS 2017, CIRM, Luminy, 19th July 2017.


  1. Introduction to Optimal Quantization(s): the $L^p$-mean quantization error.
⊲ What about "Optimal"? Is there an optimal way to select the grid/$N$-quantizer to classify the data? In data analysis: optimal clustering?
⊲ The $L^p$-mean quantization error. Definition: the $L^p$-mean quantization error induced by a grid $\Gamma \subset \mathbb{R}^d$ with size $|\Gamma| \le N$, $N \in \mathbb{N}$, is
$$e_p(X; \Gamma) = \big\| \mathrm{dist}(X, \Gamma) \big\|_p = \Big\| \min_{x \in \Gamma} |X - x| \Big\|_p \qquad (1)$$
(it only depends on the distribution $\mu = \mathbb{P}_X$ of $X$).
⊲ The optimal $L^p$-mean quantization problem consists in minimizing (1) over all grids of size $|\Gamma| \le N$. We define the $L^p$-optimal mean quantization error at level $N$ as
$$e_{p,N}(X) := \inf\Big\{ \big\| \min_{x \in \Gamma} |X - x| \big\|_p \;:\; \Gamma \subset \mathbb{R}^d,\ |\Gamma| \le N \Big\}.$$
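As a quick illustration (not from the slides), the $L^p$-mean quantization error of a given grid can be estimated by Monte Carlo, replacing the expectation by an average of nearest-neighbour distances over i.i.d. samples. A minimal sketch, assuming numpy is available; the function name and the random (non-optimized) grid are illustrative:

```python
import numpy as np

def quantization_error(samples, grid, p=2):
    """Monte Carlo estimate of e_p(X; Gamma) = || min_{x in Gamma} |X - x| ||_p."""
    # Distance from each sample to its nearest codepoint (Voronoi projection)
    diffs = samples[:, None, :] - grid[None, :, :]   # shape (M, N, d)
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
    return np.mean(nearest ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.standard_normal((50_000, 2))        # samples of X ~ N(0, I_2)
Gamma = rng.standard_normal((50, 2))        # a size-50 grid (not optimized)
print(quantization_error(X, Gamma, p=2))
```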

  5. Voronoi Quantization.
⊲ Noting that
$$|X(\omega) - \Xi(\omega)| \ge \mathrm{dist}\big(X(\omega), \Xi(\Omega)\big) = |X(\omega) - \widehat{X}^{\Xi(\Omega)}(\omega)|,$$
one derives the more general optimality result
$$e_{p,N}(X) = \inf\big\{ \|X - \Xi\|_p \;:\; \Xi \in L^p(\mathbb{R}^d),\ \mathrm{Card}(\Xi(\Omega)) \le N \big\} = W_p(\mathbb{P}_X, \mathcal{P}_N).$$
⇒ Voronoi quantization $\widehat{X}^\Gamma$ provides an optimal $L^p$-mean discretization of $X$ by $\Gamma$-valued random variables, for every $p \in (0, +\infty)$.
⇒ The nearest-neighbour projection is the coding rule which yields the smallest $L^p$-mean approximation error for $X$.
Theorem (Kieffer, Cuesta-Albertos, (P.), Graf-Luschgy).
(a) Let $p \in (0, +\infty)$, $X \in L^p$. For every level $N \ge 1$ there exists (at least) one $L^p$-optimal quantization grid $\Gamma^{*,N}$ at level $N$, and $N \mapsto e_{p,N}(X) \downarrow 0$ (it vanishes if $\mathrm{supp}(X)$ is finite, and decreases strictly to 0 otherwise).
(b) If $p = 2$, $\mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{N,*}}\big) = \widehat{X}^{\Gamma^{N,*}}$ a.s. (stationarity/self-consistency).

  6. Sketch of proof ($p \ge 1$).
(a) We proceed by induction on $N$.
$N = 1$: $\xi \mapsto \|X - \xi\|_p$ is convex and coercive and attains its minimum at an $L^p$-median.
$N \Rightarrow N+1$: let $\xi \in \mathrm{supp}(X) \setminus \Gamma^{*,N}$, where $\Gamma^{*,N}$ is $L^p$-optimal at level $N$. Then
$$\ell^*_{N+1} := e_p(X, \Gamma^{*,N} \cup \{\xi\})^p < e_p(X, \Gamma^{*,N})^p = e_{p,N}(X)^p,$$
so that
$$K^*_{N+1} = \big\{ \Gamma \subset \mathbb{R}^d : |\Gamma| = N+1,\ e_p(X, \Gamma)^p \le \ell^*_{N+1} \big\} \neq \emptyset \ \text{and is closed}\ldots$$
. . . and bounded (send one component or more to infinity and use Fatou's lemma). Then $\Gamma \mapsto e_p(X, \Gamma)$ attains a global minimum over $K^*_{N+1}$.
(b) The random variable $\widehat{X}^{\Gamma^{N,*}} - \mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{N,*}}\big)$ is $L^2$-orthogonal to $\sigma\big(\widehat{X}^{\Gamma^{N,*}}\big)$. Hence
$$\big\|X - \widehat{X}^{\Gamma^{N,*}}\big\|_2^2 = \big\|X - \mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{N,*}}\big)\big\|_2^2 + \big\|\widehat{X}^{\Gamma^{N,*}} - \mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{N,*}}\big)\big\|_2^2.$$
Hence, uniqueness of the conditional expectation yields $\mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{N,*}}\big) = \widehat{X}^{\Gamma^{N,*}}$ a.s.

  7. Applications.
Signal transmission: let $\Gamma^{*,N} = \{x_1^*, \ldots, x_N^*\}$.
Pre-processing I: re-order the labels $i$ so that $i \mapsto p_i^* := \mathbb{P}\big(\widehat{X}^{\Gamma^{*,N}} = x_i^*\big)$ is decreasing.
Pre-processing II: encode $i \mapsto \mathrm{Code}(i)$, see [CT06]. $A$, who emits, and $B$, who receives, both share the one-to-one bible $x_i^* \leftrightarrow \mathrm{Code}(i)$. $X$ is encoded, $\mathrm{Code}(i)$ is transmitted, then decoded.
Naive encoding: dyadic coding of the labels $i$:
$$\text{Complexity} = \sum_{i=1}^N p_i^*\big(1 + \lfloor \log_2 i \rfloor\big) \le 1 + \lfloor \log_2 N \rfloor.$$
Uniform signal: if $X \sim U([0,1])$ then $\Gamma^{*,N} = \big\{ \tfrac{2i-1}{2N},\ i = 1{:}N \big\}$ and $p_i^* = \tfrac1N$, so that
$$\text{Complexity} = 1 + \frac1N \sum_{i=1}^N \lfloor \log_2 i \rfloor \sim \log_2(N/e).$$
On the way to Shannon's source coding theorem (see e.g. [Dembo-Zeitouni])...
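A small numerical check of the naive-encoding complexity for the uniform signal (a sketch, not from the slides; assumes numpy): with $p_i^* = 1/N$, the average code length $1 + \frac1N \sum_i \lfloor \log_2 i \rfloor$ indeed grows like $\log_2(N/e)$.

```python
import numpy as np

# Average dyadic code length for X ~ U([0,1]) with the optimal grid of size N:
# Complexity = sum_i p_i^* (1 + floor(log2 i)) with p_i^* = 1/N.
N = 1_000_000
i = np.arange(1, N + 1)
complexity = np.mean(1.0 + np.floor(np.log2(i)))
print(complexity, np.log2(N / np.e))   # same log_2(N) leading order
```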

  9. Quantization for (Probability and) Numerics: what for?
Cubature formulas for the computation of expectations:
$$\mathbb{E}\,F(X) \approx \mathbb{E}\,F\big(\widehat{X}^{\Gamma^{*,N}}\big) = \sum_{i=1}^N p_i^*\, F(x_i^*).$$
What is needed? The distribution $(x_i^*, p_i^*)_{i=1,\ldots,N}$ of $\widehat{X}^{\Gamma^{*,N}}$.
How to perform grid optimization? Lloyd I (Lloyd, 1982) and CLVQ (MacQueen; more further on).
Conditional expectation approximation:
$$\mathbb{E}\big(F(X) \mid Y\big) \approx \mathbb{E}\big(F(\widehat{X}^{\Gamma_X}) \mid \widehat{Y}^{\Gamma_Y}\big).$$
Clustering (unsupervised learning): what for?
Unsupervised classification (MacQueen, 1957; up to improvements like self-organizing Kohonen maps, Cottrell-Fort-P. 1998, among others).
How to perform? Lloyd I (Lloyd, 1982) and CLVQ (MacQueen, 1967; more further on).
A typical problem in progress:
Distribution $\mu_n(\omega, d\xi) = \frac1n \sum_{k=1}^n \delta_{\xi_k(\omega)}$, with $(\xi_k)_{k \ge 1}$ i.i.d.
$L^2$-optimal quantization grid $\Gamma_n^*(\omega)$ at a fixed level $N \ge 1$.
One has $\lim_{n \to +\infty} \Gamma_n^*(\omega) = \Gamma^{*,N}$, the optimal grid at level $N$ for $\mu = \mathcal{L}(\xi_1)$. At which rate?

  12. Extension and...
⊲ Generalization to infinite dimension. Still true in: a separable Hilbert space, and even in a reflexive Banach space $E$ (Cuesta-Albertos, PTRF, 1997) for a tight r.v., since
$$(x_1, \ldots, x_N) \mapsto \Big\| \min_{1 \le i \le N} |X - x_i|_E \Big\|_p$$
is l.s.c. for the product weak topology on $E^N$; or even in an $L^1$ space (Graf-Luschgy-P., J. of Approx., 2005), using the $\tau$-topology... but not in $\big(\mathcal{C}([0,T], \mathbb{R}), \|\cdot\|_{\sup}\big)$.
⊲ Convergence to 0: $e_{p,N}(X) \downarrow 0$ as $N \to +\infty$. Let $(z_n)_{n \ge 1}$ be an everywhere dense sequence in $\mathbb{R}^d$; then
$$e_{p,N}(X)^p \le e_p\big(X, \{z_1, \ldots, z_N\}\big)^p = \mathbb{E}\Big[\min_{1 \le i \le N} |X - z_i|^p\Big] \downarrow 0 \quad \text{as } N \to +\infty,$$
by the Lebesgue dominated convergence theorem.
⊲ But... at which rate? At least for finite-dimensional vector spaces.

  13. Quantization rates: Zador's theorem.
Theorem (Zador's theorem, from 1963 (PhD) to 2000).
(a) Sharp asymptotics (Zador, Kieffer, Bucklew & Wise, Graf & Luschgy in [GL00]): let $X \in L^{p+}(\mathbb{R}^d)$ with distribution $\mathbb{P}_X = \varphi \cdot \lambda_d + \nu$, $\nu \perp \lambda_d$. Then
$$\lim_{N \to \infty} N^{\frac1d}\, e_{p,N}(X) = Q_{p,|\cdot|} \cdot \Big( \int_{\mathbb{R}^d} \varphi^{\frac{d}{d+p}}\, d\lambda_d \Big)^{\frac{d+p}{dp}},$$
where $Q_{p,|\cdot|} = \inf_{N \ge 1} N^{\frac1d}\, e_{p,N}\big(U([0,1]^d)\big)$.
(b) Non-asymptotic (Pierce; Graf & Luschgy in [GL00]; Luschgy-P. [LP08]): let $p' > p$. There exists $C_{p,p',d} \in (0, +\infty)$ such that, for every $\mathbb{R}^d$-valued r.v. $X$,
$$\forall N \ge 1, \quad e_{p,N}(X) \le C_{p,p',d}\, \sigma_{p'}(X)\, N^{-\frac1d}.$$
Remarks. • $\sigma_{p'}(X) := \inf_{a \in \mathbb{R}^d} \|X - a\|_{p'} \le +\infty$ is the $L^{p'}$-(pseudo-)standard deviation. • The rate $N^{-1/d}$ is known as the curse of dimensionality.

  14. Theorem (Zador's theorem, 2016 version).
(a) Sharp asymptotics (Zador, Kieffer, Bucklew & Wise, Graf & Luschgy in [GL00], Luschgy-P., 2016): let $X \in L^p(\mathbb{R}^d)$ with distribution $\mathbb{P}_X = \varphi \cdot \lambda_d + \nu$, $\nu \perp \lambda_d$, such that $\varphi$ is essentially radial and non-increasing [e.g. $\varphi(\xi) \asymp g(|\xi|_0)$ with $g \downarrow$ on $(a_0, +\infty)$, etc.]. Then
$$\lim_{N \to \infty} N^{\frac1d}\, e_{p,N}(X) = Q_{p,|\cdot|} \cdot \Big( \int_{\mathbb{R}^d} \varphi^{\frac{d}{d+p}}\, d\lambda_d \Big)^{\frac{d+p}{dp}},$$
where $Q_{p,|\cdot|} = \inf_N N^{\frac1d}\, e_{p,N}\big(U([0,1]^d)\big)$.
(b) Non-asymptotic (Pierce; Graf & Luschgy in [GL00]; Luschgy-P. [LP08]): let $p' > p$. There exists $C_{p,p',d} \in (0, +\infty)$ such that, for every $\mathbb{R}^d$-valued r.v. $X$,
$$\forall N \ge 1, \quad e_{p,N}(X) \le C_{p,p',d}\, \sigma_{p'}(X)\, N^{-\frac1d}.$$

  16. Numerical computation of quantizers.
⊲ Stationary quantizers: optimal grids $\Gamma^*$ at level $N$ satisfy
$$\widehat{X}^{\Gamma^*} = \mathbb{E}\big(X \mid \widehat{X}^{\Gamma^*}\big),$$
or equivalently, if $\Gamma^* = \{x_1^*, \ldots, x_N^*\}$,
$$x_i^* = \mathbb{E}\big(X \mid X \in C_i(\Gamma^*)\big).$$
(Nearly) optimal grids can be computed by optimization algorithms:
⊲ Lloyd's I algorithm: a (randomized) fixed-point method.
$k = 0$: initial grid $\Gamma^{[0]} = \{x_1^{[0]}, \ldots, x_N^{[0]}\}$.
$k \Rightarrow k+1$ (standard step): let $\Gamma^{[k]}$ be the current grid; set
$$x_i^{[k+1]} = \mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{[k]}} = x_i^{[k]}\big) = \mathbb{E}\big(X \mid X \in C_i(\Gamma^{[k]})\big)$$
and $\Gamma^{[k+1]} = \{x_i^{[k+1]},\ i = 1{:}N\}$.
Proposition (Lloyd I always makes the quantization error decrease):
$$\big\|X - \widehat{X}^{\Gamma^{(k+1)}}\big\|_2 \le \Big\|X - \underbrace{\mathbb{E}\big(X \mid \widehat{X}^{\Gamma^{(k)}}\big)}_{\Gamma^{(k+1)}\text{-valued}}\Big\|_2 \le \big\|X - \widehat{X}^{\Gamma^{(k)}}\big\|_2.$$
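In 1-d the Lloyd I fixed point can be iterated deterministically when the conditional means have closed form. A hedged sketch for $X \sim \mathcal{N}(0,1)$, using the truncated-Gaussian mean $\mathbb{E}(X \mid a < X < b) = \frac{\varphi(a) - \varphi(b)}{\Phi(b) - \Phi(a)}$; the function name is illustrative and this is not the deck's own implementation:

```python
import numpy as np
from scipy.stats import norm

def lloyd_1d_gaussian(N, n_iter=200):
    """Deterministic Lloyd I for X ~ N(0,1): x_i <- E(X | X in C_i(Gamma))."""
    x = np.linspace(-2.0, 2.0, N)                 # initial grid
    for _ in range(n_iter):
        mid = (x[1:] + x[:-1]) / 2                # Voronoi cell boundaries
        a = np.concatenate(([-np.inf], mid))
        b = np.concatenate((mid, [np.inf]))
        # Closed-form conditional means of the truncated Gaussian cells
        x = (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))
    return x

print(lloyd_1d_gaussian(5))   # stationary grid at level N = 5
```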

  18. When $d = 1$ and $\mathcal{L}(X)$ is log-concave: exponentially fast convergence (Kieffer, 1982). Renewed interest in 1-D quantization for quadrature formulas [Callegaro et al., 2017]. However... there is no general proof of convergence when $\mathcal{L}(X)$ has non-compact support and $d \ge 2$.
Splitting method: initialize Lloyd's I procedure inductively on the size $N$ by
$$\Gamma^{N,(0)} = \Gamma^{N-1,(\infty)} \cup \{\xi_N\}, \quad \xi_N \in \mathrm{supp}\big(\mathcal{L}(X)\big)$$
(see P.-Yu, SICON, 2016). Then $\Gamma^{N,(k)} \to \Gamma^{N,(\infty)}$ (a stationary quantizer of full size $N$...) as $k \to +\infty$.
Practical implementation, based on Monte Carlo simulations (or a dataset):
$$\mathbb{E}\big(g(X) \mid \widehat{X}^\Gamma = x_i\big) = \lim_{M \to +\infty} \frac{\sum_{m=1}^M g(X^m)\, \mathbf{1}_{\{X^m \in C_i(\Gamma)\}}}{\sum_{m=1}^M \mathbf{1}_{\{X^m \in C_i(\Gamma)\}}}, \quad (X^m)_{m \ge 1} \ \text{i.i.d.} \sim X.$$
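A hedged numpy sketch of the randomized (sample-based) Lloyd I procedure with the splitting initialization described above; the new point is drawn from the sample cloud (hence, empirically, from $\mathrm{supp}\,\mathcal{L}(X)$), and all names and parameters are illustrative:

```python
import numpy as np

def lloyd_step(samples, grid):
    """One randomized Lloyd I step: x_i <- cell sample mean of C_i(Gamma)."""
    d2 = ((samples[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    cells = d2.argmin(axis=1)                    # nearest-neighbour projection
    new_grid = grid.copy()
    for i in range(len(grid)):
        mask = cells == i
        if mask.any():                           # keep empty cells unchanged
            new_grid[i] = samples[mask].mean(axis=0)
    return new_grid

def splitting_lloyd(samples, N, n_iter=30, seed=None):
    """Grow the grid one point at a time (splitting method), running a few
    Lloyd iterations at each size."""
    rng = np.random.default_rng(seed)
    grid = samples.mean(axis=0, keepdims=True)   # size 1: the mean (p = 2)
    for _ in range(1, N):
        xi = samples[rng.integers(len(samples))][None, :]  # new point
        grid = np.vstack([grid, xi])
        for _ in range(n_iter):
            grid = lloyd_step(samples, grid)
    return grid

rng = np.random.default_rng(1)
X = rng.standard_normal((20_000, 2))
Gamma = splitting_lloyd(X, N=20, n_iter=10, seed=1)
```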

  22. Competitive Learning Vector Quantization algorithm ($p = 2$): "simply" a stochastic gradient descent.
Let $D_N : (\mathbb{R}^d)^N \to \mathbb{R}_+$ be the (quadratic) distortion function:
$$\min_{x \in (\mathbb{R}^d)^N} D_N(x), \qquad D_N(x) := \mathbb{E}\Big[\min_{1 \le i \le N} |X - x_i|^2\Big].$$
As soon as $|\cdot|$ is smooth enough, $D_N$ is differentiable at grids of full size, and if $\Gamma = \{x_1, \ldots, x_N\}$,
$$\frac{\partial D_N}{\partial x_i}(\Gamma) = 2\,\mathbb{E}\big[(x_i - X)\, \mathbf{1}_{\{X \in C_i(\Gamma)\}}\big], \quad i = 1{:}N.$$
Main point: $\nabla D_N(\Gamma) = 0$ iff $\Gamma$ is stationary.
Hence we can implement a zero-search stochastic gradient descent... known as Competitive Learning Vector Quantization.

  23. • $d = 1$:
$$D_N(x) = \sum_{i=1}^N \int_{x_{i-1/2}}^{x_{i+1/2}} |\xi - x_i|^2\, d\mathbb{P}_X(\xi)$$
⇒ Evaluation of the Voronoi cells, gradient and Hessian is simple if $f_X$, $F_X$ and $\mathbb{E}\big(X\,\mathbf{1}_{\{X \le \cdot\}}\big)$ have closed forms ⇒ Newton-Raphson.
• $d \ge 2$: stochastic gradient method (CLVQ).
Simulate $\xi_1, \xi_2, \ldots$, independent copies of $X$.
Generate a step sequence $\gamma_1, \gamma_2, \ldots$ with $\gamma_n \downarrow 0$; usually $\gamma_n = \frac{A}{B+n}$ or $\gamma_n = \eta \approx 0$.
Grid updating $n \mapsto n+1$:
Selection: select the winner index $i^* \in \mathrm{argmin}_i |x_i^n - \xi_n|$;
Learning: $x_{i^*}^{n+1} := x_{i^*}^n - \gamma_n\big(x_{i^*}^n - \xi_n\big) \equiv \mathrm{dilat}(\xi_n; 1 - \gamma_n)\big(x_{i^*}^n\big)$, and $x_j^{n+1} := x_j^n$ for $j \neq i^*$.
Nearest-neighbour search: the computational challenge of simulation-based stochastic optimization methods: evaluating $\mathbf{1}_{\{X \in C_i(\Gamma)\}}$ is a NEAREST NEIGHBOUR SEARCH, a highly challenging problem in higher dimension, say $d \ge 4$ or 5.
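A minimal sketch of the CLVQ recursion on this slide, assuming an i.i.d. stream $(\xi_n)$ can be simulated; the step parameters (playing the role of $A$, $B$) and the function name are illustrative:

```python
import numpy as np

def clvq(sample_stream, grid0, A=1.0, B=100.0):
    """Competitive Learning Vector Quantization: stochastic gradient descent
    on the quadratic distortion D_N, with step gamma_n = A / (B + n)."""
    grid = grid0.copy()
    for n, xi in enumerate(sample_stream, start=1):
        gamma = A / (B + n)
        # Selection: winner index = nearest-neighbour search (the costly step)
        i_star = np.argmin(((grid - xi) ** 2).sum(axis=1))
        # Learning: move only the winner towards the sample (a dilatation)
        grid[i_star] += gamma * (xi - grid[i_star])
    return grid

rng = np.random.default_rng(2)
grid0 = rng.standard_normal((50, 2))
stream = rng.standard_normal((200_000, 2))   # i.i.d. copies of X ~ N(0, I_2)
Gamma = clvq(stream, grid0)
```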

  24. Optimal Quantizers. Figure: a random quantizer for $\mathcal{N}(0, I_2)$ of size $N = 500$ in $(\mathbb{R}^2, |\cdot|_2)$.

  25. Figure: a quantizer for $\mathcal{N}(0, I_2)$ of size $N = 500$ in $(\mathbb{R}^2, |\cdot|_2)$.

  26. Bennett's conjecture (1955): a coloured approach. Figure: an $N$-quantization of $X \sim \mathcal{N}(0; I_2)$ with coloured weights $\mathbb{P}\big(X \in C_i(\Gamma^{(*,N)})\big)$ (with J. Printems).

  27. Toward Bennett's conjecture: $\Gamma^{(*,N)} = \{x_1, \ldots, x_N\}$, $X \sim \mathcal{N}(0; I_2)$.
Figure: $x_i \mapsto \mathbb{P}\big(X \in C_i(\Gamma^{(*,N)})\big)$ (green Gaussian-shaped line); $x_i \mapsto \mathbb{E}\,|X - x_i|^2\, \mathbf{1}_{\{X \in C_i(\Gamma^{(*,N)})\}}$ (red flat line) (with J.-C. Fort).
Local inertia: $x_i \mapsto \mathbb{E}\,|X - x_i|^2\, \mathbf{1}_{\{X \in C_i(\Gamma^{*,N})\}} \simeq$ constant.
Weights: $x_i \mapsto \mathbb{P}\big(X \in C_i(\Gamma^{(*,N)})\big) \simeq C\, e^{-\frac13 \cdot \frac{x_i^2}{2}}$ (fitting).

  28. More on Bennett's conjecture.
⊲ Bennett's conjecture (weak form): in any dimension $d$, $L^p$-optimal quantizers satisfy
Local inertia: $x_i \mapsto \mathbb{E}\,|X - x_i|^2\, \mathbf{1}_{\{X \in C_i(\Gamma^{*,N})\}} \simeq \frac{e_N(X)}{N}$.
Weights: $x_i \mapsto \mathbb{P}\big(X \in C_i(\Gamma^{(*,N)})\big) \simeq C\, e^{-\frac{d}{d+p} \cdot \frac{x_i^2}{2}}$.
When $d = 1$ it holds uniformly on compact sets ([Fort-P.], '03); when $d \ge 1$, at least in a measure sense.
⊲ Strong Bennett's conjecture: a conjecture on the geometric form of the Voronoi cells for $U([0,1]^d)$ ($d = 2$: regular hexagon; $d = 3$: octahedron; $d \ge 4$: ????). Generic form of Voronoi cells for a.c. distributions.

  29. Quantizing non-Gaussian multivariate distributions. Figure: a quantizer for $\big(B_1, \sup_{t \in [0,1]} B_t\big)$, $B$ a standard Brownian motion, of size $N = 500$ in $(\mathbb{R}^2, |\cdot|_2)$.

  30. Back to learning: back to clustering.
If $(\xi_k)_{k \ge 1}$ are i.i.d. with $\xi_1 \sim \mu = \mathcal{L}(\xi_1)$ on $\mathbb{R}^d$, consider its empirical measure
$$\mu_n(\omega, d\xi) = \frac1n \sum_{k=1}^n \delta_{\xi_k(\omega)}.$$
Assume that $\mu(B(0;1)) = 1$. For every $\omega \in \Omega$, there exists (at least) one optimal quantizer $\Gamma^{(N)}(\omega, n)$ for $\mu_n(\omega, d\xi)$. Then (Biau et al., 2008, see [BDL08])
$$\mathbb{E}\, e_2\big(\Gamma^{(N)}(\omega, n), \mu\big) - e_{2,N}(\mu) \le C \min\left( \sqrt{\frac{d\, N^{1-\frac2d} \log n}{n}},\ \sqrt{\frac{Nd}{n}} \right),$$
where $C > 0$ is a universal real constant. See also (Graf-Luschgy, AoP, 2002, [GL02]) for other results on empirical measures (bounded support).

  33. Back to numerical probability? Quantization for cubature.
⊲ Assume that we have access to $\mathcal{L}(\widehat{X}^\Gamma)$: both the grid and the Voronoi cell weights
$$\Gamma = \{x_1, \ldots, x_N\} \quad \text{and} \quad p_i^\Gamma = \mathbb{P}\big(X \in C_i(\Gamma)\big), \quad i = 1, \ldots, N.$$
⇒ The computation of $\mathbb{E}\,F(\widehat{X}^\Gamma)$ for some Lipschitz continuous $F : \mathbb{R}^d \to \mathbb{R}$ becomes straightforward:
$$\mathbb{E}\,F(\widehat{X}^\Gamma) = \sum_{i=1}^N p_i^\Gamma\, F(x_i).$$
⊲ As a first error estimate, we already know that
$$\big|\mathbb{E}\,F(X) - \mathbb{E}\,F(\widehat{X}^\Gamma)\big| \le [F]_{\mathrm{Lip}}\, \mathbb{E}\,|X - \widehat{X}^\Gamma|.$$
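A hedged sketch of this cubature formula, with the Voronoi weights themselves estimated by Monte Carlo when they are not known in closed form; the grid used here is a random stand-in for an optimal one, and the test function is illustrative:

```python
import numpy as np

def cell_weights(samples, grid):
    """Monte Carlo estimate of the Voronoi weights p_i = P(X in C_i(Gamma))."""
    d2 = ((samples[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    return np.bincount(d2.argmin(axis=1), minlength=len(grid)) / len(samples)

def quantized_expectation(F, grid, weights):
    """Cubature formula: E F(X) ~ E F(hat X^Gamma) = sum_i p_i F(x_i)."""
    return float(np.sum(weights * np.array([F(x) for x in grid])))

rng = np.random.default_rng(3)
X = rng.standard_normal((50_000, 2))
Gamma = rng.standard_normal((100, 2))            # stand-in for an optimal grid
p = cell_weights(X, Gamma)
F = lambda x: np.exp(-np.dot(x, x))              # a Lipschitz test function
print(quantized_expectation(F, Gamma, p),        # cubature value
      np.mean(np.exp(-(X ** 2).sum(axis=1))))    # plain Monte Carlo reference
```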

  35. Error estimates.
⊲ First order. Moreover, if $\Gamma^{N,*}$ is $L^1$-optimal at level $N \ge 1$,
$$\inf\Big\{ \sup_{[F]_{\mathrm{Lip}} \le 1} |\mathbb{E}\,F(X) - \mathbb{E}\,F(Y)| \;:\; \mathrm{card}(Y(\Omega)) \le N \Big\} = \sup_{[F]_{\mathrm{Lip}} \le 1} \big|\mathbb{E}\,F(X) - \mathbb{E}\,F(\widehat{X}^{\Gamma^{N,*}})\big| = \mathbb{E}\,\big|X - \widehat{X}^{\Gamma^{N,*}}\big| = e_{1,N}(X),$$
i.e. optimal quantization is optimal for the class of Lipschitz functions, or equivalently
$$e_{1,N}(X) = W_1\big(\mathcal{L}(X), \mathcal{P}_N\big), \quad \text{with } \mathcal{P}_N = \{\text{atomic distributions with at most } N \text{ atoms}\}.$$
⊲ Second order.
Proposition (second-order cubature error bound). Assume $F \in \mathcal{C}^1_{\mathrm{Lip}}$ and the grid $\Gamma$ is stationary (e.g. because it is $L^2$-optimal), i.e. $\widehat{X}^\Gamma = \mathbb{E}(X \mid \widehat{X}^\Gamma)$. Then a Taylor expansion yields
$$\big|\mathbb{E}\,F(X) - \mathbb{E}\,F(\widehat{X}^\Gamma)\big| = \big|\mathbb{E}\,F(X) - \mathbb{E}\,F(\widehat{X}^\Gamma) - \mathbb{E}\big\langle \nabla F(\widehat{X}^\Gamma),\, X - \widehat{X}^\Gamma \big\rangle\big| \le [DF]_{\mathrm{Lip}}\, \mathbb{E}\,|X - \widehat{X}^\Gamma|^2.$$

  36. ⊲ Convexity. Furthermore, if $F$ is convex, then Jensen's inequality implies, for stationary grids $\Gamma$,
$$\mathbb{E}\,F(\widehat{X}^\Gamma) \le \mathbb{E}\,F(X).$$

  38. Quantization for conditional expectation (Pythagoras' theorem).
⊲ Applications in numerical probability = conditional expectation approximation: $\widehat{X} = q_X(X)$, $\widehat{Y} = q_Y(Y)$.
Proposition (Pythagoras' theorem for conditional expectation). Let $P(y, du) = \mathcal{L}(X \mid Y = y)$ be a regular version of the conditional distribution of $X$ given $Y$, so that
$$\mathbb{E}\big(g(X) \mid Y\big) = Pg(Y) \quad a.s.$$
Then
$$\big\|\mathbb{E}(g(X) \mid Y) - \mathbb{E}\big(g(\widehat{X}) \mid \widehat{Y}\big)\big\|_2^2 \le [g]_{\mathrm{Lip}}^2\, \big\|X - \widehat{X}\big\|_2^2 + \big\|Pg(Y) - Pg(\widehat{Y})\big\|_2^2 \le [g]_{\mathrm{Lip}}^2\, \big\|X - \widehat{X}\big\|_2^2 + [Pg]_{\mathrm{Lip}}^2\, \big\|Y - \widehat{Y}\big\|_2^2.$$
If $P$ propagates Lipschitz continuity, $[Pg]_{\mathrm{Lip}} \le [P]_{\mathrm{Lip}} [g]_{\mathrm{Lip}}$, then quantization produces a control of the error.

  39. Quantization for conditional expectation.
⊲ Sketch of proof. As $Pg(Y) - \mathbb{E}\big(Pg(Y) \mid \widehat{Y}\big) \perp_{L^2(\mathbb{P})} \sigma(\widehat{Y})$ and
$$\mathbb{E}\big(g(X) \mid Y\big) - \mathbb{E}\big(g(\widehat{X}) \mid \widehat{Y}\big) = \Big(\mathbb{E}\big(g(X) \mid Y\big) - \mathbb{E}\big(Pg(Y) \mid \widehat{Y}\big)\Big) + \Big(\mathbb{E}\big(Pg(Y) \mid \widehat{Y}\big) - \mathbb{E}\big(g(\widehat{X}) \mid \widehat{Y}\big)\Big),$$
Pythagoras' theorem yields
$$\big\|\mathbb{E}(g(X) \mid Y) - \mathbb{E}(g(\widehat{X}) \mid \widehat{Y})\big\|_2^2 = \big\|Pg(Y) - \mathbb{E}\big(Pg(Y) \mid \widehat{Y}\big)\big\|_2^2 + \big\|\mathbb{E}\big(Pg(Y) \mid \widehat{Y}\big) - \mathbb{E}\big(g(\widehat{X}) \mid \widehat{Y}\big)\big\|_2^2 \le \big\|Pg(Y) - Pg(\widehat{Y})\big\|_2^2 + \big\|g(X) - g(\widehat{X})\big\|_2^2 \le [Pg]_{\mathrm{Lip}}^2\, \|Y - \widehat{Y}\|_2^2 + [g]_{\mathrm{Lip}}^2\, \|X - \widehat{X}\|_2^2.$$
⊲ If $p \neq 2$, a Minkowski-like control is preserved:
$$\big\|\mathbb{E}(g(X) \mid Y) - \mathbb{E}(g(\widehat{X}) \mid \widehat{Y})\big\|_p \le [g]_{\mathrm{Lip}}\, \|X - \widehat{X}\|_p + \big\|Pg(Y) - Pg(\widehat{Y})\big\|_p \le [g]_{\mathrm{Lip}}\, \|X - \widehat{X}\|_p + [Pg]_{\mathrm{Lip}}\, \|Y - \widehat{Y}\|_p.$$

  40. A typical result (BSDE).
⊲ We consider a "standard" BSDE:
$$Y_t = h(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\, ds - \int_t^T Z_s\, dW_s, \quad t \in [0, T],$$
where the exogenous process $(X_t)_{t \in [0,T]}$ is a diffusion
$$X_t = x + \int_0^t b(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dW_s, \quad x \in \mathbb{R}^d,$$
with $b$, $\sigma$, $h$ Lipschitz continuous in $x$ and $f$ Lipschitz in $(x, y, z)$, uniformly in $t \in [0, T]$...
⊲ ...which is the probabilistic representation of the partially non-linear PDE
$$\partial_t u(t,x) + L u(t,x) + f\big(t, x, u(t,x), (\partial_x u\, \sigma)(t,x)\big) = 0 \ \text{ on } [0,T) \times \mathbb{R}^d, \quad u(T, \cdot) = h,$$
with $Lg = (\nabla g \mid b) + \frac12 \mathrm{Tr}\big(\sigma^* D^2 g\, \sigma\big)$.
⊲ ...and its time discretization scheme with step $\Delta_n = \frac{T}{n}$, recursively defined by
$$\overline{Y}_{t_n^n} = h\big(\overline{X}_{t_n^n}\big),$$
$$\overline{Y}_{t_k^n} = \mathbb{E}\big(\overline{Y}_{t_{k+1}^n} \mid \mathcal{F}_{t_k^n}\big) + \Delta_n\, f\Big(t_k^n, \overline{X}_{t_k^n}, \mathbb{E}\big(\overline{Y}_{t_{k+1}^n} \mid \mathcal{F}_{t_k^n}\big), \overline{\zeta}_{t_k^n}\Big),$$
$$\overline{\zeta}_{t_k^n} = \frac{1}{\Delta_n}\, \mathbb{E}\big(\overline{Y}_{t_{k+1}^n}\big(W_{t_{k+1}^n} - W_{t_k^n}\big) \mid \mathcal{F}_{t_k^n}\big),$$
where $\overline{X}$ is the Euler scheme of $X$, defined by
$$\overline{X}_{t_{k+1}^n} = \overline{X}_{t_k^n} + b\big(t_k^n, \overline{X}_{t_k^n}\big)\Delta_n + \sigma\big(t_k^n, \overline{X}_{t_k^n}\big)\big(W_{t_{k+1}^n} - W_{t_k^n}\big).$$

  41. ⊲ ...spatially discretized by quantization: we "force" the Markov property to write a Quantized Backward Dynamic Programming Principle:
$$\widehat{Y}_n = h(\widehat{X}_n),$$
$$\widehat{Y}_k = \widehat{\mathbb{E}}_k\big(\widehat{Y}_{k+1}\big) + \Delta_n\, f\Big(t_k^n, \widehat{X}_k, \widehat{\mathbb{E}}_k\big(\widehat{Y}_{k+1}\big), \widehat{\zeta}_k\Big),$$
$$\widehat{\zeta}_k = \frac{1}{\Delta_n}\, \widehat{\mathbb{E}}_k\Big(\widehat{Y}_{k+1}\big(W_{t_{k+1}^n} - W_{t_k^n}\big)\Big), \quad \text{where } \widehat{\mathbb{E}}_k = \mathbb{E}\big(\,\cdot \mid \widehat{X}_k\big).$$
⊲ By induction, $\widehat{Y}_k = \widehat{v}_k(\widehat{X}_k)$, $k = 0, \ldots, n$, so that
$$\widehat{\mathbb{E}}_k\Big(\widehat{Y}_{k+1}\big(W_{t_{k+1}^n} - W_{t_k^n}\big)\Big) = \widehat{\mathbb{E}}_k\Big(\widehat{v}_{k+1}(\widehat{X}_{k+1})\big(W_{t_{k+1}^n} - W_{t_k^n}\big)\Big).$$

  42. Quantization tree.
⊲ A quantization tree for $(\widehat{X}_k)_{k=0,\ldots,n}$: $N = N_0 + \cdots + N_n$, where $N_k$ is the size of layer $t_k^n$.
Figure: a typical (small!) 1-dimensional quantization tree.
⊲ At time $k$ (i.e. $t_k$): $\widehat{X}_{t_k} = \mathrm{Proj}_{\Gamma_k}\big(X_{t_k}\big)$, where $\Gamma_k = \{x_1^k, \ldots, x_{N_k}^k\}$ is a grid of size $N_k$.
⊲ What kind of tree is a quantization tree? A quantization tree is not recombining, but its size can be designed a priori (and is subject to possible optimization).

  43. Calibrating the quantization tree.
⊲ To implement the above Quantized Backward Dynamic Programming Principle we need to repeatedly compute conditional expectations of the form
$$\mathbb{E}\big(\varphi(\widehat{X}_{k+1}) \mid \widehat{X}_k\big) \quad \text{and} \quad \mathbb{E}\big(\varphi(\widehat{X}_{k+1})\, \Delta W_{t_{k+1}} \mid \widehat{X}_k\big).$$
⊲ First, one has
$$\mathbb{E}\big(\varphi(\widehat{X}_{k+1})\, \mathbf{1}_{\{\widehat{X}_k = x_i^k\}}\big) = \sum_{j=1}^{N_{k+1}} \pi_{ij}^k\, \varphi(x_j^{k+1}), \quad \text{where} \quad \pi_{ij}^k = \mathbb{P}\big(X_{k+1} \in C_j(\Gamma_{k+1})\ \&\ X_k \in C_i(\Gamma_k)\big),$$
so we need to estimate the hyper-matrix $[\widehat{\pi}_{ij}^k]_{i,j,k}$.
⊲ Weights for the $Z$ term:
$$\mathbb{E}\big(\varphi(\widehat{X}_{k+1})\, \Delta W_{t_{k+1}}\, \mathbf{1}_{\{\widehat{X}_k = x_i^k\}}\big) = \sum_{j=1}^{N_{k+1}} \widetilde{\pi}_{ij}^{W,k}\, \varphi(x_j^{k+1}), \quad \text{where} \quad \widetilde{\pi}_{ij}^{W,k} = \mathbb{E}\Big(\Delta W_{t_{k+1}}\, \mathbf{1}_{\{X_{k+1} \in C_j(\Gamma_{k+1})\} \cap \{\widehat{X}_k = x_i^k\}}\Big).$$
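A minimal sketch of how the quantized BDPP of slide 41 can be run once the hyper-matrices $\widehat{\pi}^k$ and $\widetilde{\pi}^{W,k}$ of this slide are available. It assumes joint weights, so that conditional expectations are obtained by dividing by the layer weights $p_i^k = \sum_j \widehat{\pi}_{ij}^k$; all names are illustrative:

```python
import numpy as np

def quantized_bdpp(grids, pis, pis_w, f, h, dt):
    """Quantized Backward Dynamic Programming Principle (sketch):
    hat Y_n = h(hat X_n),
    hat Y_k = hat E_k(hat Y_{k+1}) + dt * f(k, x_i^k, hat E_k(hat Y_{k+1}), hat zeta_k),
    hat zeta_k = (1/dt) hat E_k(hat Y_{k+1} Delta W).

    grids : list of n+1 arrays, layer k of shape (N_k, d)
    pis   : list of n joint-weight matrices, pis[k][i, j] = hat pi_ij^k
    pis_w : list of n arrays (N_k, N_{k+1}, q), the weights tilde pi_ij^{W,k}
    """
    y = np.array([h(x) for x in grids[-1]])             # hat Y_n = h(hat X_n)
    n = len(grids) - 1
    for k in range(n - 1, -1, -1):
        p_k = pis[k].sum(axis=1)                        # layer weights p_i^k
        p_k = np.where(p_k > 0, p_k, 1.0)               # guard empty cells
        ey = (pis[k] @ y) / p_k                         # hat E_k(hat Y_{k+1})
        zeta = np.einsum('ijq,j->iq', pis_w[k], y) / (dt * p_k[:, None])
        y = ey + dt * np.array([f(k, grids[k][i], ey[i], zeta[i])
                                for i in range(len(grids[k]))])
    return y                                            # hat Y_0 on the first grid
```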

  44. Quantized forward Kolmogorov equations (on weights).
⊲ Note that, by an elementary Bayes formula,
$$p_j^k := \mathbb{P}\big(X_k \in C_j(\Gamma_k)\big) = \sum_{i=1}^{N_{k-1}} \widehat{\pi}_{ij}^{k-1},$$
so that we may compute
$$\mathbb{E}\big(\varphi(\widehat{X}_{k+1})\, \mathbf{1}_{\{X_k \in C_i(\Gamma_k)\}}\big) = \mathbb{E}\big(\varphi(\widehat{X}_{k+1}) \mid \widehat{X}_k\big)\, \mathbb{P}\big(X_k \in C_i(\Gamma_k)\big).$$
⊲ Initialization: quantize $X_0$ (often $X_0 = x_0$).

  45. Grid optimization and calibration (offline).
⊲ Simulability: exact, $X_k = X_{t_k}$, when possible; otherwise a discretization scheme $X_k = \overline{X}_k$. Let $\big(X_k^m, \Delta W_{t_{k+1}}^m\big)_{0 \le k \le n}$, $m = 1{:}M$, be i.i.d. copies of $\big(X_k, \Delta W_{t_{k+1}}\big)_{0 \le k \le n}$.
⊲ Grid optimization: let the sample "pass" through the quantization tree, using either a randomized Lloyd procedure or CLVQ to optimize the grids $\Gamma_k$ at each time level.
⊲ Calibrate $\widehat{\pi}_{ij}^k$ and $\widetilde{\pi}_{ij}^k$:
$$\widehat{\pi}_{ij}^k = \lim_{M \to +\infty} \frac1M\, \mathrm{Card}\big\{ m : X_k^m \in C_i(\Gamma_k)\ \&\ X_{k+1}^m \in C_j(\Gamma_{k+1}),\ 1 \le m \le M \big\}$$
and
$$\widetilde{\pi}_{ij}^k = \lim_{M \to +\infty} \frac1M \sum_{m=1}^M \Delta W_{t_{k+1}}^m\, \mathbf{1}_{\{X_k^m \in C_i(\Gamma_k)\} \cap \{X_{k+1}^m \in C_j(\Gamma_{k+1})\}}.$$
⊲ Embedded optimal quantization: perform optimization and calibration simultaneously.
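A hedged numpy sketch of this Monte Carlo calibration step (grid optimization omitted); `np.add.at` accumulates the joint counts and the Brownian increments cell by cell, and all names are illustrative:

```python
import numpy as np

def calibrate_weights(paths, dW, grids):
    """Monte Carlo calibration of the companion weights (sketch):
    hat pi_ij^k   ~ (1/M) #{m : X_k^m in C_i, X_{k+1}^m in C_j},
    tilde pi_ij^k ~ (1/M) sum_m Delta W^m 1_{X_k^m in C_i, X_{k+1}^m in C_j}.

    paths : (M, n+1, d) i.i.d. simulated paths of (X_k)
    dW    : (M, n, q) corresponding Brownian increments
    grids : list of n+1 arrays, Gamma_k of shape (N_k, d)
    """
    M, n = paths.shape[0], paths.shape[1] - 1
    # Nearest-neighbour cell index of every path at every time layer
    cells = [np.argmin(((paths[:, k, None, :] - grids[k][None, :, :]) ** 2)
                       .sum(axis=2), axis=1) for k in range(n + 1)]
    pis, pis_w = [], []
    for k in range(n):
        pi = np.zeros((len(grids[k]), len(grids[k + 1])))
        pi_w = np.zeros(pi.shape + (dW.shape[2],))
        np.add.at(pi, (cells[k], cells[k + 1]), 1.0)
        np.add.at(pi_w, (cells[k], cells[k + 1]), dW[:, k, :])
        pis.append(pi / M)
        pis_w.append(pi_w / M)
    return pis, pis_w
```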

  47. Error estimates.
Theorem (a priori error estimates, Sagna-P., SPA 2017). Suppose that all the "Lipschitz" assumptions on $b$, $\sigma$, $f$, $h$ are fulfilled.
(a) "Price": then, for every $k = 0, \ldots, n$,
$$\big\|\overline{Y}_{t_k^n} - \widehat{Y}_k\big\|_2^2 \le [f]_{\mathrm{Lip}}^2 \sum_{i=k}^n e^{(1+[f]_{\mathrm{Lip}})(t_i^n - t_k^n)}\, K_i(b, \sigma, T, f, h)\, \big\|\overline{X}_{t_i^n} - \widehat{X}_i\big\|_2^2 = O\Big(\sum_{i=k}^n N_i^{-\frac2d}\Big).$$
(b) "Hedge":
$$\sum_{k=0}^{n-1} \Delta_n\, \big\|\overline{\zeta}_{t_k^n} - \widehat{\zeta}_k\big\|_2^2 \le \big\|\overline{Y}_{t_{k+1}^n} - \widehat{Y}_{k+1}\big\|_2^2 + \sum_{k=0}^{n-1} e^{(1+[f]_{\mathrm{Lip}}) t_k^n}\, K_k(b, \sigma, T, f, h)\, \big\|\overline{X}_{t_k^n} - \widehat{X}_k\big\|_2^2.$$
(c) "RBSDE": the same error bounds hold for reflected BSDEs (so far without $Z$ in $f$), replacing $h$ by $h_k = h(t_k^n, \cdot)$, where $h(t, X_t)$ is the obstacle process, in the resulting quantized scheme.
What is new (compared to Bally-P. 2003 for reflected BSDEs)?
+: $Z$ inside the driver $f$ for quantization error bounds.
+: squares everywhere.

  48. A new result: distortion mismatch / $L^s$-rate optimality, $s > p$.
⊲ Let $\Gamma_N^{(p)}$, $N \ge 1$, be a sequence of $L^p$-optimal grids. What about $e_s\big(X, \Gamma_N^{(p)}\big)$ (the $L^s$-mean quantization error) when $X \in L^s_{\mathbb{R}^d}(\mathbb{P})$ for $s > p$?
Theorem ($L^p$-$L^s$ distortion mismatch, Graf-Luschgy-P. 2005, Luschgy-P. 2015).
(a) Let $X \in L^p_{\mathbb{R}^d}(\mathbb{P})$ and let $\big(\Gamma_N^{(p)}\big)_{N \ge 1}$ be a sequence of $L^p$-optimal grids. Let $s \in (p, p+d)$. If
$$X \in L^{\frac{sd}{d+p-s} + \delta}, \quad \delta > 0$$
(note that $\frac{sd}{d+p-s} > s$ and $\lim_{s \to p+d} \frac{sd}{d+p-s} = +\infty$), then
$$\limsup_N N^{\frac1d}\, e_s\big(\Gamma_N^{(p)}, X\big) < +\infty.$$
(b) If $\mathbb{P}_X = f(|x|)\, \lambda_d(d\xi)$ (radial density), then $\delta = 0$ is admissible.
(c) If $\mathbb{E}\,|X|^{\frac{sd}{d+p-s}} = +\infty$, then $\lim_N N^{\frac1d}\, e_s\big(\Gamma_N^{(p)}, X\big) = +\infty$.
⊲ Possible perspectives: error bounds for quantization-based numerical schemes for BSDEs with a quadratic $Z$ term?
⊲ So far, an application to quantized non-linear filtering.

  49. Application to non-linear filtering.
The signal process $(X_k)_{k \ge 0}$ is an $\mathbb{R}^d$-valued Markov chain. The observation process $(Y_k)_{k \ge 0}$ is a sequence of $\mathbb{R}^q$-valued random vectors such that $(X_k, Y_k)_{k \ge 0}$ is a Markov chain, with conditional distribution
$$\mathcal{L}\big(Y_k \mid X_{k-1}, Y_{k-1}, X_k\big) = g_k(X_{k-1}, Y_{k-1}, X_k, y)\, \lambda_q(dy).$$
Aim: compute $\Pi_{y_{0:n},n}(dx) = \mathbb{P}\big(X_n \in dx \mid Y_1 = y_1, \cdots, Y_n = y_n\big)$.
Kallianpur-Striebel formula: set $y = y_{0:n} = (y_0, \ldots, y_n)$, a vector of observations; then
$$\Pi_{y,n} f = \frac{\pi_{y,n} f}{\pi_{y,n} \mathbf{1}},$$
with the unnormalized filter $\pi_{y_{0:n},n}$ defined by
$$\pi_{y_{0:n},n} f = \mathbb{E}\big(f(X_n)\, L_{y_{0:n},n}\big), \quad \text{with} \quad L_{y_{0:n},n} = \prod_{k=1}^n g_k(X_{k-1}, y_{k-1}, X_k, y_k),$$
solution to both a forward and a backward induction based on the kernels
$$H_{y,k} h(x) = \mathbb{E}\big(h(X_k)\, g_k(x, y_{k-1}, X_k, y_k) \mid X_{k-1} = x\big), \qquad H_{y,0} f(x) = \mathbb{E}\big(f(X_0)\big).$$

  50. Forward: start from $\pi_{y,0} = H_{y,0}$ and define, by a forward induction,
$$\pi_{y,k} f = \pi_{y,k-1} H_{y,k} f, \quad k = 1, \ldots, n.$$
Backward: we define, by a backward induction,
$$u_{y,n}(f)(x) = f(x), \qquad u_{y,k-1}(f) = H_{y,k}\, u_{y,k}(f), \quad k = 0, \ldots, n,$$
so that $\pi_{y,n} f = u_{y,-1}(f)$. This formulation is useful in order to establish the quantization error bound.

  51. Quantized Kallianpur-Striebel formula (P.-Pham (2005)).
Quantization of the kernel:
$$H_{y_{0:n},k} f(x) \longrightarrow \widehat{H}_{y_{0:n},k} f(x) = \mathbb{E}\big(f(\widehat{X}_k)\, g_k(x, y_{k-1}, \widehat{X}_k, y_k) \mid \widehat{X}_{k-1} = x\big).$$
Forward quantized dynamics (I):
$$\widehat{\pi}_{y,k} f = \widehat{\pi}_{y,k-1} \widehat{H}_{y,k} f, \quad k = 1, \ldots, n.$$
Forward quantized dynamics (II):
$$\widehat{\Pi}_{y,n} f = \frac{\widehat{\pi}_{y,n} f}{\widehat{\pi}_{y_{0:n},n} \mathbf{1}}$$
(the finitely supported unnormalized filter formally satisfies the same recursions).
Weight computation: if $\widehat{X}_n = \widehat{X}_n^{\Gamma_n}$ with $\Gamma_n = \{x_1^n, \ldots, x_{N_n}^n\}$, then
$$\widehat{\Pi}_{y,n}(dx) = \sum_{i=1}^{N_n} \widehat{\Pi}_{y,n}^i\, \delta_{x_i^n}, \quad \text{with} \quad \widehat{\Pi}_{y,n}^i = \widehat{\Pi}_{y,n}\big(\mathbf{1}_{C_i(\Gamma_n)}\big).$$
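A hedged sketch of this forward quantized filter recursion, assuming the conditional transition matrices of the quantized chain $(\widehat{X}_k)$ have already been calibrated (e.g. as $\widehat{\pi}_{ij}^{k-1}/p_i^{k-1}$); names are illustrative:

```python
import numpy as np

def quantized_filter(grids, trans, p0, g, ys):
    """Forward recursion for the quantized filter (sketch):
    hat pi_{y,k} f = hat pi_{y,k-1} hat H_{y,k} f, normalized at the end.

    grids : list of n+1 arrays, Gamma_k of shape (N_k, d)
    trans : trans[k-1][i, j] = P(hat X_k = x_j^k | hat X_{k-1} = x_i^{k-1})
    p0    : weights of hat X_0 (the quantized initial law)
    g     : g(k, x_prev, y_prev, x, y), the conditional observation density g_k
    ys    : observations y_0, ..., y_n
    """
    u = np.asarray(p0, dtype=float).copy()         # unnormalized filter weights
    for k in range(1, len(grids)):
        # Likelihood matrix G[i, j] = g_k(x_i^{k-1}, y_{k-1}, x_j^k, y_k)
        G = np.array([[g(k, xi, ys[k - 1], xj, ys[k]) for xj in grids[k]]
                      for xi in grids[k - 1]])
        # Quantized kernel step: weight the transitions by the likelihood
        u = (u[:, None] * trans[k - 1] * G).sum(axis=0)
    return u / u.sum()                             # normalized hat Pi_{y,n}
```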

  52. From Lip to $\theta$-Lip$_{\mathrm{loc}}$ assumptions.
Standard $H_{\mathrm{Lip}}$ assumption for the conditional densities $g_k(\cdot, y, \cdot, y')$: bounded by $K_g$ and Lipschitz continuous:
$$|g_k(x, y, x', y') - g_k(\widehat{x}, y, \widehat{x}', y')| \le [g_k]_{\mathrm{Lip}}(y, y')\, \big(|x - \widehat{x}| + |x' - \widehat{x}'|\big).$$
The kernels $P_k(x, d\xi) = \mathbb{P}(X_k \in d\xi \mid X_{k-1} = x)$ propagate Lipschitz continuity, with coefficients $[P_k]_{\mathrm{Lip}}$ such that $\max_{k=1,\ldots,n} [P_k]_{\mathrm{Lip}} < +\infty$.
Aim: switch to a $\theta$-local Lipschitz assumption ($\theta : \mathbb{R}^d \to \mathbb{R}_+$, $\theta(x) \uparrow +\infty$ as $|x| \uparrow +\infty$):
$$|h(x, x') - h(\widehat{x}, \widehat{x}')| \le [h]_{\mathrm{loc}} \big(1 + \theta(x) + \theta(x') + \theta(\widehat{x}) + \theta(\widehat{x}')\big)\big(|x - \widehat{x}| + |x' - \widehat{x}'|\big).$$
New $(H_{\theta\text{-}\mathrm{Lip}_{\mathrm{loc}}})$ assumption: the functions $g_k$ are still bounded by $K_g$ and are $\theta$-locally Lipschitz continuous:
$$|g_k(x, y, x', y') - g_k(\widehat{x}, y, \widehat{x}', y')| \le [g_k]_{\mathrm{loc}}(y, y')\big(1 + \theta(x) + \theta(x') + \theta(\widehat{x}) + \theta(\widehat{x}')\big)\big(|x - \widehat{x}| + |x' - \widehat{x}'|\big).$$
The kernels $P_k(x, d\xi)$ propagate $\theta$-local Lipschitz continuity with coefficient $[P_k]_{\mathrm{loc}} < +\infty$, and propagate $\theta$-control: $\max_{0 \le k \le n-1} P_k(\theta)(x) \le C\big(1 + \theta(x)\big)$.
Typical example: $X_k = \overline{X}^n_{t_k^n}$ (Euler scheme with step $\Delta_n = \frac{T}{n}$), $\theta(\xi) = |\xi|^\alpha$, $\alpha > 0$.

  53. Theorem (Sagna-P., SPA '17). Let $s \in \big(1, 1 + \frac{d}{2}\big)$ and $\theta(x) = |x|^\alpha$, $\alpha \in \big(0, \frac12\big(\frac{d}{s-1} - 2\big)\big)$.
Assume $(X_k)$ and $(g_k)$ satisfy $(H_{\theta\text{-}\mathrm{Lip}_{\mathrm{loc}}})$ (in particular, $(X_k)$ propagates $\theta$-Lipschitz continuity) and assume $X_k \in L^{\frac{2ds}{d+2-2s}}$, $k = 0, \ldots, n$. Then
$$|\Pi_{y,n} f - \widehat{\Pi}_{y,n} f|^2 \le \frac{2 (K_g^n)^2}{\phi_n^2(y) \vee \widehat{\phi}_n^2(y)} \sum_{k=0}^n B_k^n(f, y) \times \underbrace{\big\|X_k - \widehat{X}_k\big\|_{2s}^2}_{\asymp\, c_k N_k^{-2/d}\ (\text{mismatch!})} \qquad (2)$$
with $\phi_n(y) = \pi_{y,n}\mathbf{1}$ and $\widehat{\phi}_n(y) = \widehat{\pi}_{y,n}\mathbf{1}$,
$$B_k^n(f, y) := 2 [P]_{\mathrm{loc}}^{2(n-k)} [f]_{\mathrm{loc}}^2 + 2 \|f\|_\infty^2 R_{n,k} + \|f\|_\infty R_{n,k}^2,$$
where
$$R_{n,k} = \frac{8s\, (M_n^s)^{\frac{s-1}{s}}}{K_g^2}\bigg[ [g_{k+1}]_{\mathrm{loc}}^2 + \Big( [g_k]_{\mathrm{loc}} + \sum_{m=1}^{n-k} [P]_{\mathrm{loc}}^{m-1}(1 + [P]_{\mathrm{loc}})\,[g_{k+m}]_{\mathrm{loc}} \Big)^2 \bigg]$$
and
$$M_n^s := 2 \max_{k=0,\ldots,n} \Big( \mathbb{E}\,\theta(X_k)^{\frac{2s}{s-1}} + \mathbb{E}\,\theta(\widehat{X}_k)^{\frac{2s}{s-1}} \Big).$$

  54. Numerical illustrations (3): risk-neutral price under the historical probability (Black-Scholes model, Euler scheme):
$$dY_t = \Big( r Y_t + \frac{\mu - r}{\sigma} Z_t \Big) dt + Z_t\, dW_t, \quad \text{with} \quad Y_T = h(X_T) = (X_T - K)_+.$$
⊲ Model parameters: $r = 0.1$; $T = 0.1$; $\sigma = 0.25$; $S_0 = K = 100$.
⊲ Quantization tree calibration: $7.5 \times 10^5$ Monte Carlo trials and NbLloyd $= 1$.
⊲ Reference: $\mathrm{Call}_{BS}(K, T) = 3.66$, $Z_0 = 14.148$. If $\mu \in \{0.05, 0.1, 0.15, 0.2\}$:
$n = 10$ and $N_k = \overline{N} = 20$: Q-price $= 3.65$, $\widehat{Z}_0 = 14.06$;
$n = 10$ and $N_k = \overline{N} = 40$: Q-price $= 3.66$, $\widehat{Z}_0 = 14.08$.
⊲ Computation time: 5 seconds for one contract; additional contracts for free (more than $10^5$/s).
⊲ Romberg extrapolation: price $= 2 \times$ Q-price$(N_2) -$ Q-price$(N_1)$ does improve the price (and the "hedge").

  55. Numerical illustrations. Bid-ask spreads on interest rates:
$$dY_t = \Big( r Y_t + \frac{\mu - r}{\sigma} Z_t + (R - r) \min\Big( Y_t - \frac{Z_t}{\sigma},\, 0 \Big) \Big) dt + Z_t\, dW_t,$$
with $Y_T = h(X_T) = (X_T - K_1)_+ - 2 (X_T - K_2)_+$, $K_1 = 95$, $K_2 = 105$, and $\mu = 0.05$, $r = 0.01$, $\sigma = 0.2$, $T = 0.25$, $R = 0.06$.
⊲ Reference values: price $= 2.978$, $\widehat{Z}_0 = 0.553$.
⊲ Crude quantized prices:
$n = 10$ and $N_k = \overline{N}_1 = 20$: Q-price $= 2.96$, $\widehat{Z}_0 = 0.515$;
$n = 10$ and $N_k = \overline{N}_2 = 40$: Q-price $= 2.97$, $\widehat{Z}_0 = 0.531$.
⊲ Romberg extrapolated price $= 2 \times$ Q-price$(\overline{N}_2) -$ Q-price$(\overline{N}_1) \simeq 2.98$, and Romberg extrapolated hedge $\widehat{Z}_0 \approx 0.547$.

  56. Multidimensional example (due to J.-F. Chassagneux).
⊲ Let $W$ be a $d$-dimensional Brownian motion and let $e_t = \exp\big(t + W_t^1 + \cdots + W_t^d\big)$.
⊲ Consider the non-linear BSDE
$$dX_t = dW_t, \qquad -dY_t = f(t, Y_t, Z_t)\, dt - Z_t \cdot dW_t, \qquad Y_T = \frac{e_T}{1 + e_T},$$
with $f(t, y, z) = (z_1 + \cdots + z_d)\Big(y - \frac{2+d}{2d}\Big)$.
⊲ Solution:
$$Y_t = \frac{e_t}{1 + e_t}, \qquad Z_t = \frac{e_t}{(1 + e_t)^2}.$$
We set $d = 2, 3$ and $T = 0.5$, so that $Y_0 = 0.5$ and $Z_0^i = 0.24$, $i = 1, \ldots, d$.

  57. Figure: convergence rate of the quantization error for the multidimensional example. Abscissa: the size $N = 5, \ldots, 100$ of the quantization. Ordinate: the error $|Y_0 - \widehat{Y}_0^N|$ and the graph $N \mapsto \widehat{a}/N + \widehat{b}$, where $\widehat{a}$ and $\widehat{b}$ are the regression coefficients. $d = 3$.

  60. Local behaviour of optimal quantizers (back to Bennett's conjecture).
Theorem (local behaviour: toward Bennett's conjecture, Graf-Luschgy-P., AoP, 2012).
(a) If $\mathbb{P}_X$ is absolutely continuous on $\mathbb{R}^d$, then
$$e_{N,p}^p(X) - e_{N+1,p}^p(X) \asymp N^{-(1 + \frac{p}{d})}.$$
(b) Upper bounds: suppose $\mathbb{P}_X = \varphi \cdot \lambda_d$, $\varphi$ is essentially bounded with compact support, and its support is peakless:
$$\forall s \in (0, s_0),\ \forall x \in \mathrm{supp}(\mathbb{P}_X), \quad \mathbb{P}_X\big(B(x, s)\big) \ge c\, \lambda_d\big(B(x, s)\big), \quad c > 0.$$
Then there exist $c, \bar{c} \in [1, \infty)$ such that, for every $N \in \mathbb{N}$,
$$\max_{x_i \in \Gamma^{*,N}} \mathbb{P}_X\big(C_i(\Gamma^{*,N})\big) \le \frac{c}{N} \quad \text{and} \quad \max_{x_i \in \Gamma^{*,N}} \int_{C_i(\Gamma^{*,N})} \|\xi - x_i\|^p\, \mathbb{P}_X(d\xi) \le \bar{c}\, N^{-(1 + \frac{p}{d})}.$$
(c) Lower bounds:
$$\forall N \in \mathbb{N}, \quad \min_{a \in \Gamma^{*,N}} \int_{C_a(\Gamma^{*,N})} \|\xi - a\|^p\, d\mathbb{P}(\xi) \ge \underline{c}\, N^{-(1 + \frac{p}{d})}.$$
⊲ Bennett's conjecture (1955): $\mathbb{P}\big(C_a(\Gamma^{*,N})\big) \sim \frac{c_X}{N}\, \varphi(a)^{\frac{p}{d+p}}$, $a \in \Gamma^{*,N}$, as $N \to +\infty$.
⊲ Various extensions to unbounded r.v.'s, including uniform results for radially decreasing distributions (Junglen, 2012).

  61. Optimal quadratic quantization of size 50 of $\mathcal{N}(0;1)$.
Figure: the optimal quantizer of size 50, $x^{(50)} = \big(x_1^{(50)}, \ldots, x_{50}^{(50)}\big)$; the weights $x_i \mapsto \mathbb{P}\big(X \in C_i(x^{(50)})\big)$; the local inertia $x_i \mapsto \int_{C_i(x^{(50)})} \big(\xi - x_i^{(50)}\big)^2\, \mathbb{P}_X(d\xi)$. Here $X \sim \mathcal{N}(0;1)$, $N = 50$.

  71. Applications to Numerical Probability.
What are the applications using optimal quantization grids?
- Obstacle problems: valuation of Bermudan and American options, reflected BSDEs (Bally-P.-Printems '01, '03 and '05; Illand '11).
- δ-hedging for American options (ibid. '05).
- Optimal stochastic control problems (P.-Pham-Printems '06); pricing of swing options (Bouthemy-Bardou-P. '09)... on massively parallel architectures (GPU; Bronstein-P.-Wilbertz '10); control of PDMPs (Dufour-de Saporta '13).
- Non-linear filtering and stochastic volatility models (P.-Pham-Printems '05, Pham-Sellami-Runggaldier '06, Sellami '09 & '10, Callegaro-Sagna '10).
- Discretization of SPDEs (stochastic Zakaï & McKean-Vlasov equations) [Gobet-P.-Pham-Printems '07].
- Quantization-based universal stratification (variance reduction) [Corlay-P. '10].
- CVaR-based dynamical risk hedging [Bardou-Frikha-P. '15].
- Fast marginal quantization [Sagna-P., 2015].

  72. First conclusions on optimal (Voronoi) vector quantization.
⊲ Download free pre-computed grids for the $\mathcal{N}(0; I_d)$ distributions at www.quantize.maths-fi.com, for $d = 1, \ldots, 10$ and $N = 1, \ldots, 10^4$, and many other items related to optimal quantization.
- Voronoi quantization is optimal for "Lipschitz approximation". Paradox: it does not preserve regularity.
- Second order (stationarity): (almost) only optimal grids ⇒ lack of flexibility.
- As for cubature: quantization vs uniformly distributed sequences? $(\xi_N)_{N \ge 1}$, $[0,1]^d$-valued sequences such that
$$\frac1N \sum_{i=1}^N \delta_{\xi_i} \Rightarrow \lambda_d{}_{|[0,1]^d}.$$
1. $\mathbb{R}^d$ vs $[0,1]^d$ [1-0].
2. Lipschitz continuity vs Hardy & Krause finite variation on $[0,1]^d$ [2-0].
3. Sequences of $N$-tuples vs sequences [2-1] (QMC!).
4. Companion weights vs no weights [2-2].
5. Rates $N^{-1/d}$ vs $\log N \times N^{-1/d}$ (Stoikov, 1987: the price for uniform weights!) [3-2].
How to "fix" (3) without affecting (4): greedy quantization.

  73. What is "greedy quantization" the name for?
⊲ Switch from a sequence of $N$-tuples to a sequence of points $(a_N)_{N \ge 1}$ such that, for every $N \ge 1$, $a^{(N)} = \{a_1, \ldots, a_N\}$ produces "good" quantization grids.
Among others, the first questions are:
- How to proceed theoretically?
- How "good"?
- How to compute them?
- How flexible can they be?

  74. Level-by-level "greedy" optimization.
Let $X \in L^p_{\mathbb{R}^d}(\Omega, \mathcal{A}, \mathbb{P})$ be a random vector with distribution $\mathbb{P}_X = \mu$.
⊲ Optimal greedy quantization: we define a sequence $(a_N)_{N \ge 1}$ recursively by
$$a^{(0)} = \emptyset, \qquad a_{N+1} \in \mathop{\mathrm{argmin}}_{\xi \in \mathbb{R}^d} e_p\big(a^{(N)} \cup \{\xi\}, X\big), \quad \forall N \ge 0.$$
⊲ It is a natural and constructive way to answer the first question above.
⊲ Is it the best one? No answer so far...
⊲ Note that $a_1$ always exists and is an $L^p(\mathbb{P})$-median (always unique if $p > 1$).
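A hedged sketch of this greedy recursion on a sample cloud, where the argmin over $\mathbb{R}^d$ is replaced by a brute-force search over a finite candidate set (here a sub-sample); names and sizes are illustrative:

```python
import numpy as np

def greedy_sequence(samples, N, candidates=None, p=2):
    """Greedy quantization (sketch): at each level, add the candidate point
    minimizing the empirical L^p error of a^(N) U {xi}."""
    if candidates is None:
        candidates = samples                       # search the support itself
    dist_to_grid = np.full(len(samples), np.inf)   # dist(X_m, a^(N)), updated
    grid = []
    for _ in range(N):
        best, best_err = None, np.inf
        for xi in candidates:
            d_new = np.minimum(dist_to_grid,
                               np.linalg.norm(samples - xi, axis=1))
            err = np.mean(d_new ** p)
            if err < best_err:
                best, best_err = xi, err
        grid.append(best)
        dist_to_grid = np.minimum(dist_to_grid,
                                  np.linalg.norm(samples - best, axis=1))
    return np.array(grid)

rng = np.random.default_rng(4)
X = rng.standard_normal((2_000, 2))
a = greedy_sequence(X, N=10, candidates=X[:200])   # a_1 is near the L^2-median
```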

  75. Existence of an $L^p$-optimal greedy quantization sequence.
Proposition (assume $\mathrm{card}\big(\mathrm{supp}(\mu)\big) = +\infty$ and $X \in L^p(\mathbb{P})$).
(a) Existence: there exists an $L^p$-optimal greedy quantization sequence $(a_N)_{N \ge 1}$, and $\big(e_p(a^{(n)}, X)\big)_{1 \le n \le N}$ is (strictly) decreasing to 0 (and $a_1$ is an $L^p$-median).
(b) Space filling: let $q > p$. If $X \in L^q_{\mathbb{R}^d}(\mathbb{P})$, then any $L^p$-optimal greedy quantization sequence $(a_N)_{N \ge 1}$ satisfies
$$\lim_N e_q\big(a^{(N)}, X\big) = 0.$$

  76. Greedy quantization is rate optimal.
⊲ Main rate optimality result.
Theorem (rate optimality, Luschgy-P. '15). Let $p \in (0, +\infty)$, $X \in L^p(\Omega, \mathcal{A}, \mathbb{P})$ and let $\mu = \mathbb{P}_X$. Let $(a_N)_{N \ge 1}$ be an $L^p$-optimal greedy quantization sequence.
(a) Let $p' > p$. There exists $C_{p,p',d} \in (0, +\infty)$ such that, for every $\mathbb{R}^d$-valued r.v. $X$,
$$\forall N \ge 1, \quad e_p\big(a^{(N)}, X\big) \le C_{p,p',d}\, \sigma_{p'}(X)\, N^{-\frac1d}.$$
(b) If $\mu = \varphi(\xi)\, \lambda_d(d\xi) = f(|\xi|_0)\, \lambda_d(d\xi)$, with $|\cdot|_0$ (any) norm on $\mathbb{R}^d$ and $f : \mathbb{R}_+ \to \mathbb{R}_+$ bounded and non-increasing outside a compact set, and if $X \in L^p$ and $\int_{\mathbb{R}^d} f(|\xi|_0)^{\frac{d}{d+p}}\, d\lambda_d(\xi) < +\infty$, then
$$\limsup_N N^{\frac1d}\, e_p\big(a^{(N)}, X\big) < +\infty.$$
The condition in (b) is optimal since, if $\mu = \varphi \cdot \lambda_d$,
$$\liminf_N N^{\frac1d}\, e_{p,N}(X) \ge \widetilde{Q}_{p,|\cdot|} \Big( \int_{\mathbb{R}^d} \varphi^{\frac{d}{d+p}}\, d\lambda_d \Big)^{\frac{d+p}{dp}}.$$
⊲ Main tool: still micro-macro inequalities.

  78. Flavour of proof.
⊲ First we note that, by definition of the sequence $(a_N)_{N \ge 1}$, for every $y \in \mathbb{R}^d$,
$$\Delta_{N+1}^{(a)} := e_p(a^{(N)}, X)^p - e_p(a^{(N+1)}, X)^p \ge e_p(a^{(N)}, X)^p - e_p(a^{(N)} \cup \{y\}, X)^p.$$
So we start from the micro-macro inequality ($0 < b < \frac12$ a fixed parameter): for every $y \in \mathbb{R}^d$,
$$e_p(a^{(N)}, X)^p - e_p(a^{(N)} \cup \{y\}, X)^p \ge C_{p,b}\, d(y, a^{(N)})^p\, \mu\big(B(y, b\, d(y, a^{(N)}))\big).$$
⊲ Let $\mu = \mathbb{P}_X$. Integrating w.r.t. a distribution $\nu(dy)$:
$$\Delta_{N+1}^{(a)} \ge C_{p,b} \iint \mathbf{1}_{\{|\xi - y| \le b\, d(y, a^{(N)})\}}\, d(y, a^{(N)})^p\, \nu(dy)\, \mu(d\xi)$$
$$\ge C_{p,b} \iint \mathbf{1}_{\{|\xi - y| \le b\, d(y, a^{(N)}),\ d(y, a^{(N)}) \ge \frac{1}{b+1} d(\xi, a^{(N)})\}}\, d(y, a^{(N)})^p\, \nu(dy)\, \mu(d\xi)$$
$$\ge C'_{p,b} \iint \mathbf{1}_{\{|\xi - y| \le b\, d(y, a^{(N)}),\ d(y, a^{(N)}) \ge \frac{1}{b+1} d(\xi, a^{(N)})\}}\, d(\xi, a^{(N)})^p\, \nu(dy)\, \mu(d\xi)$$
$$\ge C'_{p,b} \iint \mathbf{1}_{\{|\xi - y| \le \frac{b}{b+1}\, d(\xi, a^{(N)})\}}\, d(\xi, a^{(N)})^p\, \nu(dy)\, \mu(d\xi),$$
so that
$$\Delta_{N+1}^{(a)} \ge C'_{p,b} \int \nu\Big(B\big(\xi;\, \tfrac{b}{b+1}\, d(\xi, a^{(N)})\big)\Big)\, d(\xi, a^{(N)})^p\, \mu(d\xi),$$
still by Fubini's theorem.

  80. ⊲ Let $b \in (0, \frac12)$ be such that $\frac{b}{b+1} = \frac14$, and set
$$\nu(dx) = \frac{\kappa}{(|x - a_1| + 5/4)^{d+\eta}}\, \lambda_d(dx).$$
Then, if $\rho \le \frac14 |\xi - a_1|$,
$$\nu\big(B(\xi, \rho)\big) \ge \rho^d \times g(\xi), \qquad g(\xi) := \kappa' V_d\, \frac{1}{(|\xi - a_1| + 1)^{d+\eta}}.$$
Noting that $d(\xi, a^{(N)}) \le d(\xi, a_1) = |\xi - a_1|$ yields
$$e_p(a^{(N)}, X)^p - e_p(a^{(N+1)}, X)^p \ge C''_p \int d(\xi, a^{(N)})^{p+d}\, g(\xi)\, \mu(d\xi).$$
⊲ The reverse Minkowski inequality, with $\frac{p}{p+d} < 1$ and $-\frac{p}{d} < 0$, yields
$$\Delta_{N+1}^{(a)} \ge C''_p \Big[\underbrace{\int d(\xi, a^{(N)})^p\, \mu(d\xi)}_{=\,e_p(a^{(N)}, X)^p}\Big]^{\frac{p+d}{p}} \Big[\int g(\xi)^{-\frac{p}{d}}\, \mu(d\xi)\Big]^{-\frac{d}{p}}.$$
Now
$$\int g(\xi)^{-\frac{p}{d}}\, \mu(d\xi) \asymp \mathbb{E}\,|X|^{p + \frac{\eta p}{d}} < +\infty,$$
so that
$$e_p(a^{(N)}, X)^p - e_p(a^{(N+1)}, X)^p \ge C_{p,X} \cdot e_p(a^{(N)}, X)^{p+d}.$$
Since $u_N := e_p(a^{(N)}, X)^p$ then satisfies $u_N - u_{N+1} \ge C_{p,X}\, u_N^{1 + \frac{d}{p}}$, a standard comparison argument yields $u_N = O(N^{-p/d})$, i.e. the announced rate $e_p(a^{(N)}, X) = O(N^{-1/d})$.
