From Martingales in Finance to Quantization for Pricing


1. From Martingales in Finance to Quantization for Pricing

Giorgia Callegaro, Università di Padova. Workshop on Martingales in Finance and Physics, ICTP, 24 May 2019. Mostly based on recent papers with Lucio Fiorin and Martino Grasselli.

Outline: Introduction; Quantization; Recursive marginal quantization; Results and perspectives.

2. Martingales in Finance: why?

- [Arbitrage] Intuition: the possibility of a riskless profit. An arbitrage is an investment strategy whose cost today is non-positive, whose (portfolio) value tomorrow is non-negative, and strictly positive with positive probability (recall today's first talk).
- [Viability] A market is viable if there is no arbitrage opportunity: we then say that AOA (absence of arbitrage opportunities) holds.
- [Key theorem] The market is viable if and only if there exists a probability measure Q equivalent to P such that the discounted asset prices are Q-martingales. P: real-world measure; Q: risk-neutral probability.

3. Pricing and hedging in a nutshell

- [Call and Put] A Call option gives the holder the right to buy an underlying asset X, at a fixed time T and at a specified price (the strike) K > 0: its value is $F^C_T = (X_T - K)^+$. The Put option analogously gives the holder the right to sell: $F^P_T = (K - X_T)^+$.
- [Pricing] The price at time t of a European option with payoff $F_T = f(X_T)$ is $\mathbb{E}^Q\big[ e^{-r(T-t)} f(X_T) \,\big|\, \mathcal{F}_t \big]$, where $(\mathcal{F}_t)_{t \in [0,T]}$ is the available filtration.
- [Hedging] An investment strategy whose portfolio value coincides at any time with (replicates) the option's value.
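A minimal numerical sketch of the pricing formula at t = 0, assuming a Black-Scholes model with purely illustrative parameters (x0, K, r, sigma, T are not taken from the talk): the price is estimated as a discounted Monte Carlo average of the call payoff.

```python
import numpy as np

# Sketch: estimate E_Q[e^{-rT} (X_T - K)^+] by Monte Carlo under an
# illustrative Black-Scholes model (hypothetical parameters).
rng = np.random.default_rng(0)
x0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
Z = rng.standard_normal(100_000)
# Risk-neutral terminal value: X_T = x0 exp((r - sigma^2/2) T + sigma sqrt(T) Z)
X_T = x0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
price = np.exp(-r * T) * np.mean(np.maximum(X_T - K, 0.0))
print(f"Monte Carlo call price: {price:.4f}")
```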

4. What if we discretize X_T?

When the random variable X_T takes only finitely many values, the expectation (price) reduces to a finite sum.

Quantization: approximating a signal (random variable) admitting a continuum of possible values by a signal that takes values in a discrete set.

Figure: picture taken from McWalter et al. [9].
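By contrast with the Monte Carlo average above, once X_T is discrete the price is an exact finite sum. A minimal sketch, assuming a hypothetical 3-point distribution for the discretized X_T:

```python
import numpy as np

# Sketch: if X_T takes values x_i with probabilities p_i, the call price is a
# finite sum. The 3-point grid and weights below are purely illustrative.
x = np.array([80.0, 100.0, 120.0])   # possible values of the discretized X_T
p = np.array([0.3, 0.4, 0.3])        # P(X_T = x_i)
r, T, K = 0.02, 1.0, 100.0
price = np.exp(-r * T) * np.sum(np.maximum(x - K, 0.0) * p)
print(f"Finite-sum call price: {price:.4f}")
```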

5. Quantization: a brief history

- [Birth] Back to the 1950s, to optimize signal transmission.
- [Two worlds] Vector quantization ↔ random variables; functional quantization ↔ stochastic processes.
- [Applications] Information theory, cluster analysis, pattern and speech recognition, numerical integration and probability.
- [How?] Numerical procedures mostly based on stochastic optimization algorithms, hence very time consuming.

Today's menu: discretize random variables and stochastic processes in a fast and efficient way via recursive marginal quantization.

6. Vector quantization: some math

Given an $\mathbb{R}^d$-valued random variable X on $(\Omega, \mathcal{A}, \mathbb{P})$ (or Q), with $X \in L^r$, N-quantizing X on a grid $\Gamma = (x_1, \ldots, x_N)$ consists in projecting X on Γ. To define the projection univocally, we specify a partition $(C_i)_{1 \le i \le N}$ of $\mathbb{R}^d$, so that

$\mathrm{Proj}_\Gamma(X) = \sum_{i=1}^N x_i \, \mathbf{1}_{C_i}(X).$

The induced $L^r$ error

$\| X - \mathrm{Proj}_\Gamma(X) \|_r = \mathbb{E}\Big[ \min_{1 \le i \le N} |X - x_i|^r \Big]^{1/r}$

is called the $L^r$-mean quantization error.

N.B. For a complete background on optimal quantization: Graf and Luschgy [7].
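A short sketch of the nearest-neighbour projection and of a Monte Carlo estimate of the L^r error, assuming a 2-d standard Gaussian X and a randomly drawn (hence non-optimal) grid:

```python
import numpy as np

# Sketch: project samples of X on a grid Γ and estimate the L^r quantization
# error by Monte Carlo. The random grid is a rough stand-in, not optimal.
rng = np.random.default_rng(1)
X = rng.standard_normal((50_000, 2))        # samples of a 2-d Gaussian X
grid = rng.standard_normal((50, 2))         # a grid Γ of size N = 50
r = 2
dists = np.linalg.norm(X[:, None, :] - grid[None, :, :], axis=2)  # |X - x_i|
proj = grid[dists.argmin(axis=1)]           # Proj_Γ(X): closest grid point
Lr_error = np.mean(np.min(dists, axis=1) ** r) ** (1.0 / r)
print(f"Estimated L^{r} quantization error: {Lr_error:.4f}")
```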

7. Quadratic optimal quantization (r = 2)

Let us focus, from now on, on r = 2! How do we choose the N points in Γ? By minimizing the $L^2$ error!

A grid $\Gamma^\star$ minimizing the $L^2$-quantization error over all grids of size at most N is the optimal quadratic quantizer. The projection of X on $\Gamma^\star$, written $\mathrm{Proj}_{\Gamma^\star}(X)$, or $\widehat{X}^{\Gamma^\star}$, or $\widehat{X}$ for simplicity, is called the quantization of X, and the associated partition

$C_i(\Gamma^\star) \subset \Big\{ \xi \in \mathbb{R}^d : |\xi - x_i| = \min_{1 \le j \le N} |\xi - x_j| \Big\}$

is called the Voronoï partition, or tessellation, induced by $\Gamma^\star$. $\mathrm{Proj}_{\Gamma^\star}(X)$ is defined as the closest-neighbor projection on $\Gamma^\star$.

8. Vector quantization: example (N = 50)

Figure: optimal quantizer and Voronoï tessellation of a 2-d Gaussian random variable (both axes from −4 to 4).

9. Vector quantization: some useful facts

Theory:
- The $L^r$ error goes to zero as $N \to +\infty$ (Zador's theorem).
- The distortion function (the squared quadratic quantization error) always attains a minimum at an N-tuple $\Gamma^\star$ with pairwise distinct components.

Practice:
- d = 1: optimal quantizers can be obtained via a standard Newton-Raphson procedure (a fixed-point variant is sketched below).
- d ≥ 2: stochastic gradient descent algorithms are required (or standard gradient descent when the distribution can be easily simulated).
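A sketch of how such a 1-d quantizer can be computed, assuming X ~ N(0, 1). It uses Lloyd's fixed-point iteration, a simpler swapped-in alternative to Newton-Raphson that targets the same stationarity condition introduced a few slides below; grid size and seed are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Sketch: 1-d quantizer of N(0,1) via Lloyd's fixed-point iteration,
# x_i <- E[X | X in C_i(Γ)], iterated until (approximate) stationarity.
N = 10
x = np.sort(np.random.default_rng(2).standard_normal(N))  # initial grid
for _ in range(500):
    mid = (x[:-1] + x[1:]) / 2                  # Voronoï cell boundaries
    a = np.concatenate(([-np.inf], mid))        # left edges of the cells
    b = np.concatenate((mid, [np.inf]))         # right edges of the cells
    mass = norm.cdf(b) - norm.cdf(a)            # P(X in C_i)
    x = (norm.pdf(a) - norm.pdf(b)) / mass      # E[X | X in C_i] (Gaussian)
print("Near-stationary grid:", np.round(x, 3))
```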

10. Vector quantization: fixing the ideas

Optimal quadratic quantization of X gives access to an N-tuple $\Gamma = \{x_1, x_2, \ldots, x_N\}$ which minimizes the $L^2$ distance between X and $\widehat{X}^\Gamma$. This provides the best possible quadratic approximation of a random vector X by a random vector taking (at most) N values.

11. Vector quantization: numerical integration

Given an integrable function f, a random variable X and a (hopefully optimal) quantizer $\Gamma = \{x_1, \ldots, x_N\}$, $\mathbb{E}[f(X)]$ can be approximated by the finite sum

$\mathbb{E}[f(\widehat{X}^\Gamma)] = \sum_{i=1}^N f(x_i)\, \mathbb{P}(\widehat{X}^\Gamma = x_i).$

If f is Lipschitz continuous, then

$\big| \mathbb{E}[f(X)] - \mathbb{E}[f(\widehat{X}^\Gamma)] \big| \le [f]_{\mathrm{Lip}}\, \| X - \widehat{X}^\Gamma \|_2,$

and $\| X - \widehat{X}^\Gamma \|_2 \to 0$ as $N \to \infty$ (Zador's theorem).

N.B. When f is smoother, this error bound can be significantly improved.
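A sketch of this cubature formula for X ~ N(0, 1), with an illustrative (non-optimal) uniform grid. In 1-d the cell weights are exact Gaussian masses, so the only error here is the quantization error itself; the test function is an assumption made for the comparison.

```python
import numpy as np
from scipy.stats import norm

# Sketch: quantization cubature sum_i f(x_i) P(X-hat = x_i) for X ~ N(0,1),
# compared with the closed-form value E[(X - c)^+] = phi(c) - c (1 - Phi(c)).
x = np.linspace(-2.5, 2.5, 11)                   # grid Γ of size N = 11
mid = (x[:-1] + x[1:]) / 2
a = np.concatenate(([-np.inf], mid))
b = np.concatenate((mid, [np.inf]))
p = norm.cdf(b) - norm.cdf(a)                    # cell weights P(X-hat = x_i)
f = lambda u: np.maximum(u - 0.5, 0.0)           # a Lipschitz test function
approx = float(np.sum(f(x) * p))
exact = norm.pdf(0.5) - 0.5 * (1 - norm.cdf(0.5))
print(f"quantized: {approx:.4f}, exact: {exact:.4f}")
```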

12. Vector quantization: towards stationary quantizers

What do we need in practice to quantize X?
- The grid $\Gamma^\star = \{x_1, x_2, \ldots, x_N\}$;
- The weights of the cells of the Voronoï tessellation, $\mathbb{P}(X \in C_i(\Gamma^\star)) = \mathbb{P}(\widehat{X} = x_i)$, $i = 1, \ldots, N$ (a Monte Carlo estimate is sketched below).

From a numerical point of view, finding an optimal quantizer may be a very challenging and time-consuming task. This motivates the introduction of a sub-optimal criterion: stationary quantizers.
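For d ≥ 2 the cell weights have no simple closed form, but they can be estimated by Monte Carlo nearest-neighbour counting. A sketch with an illustrative random grid:

```python
import numpy as np

# Sketch: estimate P(X-hat = x_i) by counting which Voronoï cell each sample
# falls into; works in any dimension d. Grid is illustrative, not optimal.
rng = np.random.default_rng(3)
X = rng.standard_normal((100_000, 2))            # samples of X
grid = rng.standard_normal((50, 2))              # grid Γ of size N = 50
idx = np.linalg.norm(X[:, None, :] - grid[None, :, :], axis=2).argmin(axis=1)
weights = np.bincount(idx, minlength=len(grid)) / len(X)
assert np.isclose(weights.sum(), 1.0)            # weights form a distribution
```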

13. Stationary quantizers

- Definition: $\Gamma = \{x_1, \ldots, x_N\}$ is stationary for X if $\mathbb{E}\big[ X \,\big|\, \widehat{X}^\Gamma \big] = \widehat{X}^\Gamma$.
- Optimal quantizers are stationary;
- Stationary quantizers Γ are critical points of the distortion function:

$\nabla D(\Gamma) = 0, \qquad (1)$

where the distortion function is the square of the $L^2$ error:

$D(\Gamma) := \sum_{i=1}^N \int_{C_i(\Gamma)} |u - x_i|^2 \, d\mathbb{P}_X(u).$
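A sketch of the criterion (1) for X ~ N(0, 1), where each component of the gradient has the closed form $\partial D / \partial x_i = 2\big(x_i\, \mathbb{P}(X \in C_i) - \int_{C_i} u\,\varphi(u)\,du\big)$; the grid is an illustrative assumption, and the printed gradient vanishes exactly when the grid is stationary.

```python
import numpy as np
from scipy.stats import norm

# Sketch: closed-form gradient of the distortion for a 1-d Gaussian.
# Stationarity <=> grad = 0 <=> each x_i is the conditional mean of its cell.
x = np.array([-1.6, -0.5, 0.5, 1.6])                # illustrative grid
mid = (x[:-1] + x[1:]) / 2
a = np.concatenate(([-np.inf], mid))
b = np.concatenate((mid, [np.inf]))
mass = norm.cdf(b) - norm.cdf(a)                    # P(X in C_i)
cell_mean = norm.pdf(a) - norm.pdf(b)               # integral of u phi(u) over C_i
grad = 2.0 * (x * mass - cell_mean)                 # ∇D(Γ)
print("gradient of the distortion:", np.round(grad, 4))
```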

14. From stationary quantizers to the quantization of a stochastic process

Stationary quantizers are interesting from a numerical point of view: they can be found through zero-search recursive procedures like Newton's algorithm.

⇒ We therefore look for stationary (sub-optimal) quantizers. How do we quantize a stochastic process with these ideas?

15. "Step by step marginal quantization": warm up

Recently introduced by Pagès and Sagna [8].

- Consider a continuous-time Markov process Y,

$dY_t = b(t, Y_t)\, dt + a(t, Y_t)\, dW_t, \qquad Y_0 = y_0 > 0,$

where W is a standard Brownian motion and a and b satisfy the usual conditions ensuring the existence of a (strong) solution to the SDE;
- Given T > 0 and a time grid $\{0 = t_0, t_1, \ldots, t_M = T\}$, with $\Delta_k = t_k - t_{k-1}$, $k \ge 1$, the Euler scheme is

$\widetilde{Y}_{t_k} = \widetilde{Y}_{t_{k-1}} + b(t_{k-1}, \widetilde{Y}_{t_{k-1}})\, \Delta_k + a(t_{k-1}, \widetilde{Y}_{t_{k-1}})\, \Delta W_k, \qquad \widetilde{Y}_{t_0} = \widetilde{Y}_0 = y_0,$

where $\Delta W_k := W_{t_k} - W_{t_{k-1}} \sim \mathcal{N}(0, \Delta_k)$.
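A sketch of this Euler scheme on a uniform time grid, assuming illustrative geometric-Brownian-motion coefficients b(t, y) = 0.02 y and a(t, y) = 0.2 y (not the model of the talk):

```python
import numpy as np

# Sketch: one Euler path for dY = b(t,Y) dt + a(t,Y) dW on a uniform grid.
b = lambda t, y: 0.02 * y                        # drift b(t, y), illustrative
a = lambda t, y: 0.2 * y                         # diffusion a(t, y), illustrative
y0, T, M = 100.0, 1.0, 50
dt = T / M                                       # Δ_k = t_k - t_{k-1}
rng = np.random.default_rng(4)
Y = np.empty(M + 1)
Y[0] = y0
for k in range(1, M + 1):
    dW = rng.normal(0.0, np.sqrt(dt))            # ΔW_k ~ N(0, Δ_k)
    t_prev = (k - 1) * dt
    Y[k] = Y[k - 1] + b(t_prev, Y[k - 1]) * dt + a(t_prev, Y[k - 1]) * dW
```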

16. "Step by step marginal quantization": warm up (continued)

- Key remark: for every $k = 1, \ldots, M$,

$\mathcal{L}\big( \widetilde{Y}_{t_k} \,\big|\, \widetilde{Y}_{t_{k-1}} = x \big) \sim \mathcal{N}\big( m_{k-1}(x), \sigma^2_{k-1}(x) \big), \qquad (2)$

where $m_{k-1}(x) = x + b(t_{k-1}, x)\, \Delta_k$ and $\sigma^2_{k-1}(x) = [a(t_{k-1}, x)]^2\, \Delta_k$.
- Idea: quantize recursively every marginal random variable $\widetilde{Y}_{t_k}$ (vector quantization), exploiting (2).
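A sketch of the one-step conditional law (2): the helper below (a hypothetical name, with the same illustrative GBM coefficients as in the Euler sketch above) returns the mean $m_{k-1}(x)$ and variance $\sigma^2_{k-1}(x)$ of the Gaussian transition.

```python
# Sketch: explicit mean and variance of the Euler transition in (2).
b = lambda t, y: 0.02 * y                        # illustrative drift
a = lambda t, y: 0.2 * y                         # illustrative diffusion

def euler_transition(x, t_prev, dt):
    """Mean m_{k-1}(x) and variance sigma^2_{k-1}(x) of the one-step law."""
    m = x + b(t_prev, x) * dt
    s2 = a(t_prev, x) ** 2 * dt
    return m, s2

m, s2 = euler_transition(100.0, 0.0, 0.02)       # e.g. x = y0, first step
```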

17. "Step by step marginal quantization": stationary quantizers

The distortion function at time $t_k$, relative to $\widetilde{Y}_{t_k}$, is

$D_k(\Gamma_k) = \sum_{i=1}^N \int_{C_i(\Gamma_k)} (y - y^k_i)^2 \, \mathbb{P}\big( \widetilde{Y}_{t_k} \in dy \big),$

where N is the (fixed) size of the grid $\Gamma_k = \{ y^k_1, y^k_2, \ldots, y^k_N \}$.

Target: find $\Gamma_k \in \mathbb{R}^N$ such that $\nabla D_k(\Gamma_k) = 0$.

Question: can we apply Newton-Raphson now? Answer: NO! We do NOT know the distribution of $\widetilde{Y}_{t_k}$!
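To see why this is the sticking point: $D_k(\Gamma_k)$ can still be evaluated by brute-force Monte Carlo over simulated Euler paths, as in the sketch below (illustrative grid and GBM coefficients), but that gives no tractable way to solve $\nabla D_k(\Gamma_k) = 0$; the recursive marginal quantization of the following slides addresses exactly this.

```python
import numpy as np

# Sketch: Monte Carlo evaluation of the terminal distortion D_M(Γ_M), using
# simulated Euler paths in place of the unknown law of Y~_{t_M}.
rng = np.random.default_rng(5)
b = lambda t, y: 0.02 * y                        # illustrative drift
a = lambda t, y: 0.2 * y                         # illustrative diffusion
y0, T, M, n_paths = 100.0, 1.0, 50, 20_000
dt = T / M
Y = np.full(n_paths, y0)
for k in range(M):                               # Euler paths up to t_M = T
    Y += b(k * dt, Y) * dt + a(k * dt, Y) * rng.normal(0.0, np.sqrt(dt), n_paths)
grid = np.linspace(60.0, 160.0, 20)              # a candidate grid Γ_M
D_M = np.mean(np.min((Y[:, None] - grid[None, :]) ** 2, axis=1))
print(f"Monte Carlo distortion D_M(Γ_M): {D_M:.4f}")
```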
