Rolling Adjoints: Fast Greeks along Monte Carlo scenarios for early-exercise options

Shashi Jain, Álvaro Leitao and Cornelis W. Oosterlee
QuantMinds International - Lisbon, May 16, 2018


  1. Rolling Adjoints: Fast Greeks along Monte Carlo scenarios for early-exercise options
     Shashi Jain, Álvaro Leitao and Cornelis W. Oosterlee
     QuantMinds International - Lisbon, May 16, 2018

  2. Motivation
     - Efficient calculation of option sensitivities is a problem of practical importance.
     - For many pricing problems, Monte Carlo is the only feasible choice; this is typically the case for early-exercise options.
     - The usual finite-difference approach (bump-and-revalue) provides poor estimates at a high computational cost.
     - Computing sensitivities along the paths, i.e. at intermediate times, is even more involved.
     - "Generalization" of the Smoking Adjoints technique by Giles and Glasserman to a generic interval.
     - Sensitivities along the paths are required for MVA calculations.
     - Hedging in energy markets: multiple-exercise contracts.

  3. Outline
     1. Problem formulation
     2. Stochastic Grid Bundling Method (SGBM)
     3. Sensitivities along the paths with SGBM
     4. Numerical results
     5. Conclusions

  4. Problem formulation
     - $d$-dimensional Bermudan option pricing problem.
     - $X_t = (X_t^1, \ldots, X_t^d) \in \mathbb{R}^d$, depending on parameters $\theta = \{\theta_1, \ldots, \theta_{N_\theta}\}$.
     - Let $h_t := h(X_t)$ be the intrinsic value of the option at time $t$. The holder receives $\max(h_t, 0)$ if the option is exercised.
     - The problem is to compute
       $$ \frac{V_{t_0}(X_{t_0})}{B_{t_0}} = \max_{\tau} \, \mathbb{E}\left[ \frac{h(X_\tau)}{B_\tau} \right], $$
       where $B_t$ is the risk-free savings account process and $\tau$ is a stopping time.
     - Optimization problem: determine the early-exercise policy.
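As a concrete example of the intrinsic value $h$, a minimal sketch in Python, assuming (purely as an illustration, not fixed by the slides) a put on the maximum of $d$ assets:

```python
# An illustrative intrinsic value h(X_t): a put on the maximum of d assets.
# The strike and the payoff shape are assumptions for the example only.
import numpy as np

def payoff(X, strike=100.0):
    """X: array of shape (N, d) of asset values per path; returns h(X_t)."""
    return strike - X.max(axis=1)   # the holder receives max(h, 0) on exercise
```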

  5. Problem formulation
     - The problem can be solved by the dynamic programming principle.
     - The option value at the terminal time $T$ is
       $$ V_T(X_T) = \max(h(X_T), 0). $$
     - We solve the problem recursively, moving backwards in time. The continuation value $Q_{t_{m-1}}$ is given by
       $$ Q_{t_{m-1}}(X_{t_{m-1}}) = B_{t_{m-1}} \, \mathbb{E}\left[ \frac{V_{t_m}(X_{t_m})}{B_{t_m}} \,\Big|\, X_{t_{m-1}} \right]. $$
     - The Bermudan option value at time $t_{m-1}$ and state $X_{t_{m-1}}$ reads
       $$ V_{t_{m-1}}(X_{t_{m-1}}) = \max\left( h(X_{t_{m-1}}), \, Q_{t_{m-1}}(X_{t_{m-1}}) \right). $$
     - We are interested in $V_{t_0}$.
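A minimal sketch of this backward recursion, assuming a user-supplied estimator of the (discounted) continuation value; all names here are illustrative, not from the slides:

```python
# Backward induction for a Bermudan option, given simulated paths and a
# generic continuation-value estimator (assumed to include discounting).
import numpy as np

def bermudan_value(paths, payoff, continuation):
    """paths: array of shape (M+1, N, d); backward induction over exercise dates."""
    M = paths.shape[0] - 1
    V = np.maximum(payoff(paths[M]), 0.0)         # V_{t_M} = max(h(X_{t_M}), 0)
    for m in range(M, 0, -1):
        h = payoff(paths[m - 1])                  # immediate exercise value
        Q = continuation(m - 1, paths[m - 1], V)  # estimate of Q_{t_{m-1}}
        V = np.maximum(h, Q)                      # V_{t_{m-1}} = max(h, Q)
    return V.mean()                               # Monte Carlo estimate of V_{t_0}
```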

  6. Stochastic Grid Bundling Method (SGBM)
     - SGBM is based on $N$ independent paths, $\{X_{t_0}, \ldots, X_{t_M}\}$, obtained by a discretization scheme
       $$ X_{t_m}(n) = F_{m-1}\left( X_{t_{m-1}}(n), Z_{t_{m-1}}(n), \theta \right), $$
       where $n = 1, \ldots, N$ is the index of the path, $Z_{t_{m-1}}$ is a $d$-dimensional standard normal random vector, and $F_{m-1}$ is a transformation from $\mathbb{R}^d$ to $\mathbb{R}^d$.
     - The method starts by computing the option value at terminal time as
       $$ V_{t_M}(X_{t_M}) = \max(h(X_{t_M}), 0). $$
     - The following SGBM components are performed for each time step $t_m$, $m \le M$, moving backwards in time, starting from $t_M$.
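As an example of the transformation $F_{m-1}$, a sketch of forward path generation under an assumed one-dimensional Black-Scholes model, where the exact log-normal step plays the role of $F_{m-1}$:

```python
# Forward simulation X_{t_m} = F_{m-1}(X_{t_{m-1}}, Z_{t_{m-1}}, theta),
# assuming a 1-d Black-Scholes model with exact log-normal stepping.
import numpy as np

def simulate_paths(S0, r, sigma, T, M, N, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / M
    X = np.empty((M + 1, N))
    X[0] = S0
    for m in range(1, M + 1):
        Z = rng.standard_normal(N)   # Z_{t_{m-1}}: standard normal draws
        X[m] = X[m - 1] * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)
    return X
```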

  7. SGBM - Bundling
     - The grid points at $t_{m-1}$ are bundled into $\mathcal{B}_{t_{m-1}}(1), \ldots, \mathcal{B}_{t_{m-1}}(\nu)$ non-overlapping sets or partitions.
     - Several bundling techniques can be employed:
       ◮ equal-partitioning
       ◮ k-means clustering algorithm
       ◮ recursive bifurcation
       ◮ recursive bifurcation of a reduced state space
     - A mapping $I^{\beta}_{t_{m-1}}: \mathbb{N}_{[1, N_\beta]} \mapsto \mathbb{N}_{[1, N]}$ is defined which maps the ordered indices of the paths in a bundle $\mathcal{B}_{t_{m-1}}(\beta)$ to the original path indices, where $N_\beta := |\mathcal{B}_{t_{m-1}}(\beta)|$ is the cardinality of the $\beta$-th bundle, $\beta = 1, \ldots, \nu$.
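A sketch of the simplest of these techniques, equal-partitioning, assuming a scalar sorting key; the returned index arrays play the role of the maps $I^{\beta}_{t_{m-1}}$:

```python
# Equal-partitioning bundling: sort paths at t_{m-1} by a scalar key
# (here the state itself, assumed one-dimensional) and cut the sorted
# order into nu bundles of (nearly) equal size.
import numpy as np

def equal_partition_bundles(x, nu):
    order = np.argsort(x)             # rank paths by state at t_{m-1}
    return np.array_split(order, nu)  # nu arrays of original path indices

# bundles[beta] maps within-bundle indices 1..N_beta to original path
# indices 1..N, i.e. it plays the role of I^beta_{t_{m-1}}.
```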

  8. SGBM - Regression
     - Regress-later approach within each bundle $\mathcal{B}_{t_{m-1}}(\beta)$, $\beta = 1, \ldots, \nu$.
     - A parameterized value function $\tilde{G}: \mathbb{R}^d \times \mathbb{R}^K \mapsto \mathbb{R}$, which assigns values $\tilde{G}(X_{t_m}, \alpha^{\beta}_{t_m})$ to states $X_{t_m}$, is introduced. The aim is to choose, for each $t_m$ and $\beta$, a vector $\alpha^{\beta}_{t_m}$ so that
       $$ \tilde{G}\left( X_{t_m}, \alpha^{\beta}_{t_m} \right) = V_{t_m}(X_{t_m}). $$
     - The option value is approximated as a linear combination of a finite number of orthonormal basis functions $\phi_k$ as
       $$ V_{t_m}(X_{t_m}) \approx \tilde{G}\left( X_{t_m}, \hat{\alpha}^{\beta}_{t_m} \right) := \sum_{k=1}^{K} \hat{\alpha}^{\beta}_{t_m}(k) \, \phi_k(X_{t_m}). $$
     - The $\hat{\alpha}^{\beta}_{t_m}$ weights are approximated using a least-squares regression:
       $$ \hat{\alpha}^{\beta}_{t_m} = \operatorname*{argmin}_{\alpha^{\beta}_{t_m}} \sum_{n=1}^{N_\beta} \left[ V_{t_m}\!\left( X_{t_m}\!\left( I^{\beta}_{t_{m-1}}(n) \right) \right) - \sum_{k=1}^{K} \alpha^{\beta}_{t_m}(k) \, \phi_k\!\left( X_{t_m}\!\left( I^{\beta}_{t_{m-1}}(n) \right) \right) \right]^2. $$
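A sketch of this least-squares fit inside a single bundle, using a monomial basis as an illustrative stand-in for the orthonormal basis functions on the slide:

```python
# Regress-later step inside one bundle: regress the time-t_m option values
# on basis functions of the time-t_m states. The monomial basis is an
# assumption of this sketch; the least-squares mechanics are the same.
import numpy as np

def bundle_coefficients(x_tm, v_tm, K):
    """x_tm, v_tm: states and option values of the paths in one bundle at t_m."""
    A = np.vander(x_tm, K, increasing=True)           # A[n, k] = phi_{k+1}(X_{t_m}(I(n)))
    alpha, *_ = np.linalg.lstsq(A, v_tm, rcond=None)  # least-squares fit
    return alpha                                      # hat{alpha}^beta_{t_m}
```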

  9. SGBM - Continuation and option values
     - The continuation values for $X_{t_{m-1}}(n) \in \mathcal{B}_{t_{m-1}}(\beta)$, $n = 1, \ldots, N$, $\beta = 1, \ldots, \nu$, are approximated by
       $$ \hat{Q}_{t_{m-1}}\left( X_{t_{m-1}}(n) \right) = \mathbb{E}\left[ \tilde{G}\left( X_{t_m}, \hat{\alpha}^{\beta}_{t_m} \right) \Big|\, X_{t_{m-1}}(n) \right]. $$
     - Exploiting the linearity of the expectation operator, this is written as
       $$ \hat{Q}_{t_{m-1}}(X_{t_{m-1}}(n)) = \sum_{k=1}^{K} \hat{\alpha}^{\beta}_{t_m}(k) \, \mathbb{E}\left[ \phi_k(X_{t_m}) \,\big|\, X_{t_{m-1}}(n) \right]. $$
     - The basis functions $\phi_k$ should ideally be chosen such that the expectations $\mathbb{E}\left[ \phi_k(X_{t_m}) \,|\, X_{t_{m-1}} \right]$ are known in closed form, or have analytic approximations.
     - The option value at each exercise time is then given by
       $$ \hat{V}_{t_{m-1}}\left( X_{t_{m-1}}(n) \right) = \max\left( h\left( X_{t_{m-1}}(n) \right), \, \hat{Q}_{t_{m-1}}\left( X_{t_{m-1}}(n) \right) \right). $$
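For the assumed GBM model and monomial basis, the conditional expectations are indeed closed-form, $\mathbb{E}[S_{t_m}^k \,|\, S_{t_{m-1}}] = S_{t_{m-1}}^k \, e^{k r \Delta t + \frac{1}{2} k(k-1) \sigma^2 \Delta t}$, which gives a direct sketch of $\hat{Q}_{t_{m-1}}$; applying the one-step discount factor inside this function is an assumption of the sketch:

```python
# Continuation value with closed-form conditional expectations, assuming
# the GBM model and monomial basis phi_{k+1}(x) = x^k from the sketches above.
import numpy as np

def continuation_value(x_prev, alpha, r, sigma, dt):
    """x_prev: states X_{t_{m-1}}(n) in one bundle; alpha: bundle coefficients."""
    k = np.arange(len(alpha))
    growth = np.exp(k * r * dt + 0.5 * k * (k - 1) * sigma**2 * dt)
    moments = x_prev[:, None] ** k * growth   # E[phi_k(X_{t_m}) | X_{t_{m-1}}(n)]
    return np.exp(-r * dt) * moments @ alpha  # hat{Q}_{t_{m-1}} (discounting assumed here)
```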

  10. Sensitivities along the paths with SGBM
      - Naturally, we follow a backward iteration, starting at maturity, where the sensitivities are again trivial to calculate.
      - We focus on two main sensitivities of interest:
        ◮ with respect to $X_{t_{m-1}}$, i.e. $\frac{\partial V_{t_{m-1}}(X_{t_{m-1}})}{\partial X_{t_{m-1}}}$;
        ◮ with respect to the model parameters, $\frac{\partial V_{t_{m-1}}(X_{t_{m-1}})}{\partial \theta}$.
      - The method requires the derivatives of the regression coefficients, $\hat{\alpha}^{\beta}_{t_m}$.
      - Assuming minimal smoothness of the option value function $V$,
        $$ \frac{\partial}{\partial \theta} \, \mathbb{E}\left[ \frac{V_{t_m}(X_{t_m})}{B_{t_m}} \,\Big|\, X_{t_{m-1}} \right] = \mathbb{E}\left[ \frac{\partial}{\partial \theta} \left( \frac{V_{t_m}(X_{t_m})}{B_{t_m}} \right) \Big|\, X_{t_{m-1}} \right]. $$

  11. Delta along the paths
      - Delta is the sensitivity of the option value at $t_{m-1}$ w.r.t. $X_{t_{m-1}}$:
        $$ \frac{\partial V_{t_{m-1}}(X_{t_{m-1}})}{\partial X_{t_{m-1}}} = \mathbf{1}_{\{Q_{t_{m-1}} < h(X_{t_{m-1}})\}} \, \frac{\partial h(X_{t_{m-1}})}{\partial X_{t_{m-1}}} + \mathbf{1}_{\{Q_{t_{m-1}} \ge h(X_{t_{m-1}})\}} \, \frac{\partial Q_{t_{m-1}}(X_{t_{m-1}})}{\partial X_{t_{m-1}}}. $$
      - The derivative of the immediate payoff, $h$, is usually easy to compute.
      - The sensitivity of the continuation value function follows from the product rule:
        $$ \frac{\partial \hat{Q}_{t_{m-1}}(X_{t_{m-1}}(n))}{\partial X_{t_{m-1}}} = \frac{\partial}{\partial X_{t_{m-1}}} \sum_{k=1}^{K} \hat{\alpha}^{\beta}_{t_m}(k) \, \mathbb{E}\left[ \phi_k(X_{t_m}) \,|\, X_{t_{m-1}}(n) \right] $$
        $$ = \sum_{k=1}^{K} \left( \hat{\alpha}^{\beta}_{t_m}(k) \, \frac{\partial}{\partial X_{t_{m-1}}} \mathbb{E}\left[ \phi_k(X_{t_m}) \,|\, X_{t_{m-1}}(n) \right] + \frac{\partial \hat{\alpha}^{\beta}_{t_m}(k)}{\partial X_{t_{m-1}}} \, \mathbb{E}\left[ \phi_k(X_{t_m}) \,|\, X_{t_{m-1}}(n) \right] \right). $$
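Combining the two pieces for a single path in the GBM/monomial setting above gives the following sketch; the coefficient derivative d_alpha is taken as an input here, since its computation is the subject of the next slides:

```python
# Delta decomposition for a single path, assuming the GBM/monomial setting
# of the earlier sketches. d_alpha is the K-vector of coefficient
# derivatives w.r.t. X_{t_{m-1}}, assumed already computed.
import numpy as np

def delta_single(x, h, dh, alpha, d_alpha, r, sigma, dt):
    """x: scalar state X_{t_{m-1}}(n); h, dh: payoff and its derivative at x."""
    k = np.arange(len(alpha))
    growth = np.exp(k * r * dt + 0.5 * k * (k - 1) * sigma**2 * dt)
    E_phi = x**k * growth                                  # E[phi_k | x]
    dE_phi = k * x ** np.where(k > 0, k - 1, 0) * growth   # d/dx E[phi_k | x]
    disc = np.exp(-r * dt)
    Q = disc * (E_phi @ alpha)
    dQ = disc * (dE_phi @ alpha + E_phi @ d_alpha)         # product rule, as on the slide
    return dh if Q < h else dQ                             # exercise vs. continuation region
```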

  12. Delta along the paths
      - The term $\frac{\partial}{\partial X_{t_{m-1}}} \mathbb{E}\left[ \phi_k(X_{t_m}) \,|\, X_{t_{m-1}}(n) \right]$ is readily computed.
      - The derivative of the regression coefficients is the difficult part.
      - Let us first define the matrix $A^{\beta}_{t_m}$ as
        $$ A^{\beta}_{t_m} := \begin{pmatrix} \phi_1\big( X_{t_m}(I^{\beta}_{t_{m-1}}(1)) \big) & \phi_2\big( X_{t_m}(I^{\beta}_{t_{m-1}}(1)) \big) & \cdots & \phi_K\big( X_{t_m}(I^{\beta}_{t_{m-1}}(1)) \big) \\ \phi_1\big( X_{t_m}(I^{\beta}_{t_{m-1}}(2)) \big) & \phi_2\big( X_{t_m}(I^{\beta}_{t_{m-1}}(2)) \big) & \cdots & \phi_K\big( X_{t_m}(I^{\beta}_{t_{m-1}}(2)) \big) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1\big( X_{t_m}(I^{\beta}_{t_{m-1}}(N_\beta)) \big) & \phi_2\big( X_{t_m}(I^{\beta}_{t_{m-1}}(N_\beta)) \big) & \cdots & \phi_K\big( X_{t_m}(I^{\beta}_{t_{m-1}}(N_\beta)) \big) \end{pmatrix}, $$
        where $X_{t_m}(I^{\beta}_{t_{m-1}}(1)), \ldots, X_{t_m}(I^{\beta}_{t_{m-1}}(N_\beta))$ are the states of the paths in bundle $\mathcal{B}_{t_{m-1}}(\beta)$.
      - The corresponding vector of option values for these paths is
        $$ \hat{V}^{\beta}_{t_m} := \begin{pmatrix} \hat{V}_{t_m}\big( X_{t_m}(I^{\beta}_{t_{m-1}}(1)) \big) \\ \hat{V}_{t_m}\big( X_{t_m}(I^{\beta}_{t_{m-1}}(2)) \big) \\ \vdots \\ \hat{V}_{t_m}\big( X_{t_m}(I^{\beta}_{t_{m-1}}(N_\beta)) \big) \end{pmatrix}. $$

  13. Delta along the paths
      - The least-squares coefficient computation can be written as
        $$ \hat{\alpha}^{\beta}_{t_m} = \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1} A^{\beta\,\top}_{t_m} \hat{V}^{\beta}_{t_m}. $$
      - The derivative of the regression coefficients is then given by
        $$ \frac{\partial \hat{\alpha}^{\beta}_{t_m}}{\partial X_{t_{m-1}}} = \frac{\partial \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1}}{\partial X_{t_{m-1}}} A^{\beta\,\top}_{t_m} \hat{V}^{\beta}_{t_m} + \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1} \frac{\partial A^{\beta\,\top}_{t_m}}{\partial X_{t_{m-1}}} \hat{V}^{\beta}_{t_m} + \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1} A^{\beta\,\top}_{t_m} \frac{\partial \hat{V}^{\beta}_{t_m}}{\partial X_{t_{m-1}}}. $$
      - The derivative of the matrix inverse can be further expanded as
        $$ \frac{\partial \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1}}{\partial X_{t_{m-1}}} = - \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1} \left( \frac{\partial A^{\beta\,\top}_{t_m}}{\partial X_{t_{m-1}}} A^{\beta}_{t_m} + A^{\beta\,\top}_{t_m} \frac{\partial A^{\beta}_{t_m}}{\partial X_{t_{m-1}}} \right) \left( A^{\beta\,\top}_{t_m} A^{\beta}_{t_m} \right)^{-1}. $$
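These two formulas transcribe almost verbatim into code; a sketch, assuming the derivatives dA and dV of $A^{\beta}_{t_m}$ and $\hat{V}^{\beta}_{t_m}$ with respect to the perturbation of $X_{t_{m-1}}$ are already available:

```python
# Derivative of the regression coefficients, combining the coefficient
# formula with the derivative-of-the-inverse identity on this slide.
import numpy as np

def d_alpha(A, V, dA, dV):
    """A: regression matrix; V: value vector; dA, dV: their derivatives."""
    G_inv = np.linalg.inv(A.T @ A)                   # (A^T A)^{-1}
    dG_inv = -G_inv @ (dA.T @ A + A.T @ dA) @ G_inv  # derivative of the inverse
    return dG_inv @ A.T @ V + G_inv @ dA.T @ V + G_inv @ A.T @ dV
```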
