Least-squares regression Monte Carlo for approximating BSDEs and semilinear PDEs




  1. Least-squares regression Monte Carlo for approximating BSDEs and semilinear PDEs. Plamen Turkedjiev, BP International Plc., 20th July 2017.

  2. Forward-Backward Stochastic Differential Equations (FBSDEs)

  3. Continuous time framework: definitions and relations in continuous time. $(X, Y, Z)$ are predictable $\mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^q$-valued processes satisfying
  $$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s,$$
  $$Y_t = \Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s.$$
  Feynman-Kac relation (Pardoux-Peng-92): $(Y_t, Z_t) = (Y(t, X_t), Z(t, X_t))$, where $(Y(t,x), Z(t,x))$ are deterministic and solve $Y(t,x) = u(t,x)$ and $Z(t,x) = \nabla_x u(t,x)\,\sigma(t,x)$ for
  $$\partial_t u(t,x) + \mathcal{L}(t,x)\,u(t,x) = -f(t, x, u, \nabla_x u\,\sigma), \qquad u(T,x) = \Phi(x),$$
  $$\mathcal{L}(t,x)\,g(x) = \langle b(t,x), \nabla_x g(x)\rangle + \tfrac{1}{2}\,\mathrm{trace}\big(\sigma\sigma^{\top}(t,x)\,\mathrm{Hess}(g)(x)\big).$$
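As a concrete illustration of the forward component (not part of the slides), here is a minimal Euler-Maruyama sketch for simulating $X$; the coefficient functions and all parameter values are hypothetical:

```python
import numpy as np

def euler_paths(x0, b, sigma, T, n_steps, n_paths, rng):
    """Euler-Maruyama paths of dX_t = b(t, X_t) dt + sigma(t, X_t) dW_t (scalar case)."""
    dt = T / n_steps
    X = np.empty((n_paths, n_steps + 1))
    X[:, 0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X[:, i + 1] = X[:, i] + b(i * dt, X[:, i]) * dt + sigma(i * dt, X[:, i]) * dW
    return X

# toy coefficients: mean-reverting drift, constant volatility
rng = np.random.default_rng(0)
paths = euler_paths(1.0, lambda t, x: -x, lambda t, x: 0.3,
                    T=1.0, n_steps=50, n_paths=10_000, rng=rng)
```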

  4. First steps: first steps to discrete time approximation

  5. First steps, goals: Goals of the numerical method. (1) Approximate the stochastic process $\tilde{X} \approx X$; (2) compute approximations of $Y(t,x)$ and $Z(t,x)$ minimizing the loss function
  $$l(\psi, \phi) := \mathbb{E}\Big[\sup_{0 \le t \le T} |\psi(t, \tilde{X}_t) - Y(t, X_t)|^2\Big] + \mathbb{E}\Big[\int_0^T |\phi(t, \tilde{X}_t) - Z(t, X_t)|^2\,dt\Big];$$
  (3) tune the approximation algorithm to minimize the computational cost. In this talk, we are not concerned with approximating $X$; we drop the notation $\tilde{X}$ hereafter. The loss function is not tractable and we must make an approximation.

  6. First steps, finite time grid: Finite time grid approximation. Let $\pi = \{0 = t_0 < \dots < t_n = T\}$ and define the loss function
  $$l^\pi(\psi, \phi) := \max_{t_i \in \pi} \mathbb{E}\big[|\psi(t_i, X_{t_i}) - Y(t_i, X_{t_i})|^2\big] + \sum_i \mathbb{E}\Big[\int_{t_i}^{t_{i+1}} |\phi(t_i, X_{t_i}) - Z(s, X_s)|^2\,ds\Big].$$
  • Clearly $l^\pi(\cdot)$ is an approximation of $l(\cdot)$.
  • The choice of $\pi$ will affect the efficiency of the approximation (see the sketch below).
  • The regularity and boundedness of $\Phi$, $f$, $b$, and $\sigma$ will influence the efficiency of the approximation.
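One grid choice used in this literature when $\Phi$ is irregular is a graded mesh whose points accumulate near $T$. The following sketch is illustrative only; the function name and exponent are assumptions, not from the talk:

```python
import numpy as np

def graded_grid(T, n, gamma=3.0):
    """Grid 0 = t_0 < ... < t_n = T whose points accumulate near T,
    where the solution typically loses regularity for irregular Phi."""
    k = np.arange(n + 1)
    return T - T * (1.0 - k / n) ** gamma
```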

  7. First steps, finite time grid: Conditional expectation formulation. BSDE: by taking conditional expectations,
  $$Y_t = \mathbb{E}\Big[\Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds \,\Big|\, \mathcal{F}_t\Big] \quad \text{a.s.}$$
  $$= \arg\inf_{\Psi_t \in \mathcal{A}(t)} \mathbb{E}\Big[\Big|\Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \Psi_t\Big|^2\Big],$$
  where $\mathcal{A}(t) = L^2(\mathcal{F}_t; \mathbb{R})$. Markov property: replace $\mathcal{A}(t)$ by $\mathcal{A}_t = \{\psi : \mathbb{R}^d \to \mathbb{R} \mid \mathbb{E}[|\psi(X_t)|^2] < \infty\}$, so that
  $$Y_t = \arg\inf_{\psi(t,\cdot) \in \mathcal{A}_t} \mathbb{E}\Big[\Big|\Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \psi(t, X_t)\Big|^2\Big].$$
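This $L^2$ projection is what least-squares regression Monte Carlo approximates empirically: restrict $\mathcal{A}_t$ to the span of finitely many basis functions and replace the expectation by a sample average. A minimal sketch with a monomial basis (the basis choice and all names are illustrative, not from the talk):

```python
import numpy as np

def ls_cond_exp(X, V, degree=3):
    """Empirical L^2 projection of V onto polynomials in X,
    i.e. a least-squares estimate of x -> E[V | X = x]."""
    basis = np.vander(X, degree + 1)
    coef, *_ = np.linalg.lstsq(basis, V, rcond=None)
    return lambda x: np.vander(np.atleast_1d(x), degree + 1) @ coef

# sanity check: E[X^2 + eps | X = x] = x^2 for noise eps independent of X
rng = np.random.default_rng(1)
X = rng.normal(size=50_000)
V = X**2 + rng.normal(size=X.size)
psi = ls_cond_exp(X, V)
print(psi(1.5))  # close to 2.25
```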

  8. First steps, finite time grid: Reformulation of the Y-part of the loss. Orthogonality of the conditional expectation:
  $$\mathbb{E}\Big[\Big|\psi(t, X_t) - \Phi(X_T) - \int_t^T f(s, X_s, Y_s, Z_s)\,ds\Big|^2\Big] = \mathbb{E}\big[|\psi(t, X_t) - Y(t, X_t)|^2\big] + \mathbb{E}\Big[\Big|Y(t, X_t) - \Phi(X_T) - \int_t^T f(s, X_s, Y_s, Z_s)\,ds\Big|^2\Big].$$
  The Y-part of the loss function becomes
  $$l^{\pi,y}(t, \psi) = \mathbb{E}\Big[\Big|\psi(t, X_t) - \Phi(X_T) - \int_t^T f(s, X_s, Y_s, Z_s)\,ds\Big|^2\Big].$$

  9. First steps, finite time grid: Z-part of the loss. BSDE: the optimal discrete $Z$ is also a conditional expectation,
  $$Z^\pi(t_i, x) := \arg\inf_{\phi \in \mathcal{A}_{t_i}} \mathbb{E}\Big[\int_{t_i}^{t_{i+1}} |\phi(X_{t_i}) - Z(s, X_s)|^2\,ds\Big] = \frac{1}{t_{i+1} - t_i}\,\mathbb{E}\Big[\int_{t_i}^{t_{i+1}} Z_s\,ds \,\Big|\, X_{t_i} = x\Big]$$
  $$= \mathbb{E}\Big[\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds\Big) \,\Big|\, X_{t_i} = x\Big].$$
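The last representation is what makes $Z$ accessible to regression: simulate the weight $(W_{t_{i+1}} - W_{t_i})/(t_{i+1} - t_i)$ times the terminal quantity and project onto functions of $X_{t_i}$. A toy sanity check on a case with a known answer (all names and values hypothetical):

```python
import numpy as np

# Toy check: f = 0, X = W, Phi(x) = x, so Y_t = W_t and Z(t, x) = 1 identically.
rng = np.random.default_rng(2)
n_paths, ti, ti1, T = 100_000, 0.5, 0.6, 1.0
W_ti = rng.normal(0.0, np.sqrt(ti), n_paths)
dW = rng.normal(0.0, np.sqrt(ti1 - ti), n_paths)       # W_{t_{i+1}} - W_{t_i}
W_T = W_ti + dW + rng.normal(0.0, np.sqrt(T - ti1), n_paths)
response = dW / (ti1 - ti) * W_T                       # weight times Phi(X_T)
basis = np.vander(W_ti, 3)                             # columns: x^2, x, 1
coef, *_ = np.linalg.lstsq(basis, response, rcond=None)
print(coef)  # approximately [0, 0, 1], i.e. Z^pi(t_i, x) = 1
```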

  10. First steps, finite time grid: Z-part of the loss (continued). As before, we use the orthogonality property of the conditional expectation:
  $$\mathbb{E}\big[|\phi(t_i, X_{t_i}) - Z^\pi(t_i, X_{t_i})|^2\big] + \mathbb{E}\Big[\Big|Z^\pi(t_i, X_{t_i}) - \frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds\Big)\Big|^2\Big]$$
  $$= \mathbb{E}\Big[\Big|\phi(t_i, X_{t_i}) - \frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds\Big)\Big|^2\Big] =: l^{\pi,z}(t_i, \phi).$$
  The discrete loss is approximated by
  $$l^\pi(\psi, \phi) \approx \max_{t_i \in \pi} l^{\pi,y}(t_i, \psi) + \sum_{t_i \in \pi} l^{\pi,z}(t_i, \phi)\,(t_{i+1} - t_i).$$
  The loss function is still not tractable because of the integral.
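A standard way to remove this remaining integral, not spelled out on this slide, is to replace it by a Riemann sum on the grid,
$$\int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds \approx \sum_{j=i}^{n-1} f\big(t_j, X_{t_j}, Y(t_j, X_{t_j}), Z(t_j, X_{t_j})\big)\,(t_{j+1} - t_j),$$
after which the loss involves only quantities that can be simulated on $\pi$.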

  11. Equivalent continuous time representations

  12. Equivalent representations: One-step vs. multistep approximation. From the tower law,
  $$Y(t_i, x) = \mathbb{E}\Big[\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds \,\Big|\, X_{t_i} = x\Big] = \mathbb{E}\Big[Y_{t_{i+1}} + \int_{t_i}^{t_{i+1}} f(s, X_s, Y_s, Z_s)\,ds \,\Big|\, X_{t_i} = x\Big].$$
  Likewise,
  $$Z^\pi(t_i, x) = \mathbb{E}\Big[\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds\Big) \,\Big|\, X_{t_i} = x\Big] = \mathbb{E}\Big[\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(Y_{t_{i+1}} + \int_{t_i}^{t_{i+1}} f(s, X_s, Y_s, Z_s)\,ds\Big) \,\Big|\, X_{t_i} = x\Big].$$
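The one-step form is the basis of backward induction: starting from $\Phi(X_T)$, regress the one-step target at each $t_i$ going backwards. A simplified sketch for the $Y$-component only (driver frozen at the time-$t_{i+1}$ approximation, $Z$ dependence omitted; basis and names are illustrative):

```python
import numpy as np

def lsmc_bsde_y(X, f, Phi, grid, degree=3):
    """One-step backward induction for the Y-component on paths X of shape
    (n_paths, n_steps + 1): regress Y_{i+1} + f(t_i, X_i, Y_{i+1}) * dt on X_i.
    Simplified sketch: driver frozen at the time-(i+1) value, Z omitted."""
    n = len(grid) - 1
    Y = Phi(X[:, n])
    for i in range(n - 1, -1, -1):
        dt = grid[i + 1] - grid[i]
        target = Y + f(grid[i], X[:, i], Y) * dt
        B = np.vander(X[:, i], degree + 1)
        coef, *_ = np.linalg.lstsq(B, target, rcond=None)
        Y = B @ coef          # regression estimate of Y(t_i, X_{t_i})
    return Y                  # samples of the time-0 approximation
```

With the Euler paths from the earlier sketch this could be called as, e.g., `lsmc_bsde_y(paths, lambda t, x, y: -y, lambda x: np.maximum(x - 1.0, 0.0), np.linspace(0.0, 1.0, 51))`.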

  13. Equivalent representations: Decomposition into a system. Define $(\hat{y}, \hat{z})$ and $(\tilde{Y}, \tilde{Z})$ solving, respectively,
  $$\hat{y}_t = \Phi(X_T) - \int_t^T \hat{z}_s\,dW_s,$$
  $$\tilde{Y}_t = \int_t^T f(s, X_s, \hat{y}_s + \tilde{Y}_s, \hat{z}_s + \tilde{Z}_s)\,ds - \int_t^T \tilde{Z}_s\,dW_s.$$
  Observe that $Y_t = \hat{y}_t + \tilde{Y}_t$ and $Z_t = \hat{z}_t + \tilde{Z}_t$. The representation is beneficial:
  • The functions $\hat{y}(t, X_t) = \hat{y}_t$, $\hat{z}(t, X_t) = \hat{z}_t$ come from a linear equation.
  • The functions $\tilde{Y}(t, X_t) = \tilde{Y}_t$, $\tilde{Z}(t, X_t) = \tilde{Z}_t$ are generally smoother than their $Y(t,x)$, $Z(t,x)$ counterparts.
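To verify the claim, add the two equations:
$$\hat{y}_t + \tilde{Y}_t = \Phi(X_T) + \int_t^T f(s, X_s, \hat{y}_s + \tilde{Y}_s, \hat{z}_s + \tilde{Z}_s)\,ds - \int_t^T (\hat{z}_s + \tilde{Z}_s)\,dW_s,$$
so $(\hat{y} + \tilde{Y}, \hat{z} + \tilde{Z})$ solves the original BSDE, and uniqueness of its solution gives $Y_t = \hat{y}_t + \tilde{Y}_t$ and $Z_t = \hat{z}_t + \tilde{Z}_t$.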

  14. Equivalent representations: Adding zero. From the conditional expectation,
  $$Y(t_i, x) = \mathbb{E}\Big[\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds \,\Big|\, X_{t_i} = x\Big] = \mathbb{E}\Big[\underbrace{\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds - \int_{t_i}^T Z_s\,dW_s}_{= Y(t_i, x)} \,\Big|\, X_{t_i} = x\Big].$$
  In other words, the integrand has conditional variance zero. More to come...
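Numerically, "adding zero" acts as a control variate: subtracting the martingale part strips conditional variance from the regression response. A toy demonstration in a case where $Z$ is known exactly (all values hypothetical):

```python
import numpy as np

# Toy demonstration: X = W, f = 0, Phi(x) = x, so Y(t_i, x) = x and Z = 1.
rng = np.random.default_rng(3)
ti, T, n = 0.5, 1.0, 100_000
W_ti = rng.normal(0.0, np.sqrt(ti), n)
dW_rest = rng.normal(0.0, np.sqrt(T - ti), n)        # W_T - W_{t_i}
raw = W_ti + dW_rest                                 # response Phi(X_T) = W_T
corrected = raw - 1.0 * dW_rest                      # subtract int_{t_i}^T Z_s dW_s, Z = 1
# residual variance around the true regression function Y(t_i, x) = x:
print(np.var(raw - W_ti), np.var(corrected - W_ti))  # ~0.5 vs. exactly 0.0
```

In practice $Z$ is unknown, so one subtracts the stochastic integral of the current numerical approximation of $Z$, which already removes most of the variance.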

  15. Equivalent representations: Adding zero (continued). From the conditional expectation,
  $$Z^\pi(t_i, x) = \mathbb{E}\Big[\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds\Big) \,\Big|\, X_{t_i} = x\Big]$$
  $$= \mathbb{E}\Big[\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1} - t_i}\Big(\underbrace{\Phi(X_T) + \int_{t_i}^T f(s, X_s, Y_s, Z_s)\,ds - Y(t_i, x) - \int_{t_{i+1}}^T Z_s\,dW_s}_{= Y(t_{i+1}, X_{t_{i+1}}) - Y(t_i, x) + \int_{t_i}^{t_{i+1}} f(s, X_s, Y_s, Z_s)\,ds}\Big) \,\Big|\, X_{t_i} = x\Big].$$
  The integrand now has low conditional variance. More to come...

  16. Equivalent representations: Malliavin representation (Hu-Nualart-Song-11). Rather than computing $Z^\pi(t, x)$, directly use the representation
  $$Z(t, x) = \mathbb{E}\Big[D_t\Phi(X_T) + \int_t^T \nabla_x f(s, X_s, Y_s, Z_s)\,D_t X_s\,ds \,\Big|\, X_t = x\Big] + \mathbb{E}\Big[\int_t^T \partial_y f(s, X_s, Y_s, Z_s)\,D_t Y_s\,ds \,\Big|\, X_t = x\Big] + \mathbb{E}\Big[\int_t^T \nabla_z f(s, X_s, Y_s, Z_s)\,D_t Z_s\,ds \,\Big|\, X_t = x\Big]$$
  $$= \mathbb{E}\Big[\Gamma(t, T)\,D_t\Phi(X_T) + \int_t^T \Gamma(t, s)\,\nabla_x f(s, X_s, Y_s, Z_s)\,D_t X_s\,ds \,\Big|\, X_t = x\Big],$$
  with $D_t X_\tau = \nabla_x X_\tau (\nabla_x X_t)^{-1}\sigma(t, X_t)$ and
  $$\Gamma(t, s) = \exp\Big(\int_t^s \nabla_z f_\tau\,dW_\tau + \int_t^s \big(\partial_y f_\tau - \tfrac{1}{2}|\nabla_z f_\tau|^2\big)\,d\tau\Big).$$
  Valid under restricted conditions.

  17. Equivalent representations: Malliavin integration by parts (Ma-Zhang-02)(T.-15). Rather than computing $Z^\pi(t, x)$, directly use the representation
  $$Z(t, x) = \mathbb{E}\Big[\Phi(X_T)\,M(t, T) + \int_t^T f(s, X_s, Y_s, Z_s)\,M(t, s)\,ds \,\Big|\, X_t = x\Big]$$
  for the random variables
  $$M(t, s) := \frac{1}{s - t}\Big(\int_t^s \sigma^{-1}(\tau, X_\tau)\,D_t X_\tau\,dW_\tau\Big)^{\top}.$$
  Valid under restricted conditions. Sometimes $M(t, s)$ is available in closed form, e.g. for $X_t = W_t$ or geometric Brownian motion, $M(t, s) = \frac{W_s - W_t}{s - t}$.
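In the closed-form case the representation can be simulated directly. A toy check (hypothetical values) with $X = W$ and driver $f = 0$, where $\Phi(x) = x^2$ gives $Y(t, x) = x^2 + (T - t)$ and hence $Z(t, x) = 2x$:

```python
import numpy as np

# Toy check: X = W, f = 0, Phi(x) = x^2, so Z(t, x) = 2x.
rng = np.random.default_rng(4)
t, T, x, n = 0.25, 1.0, 0.7, 1_000_000
dW = rng.normal(0.0, np.sqrt(T - t), n)          # W_T - W_t given W_t = x
Z_hat = np.mean((x + dW) ** 2 * dW / (T - t))    # E[Phi(X_T) M(t, T) | X_t = x]
print(Z_hat)  # close to 2 * x = 1.4
```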

  18. Continuous time approximations
