  1. Multilevel Monte Carlo: A few words on concurrency in C++11. Vincent Lemaire, LPMA – UPMC, June 11, 2015.

  2. Introduction (outline)
     1. Abstract framework and assumptions: Monte Carlo type estimators; Crude Monte Carlo estimator in a biased framework; Multistep Richardson-Romberg estimator
     2. Multilevel estimators: Definition; Majoration of the effort; Specification(s) of the allocation matrix T; Convergence for a fixed R ≥ 2; Main result
     3. Applications and numerical experiments: Parameters and methodology; Nested Monte Carlo
     4. Hardware and implementation: Hardware; Threads in C++11; Numerical results

  3. Introduction (outline repeated as a section transition slide).

  4. Abstract linear simulation framework
     Y_0 ∈ L²(P) is a real-valued random variable that cannot be simulated directly. Aim: compute I_0 = E[Y_0] with a given accuracy ε > 0.
     Suppose we have a family (Y_h)_{h ∈ H} of real-valued random variables in L²(P) satisfying:
     - Bias error expansion (weak error expansion): ∃ α > 0,
       E[Y_h] = E[Y_0] + c_1 h^α + c_2 h^{2α} + ⋯ + c_R h^{αR} (1 + η_R(h)).   (WE_{α,R})
     - Strong approximation error assumption: ∃ β > 0,
       ‖Y_h − Y_0‖_2² = E[(Y_h − Y_0)²] ≤ V_1 h^β.   (SE_β)
     h is the bias parameter, taking values in H = {h/n, n ≥ 1}.
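
As a concrete illustration (not part of the slides), a canonical family (Y_h) is the discounted payoff of an Euler scheme with step h for a diffusion; typically α = 1 and β = 1 in this setting for Lipschitz payoffs. A minimal C++11 sketch, assuming a Black-Scholes dynamic and a call payoff (all names and parameters below are hypothetical):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Hypothetical example of a simulatable family (Y_h): the discounted payoff
// f(X_T) of an Euler scheme with n = T/h steps for the Black-Scholes SDE
//   dX_t = r X_t dt + sigma X_t dW_t,  X_0 = x0.
// Y_0 = f(X_T) itself is not simulated; Y_h approximates it with a weak error
// expansion in powers of h and a strong L2 error controlled by h.
double sample_Y_h(int n_steps, double T, double x0, double r, double sigma,
                  double strike, std::mt19937_64& gen) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    const double h = T / n_steps;                 // bias parameter h = T/n
    double x = x0;
    for (int k = 0; k < n_steps; ++k)
        x += r * x * h + sigma * x * std::sqrt(h) * gauss(gen);   // Euler step
    return std::exp(-r * T) * std::max(x - strike, 0.0);          // discounted call payoff
}
```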

  5. Monte Carlo type estimators
     We consider Monte Carlo type estimators I^N_π, depending on a set of parameters π, to compute I_0 = E[Y_0], i.e. satisfying:
     - Constant bias: E[I^N_π] = E[I^1_π] for all N ≥ 1.
     - Inverse linear variance: var(I^N_π) = ν(π)/N, where ν(π) = var(I^1_π).
     - Linear cost: Cost(I^N_π) = N κ(π), where κ(π) = Cost(I^1_π).
     Typically I^N_π is the empirical mean of N i.i.d. random variables.
     Aim: minimize the global cost of the estimator for a given error ε > 0:
     (π(ε), N(ε)) = argmin_{‖I^N_π − I_0‖_2 ≤ ε} Cost(I^N_π).
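
Since every estimator below is an empirical mean of i.i.d. draws, a single generic routine captures the three properties above (constant bias, variance ν(π)/N, cost N κ(π)). A minimal sketch, where `sample_Y_pi` is a hypothetical user-supplied sampler of one draw of I^1_π:

```cpp
#include <functional>
#include <random>

// Generic Monte Carlo estimator I_N = (1/N) * sum of N i.i.d. copies of Y_pi.
// It inherits the three slide properties: the bias does not depend on N,
// var(I_N) = var(Y_pi)/N, and its cost is N times the cost of one sample.
double monte_carlo_mean(const std::function<double(std::mt19937_64&)>& sample_Y_pi,
                        long N, std::mt19937_64& gen) {
    double sum = 0.0;
    for (long k = 0; k < N; ++k) sum += sample_Y_pi(gen);
    return sum / static_cast<double>(N);
}
```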

  6. Definition
     The effort of the estimator I^N_π is defined for every π ∈ Π by φ(π) = ν(π) κ(π).
     If the estimator I^N_π is unbiased, then ‖I^1_π − I_0‖_2² = ν(π), so that
     π(ε) = π* = argmin_{π ∈ Π} φ(π),   N(ε) = ν(π*)/ε² = φ(π*)/(κ(π*) ε²).
     If the estimator I^N_π is biased, then ‖I^1_π − I_0‖_2² = ν(π) + μ²(π), where μ(π) = E[I^N_π] − I_0 = E[I^1_π] − I_0, so that
     π(ε) = argmin_{π ∈ Π, |μ(π)| < ε} φ(π)/(ε² − μ²(π)),   N(ε) = φ(π(ε)) / (κ(π(ε)) (ε² − μ²(π(ε)))).
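
Both formulas for N(ε) reduce to N(ε) = ν(π)/(ε² − μ²(π)), with μ = 0 in the unbiased case, since φ/κ = ν. A small sketch transcribing this, with ν and μ assumed to be known or previously estimated:

```cpp
#include <cmath>
#include <stdexcept>

// Sample size attaining an RMSE <= eps, following the formulas above:
//   unbiased case:  N(eps) = nu / eps^2,
//   biased case:    N(eps) = nu / (eps^2 - mu^2)   (requires |mu| < eps),
// where nu = var(I^1_pi) and mu is the bias of the estimator.
long required_samples(double nu, double mu, double eps) {
    const double denom = eps * eps - mu * mu;
    if (denom <= 0.0) throw std::invalid_argument("need |mu| < eps");
    return static_cast<long>(std::ceil(nu / denom));
}
```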

  7. Theorem (Crude Monte Carlo, π = h)
     The Monte Carlo estimator of E[Y_0] defined by Ȳ^N_h = (1/N) Σ_{k=1}^N Y^k_h satisfies
     μ(h) ∼ c_1 h^α,   κ(h) = 1/h,   φ(h) = var(Y_h)/h.
     Then the optimal parameters h*(ε) and N*(ε) are
     h*(ε) = (1 + 2α)^{−1/(2α)} (ε/|c_1|)^{1/α},   N*(ε) = (1 + 1/(2α)) var(Y_0) (1 + θ h*(ε)^{β/2})² / ε²,
     where θ = √V_1 / var(Y_0). Furthermore, we have
     limsup_{ε→0}  ε^{2+1/α}  min_{h ∈ H, |μ(h)| < ε} Cost(Ȳ^N_h)  ≤  |c_1|^{1/α} (1 + 2α)^{1+1/(2α)} var(Y_0) / (2α).
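
A sketch transcribing the optimal parameters of the theorem as reconstructed above; α, β, c_1, var(Y_0) and θ are assumed to be known or previously estimated, and the names are illustrative:

```cpp
#include <cmath>

// Optimal bias step and sample size of the crude Monte Carlo estimator:
//   h*(eps) = (1 + 2a)^(-1/(2a)) * (eps/|c1|)^(1/a)
//   N*(eps) = (1 + 1/(2a)) * var(Y0) * (1 + theta * h*^(b/2))^2 / eps^2
struct CrudeMCParams { double h_star; long N_star; };

CrudeMCParams crude_mc_parameters(double eps, double alpha, double beta,
                                  double c1, double varY0, double theta) {
    const double h = std::pow(1.0 + 2.0 * alpha, -1.0 / (2.0 * alpha))
                   * std::pow(eps / std::fabs(c1), 1.0 / alpha);
    const double growth = 1.0 + theta * std::pow(h, beta / 2.0);
    const double N = (1.0 + 1.0 / (2.0 * alpha)) * varY0 * growth * growth / (eps * eps);
    return { h, static_cast<long>(std::ceil(N)) };
}
```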

  8. Multistep Richardson-Romberg estimator [Pagès 2007, MCMA]
     Aim: kill the bias using the weak error expansion (WE_{α,R}).
     Refiners: an R-tuple of integers n := (n_1, n_2, …, n_R) ∈ N^R satisfying n_1 = 1 < n_2 < ⋯ < n_R.
     We consider the ℝ^R-valued random vector
     Y_{h,n} = (Y_h, Y_{h/n_2}, …, Y_{h/n_R}).
     We define the vector w = (w_1, …, w_R) as the unique solution to the Vandermonde system V w = e_1, where
     V = V(1, n_2^{−α}, …, n_R^{−α}) =
       [ 1   1               ⋯   1
         1   n_2^{−α}        ⋯   n_R^{−α}
         ⋮   ⋮                   ⋮
         1   n_2^{−α(R−1)}   ⋯   n_R^{−α(R−1)} ]

  9. Key point: the solution w has a closed form given by Cramer's rule:
     ∀ i ∈ {1, …, R},   w_i = (−1)^{R−i} n_i^{α(R−1)} / ( ∏_{1 ≤ j < i} (n_i^α − n_j^α) · ∏_{i < j ≤ R} (n_j^α − n_i^α) ).
     We also derive the following identity, of interest in what follows:
     w̃_{R+1} := Σ_{i=1}^R w_i / n_i^{αR} = (−1)^{R−1} / (n!)^α.
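
A sketch of the closed-form weights above, computed directly without solving the Vandermonde system (the function name is illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Closed-form weights w_i of the multistep Richardson-Romberg estimator
// (Cramer's rule applied to the Vandermonde system V w = e_1):
//   w_i = (-1)^(R-i) n_i^{a(R-1)} / ( prod_{j<i}(n_i^a - n_j^a) * prod_{j>i}(n_j^a - n_i^a) ).
std::vector<double> rr_weights(const std::vector<int>& n, double alpha) {
    const std::size_t R = n.size();
    std::vector<double> w(R);
    for (std::size_t i = 0; i < R; ++i) {
        const double nia = std::pow(n[i], alpha);
        double denom = 1.0;
        for (std::size_t j = 0; j < i; ++j)     denom *= (nia - std::pow(n[j], alpha));
        for (std::size_t j = i + 1; j < R; ++j) denom *= (std::pow(n[j], alpha) - nia);
        const double sign = ((R - 1 - i) % 2 == 0) ? 1.0 : -1.0;   // (-1)^{R-i}, i 1-based
        w[i] = sign * std::pow(n[i], alpha * (R - 1)) / denom;
    }
    return w;
}
```

For R = 2, n = (1, 2) and α = 1 this returns w = (−1, 2), i.e. the classical Richardson extrapolation 2 Y_{h/2} − Y_h.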

  10. Multistep Richardson-Romberg estimator
     Definition (Multistep Richardson-Romberg estimator):
     Ȳ^N_{h,n} = (1/N) Σ_{k=1}^N ⟨w, Y^k_{h,n}⟩,   with (Y^k_{h,n})_{k ≥ 1} i.i.d.,
              = (1/N) Σ_{k=1}^N Σ_{i=1}^R w_i Y^k_{h/n_i}.
     Note that π = (h, n, R) and
     E[⟨w, Y^1_{h,n}⟩] = ⟨w, E[Y^1_{h,n}]⟩ = E[Y_0] + w̃_{R+1} c_R h^{αR} (1 + η_{R,n}(h)),
     so the bias is μ(π) = w̃_{R+1} c_R h^{αR} (1 + η_{R,n}(h)).
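
A sketch of the resulting estimator; `sample_Y(n_i, seed)` is a hypothetical user-supplied sampler that must produce all refined coordinates of the k-th draw from the same underlying scenario (the same Brownian path, say), which is what makes the vector Y^k_{h,n} consistent:

```cpp
#include <cstdint>
#include <cstddef>
#include <functional>
#include <vector>

// Multistep Richardson-Romberg estimator (single weighted sum, no stratification):
//   bar Y^N_{h,n} = (1/N) * sum_{k=1..N} sum_{i=1..R} w_i * Y^k_{h/n_i}.
// All refinements of the k-th draw are built from the scenario identified by `seed`;
// how the sampler does that (stored increments, Brownian bridge, ...) is left out.
double multistep_rr(const std::vector<int>& n, const std::vector<double>& w, long N,
                    const std::function<double(int, std::uint64_t)>& sample_Y,
                    std::uint64_t base_seed) {
    double sum = 0.0;
    for (long k = 0; k < N; ++k) {
        const std::uint64_t scenario = base_seed + static_cast<std::uint64_t>(k);
        for (std::size_t i = 0; i < n.size(); ++i)
            sum += w[i] * sample_Y(n[i], scenario);   // same scenario across refiners
    }
    return sum / static_cast<double>(N);
}
```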

  11. Let |n| = n_1 + ⋯ + n_R and n! = n_1 ⋯ n_R.
     Theorem (Multistep Richardson-Romberg)
     The Multistep Richardson-Romberg estimator of E[Y_0] satisfies
     μ(h) ≃ (−1)^{R−1} c_R (h^R / n!)^α,   κ(h) = |n|/h,   φ(h) ≃ |n| var(Y_0)/h,
     and for a fixed R ≥ 2 the optimal parameters h*(ε) and N*(ε) are
     h*(ε) = (n!)^{1/R} (1 + 2αR)^{−1/(2αR)} (ε/|c_R|)^{1/(αR)},   N*(ε) = (1 + 1/(2αR)) var(Y_0) (1 + θ h*(ε)^{β/2})² / ε².
     Furthermore, we have, as ε → 0,
     inf_{h ∈ H, |μ(h)| < ε} Cost(Ȳ^N_h)  ∼  (|n| / (n!)^{1/R}) (1 + 2αR)^{1+1/(2αR)} var(Y_0) |c_R|^{1/(αR)} / (2αR) · ε^{−(2+1/(αR))}.

  12. Remarks on the Multistep estimator
     - If n_i = i, then |n| = R(R+1)/2 and (n!)^{1/R} ∼ R/e, so that |n|/(n!)^{1/R} ∼ e(R+1)/2.
     - If n_i = M^{i−1} (M ≥ 2), then |n|/(n!)^{1/R} ∼ M^{(R−1)/2}.
     How to avoid this drawback? Use a control variate technique such as Multilevel: we consider R independent copies (Y^{(j)}_{h,n}), j = 1, …, R, of the random vector Y_{h,n} and split ⟨w, Y_{h,n}⟩ into
     ⟨w, Y_{h,n}⟩ = Σ_{j=1}^R ⟨T^j, Y^{(j)}_{h,n}⟩ = Σ_{i,j=1}^R T^j_i Y^{(j)}_{h/n_i},
     where T = [T^1 … T^R] is an R × R matrix such that Σ_j T^j = w.
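
The slides shown here do not specify T. As an illustration only (an assumption, not necessarily the specification used later in the talk), one admissible choice compatible with Σ_j T^j = w, and with the allocation-matrix constraints introduced two slides below, is T^1 = e_1 and T^j = W_j (e_j − e_{j−1}) for j ≥ 2, with W_j = w_j + ⋯ + w_R. A sketch building it:

```cpp
#include <cstddef>
#include <vector>

// Illustrative allocation matrix: T^1 = e_1 and T^j = W_j (e_j - e_{j-1}) for j >= 2,
// with W_j = w_j + ... + w_R. Each column j >= 2 sums to 0, and the columns add up
// to w because the Richardson-Romberg weights satisfy w_1 + ... + w_R = 1
// (first row of the system V w = e_1).
std::vector<std::vector<double>> allocation_matrix(const std::vector<double>& w) {
    const std::size_t R = w.size();
    std::vector<std::vector<double>> T(R, std::vector<double>(R, 0.0));  // T[j][i] = T^j_i
    std::vector<double> W(R, 0.0);                  // W[j] = w_j + ... + w_R (suffix sums)
    double acc = 0.0;
    for (std::size_t j = R; j-- > 0; ) { acc += w[j]; W[j] = acc; }
    T[0][0] = 1.0;                                  // T^1 = e_1
    for (std::size_t j = 1; j < R; ++j) {
        T[j][j]     =  W[j];                        // +W_j on coordinate j
        T[j][j - 1] = -W[j];                        // -W_j on coordinate j-1
    }
    return T;
}
```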

  13. Introduction (outline repeated as a section transition slide).

  14. Definitions
     Allocation matrix: an R × R matrix T = [T^1 … T^R] satisfying
     T^1 = e_1   and   ∀ j ∈ {2, …, R}, Σ_{i=1}^R T^j_i = 0,   (1)
     so that Σ_{i,j=1}^R T^j_i = 1.
     Stratification/allocation strategy: let q = (q_1, …, q_R), q_j > 0, j = 1, …, R, with Σ_j q_j = 1, and set N_j = ⌈q_j N⌉.
     Multilevel estimator of order R:
     Ȳ^{N,q}_{h,n} = Σ_{j=1}^R (1/N_j) Σ_{k=1}^{N_j} ⟨T^j, Y^{(j),k}_{h,n}⟩ ≈ Σ_{j=1}^R (1/(N q_j)) Σ_{k=1}^{N_j} ⟨T^j, Y^{(j),k}_{h,n}⟩.   (2)
     - Multilevel Monte Carlo ≡ Σ_{j=1}^R T^j = e_R
     - Multilevel Richardson-Romberg ≡ Σ_{j=1}^R T^j = w
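
Since the R strata in (2) are independent, they can be computed concurrently, which is where the C++11 threading of the title naturally enters. A minimal sketch with one std::thread per level and one generator seed per (level, draw) pair; `sample_level(j, seed)` is a hypothetical sampler returning one draw of ⟨T^j, Y^{(j)}_{h,n}⟩, and a real implementation would rather split the N_j draws over a fixed pool of hardware threads:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Multilevel estimator (2): stratum j runs on its own C++11 thread and computes
// (1/N_j) * sum_{k=1..N_j} <T^j, Y^{(j),k}_{h,n}>, with N_j = ceil(q_j * N).
double multilevel_estimator(std::size_t R, const std::vector<double>& q, long N,
                            const std::function<double(std::size_t, std::uint64_t)>& sample_level,
                            std::uint64_t base_seed) {
    std::vector<double> partial(R, 0.0);
    std::vector<std::thread> workers;
    for (std::size_t j = 0; j < R; ++j) {
        const long Nj = static_cast<long>(std::ceil(q[j] * N));   // N_j = ceil(q_j N)
        workers.emplace_back([&, j, Nj]() {
            double sum = 0.0;
            for (long k = 0; k < Nj; ++k)
                sum += sample_level(j, base_seed + (static_cast<std::uint64_t>(j) << 32) + k);
            partial[j] = sum / static_cast<double>(Nj);           // each thread writes its own slot
        });
    }
    for (auto& t : workers) t.join();
    double estimate = 0.0;
    for (std::size_t j = 0; j < R; ++j) estimate += partial[j];   // sum of the R strata
    return estimate;
}
```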

  15. Effort of a Multilevel estimator
     Parameters: π = (π_0, q) with π_0 = (h, n, R, T).
     (Unitary) cost of a Multilevel estimator:
     Cost(Ȳ^{N,q}_{h,n}) = Σ_{j=1}^R N_j (1/h) Σ_{i=1}^R n_i 1_{T^j_i ≠ 0} = N κ(π),
     where the unitary complexity κ(π) is given by
     κ(π) = (1/h) Σ_{j=1}^R q_j Σ_{i=1}^R n_i 1_{T^j_i ≠ 0}.
     Effort of a Multilevel estimator:
     φ(π) = ν(π) κ(π) = ( Σ_{j=1}^R (1/q_j) var(⟨T^j, Y^{(j)}_{h,n}⟩) ) κ(π).
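
A sketch transcribing κ(π) and φ(π); the per-level variances var(⟨T^j, Y^{(j)}_{h,n}⟩) are assumed to be supplied, e.g. estimated from a pilot run:

```cpp
#include <cstddef>
#include <vector>

// Unitary complexity and effort of a multilevel estimator:
//   kappa(pi) = (1/h) * sum_j q_j * sum_i n_i * 1{T^j_i != 0},
//   phi(pi)   = (sum_j var_level[j] / q_j) * kappa(pi).
double unitary_complexity(double h, const std::vector<int>& n, const std::vector<double>& q,
                          const std::vector<std::vector<double>>& T) {   // T[j][i] = T^j_i
    double kappa = 0.0;
    for (std::size_t j = 0; j < T.size(); ++j) {
        double level_cost = 0.0;
        for (std::size_t i = 0; i < n.size(); ++i)
            if (T[j][i] != 0.0) level_cost += n[i];    // only simulated refinements are paid for
        kappa += q[j] * level_cost;
    }
    return kappa / h;
}

double effort(double h, const std::vector<int>& n, const std::vector<double>& q,
              const std::vector<std::vector<double>>& T, const std::vector<double>& var_level) {
    double nu = 0.0;
    for (std::size_t j = 0; j < q.size(); ++j) nu += var_level[j] / q[j];
    return nu * unitary_complexity(h, n, q, T);
}
```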
