Simulation of rare events by Adaptive Multilevel Splitting




  1. Simulation of rare events by Adaptive Multilevel Splitting algorithms Charles-Edouard Bréhier CNRS & Institut Camille Jordan, Université Lyon 1 Joint works with M. Gazeau (Borealis AI, Toronto) , L. Goudenège (CNRS, Centrale Paris) , T. Lelièvre (Ecole des Ponts, Paris) , M. Rousset (Inria, Rennes) F. Bouchet, C. Herbert, T. Lestang (ENS Lyon, Physics Dept) , F. Ragone (Department of Earth and Environmental Sciences, Milano)

  2. Motivations: rare event in metastable dynamics Goal: simulation of metastable stochastic processes = small noise perturbations of dynamics having multiple stationary points.

  3. Motivations: rare event in metastable dynamics Goal: simulation of metastable stochastic processes = small noise perturbations of dynamics having multiple stationary points. Examples:
◮ SDE: $dX_t = -\nabla V(X_t)\,dt + \sqrt{2\epsilon}\,dW(t)$
◮ Double-well potential: $V(x) = \frac{x^4}{4} - \frac{x^2}{2}$.
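As an illustration (not part of the slides), the double-well SDE above can be simulated with an Euler–Maruyama scheme; the step size, noise level and horizon below are illustrative choices:

```python
import numpy as np

def euler_maruyama(x0, eps, dt, n_steps, rng):
    """Simulate dX_t = -V'(X_t) dt + sqrt(2*eps) dW_t
    for the double-well potential V(x) = x^4/4 - x^2/2."""
    grad_V = lambda x: x**3 - x          # V'(x)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[n + 1] = x[n] - grad_V(x[n]) * dt + np.sqrt(2.0 * eps) * dW
    return x

rng = np.random.default_rng(0)
# start in the left well x = -1; for small eps the path stays there
# for a very long (exponentially large) time before crossing to x = +1
path = euler_maruyama(x0=-1.0, eps=0.05, dt=1e-3, n_steps=10_000, rng=rng)
```

For small $\epsilon$ the trajectory is metastable: it fluctuates around one well and transitions to the other well are rare, which is precisely what makes direct simulation expensive.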

  4. Motivations: rare event in metastable dynamics Goal: simulation of metastable stochastic processes = small noise perturbations of dynamics having multiple stationary points. Examples:
◮ SPDE: Allen-Cahn equation $\frac{\partial u}{\partial t}(t,x) = \gamma \Delta u(t,x) - \nabla V(u(t,x)) + \sqrt{2\epsilon}\,\xi(t,x)$

  5. Transitions between metastable states One goal is to estimate $p = \mathbb{P}_{x_0}(\tau_B < \tau_A)$, where $A$ and $B$ are metastable states.
◮ Direct numerical simulations: prohibitive, need $N \sim C p^{-1}$ independent realizations.
◮ Analytic approach: Large Deviations, Potential Theory. In the limit $\epsilon \to 0$: $\epsilon \log(p(\epsilon)) \to -C$, i.e. $p(\epsilon) \sim c\,e^{-C/\epsilon}$. Transition times are of the order $e^{C/\epsilon}$.

  6. Splitting algorithms Introduced in the 1950s: H. Kahn and T. E. Harris. Estimation of particle transmission by random sampling. National Bureau of Standards , 12:27–30, 1951.

  7. Splitting algorithms Introduced in the 1950s:
H. Kahn and T. E. Harris. Estimation of particle transmission by random sampling. National Bureau of Standards, 12:27–30, 1951.
Revival in the 1990s and 2000s:
P. Glasserman, P. Heidelberger, P. Shahabuddin, and T. Zajic. Multilevel splitting for estimating rare event probabilities. Oper. Res., 47(4):585–600, 1999.
with many variants:
M. Villén-Altamirano and J. Villén-Altamirano. RESTART: a method for accelerating rare events simulations. In Proceedings of the Thirteenth International Teletraffic Congress (ITC-13), Copenhagen, Denmark, pages 71–76. North-Holland, Amsterdam-New York, 1991.
S. K. Au and J. L. Beck. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16:263–277, 2001.
J. Skilling. Nested sampling for general Bayesian computation. Bayesian Anal., 1(4):833–859, 2006.
P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser. B Stat. Methodol., 68(3):411–436, 2006.

  8. Adaptive Multilevel Splitting algorithms Introduced in:
F. Cérou and A. Guyader. Adaptive multilevel splitting for rare event analysis. Stoch. Anal. Appl., 25(2):417–443, 2007.
Analysis of the method: series of recent works (non-exhaustive)
E. Simonnet. Combinatorial analysis of the adaptive last particle method. Statistics and Computing, pages 1–20, 2014.
C.-E. Bréhier, T. Lelièvre, and M. Rousset. Analysis of adaptive multilevel splitting algorithms in an idealized setting. ESAIM Probab. Stat., 19:361–394, 2015.
F. Cérou and A. Guyader. Fluctuation analysis of adaptive multilevel splitting. Ann. Appl. Probab., 26(6):3319–3380, 2016.
C.-E. Bréhier, M. Gazeau, L. Goudenège, T. Lelièvre, and M. Rousset. Unbiasedness of some generalized adaptive multilevel splitting algorithms. Ann. Appl. Probab., 26(6):3559–3601, 2016.

  9. Applications of AMS algorithms
J. Rolland and E. Simonnet. Statistical behaviour of adaptive multilevel splitting algorithms in simple models. Journal of Computational Physics, 283:541–558, 2015.
D. Aristoff, T. Lelièvre, C. G. Mayne, and I. Teo. Adaptive multilevel splitting in molecular dynamics simulations. In CEMRACS 2013 (modelling and simulation of complex systems: stochastic and deterministic approaches), volume 48 of ESAIM Proc. Surveys, pages 215–225. EDP Sci., Les Ulis, 2015.
Several recently defended and ongoing PhD theses: R. Poncet (Polytechnique, 10-2017), H. Louvin (CEA, 10-2017), L. J. Lopes (Ponts-Sanofi), T. Lestang (ENS Lyon, Physics).
H. Louvin, E. Dumonteil, T. Lelièvre, M. Rousset, and C. M. Diop. Adaptive multilevel splitting for Monte Carlo particle transport. In EPJ Web of Conferences, volume 153, page 06006. EDP Sciences, 2017.
L. J. Lopes, C. Chipot, and T. Lelièvre. Adaptive multilevel splitting method: isomerization of the alanine dipeptide. 2017.
Collaboration with F. Bouchet, C. Herbert, T. Lestang, F. Ragone.

  10. Principle of splitting algorithms Principles: iterative algorithm
◮ System of interacting replicas: copies of the Markov trajectories are simulated (in parallel).
◮ Adaptive splitting: selection of replicas using a score function.
◮ Partial resampling: new copies are simulated by branching replicas (mutation).
◮ Weighting: remain consistent when removing/creating replicas.
An important property: the (Markov) model is a black box.
Simpler setting: estimation of $p = \mathbb{P}(Y > a)$, with
◮ $Y$ a positive real-valued random variable (with continuous cdf),
◮ $a$ a threshold.

  11. General notions: estimators, consistency, efficiency An estimator of θ ∈ R is a random variable ˆ θ .

  12. General notions: estimators, consistency, efficiency An estimator of $\theta \in \mathbb{R}$ is a random variable $\hat\theta$. About consistency:
◮ The bias of the estimator is defined as $b(\hat\theta) = \mathbb{E}[\hat\theta] - \theta$.
◮ We say that $\hat\theta$ is an unbiased estimator of $\theta$ if $b(\hat\theta) = 0$.
◮ Let $(\hat\theta_N)_{N \in \mathbb{N}}$ be a sequence of estimators of $\theta$. It is called consistent if $\hat\theta_N \to \theta$ in probability as $N \to \infty$. It is called asymptotically unbiased if $b(\hat\theta_N) \to 0$ as $N \to \infty$.

  13. General notions: estimators, consistency, efficiency An estimator of $\theta \in \mathbb{R}$ is a random variable $\hat\theta$. About consistency:
◮ The bias of the estimator is defined as $b(\hat\theta) = \mathbb{E}[\hat\theta] - \theta$.
◮ We say that $\hat\theta$ is an unbiased estimator of $\theta$ if $b(\hat\theta) = 0$.
◮ Let $(\hat\theta_N)_{N \in \mathbb{N}}$ be a sequence of estimators of $\theta$. It is called consistent if $\hat\theta_N \to \theta$ in probability as $N \to \infty$. It is called asymptotically unbiased if $b(\hat\theta_N) \to 0$ as $N \to \infty$.
About efficiency: mean-square error
$$\mathbb{E}|\hat\theta - \theta|^2 = |\mathbb{E}\hat\theta - \theta|^2 + \mathbb{E}|\hat\theta - \mathbb{E}\hat\theta|^2 = b(\hat\theta)^2 + \mathrm{Var}(\hat\theta),$$
the sum of the bias squared (consistency error) and the variance (statistical error).
Example: $\hat m_N = \frac{1}{N}\sum_{n=1}^N Z_n$, with $(Z_n)_{n \in \mathbb{N}}$ i.i.d., is a consistent unbiased estimator of $m = \mathbb{E}[Z_1]$. Statistical error: $\mathrm{Var}(\hat m_N) = \frac{\mathrm{Var}(Z_1)}{N}$. More precisely: Central Limit Theorem.
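The bias/variance statements for the empirical mean can be checked numerically; this is a minimal sketch (not from the slides) with an arbitrary choice of distribution (Exp(1), so $m = \mathbb{E}[Z_1] = 1$ and $\mathrm{Var}(Z_1) = 1$) and arbitrary sample sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
N, n_trials = 100, 20_000

# n_trials independent realizations of the estimator m_hat_N,
# each built from N i.i.d. samples Z_1, ..., Z_N ~ Exp(1)
Z = rng.exponential(scale=1.0, size=(n_trials, N))
m_hat = Z.mean(axis=1)

mean_of_estimates = m_hat.mean()   # ≈ m = 1 (unbiased)
scaled_variance = m_hat.var() * N  # ≈ Var(Z_1) = 1, i.e. Var(m_hat_N) ≈ Var(Z_1)/N
```

The empirical mean of the estimates is close to $m$, and the empirical variance multiplied by $N$ is close to $\mathrm{Var}(Z_1)$, matching $\mathrm{Var}(\hat m_N) = \mathrm{Var}(Z_1)/N$.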

  14. The rare event case Estimation of $p = \mathbb{P}(X \in A) = \mathbb{E}[\mathbf{1}_{X \in A}]$? Direct approach: crude Monte-Carlo simulation. Take $Z = \mathbf{1}_{X \in A}$, and i.i.d. realizations $Z_1, \ldots, Z_n = \mathbf{1}_{X_n \in A}, \ldots$, and compute the empirical average
$$\hat p_N = \frac{1}{N}\sum_{n=1}^N Z_n = \frac{1}{N}\sum_{n=1}^N \mathbf{1}_{X_n \in A}.$$
Then $\hat p_N$ is a consistent unbiased estimator of $p = \mathbb{E}[Z]$. Relative error:
$$\frac{\sqrt{\mathbb{E}|\hat p_N - p|^2}}{p} = \frac{\sqrt{p(1-p)}}{p\sqrt{N}} \underset{p \to 0}{\sim} \frac{1}{\sqrt{pN}}.$$

  15. The rare event case Estimation of $p = \mathbb{P}(X \in A) = \mathbb{E}[\mathbf{1}_{X \in A}]$? Direct approach: crude Monte-Carlo simulation. Take $Z = \mathbf{1}_{X \in A}$, and i.i.d. realizations $Z_1, \ldots, Z_n = \mathbf{1}_{X_n \in A}, \ldots$, and compute the empirical average
$$\hat p_N = \frac{1}{N}\sum_{n=1}^N Z_n = \frac{1}{N}\sum_{n=1}^N \mathbf{1}_{X_n \in A}.$$
Then $\hat p_N$ is a consistent unbiased estimator of $p = \mathbb{E}[Z]$. Relative error:
$$\frac{\sqrt{\mathbb{E}|\hat p_N - p|^2}}{p} = \frac{\sqrt{p(1-p)}}{p\sqrt{N}} \underset{p \to 0}{\sim} \frac{1}{\sqrt{pN}}.$$
Variance reduction strategy: find a family of simulatable estimators $\hat\theta$ to reduce the cost. Requirements (to also remain unbiased):
$$\mathbb{E}[\hat\theta] = p, \qquad \mathrm{Var}(\hat\theta) \ll \mathrm{Var}(\hat p_N),$$
where simulations of $\hat\theta$ and $\hat p_N$ have the same average cost.
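The $1/\sqrt{pN}$ behaviour of the relative error can be seen on a toy example; this is an illustrative sketch (not from the slides) where the "rare" event is $\{U < p\}$ for a uniform variable $U$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.01       # probability of the rare event
N = 100_000    # number of crude Monte-Carlo samples

# Z_n = indicator that the n-th sample falls in the rare set
hits = rng.random(N) < p
p_hat = hits.mean()   # crude Monte-Carlo estimator of p

# predicted relative error sqrt(p(1-p))/(p*sqrt(N)) ~ 1/sqrt(p*N)
rel_err_pred = np.sqrt((1.0 - p) / (p * N))
```

With $p = 10^{-2}$ and $N = 10^5$, the predicted relative error is already about 3%; halving $p$ while keeping $N$ fixed degrades it by a factor $\sqrt{2}$, which is why crude Monte-Carlo becomes prohibitive for truly rare events.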

  16. Splitting algorithm Case study: $p = \mathbb{P}(Y > a)$, with $Y > 0$. Splitting:
$$p = \prod_{j=1}^N \mathbb{P}\left(Y > a_j \mid Y > a_{j-1}\right) = \prod_{j=1}^N p_j$$
with a non-decreasing sequence of levels $0 = a_0 < a_1 < \ldots < a_{N-1} < a_N = a$.
Estimator: $\hat p = \prod_{j=1}^N \hat p_{j, n_{\mathrm{rep}}}$ ($N$ independent estimators, with $n_{\mathrm{rep}}$ independent replicas per level) is unbiased.
Optimal choice of levels $a_j$: such that $p_1 = \ldots = p_N = p^{1/N}$.
Relative error, $N \to +\infty$: $\sim \frac{\sqrt{|\log(p)|}}{\sqrt{N}}$.

  17. Splitting algorithm Case study: $p = \mathbb{P}(Y > a)$, with $Y > 0$. Splitting:
$$p = \prod_{j=1}^N \mathbb{P}\left(Y > a_j \mid Y > a_{j-1}\right) = \prod_{j=1}^N p_j$$
with a non-decreasing sequence of levels $0 = a_0 < a_1 < \ldots < a_{N-1} < a_N = a$.
Estimator: $\hat p = \prod_{j=1}^N \hat p_{j, n_{\mathrm{rep}}}$ ($N$ independent estimators, with $n_{\mathrm{rep}}$ independent replicas per level) is unbiased.
Optimal choice of levels $a_j$: such that $p_1 = \ldots = p_N = p^{1/N}$.
Relative error, $N \to +\infty$: $\sim \frac{\sqrt{|\log(p)|}}{\sqrt{N}}$.
Adaptive version: computes a random sequence $Z_0, Z_1, \ldots$ of levels, with $p_j = 1 - \frac{k}{n_{\mathrm{rep}}}$.
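The adaptive version with $k = 1$ (the "last particle" algorithm) can be sketched in the idealized setting where one can sample $Y$ exactly conditionally on $\{Y > z\}$. This is an illustrative sketch, not the production algorithm: it takes $Y \sim \mathrm{Exp}(1)$, for which conditional sampling is exact by memorylessness, so the true answer $p = \mathbb{P}(Y > a) = e^{-a}$ is known. All names and parameter values are assumptions for the demo.

```python
import numpy as np

def ams_last_particle(n_rep, a, rng):
    """Adaptive Multilevel Splitting ('last particle', k = 1) estimate of
    p = P(Y > a) for Y ~ Exp(1), so the exact answer is exp(-a).

    At each iteration the lowest replica defines the next random level Z_k;
    it is killed and resampled conditionally on {Y > Z_k} (exact here by
    the memoryless property of the exponential distribution)."""
    Y = rng.exponential(size=n_rep)
    n_iter = 0
    while Y.min() < a:
        j = int(Y.argmin())
        level = Y[j]                      # current random level Z_k
        Y[j] = level + rng.exponential()  # resample conditionally on Y > level
        n_iter += 1
    # each iteration contributes a conditional-probability factor 1 - 1/n_rep
    return (1.0 - 1.0 / n_rep) ** n_iter

rng = np.random.default_rng(7)
a = 5.0  # target threshold, so p = exp(-5) ≈ 6.7e-3
est = np.mean([ams_last_particle(n_rep=100, a=a, rng=rng) for _ in range(200)])
```

Averaging independent realizations of the unbiased estimator $(1 - 1/n_{\mathrm{rep}})^{K}$, where $K$ is the (random) number of iterations, recovers $p$ without ever fixing the levels in advance: the algorithm discovers them adaptively.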
