An introduction to particle rare event simulation (P. Del Moral)




  1. An introduction to particle rare event simulation
  P. Del Moral (INRIA Bordeaux Sud-Ouest & IMB & CMAP)
  Computation of transition trajectories and rare events in non-equilibrium systems, ENS Lyon, June 2012
  Some hyper-refs:
  ◮ Feynman-Kac Formulae. Genealogical and Interacting Particle Systems with Applications, Springer (2004)
  ◮ Sequential Monte Carlo Samplers, JRSS B (2006) (joint work with A. Doucet & A. Jasra)
  ◮ A Backward Particle Interpretation of Feynman-Kac Formulae, M2AN (2010) (joint work with A. Doucet & S. S. Singh)
  ◮ On the Concentration Properties of Interacting Particle Processes, Foundations & Trends in Machine Learning [170p.] (2012) (joint work with Peng Hu & Liming Wu) [+ Refs]
  ◮ More references on the website: Feynman-Kac models and particle systems [+ Links]

  2. Introduction
  Feynman-Kac models
  Some rare event models
  Stochastic analysis

  3. Introduction
     ◮ Some basic notation
     ◮ Importance sampling
     ◮ Acceptance-rejection samplers
  Feynman-Kac models
  Some rare event models
  Stochastic analysis

  4. Basic notation
  $P(E)$: probability measures on $E$; $B(E)$: bounded functions on $E$.
  ◮ $(\mu, f) \in P(E) \times B(E) \mapsto \mu(f) = \int \mu(dx)\, f(x)$
  ◮ Integral operators $Q(x_1, dx_2)$ from $x_1 \in E_1$ to $x_2 \in E_2$:
    $Q(f)(x_1) = \int Q(x_1, dx_2)\, f(x_2)$ and $[\mu Q](dx_2) = \int \mu(dx_1)\, Q(x_1, dx_2)$
    (so that $[\mu Q](f) = \mu[Q(f)]$)
  ◮ Boltzmann-Gibbs transformation (positive and bounded potential function $G$):
    $\Psi_G(\mu)(dx) = \frac{1}{\mu(G)}\, G(x)\, \mu(dx)$
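On a finite state space the Boltzmann-Gibbs transformation is just a pointwise reweighting of $\mu$ by $G$; a minimal sketch with made-up numbers:

```python
# Toy discrete example (illustrative numbers): Psi_G(mu)(x) = G(x) mu(x) / mu(G)
mu = {0: 0.5, 1: 0.3, 2: 0.2}               # probability measure on {0, 1, 2}
G = {0: 1.0, 1: 2.0, 2: 4.0}                # positive bounded potential
mu_G = sum(mu[x] * G[x] for x in mu)        # mu(G) = 1.9
psi = {x: G[x] * mu[x] / mu_G for x in mu}  # Psi_G(mu), a new probability measure
```

The result is again a probability measure, with mass tilted towards the states where $G$ is large.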

  5. Importance sampling and optimal twisted measures
  Target: $P(X \in A) = P_X(A) = 10^{-10}$. Find $P_Y$ s.t. $P_Y(A) = P(Y \in A) \simeq 1$.
  Crude Monte Carlo sampling with $Y^i$ i.i.d. $\sim P_Y$:
  $P_X(A) = \int 1_A\, \frac{dP_X}{dP_Y}\, dP_Y \simeq P^N_X(A) := \frac{1}{N} \sum_{1 \le i \le N} 1_A(Y^i)\, \frac{dP_X}{dP_Y}(Y^i)$
  Optimal twisted measure = conditional distribution:
  Variance $= 0 \iff P_Y = \Psi_{1_A}(P_X) = \mathrm{Law}(X \mid X \in A)$
  ⇓
  Perfect or MCMC samplers = acceptance-rejection techniques, BUT very often with very small acceptance rates.
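A minimal sketch of the importance-sampling estimator above, using a Gaussian tail event $P(X > a)$ with $X \sim N(0,1)$ and the shifted proposal $Y \sim N(a,1)$ as a simple stand-in for a twisted measure; the event, the proposal, and $N$ are illustrative choices, not from the slides:

```python
# Importance sampling for P(X > a), X ~ N(0,1), proposal Y ~ N(a,1).
import math
import random

random.seed(0)
a, N = 4.0, 200_000
exact = 0.5 * math.erfc(a / math.sqrt(2.0))   # closed-form Gaussian tail

# Radon-Nikodym weight: dP_X/dP_Y (y) = phi(y)/phi(y - a) = exp(a^2/2 - a*y)
est = 0.0
for _ in range(N):
    y = random.gauss(a, 1.0)                  # Y ~ N(a, 1)
    if y > a:                                 # indicator 1_A(Y^i)
        est += math.exp(a * a / 2.0 - a * y)
est /= N
```

With the shifted proposal roughly half of the samples land in $A$, so the estimator is accurate at sample sizes where crude Monte Carlo (hit rate $\approx 3 \cdot 10^{-5}$) would fail.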


  7. Conditional distributions and Feynman-Kac models
  Example: a Markov chain model $X_n$ restricted to subsets $A_n$:
  $X = (X_0, \ldots, X_n) \in A = (A_0 \times \cdots \times A_n)$
  Conditional distributions:
  $\mathrm{Law}(X \mid X \in A) = \mathrm{Law}((X_0, \ldots, X_n) \mid X_p \in A_p,\ p < n) = Q_n$
  and $\mathrm{Proba}(X_p \in A_p,\ p < n) = Z_n$,
  given by the Feynman-Kac measures
  $dQ_n := \frac{1}{Z_n} \left( \prod_{0 \le p < n} G_p(X_p) \right) dP_n$
  with $P_n = \mathrm{Law}(X_0, \ldots, X_n)$ and $G_p = 1_{A_p}$, $p < n$.

  8. Introduction
  Feynman-Kac models
     ◮ Nonlinear evolution equation
     ◮ Interacting particle samplers
     ◮ Continuous time models
     ◮ Particle estimates
  Some rare event models
  Stochastic analysis

  9. Feynman-Kac models (general $G_n(X_n)$ & $X_n \in E_n$)
  Flow of $n$-marginals:
  $\eta_n(f) = \gamma_n(f)/\gamma_n(1)$ with $\gamma_n(f) := E\left[ f(X_n) \prod_{0 \le p < n} G_p(X_p) \right]$  ($\gamma_n(1) = Z_n$)
  ⇓
  Nonlinear evolution equation:
  $\eta_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1}$ with $Z_{n+1} = \eta_n(G_n) \times Z_n$,
  and the Markov transitions $M_{n+1}(x_n, dx_{n+1}) = P(X_{n+1} \in dx_{n+1} \mid X_n = x_n)$.
  Note: $[X_n = (X'_0, \ldots, X'_n)$ & $G_n(X_n) = G'_n(X'_n)] \Rightarrow \eta_n = Q'_n$.
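On a finite state space the evolution equation can be checked directly against the $\gamma_n$-definition by enumerating all paths; the chain, potential, and horizon below are made-up toy values:

```python
# Check eta_{n+1} = Psi_G(eta_n) M against eta_n(f) = gamma_n(f)/gamma_n(1)
# on a toy 2-state chain (all numbers illustrative).
from itertools import product

S = (0, 1)
eta0 = [0.5, 0.5]                  # initial law
M = [[0.9, 0.1], [0.2, 0.8]]       # Markov transition matrix M(x, y)
G = [0.3, 1.0]                     # potential G_p(x) = G[x], same for every p
n = 3

# gamma_n(x_n) = E[1_{X_n = x_n} prod_{p<n} G(X_p)] by path enumeration
gamma = [0.0, 0.0]
for path in product(S, repeat=n + 1):
    w = eta0[path[0]]
    for p in range(n):
        w *= G[path[p]] * M[path[p]][path[p + 1]]
    gamma[path[n]] += w
eta_direct = [g / sum(gamma) for g in gamma]

# nonlinear recursion: selection by G, then mutation by M
eta = list(eta0)
for _ in range(n):
    sel = [eta[x] * G[x] for x in S]
    z = sum(sel)                   # eta_p(G), the local normalizer
    eta = [sum(sel[x] * M[x][y] for x in S) / z for y in S]
```

Both routes give the same normalized flow, which is the content of the evolution equation.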

  10. Interacting particle samplers
  Nonlinear evolution equation: $\eta_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1}$, $Z_{n+1} = \eta_n(G_n) \times Z_n$
  ⇓ Sequential particle simulation technique: $M_n$-propositions ⊕ $G_n$-acceptance-rejection with recycling
  ⇓ Genetic type branching particle model:
  $\xi_n = (\xi^i_n)_{1 \le i \le N} \xrightarrow{\ G_n\text{-selection}\ } \widehat{\xi}_n = (\widehat{\xi}^i_n)_{1 \le i \le N} \xrightarrow{\ M_{n+1}\text{-mutation}\ } \xi_{n+1} = (\xi^i_{n+1})_{1 \le i \le N}$
  Note: $[X_n = (X'_0, \ldots, X'_n)$ & $G_n(X_n) = G'_n(X'_n)] \Rightarrow$ genealogical tree model.
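The selection/mutation scheme can be sketched on a toy event: a simple random walk on $\mathbb{Z}$ started at 0 that stays nonnegative for $n$ steps, so $G_p = 1_{x \ge 0}$. The exact value $P(S_1 \ge 0, \ldots, S_{10} \ge 0) = \binom{10}{5}/2^{10} \approx 0.2461$ is classical; $N$ and $n$ are illustrative choices:

```python
# Genetic-type particle estimate of Z_n = P(random walk stays >= 0 for n steps).
import random

random.seed(1)
N, n = 20_000, 10
xs = [0] * N                # particle positions, all started at 0
Z = 1.0                     # running product of eta_p^N(G_p)
for _ in range(n):
    # M-mutation: each particle takes one +/-1 step
    xs = [x + random.choice((-1, 1)) for x in xs]
    # G-selection: keep the particles satisfying the constraint ...
    alive = [x for x in xs if x >= 0]
    Z *= len(alive) / N     # fraction accepted = eta^N(G)
    if not alive:
        break               # the whole cloud died; the estimate is 0
    # ... and recycle: resample N particles among the survivors
    xs = [random.choice(alive) for _ in range(N)]
# exact answer for comparison: 252 / 1024 = 0.24609375
```

The running product `Z` is exactly the unbiased normalizing-constant estimate $\prod_p \eta^N_p(G_p)$ from the particle-estimates slide below.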


  12. ⊃ Continuous time models ⊃ Langevin diffusions
  $X_n := X'_{[t_n, t_{n+1})}$ & $G_n(X_n) = \exp\left( \int_{t_n}^{t_{n+1}} V_s(X'_s)\, ds \right)$
  OR Euler approximations (Langevin diffusions ⇝ Metropolis-Hastings moves)
  OR fully continuous time particle models ⇝ Schrödinger operators $L^V_t = L'_t + V_t$ with
  $\frac{d}{dt} \gamma_t(f) = \gamma_t(L^V_t(f))$
  and
  $\gamma_t(1) = E\left[ \exp \int_0^t V_s(X'_s)\, ds \right] = \exp\left( \int_0^t \eta_s(V_s)\, ds \right)$ with $\eta_t = \gamma_t/\gamma_t(1)$.
  Master equation (ex.: $V_t = -U_t \le 0$): $\eta_t = \mathrm{Law}(X_t)$ with
  $\frac{d}{dt} \eta_t(f) = \eta_t(L_{t,\eta_t}(f))$ and
  $L_{t,\eta_t}(f)(x) = \underbrace{L'_t(f)(x)}_{\text{free exploration}} + \underbrace{U_t(x)}_{\text{acceptance rate}} \int (f(y) - f(x))\, \underbrace{\eta_t(dy)}_{\text{interacting jump law}}$
  ⇓ Particle model: survival-acceptance rates ⊕ recycling jumps.

  13. Genealogical tree evolution ($(N, n) = (3, 3)$)
  [tree diagram]
  Some particle estimates ($\delta_a(dx) \leftrightarrow \delta(x - a)\, dx$):
  ◮ Individuals $\xi^i_n$ "almost" i.i.d. with law $\eta_n \simeq \eta^N_n = \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi^i_n}$
  ◮ Ancestral lines "almost" i.i.d. with law $Q_n \simeq \frac{1}{N} \sum_{1 \le i \le N} \delta_{\mathrm{line}_n(i)}$
  ◮ Normalizing constants: $Z_{n+1} = \prod_{0 \le p \le n} \eta_p(G_p) \simeq_{N \uparrow \infty} Z^N_{n+1} = \prod_{0 \le p \le n} \eta^N_p(G_p)$ (unbiased)

  14. Graphical illustration: $\eta_n \simeq \eta^N_n := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi^i_n}$



  34. How to use the full ancestral tree model?
  Hypothesis: $G_{n-1}(x_{n-1})\, M_n(x_{n-1}, dx_n) = H_n(x_{n-1}, x_n)\, \nu_n(dx_n)$
  ⇒ Backward Markov model:
  $Q_n(d(x_0, \ldots, x_n)) = \eta_n(dx_n)\, M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta_0}(x_1, dx_0)$
  with $M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \propto \eta_{n-1}(dx_{n-1})\, H_n(x_{n-1}, x_n)$.
  Particle approximation:
  $Q^N_n(d(x_0, \ldots, x_n)) = \eta^N_n(dx_n)\, M_{n,\eta^N_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta^N_0}(x_1, dx_0)$
  Ex.: additive functionals $f_n(x_0, \ldots, x_n) = \frac{1}{n+1} \sum_{0 \le p \le n} f_p(x_p)$:
  $Q^N_n(f_n) := \frac{1}{n+1} \sum_{0 \le p \le n} \eta^N_n\, M_{n,\eta^N_{n-1}} \cdots M_{p+1,\eta^N_p}(f_p)$ (matrix operations)
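The backward decomposition can be sanity-checked exactly on a toy two-state chain (all numbers below are illustrative, and the exact filter marginals $\eta_p$ are used in place of their particle approximations $\eta^N_p$): the backward matrix recursion for an additive functional reproduces brute-force path enumeration.

```python
# Backward Feynman-Kac decomposition on a toy 2-state chain.
from itertools import product

S = (0, 1)
eta0 = [0.6, 0.4]                  # initial law
M = [[0.7, 0.3], [0.4, 0.6]]       # Markov transitions M(x, y)
G = [0.5, 1.0]                     # potential G_p(x) = G[x] for every p
n = 4
f = [0.0, 1.0]                     # f_p(x) = x, additive functional

# brute force: Q_n(f_n) by enumerating all (n+1)-step paths
num = den = 0.0
for path in product(S, repeat=n + 1):
    w = eta0[path[0]]
    for p in range(n):
        w *= G[path[p]] * M[path[p]][path[p + 1]]
    den += w
    num += w * sum(f[x] for x in path) / (n + 1)
exact = num / den

# filter recursion eta_{p+1} = Psi_G(eta_p) M_{p+1}
etas = [eta0]
for p in range(n):
    sel = [etas[-1][x] * G[x] for x in S]
    z = sum(sel)
    etas.append([sum(sel[x] * M[x][y] for x in S) / z for y in S])

# backward kernel M_{p,eta_{p-1}}(x_p, x_{p-1})
#   ∝ eta_{p-1}(x_{p-1}) H_p(x_{p-1}, x_p),  H_p(x', x) = G(x') M(x', x)
def backward_kernel(p):
    B = []
    for x in S:                    # row indexed by the current state x_p
        row = [etas[p - 1][xm] * G[xm] * M[xm][x] for xm in S]
        s = sum(row)
        B.append([r / s for r in row])
    return B

# Q_n(f_n) = (1/(n+1)) sum_p [eta_n B_n ... B_{p+1}](f_p): pure matrix operations
marg = list(etas[n])               # time-n marginal of Q_n is eta_n
total = sum(marg[x] * f[x] for x in S)
for p in range(n - 1, -1, -1):
    B = backward_kernel(p + 1)
    marg = [sum(marg[x] * B[x][xm] for x in S) for xm in S]
    total += sum(marg[x] * f[x] for x in S)
smoothed = total / (n + 1)
```

Replacing `etas` by particle clouds gives exactly the $Q^N_n(f_n)$ estimate of the slide.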

  35. Introduction
  Feynman-Kac models
  Some rare event models
     ◮ Self avoiding walks
     ◮ Level crossing probabilities
     ◮ Particle absorption models
     ◮ Quasi-invariant measures
     ◮ Doob h-processes
     ◮ Semigroup gradient estimates
     ◮ Boltzmann-Gibbs measures
  Stochastic analysis

  36. Self avoiding walks in $\mathbb{Z}^d$
  Feynman-Kac model with $X_n = (X_0, \ldots, X_n)$ & $G_n(X_n) = 1_{X_n \notin \{X_0, \ldots, X_{n-1}\}}$
  ⇓ Conditional distributions:
  $Q_n = \mathrm{Law}((X_0, \ldots, X_n) \mid X_p \ne X_q,\ \forall\, 0 \le p < q < n)$
  and $Z_n = \mathrm{Proba}(X_p \ne X_q,\ \forall\, 0 \le p < q < n)$
