Perfect simulation using atomic regeneration with application to Sequential Monte Carlo

  1. Perfect simulation using atomic regeneration with application to Sequential Monte Carlo
  Anthony Lee, University of Warwick
  Joint work with Arnaud Doucet (Oxford), Krzysztof Łatuszyński (Warwick)
  February 11th, 2015, Gatsby Computational Neuroscience Unit, UCL
  arXiv:1407.5770

  2. Outline
  Introduction
  Regeneration and perfect simulation
  Singleton atoms and Bernoulli factories
  Introduction of an artificial singleton atom
  Perfect simulation from a Feynman–Kac path measure
  Remarks

  4. Introduction
  ◮ Given a target probability measure (distribution) π on (X, B(X)), we would like to obtain exact samples from π.
  ◮ Some methodology: inverse transform, special relationships, rejection sampling.
  ◮ In many practical situations, none of the above are suitable.
  ◮ Markov chain Monte Carlo: define a Markov kernel P with stationary distribution π.
  ◮ One can obtain samples whose asymptotic distribution is π.
  ◮ In the 90s the ability to sample exactly from the stationary distribution of a Markov chain was investigated.
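
As a reminder of how one of the exact-sampling methods listed above works, here is a minimal rejection-sampling sketch in Python. The target (a standard normal truncated to [0, ∞)), the Exp(1) proposal, and the bound M are illustrative choices, not part of the talk.

```python
import math
import random

def rejection_sample(pi_density, q_sample, q_density, M):
    """Draw one exact sample from pi by rejection, assuming
    pi_density(x) <= M * q_density(x) for all x."""
    while True:
        x = q_sample()
        if random.random() * M * q_density(x) <= pi_density(x):
            return x  # accepted: an exact draw from pi

# Toy example: pi = N(0, 1) truncated to [0, inf), proposal Exp(1).
pi_d = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)
q_d = lambda x: math.exp(-x)
q_s = lambda: random.expovariate(1.0)
M = math.sqrt(2.0 * math.e / math.pi)  # sup_x pi_d(x) / q_d(x)
x = rejection_sample(pi_d, q_s, q_d, M)
```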

  5. Sampling from the stationary distribution
  ◮ Asmussen et al. (1992) investigated methods to sample from the stationary distribution of a Markov chain.
  ◮ These methods were prohibitively expensive to implement on general state spaces.
  ◮ Propp and Wilson (1996) proposed an alternative method: Coupling From The Past (CFTP), sketched below.
  ◮ This has had notable successes on discrete state spaces.
  ◮ Murdoch and Green (1998) proposed a CFTP method for general state spaces: the multigamma coupler.
  ◮ One of the algorithms we propose is a multigamma coupler.
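
A compact sketch of vanilla CFTP on a toy finite chain, to make the idea concrete; the 3-state transition matrix is an invented example. Coupled copies of the chain are run from further and further in the past with shared randomness until they coalesce by time 0, at which point the common value is an exact draw from the stationary distribution.

```python
import random

def update(state, u, P_rows):
    """Deterministic update: move from `state` using uniform u by
    inverting the transition row P_rows[state]."""
    cum = 0.0
    for j, p in enumerate(P_rows[state]):
        cum += p
        if u <= cum:
            return j
    return len(P_rows[state]) - 1

def cftp(P_rows, seed=None):
    """Propp-Wilson CFTP for a small finite chain: one exact
    stationary draw."""
    rng = random.Random(seed)
    us = []  # us[k] drives the transition from time -(k+1) to -k
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        # Run every initial state from time -T to 0 with SHARED randomness.
        states = list(range(len(P_rows)))
        for t in range(T, 0, -1):
            states = [update(s, us[t - 1], P_rows) for s in states]
        if len(set(states)) == 1:   # coalescence by time 0
            return states[0]
        T *= 2  # go further into the past, reusing earlier randomness

# Toy 3-state chain (invented for illustration).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
print(cftp(P, seed=1))
```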

  6. Structure of the talk
  ◮ We will overview, for a Markov kernel P with stationary distribution π:
      ◮ regeneration and the split chain,
      ◮ mixture representations of π,
      ◮ perfect simulation.
  ◮ We will then discuss the special case where P is intractable but admits a singleton atom and defines a uniformly ergodic Markov chain.
  ◮ Solutions to Bernoulli factory problems provide implementations of perfect simulation in this context.
  ◮ We then discuss how one can introduce an artificial singleton atom, as in Brockwell and Kadane (2005).
  ◮ Finally, we use the methodology to sample from a Feynman–Kac path measure.

  7. Motivation
  ◮ The primary methodology we propose is for uniformly ergodic Markov chains.
  ◮ However, the transition kernel P can be intractable in the sense that we cannot compute, e.g., p(x, y) where
      P(x, A) = ∫_A p(x, y) dy,  A ∈ B(X),  x ∉ A.
  ◮ There is no barrier, e.g., to letting P be an iterate of another kernel.
  ◮ In our primary example the Markov kernel is intractable but “almost” a perfect sampler.
  ◮ Of course, there can be difficulties in applying the method, which we will discuss.

  8. Primary example: Feynman–Kac path measures
  ◮ We consider a generic discrete-time Feynman–Kac model with time horizon n.
  ◮ Let (Z, B(Z)) be a measurable space and define
      ◮ a probability measure µ : B(Z) → [0, 1],
      ◮ some Markov kernels M_p : Z × B(Z) → [0, 1] for p ∈ {2, ..., n}, and
      ◮ non-negative B(Z)-measurable functions G_p : Z → R_+ for p ∈ {1, ..., n}.
  ◮ We define for any p ∈ {1, ..., n} the measure γ_p by
      γ_p(A) := ∫_A [ ∏_{q=1}^{p} G_q(z_q) ] µ(dz_1) ∏_{q=2}^{p} M_q(z_{q−1}, dz_q),  A ∈ B(Z^p),
    and its associated probability measure π_p := γ_p(Z^p)^{−1} γ_p.
  ◮ With X := Z^n the Feynman–Kac path measure of interest is the probability measure π := π_n on B(X).

  9. Hidden Markov models
  ◮ One direct application is inferring the distribution of the latent variables in a hidden Markov model.
  ◮ Here µ and M := (M_p)_{p ∈ {2,...,n}} determine the unconditional distribution of the latent variables.
  ◮ G := (G_p)_{p ∈ {1,...,n}} encodes the probability densities of the observed data, i.e., G_p(z_p) = g(z_p, y_p), where (y_1, ..., y_n) is the sequence of observed data at times 1, ..., n.
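
To make the Feynman–Kac ingredients concrete, here is one possible instantiation for a toy linear-Gaussian HMM; the parameter values (ρ, σ_z, σ_y) and the Gaussian forms are assumptions for illustration only.

```python
import math
import random

# Illustrative linear-Gaussian HMM: all parameter values are assumptions.
rho, sigma_z, sigma_y = 0.9, 1.0, 0.5

def mu_sample():
    """Draws z_1 from the initial distribution mu = N(0, sigma_z^2)."""
    return random.gauss(0.0, sigma_z)

def M_sample(p, z_prev):
    """Draws z_p from M_p(z_{p-1}, .) = N(rho * z_{p-1}, sigma_z^2)."""
    return random.gauss(rho * z_prev, sigma_z)

def G(p, z, y):
    """Potential G_p(z) = g(z, y_p): an unnormalized Gaussian
    observation density for the observation y_p = y[p - 1]."""
    return math.exp(-0.5 * ((y[p - 1] - z) / sigma_y) ** 2)
```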

  10. Sequential Monte Carlo
  ◮ Expressing π using this Feynman–Kac formalism has immediate methodological consequences.
  ◮ One can approximate π(f) := ∫_X f(x) π(dx) using SMC or particle filtering methods.
  ◮ The following algorithm is a particle filter.

  11. A particle filter
  1. Simulate ζ_1^i ∼ µ for i ∈ {1, ..., N}.
  2. For p = 2, ..., n, for each i ∈ {1, ..., N}:
      2.1 Simulate A_{p−1}^i ∼ C(G_{p−1}(ζ_{p−1}^1), ..., G_{p−1}(ζ_{p−1}^N)).
      2.2 Simulate ζ_p^i ∼ M_p(ζ_{p−1}^{A_{p−1}^i}, ·).
  3. Set V = (ζ_1^1, ..., ζ_n^N, A_1^1, ..., A_{n−1}^N).
  ◮ Here C denotes a categorical distribution, i.e.,
      Pr(A_{p−1}^i = k) = G_{p−1}(ζ_{p−1}^k) / ∑_{j=1}^{N} G_{p−1}(ζ_{p−1}^j).
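
A direct transcription of the slide's algorithm into Python, with the toy linear-Gaussian ingredients from the previous sketch redefined inline so the block runs on its own; `random.choices` implements the categorical resampling step. Function names and the toy data are assumptions.

```python
import math
import random

def particle_filter(mu_sample, M_sample, G, n, N):
    """Steps 1-3 of the slide: zeta[p-1][i] holds particle zeta_p^i and
    A[p-2] holds the ancestor indices A_{p-1}^1, ..., A_{p-1}^N."""
    zeta = [[mu_sample() for _ in range(N)]]                  # step 1
    A = []
    for p in range(2, n + 1):                                 # step 2
        w = [G(p - 1, z) for z in zeta[-1]]                   # categorical weights
        anc = random.choices(range(N), weights=w, k=N)        # step 2.1
        zeta.append([M_sample(p, zeta[-1][a]) for a in anc])  # step 2.2
        A.append(anc)
    return zeta, A                                            # the variables in V

# Illustrative run with the toy linear-Gaussian model and invented data y.
y = [0.3, -0.1, 0.5, 0.2]
mu_s = lambda: random.gauss(0.0, 1.0)
M_s = lambda p, z: random.gauss(0.9 * z, 1.0)
G_y = lambda p, z: math.exp(-0.5 * ((y[p - 1] - z) / 0.5) ** 2)
zeta, A = particle_filter(mu_s, M_s, G_y, n=len(y), N=100)
```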

  12. Motivation for perfect simulation
  ◮ Following Andrieu et al. (2010), for any k ∈ {1, ..., N}, we define the ancestral lineage B^k to be the {1, ..., N}^n-valued random variable satisfying B_n^k := k and B_p^k := A_p^{B_{p+1}^k}.
  ◮ The random variable ζ^k := (ζ_1^{B_1^k}, ..., ζ_n^{B_n^k}) is then a path taking values in X.
  ◮ Let Q^N be the probability measure associated with the path ζ^K chosen by tracing an ancestral line after picking K with
      Pr(K = k) = G_n(ζ_n^k) / ∑_{j=1}^{N} G_n(ζ_n^j).
  ◮ How close is Q^N to π?
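
A small helper (names assumed) that reconstructs the ancestral lineage B^k from the stored ancestor indices and samples the path ζ^K as defined on this slide; it expects the `zeta`/`A` layout of the particle-filter sketch above.

```python
import random

def trace_lineage(k, A):
    """Computes B^k backwards: B_n^k = k and B_p^k = A_p^{B_{p+1}^k}.
    A[t] holds the ancestor indices A_{t+1} (0-indexed lists)."""
    n = len(A) + 1
    b = [0] * n
    b[n - 1] = k
    for t in range(n - 2, -1, -1):
        b[t] = A[t][b[t + 1]]
    return b

def sample_path(zeta, A, G_n):
    """Picks K with probability proportional to G_n(zeta_n^K), then
    returns the path zeta^K = (zeta_1^{B_1^K}, ..., zeta_n^{B_n^K})."""
    w = [G_n(z) for z in zeta[-1]]
    K = random.choices(range(len(w)), weights=w, k=1)[0]
    b = trace_lineage(K, A)
    return [zeta[t][b[t]] for t in range(len(zeta))]
```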

  13. Hope...
  Proposition. Assume there exists B < ∞ such that for each p ∈ {1, ..., n}, 0 < G_p(z_p) < B for all z_p ∈ Z. Then there exists F < ∞ such that for any N ≥ 2,
      sup_{x ∈ X} π(dx) / Q^N(dx) ≤ (1 + F/N)^n.
  ◮ Great!
  ◮ Can we make these samples perfect somehow?

  14. Iterated Conditional SMC
  ◮ This is a Markov kernel, defined by running a conditional SMC algorithm with a fixed path, followed by picking a new path:
      P_N(x, A) := ∫_{V_N} Q̄_x^N(dv) Q_v^N(A),  x ∈ X,  A ∈ B(X),
    where V_N denotes the space in which the particle system variable V takes values.
  ◮ Q̄_x^N is the probability measure associated with the random variable V produced by conditional SMC with fixed path x.
  ◮ Q_v^N is the probability measure associated with the path ζ^K chosen by tracing an ancestral line after picking K with
      Pr(K = k) = G_n(ζ_n^k) / ∑_{j=1}^{N} G_n(ζ_n^j).
  ◮ This is a reversible, π-invariant Markov kernel (Andrieu et al., 2010).
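
A minimal sketch of one i-cSMC transition in the same Python notation, freezing the conditioned path x as particle 0 (one common convention; the function names are assumptions). It runs conditional SMC and then samples a new path by tracing an ancestral line, i.e., one draw from P_N(x, ·).

```python
import random

def icsmc_step(x, mu_sample, M_sample, G, n, N):
    """One i-cSMC transition: conditional SMC with path x frozen as
    particle 0, followed by selecting a new path from the system."""
    zeta = [[x[0]] + [mu_sample() for _ in range(N - 1)]]
    A = []
    for p in range(2, n + 1):
        w = [G(p - 1, z) for z in zeta[-1]]
        # Particle 0 keeps itself as ancestor; the rest resample freely.
        anc = [0] + random.choices(range(N), weights=w, k=N - 1)
        zeta.append([x[p - 1]] + [M_sample(p, zeta[-1][a]) for a in anc[1:]])
        A.append(anc)
    # Pick K proportional to G_n, then trace its ancestral line backwards.
    wn = [G(n, z) for z in zeta[-1]]
    K = random.choices(range(N), weights=wn, k=1)[0]
    b, path = K, [zeta[n - 1][K]]
    for t in range(n - 2, -1, -1):
        b = A[t][b]
        path.insert(0, zeta[t][b])
    return path  # one draw from P_N(x, .)
```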

  15. Uniform ergodicity of i-cSMC
  ◮ This Markov kernel has been studied in detail in Chopin and Singh (2013), Andrieu et al. (2013) and Lindsten et al. (2014).
  ◮ If we assume π-essential boundedness of each G_p, then P_N(x, ·) ≥ ε_N π(·), where lim_{N→∞} ε_N = 1.
  ◮ Quantitative bounds provided in Andrieu et al. (2013) and Lindsten et al. (2014) can be used to bound ε_N under various assumptions.

  16. Remarks
  ◮ In Andrieu et al. (2012), a perfect sampling method is proposed in which the mechanism governing particle offspring is fundamentally changed, from selection with a constant population size at each time to stochastic branching.
  ◮ Computational guarantees for that method are yet to be established.
  ◮ The only other perfect sampling method on a general state space is rejection, in O(exp(n)) time.
  ◮ Some applications of our methodology are presented in the paper on arXiv; I will cover only the methodology here.

  17. Outline
  Introduction
  Regeneration and perfect simulation
  Singleton atoms and Bernoulli factories
  Introduction of an artificial singleton atom
  Perfect simulation from a Feynman–Kac path measure
  Remarks

  18. Notation
  ◮ Recall that π is a probability measure on (X, B(X)).
  ◮ Let X := (X_n)_{n≥1} be a time-homogeneous, π-irreducible, Harris recurrent Markov chain with π-invariant transition kernel P, i.e., P(x, A) = Pr(X_n ∈ A | X_{n−1} = x).
  ◮ We will use the notation, where µ : B(X) → [0, 1],
      µP(A) := ∫_X µ(dx) P(x, A),  A ∈ B(X),
    and for n ∈ N,
      P^n(x, A) := ∫_X P(x, dy) P^{n−1}(y, A).
  ◮ We assume we can sample from P(x, ·) for any x ∈ X.

  19. Atoms
  ◮ The set α is a proper atom for P if there exists a probability measure µ such that P(x, A) = µ(A), x ∈ α, A ∈ B(X).
  ◮ A proper atom is accessible if π(α) > 0, so that (independently of X_1)
      Pr( ∑_{n≥1} I(X_n ∈ α) = ∞ ) = 1.
  ◮ Intuition: when a proper atom exists, the Markov chain occasionally visits α, at which point it regenerates.
  ◮ X can then be split into independent “tours”, as in the helper sketched below.
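
A tiny helper (illustrative, not from the paper) that splits a simulated trajectory into tours at visits to α; each tour runs from the state after one visit to α through the next visit, and the resulting tours are i.i.d.

```python
def split_into_tours(chain, in_atom):
    """Splits a trajectory into independent tours delimited by visits
    to the atom alpha; in_atom(x) tests membership of alpha."""
    tours, current = [], None
    for x in chain:
        if current is not None:
            current.append(x)
        if in_atom(x):
            if current:            # close the tour ending at this visit
                tours.append(current)
            current = []           # a new tour begins after this visit
    return tours

# Example on a toy integer-valued chain with atom alpha = {0}.
traj = [0, 2, 1, 0, 3, 0, 1, 2, 0]
print(split_into_tours(traj, lambda x: x == 0))
# [[2, 1, 0], [3, 0], [1, 2, 0]]
```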

  20. The split chain
  ◮ On general state spaces, proper atoms are not guaranteed to exist.
  ◮ A key theoretical development was the split chain (Athreya and Ney, 1978; Nummelin, 1978).
  ◮ The key assumption is that P satisfies a minorization condition
      P(x, ·) ≥ s(x) ν(·),
    for some function s with π(s) = ∫_X s(x) π(dx) > 0 and a probability measure ν.
  ◮ The split chain X̃^{ν,s} = (X̃_n^{ν,s})_{n≥1} is a bivariate Markov chain evolving on X × {0, 1} whose first coordinate has identical law to X.
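
A schematic single step of the split chain under the minorization, assuming one can sample both ν and the residual kernel R(x, ·) = (P(x, ·) − s(x)ν(·)) / (1 − s(x)); in practice the residual can be difficult to sample, which is part of what makes the intractable-kernel setting of this talk delicate. All function names here are assumptions.

```python
import random

def split_chain_step(x, s, nu_sample, residual_sample):
    """One transition of the split chain on X x {0, 1}: with
    probability s(x) the chain regenerates from nu (second coordinate
    1), otherwise it moves via the residual kernel (second coordinate 0)."""
    if random.random() < s(x):
        return nu_sample(), 1       # regeneration: next state ~ nu
    return residual_sample(x), 0    # residual move, no regeneration
```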
