  1. Particle Gibbs with Ancestor Sampling
Fredrik Lindsten⋆, Michael I. Jordan†, Thomas B. Schön‡
Chamonix, January 6, 2014
⋆ Division of Automatic Control, Linköping University, Sweden
† Departments of EECS and Statistics, University of California, Berkeley, USA
‡ Department of Information Technology, Uppsala University, Sweden
Inference in nonlinear state-space models using Particle Gibbs with Ancestor Sampling

  2. Identification of state-space models
Consider a nonlinear discrete-time state-space model,
x_t ∼ f_θ(x_t | x_{t−1}),
y_t ∼ g_θ(y_t | x_t),
and x_1 ∼ π(x_1). We observe y_{1:T} = (y_1, ..., y_T) and wish to estimate θ.
• Frequentists: Find θ̂_ML = arg max_θ p_θ(y_{1:T}).
  - Use e.g. the Monte Carlo EM algorithm.
• Bayesians: Find p(θ | y_{1:T}).
  - Use e.g. Gibbs sampling.

  3. Gibbs sampler for SSMs
Aim: Find p(θ, x_{1:T} | y_{1:T}).
MCMC: Gibbs sampling for state-space models. Iterate,
• Draw θ[k] ∼ p(θ | x_{1:T}[k−1], y_{1:T}); OK!
• Draw x_{1:T}[k] ∼ p_{θ[k]}(x_{1:T} | y_{1:T}). Hard!
Problem: p_θ(x_{1:T} | y_{1:T}) is not available!
Idea: Approximate p_θ(x_{1:T} | y_{1:T}) using a particle filter.

  4. The particle filter
Weighting → Resampling → Propagation → Weighting → Resampling → ...
• Resampling: P(a_t^i = j | F_{t−1}^N) = w_{t−1}^j / Σ_l w_{t−1}^l.
• Propagation: x_t^i ∼ R_t^θ(x_t | x_{1:t−1}^{a_t^i}) and x_{1:t}^i = {x_{1:t−1}^{a_t^i}, x_t^i}.
• Weighting: w_t^i = W_t^θ(x_{1:t}^i).
⇒ {x_{1:t}^i, w_t^i}_{i=1}^N

  5. The particle filter
Algorithm: Particle filter (PF)
1. Initialize (t = 1):
   (a) Draw x_1^i ∼ R_1^θ(x_1) for i = 1, ..., N.
   (b) Set w_1^i = W_1^θ(x_1^i) for i = 1, ..., N.
2. for t = 2, ..., T:
   (a) Draw a_t^i ∼ Discrete({w_{t−1}^j}_{j=1}^N) for i = 1, ..., N.
   (b) Draw x_t^i ∼ R_t^θ(x_t | x_{1:t−1}^{a_t^i}) for i = 1, ..., N.
   (c) Set x_{1:t}^i = {x_{1:t−1}^{a_t^i}, x_t^i} and w_t^i = W_t^θ(x_{1:t}^i) for i = 1, ..., N.
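A minimal NumPy sketch of the bootstrap variant of this algorithm, where the proposal R_t^θ is taken to be the transition density f_θ and the weight function W_t^θ is the observation likelihood g_θ. The concrete model (an AR(1) state with additive Gaussian observation noise) and all parameter values are illustrative assumptions, not from the slides:

```python
import numpy as np

def particle_filter(y, N, rng, phi=0.9, q=1.0, r=1.0):
    """Bootstrap PF for the assumed model
    x_t = phi*x_{t-1} + v_t, v_t ~ N(0, q);  y_t = x_t + e_t, e_t ~ N(0, r)."""
    T = len(y)
    x = np.zeros((T, N))             # particles
    w = np.zeros((T, N))             # normalized weights
    a = np.zeros((T, N), dtype=int)  # ancestor indices
    # 1. Initialize (t = 1)
    x[0] = rng.normal(0.0, np.sqrt(q), N)                 # (a) draw from R_1
    logw = -0.5 * (y[0] - x[0]) ** 2 / r                  # (b) weight
    w[0] = np.exp(logw - logw.max()); w[0] /= w[0].sum()
    # 2. for t = 2, ..., T
    for t in range(1, T):
        a[t] = rng.choice(N, size=N, p=w[t - 1])          # (a) resample
        x[t] = phi * x[t - 1, a[t]] + rng.normal(0.0, np.sqrt(q), N)  # (b) propagate
        logw = -0.5 * (y[t] - x[t]) ** 2 / r              # (c) weight
        w[t] = np.exp(logw - logw.max()); w[t] /= w[t].sum()
    return x, w, a
```

Weights are normalized in log space (subtracting the max before exponentiating) to avoid numerical underflow.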

  6. The particle filter
[Figure: particle filter trajectories; State vs. Time]

  7. Sampling based on the PF
With P(x⋆_{1:T} = x^i_{1:T} | F_T^N) ∝ w_T^i we get x⋆_{1:T} approx. ∼ p_θ(x_{1:T} | y_{1:T}).
[Figure: sampled trajectory among the particle paths; State vs. Time]
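Given the output of such a filter (particles x, weights w, ancestor indices a), the draw of x⋆_{1:T} amounts to sampling a final index with probability ∝ w_T^i and tracing its ancestral lineage backwards. A sketch:

```python
import numpy as np

def sample_trajectory(x, w, a, rng):
    """Draw x*_{1:T}: sample a final particle index i with P = w_T^i,
    then follow ancestor indices a backwards to recover its path."""
    T, N = x.shape
    traj = np.empty(T)
    i = rng.choice(N, p=w[-1])       # final index ~ w_T
    for t in range(T - 1, -1, -1):
        traj[t] = x[t, i]
        if t > 0:
            i = a[t, i]              # step back to the ancestor
    return traj
```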

  8. Conditional particle filter with ancestor sampling
Problems with this approach:
• Based on a PF ⇒ approximate sample.
• Does not leave p(θ, x_{1:T} | y_{1:T}) invariant!
• Relies on large N to be successful.
• A lot of wasted computations.
Conditional particle filter with ancestor sampling (CPF-AS)
Let x′_{1:T} = (x′_1, ..., x′_T) be a fixed reference trajectory.
• At each time t, sample only N − 1 particles in the standard way.
• Set the Nth particle deterministically: x_t^N = x′_t.
• Generate an artificial history for x_t^N by ancestor sampling.

  9. Conditional particle filter with ancestor sampling
Algorithm: CPF-AS, conditioned on x′_{1:T}
1. Initialize (t = 1):
   (a) Draw x_1^i ∼ R_1^θ(x_1) for i = 1, ..., N − 1.
   (b) Set x_1^N = x′_1.
   (c) Set w_1^i = W_1^θ(x_1^i) for i = 1, ..., N.
2. for t = 2, ..., T:
   (a) Draw a_t^i ∼ Discrete({w_{t−1}^j}_{j=1}^N) for i = 1, ..., N − 1.
   (b) Draw x_t^i ∼ R_t^θ(x_t | x_{1:t−1}^{a_t^i}) for i = 1, ..., N − 1.
   (c) Set x_t^N = x′_t.
   (d) Draw a_t^N with P(a_t^N = i | F_{t−1}^N) ∝ w_{t−1}^i f_θ(x′_t | x_{t−1}^i).
   (e) Set x_{1:t}^i = {x_{1:t−1}^{a_t^i}, x_t^i} and w_t^i = W_t^θ(x_{1:t}^i) for i = 1, ..., N.
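A NumPy sketch of CPF-AS under the same illustrative assumptions as before (bootstrap proposal; AR(1) state x_t = φ·x_{t−1} + v_t with v_t ∼ N(0, q); Gaussian observation y_t = x_t + e_t). The model is hypothetical; only the algorithmic structure follows the slide. The ancestor-sampling step (d) is the only addition relative to a conditional PF:

```python
import numpy as np

def cpf_as(y, x_ref, N, rng, phi=0.9, q=1.0, r=1.0):
    """CPF-AS for the assumed AR(1)-plus-noise model, conditioned on
    the reference trajectory x_ref (pinned to the Nth particle slot)."""
    T = len(y)
    x = np.zeros((T, N)); w = np.zeros((T, N))
    a = np.zeros((T, N), dtype=int)
    x[0, :N - 1] = rng.normal(0.0, np.sqrt(q), N - 1)   # 1(a)
    x[0, N - 1] = x_ref[0]                               # 1(b) condition
    logw = -0.5 * (y[0] - x[0]) ** 2 / r                 # 1(c)
    w[0] = np.exp(logw - logw.max()); w[0] /= w[0].sum()
    for t in range(1, T):
        a[t, :N - 1] = rng.choice(N, size=N - 1, p=w[t - 1])   # 2(a)
        x[t, :N - 1] = (phi * x[t - 1, a[t, :N - 1]]
                        + rng.normal(0.0, np.sqrt(q), N - 1))  # 2(b)
        x[t, N - 1] = x_ref[t]                                  # 2(c)
        # 2(d) ancestor sampling: P(a_t^N = i) prop. to w_{t-1}^i f(x'_t | x_{t-1}^i)
        logp = np.log(w[t - 1]) - 0.5 * (x_ref[t] - phi * x[t - 1]) ** 2 / q
        p = np.exp(logp - logp.max()); p /= p.sum()
        a[t, N - 1] = rng.choice(N, p=p)
        logw = -0.5 * (y[t] - x[t]) ** 2 / r                    # 2(e)
        w[t] = np.exp(logw - logw.max()); w[t] /= w[t].sum()
    return x, w, a
```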

  10. The PGAS Markov kernel (I/II)
Consider the procedure:
1. Run CPF-AS(N, x′_{1:T}) targeting p_θ(x_{1:T} | y_{1:T}).
2. Sample x⋆_{1:T} with P(x⋆_{1:T} = x^i_{1:T} | F_T^N) ∝ w_T^i.
[Figure: sampled trajectory among the particle paths; State vs. Time]

  11. The PGAS Markov kernel (II/II)
This procedure:
• Maps x′_{1:T} stochastically into x⋆_{1:T}.
• Implicitly defines a Markov kernel P_θ^N on (X^T, 𝒳^T), referred to as the PGAS (Particle Gibbs with Ancestor Sampling) kernel.
Theorem: For any number of particles N ≥ 1 and for any θ ∈ Θ, the PGAS kernel P_θ^N leaves p_θ(x_{1:T} | y_{1:T}) invariant,
∫ P_θ^N(x′_{1:T}, dx⋆_{1:T}) p_θ(dx′_{1:T} | y_{1:T}) = p_θ(dx⋆_{1:T} | y_{1:T}).
F. Lindsten, M. I. Jordan and T. B. Schön, "Ancestor sampling for particle Gibbs," in P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou and K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (NIPS) 25, pp. 2600–2608, 2012.
F. Lindsten, M. I. Jordan and T. B. Schön, "Particle Gibbs with ancestor sampling," arXiv:1401.0604, 2014.

  12. Ergodicity
Theorem: Assume that there exist constants ε > 0 and κ < ∞ such that, for any θ ∈ Θ, t ∈ {1, ..., T} and x_{1:t} ∈ X^t, W_t^θ(x_{1:t}) ≤ κ and p_θ(y_{1:T}) ≥ ε. Then, for any N ≥ 2, the PGAS kernel P_θ^N is uniformly ergodic. That is, there exist constants R < ∞ and ρ ∈ [0, 1) such that
‖(P_θ^N)^k(x′_{1:T}, ·) − p_θ(· | y_{1:T})‖_TV ≤ R ρ^k,  ∀ x′_{1:T} ∈ X^T.
N.B. These conditions are simple, but unnecessarily strong; see (Lindsten, Douc, and Moulines, 2014).
F. Lindsten, M. I. Jordan and T. B. Schön, "Particle Gibbs with ancestor sampling," arXiv:1401.0604, 2014.

  13. PGAS for Bayesian identification
Bayesian identification: PGAS + Gibbs.
Algorithm: PGAS for Bayesian identification
1. Initialize: Set {θ[0], x_{1:T}[0]} arbitrarily.
2. For k ≥ 1, iterate:
   (a) Draw x_{1:T}[k] ∼ P_{θ[k−1]}^N(x_{1:T}[k−1], ·).
   (b) Draw θ[k] ∼ p(θ | x_{1:T}[k], y_{1:T}).
For any number of particles N ≥ 2, the Markov chain {θ[k], x_{1:T}[k]}_{k≥1} has limiting distribution p(θ, x_{1:T} | y_{1:T}).
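A compact, self-contained sketch of this sampler under assumed choices: the same illustrative AR(1)-plus-Gaussian-observation model as before, with θ taken to be the process-noise variance q and a conjugate inverse-gamma IG(a0, b0) prior so that step (b) is a closed-form draw. Both the model and the prior are hypothetical; pgas_kernel runs CPF-AS and samples a trajectory (the PGAS kernel), and pgas_gibbs alternates it with the θ-update:

```python
import numpy as np

def pgas_kernel(y, x_ref, q, N, rng, phi=0.9, r=1.0):
    """One application of the PGAS kernel: CPF-AS conditioned on x_ref,
    then a trajectory draw with P(x* = x^i) = w_T^i (ancestry traced back)."""
    T = len(y)
    x = np.zeros((T, N)); a = np.zeros((T, N), dtype=int)
    W = np.zeros((T, N))
    x[0, :-1] = rng.normal(0.0, np.sqrt(q), N - 1); x[0, -1] = x_ref[0]
    logw = -0.5 * (y[0] - x[0]) ** 2 / r
    W[0] = np.exp(logw - logw.max()); W[0] /= W[0].sum()
    for t in range(1, T):
        a[t, :-1] = rng.choice(N, size=N - 1, p=W[t - 1])
        x[t, :-1] = phi * x[t - 1, a[t, :-1]] + rng.normal(0.0, np.sqrt(q), N - 1)
        x[t, -1] = x_ref[t]
        # ancestor sampling for the conditioned particle
        logp = np.log(W[t - 1]) - 0.5 * (x_ref[t] - phi * x[t - 1]) ** 2 / q
        p = np.exp(logp - logp.max()); p /= p.sum()
        a[t, -1] = rng.choice(N, p=p)
        logw = -0.5 * (y[t] - x[t]) ** 2 / r
        W[t] = np.exp(logw - logw.max()); W[t] /= W[t].sum()
    traj = np.empty(T); i = rng.choice(N, p=W[-1])
    for t in range(T - 1, -1, -1):
        traj[t] = x[t, i]
        if t > 0:
            i = a[t, i]
    return traj

def pgas_gibbs(y, n_iter, N, rng, phi=0.9, r=1.0, a0=0.01, b0=0.01):
    """PGAS + Gibbs for theta = q (process-noise variance), IG(a0, b0) prior."""
    T = len(y)
    x = np.zeros(T); q = 1.0
    qs = []
    for _ in range(n_iter):
        x = pgas_kernel(y, x, q, N, rng, phi, r)  # (a) x[k] ~ P^N_{theta[k-1]}
        resid = x[1:] - phi * x[:-1]              # (b) theta[k] | x[k], y
        q = (b0 + 0.5 * resid @ resid) / rng.gamma(a0 + 0.5 * (T - 1))
        qs.append(q)
    return np.array(qs)
```

Note that the chain is valid for any N ≥ 2; in practice the ancestor-sampling step is what lets small N mix well.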

  14. ex) Stochastic volatility model
Stochastic volatility model,
x_{t+1} = 0.9 x_t + v_t,  v_t ∼ N(0, θ),
y_t = e_t exp(½ x_t),  e_t ∼ N(0, 1).
Consider the ACF of θ[k] − E[θ | y_{1:T}].
[Figure: ACF vs. lag for PGAS (left) and PG (right), T = 1000, N ∈ {5, 20, 100, 1000}]
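For reference, simulating data from this stochastic volatility model is straightforward; the value of θ in the usage example below is an arbitrary choice, not from the slides:

```python
import numpy as np

def simulate_sv(T, theta, rng, phi=0.9):
    """Simulate the stochastic volatility model from the slide:
    x_{t+1} = phi*x_t + v_t, v_t ~ N(0, theta);  y_t = e_t * exp(x_t / 2)."""
    x = np.zeros(T); y = np.zeros(T)
    x[0] = rng.normal(0.0, np.sqrt(theta))
    for t in range(T):
        y[t] = rng.normal() * np.exp(0.5 * x[t])   # observation
        if t + 1 < T:
            x[t + 1] = phi * x[t] + rng.normal(0.0, np.sqrt(theta))
    return x, y

# usage: x, y = simulate_sv(1000, 0.16, np.random.default_rng(0))
```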

  15. PGAS vs. PG
[Figures: particle trajectories under PGAS (left) and PG (right); State vs. Time]
