

  1. Sequential Monte Carlo Methods: "Particle Filter"
     Martin Ulmke, Head of Research Group Distributed Sensor Systems, Sensor Data and Information Fusion, Fraunhofer FKIE
     Lecture: Introduction to Sensor Data Fusion, 12 December 2018

  2. Contents
     1. Dynamic State Estimation: problem setting; conditional probability density
     2. Monte Carlo Methods
     3. Sequential Bayesian Estimation: linear Gaussian systems; grid-based methods; non-linear, non-Gaussian systems
     4. Sequential Monte Carlo Methods: "Sequential Importance Sampling" (SIS); "Sampling Importance Resampling" (SIR)
     5. Multitarget Tracking: the "combinatorial disaster"; Probability Hypothesis Density
     6. Extensions and Variants of Particle Filters
     7. Summary
     8. Literature

  3. Dynamic State Estimation: Problem Setting
     [Figure: an unknown target trajectory through the state space, starting at t = 0 and evolving over time t]
     Target trajectory: $X(t)$

  4. Dynamic State Estimation: Problem Setting
     [Figure: the target trajectory together with measurements arriving at time steps k = 1, 2, ..., including false alarms and missed detections]
     Target trajectory: $X(t)$
     Measurements: $Z_{1:k} = \{ z_1, \dots, z_k \}$

  5. Dynamic State Estimation: Problem Setting
     [Figure: a probability density over the state space, shown in addition to the trajectory and the measurements]
     Target trajectory: $X(t)$
     Measurements: $Z_{1:k} = \{ z_1, \dots, z_k \}$
     Probability density: $p(X_{0:k} \mid Z_{1:k})$ with $X_{0:k} \equiv \{ x_0, \dots, x_k \}$

  6. Dynamic State Estimation: Conditional Probability Density
     The conditional probability density $p(X_{0:k} \mid Z_{1:k})$ contains our collective knowledge about the target states $X_{0:k} = \{ x_0, \dots, x_k \}$ at the time instances $t_j$ ($j = 1, \dots, k$), given the measurements $Z_{1:k} \equiv \{ z_1, \dots, z_k \}$ and given prior knowledge, e.g., about the target dynamics.

  7. Dynamic State Estimation: Conditional Probability Density
     The conditional probability density $p(X_{0:k} \mid Z_{1:k})$ contains our collective knowledge about the target states $X_{0:k} = \{ x_0, \dots, x_k \}$ at the time instances $t_j$ ($j = 1, \dots, k$), given the measurements $Z_{1:k} \equiv \{ z_1, \dots, z_k \}$ and given prior knowledge, e.g., about the target dynamics.
     Specific marginalized densities:
     $p(x_k \mid Z_{1:k}) = \int p(X_{0:k} \mid Z_{1:k}) \, dX_{0:k-1}$   (filter estimation)
     $p(x_j \mid Z_{1:k}) = \int p(X_{0:k} \mid Z_{1:k}) \prod_{i \neq j} dx_i$   (retrodiction, $j < k$)
     Expectation values: $\mathrm{E}[f(X_{0:k})] = \int p(X_{0:k} \mid Z_{1:k}) \, f(X_{0:k}) \, dX_{0:k}$

  8. Dynamic State Estimation: Conditional Probability Density
     The conditional probability density $p(X_{0:k} \mid Z_{1:k})$ contains our collective knowledge about the target states $X_{0:k} = \{ x_0, \dots, x_k \}$ at the time instances $t_j$ ($j = 1, \dots, k$), given the measurements $Z_{1:k} \equiv \{ z_1, \dots, z_k \}$ and given prior knowledge, e.g., about the target dynamics.
     Specific marginalized densities:
     $p(x_k \mid Z_{1:k}) = \int p(X_{0:k} \mid Z_{1:k}) \, dX_{0:k-1}$   (filter estimation)
     $p(x_j \mid Z_{1:k}) = \int p(X_{0:k} \mid Z_{1:k}) \prod_{i \neq j} dx_i$   (retrodiction, $j < k$)
     Expectation values: $\mathrm{E}[f(X_{0:k})] = \int p(X_{0:k} \mid Z_{1:k}) \, f(X_{0:k}) \, dX_{0:k}$
     In general, these integrals are not analytically solvable $\Rightarrow$ "Monte Carlo methods": replace the integral by a sum over weighted "random paths",
     $\mathrm{E}[f(X_{0:k})] \approx \sum_{i=1}^{N} w_k^{(i)} f(X_{0:k}^{(i)}) \quad \Leftrightarrow \quad p(X_{0:k} \mid Z_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)} \, \delta(X_{0:k} - X_{0:k}^{(i)})$
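The weighted-sample approximation above can be made concrete in a few lines of code. The sketch below is illustrative only and not from the slides: it uses a hypothetical one-dimensional Gaussian as a stand-in posterior with equal weights, and checks that the weighted sum reproduces a known expectation value.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Placeholder "posterior": a 1-D Gaussian N(2, 0.5^2); the samples play the role of X^(i).
samples = rng.normal(loc=2.0, scale=0.5, size=N)
weights = np.full(N, 1.0 / N)           # equal weights w^(i) = 1/N

def f(x):
    return x ** 2                       # any function of the state

# E[f(X)] ~ sum_i w^(i) f(X^(i))
estimate = np.sum(weights * f(samples))
print(estimate)                         # close to 2.0^2 + 0.5^2 = 4.25
```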

  9. Monte Carlo Methods
     [Figure: particle trajectories approximating the probability density over the state space, shown next to the unknown target trajectory and the measurements at k = 1, 2, ...]
     Random path (trajectory): $X_{0:k}^{(i)}$,  weight: $w_k^{(i)}$
     $p(X_{0:k} \mid Z_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)} \, \delta(X_{0:k} - X_{0:k}^{(i)})$

  10. Monte Carlo Methods
      [Figure: particle trajectories approximating the probability density over the state space, shown next to the unknown target trajectory and the measurements at k = 1, 2, ...]
      Random path (trajectory): $X_{0:k}^{(i)}$,  weight: $w_k^{(i)}$
      $p(X_{0:k} \mid Z_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)} \, \delta(X_{0:k} - X_{0:k}^{(i)})$
      How do we determine $X_{0:k}^{(i)}$ and $w_k^{(i)}$?

  11. Monte Carlo Methods: Sampling
      a) Simple sampling: the paths $X_{0:k}^{(i)}$ are drawn uniformly over the state space, with weights $w_k^{(i)} \propto p(X_{0:k}^{(i)} \mid Z_{1:k})$ → very inefficient.

  12. Monte Carlo Methods: Sampling
      a) Simple sampling: the paths $X_{0:k}^{(i)}$ are drawn uniformly over the state space, with weights $w_k^{(i)} \propto p(X_{0:k}^{(i)} \mid Z_{1:k})$ → very inefficient.
      b) Perfect sampling: $X_{0:k}^{(i)} \sim p(X_{0:k} \mid Z_{1:k})$ with weights $w_k^{(i)} = 1/N$; optimal sampling of the phase space, but sampling from $p$ is difficult.

  13. Monte Carlo Methods: Sampling
      a) Simple sampling: the paths $X_{0:k}^{(i)}$ are drawn uniformly over the state space, with weights $w_k^{(i)} \propto p(X_{0:k}^{(i)} \mid Z_{1:k})$ → very inefficient.
      b) Perfect sampling: $X_{0:k}^{(i)} \sim p(X_{0:k} \mid Z_{1:k})$ with weights $w_k^{(i)} = 1/N$; optimal sampling of the phase space, but sampling from $p$ is difficult.
      c) Importance sampling: $X_{0:k}^{(i)} \sim q(X_{0:k} \mid Z_{1:k})$ with weights $w_k^{(i)} \propto p(X_{0:k}^{(i)} \mid Z_{1:k}) / q(X_{0:k}^{(i)} \mid Z_{1:k})$; the proposal $q$ is arbitrary (in principle).
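A minimal importance-sampling sketch, assuming hypothetical one-dimensional Gaussian densities for the target $p$ and the proposal $q$ (these choices are not from the slides): draw from $q$, weight by $p/q$, normalize, and reuse the weighted sample for expectations.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
N = 10_000

# Placeholder densities: target p = N(2, 0.5^2), broad proposal q = N(0, 2^2).
x = rng.normal(0.0, 2.0, size=N)                           # X^(i) ~ q
w = normal_pdf(x, 2.0, 0.5) / normal_pdf(x, 0.0, 2.0)      # w^(i) proportional to p/q
w /= w.sum()                                               # normalize the weights

print(np.sum(w * x))                                       # close to 2.0, the mean under p
```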

  14. Sequential Bayesian Estimation
      Often, quantities at time step $k$ can be calculated recursively from the previous time step $k-1$:
      $p(X_{0:k} \mid Z_{1:k}) = \frac{1}{p(z_k \mid Z_{1:k-1})} \, p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(X_{0:k} \mid Z_{1:k-1})$   (Bayes)

  15. Sequential Bayesian Estimation
      Often, quantities at time step $k$ can be calculated recursively from the previous time step $k-1$:
      $p(X_{0:k} \mid Z_{1:k}) = \frac{1}{p(z_k \mid Z_{1:k-1})} \, p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(X_{0:k} \mid Z_{1:k-1})$   (Bayes)
      $\propto p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(x_k \mid X_{0:k-1}, Z_{1:k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$

  16. Sequential Bayesian Estimation
      Often, quantities at time step $k$ can be calculated recursively from the previous time step $k-1$:
      $p(X_{0:k} \mid Z_{1:k}) = \frac{1}{p(z_k \mid Z_{1:k-1})} \, p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(X_{0:k} \mid Z_{1:k-1})$   (Bayes)
      $\propto p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(x_k \mid X_{0:k-1}, Z_{1:k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      $= p(z_k \mid x_k) \, p(x_k \mid x_{k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      (first factor: the measurement does not depend on the history; second factor: Markov dynamics)

  17. Sequential Bayesian Estimation
      Often, quantities at time step $k$ can be calculated recursively from the previous time step $k-1$:
      $p(X_{0:k} \mid Z_{1:k}) = \frac{1}{p(z_k \mid Z_{1:k-1})} \, p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(X_{0:k} \mid Z_{1:k-1})$   (Bayes)
      $\propto p(z_k \mid X_{0:k}, Z_{1:k-1}) \, p(x_k \mid X_{0:k-1}, Z_{1:k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      $= p(z_k \mid x_k) \, p(x_k \mid x_{k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      (first factor: the measurement does not depend on the history; second factor: Markov dynamics)
      Measurement equation: $z_k = h_k(x_k, u_k)$, so $p(z_k \mid x_k) = \int \delta(z_k - h_k(x_k, u_k)) \, p_u(u_k) \, du_k$
      Markov dynamics: $x_k = f_k(x_{k-1}, v_k)$, so $p(x_k \mid x_{k-1}) = \int \delta(x_k - f_k(x_{k-1}, v_k)) \, p_v(v_k) \, dv_k$
      $u_k \sim p_u(u)$: measurement noise;  $v_k \sim p_v(v)$: process noise
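To make the abstract model concrete, the following sketch instantiates $f_k$ and $h_k$ with an assumed constant-velocity motion model and a noisy position measurement; the slides leave both functions generic, so every numerical value here is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0                                       # sampling interval (assumed)

F = np.array([[1.0, T],
              [0.0, 1.0]])                    # constant-velocity state transition
H = np.array([[1.0, 0.0]])                    # measure the position component only

def f_k(x_prev):
    """x_k = f_k(x_{k-1}, v_k): linear dynamics plus process noise v_k."""
    v = rng.multivariate_normal(np.zeros(2), 0.01 * np.eye(2))
    return F @ x_prev + v

def h_k(x):
    """z_k = h_k(x_k, u_k): noisy position measurement with noise u_k."""
    u = rng.normal(0.0, 0.5)
    return (H @ x)[0] + u

# Simulate a short trajectory and the corresponding measurements.
x = np.array([0.0, 1.0])                      # initial state: position 0, velocity 1
for k in range(1, 6):
    x = f_k(x)
    z = h_k(x)
    print(k, x, z)
```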

  18. Sequential Bayesian Estimation
      $p(X_{0:k} \mid Z_{1:k}) \propto p(z_k \mid x_k) \, p(x_k \mid x_{k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      Often only the current estimate $x_k$ is needed, not the history $X_{0:k}$. Integrating out the history, $\int dX_{0:k-1} \cdots$, leads to the recursive filter equations for time step $k$:
      Measurement update: $p(x_k \mid Z_{1:k}) \propto p(z_k \mid x_k) \, p(x_k \mid Z_{1:k-1})$
      Prediction: $p(x_k \mid Z_{1:k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid Z_{1:k-1}) \, dx_{k-1}$

  19. Sequential Bayesian Estimation
      $p(X_{0:k} \mid Z_{1:k}) \propto p(z_k \mid x_k) \, p(x_k \mid x_{k-1}) \, p(X_{0:k-1} \mid Z_{1:k-1})$
      Often only the current estimate $x_k$ is needed, not the history $X_{0:k}$. Integrating out the history, $\int dX_{0:k-1} \cdots$, leads to the recursive filter equations for time step $k$:
      Measurement update: $p(x_k \mid Z_{1:k}) \propto p(z_k \mid x_k) \, p(x_k \mid Z_{1:k-1})$
      Prediction: $p(x_k \mid Z_{1:k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid Z_{1:k-1}) \, dx_{k-1}$
      In general this is only a conceptual solution; it is analytically solvable only in special cases.
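Since the recursion is only conceptual in general, one common numerical realization is to carry the weighted-sample representation from the Monte Carlo slides through the two steps: propagate each sample through $p(x_k \mid x_{k-1})$ (prediction) and reweight it by $p(z_k \mid x_k)$ (measurement update). The sketch below does this for one time step, using the hypothetical constant-velocity and position-measurement model assumed above; it anticipates the SIR particle filter treated later in the deck rather than reproducing a method from these slides.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])                      # assumed constant-velocity dynamics
sigma_v, sigma_u = 0.1, 0.5                     # assumed process / measurement noise

def likelihood(z, particles):
    """p(z_k | x_k) for a position measurement with Gaussian noise (assumed model)."""
    return np.exp(-0.5 * ((z - particles[:, 0]) / sigma_u) ** 2)

# Weighted particle representation of p(x_{k-1} | Z_{1:k-1}).
particles = rng.normal([0.0, 1.0], [1.0, 0.2], size=(N, 2))
weights = np.full(N, 1.0 / N)

z_k = 1.1                                       # a made-up new measurement

# Prediction: draw each particle from p(x_k | x_{k-1}).
particles = particles @ F.T + rng.normal(0.0, sigma_v, size=(N, 2))

# Measurement update: reweight by the likelihood and renormalize.
weights = weights * likelihood(z_k, particles)
weights /= weights.sum()

print(np.sum(weights[:, None] * particles, axis=0))   # estimate of E[x_k | Z_{1:k}]
```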

  20. Sequential Bayesian Estimation: Linear Gaussian Systems
      Target dynamics: $x_k = F_k x_{k-1} + v_k$, $v_k \sim \mathcal{N}(0, D_k)$
      Measurement equation: $z_k = H_k x_k + u_k$, $u_k \sim \mathcal{N}(0, R_k)$
      Then the conditional densities are normally distributed:
      $p(x_{k-1} \mid Z_{1:k-1}) = \mathcal{N}(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1})$
      $p(x_k \mid Z_{1:k-1}) = \mathcal{N}(x_k; x_{k|k-1}, P_{k|k-1})$
      $p(x_k \mid Z_{1:k}) = \mathcal{N}(x_k; x_{k|k}, P_{k|k})$
      with the Kalman filter equations:
      $x_{k|k-1} = F_k x_{k-1|k-1}$,   $P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + D_k$
      $x_{k|k} = x_{k|k-1} + W_k \left( z_k - H_k x_{k|k-1} \right)$,   $P_{k|k} = P_{k|k-1} - W_k S_k W_k^T$
      $S_k = H_k P_{k|k-1} H_k^T + R_k$,   $W_k = P_{k|k-1} H_k^T S_k^{-1}$
      (see Lecture II)
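The Kalman filter equations on this slide translate almost line by line into code. The function below is such a transcription; the matrices $F_k$, $D_k$, $H_k$, $R_k$ are placeholders that a concrete model must supply.

```python
import numpy as np

def kalman_step(x, P, z, F, D, H, R):
    """One prediction + measurement-update cycle of the Kalman filter."""
    # Prediction
    x_pred = F @ x                              # x_{k|k-1} = F_k x_{k-1|k-1}
    P_pred = F @ P @ F.T + D                    # P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + D_k
    # Measurement update
    S = H @ P_pred @ H.T + R                    # S_k = H_k P_{k|k-1} H_k^T + R_k
    W = P_pred @ H.T @ np.linalg.inv(S)         # W_k = P_{k|k-1} H_k^T S_k^{-1}
    x_upd = x_pred + W @ (z - H @ x_pred)       # x_{k|k}
    P_upd = P_pred - W @ S @ W.T                # P_{k|k}
    return x_upd, P_upd
```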

  21. Sequential Bayesian Estimation: Grid-based Methods
      Finite number of discrete target states $x_k^{(i)}$ ($i = 1, \dots, N$).
      Let $P(x_{k-1} = x_{k-1}^{(i)} \mid Z_{1:k-1}) \equiv w_{k-1|k-1}^{(i)}$ be given. Then
      $p(x_k \mid Z_{1:k-1}) = \sum_{i=1}^{N} w_{k|k-1}^{(i)} \, \delta(x_k - x_k^{(i)})$
      $p(x_k \mid Z_{1:k}) = \sum_{i=1}^{N} w_{k|k}^{(i)} \, \delta(x_k - x_k^{(i)})$
      with $w_{k|k-1}^{(i)} = \sum_{j=1}^{N} w_{k-1|k-1}^{(j)} \, p(x_k^{(i)} \mid x_{k-1}^{(j)})$ and $w_{k|k}^{(i)} \propto w_{k|k-1}^{(i)} \, p(z_k \mid x_k^{(i)})$.
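A sketch of the grid-based recursion, assuming an illustrative one-dimensional grid, a Gaussian random-walk transition kernel, and a Gaussian likelihood; all of these modeling choices are placeholders and not taken from the slides.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 101)              # discrete states x^(i)
N = grid.size

# Transition probabilities p(x^(i)_k | x^(j)_{k-1}): a Gaussian random walk,
# stored so that row j holds the distribution over the next state.
trans = np.exp(-0.5 * ((grid[None, :] - grid[:, None]) / 0.5) ** 2)
trans /= trans.sum(axis=1, keepdims=True)

w = np.full(N, 1.0 / N)                         # w^(i)_{k-1|k-1}: a flat prior here
z_k = 1.3                                       # a made-up measurement, noise std 1.0

# Prediction: w^(i)_{k|k-1} = sum_j w^(j)_{k-1|k-1} p(x^(i)_k | x^(j)_{k-1})
w_pred = w @ trans

# Update: w^(i)_{k|k} proportional to w^(i)_{k|k-1} p(z_k | x^(i)_k)
lik = np.exp(-0.5 * (z_k - grid) ** 2)
w_post = w_pred * lik / np.sum(w_pred * lik)

print(grid @ w_post)                            # posterior mean over the grid
```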
