

  1. A brief Introduction to Particle Filters Michael Pfeiffer pfeiffer@igi.tugraz.at 18.05.2004

  2. Agenda • Problem Statement • Classical Approaches • Particle Filters – Theory – Algorithms • Applications

  3. Problem Statement • Tracking the state of a system as it evolves over time • We have: Sequentially arriving (noisy or ambiguous) observations • We want to know: Best possible estimate of the hidden variables

  4. Illustrative Example: Robot Localization, t = 0 [figure: probability histogram over positions] • Sensory model: never more than one mistake • Motion model: may fail to execute an action with small probability

  5. Illustrative Example: Robot Localization, t = 1 [figure: probability histogram over positions]

  6. Illustrative Example: Robot Localization, t = 2 [figure: probability histogram over positions]

  7. Illustrative Example: Robot Localization, t = 3 [figure: probability histogram over positions]

  8. Illustrative Example: Robot Localization, t = 4 [figure: probability histogram over positions]

  9. Illustrative Example: Robot Localization, t = 5 [figure: probability histogram and robot trajectory]

  10. Applications • Tracking of aircraft positions from radar • Estimating communications signals from noisy measurements • Predicting economic data • Tracking of people or cars in surveillance videos

  11. Bayesian Filtering / Tracking Problem • Unknown state vector x_{0:t} = (x_0, …, x_t) • Observation vector z_{1:t} • Find the pdf p(x_{0:t} | z_{1:t}) … posterior distribution • or p(x_t | z_{1:t}) … filtering distribution • Prior information given: – p(x_0) … prior on the state distribution – p(z_t | x_t) … sensor model – p(x_t | x_{t-1}) … Markovian state-space model

  12. Sequential Update • Storing all incoming measurements is inconvenient • Recursive filtering: – Predict next state pdf from current estimate – Update the prediction using sequentially arriving new measurements • Optimal Bayesian solution: recursively calculating exact posterior density

  13. Bayesian Update and Prediction • Prediction:
$$p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1}$$
• Update:
$$p(x_t \mid z_{1:t}) = \frac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{p(z_t \mid z_{1:t-1})}, \qquad p(z_t \mid z_{1:t-1}) = \int p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})\, dx_t$$
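For a discrete and finite state space this recursion can be written out directly. Below is a minimal NumPy sketch of one prediction/update step; the three-state transition matrix and likelihood values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One step of the Bayes recursion on a discrete state space.

    prior      : p(x_{t-1} | z_{1:t-1}), shape (S,)
    transition : p(x_t | x_{t-1}),       shape (S, S), rows indexed by x_{t-1}
    likelihood : p(z_t | x_t) evaluated at the observed z_t, shape (S,)
    """
    # Prediction: p(x_t | z_{1:t-1}) = sum over x_{t-1} of p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1})
    predicted = transition.T @ prior
    # Update: multiply by the sensor model and normalize by p(z_t | z_{1:t-1})
    unnormalized = likelihood * predicted
    return unnormalized / unnormalized.sum()

# Illustrative 3-state example (assumed numbers)
prior = np.array([0.5, 0.3, 0.2])
transition = np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.8, 0.1],
                       [0.0, 0.2, 0.8]])
likelihood = np.array([0.1, 0.7, 0.2])
print(bayes_filter_step(prior, transition, likelihood))
```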

  14. Agenda • Problem Statement • Classical Approaches • Particle Filters – Theory – Algorithms • Applications

  15. Kalman Filter • Optimal solution for linear-Gaussian case • Assumptions: – State model is known linear function of last state and Gaussian noise signal – Sensory model is known linear function of state and Gaussian noise signal – Posterior density is Gaussian

  16. Kalman Filter: Update Equations
$$x_t = F_t x_{t-1} + v_{t-1}, \quad v_{t-1} \sim N(0, Q_{t-1}) \qquad\qquad z_t = H_t x_t + n_t, \quad n_t \sim N(0, R_t)$$
F_t, H_t : known matrices
$$p(x_{t-1} \mid z_{1:t-1}) = N(x_{t-1};\, m_{t-1|t-1},\, P_{t-1|t-1})$$
$$p(x_t \mid z_{1:t-1}) = N(x_t;\, m_{t|t-1},\, P_{t|t-1}), \quad m_{t|t-1} = F_t m_{t-1|t-1}, \quad P_{t|t-1} = Q_{t-1} + F_t P_{t-1|t-1} F_t^T$$
$$p(x_t \mid z_{1:t}) = N(x_t;\, m_{t|t},\, P_{t|t}), \quad m_{t|t} = m_{t|t-1} + K_t (z_t - H_t m_{t|t-1}), \quad P_{t|t} = P_{t|t-1} - K_t H_t P_{t|t-1}$$
$$S_t = H_t P_{t|t-1} H_t^T + R_t, \qquad K_t = P_{t|t-1} H_t^T S_t^{-1}$$
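A minimal NumPy sketch of one prediction/update step implementing the equations above; the 1-D constant-position model at the bottom is an assumed toy example, not from the slides.

```python
import numpy as np

def kalman_step(m_prev, P_prev, z, F, H, Q, R):
    """One Kalman filter prediction/update step (known matrices F, H, Q, R)."""
    # Prediction
    m_pred = F @ m_prev                      # m_{t|t-1} = F_t m_{t-1|t-1}
    P_pred = F @ P_prev @ F.T + Q            # P_{t|t-1} = Q_{t-1} + F_t P_{t-1|t-1} F_t^T
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance S_t
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain K_t
    m = m_pred + K @ (z - H @ m_pred)        # m_{t|t}
    P = P_pred - K @ H @ P_pred              # P_{t|t}
    return m, P

# Illustrative 1-D constant-position model (assumed values)
m, P = np.array([0.0]), np.array([[1.0]])
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.25]])
m, P = kalman_step(m, P, z=np.array([0.4]), F=F, H=H, Q=Q, R=R)
print(m, P)
```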

  17. Limitations of Kalman Filtering • Assumptions are too strong. We often find: – Non-linear Models – Non-Gaussian Noise or Posterior – Multi-modal Distributions – Skewed distributions • Extended Kalman Filter: – local linearization of non-linear models – still limited to Gaussian posterior

  18. Grid-based Methods • Optimal for discrete and finite state space • Keep and update an estimate of posterior pdf for every single state • No constraints on posterior (discrete) density

  19. Limitations of Grid-based Methods • Computationally expensive • Only for finite state sets • Approximate Grid-based Filter – divide continuous state space into finite number of cells – Hidden Markov Model Filter – Dimensionality increases computational costs dramatically

  20. Agenda • Problem Statement • Classical Approaches • Particle Filters – Theory – Algorithms • Applications

  21. Many different names… Particle Filters • (Sequential) Monte Carlo filters • Interacting Particle Approximations • Bootstrap filters • Survival of the fittest • Condensation • …

  22. Sample-based PDF Representation • Monte Carlo characterization of the pdf: – Represent the posterior density by a set of random i.i.d. samples ( particles ) from the pdf p(x_{0:t} | z_{1:t}) – For a large number N of particles this is equivalent to a functional description of the pdf – For N → ∞ it approaches the optimal Bayesian estimate

  23. Sample-based PDF Representation • Regions of high density – Many particles – Large weight of particles • Uneven partitioning • Discrete approximation of the continuous pdf:
$$P_N(x_{0:t} \mid z_{1:t}) = \sum_{i=1}^{N} w_t^{(i)}\, \delta\!\left(x_{0:t} - x_{0:t}^{(i)}\right)$$

  24. Importance Sampling • Draw N samples x_{0:t}^{(i)} from an importance sampling distribution π(x_{0:t} | z_{1:t}) • Importance weight:
$$w(x_{0:t}) = \frac{p(x_{0:t} \mid z_{1:t})}{\pi(x_{0:t} \mid z_{1:t})}$$
• Estimation of arbitrary functions f_t:
$$\hat{I}_N(f_t) = \sum_{i=1}^{N} f_t\!\left(x_{0:t}^{(i)}\right) \tilde{w}_t^{(i)}, \qquad \tilde{w}_t^{(i)} = \frac{w_t^{(i)}}{\sum_{j=1}^{N} w_t^{(j)}}$$
$$\hat{I}_N(f_t) \xrightarrow[\;N \to \infty\;]{a.s.} I(f_t) = \int f_t(x_{0:t})\, p(x_{0:t} \mid z_{1:t})\, dx_{0:t}$$
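A small sketch of plain (non-sequential) importance sampling: it estimates E_p[f(x)] using samples from a proposal π. The standard-normal target, the wider Gaussian proposal, and the test function f are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_pdf(x):                      # target density p: standard normal
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def pi_pdf(x):                     # proposal density pi: normal with std 2
    return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))

N = 10000
x = rng.normal(0.0, 2.0, size=N)   # draw N samples from the proposal pi
w = p_pdf(x) / pi_pdf(x)           # unnormalized importance weights
w_tilde = w / w.sum()              # normalized weights

f = lambda x: x**2                 # arbitrary test function f
estimate = np.sum(w_tilde * f(x))  # I_N(f), should approach E_p[x^2] = 1 for large N
print(estimate)
```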

  25. Sequential Importance Sampling (SIS) • Augmenting the samples:
$$\pi(x_{0:t} \mid z_{1:t}) = \pi(x_{0:t-1} \mid z_{1:t-1})\, \pi(x_t \mid x_{0:t-1}, z_{1:t})$$
$$x_t^{(i)} \sim \pi(x_t \mid x_{t-1}^{(i)}, z_t)$$
• Weight update:
$$w_t^{(i)} \propto w_{t-1}^{(i)}\, \frac{p(z_t \mid x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)})}{\pi(x_t^{(i)} \mid x_{t-1}^{(i)}, z_t)}$$
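A sketch of one SIS step under the common choice π = p(x_t | x_{t-1}) (the transition prior, see slide 27), so the weight update reduces to w_t ∝ w_{t-1} p(z_t | x_t). The Gaussian random-walk state and Gaussian sensor in the usage example are assumed models.

```python
import numpy as np

def sis_step(particles, weights, z, sample_transition, likelihood):
    """One SIS step with the transition prior as importance density.

    particles, weights    : arrays of shape (N,) and (N,)
    sample_transition(x)  : draws x_t ~ p(x_t | x_{t-1}) for each particle
    likelihood(z, x)      : evaluates p(z_t | x_t) elementwise
    """
    particles = sample_transition(particles)        # propagate through the state model
    weights = weights * likelihood(z, particles)    # weight update w_t ∝ w_{t-1} p(z_t | x_t)
    return particles, weights / weights.sum()       # normalize

# Illustrative usage (assumed models)
rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 1.0, size=N)
weights = np.full(N, 1.0 / N)
particles, weights = sis_step(
    particles, weights, z=0.3,
    sample_transition=lambda x: x + rng.normal(0.0, 0.1, size=x.shape),
    likelihood=lambda z, x: np.exp(-0.5 * ((z - x) / 0.5) ** 2))
```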

  26. Degeneracy Problem • After a few iterations, all but one particle will have negligible weight • Measure for degeneracy: effective sample size
$$N_{\mathrm{eff}} = \frac{N}{1 + \mathrm{Var}\!\left(w_t^{*(i)}\right)}, \qquad w_t^{*} \;\text{… true weights at time } t$$
• Small N_eff indicates severe degeneracy • Brute force solution: use a very large N
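The true weights are not available in practice; a standard surrogate estimates N_eff from the normalized weights as 1 / Σ_i (w_t^{(i)})². A small sketch:

```python
import numpy as np

def effective_sample_size(weights):
    """Practical estimate of N_eff from normalized weights: 1 / sum(w_i^2)."""
    w = np.asarray(weights)
    return 1.0 / np.sum(w ** 2)

# Uniform weights give N_eff = N; one dominant weight gives N_eff close to 1.
print(effective_sample_size(np.full(100, 0.01)))         # -> 100.0
print(effective_sample_size([0.99] + [0.01 / 99] * 99))  # -> about 1.02
```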

  27. Choosing Importance Density • Choose π to minimize the variance of the weights • Optimal solution:
$$\pi(x_t \mid x_{t-1}^{(i)}, z_t)_{\mathrm{opt}} = p(x_t \mid x_{t-1}^{(i)}, z_t) \;\Rightarrow\; w_t^{(i)} \propto w_{t-1}^{(i)}\, p(z_t \mid x_{t-1}^{(i)})$$
• Practical solution:
$$\pi(x_t \mid x_{t-1}^{(i)}, z_t) = p(x_t \mid x_{t-1}^{(i)}) \;\Rightarrow\; w_t^{(i)} \propto w_{t-1}^{(i)}\, p(z_t \mid x_t^{(i)})$$
– importance density = prior

  28. Resampling • Eliminate particles with small importance weights • Concentrate on particles with large weights • Sample N times with replacement from the set of particles x_t^{(i)} according to the importance weights w_t^{(i)} • „Survival of the fittest“
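A sketch of the simplest (multinomial) resampling scheme, assuming the particles are stored in a NumPy array and the weights are normalized. Systematic or stratified resampling are common lower-variance alternatives with the same interface.

```python
import numpy as np

def resample(particles, weights, rng=np.random.default_rng()):
    """Multinomial resampling: draw N indices with replacement according to the
    importance weights, then reset all weights to 1/N."""
    N = len(weights)
    idx = rng.choice(N, size=N, replace=True, p=weights)
    return particles[idx], np.full(N, 1.0 / N)
```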

  29. Sampling Importance Resampling (SIR) Filter: Basic Algorithm • 1. INIT, t = 0 – for i = 1, …, N: sample x_0^{(i)} ~ p(x_0); set t := 1 • 2. IMPORTANCE SAMPLING – for i = 1, …, N: sample x_t^{(i)} ~ p(x_t | x_{t-1}^{(i)}) and set x_{0:t}^{(i)} := (x_{0:t-1}^{(i)}, x_t^{(i)}) – for i = 1, …, N: evaluate the importance weights w_t^{(i)} = p(z_t | x_t^{(i)}) – normalize the importance weights • 3. SELECTION / RESAMPLING – resample with replacement N particles x_{0:t}^{(i)} according to the importance weights – set t := t + 1 and go to step 2
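A compact NumPy sketch of this SIR loop, with the state-space model supplied as user functions; the 1-D random-walk tracking example at the bottom (model parameters, noise levels) is an assumed illustration, not from the slides.

```python
import numpy as np

def sir_filter(z_seq, N, sample_prior, sample_transition, likelihood,
               rng=np.random.default_rng()):
    """Sampling Importance Resampling filter (1-D particles).

    sample_prior(N)       -> initial particles x_0^{(i)}
    sample_transition(x)  -> samples from p(x_t | x_{t-1}) for each particle
    likelihood(z, x)      -> evaluates p(z_t | x_t) elementwise
    Returns the posterior mean estimate at every time step.
    """
    particles = sample_prior(N)                        # 1. INIT
    means = []
    for z in z_seq:
        particles = sample_transition(particles)       # 2. sample from the prior
        w = likelihood(z, particles)                    #    importance weights p(z_t | x_t)
        w /= w.sum()                                    #    normalize
        means.append(np.sum(w * particles))             #    state estimate (weighted mean)
        idx = rng.choice(N, size=N, replace=True, p=w)  # 3. resample with replacement
        particles = particles[idx]
    return np.array(means)

# Illustrative 1-D random-walk tracking example (assumed model)
rng = np.random.default_rng(0)
true_x = np.cumsum(rng.normal(0.0, 0.1, size=50))
z_seq = true_x + rng.normal(0.0, 0.5, size=50)
est = sir_filter(
    z_seq, N=1000,
    sample_prior=lambda N: rng.normal(0.0, 1.0, size=N),
    sample_transition=lambda x: x + rng.normal(0.0, 0.1, size=x.shape),
    likelihood=lambda z, x: np.exp(-0.5 * ((z - x) / 0.5) ** 2),
    rng=rng)
print(np.mean(np.abs(est - true_x)))   # average tracking error
```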

  30. Variations • Auxiliary Particle Filter: – resample at time t-1 with one-step lookahead (re-evaluate with new sensory information) • Regularisation: – resample from continuous approximation of posterior p(x t |z 1:t )

  31. Visualization of Particle Filter [figure: one filter cycle — unweighted measure → compute importance weights ⇒ p(x_{t-1} | z_{1:t-1}) → resampling → move particles / predict ⇒ p(x_t | z_{1:t-1})]

  32. Particle Filter Demo 1 moving Gaussian + uniform, N= 100 particles

  33. Particle Filter Demo 2 moving Gaussian + uniform, N= 1000 particles

  34. Particle Filter Demo 3 moving (sharp) Gaussian + uniform, N= 100 particles

  35. Particle Filter Demo 4 moving (sharp) Gaussian + uniform, N= 1000 particles

  36. Particle Filter Demo 5 mixture of two Gaussians; the filter loses track of the smaller, less pronounced peak

  37. Obtaining state estimates from particles • Any estimate of a function f(x_t) can be calculated from the discrete pdf approximation:
$$E[f(x_t)] = \sum_{j=1}^{N} w_t^{(j)}\, f\!\left(x_t^{(j)}\right)$$
• Mean:
$$E[x_t] = \sum_{j=1}^{N} w_t^{(j)}\, x_t^{(j)}$$
• MAP estimate: particle with the largest weight • Robust mean: mean within a window around the MAP estimate
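A small sketch of these three estimators for 1-D particles with normalized weights; the window size for the robust mean is a free parameter of this sketch.

```python
import numpy as np

def weighted_mean(particles, weights):
    """E[x_t] ≈ sum_j w_t^(j) x_t^(j)."""
    return np.sum(weights * particles)

def map_estimate(particles, weights):
    """MAP-style estimate: the particle with the largest weight."""
    return particles[np.argmax(weights)]

def robust_mean(particles, weights, window):
    """Weighted mean restricted to a window around the MAP estimate."""
    center = map_estimate(particles, weights)
    mask = np.abs(particles - center) <= window
    return np.sum(weights[mask] * particles[mask]) / np.sum(weights[mask])
```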
