  1. Introduction to Mobile Robotics: Bayes Filter – Particle Filter and Monte Carlo Localization (Wolfram Burgard)

  2. Motivation § Estimating the state of a dynamical system is a fundamental problem § The Recursive Bayes Filter is an effective approach to estimating the belief about the state of a dynamical system § How to represent this belief? § How to maximize it? § Particle filters are a way to efficiently represent an arbitrary (non-Gaussian) distribution § Basic principle: a set of state hypotheses ("particles") and survival of the fittest

  3. Bayes Filters (z = observation, u = action, x = state)
     Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
     = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)   [Bayes]
     = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)   [Markov]
     = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   [Total prob.]
     = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   [Markov]
     = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, z_{t−1}) dx_{t−1}   [Markov]
     = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
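As a concrete illustration, this recursive update can be sketched for a finite state space, where the integral becomes a sum. The function name `bayes_filter_step` and the `p_motion`/`p_obs` interfaces are illustrative assumptions, not from the slides:

```python
def bayes_filter_step(bel, u, z, p_motion, p_obs):
    """One recursive Bayes filter update on a finite state space.

    bel      -- Bel(x_{t-1}) as a list of probabilities over states
    p_motion -- p_motion(u)[i][j] = P(x_t = j | u_t = u, x_{t-1} = i)
    p_obs    -- p_obs(z)[j] = P(z_t = z | x_t = j)
    (interfaces are hypothetical)
    """
    M, L = p_motion(u), p_obs(z)
    n = len(bel)
    # Prediction: total probability over the previous state x_{t-1}
    bel_bar = [sum(M[i][j] * bel[i] for i in range(n)) for j in range(n)]
    # Correction: weight by the observation likelihood, then normalize (eta)
    unnorm = [L[j] * bel_bar[j] for j in range(n)]
    eta = sum(unnorm)
    return [p / eta for p in unnorm]
```

With two states, an identity motion model, and likelihoods [0.9, 0.2], a uniform prior concentrates on the first state after a single correction step.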

  4. Probabilistic Localization

  5. Function Approximation § Particle sets can be used to approximate functions § The more particles that fall into an interval, the higher the probability of that interval § How to draw samples from a function/distribution?

  6. Rejection Sampling § Let us assume that f(x) < a for all x § Sample x from a uniform distribution § Sample c uniformly from [0, a] § If f(x) > c, keep the sample; otherwise, reject it
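A minimal Python sketch of this procedure, assuming a bounded density f on a finite interval [lo, hi] (the function and parameter names are mine, not the slides'):

```python
import random

def rejection_sample(f, lo, hi, a, n):
    """Draw n samples from the density proportional to f on [lo, hi],
    assuming f(x) < a for all x in [lo, hi]."""
    samples = []
    while len(samples) < n:
        x = random.uniform(lo, hi)   # sample x from a uniform distribution
        c = random.uniform(0.0, a)   # sample c from [0, a]
        if f(x) > c:                 # keep the sample ...
            samples.append(x)        # ... otherwise reject it
    return samples
```

For example, `rejection_sample(lambda x: x, 0.0, 1.0, 1.0, 10000)` yields samples from the triangular density 2x on [0, 1], whose mean is 2/3.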

  7. Importance Sampling Principle § We can even use a different distribution g to generate samples from f § Using an importance weight w, we can account for the "differences between g and f" § w = f / g § f is called the target § g is called the proposal § Pre-condition: f(x) > 0 ⇒ g(x) > 0
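The same idea as a short self-normalized importance-sampling sketch (the names are illustrative): samples are drawn from the proposal g and reweighted with w = f/g to estimate an expectation under the target f.

```python
import random

def importance_estimate(f, g_pdf, g_sample, n):
    """Estimate E_f[x] by drawing from the proposal g and weighting
    each sample with w = f(x) / g(x) (requires f(x) > 0 => g(x) > 0)."""
    xs = [g_sample() for _ in range(n)]
    ws = [f(x) / g_pdf(x) for x in xs]   # importance weights w = f / g
    total = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / total
```

With target f(x) = 2x on [0, 1] and a uniform proposal, the estimate converges to E_f[x] = 2/3.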

  8. Particle Filter Representation § Set of weighted samples S = {⟨x^[i], w^[i]⟩ | i = 1, …, n}, where x^[i] is a state hypothesis and w^[i] its importance weight § The samples represent the posterior

  9. Importance Sampling with Resampling: Landmark Detection Example

  10. Distributions

  11. Distributions Wanted: samples distributed according to p(x | z_1, z_2, z_3)

  12. This is Easy! We can draw samples from p(x | z_l) by adding noise to the detection parameters.

  13. Importance Sampling
     Target distribution f: p(x | z_1, z_2, …, z_n) = [∏_k p(z_k | x)] p(x) / p(z_1, z_2, …, z_n)
     Sampling distribution g: p(x | z_l) = p(z_l | x) p(x) / p(z_l)
     Importance weights: w = f / g = p(x | z_1, z_2, …, z_n) / p(x | z_l) = [p(z_l) ∏_{k≠l} p(z_k | x)] / p(z_1, z_2, …, z_n)

  14. Importance Sampling with Resampling Weighted samples After resampling

  15. Particle Filter Localization 1. Draw x_{t−1}^{[i]} from Bel(x_{t−1}) 2. Draw x_t^{[i]} from p(x_t | x_{t−1}^{[i]}, u_t) 3. Compute the importance factor for x_t^{[i]} 4. Re-sample

  16. Rejection Sampling (step 2: draw x_t^{[i]} from p(x_t | x_{t−1}^{[i]}, u_t)) § Let us assume that f(x) < a for all x § Sample x from a uniform distribution § Sample c uniformly from [0, a] § If f(x) > c, keep the sample; otherwise, reject it

  17. Importance Sampling (step 3: importance factor for x_t^{[i]}) § We can even use a different distribution g to generate samples from f § Using an importance weight w, we can account for the "differences between g and f" § w = f / g § f is called the target § g is called the proposal § Pre-condition: f(x) > 0 ⇒ g(x) > 0

  18. Particle Filters

  19. Sensor Information: Importance Sampling
     Bel(x) ← α p(z | x) Bel⁻(x)
     w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

  20. Robot Motion Bel⁻(x) ← ∫ p(x | u, x′) Bel(x′) dx′

  21. Sensor Information: Importance Sampling
     Bel(x) ← α p(z | x) Bel⁻(x)
     w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

  22. Robot Motion Bel⁻(x) ← ∫ p(x | u, x′) Bel(x′) dx′

  23. Particle Filter Algorithm § Sample the next generation of particles using the proposal distribution § Compute the importance weights: weight = target distribution / proposal distribution § Resampling: "replace unlikely samples by more likely ones"

  24. Particle Filter Algorithm
      1. Algorithm particle_filter(S_{t−1}, u_t, z_t):
      2.   S_t = ∅, η = 0
      3.   For i = 1, …, n (generate new samples)
      4.     Sample index j(i) from the discrete distribution given by w_{t−1}
      5.     Sample x_t^{[i]} from p(x_t | x_{t−1}, u_t) using x_{t−1}^{[j(i)]} and u_t
      6.     Compute importance weight w_t^{[i]} = p(z_t | x_t^{[i]})
      7.     Update normalization factor η = η + w_t^{[i]}
      8.     Add ⟨x_t^{[i]}, w_t^{[i]}⟩ to the new particle set S_t
      9.   For i = 1, …, n
      10.    Normalize weights: w_t^{[i]} = w_t^{[i]} / η

  25. Particle Filter Algorithm
      Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t−1}, u_t) Bel(x_{t−1}) dx_{t−1}
      § Draw x_{t−1}^{[i]} from Bel(x_{t−1})
      § Draw x_t^{[i]} from p(x_t | x_{t−1}^{[i]}, u_t)
      § Importance factor for x_t^{[i]}:
        w_t^{[i]} = target distribution / proposal distribution
        = η p(z_t | x_t) p(x_t | x_{t−1}, u_t) Bel(x_{t−1}) / [p(x_t | x_{t−1}, u_t) Bel(x_{t−1})]
        ∝ p(z_t | x_t)

  26. Resampling § Given: a set S of weighted samples § Wanted: a random sample, where the probability of drawing x_i is given by w_i § Typically done n times with replacement to generate the new sample set S′

  27. Resampling § Roulette wheel: binary search, O(n log n) § Stochastic universal sampling (systematic resampling): linear time complexity, easy to implement, low variance

  28. Resampling Algorithm
      1. Algorithm systematic_resampling(S, n):
      2.   S′ = ∅, c_1 = w^1
      3.   For i = 2, …, n (generate cdf)
      4.     c_i = c_{i−1} + w^i
      5.   u_1 ~ U(0, n^{−1}], i = 1 (initialize threshold)
      6.   For j = 1, …, n (draw samples)
      7.     While (u_j > c_i) (skip until next threshold reached)
      8.       i = i + 1
      9.     S′ = S′ ∪ {⟨x^i, n^{−1}⟩} (insert)
      10.    u_{j+1} = u_j + n^{−1} (increment threshold)
      11. Return S′
      Also called stochastic universal sampling.
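A sketch of this systematic (stochastic universal) resampling in Python, assuming normalized weights; the guard on the index protects against floating-point round-off in the cdf:

```python
import random

def systematic_resampling(particles, weights):
    """Resample n particles with one random offset and n evenly spaced
    thresholds: linear time, low variance."""
    n = len(particles)
    # Build the cumulative distribution of the weights
    cdf, c = [], 0.0
    for w in weights:
        c += w
        cdf.append(c)
    u = random.uniform(0.0, 1.0 / n)   # initial threshold u_1 ~ U(0, 1/n]
    i = 0
    resampled = []
    for _ in range(n):
        while i < n - 1 and u > cdf[i]:   # skip until next threshold reached
            i += 1
        resampled.append(particles[i])
        u += 1.0 / n                      # increment the threshold
    return resampled
```

Because the thresholds are evenly spaced, a particle with weight w_i appears either ⌊n·w_i⌋ or ⌈n·w_i⌉ times, which is the low-variance property mentioned above.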

  29. Particle Filters for Mobile Robot Localization § Each particle is a potential pose of the robot § The proposal distribution is the motion model of the robot (prediction step) § The observation model is used to compute the importance weight (correction step)

  30. Motion Model Start

  31. Proximity Sensor Model (Reminder) Sonar sensor Laser sensor

  32. Mobile Robot Localization Using Particle Filters (1) § Each particle is a potential pose of the robot § The set of weighted particles approximates the posterior belief about the robot's pose (target distribution)

  33. Mobile Robot Localization Using Particle Filters (2) § Particles are drawn from the motion model (proposal distribution) § Particles are weighted according to the observation model (sensor model) § Particles are resampled according to the particle weights

  34. Mobile Robot Localization Using Particle Filters (3) Why is resampling needed? § We only have a finite number of particles § Without resampling, the filter is likely to lose track of the "good" hypotheses § Resampling ensures that particles stay in the meaningful area of the state space

  35.–51. [Image-only slides]

  52. Sample-based Localization (Sonar)

  53. Using Ceiling Maps for Localization [Dellaert et al. 99]

  54. Vision-based Localization [measurement z, expected measurement h(x), likelihood P(z|x) shown as images]

  55. Under a Light [measurement z and likelihood P(z|x) shown as images]

  56. Next to a Light [measurement z and likelihood P(z|x) shown as images]

  57. Elsewhere [measurement z and likelihood P(z|x) shown as images]

  58. Global Localization Using Vision

  59. Limitations § The approach described so far is able to track the pose of a mobile robot and to globally localize the robot § How can we deal with localization errors (i.e., the kidnapped robot problem)?

  60. Approaches § Randomly insert a fixed number of samples with randomly chosen poses § This corresponds to the assumption that the robot can be teleported at any point in time to an arbitrary location § Alternatively, insert such samples inversely proportionally to the average likelihood of the observations (the lower this likelihood, the higher the probability that the current estimate is wrong)
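The first approach (injecting a fixed number of uniformly drawn particles) can be sketched as follows; `sample_random_pose` is a hypothetical helper that draws a pose uniformly over the map:

```python
import random

def inject_random_particles(particles, num_random, sample_random_pose):
    """Replace a fixed number of particles by uniformly drawn poses so the
    filter can recover from the kidnapped robot problem."""
    kept = random.sample(particles, len(particles) - num_random)
    return kept + [sample_random_pose() for _ in range(num_random)]
```

The total particle count stays constant; only a small fraction is sacrificed per step, so tracking performance is barely affected while global recovery remains possible.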

  61. Summary – Particle Filters § Particle filters are an implementation of recursive Bayesian filtering § They represent the posterior by a set of weighted samples § They can model arbitrary, and thus also non-Gaussian, distributions § New samples are drawn from a proposal distribution § Weights are computed to account for the difference between the proposal and the target § Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter
