Development and Implementation of SLAM Algorithms - PowerPoint PPT Presentation (Kasra Khosoussi)

Development and Implementation of SLAM Algorithms
Kasra Khosoussi
Supervised by: Dr. Hamid D. Taghirad
Advanced Robotics and Automated Systems (ARAS), Industrial Control Laboratory
K.N. Toosi University of Technology
July 13, 2011


1. Introduction / Bayesian Filtering: Probabilistic Methods
- Probabilistic methods outperform deterministic algorithms: they describe the uncertainty in the data, the models, and the estimates (Bayesian estimation).
- Notation: robot pose s_t, assumed to be a Markov process with initial distribution p(s_0); feature location \theta_i; map \theta = \{\theta_1, \dots, \theta_N\}; observation z_t and control input u_t; x_t = [s_t \;\; \theta]^T; x_{1:t} \triangleq \{x_1, \dots, x_t\}.
- Filtering distribution: p(s_t, \theta | z_{1:t}, u_{1:t})
- Smoothing distribution: p(s_{0:t}, \theta | z_{1:t}, u_{1:t})
- MMSE estimates: \hat{x}_t = E[x_t | z_{1:t}, u_{1:t}] and \hat{x}_{0:t} = E[x_{0:t} | z_{1:t}, u_{1:t}]

2. Introduction / Bayesian Filtering: State-Space Equations
- Robot motion equation: s_t = f(s_{t-1}, u_t, v_t)
- Observation equation: z_t = g(s_t, \theta_{n_t}, w_t)
- f(\cdot,\cdot,\cdot) and g(\cdot,\cdot,\cdot) are non-linear functions.
- v_t and w_t are zero-mean white Gaussian noises with covariance matrices Q_t and R_t.

3. Introduction / Bayesian Filtering: Bayes Filter
How can the posterior distribution be estimated recursively in time? The Bayes filter:
- Prediction (motion model):
  p(x_t | z_{1:t-1}, u_{1:t}) = \int p(x_t | x_{t-1}, u_t)\, p(x_{t-1} | z_{1:t-1}, u_{1:t-1})\, dx_{t-1}   (1)
- Update (observation model):
  p(x_t | z_{1:t}, u_{1:t}) = \frac{p(z_t | x_t)\, p(x_t | z_{1:t-1}, u_{1:t})}{\int p(z_t | x_t)\, p(x_t | z_{1:t-1}, u_{1:t})\, dx_t}   (2)
- A similar recursive formula exists for the smoothing density.
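To make the prediction/update recursion concrete, here is a minimal numerical sketch of Equations (1) and (2) on a 1-D discrete state grid; the Gaussian motion and observation models are illustrative assumptions for the example only, not the models used in this work.

```python
import numpy as np

# 1-D grid over the state space; the models below are illustrative assumptions.
grid = np.linspace(-5.0, 5.0, 201)
dx = grid[1] - grid[0]

def gaussian(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def predict(prior, u, q=0.2):
    # Eq. (1): integrate the motion model p(x_t | x_{t-1}, u_t) against the prior.
    return np.array([np.sum(gaussian(x, grid + u, q) * prior) * dx for x in grid])

def update(pred, z, r=0.5):
    # Eq. (2): multiply by the likelihood p(z_t | x_t) and renormalize.
    post = gaussian(z, grid, r) * pred
    return post / (np.sum(post) * dx)

belief = gaussian(grid, 0.0, 1.0)        # p(x_0)
for u, z in [(1.0, 1.2), (1.0, 2.1)]:    # controls u_t and observations z_t
    belief = update(predict(belief, u), z)
print("MMSE estimate:", np.sum(grid * belief) * dx)
```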

4. Introduction / Bayesian Filtering: Bayes Filter Cont'd.
- Example: for linear-Gaussian models, the Bayes filter equations simplify into the Kalman filter equations.
- But in general it is impossible to implement the exact Bayes filter, because it requires evaluating complex high-dimensional integrals.
- So we have to use approximations:
  - Extended Kalman Filter (EKF)
  - Unscented Kalman Filter (UKF)
  - Gaussian-Sum Filter
  - Extended Information Filter (EIF)
  - Particle Filter (a.k.a. Sequential Monte Carlo methods)

5. Introduction / Particle Filter: Perfect Monte Carlo Sampling
Q. How can we compute expected values such as E_{p(x)}[h(x)] = \int h(x)\, p(x)\, dx for any integrable function h(\cdot)?
Perfect Monte Carlo (a.k.a. Monte Carlo integration):
- Generate N i.i.d. samples \{x^{[i]}\}_{i=1}^{N} according to p(x).
- Estimate the PDF as P_N(x) \triangleq \frac{1}{N} \sum_{i=1}^{N} \delta(x - x^{[i]}).
- Estimate E_{p(x)}[h(x)] \approx \int h(x)\, P_N(x)\, dx = \frac{1}{N} \sum_{i=1}^{N} h(x^{[i]}).
Properties:
- Convergence theorems for N \to \infty via the central limit theorem and the strong law of large numbers.
- The error decreases as O(N^{-1/2}) regardless of the dimension of x.
But it is usually impossible to sample directly from the filtering or smoothing distribution (high-dimensional, non-standard, only known up to a constant).
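A short sketch of perfect Monte Carlo integration, assuming an illustrative choice of p(x) (standard normal) and h(x) = x^2 so that the true value is 1; the shrinking error as N grows reflects the O(N^{-1/2}) behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation(h, sampler, n):
    # Perfect Monte Carlo: E_p[h(x)] ~= (1/N) sum_i h(x^[i]), with x^[i] ~ p(x).
    return np.mean(h(sampler(n)))

h = lambda x: x ** 2                        # illustrative integrand
sampler = lambda n: rng.standard_normal(n)  # p(x) = N(0, 1), so E_p[h(x)] = 1

for n in (10, 1_000, 100_000):
    est = mc_expectation(h, sampler, n)
    print(f"N = {n:>6}: estimate = {est:.4f}, error = {abs(est - 1.0):.4f}")
```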

6. Introduction / Particle Filter: Importance Sampling (IS)
Idea: generate samples from another distribution, the importance function \pi(x), and weight these samples according to w^*(x^{[i]}) = \frac{p(x^{[i]})}{\pi(x^{[i]})}:
  E_{p(x)}[h(x)] = \int h(x)\, \frac{p(x)}{\pi(x)}\, \pi(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} w^*(x^{[i]})\, h(x^{[i]})
But how can this be done recursively in time?
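A corresponding sketch of (unnormalized) importance sampling; the target p and the importance function \pi below are illustrative Gaussians chosen for the example, not distributions from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def is_expectation(h, p_pdf, pi_pdf, pi_sampler, n):
    # E_p[h(x)] ~= (1/N) sum_i w*(x^[i]) h(x^[i]),  w*(x) = p(x)/pi(x),  x^[i] ~ pi
    x = pi_sampler(n)
    w = p_pdf(x) / pi_pdf(x)
    return np.mean(w * h(x))

# Illustrative target p = N(1, 0.5^2); broad importance function pi = N(0, 2^2).
p_pdf = lambda x: gauss_pdf(x, 1.0, 0.5)
pi_pdf = lambda x: gauss_pdf(x, 0.0, 2.0)
pi_sampler = lambda n: rng.normal(0.0, 2.0, n)

print(is_expectation(lambda x: x, p_pdf, pi_pdf, pi_sampler, 100_000))  # ~= 1.0
```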

7. Introduction / Particle Filter: Sequential Importance Sampling (SIS)
- Sampling from scratch from the importance function \pi(x_{0:t} | z_{1:t}, u_{1:t}) implies a computational complexity that grows at every time step.
- Q. How can p(x_{0:t} | z_{1:t}, u_{1:t}) be estimated recursively using importance sampling?
- Sequential Importance Sampling: at time t, generate x_t^{[i]} according to \pi(x_t | x_{0:t-1}^{[i]}, z_{1:t}, u_{1:t}) (the proposal distribution) and merge it with the previous samples x_{0:t-1}^{[i]} drawn from \pi(x_{0:t-1} | z_{1:t-1}, u_{1:t-1}):
  x_{0:t}^{[i]} = \{x_{0:t-1}^{[i]}, x_t^{[i]}\} \sim \pi(x_{0:t} | z_{1:t}, u_{1:t})
  w(x_{0:t}^{[i]}) = w(x_{0:t-1}^{[i]})\, \frac{p(z_t | x_t^{[i]})\, p(x_t^{[i]} | x_{t-1}^{[i]}, u_t)}{\pi(x_t^{[i]} | x_{0:t-1}^{[i]}, z_{1:t}, u_{1:t})}
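A generic sketch of one SIS step with the weight update above; the 1-D models and the choice of the motion model as the proposal are illustrative assumptions (with that choice the motion terms cancel and the update reduces to the likelihood).

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_step(particles, weights, u_t, z_t,
             proposal_sample, proposal_pdf, motion_pdf, obs_likelihood):
    # Extend each trajectory with a sample from the proposal and update its
    # weight by p(z_t|x_t) p(x_t|x_{t-1},u_t) / pi(x_t|x_{0:t-1},z_{1:t},u_{1:t}).
    new_particles, new_weights = [], []
    for x_prev, w_prev in zip(particles, weights):
        x_t = proposal_sample(x_prev, z_t, u_t)
        w_t = (w_prev * obs_likelihood(z_t, x_t) * motion_pdf(x_t, x_prev, u_t)
               / proposal_pdf(x_t, x_prev, z_t, u_t))
        new_particles.append(x_t)
        new_weights.append(w_t)
    w = np.array(new_weights)
    return new_particles, w / w.sum()   # normalized weights

# Illustrative 1-D models; densities are unnormalized (constants cancel after
# weight normalization). Proposal = motion model, the common choice above.
motion_var, obs_var = 0.09, 0.2
motion_pdf = lambda x, xp, u: np.exp(-0.5 * (x - xp - u) ** 2 / motion_var)
proposal_sample = lambda xp, z, u: xp + u + rng.normal(0.0, np.sqrt(motion_var))
proposal_pdf = lambda x, xp, z, u: motion_pdf(x, xp, u)
obs_likelihood = lambda z, x: np.exp(-0.5 * (z - x) ** 2 / obs_var)

particles, weights = sis_step([0.0] * 100, np.full(100, 0.01), 1.0, 1.1,
                              proposal_sample, proposal_pdf, motion_pdf, obs_likelihood)
```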

8. Introduction / Particle Filter: Degeneracy and Resampling
- Degeneracy problem: after a few steps, all but one of the particles (samples) have a negligible normalized weight.
- Resampling: eliminate particles with low normalized weights and multiply those with high normalized weights, in a probabilistic manner.
- Resampling causes sample impoverishment.
- The effective sample size (ESS) is a measure of the degeneracy of SIS that can be used to avoid unnecessary resampling steps:
  \hat{N}_{eff} = \frac{1}{\sum_{i=1}^{N} \tilde{w}(x_{0:t}^{[i]})^2}
- Perform resampling only if \hat{N}_{eff} is lower than a fixed threshold N_T.
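A sketch of the ESS test together with one common resampling scheme (systematic resampling); the threshold and the deliberately skewed toy weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(weights):
    # N_eff_hat = 1 / sum_i (normalized weight_i)^2
    w = weights / np.sum(weights)
    return 1.0 / np.sum(w ** 2)

def systematic_resample(particles, weights):
    # Keep/duplicate high-weight particles and drop low-weight ones.
    n = len(particles)
    w = weights / np.sum(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(size=100)
weights = rng.random(100) ** 4              # deliberately skewed weights
N_T = 50                                    # illustrative threshold
if effective_sample_size(weights) < N_T:    # resample only when degenerate
    particles, weights = systematic_resample(particles, weights)
```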

9. Introduction / Particle Filter: Proposal Distribution
- Selecting an appropriate proposal distribution \pi(x_t | x_{0:t-1}, z_{1:t}, u_{1:t}) plays an important role in the success of the particle filter.
- The simplest and most common choice is the motion model (transition density) p(x_t | x_{t-1}, u_t).
- p(x_t | x_{t-1}, z_t, u_t) is known as the optimal proposal distribution; it limits the degeneracy of the particle filter by minimizing the conditional variance of the unnormalized weights.
- The importance weights for the optimal proposal distribution are
  w(x_{0:t}^{[i]}) = w(x_{0:t-1}^{[i]})\, p(z_t | x_{t-1}^{[i]})
- But in SLAM neither p(x_t | x_{t-1}, z_t, u_t) nor p(z_t | x_{t-1}^{[i]}) can be computed in closed form, so approximations are needed.

10. RBPF-SLAM: Table of Contents
1 Introduction: SLAM Problem, Bayesian Filtering, Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction, LRS, LIS-1, LIS-2
4 Results: Simulation Results, Experiments on Real Data
5 Conclusion
6 Future Work

11. RBPF-SLAM: Rao-Blackwellized Particle Filter in SLAM
- SLAM is a very high-dimensional problem; estimating p(s_{0:t}, \theta | z_{1:t}, u_{1:t}) directly with a particle filter can be very inefficient.
- We can factor the smoothing distribution into two parts:
  p(s_{0:t}, \theta | z_{1:t}, u_{1:t}) = p(s_{0:t} | z_{1:t}, u_{1:t}) \prod_{k=1}^{M} p(\theta_k | s_{0:t}, z_{1:t}, u_{1:t})
- The motion model is used as the proposal distribution in FastSLAM 1.0.
- FastSLAM 2.0 linearizes the observation equation and approximates the optimal proposal distribution with a Gaussian distribution.
- FastSLAM 2.0 outperforms FastSLAM 1.0.
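The factorization above is what makes FastSLAM-style filters tractable: each particle carries a robot trajectory hypothesis plus one small, independent EKF per landmark. A minimal sketch of that per-particle data structure follows; the field names and dimensions are illustrative assumptions for a planar robot with point landmarks.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LandmarkEKF:
    # One small EKF per landmark: p(theta_k | s_{0:t}, z_{1:t}) is Gaussian
    # once the trajectory s_{0:t} of this particle is fixed.
    mean: np.ndarray      # 2-vector, estimated landmark position
    cov: np.ndarray       # 2x2 covariance

@dataclass
class Particle:
    pose: np.ndarray                                 # robot pose s_t = (x, y, heading)
    weight: float = 1.0
    landmarks: dict = field(default_factory=dict)    # feature id -> LandmarkEKF

# The particle set approximates p(s_{0:t} | z_{1:t}, u_{1:t}); each particle's
# landmark EKFs represent the conditional map factors of the product above.
particles = [Particle(pose=np.zeros(3)) for _ in range(100)]
```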

12. RBPF-SLAM: FastSLAM 2.0
- Linearization error
- Gaussian approximation
- Motion models linear with respect to the noise variable v_t

13. Monte Carlo Approximation of the Optimal Proposal Distribution: Table of Contents
1 Introduction: SLAM Problem, Bayesian Filtering, Particle Filter
2 RBPF-SLAM
3 Monte Carlo Approximation of the Optimal Proposal Distribution: Introduction, LRS, LIS-1, LIS-2
4 Results: Simulation Results, Experiments on Real Data
5 Conclusion
6 Future Work

14. Monte Carlo Approximation of the Optimal Proposal Distribution / Introduction: FastSLAM 2.0 vs. the Proposed Algorithms
FastSLAM 2.0:
1 Approximate the optimal proposal distribution with a Gaussian using the linearized observation equation, and sample from this Gaussian.
2 Update the landmark EKFs for the observed features.
3 Compute the importance weights using the linearized observation equation.
4 Perform resampling.
Proposed algorithms:
1 Sample from the optimal proposal distribution using Monte Carlo sampling methods.
2 Update the landmark EKFs for the observed features.
3 Compute the importance weights using Monte Carlo integration.
4 Perform resampling only if it is necessary according to the ESS.

15. Monte Carlo Approximation of the Optimal Proposal Distribution / Introduction: MC Approximation of the Optimal Proposal Distribution
- MC sampling methods such as importance sampling (IS) and rejection sampling (RS) can be used to sample from the optimal proposal distribution p(s_t | s_{t-1}^{[i]}, z_t, u_t).
- Instead of sampling directly from the optimal proposal distribution:
  - We can generate M samples \{s_t^{[i,j]}\}_{j=1}^{M} according to another distribution q(s_t | s_{t-1}^{[i]}, z_t, u_t) and then weight those samples proportionally to p(s_t | s_{t-1}^{[i]}, z_t, u_t) / q(s_t | s_{t-1}^{[i]}, z_t, u_t): Local Importance Sampling (LIS).
  - We can generate M samples \{s_t^{[i,j]}\}_{j=1}^{M} according to another distribution q(s_t | s_{t-1}^{[i]}, z_t, u_t) and accept some of them as samples of the optimal proposal distribution according to the rejection-sampling criterion: Local Rejection Sampling (LRS).
- The instrumental distribution is chosen as q(s_t | s_{t-1}^{[i]}, u_t, z_t) = p(s_t | s_{t-1}^{[i]}, u_t), i.e., the motion model.

16. Monte Carlo Approximation of the Optimal Proposal Distribution / LRS: Local Rejection Sampling (LRS)
(Rejection-sampling illustration; graphics by M. Jordan.)
Rejection sampling:
1 Generate u^{[i]} \sim U[0,1] and s_t^{[i,j]} \sim p(s_t | s_{t-1}^{[i]}, u_t).
2 Accept s_t^{[i,j]} if
  u^{[i]} \le \frac{p(s_t^{[i,j]} | s_{t-1}^{[i]}, z_t, u_t)}{C \cdot p(s_t^{[i,j]} | s_{t-1}^{[i]}, u_t)} = \frac{p(z_t | s_t^{[i,j]})}{\max_j p(z_t | s_t^{[i,j]})}
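A sketch of one LRS step for a single parent particle: candidates come from the motion model and are accepted with probability p(z_t | s) / max_j p(z_t | s^{[i,j]}). The 1-D motion and observation models are illustrative assumptions, not the models from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def lrs_sample(s_prev, u_t, z_t, motion_sample, obs_likelihood, m=50):
    # Draw M candidates from the motion model, then accept each with
    # probability p(z_t | s) / max_j p(z_t | s_j)  (rejection-sampling test).
    candidates = np.array([motion_sample(s_prev, u_t) for _ in range(m)])
    lik = np.array([obs_likelihood(z_t, s) for s in candidates])
    accepted = candidates[rng.random(m) <= lik / lik.max()]
    return accepted, lik

# Illustrative 1-D models (assumptions):
motion_sample = lambda s, u: s + u + rng.normal(0.0, 0.3)
obs_likelihood = lambda z, s: np.exp(-0.5 * (z - s) ** 2 / 0.2)

accepted, lik = lrs_sample(s_prev=0.0, u_t=1.0, z_t=1.1,
                           motion_sample=motion_sample, obs_likelihood=obs_likelihood)
# lik.mean() is the MC estimate of p(z_t | s_prev) used for the importance weight.
print(f"accepted {len(accepted)} of 50 candidates; p(z_t | s_prev) ~= {lik.mean():.3f}")
```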

17. Monte Carlo Approximation of the Optimal Proposal Distribution / LRS: LRS Cont'd.
- Now we have to compute the importance weights for the set of accepted samples, which requires p(z_t | s_{t-1}^{[i]}).
- MC integration:
  p(z_t | s_{t-1}^{[i]}) = \int p(z_t | s_t)\, p(s_t | s_{t-1}^{[i]}, u_t)\, ds_t \approx \frac{1}{M} \sum_{j=1}^{M} p(z_t | s_t^{[i,j]})
- p(z_t | s_t^{[i,j]}) can be approximated by a Gaussian.
- MC integration requires a large number of local particles M.

18. Monte Carlo Approximation of the Optimal Proposal Distribution / LIS-1: Local Importance Sampling (LIS-1)
1 As in LRS, generate M random samples \{s_t^{[i,j]}\}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t).
2 Local IS weights: w_t^{(LIS)}(s_t^{[i,j]}) = p(z_t | s_t^{[i,j]}) \propto \frac{p(s_t^{[i,j]} | s_{t-1}^{[i]}, z_t, u_t)}{p(s_t^{[i,j]} | s_{t-1}^{[i]}, u_t)}
3 Local resampling among \{s_t^{[i,j]}\}_{j=1}^{M} using w_t^{(LIS)}(s_t^{[i,j]}).
4 Main weights: w(s_t^{*[i,j]}) = w(s_{t-1}^{[i]})\, p(z_t | s_{t-1}^{[i]})
5 p(z_t | s_{t-1}^{[i]}) \approx \frac{1}{M} \sum_{j=1}^{M} p(z_t | s_t^{[i,j]}) (MC integration)
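A sketch of one LIS-1 step for a single parent particle, following steps 1-5 above; the 1-D models are the same illustrative assumptions used in the LRS sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def lis1_step(s_prev, w_prev, u_t, z_t, motion_sample, obs_likelihood, m=50):
    # 1. Draw M local samples from the motion model p(s_t | s_prev, u_t).
    local = np.array([motion_sample(s_prev, u_t) for _ in range(m)])
    # 2. Local IS weights are proportional to the observation likelihood.
    lw = np.array([obs_likelihood(z_t, s) for s in local])
    # 3. Local resampling: draw the new pose according to the local weights.
    s_new = local[rng.choice(m, p=lw / lw.sum())]
    # 4./5. Main weight: w_prev * p(z_t | s_prev), with the predictive
    # likelihood estimated by MC integration over the local samples.
    w_new = w_prev * lw.mean()
    return s_new, w_new

# Illustrative 1-D models (assumptions):
motion_sample = lambda s, u: s + u + rng.normal(0.0, 0.3)
obs_likelihood = lambda z, s: np.exp(-0.5 * (z - s) ** 2 / 0.2)

print(lis1_step(s_prev=0.0, w_prev=1.0, u_t=1.0, z_t=1.1,
                motion_sample=motion_sample, obs_likelihood=obs_likelihood))
```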

19. Monte Carlo Approximation of the Optimal Proposal Distribution / LIS-2: Local Importance Sampling (LIS-2)
- As in LRS and LIS-1, generate M random samples \{s_t^{[i,j]}\}_{j=1}^{M} according to p(s_t | s_{t-1}^{[i]}, u_t).
- Total weights of the generated particles \{s_t^{[i,j]}\}_{(i,j)=(1,1)}^{(N,M)} = local weights \times SIS weights.
- Instead of eliminating the local weights through local resamplings (LIS-1), the total weights are computed in LIS-2.
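For contrast with LIS-1, a sketch of the corresponding LIS-2 step: all M local samples are kept, and each receives the product of its local weight and the SIS weight instead of going through a local resampling. The 1-D models are the same illustrative assumptions as above.

```python
import numpy as np

rng = np.random.default_rng(0)

def lis2_step(s_prev, w_prev, u_t, z_t, motion_sample, obs_likelihood, m=50):
    # Draw M local samples from the motion model and keep all of them.
    local = np.array([motion_sample(s_prev, u_t) for _ in range(m)])
    lw = np.array([obs_likelihood(z_t, s) for s in local])   # local weights
    sis_w = w_prev * lw.mean()                               # SIS weight via MC integration
    total = (lw / lw.sum()) * sis_w                          # total = local x SIS weights
    return local, total

# Illustrative 1-D models (assumptions):
motion_sample = lambda s, u: s + u + rng.normal(0.0, 0.3)
obs_likelihood = lambda z, s: np.exp(-0.5 * (z - s) ** 2 / 0.2)

samples, total_weights = lis2_step(s_prev=0.0, w_prev=1.0, u_t=1.0, z_t=1.1,
                                   motion_sample=motion_sample,
                                   obs_likelihood=obs_likelihood)
```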
