


  1. ROBOTICS 01PEEQW Basilio Bona DAUIN – Politecnico di Torino

  2. Probabilistic Fundamentals in Robotics Gaussian Filters

  3. Course Outline
     - Basic mathematical framework
     - Probabilistic models of mobile robots
     - Mobile robot localization problem
     - Robotic mapping
     - Probabilistic planning and control
     - Reference textbook: Thrun, Burgard, Fox, “Probabilistic Robotics”, MIT Press, 2006, http://www.probabilistic-robotics.org/
     Basilio Bona 3

  4. Basic mathematical framework
     - Recursive state estimation
       - Basic concepts in probability
       - Robot environment
       - Bayes filters
     - Gaussian filters (parametric filters)
       - Kalman filter
       - Extended Kalman filter
       - Unscented Kalman filter
       - Information filter
     - Nonparametric filters
       - Histogram filter
       - Particle filter

  5. Introduction
     - Gaussian filters are different implementations of Bayes filters for continuous spaces, with specific assumptions on the probability distributions
     - Beliefs are represented by multivariate normal distributions

  6. Multivariate Gaussian distribution (mean vector, covariance matrix)
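The density on this slide is the multivariate normal; in the notation of Thrun et al. it reads:

```latex
p(x) = \det(2\pi\Sigma)^{-1/2}
       \exp\!\left(-\tfrac{1}{2}\,(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right)
```

where µ is the mean vector and Σ the (symmetric, positive definite) covariance matrix.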

  7. Examples: bi-dimensional Gaussian with conditional probabilities; mixture of Gaussians

  8. Covariance matrix

  9. Kalman filter (1)
     - The Kalman filter (KF) [Swerling: 1958, Kalman: 1960] applies to linear Gaussian systems
     - The KF computes the belief for continuous states governed by linear dynamic state equations
     - Beliefs are expressed by normal distributions
     - The KF is not applicable to discrete or hybrid state-space systems

  10. Kalman filter (2)

  11. Kalman filter (3)

  12. Kalman filter (4)

  13. Kalman filter algorithm (1): prediction; innovation (residual) covariance; Kalman gain; update
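The four labelled steps can be sketched in a few lines of NumPy. The matrix names A, B, C and the noise covariances R (process) and Q (measurement) follow the textbook's linear model; the function name itself is only illustrative:

```python
import numpy as np

def kalman_filter_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + update cycle of the linear Kalman filter.

    mu, Sigma : previous belief N(mu, Sigma)
    u, z      : control input and measurement at time t
    A, B      : state transition and control matrices
    C         : measurement matrix
    R, Q      : process and measurement noise covariances
    """
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Innovation (residual) covariance
    S = C @ Sigma_bar @ C.T + Q
    # Kalman gain
    K = Sigma_bar @ C.T @ np.linalg.inv(S)
    # Update, driven by the measurement residual z - C mu_bar
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

Calling this in a loop over (u, z) pairs implements the full recursive filter.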

  14. Block diagram [figure: KF block diagram in which the matrices A, B, C, D and the Kalman gain K_t map the previous mean µ_{t-1}, the input u_t, and the measurement residual z_t − ẑ_t to the updated mean µ_t]

  15. Kalman filter algorithm (2) [figure distinguishes the visible and hidden variables]

  16. Kalman filter example [figure: initial state, followed by alternating measurement, update, and prediction steps]

  17. From Kalman filter to extended Kalman filter
     - The Kalman filter is based on linearity assumptions
     - Gaussian random variables are expressed by the means and covariance matrices of normal distributions
     - Gaussian distributions are transformed into Gaussian distributions
     - The Kalman filter is optimal
     - The Kalman filter is efficient

  18. Linear transformation of Gaussians

  19. Extended Kalman Filter (EKF)
     - When the linearity assumptions do not hold (as in robot motion or orientation models), a closed-form solution for the predicted belief does not exist: the state and measurement equations are nonlinear
     - The Extended Kalman Filter (EKF) approximates the nonlinear transformations with linear ones
     - Linearization is performed around the most likely value, i.e., the mean value

  20. EKF Example [figure: Monte Carlo generated distribution, transformed mean value, approximating mean value, approximating Gaussian]. The approximating Gaussian uses the mean and covariance of the Monte Carlo generated distribution.

  21. EKF Example

  22. EKF Example [figure compares the EKF Gaussian with the approximating Gaussian: the normal distribution built using the mean and covariance of the true nonlinear distribution]

  23. EKF linearization: Taylor expansion. The linearization depends only on the mean.
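The Taylor expansion referred to here, in the book's notation, linearizes the motion model g around the previous mean:

```latex
g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t\,(x_{t-1}-\mu_{t-1}),
\qquad
G_t := \left.\frac{\partial g(u_t, x)}{\partial x}\right|_{x=\mu_{t-1}}
```

and analogously the measurement model around the predicted mean, h(x_t) ≈ h(µ̄_t) + H_t (x_t − µ̄_t). The Jacobians G_t and H_t are evaluated only at the mean, which is why the linearization "depends only on the mean".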

  24. EKF algorithm
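A sketch of one EKF cycle, assuming user-supplied nonlinear motion and measurement functions g and h together with Jacobian callbacks G and H (these callback names are illustrative, not from the slides). The structure mirrors the KF, with A and C replaced by the Jacobians:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF prediction + update cycle.

    g(x, u), h(x)   : nonlinear motion and measurement functions
    G(x, u), H(x)   : callbacks returning their Jacobians at x
    R, Q            : process and measurement noise covariances
    """
    # Prediction: propagate the mean through the nonlinear model,
    # the covariance through its linearization
    mu_bar = g(mu, u)
    Gt = G(mu, u)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Update: linearize the measurement model around the predicted mean
    Ht = H(mu_bar)
    S = Ht @ Sigma_bar @ Ht.T + Q
    K = Sigma_bar @ Ht.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new
```

With linear g and h this reduces exactly to the Kalman filter step.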

  25. KF vs EKF

  26. Features
     - The EKF is a very popular tool for state estimation in robotics
     - It has the same time complexity as the KF
     - It is robust and simple
     - Limitation: state and measurement functions are rarely linear
     - The goodness of the linear approximation depends on the degree of uncertainty and the degree of nonlinearity
     - When using the EKF, the uncertainty must be kept as small as possible

  27. Uncertainty [figure contrasts more uncertain and less uncertain distributions]

  28. Uncertainty [figure: more uncertain vs. less uncertain]

  29. Nonlinearity [figure contrasts more nonlinear and more linear transformations]

  30. Nonlinearity [figure: more nonlinear vs. more linear]

  31. Example: EKF Localization within a sensor infrastructure. The mobile robot can acquire odometric measurements and distance information from fixed sensors deployed at known positions inside the environment. [figure: true position of the mobile robot and the KF estimate at t=0]

  32. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry t=0

  33. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction t=0 Luca Carlone – Politecnico di Torino

  34. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. t=0

  35. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update t=0

  36. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update STEP 2: - Acquire odometry t=0

  37. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update STEP 2: - Acquire odometry - Filter Prediction t=0

  38. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update STEP 2: - Acquire odometry - Filter Prediction - Acquire meas. t=0

  39. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update STEP 2: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update t=0

  40. Example: EKF Localization within a sensor infrastructure STEP 1: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update STEP 2: - Acquire odometry - Filter Prediction - Acquire meas. - Filter Update t=0 . . .

  41. Unscented Kalman Filter (UKF)
     - The UKF performs a stochastic linearization based on a weighted statistical linear regression
     - A deterministic sampling technique (the unscented transform) is used to pick a minimal set of sample points (sigma points) around the mean value of the normal pdf
     - The sigma points are propagated through the nonlinear functions, then used to compute the mean and covariance of the transformed distribution
     - This approach removes the need to explicitly compute Jacobians, which for complex functions can be difficult to calculate, and produces a more accurate estimate of the posterior distribution
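The sampling step above can be sketched as follows, using the standard sigma-point construction with scaling parameters α, β, κ (function name and defaults are illustrative). A full UKF would additionally add the process or measurement noise covariance to the result; only the unscented transform itself is shown:

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate the Gaussian N(mu, Sigma) through a nonlinear f
    using 2n+1 deterministically chosen sigma points."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # Columns of the Cholesky factor give the symmetric deviations
    L = np.linalg.cholesky((n + lam) * Sigma)
    points = [mu] + [mu + L[:, i] for i in range(n)] \
                  + [mu - L[:, i] for i in range(n)]
    # Weights for the mean (wm) and the covariance (wc)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Propagate each sigma point and recombine
    Y = np.array([f(p) for p in points])
    mu_y = wm @ Y
    diff = Y - mu_y
    Sigma_y = (wc[:, None] * diff).T @ diff
    return mu_y, Sigma_y
```

For a linear f the transform is exact, which makes it easy to sanity-check.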

  42. UKF

  43. UKF

  44. UKF

  45. UKF Algorithm – part a)

  46. UKF Algorithm – part b): cross covariance

  47. EKF vs UKF

  48. EKF vs UKF

  49. KF – EKF – UKF [figure compares the three filters side by side]

  50. Information filters. The belief is represented by Gaussians, in one of two dual parameterizations:
     - Moments parameterization (KF – EKF – UKF): mean and covariance
     - Canonical parameterization (IF – EIF): information vector and information matrix
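The duality between the two parameterizations is the change of variables:

```latex
\Omega = \Sigma^{-1}, \qquad \xi = \Sigma^{-1}\mu
\qquad\Longleftrightarrow\qquad
\Sigma = \Omega^{-1}, \qquad \mu = \Omega^{-1}\xi
```

where Ω is the information matrix and ξ the information vector.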

  51. Multivariate normal distribution

  52. Mahalanobis distance [figure contrasts points at the same Euclidean distance from the mean with points at the same Mahalanobis distance]
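The distance in question is d(x) = √((x−µ)ᵀ Σ⁻¹ (x−µ)). A minimal sketch (function name and the example covariance are illustrative) showing how two points at the same Euclidean distance from the mean can have different Mahalanobis distances:

```python
import numpy as np

def mahalanobis(x, mu, Sigma):
    """Mahalanobis distance of x from the Gaussian N(mu, Sigma)."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(Sigma) @ d))

mu = np.zeros(2)
Sigma = np.diag([4.0, 1.0])  # distribution elongated along the x-axis
# Both points are at Euclidean distance 2 from the mean:
d_along = mahalanobis(np.array([2.0, 0.0]), mu, Sigma)   # 2/sqrt(4) = 1.0
d_across = mahalanobis(np.array([0.0, 2.0]), mu, Sigma)  # 2/sqrt(1) = 2.0
```

The point lying along the high-variance axis is "closer" in the Mahalanobis sense, exactly the effect the slide's figure illustrates.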

  53. IF algorithm
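One IF cycle in canonical form can be sketched as follows, assuming the same linear model matrices as in the KF slides (the function name is illustrative). Note how the prediction step needs matrix inversions while the measurement update is purely additive:

```python
import numpy as np

def information_filter_step(xi, Omega, u, z, A, B, C, R, Q):
    """One IF cycle in canonical parameters:
    Omega = Sigma^-1 (information matrix), xi = Sigma^-1 mu
    (information vector); A, B, C, R, Q as in the Kalman filter."""
    # Prediction: requires inverting Omega and the predicted covariance
    Sigma = np.linalg.inv(Omega)
    Omega_bar = np.linalg.inv(A @ Sigma @ A.T + R)
    xi_bar = Omega_bar @ (A @ Sigma @ xi + B @ u)
    # Measurement update: additive in information form
    Qi = np.linalg.inv(Q)
    Omega_new = Omega_bar + C.T @ Qi @ C
    xi_new = xi_bar + C.T @ Qi @ z
    return xi_new, Omega_new
```

Converting the result back via µ = Ω⁻¹ξ reproduces the moments-form KF estimate, which is the duality the following comparison slide refers to.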

  54. IF vs KF (duality)
     - IF: the prediction step requires two matrix inversions; the measurement update is additive
     - KF: the prediction step is additive; the measurement update requires a matrix inversion

  55. Extended information filter – EIF
     - Similar to the EKF; applies when the state and measurement equations are nonlinear
     - The Jacobians G and H replace the A, B, and C matrices; the state estimate is recovered from the canonical parameters

  56. Practical considerations
     - IF advantages over the KF:
       - Simpler global uncertainty representation: set Ω = 0
       - Numerically more stable (in many, but not all, robotics applications)
       - Integrates information in a simpler way
       - Naturally fits multi-robot problems (decentralized data integration => Bayes rule => logarithmic form => addition of terms => arbitrary order)
     - IF limitations:
       - Recovering the state estimate requires a matrix inversion
       - Other matrix inversions are also necessary (not required by the EKF)
       - Computationally inferior to the EKF for high-dimensional state spaces
