Introduction to Mobile Robotics: Bayes Filter – Kalman Filter – PowerPoint PPT Presentation



SLIDE 1

Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras

Introduction to Mobile Robotics: Bayes Filter – Kalman Filter

SLIDE 2

Bayes Filter Reminder

1. Algorithm Bayes_filter( Bel(x), d ):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η^-1 Bel'(x)
9.   Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12. Return Bel'(x)
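The algorithm above can be sketched for a finite state space in a few lines of Python. The three-state world, the sensor likelihoods, and the motion model below are made-up illustrations, not part of the original slides:

```python
import numpy as np

def bayes_filter_measurement(bel, p_z_given_x):
    """Correction (steps 4-8): weight the belief by the likelihood of z, then normalize."""
    bel_new = p_z_given_x * bel
    return bel_new / bel_new.sum()   # division by eta

def bayes_filter_action(bel, p_x_given_u_xprev):
    """Prediction (steps 10-11): sum over previous states weighted by the motion model."""
    return p_x_given_u_xprev @ bel

# Toy 3-state example with illustrative numbers
bel = np.array([1/3, 1/3, 1/3])          # uniform prior Bel(x)
likelihood = np.array([0.9, 0.5, 0.1])   # p(z | x) for some observed z
bel = bayes_filter_measurement(bel, likelihood)

motion = np.array([[0.8, 0.1, 0.1],      # motion[i, j] = p(x_i | u, x_j)
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8]])
bel = bayes_filter_action(bel, motion)
print(bel)   # belief still sums to 1
```

Note that the measurement step renormalizes (η) while the action step needs no normalization, since the motion model columns already sum to 1.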
SLIDE 3

Kalman Filter

  • Bayes filter with Gaussians
  • Developed in the late 1950s
  • Most relevant Bayes filter variant in practice
  • Applications range from economics, weather forecasting, and satellite navigation to robotics and many more
  • The Kalman filter "algorithm" is essentially a sequence of matrix multiplications!

SLIDE 4

Gaussians

Univariate case, x ~ N(µ, σ^2):
  p(x) = 1 / (√(2π) σ) · exp( -(x - µ)^2 / (2σ^2) )

Multivariate case, x ~ N(µ, Σ):
  p(x) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp( -1/2 (x - µ)^T Σ^-1 (x - µ) )

SLIDE 5

Gaussians

[Figure: Gaussian densities in 1D, 2D, and 3D, with an accompanying video]

SLIDE 6

Properties of Gaussians

  • Linear transformation: X ~ N(µ, σ^2), Y = aX + b ⇒ Y ~ N(aµ + b, a^2 σ^2)
  • Product: X_1 ~ N(µ_1, σ_1^2), X_2 ~ N(µ_2, σ_2^2) ⇒ p(X_1) · p(X_2) ∝ N( (σ_2^2 µ_1 + σ_1^2 µ_2) / (σ_1^2 + σ_2^2), σ_1^2 σ_2^2 / (σ_1^2 + σ_2^2) )

SLIDE 7

Multivariate Gaussians

  • Linear transformation: X ~ N(µ, Σ), Y = AX + B ⇒ Y ~ N(Aµ + B, A Σ A^T)
  • Product: X_1 ~ N(µ_1, Σ_1), X_2 ~ N(µ_2, Σ_2) ⇒ p(X_1) · p(X_2) ∝ N( Σ_2 (Σ_1 + Σ_2)^-1 µ_1 + Σ_1 (Σ_1 + Σ_2)^-1 µ_2, Σ_1 (Σ_1 + Σ_2)^-1 Σ_2 )  (where "division" denotes matrix inversion)
  • We stay Gaussian as long as we start with Gaussians and perform only linear transformations
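The linear-transformation property can be checked numerically by sampling. The mean vector, covariance, and transform below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([0.5, 0.5])

# Sample X ~ N(mu, Sigma) and push the samples through Y = A X + b
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T + b

# Theory says Y ~ N(A mu + b, A Sigma A^T); the sample statistics should agree
print(Y.mean(axis=0), A @ mu + b)
print(np.cov(Y.T), A @ Sigma @ A.T)
```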

SLIDE 8

Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation
  x_t = A_t x_{t-1} + B_t u_t + ε_t
with a measurement
  z_t = C_t x_t + δ_t

SLIDE 9

Components of a Kalman Filter

  • A_t: matrix (n×n) that describes how the state evolves from t-1 to t without controls or noise.
  • B_t: matrix (n×l) that describes how the control u_t changes the state from t-1 to t.
  • C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.
  • ε_t, δ_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances Q_t and R_t, respectively.

SLIDE 10

Bayes Filter Reminder

  • Prediction: bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
  • Correction: bel(x_t) = η p(z_t | x_t) bel̄(x_t)

SLIDE 11

Kalman Filter Updates in 1D

SLIDE 12

Kalman Filter Updates in 1D

bel(x_t):
  µ_t = µ̄_t + K_t (z_t - C_t µ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t
  with K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1

How to get the blue one? → Kalman correction step

SLIDE 13

Kalman Filter Updates in 1D

bel̄(x_t):
  µ̄_t = A_t µ_{t-1} + B_t u_t
  Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t

How to get the magenta one? → State prediction step

SLIDE 14

Kalman Filter Updates

SLIDE 15

Linear Gaussian Systems: Initialization

  • Initial belief is normally distributed: bel(x_0) = N(x_0; µ_0, Σ_0)
SLIDE 16

Linear Gaussian Systems: Dynamics

  • Dynamics are a linear function of state and control plus additive noise:
      p(x_t | u_t, x_{t-1}) = N(x_t; A_t x_{t-1} + B_t u_t, Q_t)

bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) · bel(x_{t-1}) dx_{t-1}
  with p(x_t | u_t, x_{t-1}) ~ N(x_t; A_t x_{t-1} + B_t u_t, Q_t)
  and bel(x_{t-1}) ~ N(x_{t-1}; µ_{t-1}, Σ_{t-1})

SLIDE 17

Linear Gaussian Systems: Dynamics

bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) · bel(x_{t-1}) dx_{t-1}
  with p(x_t | u_t, x_{t-1}) ~ N(x_t; A_t x_{t-1} + B_t u_t, Q_t)
  and bel(x_{t-1}) ~ N(x_{t-1}; µ_{t-1}, Σ_{t-1})

⇓

bel̄(x_t) = η ∫ exp( -1/2 (x_t - A_t x_{t-1} - B_t u_t)^T Q_t^-1 (x_t - A_t x_{t-1} - B_t u_t) )
             · exp( -1/2 (x_{t-1} - µ_{t-1})^T Σ_{t-1}^-1 (x_{t-1} - µ_{t-1}) ) dx_{t-1}

⇓

bel̄(x_t):
  µ̄_t = A_t µ_{t-1} + B_t u_t
  Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t

SLIDE 18

Linear Gaussian Systems: Observations

  • Observations are a linear function of state plus additive noise:
      p(z_t | x_t) = N(z_t; C_t x_t, R_t)

bel(x_t) = η p(z_t | x_t) · bel̄(x_t)
  with p(z_t | x_t) ~ N(z_t; C_t x_t, R_t)
  and bel̄(x_t) ~ N(x_t; µ̄_t, Σ̄_t)

SLIDE 19

Linear Gaussian Systems: Observations

bel(x_t) = η p(z_t | x_t) · bel̄(x_t)
  with p(z_t | x_t) ~ N(z_t; C_t x_t, R_t)
  and bel̄(x_t) ~ N(x_t; µ̄_t, Σ̄_t)

⇓

bel(x_t) = η exp( -1/2 (z_t - C_t x_t)^T R_t^-1 (z_t - C_t x_t) ) · exp( -1/2 (x_t - µ̄_t)^T Σ̄_t^-1 (x_t - µ̄_t) )

⇓

bel(x_t):
  µ_t = µ̄_t + K_t (z_t - C_t µ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t
  with K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1

SLIDE 20

Kalman Filter Algorithm

1. Algorithm Kalman_filter( µ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   µ̄_t = A_t µ_{t-1} + B_t u_t
4.   Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t
5. Correction:
6.   K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1
7.   µ_t = µ̄_t + K_t (z_t - C_t µ̄_t)
8.   Σ_t = (I - K_t C_t) Σ̄_t
9. Return µ_t, Σ_t
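As the slides note, the algorithm really is just matrix arithmetic; it maps directly onto NumPy. The function below is a minimal sketch of one predict-correct cycle, and the 1D constant-velocity model at the bottom is a made-up illustration:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One Kalman filter cycle (lines 2-9 of the algorithm above)."""
    # Prediction (lines 3-4)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correction (lines 6-8)
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Hypothetical constant-velocity model: state = [position, velocity]
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])          # control = commanded acceleration
C = np.array([[1.0, 0.0]])           # we observe position only
Q = 0.01 * np.eye(2)                 # process noise covariance
R = np.array([[0.25]])               # measurement noise covariance

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = kalman_filter(mu, Sigma, u=np.array([1.0]), z=np.array([0.2]),
                          A=A, B=B, C=C, Q=Q, R=R)
print(mu, Sigma)
```

After one cycle the position variance drops well below its prior value, since the (noisy) position measurement is informative about that component of the state.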

SLIDE 21

Kalman Filter Algorithm

SLIDE 22

Kalman Filter Algorithm

  • Prediction
  • Observation
  • Matching
  • Correction
SLIDE 23

The Prediction-Correction-Cycle

Prediction:
  bel̄(x_t):  µ̄_t = A_t µ_{t-1} + B_t u_t,  Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t
  (1D:  µ̄_t = a_t µ_{t-1} + b_t u_t,  σ̄_t^2 = a_t^2 σ_{t-1}^2 + σ_{act,t}^2)

SLIDE 24

The Prediction-Correction-Cycle

Correction:
  bel(x_t):  µ_t = µ̄_t + K_t (z_t - C_t µ̄_t),  Σ_t = (I - K_t C_t) Σ̄_t,  with K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1
  (1D:  µ_t = µ̄_t + K_t (z_t - µ̄_t),  σ_t^2 = (1 - K_t) σ̄_t^2,  with K_t = σ̄_t^2 / (σ̄_t^2 + σ_{obs,t}^2))

SLIDE 25

The Prediction-Correction-Cycle

Prediction:
  bel̄(x_t):  µ̄_t = A_t µ_{t-1} + B_t u_t,  Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t
  (1D:  µ̄_t = a_t µ_{t-1} + b_t u_t,  σ̄_t^2 = a_t^2 σ_{t-1}^2 + σ_{act,t}^2)

Correction:
  bel(x_t):  µ_t = µ̄_t + K_t (z_t - C_t µ̄_t),  Σ_t = (I - K_t C_t) Σ̄_t,  with K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1
  (1D:  µ_t = µ̄_t + K_t (z_t - µ̄_t),  σ_t^2 = (1 - K_t) σ̄_t^2,  with K_t = σ̄_t^2 / (σ̄_t^2 + σ_{obs,t}^2))
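In 1D the cycle reduces to scalar arithmetic. The sketch below runs the scalar equations above for a static state (a_t = 1, b_t = 0); the measurement sequence and noise variances are made-up illustrations:

```python
# 1D Kalman filter cycle for a static state (a_t = 1, b_t = 0)
mu, var = 0.0, 10.0            # vague initial belief
var_act, var_obs = 0.1, 1.0    # assumed sigma^2_{act,t} and sigma^2_{obs,t}

for z in [1.2, 0.8, 1.1, 0.9, 1.0]:   # noisy measurements of a value near 1.0
    # Prediction: mu_bar = a*mu + b*u = mu, var_bar = a^2*var + var_act
    mu_bar, var_bar = mu, var + var_act
    # Correction: gain K_t = var_bar / (var_bar + var_obs)
    K = var_bar / (var_bar + var_obs)
    mu = mu_bar + K * (z - mu_bar)
    var = (1 - K) * var_bar
print(mu, var)
```

The belief mean converges toward the true value while the variance shrinks with every correction, exactly the behavior the cycle figures on these slides illustrate.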

SLIDE 26

Kalman Filter Summary

  • Highly efficient: polynomial in the measurement dimensionality k and the state dimensionality n: O(k^2.376 + n^2)

  • Optimal for linear Gaussian systems!
  • Most robotics systems are nonlinear!
SLIDE 27

Nonlinear Dynamic Systems

  • Most realistic robotic problems involve nonlinear functions

SLIDE 28

Linearity Assumption Revisited

SLIDE 29

Non-linear Function

SLIDE 30

EKF Linearization (1)

SLIDE 31

EKF Linearization (2)

SLIDE 32

EKF Linearization (3)

SLIDE 33
EKF Linearization: First Order Taylor Series Expansion

  • Prediction:
      g(u_t, x_{t-1}) ≈ g(u_t, µ_{t-1}) + G_t (x_{t-1} - µ_{t-1}),  with G_t := ∂g(u_t, µ_{t-1}) / ∂x_{t-1}
  • Correction:
      h(x_t) ≈ h(µ̄_t) + H_t (x_t - µ̄_t),  with H_t := ∂h(µ̄_t) / ∂x_t
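The quality of a first-order Taylor approximation can be probed numerically. The measurement model h below (range to the origin) and the evaluation point are hypothetical examples, not from the slides:

```python
import numpy as np

# Hypothetical nonlinear measurement model: range to the origin
h = lambda x: np.hypot(x[0], x[1])
mu = np.array([2.0, 1.0])
H = mu / np.hypot(mu[0], mu[1])   # analytic Jacobian (gradient) of h at mu

# First-order Taylor expansion: h(x) ~= h(mu) + H (x - mu)
errs = []
for step in (0.1, 0.01):
    x = mu + step
    errs.append(abs(h(x) - (h(mu) + H @ (x - mu))))
print(errs)   # linearization error shrinks roughly quadratically with the step
```

This quadratic decay of the error is why the EKF works well when the state stays close to the linearization point, and degrades when the uncertainty is large relative to the curvature of g and h.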

SLIDE 34

EKF Algorithm

1. Algorithm Extended_Kalman_filter( µ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   µ̄_t = g(u_t, µ_{t-1})                       (KF: µ̄_t = A_t µ_{t-1} + B_t u_t)
4.   Σ̄_t = G_t Σ_{t-1} G_t^T + Q_t               (KF: Σ̄_t = A_t Σ_{t-1} A_t^T + Q_t)
5. Correction:
6.   K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + R_t)^-1   (KF: K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^-1)
7.   µ_t = µ̄_t + K_t (z_t - h(µ̄_t))             (KF: µ_t = µ̄_t + K_t (z_t - C_t µ̄_t))
8.   Σ_t = (I - K_t H_t) Σ̄_t                     (KF: Σ_t = (I - K_t C_t) Σ̄_t)
9. Return µ_t, Σ_t
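An EKF step looks like the KF step with A_t and C_t replaced by the Jacobians G_t and H_t of the nonlinear models. In the sketch below, g, h, and their Jacobians are hypothetical choices for illustration (unit motion along x, range-to-origin measurement), not models from the slides:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, Q, R):
    """One EKF cycle: the Jacobians G, H play the roles of A and C."""
    # Prediction (lines 3-4)
    mu_bar = g(u, mu)
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + Q
    # Correction (lines 6-8), linearized at the predicted mean
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + R)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new

# Illustrative 2D example: move one unit along x, measure range to the origin
g = lambda u, x: x + np.array([u[0], 0.0])                      # motion model g(u_t, x_{t-1})
G = lambda u, x: np.eye(2)                                      # Jacobian of g w.r.t. x
h = lambda x: np.array([np.hypot(x[0], x[1])])                  # nonlinear measurement model
H = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])   # Jacobian of h

mu, Sigma = np.array([1.0, 1.0]), 0.5 * np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
mu, Sigma = ekf_step(mu, Sigma, u=np.array([1.0]), z=np.array([2.3]),
                     g=g, G=G, h=h, H=H, Q=Q, R=R)
print(mu, Sigma)
```

Note that h is evaluated exactly in the innovation z_t - h(µ̄_t); only the covariance and gain computations use the linearization H_t.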