Introduction to Mobile Robotics: Bayes Filter / Extended Kalman Filter
Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras

Bayes Filter Reminder

bel(x_t) = η p(z_t | x_t) ∫ p(x_t | u_t, x_{t−1}) bel(x_{t−1}) dx_{t−1}
§ Prediction: bel̄(x_t) = ∫ p(x_t | u_t, x_{t−1}) bel(x_{t−1}) dx_{t−1}
§ Correction: bel(x_t) = η p(z_t | x_t) bel̄(x_t)
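The two steps above can be sketched for a discrete (histogram) state space, where the integral becomes a sum over states. This is an illustrative sketch only; the function names and the two-state door-sensing numbers are made up, not from the slides:

```python
import numpy as np

def predict(belief, transition):
    # bel_bar(x_t) = sum over x_{t-1} of p(x_t | u_t, x_{t-1}) * bel(x_{t-1})
    return transition @ belief

def correct(belief_bar, likelihood):
    # bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t); eta normalizes to 1
    unnormalized = likelihood * belief_bar
    return unnormalized / unnormalized.sum()

# Hypothetical two-state example: robot is in front of a door, or not.
bel = np.array([0.5, 0.5])                  # uniform prior
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])                  # p(x_t | u_t, x_{t-1}); columns sum to 1
bel_bar = predict(bel, T)
bel = correct(bel_bar, np.array([0.6, 0.3]))  # p(z_t | x_t) for the observed z_t
```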
Linear Gaussian motion and observation model:

x_t = A_t x_{t−1} + B_t u_t + ε_t
z_t = C_t x_t + δ_t
§ A_t: matrix (n×n) that describes how the state evolves from t−1 to t without controls or noise.
§ B_t: matrix (n×l) that describes how the control u_t changes the state from t−1 to t.
§ C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.
§ ε_t, δ_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances Q_t and R_t, respectively.
[Figure: 1D Gaussians for the prediction, the measurement, and the correction]
The corrected estimate is a weighted mean of prediction and measurement.
1. Algorithm Kalman_filter(µ_{t−1}, Σ_{t−1}, u_t, z_t):
2. Prediction:
3.    µ̄_t = A_t µ_{t−1} + B_t u_t
4.    Σ̄_t = A_t Σ_{t−1} A_t^T + Q_t
5. Correction:
6.    K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^{−1}
7.    µ_t = µ̄_t + K_t (z_t − C_t µ̄_t)
8.    Σ_t = (I − K_t C_t) Σ̄_t
9. Return µ_t, Σ_t
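The algorithm above translates almost line for line into NumPy. A minimal sketch (matrix names follow the slides; the 1D example values are illustrative):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One step of the Kalman filter; returns (mu_t, Sigma_t)."""
    # Prediction (lines 3-4)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correction (lines 6-8)
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# 1D constant-position example (all matrices 1x1); values are made up.
mu, Sigma = np.array([0.0]), np.array([[1.0]])
A = B = C = np.eye(1)
Q, R = 0.1 * np.eye(1), 0.5 * np.eye(1)
mu, Sigma = kalman_filter(mu, Sigma, np.array([0.0]), np.array([1.0]), A, B, C, Q, R)
```

Note how the gain K trades off the predicted covariance Σ̄_t against the measurement noise R_t, exactly the weighted mean described above.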
§ Most realistic robotic problems involve nonlinear motion and observation models:

x_t = g(u_t, x_{t−1}) + ε_t
z_t = h(x_t) + δ_t

§ Mapping a Gaussian belief through the nonlinear functions g and h yields a belief that is non-Gaussian!
Linearization by first-order Taylor expansion:

g(u_t, x_{t−1}) ≈ g(u_t, µ_{t−1}) + (∂g(u_t, µ_{t−1}) / ∂x_{t−1}) (x_{t−1} − µ_{t−1})
              = g(u_t, µ_{t−1}) + G_t (x_{t−1} − µ_{t−1})

h(x_t) ≈ h(µ̄_t) + (∂h(µ̄_t) / ∂x_t) (x_t − µ̄_t)
       = h(µ̄_t) + H_t (x_t − µ̄_t)

G_t and H_t are Jacobian matrices.
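In practice the Jacobians are usually derived analytically, but a finite-difference approximation makes the definition concrete and is a useful sanity check. A sketch (the helper `numerical_jacobian` is not from the slides):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian df/dx at x by central differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

# Example: h maps a 2D position to its range from the origin.
h = lambda x: np.array([np.hypot(x[0], x[1])])
H = numerical_jacobian(h, np.array([3.0, 4.0]))  # analytic Jacobian: [3/5, 4/5]
```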
After linearization, g and h are linear functions of the state, so the belief stays Gaussian and the Kalman filter machinery applies.
1. Algorithm Extended_Kalman_filter(µ_{t−1}, Σ_{t−1}, u_t, z_t):
2. Prediction:
3.    µ̄_t = g(u_t, µ_{t−1})
4.    Σ̄_t = G_t Σ_{t−1} G_t^T + Q_t
5. Correction:
6.    K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + R_t)^{−1}
7.    µ_t = µ̄_t + K_t (z_t − h(µ̄_t))
8.    Σ_t = (I − K_t H_t) Σ̄_t
9. Return µ_t, Σ_t

with the Jacobians G_t = ∂g(u_t, µ_{t−1}) / ∂x_{t−1} and H_t = ∂h(µ̄_t) / ∂x_t
For comparison, the linear Kalman filter:

µ̄_t = A_t µ_{t−1} + B_t u_t
Σ̄_t = A_t Σ_{t−1} A_t^T + Q_t
K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^{−1}
µ_t = µ̄_t + K_t (z_t − C_t µ̄_t)
Σ_t = (I − K_t C_t) Σ̄_t
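The EKF algorithm above can be sketched with the models and their Jacobians passed in as callables. The function names and the scalar example (identity motion, quadratic sensor) are illustrative, not from the slides:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, h, G, H, Q, R):
    """One EKF iteration; g, h are the nonlinear models, G, H their Jacobians."""
    # Prediction (lines 3-4): propagate the mean through g, linearize for the covariance
    mu_bar = g(u, mu)
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + Q
    # Correction (lines 6-8): linearize h at the predicted mean
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + R)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new

# Scalar example: g(u, x) = x + u, h(x) = x^2, so H(x) = [2x].
g = lambda u, x: x + u
G = lambda u, x: np.eye(1)
h = lambda x: x**2
H = lambda x: np.array([[2.0 * x[0]]])
mu, Sigma = ekf_step(np.array([1.0]), np.eye(1), np.array([0.0]),
                     np.array([1.2]), g, h, G, H,
                     0.01 * np.eye(1), 0.04 * np.eye(1))
```

The only difference from the linear `kalman_filter` is that A_t, B_t, C_t are replaced by the functions g, h and their Jacobians evaluated at the current estimate.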
§ EKF localization with landmarks (point features)
Prediction:

µ̄_t = g(u_t, µ_{t−1})    (predicted mean)
Σ̄_t = G_t Σ_{t−1} G_t^T + V_t Q_t V_t^T    (predicted covariance; V_t maps the motion noise into state space)

G_t = ∂g(u_t, µ_{t−1}) / ∂x_{t−1} =
   [ ∂x'/∂µ_{t−1,x}   ∂x'/∂µ_{t−1,y}   ∂x'/∂µ_{t−1,θ} ]
   [ ∂y'/∂µ_{t−1,x}   ∂y'/∂µ_{t−1,y}   ∂y'/∂µ_{t−1,θ} ]    (Jacobian of g w.r.t. location)
   [ ∂θ'/∂µ_{t−1,x}   ∂θ'/∂µ_{t−1,y}   ∂θ'/∂µ_{t−1,θ} ]

V_t = ∂g(u_t, µ_{t−1}) / ∂u_t =
   [ ∂x'/∂v_t   ∂x'/∂ω_t ]
   [ ∂y'/∂v_t   ∂y'/∂ω_t ]    (Jacobian of g w.r.t. control)
   [ ∂θ'/∂v_t   ∂θ'/∂ω_t ]

Q_t = [ (α_1|v_t| + α_2|ω_t|)²              0             ]
      [             0              (α_3|v_t| + α_4|ω_t|)² ]    (motion noise)
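For a differential-drive robot these quantities can be written out in closed form. The sketch below assumes the standard velocity motion model with ω ≠ 0 (the model g itself is not printed on the slide, so its formulas are an assumption; V_t is derived analogously and omitted for brevity):

```python
import numpy as np

def motion_model(mu, u, dt):
    """Velocity motion model g(u_t, mu_{t-1}); assumes omega != 0."""
    x, y, theta = mu
    v, w = u
    return np.array([
        x - v / w * np.sin(theta) + v / w * np.sin(theta + w * dt),
        y + v / w * np.cos(theta) - v / w * np.cos(theta + w * dt),
        theta + w * dt,
    ])

def G_jacobian(mu, u, dt):
    """G_t: partials of (x', y', theta') w.r.t. the pose (x, y, theta)."""
    _, _, theta = mu
    v, w = u
    return np.array([
        [1.0, 0.0, -v / w * np.cos(theta) + v / w * np.cos(theta + w * dt)],
        [0.0, 1.0, -v / w * np.sin(theta) + v / w * np.sin(theta + w * dt)],
        [0.0, 0.0, 1.0],
    ])

def motion_noise(u, alphas):
    """Q_t = diag((a1|v|+a2|w|)^2, (a3|v|+a4|w|)^2) in control space."""
    v, w = u
    a1, a2, a3, a4 = alphas
    return np.diag([(a1 * abs(v) + a2 * abs(w)) ** 2,
                    (a3 * abs(v) + a4 * abs(w)) ** 2])
```

Since x' and y' do not depend on the previous x and y beyond a pure translation, the upper-left block of G_t is the identity; only the heading column is nontrivial.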
Correction:

ẑ_t = [ √((m_x − µ̄_{t,x})² + (m_y − µ̄_{t,y})²)         ]
      [ atan2(m_y − µ̄_{t,y}, m_x − µ̄_{t,x}) − µ̄_{t,θ} ]    (predicted measurement mean; depends on the observation type)

H_t = ∂h(µ̄_t, m) / ∂x_t =
   [ ∂r_t/∂µ̄_{t,x}   ∂r_t/∂µ̄_{t,y}   ∂r_t/∂µ̄_{t,θ} ]
   [ ∂φ_t/∂µ̄_{t,x}   ∂φ_t/∂µ̄_{t,y}   ∂φ_t/∂µ̄_{t,θ} ]    (Jacobian of h w.r.t. location)

S_t = H_t Σ̄_t H_t^T + R_t    (innovation covariance)

R_t = [ σ_r²    0   ]
      [  0    σ_φ² ]    (measurement noise)

K_t = Σ̄_t H_t^T S_t^{−1}    (Kalman gain)
µ_t = µ̄_t + K_t (z_t − ẑ_t)    (updated mean)
Σ_t = (I − K_t H_t) Σ̄_t    (updated covariance)
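The range-bearing correction above can be sketched as follows. Function names are illustrative; the bearing-innovation normalization to (−π, π] is standard practice but not shown on the slide:

```python
import numpy as np

def expected_measurement(mu_bar, m):
    """z_hat_t for landmark m = (m_x, m_y): range and bearing from the predicted pose."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - mu_bar[2]])

def H_jacobian(mu_bar, m):
    """H_t: partials of (r, phi) w.r.t. the pose (x, y, theta)."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    q = dx**2 + dy**2
    sq = np.sqrt(q)
    return np.array([[-dx / sq, -dy / sq,  0.0],
                     [ dy / q,  -dx / q,  -1.0]])

def landmark_update(mu_bar, Sigma_bar, z, m, R):
    """One EKF localization correction from a single range-bearing observation."""
    z_hat = expected_measurement(mu_bar, m)
    H = H_jacobian(mu_bar, m)
    S = H @ Sigma_bar @ H.T + R               # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)    # Kalman gain
    nu = z - z_hat
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    mu = mu_bar + K @ nu
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar
    return mu, Sigma
```

Note that the heading column of the range row is zero: rotating in place does not change the distance to a landmark, only the bearing.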