

SLIDE 1

16-311-Q INTRODUCTION TO ROBOTICS FALL’17

LECTURE 20:

EXTENDED KALMAN FILTER

INSTRUCTOR: GIANNI A. DI CARO

SLIDE 2

EKF FOR MAP-BASED ROBOT LOCALIZATION


  • 1. Action/prediction update: Proprioceptive sensing
  • 2. Perception/measurement update: Exteroceptive sensing
  • 1. Odometry measures: EKF with only motion
  • 2. Detection of landmarks: EKF with motion + observations
SLIDE 3

DISCRETE-TIME MOTION EQUATIONS

From Runge–Kutta numeric integration of the pose evolution kinematic equations:

$$
\xi_{k+1} = \begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{bmatrix}
= \begin{bmatrix}
x_k + \Delta S_k \cos\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
y_k + \Delta S_k \sin\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
\theta_k + \Delta\theta_k
\end{bmatrix}
$$

Assume that the odometry model is perfect, based on the measured distance ΔS and heading variation Δθ.

Odometry measurements are noisy! ➔ Random noise is added to ΔS and Δθ to model the motion kinematics. In absence of specific information, the motion noise is modeled as Gaussian white noise, and the two noise components are assumed to be uncorrelated (process noise):

$$
\nu_k = \begin{bmatrix} \nu^s_k & \nu^\theta_k \end{bmatrix}^T \sim \mathcal{N}(0, V_k),
\qquad
V_k = \begin{bmatrix} \sigma^2_{ks} & 0 \\ 0 & \sigma^2_{k\theta} \end{bmatrix}
$$

Discrete-time process (motion) equations:

$$
\xi_{k+1} = \begin{bmatrix} x_k \\ y_k \\ \theta_k \end{bmatrix}
+ \begin{bmatrix}
(\Delta S_k + \nu^s_k)\cos\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
(\Delta S_k + \nu^s_k)\sin\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\
\Delta\theta_k + \nu^\theta_k
\end{bmatrix}
$$
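As a minimal sketch, the noisy discrete-time motion model can be written as a small Python function (the name `motion_step` is illustrative, not from the lecture; passing zero noise gives the nominal Runge–Kutta update):

```python
import math

def motion_step(x, y, theta, dS, dTheta, nu_s=0.0, nu_theta=0.0):
    """One step of the odometry-based motion model.

    (x, y, theta): current pose; dS, dTheta: measured travelled distance
    and heading variation; nu_s, nu_theta: motion noise samples
    (zero at the mean, which is where the EKF will linearize).
    """
    a = theta + dTheta / 2.0 + nu_theta  # heading at the segment midpoint
    x_new = x + (dS + nu_s) * math.cos(a)
    y_new = y + (dS + nu_s) * math.sin(a)
    theta_new = theta + dTheta + nu_theta
    return x_new, y_new, theta_new
```

With `nu_s = nu_theta = 0` this reduces to the perfect-odometry equations at the top of the slide.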

SLIDE 4

NON-LINEARITY OF THE DISCRETE-TIME MOTION EQUATIONS

Discrete-time process (motion) equations (the noisy motion equations from the previous slide):

$$
\xi_{k+1} = f(\xi_k, \Delta S_k, \Delta\theta_k, \nu_k),
\qquad
\nu_k = \begin{bmatrix} \nu^s_k & \nu^\theta_k \end{bmatrix}^T \sim \mathcal{N}(0, V_k)
$$

The process dynamics function f() is not linear ➜ the process equations do not meet the linearity requirement for using the Kalman filter.

Linearize the pose evolution f() in the neighborhood of [ξ̂_{k|k}, u_k, (ν_k = 0)]: the current state estimate, the controls (ΔS_k and Δθ_k), and the mean of the process noise.

1st-order Taylor series, linear in ξ_k and ν_k:

$$
f(\xi_k, u_k, \nu_k) \approx f(\xi, u, \nu)\big|_{\hat\xi_{k|k}, u_k, 0}
+ F_\xi\big|_{\hat\xi_{k|k}, u_k, 0}\,(\xi_k - \hat\xi_{k|k})
+ F_\nu\big|_{\hat\xi_{k|k}, u_k, 0}\,(\nu_k - 0)
= f_k(\hat\xi_{k|k}, u_k, 0) + F_{k\xi}(\xi_k - \hat\xi_{k|k}) + F_{k\nu}\,\nu_k
$$

SLIDE 5

EXTENDED KALMAN FILTER (EKF): LINEARIZED MOTION MODEL

Linear(ized) discrete-time process (motion) equations:

$$
\xi_{k+1} = f_k(\hat\xi_{k|k}, u_k, 0) + F_{k\xi}(\xi_k - \hat\xi_{k|k}) + F_{k\nu}\,\nu_k
$$

Scenario (prediction from motion): the robot moves but no external observations are made. Proprioceptive measures from the on-board odometry sensors are used to model the robot's motion dynamics, avoiding the need to consider the direct control inputs.

Linearization of the motion dynamics uses the Jacobians F_{kξ} and F_{kν}, which have to be evaluated at (ξ_k = ξ̂_{k|k}, u_k, ν_k = 0)

➔ The rules for linear transformations of the mean and (co)variance of Gaussian variables can be applied.

Extended Kalman Filter (EKF), motion only. Prediction update:

$$
\hat\xi_{k+1|k} = f_k(\hat\xi_{k|k}, 0; \Delta S_k, \Delta\theta_k) \quad \text{(state prediction)}
$$
$$
P_{k+1|k} = F_{k\xi} P_k F_{k\xi}^T + F_{k\nu} V_k F_{k\nu}^T \quad \text{(covariance prediction)}
$$

For reference, the (linear) measurement correction, not exploited in this motion-only scenario:

$$
\hat\xi_{k+1} = \hat\xi_{k+1|k} + G_{k+1}\,(z_{k+1} - C_{k+1}\hat\xi_{k+1|k}) \quad \text{(state update)}
$$
$$
P_{k+1} = P_{k+1|k} - G_{k+1} C_{k+1} P_{k+1|k} \quad \text{(covariance update)}
$$
$$
G_{k+1} = P_{k+1|k} C_{k+1}^T \big(C_{k+1} P_{k+1|k} C_{k+1}^T + W_{k+1}\big)^{-1} \quad \text{(Kalman gain)}
$$
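The motion-only prediction update can be sketched with numpy (the function name `ekf_predict` is an illustrative choice; the Jacobians are the ones derived on the next slide):

```python
import numpy as np

def ekf_predict(xi, P, dS, dTheta, V):
    """Motion-only EKF prediction: propagate state mean and covariance.

    xi: (3,) state estimate [x, y, theta]; P: (3, 3) state covariance;
    dS, dTheta: odometry measures; V: (2, 2) process-noise covariance
    diag(sigma_s^2, sigma_theta^2).
    """
    x, y, theta = xi
    a = theta + dTheta / 2.0
    # State prediction: motion model with the noise at its mean (nu = 0)
    xi_pred = np.array([x + dS * np.cos(a),
                        y + dS * np.sin(a),
                        theta + dTheta])
    # Jacobians of the motion model w.r.t. state and noise
    F_xi = np.array([[1.0, 0.0, -dS * np.sin(a)],
                     [0.0, 1.0,  dS * np.cos(a)],
                     [0.0, 0.0,  1.0]])
    F_nu = np.array([[np.cos(a), -dS * np.sin(a)],
                     [np.sin(a),  dS * np.cos(a)],
                     [0.0,        1.0]])
    # Covariance prediction: P' = F_xi P F_xi^T + F_nu V F_nu^T
    P_pred = F_xi @ P @ F_xi.T + F_nu @ V @ F_nu.T
    return xi_pred, P_pred
```

Without measurement corrections, repeated calls only add process noise, so the covariance (and hence the pose uncertainty) grows without bound, as shown in the later slides.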

SLIDE 6

EKF JACOBIANS FOR THE LINEARIZED MOTION MODEL

The Jacobian of the non-linear function f() is computed at [ξ̂_{k|k}, u_k, (ν_k = 0)]: the current state estimate (the mean), the current controls, and the mean of the Gaussian noise. f() is a vector function with three function components:

$$
f_{kx} = x_k + (\Delta S_k + \nu^s_k)\cos\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big)
$$
$$
f_{ky} = y_k + (\Delta S_k + \nu^s_k)\sin\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big)
$$
$$
f_{k\theta} = \theta_k + \Delta\theta_k + \nu^\theta_k
$$

The Jacobian matrix of f:

$$
F_k(x_k, y_k, \theta_k, \nu^s_k, \nu^\theta_k)
= \begin{bmatrix} \nabla f_{kx} & \nabla f_{ky} & \nabla f_{k\theta} \end{bmatrix}^T
= \begin{bmatrix}
\frac{\partial f_{kx}}{\partial x_k} & \frac{\partial f_{kx}}{\partial y_k} & \frac{\partial f_{kx}}{\partial \theta_k} & \frac{\partial f_{kx}}{\partial \nu^s_k} & \frac{\partial f_{kx}}{\partial \nu^\theta_k} \\
\frac{\partial f_{ky}}{\partial x_k} & \frac{\partial f_{ky}}{\partial y_k} & \frac{\partial f_{ky}}{\partial \theta_k} & \frac{\partial f_{ky}}{\partial \nu^s_k} & \frac{\partial f_{ky}}{\partial \nu^\theta_k} \\
\frac{\partial f_{k\theta}}{\partial x_k} & \frac{\partial f_{k\theta}}{\partial y_k} & \frac{\partial f_{k\theta}}{\partial \theta_k} & \frac{\partial f_{k\theta}}{\partial \nu^s_k} & \frac{\partial f_{k\theta}}{\partial \nu^\theta_k}
\end{bmatrix}
= \begin{bmatrix} F_{k\xi} & F_{k\nu} \end{bmatrix}
$$

Evaluated at (ξ̂_{k|k}, u_k, ν = 0):

$$
F_{k\xi} = \begin{bmatrix}
1 & 0 & -\Delta S_k \sin\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 1 & \Delta S_k \cos\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 0 & 1
\end{bmatrix}
\qquad
F_{k\nu} = \begin{bmatrix}
\cos\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) & -\Delta S_k \sin\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
\sin\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) & \Delta S_k \cos\big(\theta_k + \tfrac{\Delta\theta_k}{2}\big) \\
0 & 1
\end{bmatrix}
$$
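Analytic Jacobians like these can be checked numerically against finite differences of f, a standard sanity check when implementing an EKF. A sketch (function names are illustrative):

```python
import numpy as np

def f_motion(xi, dS, dTheta, nu):
    """Noisy motion model f(xi, u, nu) from the previous slide."""
    x, y, theta = xi
    a = theta + dTheta / 2.0 + nu[1]
    return np.array([x + (dS + nu[0]) * np.cos(a),
                     y + (dS + nu[0]) * np.sin(a),
                     theta + dTheta + nu[1]])

def jacobians(xi, dS, dTheta):
    """Analytic F_xi (3x3) and F_nu (3x2), evaluated at nu = 0."""
    a = xi[2] + dTheta / 2.0
    F_xi = np.array([[1.0, 0.0, -dS * np.sin(a)],
                     [0.0, 1.0,  dS * np.cos(a)],
                     [0.0, 0.0,  1.0]])
    F_nu = np.array([[np.cos(a), -dS * np.sin(a)],
                     [np.sin(a),  dS * np.cos(a)],
                     [0.0,        1.0]])
    return F_xi, F_nu

def numeric_jacobian(fun, x0, eps=1e-6):
    """Central finite-difference Jacobian of fun at x0."""
    x0 = np.asarray(x0, dtype=float)
    cols = []
    for i in range(len(x0)):
        d = np.zeros_like(x0)
        d[i] = eps
        cols.append((fun(x0 + d) - fun(x0 - d)) / (2 * eps))
    return np.stack(cols, axis=1)
```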

SLIDE 7
  • Def. (Derivative): Given a scalar function f : X ⊆ ℝ → ℝ, if the limit

$$
\lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}
$$

exists and takes a finite value, f is differentiable at x₀ ∈ X, and the value of the limit is the derivative of the function at x₀, also indicated with f′(x₀) ≝ (df/dx)(x₀).

Geometric interpretation: the derivative is the slope of the tangent to the graph of f at the point (x₀, f(x₀)). This can be shown by considering the line passing through the two points (x₀, f(x₀)) and (x₀ + h, f(x₀ + h)) of the graph of f, whose slope is m = (f(x₀ + h) − f(x₀)) / ((x₀ + h) − x₀). If h → 0, the secant to the curve overlaps with the tangent at x₀. That is, the equation of the tangent to f at x₀ is y = f(x₀) + f′(x₀)(x − x₀), which is precisely the first-order Taylor series computed at x₀.

[Figure: the secant through (x₀, f(x₀)) and (x₀ + h, f(x₀ + h)) converging to the tangent at x₀ as h → 0]

RECAP ON DERIVATIVES, GRADIENTS, JACOBIANS
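As a quick numeric illustration of this recap, the tangent line y = f(x₀) + f′(x₀)(x − x₀) is exactly the first-order Taylor approximation of f near x₀ (a small sketch; `tangent_line` is an illustrative name):

```python
import math

def tangent_line(f, df, x0):
    """Return the first-order Taylor (tangent) approximation of f at x0."""
    return lambda x: f(x0) + df(x0) * (x - x0)

# Tangent to sin at x0 = 0 is simply y = x
t = tangent_line(math.sin, math.cos, 0.0)
```

Near x₀ the approximation error shrinks quadratically in (x − x₀), which is why linearization works well for small deviations, exactly the regime the EKF relies on.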

SLIDE 8

Gradient: the “derivative” for scalar functions of multiple variables → normal to the tangent hyperplane to the function graph. Given a scalar, differentiable, multi-variable function f : ℝⁿ → ℝ, its gradient is the vector of its partial derivatives:

$$
\nabla f(x_1, x_2, \ldots, x_n) \stackrel{\text{def}}{=}
\Big( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \Big)
= \frac{\partial f}{\partial x_1} e_1 + \frac{\partial f}{\partial x_2} e_2 + \ldots + \frac{\partial f}{\partial x_n} e_n
$$

For f : X ⊆ ℝⁿ → ℝ, the Taylor series becomes:

$$
f(x)\big|_{x_0} = \sum_{|k| \geq 0} \frac{1}{k!}\, \partial^k f(x_0)\,(x - x_0)^k
$$

where k is a multi-index, an integer-valued vector k = (k₁, k₂, …, kₙ), kᵢ ∈ ℤ⁺, and ∂ᵏf means ∂₁^{k₁} ∂₂^{k₂} ⋯ ∂ₙ^{kₙ} f, with ∂ᵢ f = ∂f/∂xᵢ. The 2nd-order polynomial is:

$$
f(x) \approx f(x_0) + \nabla f(x_0)^T (x - x_0) + \tfrac{1}{2}\,(x - x_0)^T H(f(x_0))\,(x - x_0)
$$

Removing the quadratic part, the linear approximation is obtained, that is, the equation of the tangent hyperplane at x₀, where the gradient is normal to the tangent hyperplane.

Jacobian: the “gradient” for vector functions of multiple variables → each function component has a tangent hyperplane to the function graph → a map of tangent hyperplanes.

RECAP ON DERIVATIVES, GRADIENTS, JACOBIANS

SLIDE 9

EFFECT OF LINEARIZATION: LINEAR CASE

SLIDE 10

EFFECT OF LINEARIZATION: NON-LINEAR CASE

SLIDE 11

EFFECT OF LINEARIZATION: NON-LINEAR CASE

SLIDE 12

EFFECT OF LINEARIZATION: NON-LINEAR CASE

SLIDE 13

EFFECT OF LINEARIZATION: NON-LINEAR CASE

SLIDE 14

The ellipses in the plot show the error in (x, y); the error in θ (the third component of the covariance matrix) also grows, but usually less than that in (x, y).

ERROR IN LOCALIZATION KEEPS GROWING

SLIDE 15

The magnitude of the total uncertainty, including both position and heading, is quantified by √det(P̂), shown in the plot for different values of V = αV₀, α ∈ {0.5, 1, 2}.

UNCERTAINTY AS PROCESS VARIANCE

SLIDE 16

EKF FOR MAP-BASED ROBOT LOCALIZATION


  • 1. Action/prediction update: Proprioceptive sensing
  • 2. Perception/measurement update: Exteroceptive sensing
  • 1. Odometry measures: EKF with only motion
  • 2. Detection of landmarks: EKF with motion + observations
SLIDE 17

Exteroceptive measures are needed in the filter to reduce the pose uncertainty. A map is provided to the robot: a list of objects in the environment, along with their properties. Let's consider the case in which the map contains n fixed landmarks with their positions. Each landmark is identifiable by the robot through a set of detectable features.

USING MAPS TO REDUCE THE ERROR

SLIDE 18

The robot is equipped with (range finder) sensors that provide observations of the landmarks with respect to the robot, as described by the observation model:

$$
z_{k+1} = h_k(\xi_k, w_k; \lambda^i_k)
$$

λ^i_k = [λ^i_{kx} λ^i_{ky}]^T is the known (from the map) location, in the world frame, of the landmark observed at time step k; w_k models the sensing errors; ξ_k = [x_k y_k θ_k]^T.

Using its range sensor, the robot performs the measurement z_{k+1} = [ρ_k β_k]^T relative to landmark i detected at step k: ρ_k is the range, β_k is the bearing angle of the landmark with respect to the robot (i.e., the landmark's position expressed in polar coordinates in the robot's local frame).

In the considered scenario, an observation also returns the identity i of the sensed landmark. In more general terms, the observation of the landmarks is performed through the observation of a feature vector (e.g., a set of geometric features like line or arc segments), which in turn needs to be associated to a specific landmark → the data association problem, to distinguish among different landmarks as well as to discard pure noise, which is not considered here. The knowledge of the identity i of the landmark allows the robot to retrieve from the map the Cartesian coordinates (λ^i_{kx}, λ^i_{ky}) of the landmark.

In absence of specific information, the sensor noise is modeled as Gaussian white noise, and the two noise components of the sensing are assumed to be uncorrelated:

$$
w_k = \begin{bmatrix} w^\rho_k & w^\beta_k \end{bmatrix}^T \sim \mathcal{N}(0, W_k),
\qquad
W_k = \begin{bmatrix} \sigma^2_\rho & 0 \\ 0 & \sigma^2_\beta \end{bmatrix}
$$

LANDMARK-BASED MAPS

SLIDE 19

Function h_k plays the role of f for the observations: it allows computing the predicted measurement from the predicted state ξ̂_{k+1|k}. It maps the state vector into the observation vector z_{k+1}.

At time k, the observation model h_k(ξ_k, w_k; λ) returns the observation z_{k+1} that the robot is expected to make in state ξ_k, accounting for sensor noise. In the scenario, at pose ξ_k the robot is expected to detect landmark i at a defined range ρ and bearing β, that is, through the measure z_{k+1} = (ρ, β), possibly corrupted by white Gaussian noise.

Since h_k maps the state (robot coordinates in the world reference frame) into the observation vector (polar coordinates of the landmark in the robot's reference frame), the observation model is:

$$
z_{k+1} = \begin{bmatrix}
\sqrt{(\lambda^i_{kx} - x_k)^2 + (\lambda^i_{ky} - y_k)^2} \\
\arctan\!\big( (\lambda^i_{ky} - y_k) / (\lambda^i_{kx} - x_k) \big) - \theta_k
\end{bmatrix}
+ \begin{bmatrix} w^\rho_k \\ w^\beta_k \end{bmatrix}
$$

LANDMARK DETECTION AND OBSERVATION MODEL
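The range–bearing observation model can be sketched as a short Python function (noise set to its mean of zero; `observe` is an illustrative name, and `atan2` is used as the quadrant-aware form of the arctan in the slide):

```python
import math

def observe(xi, landmark):
    """Expected range and bearing of a landmark from pose xi = (x, y, theta).

    Returns z = (rho, beta): the landmark position in polar coordinates
    in the robot's local frame, with the sensor noise at its mean (0).
    """
    x, y, theta = xi
    lx, ly = landmark
    rho = math.hypot(lx - x, ly - y)
    beta = math.atan2(ly - y, lx - x) - theta
    return rho, beta
```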

SLIDE 20

h_k potentially changes at each time step, being parametrized by the coordinates λ^i_k = (λ^i_{kx}, λ^i_{ky}) of the specific landmark detected, whose identity i is assumed to be known/acquired.

Using the observation model h_k, the robot computes the expected range and bearing angle to the detected feature, based on its own predicted pose ξ̂_{k+1|k} and the known position of the landmark from the input map.

Any difference between the actual observation z_{k+1} = (ρ_k, β_k) and the estimated observation h_k(ξ̂_{k+1|k}, 0; λ^i_k) indicates an error in the robot's position estimate: the robot isn't where it thought it was! The difference is quantified in the Kalman filter by the innovation term:

$$
\epsilon_{k+1} = z_{k+1} - h_k(\hat\xi_{k+1|k}, 0; \lambda^i_{k+1})
$$

Same problem as before: h is a non-linear function of the state!

WHAT MEASUREMENTS TELL

SLIDE 21

Example: at step k + 1 the robot detects landmark i at a relative range of 2 m and a relative angle of 90°, that is, z_{k+1} = [2 90°]^T; from the input map, the position of landmark i is λ^i = (3, 3); the robot's predicted pose according to the current state of the Kalman filter is ξ̂_{k+1|k} = [2 2 0]^T, while its correct pose is ξ_{k+1} = [3 1 0]^T (i.e., there is no sensing error, as can be seen from the figure).

The innovation is:

$$
\epsilon_{k+1} = z_{k+1} - h_k(\hat\xi_{k+1|k}, 0; \lambda^i_{k+1})
= \begin{bmatrix} 2 \\ 90° \end{bmatrix}
- \begin{bmatrix} \sqrt{1^2 + 1^2} \\ \arctan(1/1) - 0 \end{bmatrix}
= \begin{bmatrix} 2 - \sqrt{2} \\ 45° \end{bmatrix}
$$

In [m, rad] units the innovation is [2 − √2, 0.79]^T, so its Euclidean norm is ‖ε_{k+1}‖ = √((2 − √2)² + 0.79²) ≈ 0.98.

NUMERIC EXAMPLE
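The worked example can be replayed in a few lines (angles in radians; `innovation` is an illustrative helper name, with the observation model inlined):

```python
import math

def innovation(z, xi_pred, landmark):
    """Innovation eps = z - h(xi_pred, 0; landmark), with z = (rho, beta)."""
    x, y, theta = xi_pred
    lx, ly = landmark
    rho_hat = math.hypot(lx - x, ly - y)
    beta_hat = math.atan2(ly - y, lx - x) - theta
    return z[0] - rho_hat, z[1] - beta_hat

# Slide example: z = (2 m, 90 deg), predicted pose (2, 2, 0), landmark (3, 3)
eps = innovation((2.0, math.pi / 2), (2.0, 2.0, 0.0), (3.0, 3.0))
norm = math.hypot(eps[0], eps[1])  # about 0.98, as on the slide
```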

SLIDE 22

Linearized observation model in the EKF: a 1st-order Taylor expansion of h_k() in the neighborhood of the current state estimate, parametrized by the coordinates λ_k:

$$
h_k(\xi_k, w_k; \lambda_k) \approx h_k(\xi, w; \lambda_k)\big|_{\hat\xi_{k+1|k}, 0}
+ H_\xi\big|_{\hat\xi_{k+1|k}, 0}\,(\xi_k - \hat\xi_{k+1|k})
+ H_w\big|_{\hat\xi_{k+1|k}, 0}\,(w_k - 0)
= h_k(\hat\xi_{k+1|k}, 0; \lambda_k) + H_{k\xi}(\xi_k - \hat\xi_{k+1|k}) + H_{kw}\,w_k
$$

Therefore the observation predictions become linear and can be used in the EKF equations below, with H, the Jacobian of h, playing the role of matrix C.

Prediction update:

$$
\hat\xi_{k+1|k} = f_k(\hat\xi_{k|k}, u_k, 0) \quad \text{(state prediction)}
$$
$$
P_{k+1|k} = F_{k\xi} P_k F_{k\xi}^T + F_{k\nu} V_k F_{k\nu}^T \quad \text{(covariance prediction)}
$$

Measurement correction:

$$
\hat\xi_{k+1} = \hat\xi_{k+1|k} + G_{k+1}\big(z_{k+1} - h_k(\hat\xi_{k+1|k}, 0; \lambda^i_k)\big) \quad \text{(state update)}
$$
$$
P_{k+1} = P_{k+1|k} - G_{k+1} H_{k\xi} P_{k+1|k} \quad \text{(covariance update)}
$$
$$
G_{k+1} = P_{k+1|k} H_{k\xi}^T S_{k+1}^{-1} \quad \text{(Kalman gain)}
$$
$$
S_{k+1} = H_{k\xi} P_{k+1|k} H_{k\xi}^T + H_{kw} W_{k+1} H_{kw}^T \quad \text{(innovation covariance)}
$$

LINEARIZATION OF THE OBSERVATION MODEL
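The measurement-correction equations can be sketched with numpy (a toy illustration, not the lecture's code; H_w is taken as the identity, so the innovation covariance is S = H_ξ P H_ξᵀ + W):

```python
import numpy as np

def ekf_update(xi_pred, P_pred, z, z_hat, H_xi, W):
    """EKF measurement correction.

    xi_pred, P_pred: predicted state and covariance; z: actual observation;
    z_hat = h(xi_pred, 0): predicted observation; H_xi: observation Jacobian;
    W: measurement-noise covariance (with H_w = I).
    """
    S = H_xi @ P_pred @ H_xi.T + W             # innovation covariance
    G = P_pred @ H_xi.T @ np.linalg.inv(S)     # Kalman gain
    xi_new = xi_pred + G @ (z - z_hat)         # state update
    P_new = P_pred - G @ H_xi @ P_pred         # covariance update
    return xi_new, P_new
```

A correction always shrinks the covariance in the observed directions, which is why landmark sightings stop the unbounded uncertainty growth seen in the motion-only case.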

SLIDE 23

The Jacobian of the non-linear function h_k is computed at the mean of the Gaussian measurement noise (w = 0) and at the current state estimate ξ̂_{k+1|k} (which corresponds to the estimated mean of the Gaussian distribution of the state variable).

Adopting a notation similar to the one used before for f, express h_k as h_k = [h_{kρ} h_{kβ}]^T, including the sensor noise:

$$
h_{k\rho} = \sqrt{(\lambda^i_{kx} - x_k)^2 + (\lambda^i_{ky} - y_k)^2} + w^\rho_k
$$
$$
h_{k\beta} = \arctan\!\big( (\lambda^i_{ky} - y_k) / (\lambda^i_{kx} - x_k) \big) - \theta_k + w^\beta_k
$$

The Jacobian matrix of h_k is therefore:

$$
H_k(x_k, y_k, \theta_k, w^\rho_k, w^\beta_k)
= \begin{bmatrix} \nabla h_{k\rho} & \nabla h_{k\beta} \end{bmatrix}^T
= \begin{bmatrix}
\frac{\partial h_{k\rho}}{\partial x_k} & \frac{\partial h_{k\rho}}{\partial y_k} & \frac{\partial h_{k\rho}}{\partial \theta_k} & \frac{\partial h_{k\rho}}{\partial w^\rho_k} & \frac{\partial h_{k\rho}}{\partial w^\beta_k} \\
\frac{\partial h_{k\beta}}{\partial x_k} & \frac{\partial h_{k\beta}}{\partial y_k} & \frac{\partial h_{k\beta}}{\partial \theta_k} & \frac{\partial h_{k\beta}}{\partial w^\rho_k} & \frac{\partial h_{k\beta}}{\partial w^\beta_k}
\end{bmatrix}
= \begin{bmatrix} H_{k\xi} & H_{kw} \end{bmatrix}
$$

Evaluated at (ξ̂_{k+1|k}, w = 0):

$$
H_{k\xi} = \begin{bmatrix}
-\dfrac{\lambda^i_{kx} - x_k}{r^i_k} & -\dfrac{\lambda^i_{ky} - y_k}{r^i_k} & 0 \\[2ex]
\dfrac{\lambda^i_{ky} - y_k}{(r^i_k)^2} & -\dfrac{\lambda^i_{kx} - x_k}{(r^i_k)^2} & -1
\end{bmatrix}
\qquad
H_{kw} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
$$

where r^i_k is the distance of landmark i from the predicted state:

$$
r^i_k = \sqrt{(\lambda^i_{kx} - x_k)^2 + (\lambda^i_{ky} - y_k)^2}
$$

JACOBIANS FOR THE LINEARIZED OBSERVATION MODEL
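A sketch of the analytic H_ξ, in the same spirit as the motion Jacobians (illustrative function names; the finite-difference comparison is the usual implementation check):

```python
import numpy as np

def h_obs(xi, landmark):
    """Noise-free range-bearing observation of a landmark."""
    x, y, theta = xi
    lx, ly = landmark
    return np.array([np.hypot(lx - x, ly - y),
                     np.arctan2(ly - y, lx - x) - theta])

def H_xi(xi, landmark):
    """Analytic Jacobian of h w.r.t. the state (x, y, theta)."""
    x, y, theta = xi
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    r = np.hypot(dx, dy)
    return np.array([[-dx / r,    -dy / r,     0.0],
                     [ dy / r**2, -dx / r**2, -1.0]])
```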

SLIDE 24

n = 20 landmarks are randomly deployed in a square environment of 20×20 m²; σ_ρ = 0.1 m, σ_β = 1°. Every n steps a reading is performed, returning the measured range and bearing to a randomly selected landmark. This is a quite favorable scenario for the EKF.

EXPERIMENTAL RESULTS: AN ALMOST PERFECT TRACKING

SLIDE 25

EVOLUTION OF THE ERROR: NO SYSTEMATIC GROWTH

SLIDE 26

Simulated run with no visible beacons. The triangles represent the actual robot position and orientation [x(k), y(k), θ(k)]^T, the rectangles represent the estimated robot pose, and the ellipses represent the confidence in the estimates of x(k) and y(k).

EKF WITH ENVIRONMENT BEACONS

SLIDE 27

Simulated run taking observations of a single wall beacon using a sonar sensor. After the wall comes into view, the error ellipse shrinks perpendicular to the wall as the a posteriori confidence in the estimates of x(k) and y(k) increases. Note that the only part of a smooth wall that can be “seen” by a sonar sensor is the portion of the wall that is perpendicular to the incident sonar beam.

EKF WITH ENVIRONMENT BEACONS

SLIDE 28

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run with localization from first one, then two wall beacons. After the first wall comes into view, the error ellipse shrinks perpendicular to the wall as the a posteriori confidence in the estimates of x(k) and y(k) increases. The same happens with the view of the second wall, overall reducing the estimate uncertainty.

EKF WITH ENVIRONMENT BEACONS

SLIDE 29

KALMAN WITH SPORADIC MEASURES: ENVIRONMENT BEACONS

Simulated run with localization from a sequence of wall beacons. The presence of multiple wall beacons allows the uncertainty estimate to always be kept very low.

EKF WITH ENVIRONMENT BEACONS