The Kalman Filter: An Algorithm for Dealing with Uncertainty (Steven Janke, May 2011)


SLIDE 1

The Kalman Filter

An Algorithm for Dealing with Uncertainty
Steven Janke, May 2011

Steven Janke (Seminar) The Kalman Filter May 2011 1 / 29

SLIDE 2

Autonomous Robots


SLIDE 3

The Problem


SLIDE 4

One Dimensional Problem

Let r_t be the position of the robot at time t. The variable r_t is a random variable.


SLIDE 5

Distributions and Variances

Definition

The distribution of a random variable is the list of values it takes along with the probability of those values. The variance of a random variable is a measure of how spread out the distribution is.

Example

Suppose the robot moves either 8, 10, or 12 cm. in one step with probabilities 0.25, 0.5, and 0.25 respectively. The distribution is centered around 10 cm. (the mean). The variance is Var(r1) = 0.25 · (8 − 10)2 + 0.5 · (10 − 10)2 + 0.25 · (12 − 10)2 = 2 cm2
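The computation in the example can be checked directly; a minimal sketch:

```python
# Distribution of one robot step: 8, 10, or 12 cm with probabilities 0.25, 0.5, 0.25.
step = {8: 0.25, 10: 0.5, 12: 0.25}

mean = sum(x * p for x, p in step.items())
var = sum((x - mean) ** 2 * p for x, p in step.items())
print(mean, var)  # 10.0 2.0
```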


SLIDE 6

Variances for One and Two Steps

The value of r_1 is the first step: Var(r_1) = 0.25·2² + 0.5·0² + 0.25·2² = 2. The value of r_2 is the sum of two steps: Var(r_2) = 0.0625·4² + 0.25·2² + 0.375·0² + 0.25·2² + 0.0625·4² = 4.
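The two-step distribution above is the convolution of the one-step distribution with itself, which a few lines of code can reproduce:

```python
from itertools import product

# One-step distribution, as on the previous slide.
step = {8: 0.25, 10: 0.5, 12: 0.25}

# Distribution of r2 = sum of two independent steps (self-convolution).
two_steps = {}
for (x, px), (y, py) in product(step.items(), step.items()):
    two_steps[x + y] = two_steps.get(x + y, 0.0) + px * py

mean2 = sum(v * p for v, p in two_steps.items())
var2 = sum((v - mean2) ** 2 * p for v, p in two_steps.items())
print(two_steps[20], var2)  # 0.375 4.0
```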


SLIDE 7

Variance Properties

Definition (Variance)

Suppose we have a random variable X with mean µ. Then

  Var(X) = Σ_x (x − µ)² P[X = x]

Note that Var(aX) = a² Var(X). If X and Y are independent random variables:

  Var(X + Y) = Var(X) + Var(Y)
  Var(X − Y) = Var(X) + Var(Y)

If X and Y are not independent random variables:

  Cov(X, Y) = Σ_{x,y} (x − µ_X)(y − µ_Y) P[X = x, Y = y]

  Var(X + Y) = Var(X) + 2 Cov(X, Y) + Var(Y)


SLIDE 8

Normal Density

f(x) = (1 / (σ √(2π))) e^{−(x − µ)² / (2σ²)}

SLIDE 9

Comparing Variances for the Normal Density


SLIDE 10

Estimators for Robot Position

Suppose that in one step, the robot moves on average 10 cm. Noise in the process makes r_1 a little different. Nevertheless, our prediction is r̂_1 = 10. The robot has an ultrasonic sensor for measuring distance. Again because of noise, the reading is not totally accurate, but we have a second estimate of r_1, namely ŝ_1. Both the process noise and the sensor noise are random variables with mean zero. Their variances may not be equal.


SLIDE 11

The Model and Estimates

The Model

Process: r_t = r_{t−1} + d + Pn_t   (d is the mean distance of one step)
Sensor: s_t = r_t + Sn_t

The noise random variables Pn_t and Sn_t are normal with mean 0. The variances of Pn_t and Sn_t are σ_p² and σ_s².

The Estimates

Process: r̂_t = r̂_{t−1} + d
Sensor: ŝ_t = r_t + Sn_t
Error variances: Var(r̂_1 − r_1) = σ_p² and Var(ŝ_t − r_t) = σ_s²


SLIDE 12

A Better Estimate

r̂_1 is an estimate of where the robot is after one step (r_1). ŝ_1 is also an estimate of r_1. The estimator errors have variances σ_p² and σ_s² respectively.

Example (Combine the Estimators)

One way to combine the information in both estimators is to take a linear combination:

  r̂*_1 = (1 − K) r̂_1 + K ŝ_1


SLIDE 13

Finding the Best K

Our new estimator is r̂*_1 = (1 − K) r̂_1 + K ŝ_1. A good estimator has small error variance:

  Var(r̂*_1 − r_1) = (1 − K)² Var(r̂_1 − r_1) + K² Var(ŝ_1 − r_1)

Substituting gives Var(r̂*_1 − r_1) = (1 − K)² σ_p² + K² σ_s². To minimize the variance, take the derivative with respect to K and set it to zero:

  −2(1 − K) σ_p² + 2K σ_s² = 0  ⟹  K = σ_p² / (σ_p² + σ_s²)

Definition

K is called the Kalman Gain.
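A quick numeric sanity check that the closed-form gain really minimizes the error variance; the values σ_p² = 2 and σ_s² = 3 are illustrative, not from the slides:

```python
# Error variance of the combined estimator as a function of the gain K:
# (1 - K)^2 * sigma_p^2 + K^2 * sigma_s^2.
sp2, ss2 = 2.0, 3.0   # illustrative sigma_p^2 and sigma_s^2 (assumption)

def err_var(K):
    return (1 - K) ** 2 * sp2 + K ** 2 * ss2

K_star = sp2 / (sp2 + ss2)   # closed-form Kalman Gain: 0.4
# Grid search over K in [0, 1] finds the same minimizer.
best_K = min(range(1001), key=lambda i: err_var(i / 1000)) / 1000
print(K_star, best_K)  # 0.4 0.4
```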


SLIDE 14

Smaller Variance

With the new K, the error variance of r̂*_1 is:

  Var(r̂*_1 − r_1) = Var[ (1 − σ_p²/(σ_p² + σ_s²)) (r̂_1 − r_1) + (σ_p²/(σ_p² + σ_s²)) (ŝ_1 − r_1) ]    (1)

  = (σ_s²/(σ_p² + σ_s²))² σ_p² + (σ_p²/(σ_p² + σ_s²))² σ_s²    (2)

  = σ_p² σ_s² / (σ_p² + σ_s²) = (1 − K) Var(r̂_1 − r_1)    (3)

Note that Var(r̂*_1 − r_1) is less than both Var(r̂_1 − r_1) and Var(ŝ_1 − r_1). Note the new estimator can be written r̂*_1 = r̂_1 + K(ŝ_1 − r̂_1).


SLIDE 15

The Kalman Filter

Now we can devise an algorithm:

Prediction:

Add the next step to our last estimate: r̂_t = r̂*_{t−1} + d
Add in the variance of the next step: Var(r̂_t − r_t) = Var(r̂*_{t−1} − r_{t−1}) + σ_p²

Update: (After Sensor Reading)

Kalman Gain: K = Var(r̂_t − r_t) / (Var(r̂_t − r_t) + σ_s²)
Update the position estimate: r̂*_t = r̂_t + K(ŝ_t − r̂_t)
Update the variance: Var(r̂*_t − r_t) = (1 − K) Var(r̂_t − r_t)
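The algorithm fits in a few lines. A minimal 1-D sketch; the parameters in the demo run are an inference from the worked table on slide 16 (r̂*_0 = 0 with variance 1000, d = 1, σ_p² = 1, σ_s² = 6):

```python
def kalman_1d(estimate, est_var, d, var_p, var_s, readings):
    """One-dimensional Kalman filter following the algorithm above.

    estimate, est_var -- prior estimate r*_0 and its error variance
    d                 -- mean distance moved per step
    var_p, var_s      -- process and sensor noise variances (sigma_p^2, sigma_s^2)
    readings          -- sequence of sensor readings s_t
    """
    history = []
    for s in readings:
        # Prediction
        pred = estimate + d
        pred_var = est_var + var_p
        # Update (after sensor reading)
        K = pred_var / (pred_var + var_s)
        estimate = pred + K * (s - pred)
        est_var = (1 - K) * pred_var
        history.append((round(pred, 2), round(pred_var, 2), round(K, 2),
                        round(estimate, 2), round(est_var, 2)))
    return history

rows = kalman_1d(0.0, 1000.0, 1.0, 1.0, 6.0, [1.34, 0.59, 4.28])
```

With these assumed parameters the run reproduces the K, r̂*_t, and variance columns of the table on slide 16.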


SLIDE 16

Calculating Estimates

Time | r_t | r̂_t  | Var(r̂_t) |  K   | ŝ_t  | r̂*_t | Var(r̂*_t)
  1  |  1  | 1.00 |   1001   | 0.99 | 1.34 | 1.34 |   5.96
  2  |  2  | 2.34 |   6.96   | 0.54 | 0.59 | 1.40 |   3.22
  3  |  3  | 2.40 |   4.22   | 0.41 | 4.28 | 3.18 |   2.48

(The rows are consistent with initial values r̂*_0 = 0 and Var(r̂*_0) = 1000, step d = 1, σ_p² = 1, σ_s² = 6.)


SLIDE 17

Moving Robot (Step Distance = 1)


SLIDE 18

Another Look at the Good Estimator

The distribution of r_t is normal with density

  1/(σ_p √(2π)) · e^{−(r_t − r̂_t)² / (2σ_p²)}

The distribution of ŝ_t is normal with density

  1/(σ_s √(2π)) · e^{−(ŝ_t − r_t)² / (2σ_s²)}

The likelihood function is the product of these two densities. Maximizing this likelihood gives a good estimator. To maximize the likelihood, we minimize the negative of the exponent:

  (r_t − r̂_t)² / (2σ_p²) + (ŝ_t − r_t)² / (2σ_s²)

Again use calculus to discover: r_t = (1 − K) r̂_t + K ŝ_t, where K = σ_p² / (σ_p² + σ_s²).
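The calculus step can be written out: differentiate the sum of the two (negated) exponents with respect to r_t and set the derivative to zero.

```latex
g(r_t) = \frac{(r_t - \hat r_t)^2}{2\sigma_p^2} + \frac{(\hat s_t - r_t)^2}{2\sigma_s^2},
\qquad
g'(r_t) = \frac{r_t - \hat r_t}{\sigma_p^2} - \frac{\hat s_t - r_t}{\sigma_s^2} = 0
\;\Longrightarrow\;
r_t = \frac{\sigma_s^2\,\hat r_t + \sigma_p^2\,\hat s_t}{\sigma_p^2 + \sigma_s^2}
    = (1-K)\,\hat r_t + K\,\hat s_t .
```

So the maximum-likelihood estimate agrees with the minimum-variance linear combination found earlier.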


SLIDE 19

Two-Dimensional Problem

Example

Suppose the state of the robot is both a position and a velocity. Then the robot state is a vector:

  r_t = (p_t, v_t)ᵀ

The process is now:

  r_t = (p_t, v_t)ᵀ = [1 1; 0 1] (p_{t−1}, v_{t−1})ᵀ = F r_{t−1}

The sensor reading now looks like this:

  s_t = [1 0] (p_t, v_t)ᵀ = H r_t
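The matrices F and H above translate directly into code; a minimal sketch (the initial state values are an illustrative assumption):

```python
import numpy as np

# Constant-velocity model: each step, position increases by the velocity.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # process matrix: r_t = F r_{t-1}
H = np.array([[1.0, 0.0]])   # sensor observes the position component only

r0 = np.array([0.0, 2.0])    # illustrative state: position 0, velocity 2
r1 = F @ r0                  # one noise-free step
s1 = H @ r1                  # what an ideal sensor would read
print(r1, s1)  # [2. 2.] [2.]
```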


SLIDE 20

Covariance Matrix

With two parts to the robot’s state, a variance in one may affect the variance in the other. So we define the covariance between two random variables X and Y.

Definition (Covariance)

The covariance between random variables X and Y is Cov(X, Y) = Σ_{x,y} (x − µ_X)(y − µ_Y) P[X = x, Y = y].

Definition (Covariance Matrix)

The covariance matrix for our robot state is

  Var(r_t) = [ Var(p_t)      Cov(p_t, v_t) ;
               Cov(v_t, p_t)  Var(v_t)     ]


SLIDE 21

2D Kalman Filter

The two dimensional Kalman Filter gives the predictions and updates in terms of matrices:

Prediction:

Add the next step to our last estimate: r̂_t = F r̂*_{t−1}
Add in the variance of the next step: Var(r̂_t − r_t) = F Var(r̂*_{t−1} − r_{t−1}) Fᵀ + Q, where Q is the process noise covariance matrix

Update:

Gain: K = Var(r̂_t − r_t) Hᵀ (H Var(r̂_t − r_t) Hᵀ + Var(ŝ_t − r_t))⁻¹
Update the position estimate: r̂*_t = r̂_t + K(ŝ_t − H r̂_t)
Update the variance: Var(r̂*_t − r_t) = (I − K H) Var(r̂_t − r_t)
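One predict/update cycle of the matrix form can be sketched as follows; the demo matrices at the bottom (Q, R, initial state) are illustrative assumptions, not values from the slides:

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of the matrix Kalman filter above.

    x, P -- previous state estimate and its error covariance
    F, Q -- process matrix and process noise covariance
    H, R -- sensor matrix and sensor noise covariance
    z    -- new sensor reading (vector)
    """
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative run with the constant-velocity model.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)        # assumed process noise
R = np.array([[1.0]])       # assumed sensor noise
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = kalman_step(x, P, F, Q, H, R, np.array([1.0]))
```

Here the reading agrees exactly with the prediction, so the estimate is unchanged while the position variance shrinks.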


SLIDE 22

Two-Dimensional Example

Example

Process: r_t = F r_{t−1} where F = [1 1; 0 1]

Process covariance matrix: [ Q/3  Q/2 ; Q/2  Q ]

Sensor: s_t = [1 0] (p_t, v_t)ᵀ = H r_t

Initial state: r_0 = (p_0, v_0)ᵀ

SLIDE 23

2D Graph Results


SLIDE 24

Linear Algebra Interpretation

Inner product: (u, v) = E[(u − m_u)ᵀ(v − m_v)]. (Orthogonal = uncorrelated.)


SLIDE 25

Kalman Filter Summary

Summary of Kalman Filter

Assumptions:

The process model must be a linear model. All errors are normally distributed with mean zero.

Algorithm:

Prediction Step 1: Use the model to find an estimate of the robot state.
Prediction Step 2: Use the model to find the variance of the estimate.
Update Step 1: Calculate the Kalman Gain from the sensor reading.
Update Step 2: Use the Gain to update the estimate of the robot state.
Update Step 3: Use the Gain to update the variance of the estimator.

Result: This linear estimator of the robot state is unbiased and has minimum error variance.


SLIDE 26

What can go wrong ...

A Bad Model

Suppose we use our original model r_t = r_{t−1} + d + Pn_t, but the robot is actually stationary, so the real model is r_t = r_{t−1}. Partial solution: increase the variance of Pn_t.

Not a Linear Model

Suppose our sensor measures distance to the origin in a 2D problem. Then the sensor reading gives us √(x² + y²). Consequently, no matrix H exists giving ŝ_t = H r_t. The Extended Kalman Filter approximates the sensor curve with a straight line (Taylor series).
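The linearization amounts to replacing H with the Jacobian of the sensor function at the current estimate; a minimal sketch:

```python
import numpy as np

# Range sensor h(x, y) = sqrt(x^2 + y^2) is nonlinear, so the EKF uses the
# Jacobian of h at the current estimate as H (first-order Taylor approximation).
def range_jacobian(x, y):
    r = np.hypot(x, y)
    return np.array([[x / r, y / r]])

H_lin = range_jacobian(3.0, 4.0)   # linearize at the point (3, 4)
print(H_lin)  # [[0.6 0.8]]
```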


SLIDE 27

Robot is Stationary


SLIDE 28

Robot is Stationary: Increase Process Variance


SLIDE 29

Other Applications

Applications:

Tracking the stock market (time series).
Computing an asteroid's orbit.
Weather forecasting.
