The Kalman Filter: An Algorithm for Dealing with Uncertainty (Steven Janke, May 2011)


  1. The Kalman Filter: An Algorithm for Dealing with Uncertainty. Steven Janke, May 2011.

  2. Autonomous Robots

  3. The Problem

  4. One-Dimensional Problem. Let $r_t$ be the position of the robot at time $t$. The variable $r_t$ is a random variable.

  5. Distributions and Variances.
      Definition: The distribution of a random variable is the list of values it takes along with the probability of those values. The variance of a random variable is a measure of how spread out the distribution is.
      Example: Suppose the robot moves either 8, 10, or 12 cm in one step with probabilities 0.25, 0.5, and 0.25 respectively. The distribution is centered around 10 cm (the mean). The variance is
      $Var(r_1) = 0.25 \cdot (8-10)^2 + 0.5 \cdot (10-10)^2 + 0.25 \cdot (12-10)^2 = 2\ \mathrm{cm}^2$

  6. Variances for One and Two Steps. The value of $r_1$ is the first step: $Var(r_1) = 0.25 \cdot 2^2 + 0.5 \cdot 0^2 + 0.25 \cdot 2^2 = 2$. The value of $r_2$ is the sum of two independent steps, so the variances add: $Var(r_2) = 0.0625 \cdot 4^2 + 0.25 \cdot 2^2 + 0.375 \cdot 0^2 + 0.25 \cdot 2^2 + 0.0625 \cdot 4^2 = 4$.
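
A quick numeric check of both variance calculations, enumerating the step distribution directly (the step values and probabilities are the ones from the example; the helper function is ours):

```python
from itertools import product

# Step distribution from the slides: 8, 10, or 12 cm per step.
steps = {8: 0.25, 10: 0.5, 12: 0.25}

def variance(dist):
    """Variance of a finite distribution given as {value: probability}."""
    mean = sum(v * p for v, p in dist.items())
    return sum(p * (v - mean) ** 2 for v, p in dist.items())

print(variance(steps))  # Var(r_1) = 2.0

# r_2 is the sum of two independent steps: enumerate all pairs of steps.
two_steps = {}
for (a, pa), (b, pb) in product(steps.items(), steps.items()):
    two_steps[a + b] = two_steps.get(a + b, 0) + pa * pb

print(variance(two_steps))  # Var(r_2) = 4.0, twice the one-step variance
```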

  7. Variance Properties.
      Definition (Variance): Suppose we have a random variable $X$ with mean $\mu$. Then $Var(X) = \sum_x (x - \mu)^2\, P[X = x]$.
      Note that $Var(aX) = a^2\, Var(X)$.
      If $X$ and $Y$ are independent random variables: $Var(X + Y) = Var(X) + Var(Y)$ and $Var(X - Y) = Var(X) + Var(Y)$.
      If $X$ and $Y$ are not independent random variables: $Cov(X, Y) = \sum_{x,y} (x - \mu_X)(y - \mu_Y)\, P[X = x, Y = y]$ and $Var(X + Y) = Var(X) + 2\,Cov(X, Y) + Var(Y)$.
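
The identities above are easy to sanity-check numerically. Here is a minimal sketch using a small hand-made joint distribution (the numbers are arbitrary, not from the talk):

```python
joint = {  # P[X = x, Y = y] for a made-up dependent pair
    (0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4,
}

def expect(f):
    """Expected value of f(X, Y) under the joint distribution."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

mx, my = expect(lambda x, y: x), expect(lambda x, y: y)
var_x = expect(lambda x, y: (x - mx) ** 2)
var_y = expect(lambda x, y: (y - my) ** 2)
cov = expect(lambda x, y: (x - mx) * (y - my))
var_sum = expect(lambda x, y: ((x + y) - (mx + my)) ** 2)

# Var(X + Y) = Var(X) + 2 Cov(X, Y) + Var(Y)
assert abs(var_sum - (var_x + 2 * cov + var_y)) < 1e-12
```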

  8. Normal Density: $f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$
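
Written out as code, the density is a one-liner (a direct transcription of the formula above, with mu and sigma as free parameters):

```python
import math

def normal_pdf(x, mu, sigma):
    """The normal density f(x) with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
```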

  9. Comparing Variances for the Normal Density

  10. Estimators for Robot Position. Suppose that in one step the robot moves on average 10 cm. Noise in the process makes $r_1$ a little different. Nevertheless, our prediction is $\hat{r}_1 = 10$. The robot has an ultrasonic sensor for measuring distance. Again because of noise, the reading is not totally accurate, but we have a second estimate of $r_1$, namely $\hat{s}_1$. Both the process noise and the sensor noise are random variables with mean zero. Their variances may not be equal.

  11. The Model and Estimates.
      The Model:
        Process: $r_t = r_{t-1} + d + Pn_t$ ($d$ is the mean distance of one step)
        Sensor: $s_t = r_t + Sn_t$
        The noise random variables $Pn_t$ and $Sn_t$ are normal with mean 0; their variances are $\sigma_p^2$ and $\sigma_s^2$.
      The Estimates:
        Process: $\hat{r}_t = \hat{r}_{t-1} + d$
        Sensor: $\hat{s}_t = s_t = r_t + Sn_t$
        Error variances: $Var(\hat{r}_1 - r_1) = \sigma_p^2$ and $Var(\hat{s}_t - r_t) = \sigma_s^2$
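
A minimal simulation of this model, useful for testing the filter later. The noise scales and the default step distance are illustrative choices, not values from the talk:

```python
import random

def simulate(n_steps, d=10.0, sigma_p=1.0, sigma_s=2.0, seed=0):
    """Yield (true position r_t, sensor reading s_t) for t = 1..n_steps."""
    rng = random.Random(seed)
    r = 0.0
    for _ in range(n_steps):
        r = r + d + rng.gauss(0, sigma_p)  # process: r_t = r_{t-1} + d + Pn_t
        s = r + rng.gauss(0, sigma_s)      # sensor:  s_t = r_t + Sn_t
        yield r, s
```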

  12. A Better Estimate. $\hat{r}_1$ is an estimate of where the robot is after one step ($r_1$); $\hat{s}_1$ is also an estimate of $r_1$. The estimator errors have variances $\sigma_p^2$ and $\sigma_s^2$ respectively.
      Example (Combine the Estimators): One way to combine the information in both estimators is to take a linear combination: $\hat{r}^*_1 = (1 - K)\hat{r}_1 + K\hat{s}_1$

  13. Finding the Best K. Our new estimator is $\hat{r}^*_1 = (1 - K)\hat{r}_1 + K\hat{s}_1$. A good estimator has small error variance:
      $Var(\hat{r}^*_1 - r_1) = (1 - K)^2\, Var(\hat{r}_1 - r_1) + K^2\, Var(\hat{s}_1 - r_1)$
      Substituting gives $Var(\hat{r}^*_1 - r_1) = (1 - K)^2 \sigma_p^2 + K^2 \sigma_s^2$. To minimize the variance, take the derivative with respect to $K$ and set it to zero:
      $-2(1 - K)\sigma_p^2 + 2K\sigma_s^2 = 0 \;\Rightarrow\; K = \dfrac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}$
      Definition: $K$ is called the Kalman Gain.
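
A brute-force check that this $K$ really is the minimizer, scanning the error variance over a grid of gains (the test variances are arbitrary):

```python
sigma_p2, sigma_s2 = 4.0, 6.0  # arbitrary test values for the two variances

def err_var(K):
    """Error variance of the combined estimator: (1-K)^2 sp^2 + K^2 ss^2."""
    return (1 - K) ** 2 * sigma_p2 + K ** 2 * sigma_s2

best_K = min((k / 1000 for k in range(1001)), key=err_var)
print(best_K, sigma_p2 / (sigma_p2 + sigma_s2))  # both print 0.4
```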

  14. Smaller Variance. With the new $K$, the error variance of $\hat{r}^*_1$ is:
      $Var(\hat{r}^*_1 - r_1) = Var\!\left[\left(1 - \dfrac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}\right)(\hat{r}_1 - r_1) + \dfrac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}(\hat{s}_1 - r_1)\right]$  (1)
      $= \left(\dfrac{\sigma_s^2}{\sigma_p^2 + \sigma_s^2}\right)^2 \sigma_p^2 + \left(\dfrac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}\right)^2 \sigma_s^2$  (2)
      $= \dfrac{\sigma_p^2 \sigma_s^2}{\sigma_p^2 + \sigma_s^2} = (1 - K)\, Var(\hat{r}_1 - r_1)$  (3)
      Note that $Var(\hat{r}^*_1 - r_1)$ is less than both $Var(\hat{r}_1 - r_1)$ and $Var(\hat{s}_1 - r_1)$. Note the new estimator can be rewritten as $\hat{r}^*_1 = \hat{r}_1 + K(\hat{s}_1 - \hat{r}_1)$.

  15. The Kalman Filter. Now we can devise an algorithm:
      Prediction:
        Add the next step to our last estimate: $\hat{r}_t = \hat{r}^*_{t-1} + d$
        Add in the variance of the next step: $Var(\hat{r}_t - r_t) = Var(\hat{r}^*_{t-1} - r_{t-1}) + \sigma_p^2$
      Update (after the sensor reading):
        Minimize: $K = Var(\hat{r}_t - r_t) \,/\, \left(Var(\hat{r}_t - r_t) + \sigma_s^2\right)$
        Update the position estimate: $\hat{r}^*_t = \hat{r}_t + K(\hat{s}_t - \hat{r}_t)$
        Update the variance: $Var(\hat{r}^*_t - r_t) = (1 - K)\, Var(\hat{r}_t - r_t)$
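
The algorithm above translates almost line for line into code. This is a sketch of the one-dimensional filter, with names following the slides ($d$, $\sigma_p^2$, $\sigma_s^2$); the initial estimate and variance are parameters of our choosing:

```python
def kalman_1d(readings, d, var_p, var_s, r0=0.0, var0=1000.0):
    """Run the 1D Kalman filter over a sequence of sensor readings.

    Returns the list of filtered estimates r*_t.
    """
    r_star, var_star = r0, var0
    estimates = []
    for s in readings:
        # Prediction
        r_hat = r_star + d               # r^_t = r*_{t-1} + d
        var_hat = var_star + var_p       # add in the process variance
        # Update (after the sensor reading)
        K = var_hat / (var_hat + var_s)  # Kalman gain
        r_star = r_hat + K * (s - r_hat) # blend prediction and reading
        var_star = (1 - K) * var_hat     # updated error variance
        estimates.append(r_star)
    return estimates
```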

  16. Calculating Estimates.

      Time | $r_t$ | $\hat{r}_t$ | $Var(\hat{r}_t)$ |  $K$  | $\hat{s}_t$ | $\hat{r}^*_t$ | $Var(\hat{r}^*_t)$
      -----|-------|-------------|------------------|-------|-------------|---------------|-------------------
        0  |   0   |     0       |       0          |  0    |     0       |     0         |   1000
        1  |   1   |     1.00    |    1001          |  0.99 |     1.34    |     1.34      |   5.96
        2  |   2   |     2.34    |       6.96       |  0.54 |     0.59    |     1.40      |   3.22
        3  |   3   |     2.40    |       4.22       |  0.41 |     4.28    |     3.18      |   2.48
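
This table is reproduced by kalman_1d from the sketch above with step $d = 1$, $\sigma_p^2 = 1$, $\sigma_s^2 = 6$, and initial variance 1000. These parameter values are our reading of the numbers, since the slide does not state them explicitly:

```python
readings = [1.34, 0.59, 4.28]  # the sensor column from the table
print(kalman_1d(readings, d=1.0, var_p=1.0, var_s=6.0))
# -> approximately [1.34, 1.40, 3.18], matching the r*_t column
```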

  17. Moving Robot (Step Distance = 1)

  18. Another Look at the Good Estimator. The distribution of $r_t$ is normal with density $\dfrac{1}{\sigma_p\sqrt{2\pi}}\, e^{-(r_t - \hat{r}_t)^2 / 2\sigma_p^2}$. The distribution of $\hat{s}_t$ is normal with density $\dfrac{1}{\sigma_s\sqrt{2\pi}}\, e^{-(\hat{s}_t - r_t)^2 / 2\sigma_s^2}$. The likelihood function is the product of these two densities, and maximizing this likelihood gives a good estimator. To maximize the likelihood, we minimize the negative of the exponent:
      $\dfrac{(r_t - \hat{r}_t)^2}{2\sigma_p^2} + \dfrac{(\hat{s}_t - r_t)^2}{2\sigma_s^2}$
      Again use calculus to discover: $r_t = (1 - K)\hat{r}_t + K\hat{s}_t$ where $K = \dfrac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}$
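
The calculus step the slide alludes to, written out: differentiate the exponent with respect to $r_t$ and set it to zero.

```latex
\[
\frac{d}{dr_t}\left[\frac{(r_t - \hat r_t)^2}{2\sigma_p^2}
  + \frac{(\hat s_t - r_t)^2}{2\sigma_s^2}\right]
  = \frac{r_t - \hat r_t}{\sigma_p^2} - \frac{\hat s_t - r_t}{\sigma_s^2} = 0
\]
\[
\Rightarrow\quad r_t = \frac{\sigma_s^2\,\hat r_t + \sigma_p^2\,\hat s_t}{\sigma_p^2 + \sigma_s^2}
  = (1 - K)\,\hat r_t + K\,\hat s_t,
\qquad K = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_s^2}
\]
```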

  19. Two-Dimensional Problem.
      Example: Suppose the state of the robot is both a position and a velocity. Then the robot state is a vector: $r_t = \begin{pmatrix} p_t \\ v_t \end{pmatrix}$.
      The process is now: $r_t = \begin{pmatrix} p_t \\ v_t \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p_{t-1} \\ v_{t-1} \end{pmatrix} = F r_{t-1}$
      The sensor reading now looks like this: $s_t = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} p_t \\ v_t \end{pmatrix} = H r_t$
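
The matrices $F$ and $H$ from this slide, written with numpy; one process step is just a matrix-vector product (the example state is our choice):

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # position += velocity; velocity unchanged
H = np.array([[1.0, 0.0]])  # the sensor observes position only

r = np.array([0.0, 1.0])    # example state: position 0, velocity 1
print(F @ r)                # next state: [1. 1.]
print(H @ r)                # what the sensor would read: [0.]
```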

  20. Covariance Matrix. With two parts to the robot's state, a variance in one may affect the variance in the other. So we define the covariance between two random variables $X$ and $Y$.
      Definition (Covariance): The covariance between random variables $X$ and $Y$ is $Cov(X, Y) = \sum_{x,y} (x - \mu_X)(y - \mu_Y)\, P[X = x, Y = y]$.
      Definition (Covariance Matrix): The covariance matrix for our robot state is $Var(r_t) = \begin{pmatrix} Var(p_t) & Cov(p_t, v_t) \\ Cov(v_t, p_t) & Var(v_t) \end{pmatrix}$.

  21. 2D Kalman Filter. The two-dimensional Kalman Filter gives the predictions and updates in terms of matrices:
      Prediction:
        Add the next step to our last estimate: $\hat{r}_t = F \hat{r}^*_{t-1}$
        Add in the variance of the next step: $Var(\hat{r}_t - r_t) = F\, Var(\hat{r}^*_{t-1} - r_{t-1})\, F^T + Var(Pn_t)$
      Update:
        Gain: $K = Var(\hat{r}_t - r_t)\, H^T \left( H\, Var(\hat{r}_t - r_t)\, H^T + Var(\hat{s}_t - r_t) \right)^{-1}$
        Update the position estimate: $\hat{r}^*_t = \hat{r}_t + K(\hat{s}_t - H\hat{r}_t)$
        Update the variance: $Var(\hat{r}^*_t - r_t) = (I - KH)\, Var(\hat{r}_t - r_t)$
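
A sketch of one predict/update cycle in matrix form. Here Q and R stand for the process and sensor noise covariances ($Var(Pn_t)$ and $Var(\hat{s}_t - r_t)$ in the talk's notation):

```python
import numpy as np

def kalman_step(r_star, P_star, s, F, H, Q, R):
    """One predict/update cycle; returns the new estimate and its covariance."""
    # Prediction
    r_hat = F @ r_star                  # r^_t = F r*_{t-1}
    P_hat = F @ P_star @ F.T + Q        # propagate the covariance
    # Update
    S = H @ P_hat @ H.T + R             # covariance of the innovation s - H r^
    K = P_hat @ H.T @ np.linalg.inv(S)  # Kalman gain
    r_new = r_hat + K @ (s - H @ r_hat)            # correct with the reading
    P_new = (np.eye(len(r_star)) - K @ H) @ P_hat  # updated covariance
    return r_new, P_new
```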

  22. Two-Dimensional Example.
      Example:
        Process: $r_t = F r_{t-1}$ where $F = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$
        Process covariance matrix: $\begin{pmatrix} Q/3 & Q/2 \\ Q/2 & Q \end{pmatrix}$
        Sensor: $s_t = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} p_t \\ v_t \end{pmatrix} = H r_t$
        Initial state: $r_0 = \begin{pmatrix} p_0 \\ v_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
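
Running the example with kalman_step from above. $F$, $H$, the process covariance (with $Q = 1$ chosen here), and the initial state come from the slide; the sensor variance, initial covariance, and the readings are illustrative stand-ins:

```python
import numpy as np

Q = 1.0
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Qm = np.array([[Q / 3, Q / 2], [Q / 2, Q]])  # process covariance from the slide
H = np.array([[1.0, 0.0]])
R = np.array([[4.0]])                        # sensor variance (our choice)

r_star = np.array([0.0, 0.0])   # initial state from the slide
P_star = np.eye(2) * 1000.0     # large initial uncertainty (our choice)

for s in ([0.8], [2.1], [2.9]):  # made-up sensor readings
    r_star, P_star = kalman_step(r_star, P_star, np.array(s), F, H, Qm, R)
    print(r_star)  # estimated [position, velocity] after each reading
```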

  23. 2D Graph Results

  24. Linear Algebra Interpretation. Inner product: $\langle u, v \rangle = E[(u - m_u)^T (v - m_v)]$. (Orthogonal = uncorrelated.)

  25. Kalman Filter Summary.
      Assumptions:
        The process model must be a linear model.
        All errors are normally distributed with mean zero.
      Algorithm:
        Prediction step 1: Use the model to find an estimate of the robot state.
        Prediction step 2: Use the model to find the variance of the estimate.
        Update step 1: Calculate the Kalman Gain from the sensor reading.
        Update step 2: Use the Gain to update the estimate of the robot state.
        Update step 3: Use the Gain to update the variance of the estimator.
      Result: This linear estimator of the robot state is unbiased and has minimum error variance.
