

  1. Kalman Filter 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University

  2. The examples up to now have used discrete (binary) random variables. Kalman 'filtering' can be seen as a special case of temporal inference with continuous random variables. [Figure: a hidden Markov model with states $x_0, x_1, x_2, x_3, x_4$ and evidence $e_1, e_2, e_3, e_4$.] Everything is continuous: $P(x_t \mid x_{t-1})$, $P(x_0)$, and $P(e \mid x)$ are no longer tables but functions (probability density functions).

  3. Making the connection to the 'filtering' equations.
     (Discrete) filtering, where every factor is a table:
     $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \sum_{x_t} P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})$
     Kalman filtering, where every factor is a Gaussian:
     $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$
     (observation model, motion model, belief; an integral replaces the sum because the PDFs are continuous)
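
For concreteness, a minimal MATLAB/Octave sketch (not from the slides) of the discrete 'tables' update, using a made-up 2-state example; T, O, b, and all their numbers are hypothetical:

    % One step of discrete filtering: new belief proportional to
    % P(e | X_{t+1}) * sum over x_t of P(X_{t+1} | x_t) * P(x_t | e_{1:t}).
    T = [0.7 0.3; 0.2 0.8];       % transition table, T(i,j) = P(X_{t+1}=j | X_t=i)
    O = [0.9; 0.1];               % observation likelihoods P(e_{t+1} | X_{t+1})
    b = [0.5; 0.5];               % current belief P(X_t | e_{1:t})
    b_pred = T' * b;              % prediction step: sum over x_t
    b_new  = O .* b_pred;         % weight by the observation likelihood
    b_new  = b_new / sum(b_new);  % normalize (the proportionality in the equation)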

  4. Simple, 1D example… [Figure: a state on the $x$-axis.]

  5. System (motion) model: $x_t = x_{t-1} + s + r_t$, where $s$ is the known velocity and $r_t \sim \mathcal{N}(0, \sigma_r)$ is noise ('sampled from' a Gaussian). [Figure: positions $x_1$ and $x_2$, separated by the velocity $s$ plus the noise $r_2$.]

  6. System (motion) model: $x_t = x_{t-1} + s + r_t$, $r_t \sim \mathcal{N}(0, \sigma_r)$ (known velocity $s$, noise $r_t$). How do you represent the motion model $P(x_t \mid x_{t-1})$?

  7. How do you represent the motion model $P(x_t \mid x_{t-1})$? With a linear Gaussian (continuous) transition model: $P(x_t \mid x_{t-1}) = \mathcal{N}(x_t;\, x_{t-1} + s,\, \sigma_r)$, with mean $x_{t-1} + s$ and standard deviation $\sigma_r$. How can you visualize this distribution?

  8. [Figure: the transition density, a Gaussian bump centered at $x_1 + s$.] A linear Gaussian (continuous) transition model: $P(x_t \mid x_{t-1}) = \mathcal{N}(x_t;\, x_{t-1} + s,\, \sigma_r)$. Why don't we just use a table as before?
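
To visualize the transition density, here is a minimal MATLAB/Octave sketch (not from the slides) that plots $\mathcal{N}(x_t;\, x_{t-1} + s,\, \sigma_r)$ for made-up values $x_{t-1} = 0$, $s = 1$, $\sigma_r = 0.5$:

    % Plot P(x_t | x_{t-1}) = N(x_t; x_{t-1} + s, sigma_r) for assumed values.
    x_prev = 0; s = 1; sigma_r = 0.5;     % hypothetical numbers
    x = linspace(-2, 4, 400);             % grid to evaluate the density on
    pdf = exp(-(x - (x_prev + s)).^2 / (2*sigma_r^2)) / (sigma_r*sqrt(2*pi));
    plot(x, pdf); xlabel('x_t'); ylabel('P(x_t | x_{t-1})');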

  9. Observation (measurement) model (GPS): $z_t = x_t + q_t$, where $x_t$ is the true position, $z_t$ is the GPS measurement, and $q_t \sim \mathcal{N}(0, \sigma_q)$ is sensor error sampled from a Gaussian. [Figure: true position $x_1$, GPS measurement $z_1$, sensor error $q_1$.]

  10. Observation (measurement) model (GPS): $z_t = x_t + q_t$, $q_t \sim \mathcal{N}(0, \sigma_q)$. How do you represent the observation model $P(e \mid x)$, where $e$ represents $z$?

  11. How do you represent the observation (measurement) model? Also with a linear Gaussian model: $P(z_t \mid x_t) = \mathcal{N}(z_t;\, x_t,\, \sigma_q)$.

  12. [Figure: the measurement density, a Gaussian bump centered at the true position $x_1$.] Also a linear Gaussian model: $P(z_t \mid x_t) = \mathcal{N}(z_t;\, x_t,\, \sigma_q)$.

  13. Prior (initial) state: true position $x_0$, initial estimate $\hat{x}_0$, initial estimate uncertainty $\sigma_0$.

  14. Given the true position $x_0$ and the initial estimate $\hat{x}_0$: how do you represent the prior state probability?

  15. How do you represent the prior state probability? Also with a linear Gaussian model! $P(\hat{x}_0) = \mathcal{N}(\hat{x}_0;\, x_0,\, \sigma_0)$.

  16. [Figure: the prior, a Gaussian bump centered at the true position $x_0$.] Also a linear Gaussian model: $P(\hat{x}_0) = \mathcal{N}(\hat{x}_0;\, x_0,\, \sigma_0)$.

  17. Inference: so how do you do temporal filtering with the KF?

  18. Recall: the first step of filtering was the 'prediction step':
      $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$
      The integral (motion model times belief) is the prediction step. Compute it! The result is just another Gaussian, so we need to compute the 'prediction' mean and variance…

  19. Prediction (using the motion model). How would you predict $\hat{x}_1$ given $\hat{x}_0$? (The 'cap' notation denotes an estimate.)
      $\hat{x}_1 = \hat{x}_0 + s$ (this is the mean)
      $\sigma_1^2 = \sigma_0^2 + \sigma_r^2$ (this is the variance)
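
As a quick check with made-up numbers: if $\hat{x}_0 = 0$, $s = 1$, $\sigma_0^2 = 1$, and $\sigma_r^2 = 0.25$, the prediction is $\hat{x}_1 = 0 + 1 = 1$ with variance $\sigma_1^2 = 1 + 0.25 = 1.25$. The estimate moves forward by the known velocity and becomes less certain, since the motion noise adds variance.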

  20. Prediction step:
      $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$ (motion model times belief)
      The second step after prediction is …

  21. … the update step!
      $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$ (observation times prediction)
      Compute this (using the results of the prediction step).

  22. In the update step, the sensor measurement corrects the system prediction: initial estimate $\hat{x}_0$; system prediction $\hat{x}_1$ with uncertainty $\sigma_1^2$; sensor estimate $z_1$ with uncertainty $\sigma_q^2$. Which estimate is correct? Is there a way to know? Is there a way to merge this information?

  23. Intuitively, the smaller variance means less uncertainty (system prediction: variance $\sigma_1^2$; sensor estimate: variance $\sigma_q^2$). So we want a weighted state estimate correction, something like this:
      $\hat{x}_1^+ = \dfrac{\sigma_q^2}{\sigma_1^2 + \sigma_q^2}\,\hat{x}_1 + \dfrac{\sigma_1^2}{\sigma_1^2 + \sigma_q^2}\,z_1$
      This happens naturally in the Bayesian filtering (with Gaussians) framework!

  24. Recall the filtering equation (observation times one-step motion prediction):
      $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$
      Both factors are Gaussian. What is the product of two Gaussians?

  25. Recall: when we multiply the prediction (Gaussian) with the observation model (Gaussian), we get … a product of two Gaussians, which is again Gaussian, with
      $\mu = \dfrac{\mu_1 \sigma_2^2 + \mu_2 \sigma_1^2}{\sigma_1^2 + \sigma_2^2}, \qquad \sigma^2 = \dfrac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}$
      Applied to the filtering equation…
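
A minimal MATLAB/Octave sketch (not from the slides) that checks the product-of-Gaussians formula numerically, with made-up values for the two means and standard deviations:

    % Verify numerically that N(mu1, s1) * N(mu2, s2), once renormalized,
    % equals N(mu, s) with the combined mean and variance above.
    mu1 = 1.0; s1 = 0.8; mu2 = 2.0; s2 = 0.5;     % hypothetical numbers
    x = linspace(-3, 6, 2000);
    g = @(x, m, s) exp(-(x - m).^2 ./ (2*s^2)) / (s*sqrt(2*pi));
    prod_pdf = g(x, mu1, s1) .* g(x, mu2, s2);
    prod_pdf = prod_pdf / trapz(x, prod_pdf);     % renormalize the product
    mu = (mu1*s2^2 + mu2*s1^2) / (s1^2 + s2^2);   % combined mean
    s  = sqrt(s1^2 * s2^2 / (s1^2 + s2^2));       % combined standard deviation
    max(abs(prod_pdf - g(x, mu, s)))              % ~0, up to grid error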

  26. $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$
      Observation: mean $z_1$, variance $\sigma_q^2$. Prediction: mean $\hat{x}_1$, variance $\sigma_1^2$.
      New mean: $\hat{x}_1^+ = \dfrac{\hat{x}_1 \sigma_q^2 + z_1 \sigma_1^2}{\sigma_q^2 + \sigma_1^2}$. New variance: $\sigma_1^{2+} = \dfrac{\sigma_q^2 \sigma_1^2}{\sigma_q^2 + \sigma_1^2}$.
      (The 'plus' sign means the post-'update' estimate.)

  27. System prediction: variance $\sigma_1^2$. Sensor estimate: variance $\sigma_q^2$. With a little algebra…
      $\hat{x}_1^+ = \dfrac{\hat{x}_1 \sigma_q^2 + z_1 \sigma_1^2}{\sigma_q^2 + \sigma_1^2} = \dfrac{\sigma_q^2}{\sigma_q^2 + \sigma_1^2}\,\hat{x}_1 + \dfrac{\sigma_1^2}{\sigma_q^2 + \sigma_1^2}\,z_1$
      We get a weighted state estimate correction!

  28. Kalman gain notation. With a little algebra…
      $\hat{x}_1^+ = \hat{x}_1 + \dfrac{\sigma_1^2}{\sigma_q^2 + \sigma_1^2}\,(z_1 - \hat{x}_1) = \hat{x}_1 + K\,(z_1 - \hat{x}_1)$
      where $K = \dfrac{\sigma_1^2}{\sigma_1^2 + \sigma_q^2}$ is the 'Kalman gain' and $(z_1 - \hat{x}_1)$ is the 'innovation'. With a little more algebra…
      $\sigma_1^{2+} = \dfrac{\sigma_1^2\, \sigma_q^2}{\sigma_1^2 + \sigma_q^2} = \left(1 - \dfrac{\sigma_1^2}{\sigma_1^2 + \sigma_q^2}\right)\sigma_1^2 = (1 - K)\,\sigma_1^2$
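
A sanity check on the gain (not on the slides, but it follows directly from the formula): as $\sigma_q^2 \to 0$ (a perfect sensor), $K \to 1$ and $\hat{x}_1^+ \to z_1$, so the filter trusts the measurement; as $\sigma_q^2 \to \infty$ (a useless sensor), $K \to 0$ and $\hat{x}_1^+ \to \hat{x}_1$, so the filter keeps its prediction.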

  29. Summary (1D Kalman filtering). To solve
      $P(X_{t+1} \mid e_{1:t+1}) \propto P(e_{t+1} \mid X_{t+1}) \int P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})\, dx_t$
      compute
      $\hat{x}_1^+ = \hat{x}_1 + K\,(z_1 - \hat{x}_1)$ (mean of the new Gaussian)
      $\sigma_1^{2+} = (1 - K)\,\sigma_1^2$ (variance of the new Gaussian)
      with the 'Kalman gain' $K = \dfrac{\sigma_1^2}{\sigma_1^2 + \sigma_q^2}$.

  30. Simple 1D implementation:
      [x, p] = KF(x, v, z)
          x = x + s;            % predict the mean (s = known velocity)
          v = v + r;            % predict the variance (r = motion noise variance, sigma_r^2)
          K = v / (v + q);      % Kalman gain (q = measurement noise variance, sigma_q^2)
          x = x + K * (z - x);  % update the mean with the innovation
          p = v - K * v;        % update the variance: (1 - K) * v
      Just 5 lines of code!
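
A minimal usage sketch (not from the slides): simulate a point moving at the known velocity, feed it noisy measurements, and run the five lines above at each step. All the constants are made up for illustration.

    % Hypothetical 1D tracking run using the 5-line filter above.
    s = 1.0; r = 0.25; q = 1.0;              % assumed velocity and noise variances
    x_true = 0; x = 0; v = 1;                % true state, estimate, estimate variance
    for t = 1:20
      x_true = x_true + s + sqrt(r)*randn;   % simulate the motion model
      z = x_true + sqrt(q)*randn;            % simulate the GPS measurement
      x = x + s;  v = v + r;                 % predict
      K = v / (v + q);                       % Kalman gain
      x = x + K * (z - x);  v = v - K * v;   % update
      fprintf('t=%2d  true=%6.2f  est=%6.2f\n', t, x_true, x);
    end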

  31. or just 2 lines:
      [x, p] = KF(x, v, z)
          x = (x + s) + (v + r)/((v + r) + q) * (z - (x + s));
          p = (v + r) - (v + r)/((v + r) + q) * (v + r);

  32. The bare computations (algorithm) of Bayesian filtering: KalmanFilter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$)
      Prediction:
      $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$ (prediction mean, from the 'old' mean and the motion control)
      $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^\top + R_t$ (prediction covariance, from the 'old' covariance and the Gaussian noise covariance $R_t$)
      Update:
      $K_t = \bar{\Sigma}_t C_t^\top \left( C_t \bar{\Sigma}_t C_t^\top + Q_t \right)^{-1}$ (gain, with observation model $C_t$)
      $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$ (updated mean)
      $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$ (updated covariance)

  33. Simple multi-dimensional implementation (also 5 lines of code!):
      [x, P] = KF(x, P, z)
          x = A*x;                          % predict the mean
          P = A*P*A' + R;                   % predict the covariance (R = motion noise)
          K = P*C' / (C*P*C' + Q);          % Kalman gain (Q = measurement noise)
          x = x + K*(z - C*x);              % update the mean
          P = (eye(size(K,1)) - K*C) * P;   % update the covariance
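
As a quick consistency check (not on the slides): with 1-by-1 'matrices' A = C = 1, no control input, and scalar variances R and Q, the five lines above collapse to the 1D filter of slide 30 (with s = 0). A sketch with made-up numbers:

    % Scalar special case: the multi-dimensional filter reduces to the 1D one.
    A = 1; C = 1; R = 0.25; Q = 1.0;   % assumed scalar noise variances
    x = 0; P = 1; z = 0.8;             % estimate, variance, a made-up measurement
    x = A*x;  P = A*P*A' + R;          % same as x = x + s (s = 0) and v = v + r
    K = P*C' / (C*P*C' + Q);           % same as K = v / (v + q)
    x = x + K*(z - C*x);               % same as x = x + K*(z - x)
    P = (eye(size(K,1)) - K*C) * P;    % same as p = v - K*v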

  34. 2D Example

  35. State $\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix}$, measurement $\mathbf{z} = \begin{bmatrix} x \\ y \end{bmatrix}$. Constant-position motion model: $\mathbf{x}_t = A\,\mathbf{x}_{t-1} + B\,\mathbf{u}_t + \epsilon_t$.

  36. State $\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix}$, measurement $\mathbf{z} = \begin{bmatrix} x \\ y \end{bmatrix}$. Constant-position motion model: $\mathbf{x}_t = A\,\mathbf{x}_{t-1} + B\,\mathbf{u}_t + \epsilon_t$, with system noise $\epsilon_t \sim \mathcal{N}(\mathbf{0}, R)$. For constant position:
      $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B\mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad R = \begin{bmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_r^2 \end{bmatrix}$

  37. State $\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix}$, measurement $\mathbf{z} = \begin{bmatrix} x \\ y \end{bmatrix}$. Measurement model: $\mathbf{z}_t = C_t\,\mathbf{x}_t + \delta_t$.

  38. Measurement model: $\mathbf{z}_t = C_t\,\mathbf{x}_t + \delta_t$, with zero-mean measurement noise $\delta_t \sim \mathcal{N}(\mathbf{0}, Q)$:
      $C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad Q = \begin{bmatrix} \sigma_q^2 & 0 \\ 0 & \sigma_q^2 \end{bmatrix}$
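
A minimal MATLAB/Octave sketch (not from the slides) assembling the model matrices above, with made-up noise levels sigma_r and sigma_q:

    % Constant-position model matrices for the 2D tracking example.
    sigma_r = 0.1; sigma_q = 0.5;   % assumed noise standard deviations
    A = eye(2);                     % constant position: x_t = x_{t-1}
    C = eye(2);                     % the position is measured directly
    R = sigma_r^2 * eye(2);         % motion (system) noise covariance
    Q = sigma_q^2 * eye(2);         % measurement noise covariance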

  39. Algorithm for the 2D object tracking example, with motion model $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and observation model $C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. General case → constant-position model:
      $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$ → $\bar{\mathbf{x}}_t = \mathbf{x}_{t-1}$
      $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^\top + R$ → $\bar{\Sigma}_t = \Sigma_{t-1} + R$
      $K_t = \bar{\Sigma}_t C_t^\top \left( C_t \bar{\Sigma}_t C_t^\top + Q_t \right)^{-1}$ → $K_t = \bar{\Sigma}_t \left( \bar{\Sigma}_t + Q \right)^{-1}$
      $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$ → $\mathbf{x}_t = \bar{\mathbf{x}}_t + K_t (\mathbf{z}_t - \bar{\mathbf{x}}_t)$
      $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$ → $\Sigma_t = (I - K_t)\, \bar{\Sigma}_t$

  40. Just 4 lines of code:
      [x, P] = KF_constPos(x, P, z)
          P = P + R;                      % predict the covariance (A = I)
          K = P / (P + Q);                % Kalman gain
          x = x + K * (z - x);            % update the mean
          P = (eye(size(K,1)) - K) * P;   % update the covariance
      Where did the 5th line go?
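
(The 5th line, the mean prediction x = A*x, disappears because A = I makes it a no-op.) A minimal tracking sketch (not from the slides) that runs the constant-position updates on made-up noisy 2D measurements of a stationary object:

    % Hypothetical 2D run of the constant-position filter above.
    R = 0.01 * eye(2);  Q = 0.25 * eye(2);   % assumed noise covariances
    x_true = [2; 3];                         % stationary true position
    x = [0; 0];  P = eye(2);                 % initial estimate and covariance
    for t = 1:20
      z = x_true + 0.5 * randn(2, 1);        % simulate a noisy measurement
      P = P + R;                             % predict the covariance
      K = P / (P + Q);                       % Kalman gain
      x = x + K * (z - x);                   % update the mean
      P = (eye(size(K,1)) - K) * P;          % update the covariance
    end
    disp(x')                                 % close to the true position [2 3]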

  41. General case → constant-position model:
      $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$ → $\bar{\mathbf{x}}_t = \mathbf{x}_{t-1}$
      $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^\top + R$ → $\bar{\Sigma}_t = \Sigma_{t-1} + R$
      $K_t = \bar{\Sigma}_t C_t^\top \left( C_t \bar{\Sigma}_t C_t^\top + Q_t \right)^{-1}$ → $K_t = \bar{\Sigma}_t \left( \bar{\Sigma}_t + Q \right)^{-1}$
      $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$ → $\mathbf{x}_t = \bar{\mathbf{x}}_t + K_t (\mathbf{z}_t - \bar{\mathbf{x}}_t)$
      $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$ → $\Sigma_t = (I - K_t)\, \bar{\Sigma}_t$
