


  1. 16-311-Q INTRODUCTION TO ROBOTICS, FALL '17. LECTURE 19: KALMAN FILTER 2. INSTRUCTOR: GIANNI A. DI CARO

  2. KF FOR FILTERING MEASUREMENTS FROM ONE SENSOR

Scenario: The robot does not move; a stream of measures $z_k$ about some quantity $\xi$ of interest is obtained from one of its sensors. The goal is to filter the data stream in order to produce, at each time step $k$, the best estimate of the quantity $\xi$ (Filtering problem).

The state-observation equations: the state vector $\xi$ corresponds to the measured quantity of interest. It does not change over time (no system motion), apart from small deviations (e.g., because of temperature or the robot's vibrations). No controls are issued. For simplicity, we assume that the observation model directly maps the measures into the state vector (i.e., $C$ is the identity matrix and can therefore be removed from the equations). Observations are corrupted by white Gaussian noise. The resulting system equations are:

$$\xi_{k+1} = \xi_k + \nu_k, \qquad z_{k+1} = \xi_{k+1} + w_{k+1}$$

The KF equations:

At every time step $k$ (prediction):
$$\hat{\xi}_{k+1|k} = \hat{\xi}_k, \qquad P_{k+1|k} = P_k + V_k$$

At every time step $k+1$, when an observation $z_{k+1}$ is available (correction):
$$G_{k+1} = P_{k+1|k}\,(P_{k+1|k} + W_{k+1})^{-1}$$
$$\hat{\xi}_{k+1} = \hat{\xi}_{k+1|k} + G_{k+1}\,(z_{k+1} - \hat{\xi}_{k+1|k}), \qquad P_{k+1} = P_{k+1|k} - G_{k+1}\,P_{k+1|k}$$
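A minimal Python sketch of this filtering loop (not from the slides; the noise variances `V` and `W`, the initial guesses, and the simulated sensor are illustrative assumptions):

```python
import numpy as np

# Process and measurement noise variances (illustrative values).
V = 1e-5   # small: the measured quantity is essentially constant
W = 1e-1   # sensor noise variance

# Initial estimate and its variance (a priori guesses).
xi_hat, P = 3.0, 1.0

rng = np.random.default_rng(0)
true_xi = 2.5                                   # unknown true quantity
for k in range(50):
    z = true_xi + rng.normal(0.0, np.sqrt(W))   # noisy measurement

    # Prediction: the state model is xi_{k+1} = xi_k + nu_k.
    xi_pred = xi_hat
    P_pred = P + V

    # Correction with the new measurement z_{k+1}.
    G = P_pred / (P_pred + W)                   # Kalman gain
    xi_hat = xi_pred + G * (z - xi_pred)
    P = P_pred - G * P_pred

print(xi_hat, P)  # estimate converges toward true_xi, P shrinks
```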

  3. RANGE FINDER EXAMPLE

The robot is not moving and the environment is assumed to be stationary, so the true distance (or the range and bearing) does not change during the measurement process, apart from maybe environment vibrations $\nu_k$ (small white noise). The measures from the sensor are affected by Gaussian white noise $w_k$.

Proximity sensor: in the case of a simple proximity sensor, the sensor only reports the measured distance $d$ from the closest obstacle, and the state variable has only one component.

Range finder: if a more powerful sensor is available (e.g., a camera), the sensor can measure both the range $\rho$ and the bearing angle $\beta$ of the closest obstacle with respect to the robot. The state variable has two components, which are the same variables measured by the sensor. The observation vector is therefore the following, where the $C$ matrix is the identity matrix $I$:

$$z_{k+1} = \begin{bmatrix} \rho_{k+1} & \beta_{k+1} \end{bmatrix}^T = \xi_{k+1} + w_{k+1}$$
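For the two-component range-bearing state the same equations run in matrix form; a brief numpy sketch (all numeric values here are illustrative assumptions, not from the slides):

```python
import numpy as np

# Range-bearing state [rho, beta]; C is the identity matrix, so it is omitted.
V = np.diag([1e-5, 1e-6])       # tiny process noise (environment vibrations)
W = np.diag([1e-2, 1e-4])       # sensor noise on range (m^2) and bearing (rad^2)

xi_hat = np.array([3.0, 0.10])  # initial range-bearing estimate (assumed)
P = np.diag([1.0, 0.05])        # initial covariance (assumed)

z = np.array([2.8, 0.12])       # one incoming range-bearing measurement

xi_pred, P_pred = xi_hat, P + V           # prediction step
G = P_pred @ np.linalg.inv(P_pred + W)    # Kalman gain
xi_hat = xi_pred + G @ (z - xi_pred)      # state update
P = P_pred - G @ P_pred                   # covariance update
print(xi_hat, P)
```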

  4. VALUES OF THE MODEL MATRICES FOR A DISTANCE SENSOR

The state transition matrix $A$: the transformation factor that obtains the new state from the last state, $\xi_{k+1} = A\xi_k$; but since the system has no change dynamics, $A = [1]$.

The control matrix $B$: defines how the control inputs affect state changes, $\xi_{k+1} = A\xi_k + Bu_k$; however, in this case no control actions are executed, therefore $B = [0]$.

The observation matrix $C$: multiplies a state vector to translate it into a measurement vector, $z_k = C\xi_k$; but since in this case the measurement $d$ is obtained directly, $C = [1]$.

The process covariance matrix $V$: defines how spread the state uncertainty is, $\xi_{k+1} = A\xi_k + Bu_k + \nu_k$. To set it to a reasonable value, some assumptions/knowledge about the stability of the distance being measured are needed. In this case, since nothing is moving, it is safe to use a small value, such as $V = [1 \cdot 10^{-5}]$.

The measurement covariance matrix $W$: related to how reliable and stable the sensor is in making distance measures, $z_k = C\xi_k + w_k$. We can be conservative, using $W = [1 \cdot 10^{-1}]$.

$\hat{\xi}_0$ is the initial prediction of the distance: this can be based on any a priori knowledge. Let's set it to $\hat{\xi}_0 = [3\,\mathrm{m}]$.

$P_0$ is the initial prediction of the covariance of the distance estimate: again, some a priori knowledge would be necessary to set it to a good value. Let's start with a value that corresponds to about 30% of the initial prediction: $P_0 = [1\,\mathrm{m}]$.
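These choices translate directly into the filter's parameters. A sketch of the setup in Python, using the values from the slide (since every matrix is 1x1, plain floats suffice):

```python
# Model matrices for the scalar distance sensor (all 1x1, so plain floats).
A = 1.0         # no dynamics: the distance does not change
B = 0.0         # no control inputs are issued
C = 1.0         # the sensor measures the distance d directly
V = 1e-5        # process covariance: almost-static quantity
W = 1e-1        # measurement covariance: conservative sensor noise

xi_hat_0 = 3.0  # initial distance prediction [m]
P_0 = 1.0       # initial covariance, about 30% of the initial prediction
```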

  5. FILTER'S EQUATIONS FOR SCALAR DISTANCE MEASUREMENT

(On the original slide: blue = inputs, red = outputs, black = constant parameters, gray = working variables.)

State prediction (predict where the system state will be):
$$\hat{d}_{k+1|k} = \hat{d}_k$$

Covariance prediction (predict the amount of error in the state prediction):
$$P_{k+1|k} = P_k + 1 \cdot 10^{-5}$$

Innovation (compare reality against the prediction; $d_{k+1}$ is the newly measured distance):
$$\nu_{k+1} = d_{k+1} - \hat{d}_{k+1|k}$$

Innovation covariance (compare the real error against the prediction):
$$\Sigma_{\nu_{k+1}} = P_{k+1|k} + 1 \cdot 10^{-1}$$

Kalman gain (rescale/weight the prediction):
$$G_{k+1} = P_{k+1|k}\,\Sigma_{\nu_{k+1}}^{-1}$$

State update (new estimate of the system state):
$$\hat{d}_{k+1} = \hat{d}_{k+1|k} + G_{k+1}\,\nu_{k+1}$$

Covariance update (new estimate of the error):
$$P_{k+1} = (1 - G_{k+1})\,P_{k+1|k}$$
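Putting slide 4's parameter values and these seven steps together gives a runnable sketch (the true distance of 2.9 m and the simulated sensor are assumptions made here for illustration):

```python
import numpy as np

V, W = 1e-5, 1e-1          # process and measurement covariances (slide 4)
d_hat, P = 3.0, 1.0        # initial prediction and covariance (slide 4)

rng = np.random.default_rng(1)
true_d = 2.9               # assumed true distance to the obstacle [m]

for k in range(100):
    z = true_d + rng.normal(0.0, np.sqrt(W))  # simulated sensor reading

    d_pred = d_hat                  # state prediction
    P_pred = P + V                  # covariance prediction
    nu = z - d_pred                 # innovation
    S = P_pred + W                  # innovation covariance
    G = P_pred / S                  # Kalman gain
    d_hat = d_pred + G * nu         # state update
    P = (1.0 - G) * P_pred          # covariance update

print(f"estimate {d_hat:.3f} m, variance {P:.5f}")
```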

  6. PERFORMANCE OF KF ON SCALAR DISTANCE MEASURES. [Figure: different random realizations of the same process and filter.]

  7. PERFORMANCE OF KF ON SCALAR DISTANCE MEASURES. [Figure: different parameters for the process and the filter; panel values 0.10, 0.02, 0.03, 0.02.]

  8. PERFORMANCE OF KF ON SCALAR DISTANCE MEASURES. [Figure: different parameters for the process and the filter; panel values 0.02, 0.98, 0.15.]

  9. EQUATIONS' ANALYSIS FOR THE SCALAR FILTERING PROBLEM

Scenario: the state, the measures, and the controls are all scalars. One scalar measure is obtained when an observation is available. For simplicity, all the linear coefficients are set to 1. If no controls are issued, the scenario is the same as in the case of the scalar filtering of a stream of measures (scalar filtering problem).

The state-observation equations:
$$\xi_{k+1} = \xi_k + u_k + \nu_k, \qquad z_{k+1} = \xi_{k+1} + w_{k+1}$$

The KF equations:

At every time step $k$:
$$\hat{\xi}_{k+1|k} = \hat{\xi}_k + u_k, \qquad P_{k+1|k} = P_k + V_k$$

At every time step $k+1$, when an observation $z_{k+1}$ is available:
$$G_{k+1} = P_{k+1|k}\,(P_{k+1|k} + W_{k+1})^{-1}$$
$$\hat{\xi}_{k+1} = \hat{\xi}_{k+1|k} + G_{k+1}\,(z_{k+1} - \hat{\xi}_{k+1|k}), \qquad P_{k+1} = P_{k+1|k} - G_{k+1}\,P_{k+1|k}$$

In scalar, one-dimensional notation, the equations become:

Prediction update:
$$\xi \leftarrow \xi + u, \qquad \sigma_\xi^2 \leftarrow \sigma_\xi^2 + \sigma_\nu^2$$

Measurement correction:
$$G \leftarrow \frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_w^2}, \qquad \xi \leftarrow \xi + G\,(z - \xi), \qquad \sigma_\xi^2 \leftarrow (1 - G)\,\sigma_\xi^2$$
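The scalar form in code; a minimal sketch in which the constant control input `u`, the noise variances, and the simulated plant are assumptions introduced for illustration:

```python
import numpy as np

sigma_nu2, sigma_w2 = 0.02, 0.1   # process and measurement variances (assumed)
xi, sigma_xi2 = 0.0, 1.0          # initial belief (assumed)

rng = np.random.default_rng(2)
true_xi, u = 0.0, 0.5             # true state and constant control (assumed)

for k in range(20):
    # Prediction update: the control shifts the state, noise inflates variance.
    true_xi += u + rng.normal(0.0, np.sqrt(sigma_nu2))
    xi, sigma_xi2 = xi + u, sigma_xi2 + sigma_nu2

    # Measurement correction.
    z = true_xi + rng.normal(0.0, np.sqrt(sigma_w2))
    G = sigma_xi2 / (sigma_xi2 + sigma_w2)
    xi, sigma_xi2 = xi + G * (z - xi), (1.0 - G) * sigma_xi2

print(xi, sigma_xi2)
```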

  10. EQUATIONS' ANALYSIS FOR THE SCALAR FILTERING PROBLEM

Prediction update:
$$\xi \leftarrow \xi + u, \qquad \sigma_\xi^2 \leftarrow \sigma_\xi^2 + \sigma_\nu^2$$

Measurement correction:
$$G \leftarrow \frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_w^2}, \qquad \xi \leftarrow \xi + G\,(z - \xi), \qquad \sigma_\xi^2 \leftarrow (1 - G)\,\sigma_\xi^2$$

Variance of the state estimate:
$$\sigma_\xi^2 \leftarrow \left(1 - \frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_w^2}\right)\sigma_\xi^2 = \frac{\sigma_w^2\,\sigma_\xi^2}{\sigma_\xi^2 + \sigma_w^2} \;\Longrightarrow\; \frac{1}{\sigma_\xi^2} \leftarrow \frac{1}{\sigma_\xi^2} + \frac{1}{\sigma_w^2}$$

(Mean) state estimate:
$$\xi \leftarrow (1 - G)\,\xi + G\,z = \left(\frac{\sigma_w^2}{\sigma_\xi^2 + \sigma_w^2}\right)\xi + \left(\frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_w^2}\right) z$$
$$\xi \leftarrow \left(\frac{1}{\sigma_\xi^2} + \frac{1}{\sigma_w^2}\right)^{-1}\left[\frac{\xi}{\sigma_\xi^2} + \frac{z}{\sigma_w^2}\right]$$

This is a weighted arithmetic mean, $\dfrac{w_1 x_1 + w_2 x_2}{w_1 + w_2}$, with weights $w_i$ proportional to the inverse of the variances.
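A quick numeric check that the KF correction and the inverse-variance weighted mean agree (the numeric values are arbitrary illustrations):

```python
# Two estimates of the same quantity with different variances.
xi, sigma_xi2 = 3.2, 0.5    # prediction
z, sigma_w2 = 2.8, 0.1      # measurement (more precise)

G = sigma_xi2 / (sigma_xi2 + sigma_w2)
fused = xi + G * (z - xi)
fused_var = (1.0 - G) * sigma_xi2

# Same result via the inverse-variance weighted mean.
w1, w2 = 1.0 / sigma_xi2, 1.0 / sigma_w2
assert abs(fused - (w1 * xi + w2 * z) / (w1 + w2)) < 1e-12
print(fused, fused_var)  # fused_var < min(sigma_xi2, sigma_w2)
```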

  11. EXAMPLE OF KF BELIEF EVOLUTION IN A SCALAR CASE. [Figure.]

  12. EXAMPLE OF KF BELIEF EVOLUTION IN A SCALAR CASE. [Figure, including a motion step.]

  13. EQUATIONS' ANALYSIS FOR THE SCALAR FILTERING PROBLEM

• $\hat{\xi}_{k+1|k}$ and $z_{k+1}$ (i.e., $\xi$ and $z$ above, before the state update) are two normally distributed, independent RVs. $\xi$ represents the current state prediction (out of the process model and past history), while $z$ is the current state measure.

• They can also be thought of as two readings $(x_1, \sigma_1)$ and $(x_2, \sigma_2)$ from two independent instruments with different precision, or as two sequential independent readings made with different precision from the same instrument. All the readings are about the same quantity to be estimated, which is the true state of the system.

[Figure: two Gaussians, $p_1(x)$ centered at $x_1$ with spread $\sigma_1$ and $p_2(x)$ centered at $x_2$ with spread $\sigma_2$, together with their product $p(x) = p_1(x)\,p_2(x)$.]

In any chosen mental or practical representation, the question is how to combine the $\xi$ and $z$ readings/estimates (and their variances) into a new, single state estimate that best represents the information from $\xi$ and $z$.

  14. EQUATIONS' ANALYSIS FOR THE SCALAR FILTERING PROBLEM

[Figure: the same two Gaussians and their product as on the previous slide.]

$p(x)$ is the probability of a value $x$ given the readings (or the estimates) $x_1$ and $x_2$:

$$p(x) = N(x_1, \sigma_1)\,N(x_2, \sigma_2) \;\Longrightarrow\; p(x) = C\,e^{-\frac{1}{2}\,\frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2 \sigma_2^2}\left(x \,-\, \frac{x_1 \sigma_2^2 + x_2 \sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right)^2}$$

The most probable result, the one that best represents the data in the ML sense, corresponds to the distribution center:
$$\hat{x} = \frac{x_1 \sigma_2^2 + x_2 \sigma_1^2}{\sigma_1^2 + \sigma_2^2}, \qquad \text{with variance} \quad \hat{\sigma}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \;\Longleftrightarrow\; \frac{1}{\hat{\sigma}^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$$

This is precisely the new state estimate produced by the KF:
$$\xi \leftarrow \left(\frac{1}{\sigma_\xi^2} + \frac{1}{\sigma_w^2}\right)^{-1}\left[\frac{\xi}{\sigma_\xi^2} + \frac{z}{\sigma_w^2}\right], \qquad \frac{1}{\sigma_\xi^2} \leftarrow \frac{1}{\sigma_\xi^2} + \frac{1}{\sigma_w^2}$$
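The product-of-Gaussians fusion as a small helper; a sketch, where `fuse` is a name introduced here for illustration:

```python
def fuse(x1, var1, x2, var2):
    """Fuse two independent Gaussian readings of the same quantity.

    Returns the mean and variance of the product N(x1, var1) * N(x2, var2),
    i.e. the maximum-likelihood combination of the two readings.
    """
    mean = (x1 * var2 + x2 * var1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)   # 1/var = 1/var1 + 1/var2
    return mean, var

# Fusing a prediction (3.2, 0.5) with a sharper measurement (2.8, 0.1)
# pulls the estimate toward the measurement and shrinks the variance.
print(fuse(3.2, 0.5, 2.8, 0.1))  # -> (2.866..., 0.0833...)
```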

  15. EXAMPLE FOR ROBOT LOCALIZATION

$$bel(x_0) = N(\hat{x}_0, \sigma_0^2)$$

$$\overline{bel}(x_1) = \begin{cases} \hat{x}_{1|0} = A\hat{x}_0 + Bu_1 \\ \sigma_{1|0}^2 = A^2\sigma_0^2 + \sigma_{action}^2 \end{cases}$$

$$bel(x_1) = \begin{cases} \hat{x}_1 = \hat{x}_{1|0} + G_1\,(z_1 - \hat{x}_{1|0}) \\ \sigma_1^2 = (1 - G_1)\,\sigma_{1|0}^2 \end{cases}$$

$$\overline{bel}(x_2) = \begin{cases} \hat{x}_{2|1} = A\hat{x}_1 + Bu_2 \\ \sigma_{2|1}^2 = A^2\sigma_1^2 + \sigma_{action}^2 \end{cases}$$
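The same alternation of motion prediction and measurement correction in code; a sketch assuming illustrative values for $A$, $B$, the noise variances, and the control/measurement sequences:

```python
A, B = 1.0, 1.0                      # linear motion model (assumed values)
sigma_action2, sigma_z2 = 0.2, 0.1   # motion and sensor noise (assumed)

x_hat, sigma2 = 0.0, 1.0             # bel(x_0)

controls = [1.0, 1.0, 1.0]           # u_1, u_2, u_3 (assumed)
measures = [1.1, 2.0, 2.9]           # z_1, z_2, z_3 (assumed)

for u, z in zip(controls, measures):
    # Motion update: the belief mean moves, the variance grows.
    x_pred = A * x_hat + B * u
    sigma2_pred = A**2 * sigma2 + sigma_action2

    # Measurement update: the gain weighs prediction against observation.
    G = sigma2_pred / (sigma2_pred + sigma_z2)
    x_hat = x_pred + G * (z - x_pred)
    sigma2 = (1.0 - G) * sigma2_pred

print(x_hat, sigma2)
```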
