

  1. KALMAN AND PARTICLE FILTERS Tutorial 10 H. R. B. Orlande, M. J. Colaço, G. S. Dulikravich, F. L. V. Vianna, W. B. da Silva, H. M. da Fonseca and O. Fudym Department of Mechanical Engineering Escola Politécnica/COPPE Federal University of Rio de Janeiro, UFRJ Rio de Janeiro, RJ, Brazil helcio@mecanica.ufrj.br

  2. SUMMARY • State estimation problems • Kalman filter • Particle filter • Examples: Lumped system and 1D Heat conduction • Applications • Conclusions

  3. STATE ESTIMATION PROBLEM
  State Evolution Model: x_k = f_k(x_{k-1}, u_{k-1}, v_{k-1})
  Observation Model: z_k = h_k(x_k, n_k)
  x ∈ R^{n_x} = state variables to be estimated
  u ∈ R^{n_p} = input variables
  v ∈ R^{n_v} = state noise
  z ∈ R^{n_z} = measurements
  n ∈ R^{n_n} = measurement noise
  Subscript k = 1, 2, …, denotes an instant t_k in a dynamic problem.

  4. STATE ESTIMATION PROBLEM Definition: The state estimation problem aims at obtaining information about x k based on the state evolution model and on the measurements given by the observation model.

  5. BAYESIAN FRAMEWORK The solution of the inverse problem within the Bayesian framework is recast in the form of statistical inference from the posterior probability density, which is the model for the conditional probability distribution of the unknown parameters given the measurements. The measurement model incorporating the related uncertainties is called the likelihood, that is, the conditional probability of the measurements given the unknown parameters. The model for the unknowns that reflects all the uncertainty of the parameters without the information conveyed by the measurements is called the prior model.

  6. BAYESIAN FRAMEWORK The formal mechanism to combine the new information (measurements) with the previously available information (prior) is known as Bayes' theorem:
  π_posterior(x) = π(x | z) = π(x) π(z | x) / π(z)
  where π_posterior(x) is the posterior probability density, π(x) is the prior density, π(z | x) is the likelihood function and π(z) is the marginal probability density of the measurements, which plays the role of a normalizing constant.
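As a concrete illustration (not from the tutorial), a Gaussian prior combined with a Gaussian likelihood yields a Gaussian posterior whose mean and variance follow directly from Bayes' theorem; all the numbers below are hypothetical:

```python
# Hypothetical scalar example: prior x ~ N(0, 2^2), likelihood z | x ~ N(x, 1^2).
# For conjugate Gaussians, Bayes' theorem gives a Gaussian posterior with
#   variance = 1 / (1/prior_var + 1/noise_var)
#   mean     = variance * (prior_mean/prior_var + z/noise_var)
prior_mean, prior_var = 0.0, 4.0
noise_var = 1.0
z = 1.5  # a single measurement

post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * (prior_mean / prior_var + z / noise_var)
print(post_mean, post_var)  # posterior mean is pulled from the prior toward the data
```

Note how the posterior variance (0.8) is smaller than both the prior and the measurement variances: the two sources of information reinforce each other.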

  7. STATE ESTIMATION PROBLEM
  State Evolution Model: x_k = f_k(x_{k-1}, u_{k-1}, v_{k-1})
  Observation Model: z_k = h_k(x_k, n_k)
  The evolution-observation model is based on the following assumptions:
  (i) The sequence x_k, for k = 1, 2, …, is a Markovian process, that is,
      π(x_k | x_0, x_1, …, x_{k-1}) = π(x_k | x_{k-1})
  (ii) The sequence z_k, for k = 1, 2, …, is a Markovian process with respect to the history of x_k, that is,
      π(z_k | x_0, x_1, …, x_k) = π(z_k | x_k)
  (iii) The sequence x_k depends on the past observations only through its own history, that is,
      π(x_k | x_{k-1}, z_{1:k-1}) = π(x_k | x_{k-1})

  8. STATE ESTIMATION PROBLEM
  State Evolution Model: x_k = f_k(x_{k-1}, u_{k-1}, v_{k-1})
  Observation Model: z_k = h_k(x_k, n_k)
  Different problems can be considered:
  1. The prediction problem, concerned with the determination of π(x_k | z_{1:k-1});
  2. The filtering problem, concerned with the determination of π(x_k | z_{1:k});
  3. The fixed-lag smoothing problem, concerned with the determination of π(x_k | z_{1:k+p}), where p ≥ 1 is the fixed lag;
  4. The whole-domain smoothing problem, concerned with the determination of π(x_k | z_{1:K}), where z_{1:K} = {z_i, i = 1, …, K} is the complete sequence of measurements.

  9. FILTERING PROBLEM By assuming that π(x_0 | z_0) ≡ π(x_0) is available, the posterior probability density π(x_k | z_{1:k}) is then obtained with Bayesian filters in two steps: prediction and update.

  10. π(x_0) → [Prediction with π(x_1 | x_0)] → π(x_1) → [Update with π(z_1 | x_1)] → π(x_1 | z_1) → [Prediction with π(x_2 | x_1)] → π(x_2 | z_1) → [Update with π(z_2 | x_2)] → π(x_2 | z_{1:2})
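The prediction/update chain above can be sketched numerically with a histogram (grid) filter. The random-walk evolution model, the noise levels and the synthetic measurements below are my own illustrative assumptions, not part of the tutorial:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)          # grid for the scalar state
dx = x[1] - x[0]
pdf = np.exp(-0.5 * x**2 / 2.0**2)       # prior pi(x_0): zero-mean Gaussian
pdf /= pdf.sum() * dx                    # normalize on the grid

def predict(pdf, sigma_v=0.5):
    # pi(x_k | z_{1:k-1}) = integral of pi(x_k | x_{k-1}) pi(x_{k-1} | z_{1:k-1})
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / sigma_v**2)
    K /= K.sum(axis=0, keepdims=True)    # each column is a transition density
    return K @ pdf

def update(pdf, z, sigma_n=1.0):
    like = np.exp(-0.5 * (z - x)**2 / sigma_n**2)   # likelihood pi(z_k | x_k)
    post = like * pdf
    return post / (post.sum() * dx)      # the denominator plays the role of pi(z_k)

for z in (1.0, 1.3, 0.8):                # synthetic measurements z_1, z_2, z_3
    pdf = update(predict(pdf), z)
mode = x[np.argmax(pdf)]                 # posterior mode moves toward the data
```

Each pass through the loop is one prediction/update cycle of the diagram; after three measurements near 1, the posterior mass concentrates in that region.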

  11. THE KALMAN FILTER
  • Evolution and observation models are linear.
  • Noises in such models are additive and Gaussian, with known means and covariances.
  • Optimal solution if these hypotheses hold.
  State Evolution Model: x_k = F_k x_{k-1} + G_k u_{k-1} + s_{k-1} + v_{k-1}
  Observation Model: z_k = H_k x_k + n_k
  • F and H are known matrices for the linear evolutions of the state x and of the observation z, respectively.
  • G is the matrix that determines how the control u affects the state x.
  • Vector s is assumed to be a known input.
  • Noises v and n have zero means and covariance matrices Q and R, respectively.

  12. THE KALMAN FILTER
  Prediction:
  x̂_k⁻ = F_k x̂_{k-1} + G_k u_{k-1} + s_{k-1}
  P_k⁻ = F_k P_{k-1} F_k^T + Q_k
  Update:
  K_k = P_k⁻ H_k^T (H_k P_k⁻ H_k^T + R_k)^{-1}
  x̂_k = x̂_k⁻ + K_k (z_k − H_k x̂_k⁻)
  P_k = (I − K_k H_k) P_k⁻
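A minimal NumPy sketch of the prediction and update equations above; the function and variable names (and the scalar random-walk example used to exercise it) are mine, not from the tutorial:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, G=None, u=None, s=None):
    """One prediction + update cycle of the linear Kalman filter."""
    # Prediction
    x_pred = F @ x
    if G is not None and u is not None:
        x_pred = x_pred + G @ u          # known control input
    if s is not None:
        x_pred = x_pred + s              # known additive input
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical scalar random walk observed directly (F = H = 1):
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.25]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in (0.9, 1.1, 1.0):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

After three measurements near 1 the estimate moves toward 1 and the posterior covariance shrinks well below its initial value, as the update equation predicts.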

  13. THE PARTICLE FILTER
  • Monte Carlo techniques are the most general and robust for non-linear and/or non-Gaussian distributions.
  • The key idea is to represent the required posterior density function by a set of random samples (particles) with associated weights, and to compute the estimates based on these samples and weights.
  • Introduced in the 1950s, but not much used until recently because of limited computational resources.
  • Particles degenerated very fast in early implementations, i.e., most of the particles would have negligible weight. The resampling step has a fundamental role in the advancement of the particle filter.

  14. Sampling Importance Resampling (SIR) Algorithm (Ristic, B., Arulampalam, S., Gordon, N., 2004, Beyond the Kalman Filter , Artech House, Boston)


  16. Sampling Importance Resampling (SIR) Algorithm Although the resampling step reduces the effects of degeneracy, it introduces other practical problems:
  • Limitation in the parallelization.
  • Particles that have high weights are statistically selected many times: loss of diversity, known as sample impoverishment, especially if the evolution model errors are small.

  17. Sampling Importance Resampling (SIR) Algorithm
  Step 1: For i = 1, …, N, draw new particles x_k^i from the prior density π(x_k | x_{k-1}^i) and then use the likelihood density to calculate the corresponding weights w_k^i = π(z_k | x_k^i).
  Step 2: Calculate the total weight T_w = Σ_{i=1}^{N} w_k^i and then normalize the particle weights, that is, for i = 1, …, N let w_k^i = T_w^{-1} w_k^i.
  Step 3: Resample the particles as follows:
  • Construct the cumulative sum of weights (CSW) by computing c_i = c_{i-1} + w_k^i for i = 1, …, N, with c_0 = 0.
  • Let i = 1 and draw a starting point u_1 from the uniform distribution U[0, N^{-1}].
  • For j = 1, …, N: move along the CSW by making u_j = u_1 + N^{-1}(j − 1); while u_j > c_i, make i = i + 1; then assign sample x_k^j = x_k^i and weight w_k^j = N^{-1}.
  Remarks:
  • Weights are easily evaluated and the importance density is easily sampled.
  • Sampling of the importance density is independent of the measurements at that time; the filter can be sensitive to outliers.
  • Resampling is applied at every iteration, which can result in fast loss of diversity of the particles.
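The three steps above can be sketched in Python with vectorized systematic resampling. The scalar random-walk evolution model, the noise levels and the measurements below are illustrative assumptions of mine, not the tutorial's test case:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, z, sigma_v=0.1, sigma_n=0.5):
    """One SIR cycle: draw from the prior, weight by the likelihood, resample."""
    N = len(particles)
    # Step 1: propagate particles through the evolution model (random walk here)
    particles = particles + rng.normal(0.0, sigma_v, N)
    # ...and weight them with the likelihood pi(z_k | x_k^i)
    w = np.exp(-0.5 * (z - particles)**2 / sigma_n**2)
    # Step 2: normalize the weights
    w /= w.sum()
    # Step 3: systematic resampling via the cumulative sum of weights (CSW);
    # searchsorted performs the "move along the CSW" loop for all u_j at once
    csw = np.cumsum(w)
    u = rng.uniform(0.0, 1.0 / N) + np.arange(N) / N
    idx = np.searchsorted(csw, u)
    return particles[idx]                # resampled particles carry weight 1/N

particles = rng.normal(0.0, 2.0, 1000)   # initial cloud from the prior
for z in (1.0, 1.2, 0.9):                # synthetic measurements
    particles = sir_step(particles, z)
print(particles.mean())                  # cloud concentrates near the measurements
```

Because the weights are reset to 1/N after resampling, only the particle positions need to be carried between iterations in this variant.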

  18. EXAMPLE: Lumped System
  dθ(t)/dt + m θ(t) = (m/h) q(t)   for t > 0
  θ = θ_0   for t = 0
  where θ(t) = T(t) − T_∞, θ_0 = T_0 − T_∞ and m = h/(ρ c L).

  19. EXAMPLE: Lumped System Two illustrative cases are examined:
  (i) Heat flux q(t) = q_0, constant and deterministically known;
  (ii) Heat flux q(t) = q_0 f(t), with unknown time variation.
  • The plate is made of aluminum (ρ = 2707 kg m⁻³, c = 896 J kg⁻¹ K⁻¹), with thickness L = 0.03 m, q_0 = 8000 W m⁻², T_∞ = 20 °C, h = 50 W m⁻² K⁻¹ and T_0 = 50 °C.
  • Measurements of the transient temperature of the slab are assumed available. These measurements contain additive, uncorrelated, Gaussian errors, with zero mean and a constant standard deviation σ_z.
  • The errors in the state evolution model are also supposed to be additive, uncorrelated, Gaussian, with zero mean and a constant standard deviation.
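For case (i), with q(t) = q_0 constant, the lumped model admits the exact solution θ(t) = q_0/h + (θ_0 − q_0/h) e^{−mt}. The sketch below uses it to generate synthetic noisy measurements of the kind described above; the time grid, the value of σ_z and the random seed are my own choices:

```python
import numpy as np

# Property values from the slide (case i: constant flux q0)
rho, c, L = 2707.0, 896.0, 0.03      # aluminum plate
h, q0 = 50.0, 8000.0                 # W m^-2 K^-1, W m^-2
T_inf, T0 = 20.0, 50.0               # deg C
m = h / (rho * c * L)                # s^-1
theta0 = T0 - T_inf                  # initial temperature rise

t = np.arange(0.0, 2000.0, 10.0)                     # time grid, s (my choice)
theta = q0 / h + (theta0 - q0 / h) * np.exp(-m * t)  # exact solution of the ODE

# Synthetic measurements: additive, uncorrelated Gaussian errors, std sigma_z
sigma_z = 0.5                        # assumed value, deg C
rng = np.random.default_rng(0)
z = theta + rng.normal(0.0, sigma_z, len(t))
```

The array z can then be fed to either filter: the Kalman filter applies directly because the discretized evolution model is linear, while the particle filter needs no such restriction.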
