
ME400 – Intelligent vehicles and road transportation systems (ITS)

Week 5: Multi-sensor data fusion techniques

Denis Gingras, Winter 2015

Course outline

- Week 1: Introduction to intelligent vehicles, context, applications and motivations
- Week 2: Vehicle dynamics and vehicle modelling
- Week 3: Positioning and navigation systems and sensors
- Week 4: Vehicular perception and map building
- Week 5: Multi-sensor data fusion techniques
- Week 6: Object detection, recognition and tracking
- Week 7: ADAS systems and vehicular control
- Week 8: VANETs and connected vehicles
- Week 9: Multi-vehicular scenarios and collaborative architectures
- Week 10: The future: toward autonomous vehicles and automated driving
- (Final exam)

Week 5 outline

- Brainstorming and introduction
- Modeling and managing uncertainty
- Introduction to data fusion
- State space modeling
- Basics on recursive filters
- Data registration
- Main data fusion algorithms: Kalman, extended Kalman, unscented Kalman, particle filter, DD1 and DD2, multiple model approach
- Applications in navigation and map building

Brainstorming

Open questions and introductory discussion:

- Define "statistical inference", "estimation" and "parameter".
- What is optimization?
- What is a Bayesian estimator?
- What is Maximum Likelihood (ML)?
- What is noise?
- Define the following two words: robustness and reliability.
- What do we mean by data fusion?
- Why is multi-sensor fusion important in intelligent vehicle applications?
- What is the difference between "recursion" and "iteration"?
- What do we mean by real-time processing/system?
- What is the Markov property?

Modeling uncertainty

Uncertainty models can be applied to many forms of error or noise, including:
- random and systematic
- temporal and spatial

We are generally concerned with modelling errors in measurements. Measurements may be:
- incomplete: related to some but not all of the variables of interest
- indirect: related indirectly to the quantities of interest
- intermittent: available at irregularly-spaced instants of time

Every measurement in nature has some amount of noise impressed on it. Modelling that noise is required to build systems with higher performance. This may mean:
- improved safety, availability and overall robustness of on-board systems
- improved accuracy of the estimated position or of the generated models of the environment

Example

Suppose we measure the rotation of a constant-speed wheel with both a typical wheel encoder (odometer) and an ideal sensor (straight black line). We might get the following two curves.

[Figure: encoder output vs. ideal sensor, showing outliers and systematic bias and scale errors. Caption: "Sensor with random noise, bias and scale error, and two outliers".]

Example (continued)

Ignoring the outliers, we might model the errors as

$$e(x) = a + b\,x + n, \qquad n \sim N(\mu, \sigma^2)$$

where N represents the normal or Gaussian probability density function (pdf). Notice that the unknowns include parameters of both systematic (a, b) and stochastic (μ, σ) processes. Of course, the error is normally unknown; otherwise we would just subtract it from the measurement and all sensors would effectively be perfect. Do not confuse the unknown with the random. Errors may be unknown but systematic. Random error models are useful for approximating unknown quantities which cannot be predicted, but they are not the same thing as unknown systematic error quantities.

Managing uncertainty

We remove systematic error through the process of calibration (a sketch follows below). To do calibration you need:
- a model for the systematic components of the error (the model must make physical sense)
- a set of observations from a reference that has the accuracy you want to achieve
- an estimation process which computes the best-fit parameters of the model that match the observations

We remove random error and outliers by the process of filtering:
- If two measurements at different times or in different places are correlated, the correlated part of the error can be removed by differential observation.
- If two measurements are corrupted by the same constant value, observations of their difference will not include the error.
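As an illustration of the calibration step, here is a minimal sketch, assuming the linear systematic error model e(x) = a + b·x from the example above and synthetic reference data (all values are hypothetical):

```python
import numpy as np

# Hypothetical example: calibrate a wheel encoder against a reference sensor.
# Assume the systematic error follows the linear model e(x) = a + b*x.
rng = np.random.default_rng(0)
true_a, true_b = 0.5, 0.03          # unknown bias and scale error
x_ref = np.linspace(0, 100, 50)     # reference measurements (ground truth)
x_enc = x_ref + true_a + true_b * x_ref + rng.normal(0, 0.2, x_ref.size)

# Least-squares fit of the error model: best-fit (b, a) of e = x_enc - x_ref
b_hat, a_hat = np.polyfit(x_ref, x_enc - x_ref, deg=1)
print(f"estimated bias a = {a_hat:.3f}, scale error b = {b_hat:.4f}")

# Calibration: subtract the fitted systematic error from the measurements
x_calibrated = x_enc - (a_hat + b_hat * x_ref)
```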


Random variables

A random variable (or random vector) may be continuous or discrete. A random variable may be described in terms of its probability density function (pdf). An arbitrary pdf may look like this:

[Figure: an arbitrary pdf. The gray area under the pdf gives the probability that an observation of x falls between a and b.]

Univariate Gaussian process:

$$p(x) \sim N(\mu, \sigma^2): \quad p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Multivariate Gaussian process (n dimensions):

$$p(\mathbf{x}) \sim N(\boldsymbol{\mu}, C): \quad p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|C|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T C^{-1}(\mathbf{x}-\boldsymbol{\mu})}$$

Using the Gaussian pdf to model uncertainty

The mean and covariance are the first and second moments of an arbitrary pdf. Using these alone to characterize a real pdf is a form of linear assumption. This linear assumption happens to correspond to the use of a Gaussian, because all of its higher-order cumulants are zero. Many real noise signals follow a Gaussian distribution anyway: by the Central Limit Theorem, the sum of a (sufficiently large) number of independent variables has a Gaussian distribution regardless of their individual distributions.

Operations on Gaussian processes

Linear transformation of a Gaussian random variable:

$$X \sim N(\mu, C),\quad Y = AX + B \;\Rightarrow\; Y \sim N(A\mu + B,\; ACA^T)$$

Combining two independent Gaussian random variables (product of their pdfs):

$$X_1 \sim N(\mu_1, C_1),\; X_2 \sim N(\mu_2, C_2) \;\Rightarrow\; p(X_1)\,p(X_2) \sim N\!\left(\frac{C_2\,\mu_1 + C_1\,\mu_2}{C_1 + C_2},\; \frac{C_1 C_2}{C_1 + C_2}\right)$$

We stay in the "Gaussian world" as long as we start with Gaussians and perform only linear transformations.

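A minimal numerical sketch (hypothetical values) of the second identity, which is the core of measurement fusion: combining two independent Gaussian estimates of the same quantity yields a Gaussian whose variance is smaller than either input.

```python
import numpy as np

# Two independent Gaussian estimates of the same scalar quantity
mu1, C1 = 2.0, 4.0   # e.g., a noisy sensor (variance 4)
mu2, C2 = 2.5, 1.0   # e.g., a better sensor (variance 1)

# Fused estimate from the product of the two pdfs
mu_f = (C2 * mu1 + C1 * mu2) / (C1 + C2)
C_f = (C1 * C2) / (C1 + C2)
print(mu_f, C_f)   # mean pulled toward the better sensor; C_f = 0.8 < min(C1, C2)

# Sanity check by direct multiplication of the two densities on a grid
x = np.linspace(-10, 15, 20001)
g = lambda x, m, c: np.exp(-(x - m) ** 2 / (2 * c)) / np.sqrt(2 * np.pi * c)
prod = g(x, mu1, C1) * g(x, mu2, C2)
dx = x[1] - x[0]
prod /= prod.sum() * dx                          # renormalize the product
mu_num = (x * prod).sum() * dx                   # numerical mean
C_num = ((x - mu_num) ** 2 * prod).sum() * dx    # numerical variance
print(mu_num, C_num)                             # matches mu_f, C_f
```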

Expectation

The expected value or expectation of anything is its weighted average, where the pdf provides the weighting. Note that the expectation is a linear functional; you need the entire distribution to compute it.

The mean: first statistical moment

Let x be a random variable. The expected value E[x] is the population mean:

$$E[x] = \bar{x} = \int_x x\, f(x)\, dx$$

The sample mean is an estimate of the true mean and is given by

$$\hat{\bar{x}} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

The expected value of a vector is computed component by component:

$$E[\mathbf{x}] = \bar{\mathbf{x}} = [\bar{x}_1, \ldots, \bar{x}_n]^T$$

The variance and covariance: second statistical moment

The variance (of the population) is given by

$$\sigma_x^2 = E[(x-\bar{x})^2] = \int_x (x-\bar{x})^2 f(x)\, dx$$

For vector variables, the covariance matrix is given by

$$C_{ij} = \frac{1}{N}\sum_{k=1}^{N} (x_{ik}-\bar{x}_i)(x_{jk}-\bar{x}_j)$$

The sample variance (unbiased) is given by

$$s^2 = \frac{1}{N-1}\sum_{i=1}^{N} (x_i-\bar{x})^2$$
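A short sketch of these estimators with synthetic data (hypothetical values); note the 1/(N−1) normalization for the unbiased sample variance:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=1000)   # samples from N(5, 4)

mean = x.sum() / x.size                          # sample mean
s2 = ((x - mean) ** 2).sum() / (x.size - 1)      # unbiased sample variance
print(mean, s2)                                  # approx 5 and 4

# For a random vector, np.cov returns the sample covariance matrix
xy = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=1000)
print(np.cov(xy, rowvar=False))                  # approx [[1, 0.5], [0.5, 2]]
```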

The covariance matrix

- Along the main diagonal, the C_ii are variances.
- Off the main diagonal, the C_ij are essentially correlation coefficients (not normalized).

$$C = \begin{bmatrix} \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\ C_{2,1} & \sigma_2^2 & & \vdots \\ \vdots & & \ddots & \\ C_{N,1} & \cdots & & \sigma_N^2 \end{bmatrix}$$

The covariance matrix of a random vector is square, symmetric and non-negative definite. When it is positive definite (the usual case in practice), it is full rank and its inverse exists.

When two variables are uncorrelated

x and y are two independent zero-mean Gaussian random variables (N = 100), generated with σ_x = 1, σ_y = 3.

True covariance matrix:

$$C_{xy} = \begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix}$$

Sample covariance matrix:

$$\hat{C}_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}$$

When two variables are correlated

x and y are two zero-mean Gaussian random variables (N = 100), generated with σ_x = 1, σ_y = 3. We form two other random variables:

$$c = x + y, \qquad d = x - y$$

The sample covariance matrix is:

$$\hat{C}_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}$$

NB: the slope of the scatter plot is −1.
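A quick reproduction of this experiment (hypothetical seed; the numbers will differ slightly from the slide's):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100
x = rng.normal(0, 1, N)      # sigma_x = 1
y = rng.normal(0, 3, N)      # sigma_y = 3

c, d = x + y, x - y          # correlated pair
print(np.cov(np.vstack([x, y])))   # approx diag(1, 9): x, y uncorrelated
print(np.cov(np.vstack([c, d])))   # off-diagonal approx Var(x) - Var(y) = -8
```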

Building uncertainty ellipses

- It assumes a Gaussian model.
- Representation of contours of constant probability is easy.
- In general, these curves of equiprobability are n-ellipsoids, because p(x) is constant when the exponent, called the (squared) Mahalanobis distance, is a constant that depends on the probability (n = dimension of the random vector):

$$(\mathbf{x}-\boldsymbol{\mu})^T C^{-1} (\mathbf{x}-\boldsymbol{\mu}) = k(p)$$

- We usually use an integer number of σ.

- These ellipses are contours of the Gaussian density and are sometimes called "concentration ellipses", since the bulk of the probability is concentrated inside them.
- The covariance matrix is diagonal when it is expressed in a set of rectangular coordinates coincident with the major and minor axes of the elliptical distribution that it represents.

[Figure: example of error ellipses for vehicular odometry, derived from covariance matrices of the position estimate.]
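A sketch of how the k-sigma concentration ellipse can be computed from a 2×2 covariance matrix (hypothetical values): the eigenvectors give the ellipse axes, and the square roots of the eigenvalues give the semi-axis lengths.

```python
import numpy as np

C = np.array([[4.0, 1.5],    # hypothetical 2x2 position covariance
              [1.5, 1.0]])
mu = np.array([10.0, 5.0])   # estimated position

k = 2.0                                   # "2-sigma" ellipse
eigvals, eigvecs = np.linalg.eigh(C)      # C is symmetric: use eigh
semi_axes = k * np.sqrt(eigvals)          # semi-axis lengths
angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # orientation of major axis

# Points on the ellipse boundary: mu + R @ diag(semi_axes) @ [cos t, sin t]
t = np.linspace(0, 2 * np.pi, 100)
circle = np.vstack([np.cos(t), np.sin(t)])
ellipse = mu[:, None] + eigvecs @ (semi_axes[:, None] * circle)
print(semi_axes, np.degrees(angle))
```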

Remark: the modeling of uncertainty is so important in data fusion problems that a number of alternatives to probability theory have been proposed in order to deal with the perceived limitations of probabilistic methods (system complexity, inconsistency of likelihoods, accuracy of the model, mutual exclusivity of hypotheses, etc.). Among these alternatives, we can find in the literature:

- Fuzzy logic: a powerful tool for the representation of uncertainty, dealing with vagueness and with cases where classes or hypotheses are not mutually exclusive. It is often used in control systems and in high-level data fusion tasks.
- The theory of evidence (Dempster-Shafer): an extension of the classical probabilistic approach. It uses belief functions and probability masses to deal with conflicting or ambiguous information from different sensors.
- Interval-based uncertainty computation: provides a good estimate of uncertainty, in particular when probabilistic information is lacking and when the errors and sensor parameters are known and bounded.

Maximum likelihood estimators

[Figure: uncertainty linked to the estimate of a variable value.]

Introduction to data fusion

To ensure good performance, a vehicular intelligent system should provide precision, integrity, availability and continuity, so as to reach adequate performance in safety-critical situations. Data fusion is usually encountered in several branches of vehicular applications:

- positioning and navigation
- perception
- object detection, recognition and tracking

Fusion may involve not only on-board sensors, but also all other useful sources of information, local or remote.

Motivation for data fusion in IVs: fusion of data from multiple sensors – basic principle

- The states of a vehicle (e.g., pose, dynamics, trajectory, etc.) are represented as a state vector, for example

$$\mathbf{x} = [x, y, \theta, \dot{x}, \dot{y}, \dot{\theta}, \ddot{x}, \ddot{y}, \ddot{\theta}, \ldots]^T$$

- By transforming the information obtained from several different sensors into the same coordinate system (reference frame), one gets several (noisy) measurements of the same state vector.
- To estimate the state vector of the vehicle from these multiple sources of information, each measurement is weighted differently, depending on the variance (or covariance) of the measurement coming from each sensor, as in the sketch below. The more independent the sensors (sources of information), the better the fusion process.
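A minimal sketch of this weighting principle for a scalar state (hypothetical sensor values): each sensor's measurement is weighted by the inverse of its variance, which gives the minimum-variance linear combination.

```python
import numpy as np

# Hypothetical: three sensors measure the same scalar state (e.g., heading)
z = np.array([10.2, 9.7, 10.5])        # measurements
var = np.array([0.5, 0.1, 1.0])        # measurement variances

w = (1.0 / var) / (1.0 / var).sum()    # inverse-variance weights (sum to 1)
x_hat = (w * z).sum()                  # fused estimate
var_hat = 1.0 / (1.0 / var).sum()      # fused variance: smaller than any input
print(x_hat, var_hat)
```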

A common data fusion functional model was developed in 1985 by the U.S. Joint Directors of Laboratories (JDL) Data Fusion Group, initially for military applications. Here is a possible revised JDL model for vehicular perception applications.

Source: Panagiotis Lytrivis et al., Sensor Data Fusion in Automotive Applications, InTech Open Science Europe, 2009.

Centralized fusion architecture for tracking mobile objects: in a tracking application, observations of angular direction, range, and range rate (a basic measurement-level fusion of various data) are used for estimating a target's positions, velocities, and accelerations in one or more axes. This is achieved using state-estimation techniques such as a Kalman filter.

Source: Panagiotis Lytrivis et al., Sensor Data Fusion in Automotive Applications, InTech Open Science Europe, 2009.

Decentralized fusion architecture for tracking.

Hybrid fusion architecture for tracking.

Source: Panagiotis Lytrivis et al., Sensor Data Fusion in Automotive Applications, InTech Open Science Europe, 2009.

Illustration of the effect of data fusion on the quality of state estimation

State uncertainty ellipsoids obtained from two different sensors; these ellipsoids are obtained from the covariance matrices of the states estimated from each sensor's data. With data fusion, the resulting state uncertainty corresponds to the intersection of the original ellipses: here the state uncertainty was reduced by 90%. (Source: experiment by W. Hoff.)
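A sketch of the matrix version of this fusion (hypothetical covariances): combining two estimates in "information form" gives a fused covariance whose ellipse lies inside the intersection of the originals.

```python
import numpy as np

# Hypothetical state estimates (2D position) from two different sensors
x1, C1 = np.array([2.0, 1.0]), np.array([[4.0, 0.0], [0.0, 0.5]])
x2, C2 = np.array([1.5, 1.2]), np.array([[0.5, 0.0], [0.0, 4.0]])

# Information-form fusion (equivalent to the product of the two Gaussians)
I1, I2 = np.linalg.inv(C1), np.linalg.inv(C2)
C_f = np.linalg.inv(I1 + I2)            # fused covariance (smaller ellipse)
x_f = C_f @ (I1 @ x1 + I2 @ x2)         # fused state estimate
print(x_f)
print(C_f)                              # approx diag(0.44, 0.44) here
```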

Example of fusion in positioning and navigation

Source: Isaac Skog et al., State-of-the-art and future in-car navigation systems: a survey, IEEE Transactions on ITS, Vol. 10, No. 1, March 2009.

Example in Frontal Object Perception (FOP)

Source: Trung-Dung Vu et al., Object Perception for Intelligent Vehicle Applications: A Multi-Sensor Fusion Approach, IEEE Intelligent Vehicles Symposium, 2014.

State space model

In vehicular dynamics, the states are usually physical quantities of interest, such as (linear and angular) position, velocity, acceleration, etc. As a first approximation, all the noises are often assumed to be white, which is not always the case. w(k) is often referred to as the model noise, which is injected into the system and affects the state variables; ν(k) is the noise coming from the sensors.

From the state space model equations (given below), we can observe that the next state value depends only on the present state value and the current input. The model therefore assumes that the system is Markovian: all of the system history previous to the current state value does not influence the next state value.

Both matrices A and H are time-invariant, with all elements being constant. The matrix A describes how the system changes with time: it contains the equations of motion of the vehicle, expressed as first-order difference (discrete) or differential (continuous) equations. The matrix H describes the relationship between the measurements and the state variables: it contains the sensor information and defines how each state variable is mapped into the measurements.

Because both noises are assumed to be white, they are zero-mean, uncorrelated stochastic processes. Therefore, the only thing we need to describe the noises is their covariance matrices. The two covariance matrices are diagonal:

- Q = model noise covariance matrix of w(k), an (n × n) diagonal matrix
- R = measurement noise covariance matrix of ν(k), an (m × m) diagonal matrix

The main diagonal elements correspond to the variances of the noise random variables (since the noises are assumed i.i.d.). To use the state space model adequately, we need to design both matrices A and H, and we need to estimate the noise covariance matrices as accurately as possible. The model is used extensively in multi-sensor data fusion (Week 5) and control (Week 7). The most common approach to estimating the state variables using this model is the Kalman filter.

State space models are a powerful alternative way of describing the relationship between an input signal u(k) and an output signal y(k), in particular for real-time applications.

Example: consider a vehicle moving along a straight line at constant speed v. Starting from an initial position x₀, the position at instant n is given by x(n) = x₀ + v·n. This equation of motion can be written

$$x(n+1) = x(n) + v(n), \qquad v(n+1) = v(n)$$

with initial conditions x(0) = x₀ and v(0) = v. The assumption that the speed is constant is not totally reliable, so to account for this doubt we suppose that

$$x(n+1) = x(n) + v(n), \qquad v(n+1) = v(n) + w(n)$$

where w(n) is a random process of variance (covariance) Q modelling the model noise.

State space model equations

Dynamical systems are usually expressed in recursive form. The "state space model" is a widely used linear recursive model for describing dynamical systems. It is expressed as follows:

$$x(k+1) = A\,x(k) + B\,u(k) + w(k)$$
$$z(k) = H\,x(k) + \nu(k)$$

where:

- x(k) = state variables, (n × 1) column vector
- z(k) = measurements, (m × 1) column vector
- u(k) = control signal (command)
- A = state transition matrix, (n × n)
- B = control-signal-to-state matrix
- H = states-to-measurements matrix, (m × n)
- w(k) = state transition noise, (n × 1) column vector
- ν(k) = measurement noise, (m × 1) column vector

Linear dynamic system (recursive form, Markovian):

$$x(k+1) = F(k)\,x(k) + G(k)\,u(k) + v(k)$$
$$z(k) = H(k)\,x(k) + w(k)$$

The Kalman filter is based on this model and gives the optimal (maximum likelihood) solution, minimizing the mean square error for Gaussian noise processes.
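A minimal simulation sketch of this model, using the constant-velocity example above (noise levels are hypothetical):

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity]
A = np.array([[1.0, 1.0],   # x(n+1) = x(n) + v(n)
              [0.0, 1.0]])  # v(n+1) = v(n) + w(n)
H = np.array([[1.0, 0.0]])  # we only measure position
q, r = 0.05, 1.0            # model and measurement noise std devs

rng = np.random.default_rng(3)
x = np.array([0.0, 1.0])    # initial position and speed
states, meas = [], []
for _ in range(50):
    x = A @ x + np.array([0.0, rng.normal(0, q)])  # process noise on velocity
    z = H @ x + rng.normal(0, r)                   # noisy position measurement
    states.append(x.copy())
    meas.append(z.item())
```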

Intro to recursive filters

Combining uncertainty (or information)

The basic theory behind everything to do with recursive filters in general, and the Kalman filter in particular, involves two mathematical tools:

1) The uncertainty transformation rule with the Jacobian matrix:

$$\sigma_y^2 = J\,\sigma_x^2\,J^T$$

2) The Central Limit Theorem: if $x_i \sim N(\mu, \sigma^2)$, then the sample mean of these random variables tends to have a Gaussian pdf,

$$\bar{x} \sim N(\mu, \sigma^2/n)$$

Intuitively, this means that the estimate of the mean gets better as the sample size increases. In other words, the variance of the sample mean decreases with an increasing number of measurements.

Example: variance of a sum of random variables

We can use our uncertainty transformation rules to compute the variance of a sum of random variables. Let there be n random variables, each of which is normally distributed with the same distribution:

$$x_i \sim N(\mu, \sigma^2), \quad i = 1 \ldots n$$

Let us define the new random variable y as the sum of all of these random variables:

$$y = \sum_{i=1}^{n} x_i$$

The uncertainty transformation rule with the Jacobian matrix gives us $\sigma_y^2 = J\,\sigma_x^2\,J^T$, with $J = [1, 1, \ldots, 1]$ in this example. We then get

$$\sigma_y^2 = n\,\sigma_x^2$$

Consider a problem where an integral is being computed from noisy measurements. This is important, for example, with dead reckoning sensors such as odometers or an INS. In such a case, where a new random variable is added at a regular frequency, the expression $\sigma_y^2 = n\,\sigma_x^2$ gives the development of the standard deviation versus time, because n = t/∆t in a discrete-time system. In this simple model, uncertainty expressed as standard deviation grows with the square root of time: it grows rapidly at first and then its growth rate levels off as time evolves. This sub-linear growth arises from the fact that, in practice, random errors tend to cancel each other when enough of them are added.

Let us now consider combined observations of a constant, and define the new random variable y as the average (sample mean) of all of these random variables. Notice that the sample mean is itself a random variable. The Central Limit Theorem (CLT) directly tells us that the uncertainty of the sample mean decreases with more measurements. That is, to get a better answer, we measure the same thing over and over again and take the average. We can get the behavior of the sample-mean variance over time, because n = t/∆t in a discrete-time system:

$$\sigma_y^2 = \sigma_x^2 / n$$

Hence, with a sample mean, uncertainty expressed as standard deviation decreases with the square root of time.

This idea, that taking and merging multiple observations reduces the uncertainty of the combined result, is the basic idea of the Kalman filter. Uncertainty decreases rapidly and then levels off as time evolves.
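A small numerical illustration of these two behaviors (hypothetical noise level): summing noisy increments (dead reckoning) makes σ grow like √n, while averaging repeated observations makes σ shrink like 1/√n.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_x, n, trials = 1.0, 1000, 2000
noise = rng.normal(0, sigma_x, size=(trials, n))

# Dead reckoning: integrate noisy increments -> variance grows as n * sigma_x^2
sums = noise.cumsum(axis=1)
print(sums[:, -1].std())                 # approx sigma_x * sqrt(n) ~ 31.6

# Averaging a constant: variance shrinks as sigma_x^2 / n
means = noise.cumsum(axis=1) / np.arange(1, n + 1)
print(means[:, -1].std())                # approx sigma_x / sqrt(n) ~ 0.032
```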

Batch processing to estimate the mean

The average of a set of data, or the sample mean, is given by the well-known expression

$$\bar{x} = \frac{1}{N}\sum_{k=1}^{N} x_k$$

However, in order to compute the sample mean using this equation, we need to store and process all the data at once, which is called "batch processing". If a new datum comes in, as in a real-time situation, this approach is not efficient, because we need to recompute the average using all the former data again, plus the new datum. We therefore need to express the sample-mean estimator in a different form, composed of two terms: the previous average estimate and the new datum.

Recursive estimation of the mean

This is the basic idea behind recursion in algorithmic computations. We can manipulate the former equation this way:

$$\bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i = \frac{1}{k}\left(\sum_{i=1}^{k-1} x_i + x_k\right) = \frac{k-1}{k}\,\bar{x}_{k-1} + \frac{1}{k}\,x_k$$

which gives the recursive form

$$\bar{x}_k = \bar{x}_{k-1} + \frac{1}{k}\left(x_k - \bar{x}_{k-1}\right)$$

In essence, most data fusion filters and real-time processing algorithms, such as the Kalman filter, use recursive forms.

Mathematical concepts in Kalman filtering

Source: M. Grewal et al., Kalman Filtering: Theory and Practice Using MATLAB, 3rd Ed., Wiley, 2008.

Kalman filter steps

The Kalman filter is a vectorial recursive filter and is optimal for linear systems. It alternates between a prediction step, driven by the model and proprioceptive data (e.g., inertial sensors or odometer), and a correction step, driven by exteroceptive data (e.g., GPS).

Features of the Kalman filter

- An optimal (linear) estimator: an optimal recursive data processing algorithm (under certain criteria)
- Belongs to the state space (time domain) approach, as opposed to the frequency domain
- Components: the vehicle's dynamics model, control inputs, and recursive measurements (including noise)
- Handles indirect, inaccurate and uncertain observations
- A set of mathematical equations, applied as an iterative, recursive process
- Works on discrete linear data
- Estimates past, present and future states

Use of the Kalman filter

- Measurement information is used to find and eliminate modeling errors, errors in the input data and errors in the model parameters.
- Model information is used to eliminate outliers in the measurements.
- Filtering can be used for the three following main tasks:
  1. Filtering: reconstruct the state at time k, using all available information up to time k.
  2. Prediction: based on the measurements up to time k, forecast the system for times i > k.
  3. Smoothing: reconstruct the state at times i, given measurements at times k > i.

Generic state space model of the process to be estimated

Process:

$$x_k = A\,x_{k-1} + B\,u_{k-1} + w_{k-1}, \qquad w \sim N(0, Q)$$

Measurement:

$$z_k = H\,x_k + v_k, \qquad v \sim N(0, R)$$

Here x are the states, z the measurements, w the process noise and v the measurement noise.

The process to be estimated

A priori estimate error (using the a priori state estimate $\hat{x}_k^-$) and a posteriori estimate error (using the a posteriori state estimate $\hat{x}_k$):

$$e_k^- = x_k - \hat{x}_k^-, \qquad e_k = x_k - \hat{x}_k$$

with error covariances

$$P_k^- = E[e_k^- e_k^{-T}], \qquad P_k = E[e_k e_k^T]$$

Kalman gain and a posteriori state estimate:

$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k\,(z_k - H\hat{x}_k^-)$$

The term $z_k - H\hat{x}_k^-$ is called the innovation, or residual.

Overall process

Time update ("predict"):

$$\hat{x}_k^- = A\,\hat{x}_{k-1} + B\,u_{k-1}$$
$$P_k^- = A\,P_{k-1}\,A^T + Q$$

Measurement update ("correct"):

$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k\,(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H)\,P_k^-$$
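A compact sketch of these equations in code, applied to the constant-velocity model from earlier (noise settings are hypothetical):

```python
import numpy as np

def kalman_step(x_hat, P, z, A, H, Q, R, B=None, u=None):
    """One predict/correct cycle of the linear Kalman filter."""
    # Time update ("predict")
    x_pred = A @ x_hat + (B @ u if B is not None else 0.0)
    P_pred = A @ P @ A.T + Q
    # Measurement update ("correct")
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)         # correct with the innovation
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity example: estimate [position, velocity] from noisy positions
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-3])
R = np.array([[1.0]])

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2) * 10.0          # vague initial estimate
for _ in range(100):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0, 1.0, 1)
    x_hat, P = kalman_step(x_hat, P, z, A, H, Q, R)
print(x_hat, x_true)                              # estimate tracks the truth
```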

Typical Kalman filter set-up

[Figure: typical Kalman filter set-up.]

Deviations from the classical Kalman filter hypotheses

- Markovian vs. semi- or non-Markovian: $x(k+1) = F(x(k))$ vs. $x(k+1) = F(x(k), x(k-1), \ldots, x(0))$
- Linear vs. non-linear dynamic system: $x(k+1) = F(x(k), v(k))$
- Additive vs. non-additive system noise; Gaussian vs. non-Gaussian system noise: $v \sim N(m_v, \sigma_v)$
- Additive vs. non-additive measurement noise; Gaussian vs. non-Gaussian measurement noise: $z(k) = H(x(k), w(k))$, $w \sim N(m_w, \sigma_w)$
Iterations for n > 0:

$$K(k) = R_{ep}(k|k-1)\,H^T(k)\left[H(k)\,R_{ep}(k|k-1)\,H^T(k) + R_*(k)\right]^{-1}$$
$$\hat{s}(k) = \Phi(k, k-1)\,\hat{s}(k-1) + K(k)\left[x(k) - H(k)\,\Phi(k, k-1)\,\hat{s}(k-1)\right]$$
$$R_{ef}(k) = R_{ep}(k|k-1) - K(k)\,H(k)\,R_{ep}(k|k-1)$$
$$R_{ep}(k+1|k) = \Phi(k+1, k)\,R_{ef}(k)\,\Phi^T(k+1, k) + R_w(k)$$

The matrix K(k) is called the Kalman gain. (Here ŝ is the state estimate, x the measurement, Φ the transition matrix, R_ep and R_ef the a priori and a posteriori error covariances, and R_* and R_w the measurement and model noise covariances.)

A basic flaw of the extended Kalman filter

- The state distribution is approximated by a Gaussian random variable, which is then propagated analytically through a first-order linearization of the nonlinear system.
- These approximations can introduce large errors in the true posterior mean and covariance of the transformed (Gaussian) random variable, especially when the models are highly non-linear, which may lead to:
  - suboptimal performance
  - sometimes divergence of the filter
- The local linearity assumption breaks down when the higher-order terms become significant.

Main variants of the Kalman filter

First-order filters:
- EKF (Extended Kalman Filter)
- DD1 (Divided Differences, 1st order) [Nørgaard et al.]

Second-order filters:
- UKF (Unscented Kalman Filter) [Julier et al.]
- DD2 (Divided Differences, 2nd order) [Nørgaard et al.]

These variants of the Kalman filter are all sub-optimal.

First-order KF: EKF (Extended Kalman Filter)

- Linearization of the non-linear function around the estimated state
- First-order Taylor expansion

Limitations:
- Linearization is performed around the estimated state, not around the true state
- The function has to be differentiable

First-order KF: DD1

- Linear regression over an interval
- First-order Stirling interpolation

Advantages:
- Less sensitive to the error x_e − µ
- No differentiability condition

Drawback of first-order filters

- A first-order expansion can truncate significant terms.

Second-order KF: DD2

- Linear regression over an interval
- Second-order Stirling interpolation [Nørgaard et al.] (compare with a second-order Taylor expansion)

Effect of nonlinear functions on Gaussianity

Output noise remains Gaussian when the function is linear; a nonlinear function distorts the Gaussian input distribution.

[Figure: effect of a nonlinear function on a Gaussian input distribution.]

EKF linearization

[Figures: EKF linearization examples; comparison of the unscented transform vs. the EKF.]

Second-order KF: UKF and the unscented transform

It is easier to approximate a Gaussian distribution than to approximate a non-linear function [Julier et al.]: a set of sigma points is propagated through the nonlinear function (NLF), and the transformed sigma points yield the "unscented" mean and covariance.

[Figure: sigma points propagated through the NLF to give the unscented mean.]

IVs are nonlinear dynamic systems

Most realistic intelligent vehicle problems involve nonlinear functions:

$$x_t = g(u_t, x_{t-1}), \qquad z_t = h(x_t)$$

Examples: nonlinear functions in the vehicle dynamics model and control; nonlinear mappings from vehicle dynamic states to sensor measurements.

Extended Kalman filter

Consider the following non-linear system:

$$x_{k+1} = f_k(x_k) + G_k\,w_k, \qquad z_k = m_k(x_k) + v_k$$

Assume that we can somehow determine a reference trajectory $\bar{x}_k$. Then

$$x_{k+1} = f_k(x_k) + G_k\,w_k \approx f_k(\bar{x}_k) + F_k\,(x_k - \bar{x}_k) + G_k\,w_k$$

where

$$F_k = \left.\frac{\partial f_k}{\partial x}\right|_{x = \bar{x}_k}$$

The EKF uses the first-order terms of the Taylor series expansion of the nonlinear functions.

The Taylor series expansion (linearization) for the EKF

Expanding the nonlinear function about the mean $\bar{x}$, with $\delta x = x - \bar{x}$:

$$f(x) = f(\bar{x}) + f'\,\delta x + \frac{1}{2} f''\,\delta x^2 + \frac{1}{3!} f'''\,\delta x^3 + \cdots$$

Taking expectations, the transformed mean and covariance pick up second- and higher-order correction terms:

$$\bar{y} = E[f(x)] = f(\bar{x}) + \frac{1}{2} f''\,P_{xx} + \frac{1}{4!}\,E\!\left[f^{(4)}\,\delta x^4\right] + \cdots$$
$$P_{yy} = E\!\left[(y-\bar{y})(y-\bar{y})^T\right] = f'\,P_{xx}\,f'^T + \text{higher-order terms in } P_{xx}$$

The EKF keeps only the first-order terms of this Taylor series expansion of the nonlinear functions.

Overall process of the extended Kalman filter

Time update ("prediction"):

$$\hat{x}_k^- = F(\hat{x}_{k-1}, u_{k-1})$$
$$P_k^- = F\,P_{k-1}\,F^T + Q$$

Measurement update ("correction"):

$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k\,(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H)\,P_k^-$$

where F and H denote the Jacobians of the process and measurement functions, evaluated at the current estimate.

Example with the EKF: vehicle location on a circular path

A range/bearing sensor measures (r, θ); the position is the nonlinear function

$$\begin{bmatrix} x \\ y \end{bmatrix} = f(r, \theta) = \begin{bmatrix} r\cos\theta \\ r\sin\theta \end{bmatrix}, \qquad F = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}$$

- Real location: (0, 1)
- Sensor accuracy: range (2 m standard deviation), angle (15 degree standard deviation)
- "True" reference: a Monte Carlo simulation with an extremely large number of samples ($3.5 \times 10^6$), against which the linearization error of the EKF is visible.
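A sketch reproducing this comparison (the range/angle values come from the slide; the implementation details are assumptions): propagate the range/bearing uncertainty through the polar-to-Cartesian function, both by Monte Carlo sampling and by the EKF-style Jacobian linearization.

```python
import numpy as np

r_mean, th_mean = 1.0, np.pi / 2          # true location (0, 1) in polar form
sig_r, sig_th = 2.0, np.radians(15.0)     # sensor accuracy from the slide

# Monte Carlo "truth": sample polar measurements, map to Cartesian
rng = np.random.default_rng(0)
n = 350_000                               # the slide used 3.5e6 samples
r = r_mean + rng.normal(0, sig_r, n)
th = th_mean + rng.normal(0, sig_th, n)
pts = np.column_stack([r * np.cos(th), r * np.sin(th)])
print("MC mean:", pts.mean(axis=0))       # biased away from (0, 1)

# EKF-style linearization: Jacobian of f(r, th) = [r cos th, r sin th]
J = np.array([[np.cos(th_mean), -r_mean * np.sin(th_mean)],
              [np.sin(th_mean),  r_mean * np.cos(th_mean)]])
C_polar = np.diag([sig_r**2, sig_th**2])
print("Linearized mean:", [0.0, 1.0])     # linearization keeps the mean at f(mean)
print("Linearized cov:\n", J @ C_polar @ J.T)
print("MC cov:\n", np.cov(pts, rowvar=False))
```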

Use of the extended Kalman filter

The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and data fusion applications:

- State estimation: estimating the state of a nonlinear dynamic system
- Parameter estimation: estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network)
- Dual estimation: both states and parameters are estimated simultaneously (e.g., the Expectation-Maximization (EM) algorithm)

Concluding remarks on extended Kalman filter variants

- The EKF has long been the de facto standard for nonlinear state-space estimation.
- The EKF offers simplicity, robustness and suitability for real-time implementations.
- Higher-order filters (DD2 and UKF) accumulate a bias when only the predictive step is used.
- Depending on the tuning provided, DD1 seems to perform better than the EKF.

Extended Kalman filter (continued)

For the measurement equation, we similarly have

$$z_k \approx M_k\,x_k + m_k(\bar{x}_k) - M_k\,\bar{x}_k + v_k$$

We can then apply the standard Kalman filter to the linearized model. How do we choose the reference trajectory? The idea of the extended Kalman filter is to re-linearize the model around the most recent state estimate, i.e.

$$\bar{x}_k = \hat{x}_{k|k}$$

Extended Kalman filter: overall equations

Time propagation (prediction):

$$\hat{x}_{k|k-1} = f_{k-1}(\hat{x}_{k-1|k-1})$$
$$P_{k|k-1} = F_{k-1}\,P_{k-1|k-1}\,F_{k-1}^T + G_{k-1}\,Q_{k-1}\,G_{k-1}^T, \qquad F_{k-1} = \left.\frac{\partial f_{k-1}}{\partial x}\right|_{x=\hat{x}_{k-1|k-1}}$$

Measurement adaptation (correction), with the Kalman gain:

$$K_k = P_{k|k-1}\,M_k^T\left(M_k\,P_{k|k-1}\,M_k^T + R_k\right)^{-1}, \qquad M_k = \left.\frac{\partial m_k}{\partial x}\right|_{x=\hat{x}_{k|k-1}}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - m_k(\hat{x}_{k|k-1})\right)$$
$$P_{k|k} = (I - K_k M_k)\,P_{k|k-1}\,(I - K_k M_k)^T + K_k\,R_k\,K_k^T$$

EKF linearization: first-order Taylor series expansion

Prediction step:

$$g(u_t, x_{t-1}) \approx g(u_t, \bar{x}_{t-1}) + G_t\,(x_{t-1} - \bar{x}_{t-1})$$

Correction step:

$$h(x_t) \approx h(\bar{x}_t) + H_t\,(x_t - \bar{x}_t)$$

where G_t and H_t are the Jacobians of g and h.

UKF and the unscented transformation

- The UKF is a straightforward application of the unscented transformation (UT).
- The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation.
- It builds on the principle that it is easier to approximate a probability distribution than an arbitrary nonlinear function.

- The UKF is a kind of Linear Regression Kalman Filter (LRKF).
- In the UKF, the state distribution is again approximated by a Gaussian random vector, but it is represented using a minimal set of carefully chosen sample points (sigma points).
- These sample points completely capture the true mean and covariance of the Gaussian random vector.
- When propagated through the true nonlinear system, the UKF sample points capture the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity.

The basic idea of the unscented transform: propagate sigma points through the nonlinearity.

The unscented transform

The n-dimensional random state variable x, with mean $\bar{x}$ and covariance matrix $P_{xx}$, is approximated by 2n + 1 weighted sigma points:

$$\chi_0 = \bar{x}, \qquad W_0 = \kappa/(n+\kappa)$$
$$\chi_i = \bar{x} + \left(\sqrt{(n+\kappa)\,P_{xx}}\right)_i, \qquad W_i = 1/\big(2(n+\kappa)\big)$$
$$\chi_{i+n} = \bar{x} - \left(\sqrt{(n+\kappa)\,P_{xx}}\right)_i, \qquad W_{i+n} = 1/\big(2(n+\kappa)\big)$$

(χ: sigma point, W: weight, κ: scaling constant.)

Each sigma point is instantiated through the function to yield the set of transformed sigma points:

$$\varsigma_i = f(\chi_i)$$

The mean and covariance are given by the weighted average and the weighted outer product of the transformed points:

$$\bar{y} = \sum_{i=0}^{2n} W_i\,\varsigma_i, \qquad P_{yy} = \sum_{i=0}^{2n} W_i\,(\varsigma_i - \bar{y})(\varsigma_i - \bar{y})^T$$

The vehicle location circular-path example revisited with the unscented transform: 2nd-order accuracy — much better!

The unscented Kalman filter: overall process

Time update ("prediction"): generate sigma points $\chi_{i,k-1}$ from $\hat{x}_{k-1}$ and $P_{k-1}$, then

$$\chi_{i,k}^- = F(\chi_{i,k-1}, u_{k-1})$$
$$\hat{x}_k^- = \sum_{i=0}^{2n} W_i\,\chi_{i,k}^-, \qquad P_k^- = \sum_{i=0}^{2n} W_i\,(\chi_{i,k}^- - \hat{x}_k^-)(\chi_{i,k}^- - \hat{x}_k^-)^T$$
$$Z_{i,k}^- = H(\chi_{i,k}^-), \qquad \hat{z}_k^- = \sum_{i=0}^{2n} W_i\,Z_{i,k}^-$$

Measurement update ("correction"):

$$P_{z_k z_k} = \sum_{i=0}^{2n} W_i\,(Z_{i,k}^- - \hat{z}_k^-)(Z_{i,k}^- - \hat{z}_k^-)^T$$
$$P_{x_k z_k} = \sum_{i=0}^{2n} W_i\,(\chi_{i,k}^- - \hat{x}_k^-)(Z_{i,k}^- - \hat{z}_k^-)^T$$
$$K_k = P_{x_k z_k}\,P_{z_k z_k}^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k\,(z_k - \hat{z}_k^-)$$
$$P_k = P_k^- - K_k\,P_{z_k z_k}\,K_k^T$$

UKF: algorithm summary

[Figures: UKF algorithm summary; example of the UT for mean and covariance propagation — only five sigma points are required (2n + 1 = 5 for a 2-dimensional state).]

UKF: definition of sigma points

Assume x has mean $\bar{x}$ and covariance $P_x$. A set of 2L + 1 weighted samples, or sigma points, is chosen as follows:

$$\chi_0 = \bar{x}$$
$$\chi_i = \bar{x} + \left(\sqrt{(L+\lambda)\,P_x}\right)_i, \quad i = 1, \ldots, L$$
$$\chi_i = \bar{x} - \left(\sqrt{(L+\lambda)\,P_x}\right)_{i-L}, \quad i = L+1, \ldots, 2L$$

where $\lambda = \alpha^2(L+\kappa) - L$ is a scaling parameter:
- The constant α determines the spread of the sigma points around $\bar{x}$ and is usually set to a small positive value (e.g., $1 \times 10^{-4} \le \alpha \le 1$).
- The constant κ is a secondary scaling parameter, usually set to 3 − L.
- β is used to incorporate prior knowledge of the distribution of x (for Gaussian distributions, β = 2 is optimal).
- $\left(\sqrt{(L+\lambda)\,P_x}\right)_i$ is the i-th column of the matrix square root of $(L+\lambda)\,P_x$.

Propagation of sigma points

Each sigma point is propagated through the nonlinear function:

$$Y_i = f(\chi_i), \quad i = 0, \ldots, 2L$$

and the estimated mean and covariance of Y are computed as

$$\bar{y} = \sum_{i=0}^{2L} W_i^{(m)}\,Y_i, \qquad P_y = \sum_{i=0}^{2L} W_i^{(c)}\,(Y_i - \bar{y})(Y_i - \bar{y})^T$$

with weights

$$W_0^{(m)} = \frac{\lambda}{L+\lambda}, \qquad W_0^{(c)} = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta), \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(L+\lambda)}, \quad i = 1, \ldots, 2L$$

These estimates of the mean and covariance are accurate to the second order of the Taylor series expansion of f(x) for any nonlinear function.

What the UKF can do

- State estimation of a discrete-time nonlinear dynamic system:

$$x_{k+1} = F(x_k, v_k), \qquad y_k = H(x_k, n_k)$$

- Parameter estimation in a nonlinear mapping:

$$w_{k+1} = w_k + u_k, \qquad y_k = G(x_k, w_k) + e_k$$

- Dual estimation of a discrete-time nonlinear dynamic system:

$$x_{k+1} = F(x_k, v_k, w), \qquad y_k = H(x_k, n_k, w)$$

Some concluding remarks on Kalman filter variants

- The Extended Kalman Filter (EKF) has long been the de facto standard for nonlinear state-space estimation.
- The EKF offers simplicity, robustness and suitability for real-time implementations.
- The EKF can diverge if the nonlinearities are large.
- The EKF works surprisingly well even when most assumptions are violated.
- The EKF is efficient: its complexity is polynomial in the measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2).
- Higher-order filters (DD2 and UKF) accumulate a bias when only the predictive step is used.
- Depending on the tuning provided, DD1 seems to perform better than the EKF.

- The UKF approximates the distribution rather than the nonlinearity of the function.
- The UKF is accurate to at least the 2nd order (3rd order for Gaussian inputs).
- With the UKF, no Jacobians or Hessians are calculated; it has the same computational complexity as the EKF.
- The UKF is an efficient "sampling" approach.
- In the UKF, the weights β and α can be modified to capture higher-order statistics.

UKF vs. EKF

- The UKF is an improvement on the EKF. A central operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics.
- In the EKF, the state distribution is approximated by a GRV, which is then propagated analytically through a first-order linearization of the nonlinear system.
- The UKF addresses the EKF's problem by using a deterministic sampling approach.
- The UKF has demonstrated higher accuracy and robustness for nonlinear models than the EKF.
- Remarkably, the computational complexity of the UKF is of the same order as that of the EKF.

Summary: concluding remarks on uncertainty modeling

- Real measurements incorporate many forms of error, and we can remove them in various ways, including filtering, calibration, and differential observation.
- Covariance measures the spread of data or of a population of random vectors. It is expressed as a matrix and computed as the expected value of the outer product of two deviations from the mean.
- Covariance completely determines the size and shape of the multivariate normal distribution. Contours of constant probability for this distribution are ellipsoids.
- A quadratic form involving the Jacobian of a nonlinear transform is used to convert the coordinates of covariance.
- It is possible, and fairly easy, to convert uncertainty in perceptual data into uncertainty in the quantities derived from it.
- A random process is a randomly chosen function. Its values may or may not vary randomly in time.

Concluding remarks on Kalman filtering

- If all noise sources are Gaussian, the Kalman filter minimizes the mean square error of the estimated parameters.
- The recursive form of the filter is well adapted to real-time processing.
- It is optimal for linear systems under Gaussian noise processes.
- It assumes a Markov process.
- It is relatively easy to formulate and implement, given a basic understanding.
- To improve convergence (fewer steps), we may:
  - improve the modeling of the system and sensors
  - improve the models of the noise sources
  - take into account nonlinearities and non-Gaussianity

Summary: concluding remarks on the EKF and UKF

The Extended Kalman Filter:
- is a linearized version of the Kalman filter
- is computationally efficient
- has a linearization error problem: it uses only the first term of the Taylor series expansion

The Unscented Kalman Filter:
- approximates the distribution rather than the function
- is highly efficient: same complexity as the EKF, within a constant factor in typical practical applications
- linearizes better than the EKF: accurate in the first two terms of the Taylor expansion
- calculates no Jacobians or Hessians
- is an efficient "sampling" approach
- consistently achieves a better level of accuracy than the EKF at a comparable level of complexity

References

- Anderson, Brian D. O., Optimal Filtering, Prentice Hall, 1979.
- Ristic, Branko, Beyond the Kalman Filter: Particle Filters and Tracking Applications, 2004.
- Candy, James, Bayesian and Particle Filtering Signal Processing, 2008.
- Clark, James et al., Data Fusion for Sensory Information Processing Systems, Kluwer, 1990.
- Jazwinski, Andrew H., Stochastic Processes and Filtering Theory, Dover, 1998.
- Kim, Phil, Kalman Filters for Beginners with MATLAB Examples, A-JIN Pub, 2010.
- Lytrivis, Panagiotis et al., Sensor Data Fusion in Automotive Applications, in Sensor and Data Fusion, Nada Milisavljevic (Ed.), InTech Open Access Database, www.intechweb.org.
- Kleeman, Lindsay, Understanding and Applying Kalman Filtering, Department of Electrical and Computer Systems Engineering, Monash University, Clayton.
- Maybeck, Peter, Stochastic Models, Estimation, and Control, Volume 1.
- Welch, Greg and Bishop, Gary, "An Introduction to the Kalman Filter", University of North Carolina at Chapel Hill, Department of Computer Science, 2001.
- Lefebvre, Tine et al., "Comment on 'A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators'", IEEE Transactions on Automatic Control, Vol. 47, No. 8, August 2002.

QUESTIONS?