SLIDE 1

Multiple Sensor Target Tracking: Basic Idea

Iterative updating of conditional probability densities!

kinematic target state x_k at time t_k, accumulated sensor data Z^k
a priori knowledge: target dynamics models, sensor model

  • prediction:    p(x_{k-1} | Z^{k-1})  --(dynamics model)-->  p(x_k | Z^{k-1})

  • filtering:     p(x_k | Z^{k-1})  --(sensor data Z_k, sensor model)-->  p(x_k | Z^k)

  • retrodiction:  p(x_{l-1} | Z^k)  <--(dynamics model, filtering output)--  p(x_l | Z^k)

A first look at retrodiction today!

Sensor Data Fusion - Methods and Applications, 4th Lecture on November 6, 2019 — slide 1

SLIDE 2

Recapitulation: An Important Data Fusion Algorithm

Kalman filter: x_k = (r_k^T, ṙ_k^T)^T, Z^k = {z_k, Z^{k-1}}

initiation:  p(x_0) = N(x_0; x_{0|0}, P_{0|0}),  initial ignorance: P_{0|0} 'large'

prediction:  N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1})  --(dynamics model: F_{k|k-1}, D_{k|k-1})-->  N(x_k; x_{k|k-1}, P_{k|k-1})

  x_{k|k-1} = F_{k|k-1} x_{k-1|k-1}
  P_{k|k-1} = F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^T + D_{k|k-1}

filtering:  N(x_k; x_{k|k-1}, P_{k|k-1})  --(current measurement z_k; sensor model: H_k, R_k)-->  N(x_k; x_{k|k}, P_{k|k})

  x_{k|k} = x_{k|k-1} + W_{k|k-1} ν_{k|k-1},      ν_{k|k-1} = z_k − H_k x_{k|k-1}
  P_{k|k} = P_{k|k-1} − W_{k|k-1} S_{k|k-1} W_{k|k-1}^T,   S_{k|k-1} = H_k P_{k|k-1} H_k^T + R_k
  W_{k|k-1} = P_{k|k-1} H_k^T S_{k|k-1}^{-1}      ('Kalman gain matrix')

A deeper look into the dynamics and sensor models is necessary!

SLIDE 3

Remember your own ground truth generator!

Exercise 3.1

Consider a car moving on a mountain pass road modeled by:

  r(t) = (x(t), y(t), z(t))^T = (v t,  a_y sin(4πv t / a_x),  a_z sin(πv t / a_x))^T

with v = 20 km/h, a_x = 10 km, a_y = a_z = 1 km, t ∈ [0, a_x/v].

  1. Plot the trajectory. Are the parameters reasonable? Try alternatives.
  2. Calculate and plot the velocity and acceleration vectors:
     ṙ(t) = (ẋ(t), ẏ(t), ż(t))^T,   r̈(t) = (ẍ(t), ÿ(t), z̈(t))^T.
  3. Calculate for each instant of time t the tangential vector at r(t):
     t(t) = ṙ(t) / |ṙ(t)|.
  4. Plot |ṙ(t)|, |r̈(t)|, and r̈(t)·t(t) over the time interval.
  5. Discuss the temporal behaviour based on the trajectory r(t)!
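The trajectory and its derivatives from Exercise 3.1 can be generated analytically; a minimal NumPy sketch (variable names of my choosing, units km and h):

```python
import numpy as np

# Ground-truth generator for Exercise 3.1 (a sketch; units: km and h).
v, a_x, a_y, a_z = 20.0, 10.0, 1.0, 1.0    # v = 20 km/h, a_x = 10 km, a_y = a_z = 1 km
t = np.linspace(0.0, a_x / v, 500)         # t in [0, a_x / v]

w1, w2 = 4.0 * np.pi * v / a_x, np.pi * v / a_x
r = np.stack([v * t,
              a_y * np.sin(w1 * t),
              a_z * np.sin(w2 * t)])                 # position r(t)
r_dot = np.stack([np.full_like(t, v),
                  a_y * w1 * np.cos(w1 * t),
                  a_z * w2 * np.cos(w2 * t)])        # velocity
r_ddot = np.stack([np.zeros_like(t),
                   -a_y * w1**2 * np.sin(w1 * t),
                   -a_z * w2**2 * np.sin(w2 * t)])   # acceleration

speed = np.linalg.norm(r_dot, axis=0)      # |r_dot(t)|
tangent = r_dot / speed                    # unit tangential vectors t(t)
a_tan = np.sum(r_ddot * tangent, axis=0)   # tangential acceleration r_ddot(t) . t(t)
```

Plotting (item 1 and 4) can then be done with any plotting library, e.g. matplotlib, over `t`.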

SLIDE 4

Create your own sensor simulator!

Exercise 4.1

Simulate normally distributed radar measurements! ΔT = 2 s, two radars at r_s^{1,2} = (x_s^{1,2}, y_s^{1,2}, z_s^{1,2})^T with x_s^{1,2} = 0, 100 km, y_s^{1,2} = 100, 0 km, z_s^{1,2} = 10 km. State at time t_k = kΔT, k ∈ ℤ: x_k = (r_k^T, ṙ_k^T)^T.

  1. Simulate range and azimuth measurements of the target position r_k with a random number generator normrnd(0, 1) producing normally distributed zero-mean and unit-variance random numbers:

       z_k^p = (z_k^r, z_k^ϕ)^T
             = ( √((x_k − x_s)² + (y_k − y_s)² + (z_k − z_s)² − z_s²),
                 arctan((y_k − y_s)/(x_k − x_s)) )^T
               + (σ_r normrnd(0,1), σ_ϕ normrnd(0,1))^T

     with σ_r = 10 m, σ_ϕ = 0.1° denoting the standard deviations in range and azimuth. Assume that the radars are not able to measure the elevation angle (see discussion on the whiteboard!).

  2. Transform the measurements into x-y Cartesian coordinates, z_k^r (cos z_k^ϕ, sin z_k^ϕ)^T + r_s, and plot them over the x-y projection of the true target trajectory! Play with sensor positions and measurement error standard deviations!
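A sketch of the simulator in Python/NumPy, with the slide's `normrnd(0, 1)` replaced by NumPy's `standard_normal`; the function names are my own, and the range expression follows the formula on the slide:

```python
import numpy as np

# Radar measurement simulator for Exercise 4.1 (a sketch; units: km, rad).
rng = np.random.default_rng(0)                  # stands in for normrnd(0, 1)
sigma_r, sigma_phi = 0.010, np.deg2rad(0.1)     # 10 m range, 0.1 deg azimuth

def radar_measurement(r_target, r_sensor):
    """Noisy (range, azimuth) measurement of a 3-D target position."""
    dx, dy, dz = r_target - r_sensor
    z_s = r_sensor[2]
    # range expression as on the slide (radar cannot measure elevation)
    z_r = np.sqrt(dx**2 + dy**2 + dz**2 - z_s**2) + sigma_r * rng.standard_normal()
    z_phi = np.arctan2(dy, dx) + sigma_phi * rng.standard_normal()
    return np.array([z_r, z_phi])

def to_cartesian_xy(z, r_sensor):
    """Transform a (range, azimuth) pair into x-y Cartesian coordinates."""
    z_r, z_phi = z
    return z_r * np.array([np.cos(z_phi), np.sin(z_phi)]) + r_sensor[:2]

sensor1 = np.array([0.0, 100.0, 10.0])          # km
z = radar_measurement(np.array([50.0, 50.0, 1.0]), sensor1)
xy = to_cartesian_xy(z, sensor1)
```

Looping `radar_measurement` over the ground-truth positions of Exercise 3.1 at t_k = kΔT gives the measurement stream for both radars.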

SLIDE 5

Recapitulation: Piecewise Constant White Acceleration

Consider state vectors x_k = (r_k^T, ṙ_k^T)^T (position, velocity). For known x_{k-1} and without external influences we have, with ΔT_k = t_k − t_{k-1}:

  x_k = [ I, ΔT_k I; O, I ] (r_{k-1}^T, ṙ_{k-1}^T)^T =: F_{k|k-1} x_{k-1},

see blackboard! Assume during the interval ΔT_k a constant acceleration a_k causing the state evolution:

  [ ½ ΔT_k² I; ΔT_k I ] a_k =: G_k a_k,   a linear transform!

Let a_k be a Gaussian random variable with pdf p(a_k) = N(a_k; o, Σ_k² I); we therefore have:

  p(G_k a_k) = N(G_k a_k; o, Σ_k² G_k G_k^T).

SLIDE 6

Therefore: p(x_k | x_{k-1}) = N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1}) with

  F_{k|k-1} = [ I, ΔT_k I; O, I ],
  D_{k|k-1} = Σ_k² [ ¼ ΔT_k⁴ I, ½ ΔT_k³ I; ½ ΔT_k³ I, ΔT_k² I ].
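The matrices F_{k|k-1} and D_{k|k-1} of this dynamics model can be built directly from G_k; a minimal sketch (helper name `transition_matrices` is my own):

```python
import numpy as np

# Piecewise constant white acceleration model (a sketch): build F_{k|k-1}
# and D_{k|k-1} for a position/velocity state in d spatial dimensions.
def transition_matrices(dT, sigma, d=3):
    I, O = np.eye(d), np.zeros((d, d))
    F = np.block([[I, dT * I], [O, I]])
    G = np.vstack([0.5 * dT**2 * I, dT * I])   # maps acceleration into the state
    D = sigma**2 * (G @ G.T)                   # = Sigma_k^2 G_k G_k^T
    return F, D

F, D = transition_matrices(dT=2.0, sigma=0.5)
```

Computing D as Σ² G Gᵀ rather than typing out the four blocks reproduces the block structure on the slide automatically.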

SLIDE 7

Recapitulation: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, ϕ_k)^T, measurement error R = [σ_r², 0; 0, σ_ϕ²], r and ϕ independent
  • in Cartesian coordinates, expand around the true position:

      t[z_k] = r_k (cos ϕ_k, sin ϕ_k)^T ≈ t[r_{k|k-1}] + T (z_k − r_k),   T = D_ϕ S_r

    with the rotation D_ϕ = [cos ϕ, −sin ϕ; sin ϕ, cos ϕ] and the dilation S_r = diag(1, r)
  • Cartesian error covariance (time dependent):

      T R T^T = D_ϕ S_r R S_r D_ϕ^T = D_ϕ [σ_r², 0; 0, (r σ_ϕ)²] D_ϕ^T

  • sensor fusion: the sensor-to-target geometry enters into T R T^T
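The covariance transformation T R Tᵀ above can be sketched as follows (angles in radians; function name is my own):

```python
import numpy as np

# Cartesian measurement error covariance T R T^T for a range/azimuth sensor
# (a sketch): T = D_phi S_r with rotation D_phi and dilation S_r = diag(1, r).
def cartesian_covariance(r, phi, sigma_r, sigma_phi):
    D_phi = np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])   # rotation
    S_r = np.diag([1.0, r])                           # dilation
    R = np.diag([sigma_r**2, sigma_phi**2])
    T = D_phi @ S_r
    return T @ R @ T.T   # = D_phi diag(sigma_r^2, (r sigma_phi)^2) D_phi^T
```

For ϕ = 0 the result is diag(σ_r², (r σ_ϕ)²); rotating the sensor-to-target line rotates this error ellipse with it.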

SLIDE 8

Kalman filter recapitulation: initiation, prediction, and filtering as on slide 2.

Exercise 4.2

In your sensor simulator, choose a sensor at position r_s that produces x-y measurements z_k of the Cartesian target x-y positions H x_k from your ground truth generator, using the measurement matrix H:

  H x_k = [ 1 0 0 0 0 0; 0 1 0 0 0 0 ] x_k

Calculate for each measurement the measurement error covariance matrix R_k based on the true target position. Program your first Kalman filter, initiated by the first measurement and a reasonably chosen covariance matrix P_{1|1}. What is reasonable? Visualize nicely and compare with the truth and the measurements.
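A minimal sketch of one prediction-plus-filtering cycle for Exercise 4.2, assuming the 6-D state x_k = (r_k^T, ṙ_k^T)^T and x-y position measurements (the helper name `kf_step` is my own):

```python
import numpy as np

# One Kalman filter cycle (a sketch): prediction and filtering exactly as in
# the equations on slide 2, for a 6-D state and 2-D position measurements.
def kf_step(x, P, z, F, D, H, R):
    # prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + D
    # filtering
    nu = z - H @ x_pred                      # innovation nu_{k|k-1}
    S = H @ P_pred @ H.T + R                 # innovation covariance S_{k|k-1}
    W = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain W_{k|k-1}
    x_upd = x_pred + W @ nu
    P_upd = P_pred - W @ S @ W.T
    return x_upd, P_upd

# measurement matrix H picking the x-y position out of (x, y, z, vx, vy, vz)
H = np.hstack([np.eye(2), np.zeros((2, 4))])
```

Initiation from the first measurement with a 'large' velocity covariance then matches the slide's P_{0|0}-'large' recipe: the first updates are dominated by the data, not the prior.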

SLIDE 9

Recapitulation: Create a single effective measurement by preprocessing the individual measurements!

  z_k = R_k Σ_{s=1}^{S_k} (R_k^s)^{-1} z_k^s       weighted arithmetic mean of the measurements

  R_k = [ Σ_{s=1}^{S_k} (R_k^s)^{-1} ]^{-1}        harmonic mean of the measurement covariances

A typical structure for fusion equations!

Exercise 4.3

With measurement-specific measurement error covariances, your Kalman filter already is a multiple sensor fusion algorithm. Use two radar sensors, fuse the measurements, feed them into the Kalman filter, and discuss the result!
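The two fusion equations above translate directly into code; a sketch (function name is my own):

```python
import numpy as np

# Measurement preprocessing (a sketch): combine per-sensor measurements
# z_k^s with covariances R_k^s into one effective measurement (z_k, R_k).
def fuse_measurements(zs, Rs):
    R_inv_sum = sum(np.linalg.inv(R) for R in Rs)
    R_fused = np.linalg.inv(R_inv_sum)                            # harmonic mean
    z_fused = R_fused @ sum(np.linalg.inv(R) @ z for z, R in zip(zs, Rs))
    return z_fused, R_fused                                       # weighted mean
```

With equal covariances this reduces to the plain average of the measurements and a covariance shrunk by the number of sensors, which is the intuition behind the 'typical structure'.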

SLIDE 10

More general: the measurement process

  • linear measurement equation: z_k = H_k x_k + u_k,  p(u_k) = N(u_k; o, R_k)
    – to be measured: linear functions of the object state
    – measurement error: bias-free, Gaussian distributed, independent for different t_k
    – y_k = z_k − H_k x_k has the pdf p(y_k) = p(u_k)
  • Approach for the requested pdf (the 'likelihood function'):

      p(z_k | x_k) = N(z_k; H_k x_k, R_k)

  • Example: position measurement: H_k = (I, O, O), H_k x_k = r_k; R_k is the measurement error covariance matrix, possibly depending on the sensor-to-target geometry

SLIDES 11-15

Retrodiction: How to calculate the pdf p(x_l | Z^k)?

Consider the past: l < k! An observation:

  p(x_l | Z^k) = ∫ dx_{l+1} p(x_l, x_{l+1} | Z^k)
               = ∫ dx_{l+1} p(x_l | x_{l+1}, Z^k) p(x_{l+1} | Z^k)       retrodiction: t_{l+1}

Because the later data Z_{l+1}, …, Z_k depend on the past only via x_{l+1} (Markov property), the likelihood factor cancels:

  p(x_l | x_{l+1}, Z^k) = p(Z_k, …, Z_{l+1} | x_{l+1}, x_l, Z^l) p(x_l | x_{l+1}, Z^l) / ∫ dx_l p(Z_k, …, Z_{l+1} | x_{l+1}, x_l, Z^l) p(x_l | x_{l+1}, Z^l)
                        = p(x_l | x_{l+1}, Z^l)
                        = p(x_{l+1} | x_l) p(x_l | Z^l) / ∫ dx_l p(x_{l+1} | x_l) p(x_l | Z^l)

with the dynamics model p(x_{l+1} | x_l) and the filtering result p(x_l | Z^l) at time t_l. Inserting this yields:

  p(x_l | Z^k) = ∫ dx_{l+1} [ p(x_{l+1} | x_l) p(x_l | Z^l) / ∫ dx_l p(x_{l+1} | x_l) p(x_l | Z^l) ] p(x_{l+1} | Z^k)

  • p(x_{l+1} | Z^k): retrodiction, last iteration step
  • p(x_k | x_{k−1}): dynamic object behaviour
  • p(x_l | Z^l): filtering at the time considered
  • Gaussians, Gaussian mixtures: exploit the product formula!
  • linear Gaussian likelihood/dynamics: Rauch-Tung-Striebel smoothing

SLIDE 16

Exercise 4.4

Derive the Rauch-Tung-Striebel formulae by using the Kalman filter assumptions and the product formula (twice)!

retrodiction:  N(x_l; x_{l|k}, P_{l|k})  <--(dynamics model; filtering, prediction)--  N(x_{l+1}; x_{l+1|k}, P_{l+1|k})

  x_{l|k} = x_{l|l} + W_{l|l+1} (x_{l+1|k} − x_{l+1|l}),   W_{l|l+1} = P_{l|l} F_{l+1|l}^T P_{l+1|l}^{-1}
  P_{l|k} = P_{l|l} + W_{l|l+1} (P_{l+1|k} − P_{l+1|l}) W_{l|l+1}^T
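The Rauch-Tung-Striebel recursion above can be sketched as a single backward step (the helper name `rts_step` is my own):

```python
import numpy as np

# One backward Rauch-Tung-Striebel step (a sketch): combines the filtering
# result (x_{l|l}, P_{l|l}), the prediction (x_{l+1|l}, P_{l+1|l}), and the
# already-smoothed (x_{l+1|k}, P_{l+1|k}) into (x_{l|k}, P_{l|k}).
def rts_step(x_filt, P_filt, x_pred, P_pred, x_smooth, P_smooth, F):
    W = P_filt @ F.T @ np.linalg.inv(P_pred)      # gain W_{l|l+1}
    x_out = x_filt + W @ (x_smooth - x_pred)
    P_out = P_filt + W @ (P_smooth - P_pred) @ W.T
    return x_out, P_out
```

After a forward Kalman pass that stores the filtered and predicted quantities, iterate `rts_step` backwards for l = k−1, …, 0; at l = k the smoothed state is simply the filtered one.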

SLIDE 17

Kalman filter: linear Gaussian likelihood/dynamics, x_k = (r_k^T, ṙ_k^T, r̈_k^T)^T, Z^k = {z_k, Z^{k-1}}. Initiation, prediction, and filtering as on slide 2, completed by the retrodiction step:

retrodiction:  N(x_l; x_{l|k}, P_{l|k})  <--(dynamics model; filtering, prediction)--  N(x_{l+1}; x_{l+1|k}, P_{l+1|k})

  x_{l|k} = x_{l|l} + W_{l|l+1} (x_{l+1|k} − x_{l+1|l}),   W_{l|l+1} = P_{l|l} F_{l+1|l}^T P_{l+1|l}^{-1}
  P_{l|k} = P_{l|l} + W_{l|l+1} (P_{l+1|k} − P_{l+1|l}) W_{l|l+1}^T

Exercise 4.5

Implement the Rauch-Tung-Striebel formulae in your simulator (in the course of the semester ...)!

SLIDES 18-20

Continuous Time Retrodiction for t_l < t_{l+θ} < t_{l+1} with 0 < θ < 1

Interpolate between p(x_l | Z^k) and p(x_{l+1} | Z^k) based on the evolution model:

  p(x_{l+θ} | Z^k) = ∫ dx_{l+1} p(x_{l+θ}, x_{l+1} | Z^k)
                   = ∫ dx_{l+1} p(x_{l+θ} | x_{l+1}, Z^k) p(x_{l+1} | Z^k)

where:

  p(x_{l+θ} | x_{l+1}, Z^k) = p(x_{l+1} | x_{l+θ}) p(x_{l+θ} | Z^l) / ∫ dx_{l+θ} p(x_{l+1} | x_{l+θ}) p(x_{l+θ} | Z^l)

with:

  p(x_{l+1} | x_{l+θ}) = N(x_{l+1}; F_{l+1|l+θ} x_{l+θ}, D_{l+1|l+θ})
  p(x_{l+θ} | Z^l) = ∫ dx_l p(x_{l+θ} | x_l) p(x_l | Z^l)
  p(x_{l+1} | Z^l) = ∫ dx_{l+θ} p(x_{l+1} | x_{l+θ}) p(x_{l+θ} | Z^l) = N(x_{l+1}; x_{l+1|l}, P_{l+1|l})

Looks like a Kalman filtering update!

SLIDES 21-24

  p(x_{l+θ} | x_{l+1}, Z^k) ∝ p(x_{l+1} | x_{l+θ}) p(x_{l+θ} | Z^l)      looks like filtering!
                            = N(x_{l+θ}; a_{l+θ|l+1}, Δ_{l+θ|l+1}) = N(b_{l+θ|l+1}; Φ_{l+θ|l+1} x_{l+1}, Δ_{l+θ|l+1})

with:

  a_{l+θ|l+1} = x_{l+θ|l} + Φ_{l+θ|l+1} (x_{l+1} − F_{l+1|l+θ} x_{l+θ|l})
              = x_{l+θ|l} − Φ_{l+θ|l+1} x_{l+1|l} + Φ_{l+θ|l+1} x_{l+1}
  b_{l+θ|l+1} = x_{l+θ} − x_{l+θ|l} + Φ_{l+θ|l+1} x_{l+1|l}
  Δ_{l+θ|l+1} = P_{l+θ|l} − Φ_{l+θ|l+1} P_{l+1|l} Φ_{l+θ|l+1}^T
  Φ_{l+θ|l+1} = P_{l+θ|l} F_{l+1|l+θ}^T P_{l+1|l}^{-1}
  P_{l+1|l}   = F_{l+1|l+θ} P_{l+θ|l} F_{l+1|l+θ}^T + D_{l+1|l+θ}.

  p(x_{l+θ} | Z^k) = ∫ dx_{l+1} p(x_{l+θ} | x_{l+1}, Z^k) p(x_{l+1} | Z^k)    looks like prediction!
                   = N(x_{l+θ}; x_{l+θ|k}, P_{l+θ|k})

  x_{l+θ|k} = x_{l+θ|l} + Φ_{l+θ|l+1} (x_{l+1|k} − x_{l+1|l})
  P_{l+θ|k} = P_{l+θ|l} + Φ_{l+θ|l+1} (P_{l+1|k} − P_{l+1|l}) Φ_{l+θ|l+1}^T
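The continuous-time retrodiction step has the same shape as the Rauch-Tung-Striebel step, with F_{l+1|l+θ} and D_{l+1|l+θ} taken over the partial interval; a sketch (function name is my own):

```python
import numpy as np

# Continuous-time retrodiction (a sketch): smooth to an intermediate time
# t_{l+theta}, given the predicted state there and the smoothed state at t_{l+1}.
def retrodict_intermediate(x_theta_l, P_theta_l, x1_smooth, P1_smooth, F, D):
    # prediction from t_{l+theta} to t_{l+1} with the partial-interval model
    x1_pred = F @ x_theta_l
    P1_pred = F @ P_theta_l @ F.T + D
    Phi = P_theta_l @ F.T @ np.linalg.inv(P1_pred)   # gain Phi_{l+theta|l+1}
    x_theta_k = x_theta_l + Phi @ (x1_smooth - x1_pred)
    P_theta_k = P_theta_l + Phi @ (P1_smooth - P1_pred) @ Phi.T
    return x_theta_k, P_theta_k
```

If the smoothed quantities at t_{l+1} coincide with the predicted ones, the data after t_l carry no extra information and the interpolated result stays at the prediction, as the formulas require.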

SLIDE 25

Kalman filter summary: identical to slide 17 (initiation, prediction, filtering, retrodiction).