SLIDE 1

Recapitulation: Kalman filter: x_k = (r_k^\top, \dot r_k^\top)^\top, Z^k = \{z_k, Z^{k-1}\}

initiation: p(x_0) = N(x_0; x_{0|0}, P_{0|0}), initial ignorance: P_{0|0} 'large'

prediction: N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1}) --[dynamics model: F_{k|k-1}, D_{k|k-1}]--> N(x_k; x_{k|k-1}, P_{k|k-1})

  x_{k|k-1} = F_{k|k-1} x_{k-1|k-1}
  P_{k|k-1} = F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^\top + D_{k|k-1}

filtering: N(x_k; x_{k|k-1}, P_{k|k-1}) --[current measurement z_k, sensor model: H_k, R_k]--> N(x_k; x_{k|k}, P_{k|k})

  x_{k|k} = x_{k|k-1} + W_{k|k-1} \nu_{k|k-1},   \nu_{k|k-1} = z_k - H_k x_{k|k-1}
  P_{k|k} = P_{k|k-1} - W_{k|k-1} S_{k|k-1} W_{k|k-1}^\top,   S_{k|k-1} = H_k P_{k|k-1} H_k^\top + R_k
  W_{k|k-1} = P_{k|k-1} H_k^\top S_{k|k-1}^{-1}   ('Kalman gain matrix')

Exercise 3.6

In your sensor simulator, choose a sensor at position r_s, for example r_s = (0, 0)^\top, that produces measurements z_k of the Cartesian target positions H x_k from your ground truth generator. Use the measurement covariance matrix R = \sigma_c^2 diag[1, 1], \sigma_c = 50 m, for all measurements, but allow individual measurement error covariances for each measurement. Program your first Kalman filter using a constant acceleration model. Visualize your results nicely! Compare the ground truth, the measurements, and the estimates!

Sensor Data Fusion - Methods and Applications, 4th Lecture on November 14, 2018
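The prediction and filtering equations above can be sketched in a few lines. This is a minimal illustration (NumPy assumed; the function names and the toy numbers are ours), not the full simulator requested in Exercise 3.6:

```python
import numpy as np

def kf_predict(x, P, F, D):
    """Prediction: x_{k|k-1} = F x_{k-1|k-1}, P_{k|k-1} = F P F^T + D."""
    return F @ x, F @ P @ F.T + D

def kf_filter(x_pred, P_pred, z, H, R):
    """Filtering with the Kalman gain matrix W = P H^T S^{-1}."""
    nu = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain matrix
    return x_pred + W @ nu, P_pred - W @ S @ W.T

# one prediction/filtering cycle for a 1D (position, velocity) state
F = np.array([[1.0, 1.0], [0.0, 1.0]])      # dt = 1
D = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                  # position is measured
R = np.array([[0.25]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x_pred, P_pred = kf_predict(x, P, F, D)
x_filt, P_filt = kf_filter(x_pred, P_pred, np.array([1.1]), H, R)
```

Processing a measurement shrinks the position uncertainty: `P_filt[0, 0] < P_pred[0, 0]`.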

SLIDE 2

a first remark on initiation: p(x_0|Z^0) = N(x_0; x_{0|0}, P_{0|0}), P_{0|0} 'large'

  x_{0|0} = (r_{0|0}^\top, \dot r_{0|0}^\top, \ddot r_{0|0}^\top)^\top = (z_0^\top, 0^\top, 0^\top)^\top,
  P_{0|0} = diag[ R, (v_{max})^2 1, (q_{max})^2 1 ]

position information: first measurement z_0, ignorance = measurement error R!
ignorance on velocity: sphere with radius v_max around zero (= no information on direction, but on 'limits')
ignorance on acceleration: sphere with radius q_max around zero
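This initiation scheme can be sketched for a 2D position/velocity/acceleration state (NumPy assumed; the function name is ours):

```python
import numpy as np

def initiate_track(z0, R, v_max, q_max):
    """x_{0|0} = (z0, 0, 0);  P_{0|0} = diag[R, v_max^2 I, q_max^2 I]."""
    d = len(z0)
    x0 = np.concatenate([z0, np.zeros(d), np.zeros(d)])
    P0 = np.zeros((3 * d, 3 * d))
    P0[:d, :d] = R                                   # measurement error
    P0[d:2*d, d:2*d] = v_max**2 * np.eye(d)          # velocity ignorance
    P0[2*d:, 2*d:] = q_max**2 * np.eye(d)            # acceleration ignorance
    return x0, P0

x0, P0 = initiate_track(np.array([100.0, 200.0]),
                        50.0**2 * np.eye(2), v_max=300.0, q_max=20.0)
```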


SLIDE 4

Sensor Fusion: Gain in Localization Accuracy

If a stationary target is observed by N sensors, we naïvely expect an improvement in accuracy ∝ 1/√N.

a closer look: The error of each measurement z_i is described by a related measurement error covariance matrix R_i ('error ellipsoids'). In 2 dimensions:

[Figure: error ellipses for three sensors s_1, s_2, s_3]

R_i can strongly depend on the underlying sensor-to-target geometry!

SLIDE 5

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent

SLIDE 6

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent

Likelihood function in polar coordinates: p(z_k|x_k) = N(z_k; x_k^p, R^p)

SLIDE 7

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent

Likelihood function in polar coordinates: p(z_k|x_k) = N(z_k; x_k^p, R^p)

  • What is the likelihood function in Cartesian coordinates?

    t[z_k] = r_k (\cos\varphi_k, \sin\varphi_k)^\top

SLIDE 8

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent
  • in Cartesian coordinates: expand around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

    t[z_k] = r_k (\cos\varphi_k, \sin\varphi_k)^\top \approx t[r_{k|k-1}] + T (z_k - r_{k|k-1})

constant and linear term of a Taylor series only; blackboard!

SLIDE 9

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent
  • in Cartesian coordinates: expand around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

    t[z_k] = r_k (\cos\varphi_k, \sin\varphi_k)^\top \approx t[r_{k|k-1}] + T (z_k - r_{k|k-1})

    T = \frac{\partial t[r_{k|k-1}]}{\partial r_{k|k-1}}
      = \begin{pmatrix} \cos\varphi_{k|k-1} & -r_{k|k-1}\sin\varphi_{k|k-1} \\ \sin\varphi_{k|k-1} & r_{k|k-1}\cos\varphi_{k|k-1} \end{pmatrix}
      = \underbrace{\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}}_{\text{rotation } D_\varphi}
        \underbrace{\begin{pmatrix} 1 & \\ & r \end{pmatrix}}_{\text{dilation } S_r}

SLIDE 10

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent
  • in Cartesian coordinates: expand around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

    t[z_k] = r_k (\cos\varphi_k, \sin\varphi_k)^\top \approx t[r_{k|k-1}] + T (z_k - r_{k|k-1})

    T = \frac{\partial t[r_{k|k-1}]}{\partial r_{k|k-1}}
      = \begin{pmatrix} \cos\varphi_{k|k-1} & -r_{k|k-1}\sin\varphi_{k|k-1} \\ \sin\varphi_{k|k-1} & r_{k|k-1}\cos\varphi_{k|k-1} \end{pmatrix}
      = \underbrace{\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}}_{\text{rotation } D_\varphi}
        \underbrace{\begin{pmatrix} 1 & \\ & r \end{pmatrix}}_{\text{dilation } S_r}

  • affine transform of Gaussian random variables:

    N(z; x, R) \;\xrightarrow{\;z' = t + Tz\;}\; N(z'; t + Tx, TRT^\top)

SLIDE 11

More Realistic: Range, Azimuth Measurements

  • measurements in polar coordinates: z_k = (r_k, \varphi_k)^\top, measurement error: R = diag[\sigma_r^2, \sigma_\varphi^2], with r, \varphi independent
  • in Cartesian coordinates: expand around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

    t[z_k] = r_k (\cos\varphi_k, \sin\varphi_k)^\top \approx t[r_{k|k-1}] + T (z_k - r_{k|k-1})

    T = \frac{\partial t[r_{k|k-1}]}{\partial r_{k|k-1}}
      = \begin{pmatrix} \cos\varphi_{k|k-1} & -r_{k|k-1}\sin\varphi_{k|k-1} \\ \sin\varphi_{k|k-1} & r_{k|k-1}\cos\varphi_{k|k-1} \end{pmatrix}
      = \underbrace{\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}}_{\text{rotation } D_\varphi}
        \underbrace{\begin{pmatrix} 1 & \\ & r \end{pmatrix}}_{\text{dilation } S_r}

  • Cartesian error covariance (time dependent):

    TRT^\top = D_\varphi S_r R S_r D_\varphi^\top = D_\varphi \begin{pmatrix} \sigma_r^2 & \\ & (r\sigma_\varphi)^2 \end{pmatrix} D_\varphi^\top

  • sensor fusion: the sensor-to-target geometry enters into TRT^\top
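The transformation T R T^\top with T = D_\varphi S_r can be sketched directly (NumPy assumed; the function name is ours):

```python
import numpy as np

def cartesian_measurement_cov(r, phi, sigma_r, sigma_phi):
    """T R T^T with T = D_phi S_r (rotation times dilation)."""
    D_phi = np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])
    S_r = np.diag([1.0, r])
    R_polar = np.diag([sigma_r**2, sigma_phi**2])
    T = D_phi @ S_r
    # = D_phi diag[sigma_r^2, (r sigma_phi)^2] D_phi^T
    return T @ R_polar @ T.T

# far target: cross-range error (r sigma_phi)^2 dominates with growing range
C = cartesian_measurement_cov(r=10_000.0, phi=0.0, sigma_r=50.0, sigma_phi=0.01)
```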

SLIDE 12

[Figure: fused error ellipses for three sensors s_1, s_2, s_3]

sensor fusion: the sensor-to-target geometry enters into TRT^\top

SLIDE 13

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = diag[R_k^1, \ldots, R_k^{S_k}]

SLIDE 14

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = diag[R_k^1, \ldots, R_k^{S_k}]

Alternatively, provided that H_k^i = H_k, i = 1, \ldots, S_k:

p(z_k^1, z_k^2 | x_k) \overset{\text{independent sensors}}{=} p(z_k^1|x_k)\, p(z_k^2|x_k) = N(z_k^1; H_k x_k, R_k^1)\, N(z_k^2; H_k x_k, R_k^2)


SLIDE 16

A Useful Product Formula for Gaussians

N(z; Hx, R)\, N(x; y, P) = \underbrace{N(z; Hy, S)}_{\text{independent of } x} \times \underbrace{N(x;\, y + W\nu,\, P - WSW^\top)}_{=\, N(x;\; Q(P^{-1}y + H^\top R^{-1}z),\; Q)}

with: \nu = z - Hy, \quad S = HPH^\top + R, \quad W = PH^\top S^{-1}, \quad Q^{-1} = P^{-1} + H^\top R^{-1}H.
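The identity can be checked numerically. Here is a scalar sketch (all symbols as defined above; the test values for z, x, H, R, y, P are arbitrary choices of ours):

```python
import math

def npdf(u, mean, var):
    """Scalar Gaussian density N(u; mean, var)."""
    return math.exp(-(u - mean)**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def product_formula_sides(z, x, H, R, y, P):
    """Both sides of N(z; Hx, R) N(x; y, P) = N(z; Hy, S) N(x; y + W nu, P - W S W)."""
    lhs = npdf(z, H * x, R) * npdf(x, y, P)
    S = H * P * H + R
    W = P * H / S
    nu = z - H * y
    rhs = npdf(z, H * y, S) * npdf(x, y + W * nu, P - W * S * W)
    return lhs, rhs

lhs, rhs = product_formula_sides(z=1.3, x=0.4, H=2.0, R=0.5, y=0.1, P=0.8)
```

The two sides agree to floating-point precision for any choice of the arguments.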

SLIDE 17

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = diag[R_k^1, \ldots, R_k^{S_k}]

Alternatively, provided that H_k^i = H_k, i = 1, \ldots, S_k:

p(z_k^1, z_k^2 | x_k) \overset{\text{independent sensors}}{=} p(z_k^1|x_k)\, p(z_k^2|x_k) = N(z_k^1; H_k x_k, R_k^1)\, N(z_k^2; H_k x_k, R_k^2)

= N(H_k x_k; z_k^1, R_k^1)\, N(z_k^2; H_k x_k, R_k^2)

\propto N\big(H_k x_k;\; \underbrace{R_k (R_k^{1\,-1} z_k^1 + R_k^{2\,-1} z_k^2)}_{=\, z_k},\; \underbrace{(R_k^{1\,-1} + R_k^{2\,-1})^{-1}}_{=\, R_k}\big) = N(z_k; H_k x_k, R_k)

SLIDE 18

Exercise 4.1

Generalize to the case S_k > 2 (induction argument)!

One possible fusion strategy: create a single effective measurement by preprocessing of the individual measurements!

z_k = R_k \sum_{s=1}^{S_k} (R_k^s)^{-1} z_k^s \qquad weighted arithmetic mean of the measurements

R_k = \Big( \sum_{s=1}^{S_k} (R_k^s)^{-1} \Big)^{-1} \qquad harmonic mean of the measurement covariances

A typical structure for fusion equations!

With measurement-specific measurement error covariances, your Kalman filter already is a multiple sensor fusion algorithm. Try and play!
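This preprocessing step can be sketched directly from the two formulas above (NumPy assumed; the function name is ours):

```python
import numpy as np

def fuse_measurements(measurements, covariances):
    """z_k = R_k sum_s (R_k^s)^{-1} z_k^s,  R_k = (sum_s (R_k^s)^{-1})^{-1}."""
    info = sum(np.linalg.inv(R) for R in covariances)   # information matrices add
    R_fused = np.linalg.inv(info)                       # harmonic mean
    z_fused = R_fused @ sum(np.linalg.inv(R) @ z        # weighted arithmetic mean
                            for z, R in zip(measurements, covariances))
    return z_fused, R_fused

z_f, R_f = fuse_measurements(
    [np.array([0.0, 0.0]), np.array([2.0, 2.0])],
    [np.eye(2), np.eye(2)])
```

With S_k sensors of equal covariance R, the fused covariance is R/S_k, i.e. the naïve 1/√S_k gain in accuracy.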

SLIDE 19

Recapitulation: A popular model for object evolution

Piecewise Constant White Acceleration Model

p(x_k|x_{k-1}) = N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1})

  • Consider state vectors x_k = (r_k^\top, \dot r_k^\top)^\top (position, velocity):

    F_{k|k-1} = \begin{pmatrix} I & \Delta T_k I \\ O & I \end{pmatrix}, \qquad
    D_{k|k-1} = \Sigma_k^2 \begin{pmatrix} \tfrac14 \Delta T_k^4 I & \tfrac12 \Delta T_k^3 I \\ \tfrac12 \Delta T_k^3 I & \Delta T_k^2 I \end{pmatrix}

  • Consider state vectors x_k = (r_k^\top, \dot r_k^\top, \ddot r_k^\top)^\top (position, velocity, acceleration):

    F_{k|k-1} = \begin{pmatrix} I & \Delta T_k I & \tfrac12 \Delta T_k^2 I \\ O & I & \Delta T_k I \\ O & O & I \end{pmatrix}, \qquad
    D_{k|k-1} = \Sigma_k^2 \begin{pmatrix} \tfrac14 \Delta T_k^4 I & \tfrac12 \Delta T_k^3 I & \tfrac12 \Delta T_k^2 I \\ \tfrac12 \Delta T_k^3 I & \Delta T_k^2 I & \Delta T_k I \\ \tfrac12 \Delta T_k^2 I & \Delta T_k I & I \end{pmatrix}

with \Delta T_k = t_k - t_{k-1}. Reasonable choice: \tfrac12 q_{max} \le \Sigma_k \le q_{max}.
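The two transition/noise pairs above can be written down directly (NumPy assumed; the function names are ours):

```python
import numpy as np

def cv_model(dt, Sigma, d=2):
    """F, D for the (position, velocity) white-acceleration model."""
    I, O = np.eye(d), np.zeros((d, d))
    F = np.block([[I, dt * I], [O, I]])
    D = Sigma**2 * np.block([[dt**4 / 4 * I, dt**3 / 2 * I],
                             [dt**3 / 2 * I, dt**2 * I]])
    return F, D

def ca_model(dt, Sigma, d=2):
    """F, D for the (position, velocity, acceleration) model."""
    I, O = np.eye(d), np.zeros((d, d))
    F = np.block([[I, dt * I, dt**2 / 2 * I],
                  [O, I, dt * I],
                  [O, O, I]])
    D = Sigma**2 * np.block([[dt**4 / 4 * I, dt**3 / 2 * I, dt**2 / 2 * I],
                             [dt**3 / 2 * I, dt**2 * I,     dt * I],
                             [dt**2 / 2 * I, dt * I,        I]])
    return F, D

F, D = cv_model(dt=2.0, Sigma=1.0)
```

D is symmetric and positive semidefinite by construction, as a process noise covariance must be.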

SLIDE 20

Another, rather realistic model (van Keuk):

F_{k|k-1} = \begin{pmatrix} I & (t_k - t_{k-1})\, I & \tfrac12 (t_k - t_{k-1})^2\, I \\ O & I & (t_k - t_{k-1})\, I \\ O & O & e^{-(t_k - t_{k-1})/\theta}\, I \end{pmatrix}, \qquad I = diag[1, 1, 1]

D_{k|k-1} = \Sigma^2 \big(1 - e^{-2(t_k - t_{k-1})/\theta}\big) \begin{pmatrix} O & O & O \\ O & O & O \\ O & O & I \end{pmatrix}, \qquad O = diag[0, 0, 0]

maneuver correlation time \theta (e.g. 60 s), limiting acceleration \Sigma (e.g. 2 g). There are many different evolution models adapted to particular problems!

Exercise 4.2 (voluntary!)

Show for the acceleration process: E[\ddot r_k] = 0, \quad E[\ddot r_k \ddot r_l^\top] = \Sigma^2 e^{-(t_k - t_l)/\theta}\, I, \quad l \le k.

E[\ddot r_k \ddot r_l^\top] is called the 'autocorrelation function'.
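In the scalar reading of the model above, the acceleration obeys the recursion r̈_k = e^{-ΔT/θ} r̈_{k-1} + w_k with Var w_k = Σ²(1 - e^{-2ΔT/θ}). The claim of Exercise 4.2 can then be checked deterministically by propagating second moments (stationary initialisation assumed; the function name is ours):

```python
import math

def accel_moments(dt, theta, Sigma, steps):
    """Propagate Var(rdd_k) and Cov(rdd_k, rdd_0) through the AR(1) recursion."""
    a = math.exp(-dt / theta)
    var, cov = Sigma**2, Sigma**2                  # stationary start: Var = Sigma^2
    for _ in range(steps):
        var = a * a * var + Sigma**2 * (1.0 - a * a)   # stays Sigma^2
        cov = a * cov                                  # decays like e^{-(t_k - t_0)/theta}
    return var, cov

# theta = 60 s, Sigma = 2; after 6 steps of 10 s: Cov = Sigma^2 e^{-60/60}
var, cov = accel_moments(dt=10.0, theta=60.0, Sigma=2.0, steps=6)
```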

SLIDE 21

Recapitulation: measurement process

  • linear measurement equation: z_k = H_k x_k + u_k, \quad p(u_k) = N(u_k; 0, R_k)
    – to be measured: linear functions of the object state
    – measurement error: bias-free, Gaussian distributed, independent for different t_k
    – y_k = z_k - H_k x_k has the pdf p(y_k) = p(u_k)
  • Approach for the requested pdf ('likelihood function'): p(z_k | x_k) = N(z_k; H_k x_k, R_k)
  • Example: position measurement, H_k = (I, O, O), H_k x_k = r_k; R_k: measurement error covariance matrix, possibly depending on the sensor-to-target geometry
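For the position-measurement example, H_k = (I, O, O) simply picks the position block out of the state (NumPy assumed; the function name is ours):

```python
import numpy as np

def position_H(d=2):
    """H = (I, O, O) for the state x = (r, rdot, rddot)."""
    return np.hstack([np.eye(d), np.zeros((d, 2 * d))])

H = position_H()
x = np.array([1.0, 2.0, 30.0, 40.0, 0.5, 0.6])   # (r, rdot, rddot)
r = H @ x                                        # extracts the position r_k
```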

SLIDE 22

Retrodiction: How to calculate the pdf p(x_l|Z^k)?

Consider the past: l < k! an observation:

p(x_l | Z^k) = \int dx_{l+1}\; p(x_l, x_{l+1} | Z^k)

SLIDE 23

Retrodiction: How to calculate the pdf p(x_l|Z^k)?

Consider the past: l < k! an observation:

p(x_l | Z^k) = \int dx_{l+1}\; p(x_l, x_{l+1} | Z^k) = \int dx_{l+1}\; p(x_l | x_{l+1}, Z^k)\, \underbrace{p(x_{l+1} | Z^k)}_{\text{retrodiction: } t_{l+1}}

SLIDE 24

Retrodiction: How to calculate the pdf p(x_l|Z^k)?

Consider the past: l < k! an observation:

p(x_l | Z^k) = \int dx_{l+1}\; p(x_l, x_{l+1} | Z^k) = \int dx_{l+1}\; p(x_l | x_{l+1}, Z^k)\, \underbrace{p(x_{l+1} | Z^k)}_{\text{retrodiction: } t_{l+1}}

p(x_l | x_{l+1}, Z^k) = \frac{p(Z_k, \ldots, Z_{l+1} | x_{l+1}, x_l, Z^l)\; p(x_l | x_{l+1}, Z^l)}{\int dx_l\; p(Z_k, \ldots, Z_{l+1} | x_{l+1}, x_l, Z^l)\; p(x_l | x_{l+1}, Z^l)} = p(x_l | x_{l+1}, Z^l)

(Markov property: given x_{l+1}, the data Z_k, \ldots, Z_{l+1} do not depend on x_l, so the likelihood term cancels.)

SLIDE 25

Retrodiction: How to calculate the pdf p(x_l|Z^k)?

Consider the past: l < k! an observation:

p(x_l | Z^k) = \int dx_{l+1}\; p(x_l, x_{l+1} | Z^k) = \int dx_{l+1}\; p(x_l | x_{l+1}, Z^k)\, \underbrace{p(x_{l+1} | Z^k)}_{\text{retrodiction: } t_{l+1}}

p(x_l | x_{l+1}, Z^k) = p(x_l | x_{l+1}, Z^l) = \frac{\overbrace{p(x_{l+1} | x_l)}^{\text{dynamics model}}\; \overbrace{p(x_l | Z^l)}^{\text{filtering } t_l}}{\int dx_l\; p(x_{l+1} | x_l)\; p(x_l | Z^l)}

SLIDE 26

Retrodiction: How to calculate the pdf p(x_l|Z^k)?

Consider the past: l < k! an observation:

p(x_l | Z^k) = \int dx_{l+1}\; p(x_l, x_{l+1} | Z^k) = \int dx_{l+1}\; \frac{\overbrace{p(x_{l+1}|x_l)}^{\text{dynamics model}}\; \overbrace{p(x_l|Z^l)}^{\text{filtering } t_l}}{\int dx_l\; p(x_{l+1}|x_l)\; p(x_l|Z^l)}\; \underbrace{p(x_{l+1}|Z^k)}_{\text{retrodiction: } t_{l+1}}

  • p(x_{l+1}|Z^k): retrodiction, last iteration step
  • p(x_k|x_{k-1}): dynamic object behavior
  • p(x_l|Z^l): filtering at the time considered
  • Gaussians, Gaussian mixtures: exploit the product formula!
  • linear Gaussian likelihood/dynamics: Rauch-Tung-Striebel smoothing

SLIDE 27

Exercise 4.3 Derive the Rauch-Tung-Striebel formulae by using the Kalman filter assumptions and the product formula (twice)!

retrodiction: N(x_l; x_{l|k}, P_{l|k}) <--[filtering, prediction; dynamics model]-- N(x_{l+1}; x_{l+1|k}, P_{l+1|k})

  x_{l|k} = x_{l|l} + W_{l|l+1}(x_{l+1|k} - x_{l+1|l}), \qquad W_{l|l+1} = P_{l|l} F_{l+1|l}^\top P_{l+1|l}^{-1}
  P_{l|k} = P_{l|l} + W_{l|l+1}(P_{l+1|k} - P_{l+1|l}) W_{l|l+1}^\top

SLIDE 28

Kalman filter: linear Gaussian likelihood/dynamics, x_k = (r_k^\top, \dot r_k^\top, \ddot r_k^\top)^\top, Z^k = \{z_k, Z^{k-1}\}

initiation: p(x_0) = N(x_0; x_{0|0}, P_{0|0}), initial ignorance: P_{0|0} 'large'

prediction: N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1}) --[dynamics model: F_{k|k-1}, D_{k|k-1}]--> N(x_k; x_{k|k-1}, P_{k|k-1})

  x_{k|k-1} = F_{k|k-1} x_{k-1|k-1}
  P_{k|k-1} = F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^\top + D_{k|k-1}

filtering: N(x_k; x_{k|k-1}, P_{k|k-1}) --[current measurement z_k, sensor model: H_k, R_k]--> N(x_k; x_{k|k}, P_{k|k})

  x_{k|k} = x_{k|k-1} + W_{k|k-1} \nu_{k|k-1},   \nu_{k|k-1} = z_k - H_k x_{k|k-1}
  P_{k|k} = P_{k|k-1} - W_{k|k-1} S_{k|k-1} W_{k|k-1}^\top,   S_{k|k-1} = H_k P_{k|k-1} H_k^\top + R_k
  W_{k|k-1} = P_{k|k-1} H_k^\top S_{k|k-1}^{-1}   ('Kalman gain matrix')

retrodiction: N(x_l; x_{l|k}, P_{l|k}) <--[filtering, prediction; dynamics model]-- N(x_{l+1}; x_{l+1|k}, P_{l+1|k})

  x_{l|k} = x_{l|l} + W_{l|l+1}(x_{l+1|k} - x_{l+1|l}),   W_{l|l+1} = P_{l|l} F_{l+1|l}^\top P_{l+1|l}^{-1}
  P_{l|k} = P_{l|l} + W_{l|l+1}(P_{l+1|k} - P_{l+1|l}) W_{l|l+1}^\top

Exercise 4.4 Implement the Rauch-Tung-Striebel formulae in your simulator!
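One retrodiction step as stated above can be sketched as follows (NumPy assumed; the function name is ours). Feeding it the degenerate case x_{l+1|k} = x_{l+1|l}, P_{l+1|k} = P_{l+1|l} must reproduce the filtering result, which is a quick sanity check for an Exercise 4.4 implementation:

```python
import numpy as np

def rts_step(x_filt, P_filt, x_pred_next, P_pred_next,
             x_smooth_next, P_smooth_next, F):
    """One Rauch-Tung-Striebel step from t_{l+1} back to t_l."""
    W = P_filt @ F.T @ np.linalg.inv(P_pred_next)        # W_{l|l+1}
    x_smooth = x_filt + W @ (x_smooth_next - x_pred_next)
    P_smooth = P_filt + W @ (P_smooth_next - P_pred_next) @ W.T
    return x_smooth, P_smooth

F = np.array([[1.0, 1.0], [0.0, 1.0]])
x_f, P_f = np.array([0.0, 1.0]), np.eye(2)
x_p, P_p = F @ x_f, F @ P_f @ F.T + 0.1 * np.eye(2)
# no extra information from the future: smoothing returns the filtering result
x_s, P_s = rts_step(x_f, P_f, x_p, P_p, x_p, P_p, F)
```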

SLIDE 29

Continuous Time Retrodiction for t_l < t_{l+\theta} < t_{l+1} with 0 < \theta < 1

Interpolate between p(x_l|Z^k) and p(x_{l+1}|Z^k) based on the evolution model:

p(x_{l+\theta}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}, x_{l+1}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}|x_{l+1}, Z^k)\, p(x_{l+1}|Z^k)

SLIDE 30

Continuous Time Retrodiction for t_l < t_{l+\theta} < t_{l+1} with 0 < \theta < 1

Interpolate between p(x_l|Z^k) and p(x_{l+1}|Z^k) based on the evolution model:

p(x_{l+\theta}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}, x_{l+1}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}|x_{l+1}, Z^k)\, p(x_{l+1}|Z^k)

where: p(x_{l+\theta}|x_{l+1}, Z^k) = \frac{p(x_{l+1}|x_{l+\theta})\; p(x_{l+\theta}|Z^l)}{\int dx_{l+\theta}\; p(x_{l+1}|x_{l+\theta})\; p(x_{l+\theta}|Z^l)}

with: p(x_{l+1}|x_{l+\theta}) = N(x_{l+1}; F_{l+1|l+\theta}\, x_{l+\theta}, D_{l+1|l+\theta})

p(x_{l+\theta}|Z^l) = \int dx_l\; p(x_{l+\theta}|x_l)\, p(x_l|Z^l)

p(x_{l+1}|Z^l) = \int dx_{l+\theta}\; p(x_{l+1}|x_{l+\theta})\, p(x_{l+\theta}|Z^l) = N(x_{l+1}; x_{l+1|l}, P_{l+1|l})

SLIDE 31

Continuous Time Retrodiction for t_l < t_{l+\theta} < t_{l+1} with 0 < \theta < 1

Interpolate between p(x_l|Z^k) and p(x_{l+1}|Z^k) based on the evolution model:

p(x_{l+\theta}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}, x_{l+1}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}|x_{l+1}, Z^k)\, p(x_{l+1}|Z^k)

where: p(x_{l+\theta}|x_{l+1}, Z^k) = \frac{p(x_{l+1}|x_{l+\theta})\; p(x_{l+\theta}|Z^l)}{\int dx_{l+\theta}\; p(x_{l+1}|x_{l+\theta})\; p(x_{l+\theta}|Z^l)}

with: p(x_{l+1}|x_{l+\theta}) = N(x_{l+1}; F_{l+1|l+\theta}\, x_{l+\theta}, D_{l+1|l+\theta})

p(x_{l+\theta}|Z^l) = \int dx_l\; p(x_{l+\theta}|x_l)\, p(x_l|Z^l)

p(x_{l+1}|Z^l) = \int dx_{l+\theta}\; p(x_{l+1}|x_{l+\theta})\, p(x_{l+\theta}|Z^l) = N(x_{l+1}; x_{l+1|l}, P_{l+1|l})

Looks like a Kalman filtering update!

SLIDE 32

p(x_{l+\theta}|x_{l+1}, Z^k) \propto p(x_{l+1}|x_{l+\theta})\, p(x_{l+\theta}|Z^l) \quad — Looks like filtering!

= N(x_{l+\theta}; a_{l+\theta|l+1}, \Delta_{l+\theta|l+1}) = N(b_{l+\theta|l+1}; \Phi_{l+\theta|l+1}\, x_{l+1}, \Delta_{l+\theta|l+1})

  a_{l+\theta|l+1} = x_{l+\theta|l} + \Phi_{l+\theta|l+1}(x_{l+1} - F_{l+1|l+\theta}\, x_{l+\theta|l}) = x_{l+\theta|l} - \Phi_{l+\theta|l+1}\, x_{l+1|l} + \Phi_{l+\theta|l+1}\, x_{l+1}
  b_{l+\theta|l+1} = x_{l+\theta} - x_{l+\theta|l} + \Phi_{l+\theta|l+1}\, x_{l+1|l}
  \Delta_{l+\theta|l+1} = P_{l+\theta|l} - \Phi_{l+\theta|l+1}\, P_{l+1|l}\, \Phi_{l+\theta|l+1}^\top
  \Phi_{l+\theta|l+1} = P_{l+\theta|l}\, F_{l+1|l+\theta}^\top\, P_{l+1|l}^{-1}
  P_{l+1|l} = F_{l+1|l+\theta}\, P_{l+\theta|l}\, F_{l+1|l+\theta}^\top + D_{l+1|l+\theta}.

p(x_{l+\theta}|Z^k) = \int dx_{l+1}\; p(x_{l+\theta}|x_{l+1}, Z^k)\, p(x_{l+1}|Z^k) \quad — Looks like prediction!

= N(x_{l+\theta}; x_{l+\theta|k}, P_{l+\theta|k})

  x_{l+\theta|k} = x_{l+\theta|l} + \Phi_{l+\theta|l+1}\big(x_{l+1|k} - x_{l+1|l}\big)
  P_{l+\theta|k} = P_{l+\theta|l} + \Phi_{l+\theta|l+1}\big(P_{l+1|k} - P_{l+1|l}\big)\, \Phi_{l+\theta|l+1}^\top





SLIDE 37

Usually considered in tracking applications: pdfs of object states at given time instants

kinematic state: x_k, sensor data accumulated over time: Z^k = \{Z_k, \ldots, Z_1\}

  • Prediction: p(x_{k-1}|Z^{k-1}) --[dynamics model: p(x_k|x_{k-1})]--> p(x_k|Z^{k-1})
  • Filtering: p(x_k|Z^{k-1}) --[sensor model: \ell(x_k; Z_k)]--> p(x_k|Z^k)
  • Retrodiction: p(x_l|Z^k) <--[dynamics model; filtering]-- p(x_{l+1}|Z^k)

The likelihood function \ell(x_k; Z_k) contains the full sensor information: current measurements + context information on the sensor performance. p(x_k|x_{k-1}) contains context information on the object's kinematic properties.

SLIDE 38

In certain applications, accumulated state vectors are to be considered.

Object state vectors accumulated over a time window t_k, t_{k-1}, \ldots, t_n:

x_{k:n} = (x_k, \ldots, x_n)

The full information on x_{k:n} based on a time series of sensor data Z^k is contained in the corresponding accumulated state density (ASD): p(x_k, \ldots, x_n|Z^k) = p(x_{k:n}|Z^k). ASDs can be a useful notion in several applications. Examples are:

  • dealing with out-of-sequence measurements
  • track-to-track fusion in sensor networks.

SLIDE 39

Via marginalizing over all object states in the window except x_l, n \le l \le k,

\int dx_k \cdots dx_{l+1}\, dx_{l-1} \cdots dx_n\; p(x_k, \ldots, x_n|Z^k) = p(x_l|Z^k),

we obtain the pdfs for filtering & retrodiction; ASDs thus unify these notions. In addition, all correlations between different instants of time are contained.

Bayes' theorem yields a recursion formula for updating ASDs:

p(x_{k:n}|Z^k) = \frac{\ell(x_k; Z_k)\; p(x_k|x_{k-1})\; p(x_{k-1:n}|Z^{k-1})}{\int dx_{k:n}\; \ell(x_k; Z_k)\; p(x_k|x_{k-1})\; p(x_{k-1:n}|Z^{k-1})}

Somewhat formalistically speaking, 'sensor data processing' means: make sure that the sensor data are no longer explicitly present.

SLIDE 40

Closed-form Representations for ASDs

Example: conditions where Kalman filtering is applicable

likelihood function: \ell(x_k; Z_k) = N(z_k; H_k x_k, R_k)
evolution model: p(x_k|x_{k-1}) = N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1})

Induction argument (product formula, see the paper on the homepage): the corresponding ASD is a Gaussian,

p(x_{k:n}|Z^k) = N(x_{k:n}; x_{k:n}^k, P_{k:n}^k)

with x_{k:n}^k = (x_{k|k}^\top, x_{k-1|k}^\top, \ldots, x_{n|k}^\top)^\top.

x_{k|k}: standard Kalman filtering; x_{l|k}, l < k: standard RTS smoothing

SLIDE 41

Block Matrix Representation of the ASD Covariance

P_{k:n}^k = \begin{pmatrix}
P_{k|k} & P_{k|k} W_{k-1|k}^\top & P_{k|k} W_{k-2|k}^\top & \cdots & P_{k|k} W_{1|k}^\top \\
W_{k-1|k} P_{k|k} & P_{k-1|k} & P_{k-1|k} W_{k-2|k-1}^\top & \cdots & P_{k-1|k} W_{1|k-1}^\top \\
W_{k-2|k} P_{k|k} & W_{k-2|k-1} P_{k-1|k} & P_{k-2|k} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & P_{2|k} W_{1|2}^\top \\
W_{1|k} P_{k|k} & W_{1|k-1} P_{k-1|k} & \cdots & W_{1|2} P_{2|k} & P_{1|k}
\end{pmatrix}

P_{k|k}: standard Kalman filtering; P_{l|k}, l < k: standard RTS smoothing

with: W_{l|k} = \prod_{\lambda=l}^{k-1} P_{\lambda|\lambda} F_{\lambda+1|\lambda}^\top P_{\lambda+1|\lambda}^{-1} and P_{k+1|k}: Kalman prediction.

SLIDE 42

Block Matrix Representations of the ASD Covariance

P_{k:n}^k = \begin{pmatrix}
P_{k|k} & P_{k|k} W_{k-1|k}^\top & P_{k|k} W_{k-2|k}^\top & \cdots & P_{k|k} W_{1|k}^\top \\
W_{k-1|k} P_{k|k} & P_{k-1|k} & P_{k-1|k} W_{k-2|k-1}^\top & \cdots & P_{k-1|k} W_{1|k-1}^\top \\
W_{k-2|k} P_{k|k} & W_{k-2|k-1} P_{k-1|k} & P_{k-2|k} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & P_{2|k} W_{1|2}^\top \\
W_{1|k} P_{k|k} & W_{1|k-1} P_{k-1|k} & \cdots & W_{1|2} P_{2|k} & P_{1|k}
\end{pmatrix}

= \begin{pmatrix}
T_{k|k} & -W_{k-1|k}^\top Q_{k-1|k}^{-1} & O & \cdots & O \\
-Q_{k-1|k}^{-1} W_{k-1|k} & T_{k-1|k} & -W_{k-2|k}^\top Q_{k-2|k}^{-1} & \ddots & \vdots \\
O & -Q_{k-2|k}^{-1} W_{k-2|k} & \ddots & \ddots & O \\
\vdots & \ddots & \ddots & T_{m+1|k} & -W_{m|k}^\top Q_{m|k}^{-1} \\
O & \cdots & O & -Q_{m|k}^{-1} W_{m|k} & T_{m|k}
\end{pmatrix}^{-1}

Markov property: tridiagonal structure of the inverse ASD covariance

SLIDE 43

Application to OoS Measurement Processing

Latencies in the communication infrastructure: measurements can arrive at a processing node in a distributed data fusion system "too late", i.e. after sensor data with a time stamp newer than that of the out-of-sequence measurement have already been processed [seminal paper by Bar-Shalom, IEEE T-AES, 2002].

In situations where Kalman filtering is applicable, let z_m be produced at time t_m with n \le m < k, and arrive at time t_k. With a projection matrix \Pi_m, defined by \Pi_m x_{k:n} = x_m, the impact of z_m on the present and past object states is described by: p(z_m|x_{k:n}) = N(z_m; H_m \Pi_m x_{k:n}, R_m). Direct generalizations exist for situations with ambiguous sensor data / IMM models.

SLIDE 44

Standard Bayesian reasoning directly yields for the ASD:

p(x_{k:n}|z_m, Z^k) = \frac{p(z_m|x_{k:n})\; p(x_{k:n}|Z^k)}{\int dx_{k:n}\; p(z_m|x_{k:n})\; p(x_{k:n}|Z^k)} = N(x_{k:n}; x_{k:m:n}, P_{k:m:n})

with parameters obtained by a version of the Kalman update:

  x_{k:m:n} = x_{k:n} + W_{k:m:n}(z_m - H_m \Pi_m x_{k:n})
  P_{k:m:n} = P_{k:n} - W_{k:m:n} S_{k:m:n} W_{k:m:n}^\top
  S_{k:m:n} = H_m \Pi_m P_{k:n} \Pi_m^\top H_m^\top + R_m
  W_{k:m:n} = P_{k:n} \Pi_m^\top H_m^\top S_{k:m:n}^{-1}.

Note that S_{k:m:n}, the matrix to be inverted, has the same dimension as z_m.

Simultaneous update step for filtering and retrodiction

Efficient realizations exploit the structure of the ASD covariance.
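The update above can be sketched for a toy ASD with position-only state blocks (NumPy assumed; the function name, the 2D toy dimensions, and the test numbers are ours — a real implementation would exploit the tridiagonal structure of the inverse ASD covariance instead of the dense inverse used here):

```python
import numpy as np

def oos_update(x_asd, P_asd, z_m, H_m, R_m, block, d):
    """ASD update with an out-of-sequence measurement z_m taken at state block 'block'."""
    Pi = np.zeros((d, x_asd.shape[0]))          # projection: Pi x_{k:n} = x_m
    Pi[:, block * d:(block + 1) * d] = np.eye(d)
    HP = H_m @ Pi
    S = HP @ P_asd @ HP.T + R_m                 # same dimension as z_m
    W = P_asd @ HP.T @ np.linalg.inv(S)
    x_new = x_asd + W @ (z_m - HP @ x_asd)
    P_new = P_asd - W @ S @ W.T
    return x_new, P_new

# two stacked 2D position states; the late measurement concerns the older one
x_new, P_new = oos_update(np.zeros(4), np.eye(4), np.array([1.0, 1.0]),
                          np.eye(2), np.eye(2), block=1, d=2)
```

Only the block addressed by \Pi_m is pulled toward z_m; in a correlated ASD the cross-covariance blocks would also propagate the correction to the other time instants.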