Likelihood Functions



SLIDE 1

Ambiguous sensor data (PD < 1, ρF > 0): there are n_k + 1 possible interpretations of the sensor data Z_k = {z_k^j}_{j=1}^{n_k}:

  • E_0: the object was not detected; all n_k data are false returns in the Field of View (FoV)
  • E_j, j = 1, . . . , n_k: the object was detected; z_k^j is the object measurement; the remaining n_k − 1 data are false returns

Consider these interpretations in the likelihood function p(Z_k, n_k | x_k)!

Sensor Data Fusion - Methods and Applications, 9th Lecture on January 9, 2019

SLIDE 2

The two cases "object detected" and "object not detected" split the likelihood (D = "object was detected"):

p(Z_k, n_k | x_k) = p(Z_k, n_k, ¬D | x_k) + p(Z_k, n_k, D | x_k)

SLIDE 3

Conditioning on detection introduces the sensor parameter PD, the detection probability:

p(Z_k, n_k | x_k) = p(Z_k, n_k | ¬D, x_k) P(¬D | x_k) + p(Z_k, n_k | D, x_k) P(D | x_k),

with P(¬D | x_k) = 1 − PD and P(D | x_k) = PD.

SLIDE 4

The false measurements are Poisson distributed in number and uniformly distributed in the FoV. In the detection case, sum over the n_k possible object measurements:

p(Z_k, n_k | x_k) = p(Z_k | n_k, ¬D, x_k) p(n_k | ¬D, x_k) (1 − PD) + PD Σ_{j=1}^{n_k} p(Z_k, n_k, j | D, x_k),

with p(Z_k | n_k, ¬D, x_k) = |FoV|^{−n_k} and p(n_k | ¬D, x_k) = p_F(n_k).

SLIDE 5

Factorize the detection terms and insert the Poisson distribution p_F(n_k) = (ρ_F |FoV|)^{n_k} e^{−ρ_F |FoV|} / n_k!:

p(Z_k, n_k | x_k) = |FoV|^{−n_k} p_F(n_k) (1 − PD) + PD Σ_{j=1}^{n_k} p(Z_k | n_k, j, D, x_k) p(j | n_k, D) p(n_k | D),

with p(Z_k | n_k, j, D, x_k) = |FoV|^{−(n_k−1)} N(z_k^j; H x_k, R), p(j | n_k, D) = 1/n_k, and p(n_k | D) = p_F(n_k − 1).

SLIDE 6

Collecting all factors yields the final likelihood:

p(Z_k, n_k | x_k) = (e^{−ρ_F |FoV|} ρ_F^{n_k−1} / n_k!) [ (1 − PD) ρ_F + PD Σ_{j=1}^{n_k} N(z_k^j; H x_k, R) ]

SLIDE 7

Likelihood Functions

The likelihood function answers the question: What does the sensor tell us about the state x of the object? (input: sensor data, sensor model)

  • ideal conditions, one object: PD = 1, ρF = 0
    at each time one measurement: p(z_k | x_k) = N(z_k; H x_k, R)
  • real conditions, one object: PD < 1, ρF > 0
    at each time n_k measurements Z_k = {z_k^1, . . . , z_k^{n_k}}:
    p(Z_k, n_k | x_k) ∝ (1 − PD) ρ_F + PD Σ_{j=1}^{n_k} N(z_k^j; H x_k, R)
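The "real conditions" likelihood above can be sketched numerically. The following Python helper is illustrative (the function names and the pure-NumPy Gaussian density are not from the lecture); it evaluates the unnormalized expression (1 − PD) ρF + PD Σ_j N(z_j; Hx, R):

```python
import numpy as np

def gaussian_pdf(z, mean, cov):
    """Multivariate normal density N(z; mean, cov)."""
    d = len(mean)
    diff = z - mean
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / \
        np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))

def clutter_likelihood(Z, x, H, R, PD, rho_F):
    """Unnormalized likelihood (1 - PD) * rho_F + PD * sum_j N(z_j; H x, R)
    for a measurement set Z under detection probability PD and
    false-return density rho_F."""
    pred = H @ x  # predicted measurement of the object state x
    return (1 - PD) * rho_F + PD * sum(gaussian_pdf(z, pred, R) for z in Z)
```

For an empty measurement set the expression reduces to the missed-detection term (1 − PD) ρF, and for PD = 1, ρF = 0 it reduces to the ideal-conditions Gaussian likelihood.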

SLIDE 8

Bayes Filtering for PD < 1, ρF > 0, well-separated objects

state x_k, current data Z_k = {z_k^j}_{j=1}^{m_k}, accumulated data Z^k = {Z_k, Z^{k−1}}

interpretation hypotheses E_k for Z_k:
  • object not detected (probability 1 − PD), or z_k ∈ Z_k stems from the object (probability PD)
  • m_k + 1 interpretations

interpretation histories H_k for Z^k:
  • tree structure: H_k = (E_{H_k}, H_{k−1}) ∈ 𝓗_k
  • current interpretation: E_{H_k}, prehistories: H_{k−1}

p(x_k | Z^k) = Σ_{H_k} p(x_k, H_k | Z^k) = Σ_{H_k} p(H_k | Z^k) p(x_k | H_k, Z^k),

where p(H_k | Z^k) is the weight of H_k and p(x_k | H_k, Z^k) is unique given H_k: a 'mixture' density.

SLIDE 9

Closer look: PD < 1, ρF > 0, well-separated targets

filtering (at time t_{k−1}):
p(x_{k−1} | Z^{k−1}) = Σ_{H_{k−1}} p_{H_{k−1}} N(x_{k−1}; x_{H_{k−1}}, P_{H_{k−1}})

prediction (for time t_k), via the Markov model (IMM also possible):
p(x_k | Z^{k−1}) = ∫ dx_{k−1} p(x_k | x_{k−1}) p(x_{k−1} | Z^{k−1}) = Σ_{H_{k−1}} p_{H_{k−1}} N(x_k; F x_{H_{k−1}}, F P_{H_{k−1}} F^⊤ + D)

measurement likelihood (E_k^j: interpretations; sensor model: H, R, PD, ρF):
p(Z_k, m_k | x_k) = Σ_{j=0}^{m_k} p(Z_k | E_k^j, x_k, m_k) P(E_k^j | x_k, m_k) ∝ (1 − PD) ρ_F + PD Σ_{j=1}^{m_k} N(z_k^j; H x_k, R)

filtering (at time t_k), via Bayes' rule (exploit the product formula):
p(x_k | Z^k) ∝ p(Z_k, m_k | x_k) p(x_k | Z^{k−1}) = Σ_{H_k} p_{H_k} N(x_k; x_{H_k}, P_{H_k})
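The mixture prediction step above acts component-wise. A minimal sketch (function name and interface are illustrative, not from the lecture): each component N(x; x_H, P_H) is mapped to N(x; F x_H, F P_H F^⊤ + D) while the weights stay unchanged.

```python
import numpy as np

def predict_mixture(weights, means, covs, F, D):
    """Markov prediction of a Gaussian mixture: every component
    N(x; x_H, P_H) becomes N(x; F x_H, F P_H F^T + D); weights unchanged."""
    new_means = [F @ x for x in means]
    new_covs = [F @ P @ F.T + D for P in covs]
    return list(weights), new_means, new_covs
```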

SLIDE 10

Problem: Growing Memory Disaster

m data and N hypotheses lead to N · (m + 1) continuations.

radical solution: mono-hypothesis approximation

SLIDE 11

  • gating: exclude competing data with ||ν_{k|k−1}^i|| > λ!
    → Kalman filter (KF); + very simple, − λ too small: loss of the target measurement

SLIDE 12

  • Force a unique interpretation in case of a conflict: look for the smallest statistical distance min_i ||ν_{k|k−1}^i||
    → Nearest-Neighbor filter (NN)

SLIDE 13

  • Nearest-Neighbor filter (NN): + one hypothesis, − hard decision, − not adaptive
  • global combining: merge all hypotheses!
    → PDAF, JPDAF filters; + all data, + adaptive, − reduced applicability

SLIDES 14-30: (figure-only slides; no recoverable text content)

SLIDE 31

Moment Matching: approximate an arbitrary pdf p(x) with E[x] = x̄, C[x] = P by p(x) ≈ N(x; x̄, P)!

here especially: p(x) = Σ_H p_H N(x; x_H, P_H) (normal mixtures)

x̄ = Σ_H p_H x_H
P = Σ_H p_H [ P_H + (x_H − x̄)(x_H − x̄)^⊤ ]   (the second summand is the spread term)
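The moment-matching formulae above translate directly into code. A minimal sketch (function name is illustrative): collapse a normal mixture to a single Gaussian with the same first two moments, including the spread term.

```python
import numpy as np

def moment_match(weights, means, covs):
    """Collapse a normal mixture sum_H p_H N(x; x_H, P_H) into one Gaussian
    by matching mean and covariance (with the spread term)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # ensure normalized weights
    means = np.asarray(means, dtype=float)
    x_bar = w @ means                    # x_bar = sum_H p_H x_H
    P = np.zeros_like(np.asarray(covs[0], dtype=float))
    for p_H, x_H, P_H in zip(w, means, covs):
        d = (x_H - x_bar).reshape(-1, 1)
        P += p_H * (P_H + d @ d.T)       # component covariance + spread term
    return x_bar, P
```

Note that two well-separated components yield a covariance dominated by the spread term: the matched Gaussian is much broader than either component.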

SLIDE 32

PDAF Filter: formally analogous to the Kalman filter

filtering (scan k−1): p(x_{k−1} | Z^{k−1}) = N(x_{k−1}; x_{k−1|k−1}, P_{k−1|k−1}) (→ initiation)
prediction (scan k): p(x_k | Z^{k−1}) ≈ N(x_k; x_{k|k−1}, P_{k|k−1}) (like Kalman)
filtering (scan k): p(x_k | Z^k) ≈ Σ_{j=0}^{m_k} p_k^j N(x_k; x_{k|k}^j, P_{k|k}^j) ≈ N(x_k; x_{k|k}, P_{k|k})

SLIDE 33

The mixture components and weights are:

x_{k|k}^j = x_{k|k−1} for j = 0,  x_{k|k}^j = x_{k|k−1} + W_k ν_k^j for j ≠ 0
P_{k|k}^j = P_{k|k−1} for j = 0,  P_{k|k}^j = P_{k|k−1} − W_k S_k W_k^⊤ for j ≠ 0

ν_k^j = z_k^j − H x_{k|k−1} (innovation), W_k = P_{k|k−1} H^⊤ S_k^{−1} (gain matrix), S_k = H P_{k|k−1} H^⊤ + R_k (innovation covariance)

weights: p_k^j = p_k^{j*} / Σ_i p_k^{i*}, with p_k^{0*} = (1 − PD) ρ_F and p_k^{j*} = PD |2π S_k|^{−1/2} e^{−ν_k^{j⊤} S_k^{−1} ν_k^j / 2} for j ≠ 0

SLIDE 34

Second-order Approximation of the Mixture Density:

Σ_{j=0}^{m_k} p_k^j N(x_k; x_{k|k}^j, P_{k|k}^j) ≈ N(x_k; x_{k|k}, P_{k|k}) with:

x_{k|k} = Σ_{j=0}^{m_k} p_k^j x_{k|k}^j
P_{k|k} = Σ_{j=0}^{m_k} p_k^j [ P_{k|k}^j + (x_{k|k}^j − x_{k|k})(x_{k|k}^j − x_{k|k})^⊤ ]

SLIDE 35

Insert the component means into the matched mean:

x_{k|k} = Σ_{j=0}^{m_k} p_k^j x_{k|k}^j, with x_{k|k}^0 = x_{k|k−1} and x_{k|k}^j = x_{k|k−1} + W_k ν_k^j

P_{k|k} = Σ_{j=0}^{m_k} p_k^j [ P_{k|k}^j + (x_{k|k}^j − x_{k|k})(x_{k|k}^j − x_{k|k})^⊤ ]

SLIDE 36

x_{k|k} = Σ_{j=0}^{m_k} p_k^j x_{k|k}^j = p_k^0 x_{k|k−1} + Σ_{j=1}^{m_k} p_k^j ( x_{k|k−1} + W_k ν_k^j )

SLIDE 37

x_{k|k} = x_{k|k−1} ( p_k^0 + Σ_{j=1}^{m_k} p_k^j ) + W_k Σ_{j=1}^{m_k} p_k^j ν_k^j

The first bracket equals 1 (the weights are normalized); the remaining sum is a mean innovation!

SLIDE 38

x_{k|k} = x_{k|k−1} + W_k ν̄_k

Combined Innovation: ν̄_k = Σ_{j=1}^{m_k} p_k^j ν_k^j

SLIDE 39

Insert the component covariances P_{k|k}^0 = P_{k|k−1} and P_{k|k}^j = P_{k|k−1} − W_k S_k W_k^⊤ into P_{k|k}:

P_{k|k} = P_{k|k−1} − Σ_{j=1}^{m_k} p_k^j W_k S_k W_k^⊤ + Σ_{j=1}^{m_k} p_k^j W_k (ν_k^j − ν̄_k)(ν_k^j − ν̄_k)^⊤ W_k^⊤

SLIDE 40

P_{k|k} = P_{k|k−1} − (1 − p_k^0) W_k S_k W_k^⊤ + W_k [ Σ_{j=1}^{m_k} p_k^j ν_k^j ν_k^{j⊤} − ν̄_k ν̄_k^⊤ ] W_k^⊤

SLIDE 41

PDAF Filter: formally analogous to the Kalman filter

filtering (scan k−1): p(x_{k−1} | Z^{k−1}) = N(x_{k−1}; x_{k−1|k−1}, P_{k−1|k−1}) (→ initiation)
prediction (scan k): p(x_k | Z^{k−1}) ≈ N(x_k; x_{k|k−1}, P_{k|k−1}) (like Kalman)
filtering (scan k): p(x_k | Z^k) ≈ Σ_{j=0}^{m_k} p_k^j N(x_k; x_{k|k}^j, P_{k|k}^j) ≈ N(x_k; x_{k|k}, P_{k|k})

combined innovation: ν̄_k = Σ_{j=1}^{m_k} p_k^j ν_k^j, with ν_k^j = z_k^j − H x_{k|k−1}
Kalman gain matrix: W_k = P_{k|k−1} H^⊤ S_k^{−1}, S_k = H P_{k|k−1} H^⊤ + R_k
weighting factors: p_k^j = p_k^{j*} / Σ_i p_k^{i*}, with p_k^{0*} = (1 − PD) ρ_F and p_k^{j*} = PD |2π S_k|^{−1/2} e^{−ν_k^{j⊤} S_k^{−1} ν_k^j / 2} for j ≥ 1

filtering update (Kalman): x_{k|k} = x_{k|k−1} + W_k ν̄_k
P_{k|k} = P_{k|k−1} − (1 − p_k^0) W_k S_k W_k^⊤ (Kalman part) + W_k [ Σ_{j=1}^{m_k} p_k^j ν_k^j ν_k^{j⊤} − ν̄_k ν̄_k^⊤ ] W_k^⊤ (spread of innovations)
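The PDAF update summarized above can be sketched as a single function. This is a minimal illustration (the function name and interface are not from the lecture); it computes the association weights, the combined innovation, and the covariance with the spread-of-innovations term:

```python
import numpy as np

def pdaf_update(x_pred, P_pred, Z, H, R, PD, rho_F):
    """One PDAF filtering step for predicted state (x_pred, P_pred) and a
    list Z of measurements, with detection probability PD and
    false-return density rho_F."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain matrix
    nus = [z - H @ x_pred for z in Z]             # innovations
    d = S.shape[0]
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    p_star = [(1 - PD) * rho_F]                   # j = 0: missed detection
    for nu in nus:
        p_star.append(PD * norm * np.exp(-0.5 * nu @ np.linalg.solve(S, nu)))
    p = np.array(p_star) / sum(p_star)            # normalized weights
    if nus:
        nu_bar = sum(pj * nu for pj, nu in zip(p[1:], nus))
        spread = sum(pj * np.outer(nu, nu) for pj, nu in zip(p[1:], nus)) \
            - np.outer(nu_bar, nu_bar)
    else:
        nu_bar = np.zeros(H.shape[0])
        spread = np.zeros_like(S)
    x = x_pred + W @ nu_bar                       # combined-innovation update
    P = P_pred - (1 - p[0]) * W @ S @ W.T + W @ spread @ W.T
    return x, P
```

For PD = 1, ρF = 0 and a single measurement the weights become p = (0, 1) and the step reduces to the ordinary Kalman update.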

SLIDE 42

PDAF: Characteristic Properties

  • filtering: processing of a combined innovation
  • all data Z_k in the gate are considered
  • the weights p_k^j are data dependent: the update is not linear
  • missing measurement: P_{k|k−1} enters with weight p_k^0
  • the "usual" Kalman covariance is scaled by (1 − p_k^0)
  • the spread term is positive semidefinite: larger covariance
  • therefore: data-driven adaptivity
  • nonlinear estimator: data-dependent error
  • performance prediction only via simulations

Problem: Multimodality is lost!

SLIDE 43

The qualitative shape of p(x_k | Z^k) is often much simpler than its correct representation: a few pronounced modes.

adaptive solution: nearly optimal approximation

SLIDE 44

  • individual gating: exclude irrelevant data before continuing existing track hypotheses H_{k−1}!
    → limiting case: Kalman filter (KF)

SLIDE 45

  • pruning: kill hypotheses of very small weight, after calculating the weights p_{H_k} and before filtering!
    → limiting case: Nearest-Neighbor filter (NN)

SLIDE 46

  • local combining: merge similar hypotheses after the complete calculation of the pdfs!
    → limiting case: PDAF (global combining)

SLIDE 47

Successive Local Combining

Partial sums of similar densities → moment matching:

Σ_{H_k ∈ H_k*} p_{H_k} N(x_k; x_{H_k}, P_{H_k}) ≈ p_{H_k*} N(x_k; x_{H_k*}, P_{H_k*}),

where H_k* ⊂ H_k defines the effective hypothesis H_k*.

SLIDE 48

similarity criterion: d(H1, H2) < µ with (e.g.) d(H1, H2) = (x_{H1} − x_{H2})^⊤ (P_{H1} + P_{H2})^{−1} (x_{H1} − x_{H2})

Start with the hypothesis of highest weight H1 → search for a similar hypothesis (in order of decreasing p_H) → merge: (H1, H) → H1* → continue the search (p_H decreasing) . . .
→ restart with the hypothesis of the next-highest weight H2 → . . .
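The greedy merging procedure above can be sketched as follows; this is a simplified illustration (function name, pairwise merge order, and the stopping behavior are assumptions, not the lecture's exact algorithm), using the stated similarity test and pairwise moment matching:

```python
import numpy as np

def combine_similar(weights, means, covs, mu):
    """Successive local combining: take the highest-weight hypothesis,
    absorb every hypothesis with d(H1, H2) < mu via moment matching,
    then restart with the next-highest remaining weight."""
    hyps = sorted(zip(weights, means, covs), key=lambda h: -h[0])
    merged = []
    while hyps:
        p1, x1, P1 = hyps.pop(0)            # current highest weight
        rest = []
        for p2, x2, P2 in hyps:
            d = x1 - x2
            if d @ np.linalg.solve(P1 + P2, d) < mu:
                # pairwise moment matching (including the spread term)
                p = p1 + p2
                x = (p1 * x1 + p2 * x2) / p
                P = (p1 * (P1 + np.outer(x1 - x, x1 - x)) +
                     p2 * (P2 + np.outer(x2 - x, x2 - x))) / p
                p1, x1, P1 = p, x, P
            else:
                rest.append((p2, x2, P2))
        merged.append((p1, x1, P1))
        hyps = rest
    return merged
```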

SLIDE 49

  • In many cases: good approximations → quasi-optimality
  • PDAF, JPDAF: H_k* = H_k → limited applicability
  • robustness → the details are mostly irrelevant

SLIDE 50

Retrodiction for Gaussian Mixtures

wanted: p(x_l | Z^k) ← p(x_{l+1} | Z^k) for l < k

p(x_l | Z^k) = Σ_{H_k} p(x_l, H_k | Z^k) = Σ_{H_k} p(x_l | H_k, Z^k) p(H_k | Z^k),

where p(x_l | H_k, Z^k) contains no ambiguities and p(H_k | Z^k) is known from filtering.

Calculate p(x_l | H_k, Z^k) as in the case PD = 1, ρF = 0:
p(x_l | H_k, Z^k) = N(x_l; x_{H_k}(l|k), P_{H_k}(l|k)) with parameters given by the Rauch-Tung-Striebel formulae:

x_{H_k}(l|k) = x_{H_k}(l|l) + W_{H_k}(l|k) ( x_{H_k}(l+1|k) − x_{H_k}(l+1|l) )
P_{H_k}(l|k) = P_{H_k}(l|l) + W_{H_k}(l|k) ( P_{H_k}(l+1|k) − P_{H_k}(l+1|l) ) W_{H_k}(l|k)^⊤

gain matrix: W_{H_k}(l|k) = P_{H_k}(l|l) F_{l+1|l}^⊤ P_{H_k}(l+1|l)^{−1}
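One Rauch-Tung-Striebel step per hypothesis can be written directly from the formulae above (function name and argument order are illustrative):

```python
import numpy as np

def rts_step(x_ll, P_ll, x_pred, P_pred, x_next, P_next, F):
    """One Rauch-Tung-Striebel retrodiction step: combine the filtering
    result (l|l), the prediction (l+1|l), and the smoothed estimate
    (l+1|k) into the smoothed estimate (l|k)."""
    W = P_ll @ F.T @ np.linalg.inv(P_pred)            # smoother gain
    x_lk = x_ll + W @ (x_next - x_pred)
    P_lk = P_ll + W @ (P_next - P_pred) @ W.T
    return x_lk, P_lk
```

If the smoothed estimate at l+1 coincides with the prediction, the step leaves the filtering result at l unchanged, as expected from the formulae.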

SLIDE 51

Retrodiction of Hypotheses' Weights

Consider an approximation: neglect the RTS step!

p(x_l | H_k, Z^k) = N(x_l; x_{H_k}(l|k), P_{H_k}(l|k)) ≈ N(x_l; x_{H_k}(l|l), P_{H_k}(l|l))

p(x_l | Z^k) ≈ Σ_{H_l} p*_{H_l} N(x_l; x_{H_l}(l|l), P_{H_l}(l|l)) with recursively defined weights:

p*_{H_k} = p_{H_k},  p*_{H_l} = Σ p*_{H_{l+1}} (summation over all histories H_{l+1} with equal pre-history)

  • Strong sons strengthen weak fathers.
  • Weak sons weaken even strong fathers.
  • If all sons die, the father must die too.

SLIDE 52

Track Extraction: Initiation of the PDF Iteration

extraction of target tracks: detection on a higher level of abstraction
start: data sets Z_k = {z_k^j}_{j=1}^{m_k} (sensor performance: PD, ρF, R)
goal: detect a target trajectory in a time series Z^k = {Z_i}_{i=1}^k!

at first, simplifying assumptions:
  • The targets in the sensor's field of view (FoV) are well separated.
  • The sensor data in the FoV in scan i are produced simultaneously.

SLIDE 53

decision between two competing hypotheses:
h1: Besides false returns, Z^k also contains target measurements.
h0: There is no target in the FoV; all data in Z^k are false.

statistical decision errors:
P1 = Prob(accept h1 | h1), analogous to the sensor's PD
P0 = Prob(accept h1 | h0), analogous to the sensor's PF

SLIDE 54

Practical Approach: Sequential Likelihood Ratio Test

Goal: decide as fast as possible for given decision errors P0, P1! Consider the ratio of the conditional probabilities p(h1|Z^k), p(h0|Z^k) and the likelihood ratio LR(k) = p(Z^k|h1)/p(Z^k|h0) as an intuitive decision function:

p(h1|Z^k) / p(h0|Z^k) = [ p(Z^k|h1) / p(Z^k|h0) ] · [ p(h1) / p(h0) ],  a priori: p(h1) = p(h0)

SLIDE 55

Starting from a time window of length k = 1, calculate the test function LR(k) successively and compare it with two thresholds A, B:

If LR(k) < A, accept hypothesis h0 (i.e. no target exists)!
If LR(k) > B, accept hypothesis h1 (i.e. a target exists in the FoV)!
If A < LR(k) < B, wait for new data Z_{k+1} and repeat with LR(k + 1)!
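The test loop above can be sketched as follows. This is a minimal illustration assuming the thresholds B = P1/P0 and A = (1 − P1)/(1 − P0) used for track confirmation and deletion later in the lecture, and a caller-supplied sequence of per-scan likelihood-ratio factors:

```python
def sequential_lr_test(lr_factors, P0, P1):
    """Run the sequential likelihood ratio test over an iterable of
    per-scan LR factors; return (decision, k, LR) with decision in
    {'h0', 'h1', 'undecided'}."""
    A = (1 - P1) / (1 - P0)   # lower threshold: accept h0 (no target)
    B = P1 / P0               # upper threshold: accept h1 (target exists)
    LR = 1.0
    k = 0
    for factor in lr_factors:
        k += 1
        LR *= factor          # LR(k) = LR(k-1) * per-scan factor
        if LR < A:
            return "h0", k, LR
        if LR > B:
            return "h1", k, LR
    return "undecided", k, LR # wait for more data
```

The test stops as soon as either threshold is crossed, so strongly informative scans lead to early decisions.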

SLIDE 56

Iterative Calculation of the Likelihood Ratio

LR(k) = p(Z^k|h1) / p(Z^k|h0) = ∫ dx_k p(Z_k, m_k, x_k, Z^{k−1} | h1) / p(Z_k, m_k, Z^{k−1} | h0)

SLIDE 57

= ∫ dx_k p(Z_k, m_k | x_k) p(x_k | Z^{k−1}, h1) p(Z^{k−1} | h1) / [ |FoV|^{−m_k} p_F(m_k) p(Z^{k−1} | h0) ]

SLIDE 58

= [ ∫ dx_k p(Z_k, m_k | x_k, h1) p(x_k | Z^{k−1}, h1) / ( |FoV|^{−m_k} p_F(m_k) ) ] · LR(k − 1)

basic idea: iterative calculation!

SLIDE 59

Let H_k = {E_k, H_{k−1}} be an interpretation history of the time series Z^k = {Z_k, Z^{k−1}}, with E_k = E_k^0: the target was not detected, and E_k = E_k^j: z_k^j ∈ Z_k is a target measurement.

p(x_k | Z^{k−1}, h1) = Σ_{H_{k−1}} p(x_k | H_{k−1}, Z^{k−1}, h1) p(H_{k−1} | Z^{k−1}, h1) — the standard MHT prediction!
p(Z_k, m_k | x_k, h1) = Σ_{E_k} p(Z_k, E_k | x_k, h1) — the standard MHT likelihood function!

The calculation of the likelihood ratio is just a by-product of Bayesian MHT tracking.

SLIDE 60

Iteration Formula for LR(k) = p(Z^k|h1)/p(Z^k|h0)

initiation: k = 0, j_0 = 0, λ_{j_0} = 1
recursion: LR(k + 1) = Σ_{j^{k+1}} λ_{j^{k+1}} = Σ_{j_{k+1}=0}^{m_{k+1}} Σ_{j^k} λ_{j_{k+1} j_k} λ_{j^k}

with: λ_{j_{k+1} j_k} = 1 − PD for j_{k+1} = 0, and λ_{j_{k+1} j_k} = (PD / ρF) N(ν_{j_{k+1} j_k}; 0, S_{j_{k+1} j_k}) for j_{k+1} ≠ 0

convenient notation: with j^k = (j_k, . . . , j_1) let Σ_{j^k} λ_{j^k} = Σ_{j_k=0}^{m_k} · · · Σ_{j_1=0}^{m_1} λ_{j_k . . . j_1}

SLIDE 61

The sub-track parameters follow Kalman-type recursions:

innovation: ν_{j_{k+1} j_k} = z_{j_{k+1}} − H_{j_{k+1}} x_{j_{k+1}|k}
innovation covariance: S_{j_{k+1} j_k} = H_{j_{k+1}} P_{j_{k+1}|k} H_{j_{k+1}}^⊤ + R_{j_{k+1}}
state update: x_{j_{k+1}|k} = F_{j_{k+1}} x_{j_k},  x_{j_k} = x_{j_k|k−1} + W_{j_k j_{k−1}} ν_{j_k j_{k−1}}
covariances: P_{j_{k+1}|k} = F_{j_{k+1}} P_{j_k} F_{j_{k+1}}^⊤ + D_{j_{k+1}},  P_{j_k} = P_{j_k|k−1} − W_{j_k j_{k−1}} S_{j_k j_{k−1}} W_{j_k j_{k−1}}^⊤

SLIDE 62

Exercise: Show that these recursion formulae for calculating the decision function hold.

SLIDE 63

Sequential Track Extraction: Discussion

  • LR(k) is given by a growing number of summands, each related to a particular interpretation history. The tuple {λ_{j^k}, x_{j^k}, P_{j^k}} is called a sub-track.

SLIDE 64

  • To mitigate growing memory problems, all approximations discussed for track maintenance can be used, as long as they do not significantly affect LR(k):
    – individual gating: exclude data not likely to be associated
    – pruning: kill sub-tracks contributing only marginally to the test function
    – local combining: merge similar sub-tracks {λ_i, x_i, P_i}_i → {λ, x, P} with λ = Σ_i λ_i, x = (1/λ) Σ_i λ_i x_i, P = (1/λ) Σ_i λ_i [ P_i + (x_i − x)(x_i − x)^⊤ ]

SLIDE 65

  • The LR test ends with a decision in favor of or against the hypotheses h0 (no target) or h1 (a target exists). The thresholds have an intuitive interpretation!

SLIDE 66

track extraction at t_k: decide in favor of h1! initiation of the pdf iteration (track maintenance):

Normalize the coefficients λ_{j^k}: p_{j^k} = λ_{j^k} / Σ_{j^k} λ_{j^k}!
(λ_{j^k}, x_{j^k}, P_{j^k}) → p(x_k | Z^k) = Σ_{j^k} p_{j^k} N(x_k; x_{j^k}, P_{j^k})
Continue track extraction with the remaining sensor data!

sequential LR test for track monitoring: after deciding in favor of h1, reset LR(0) = 1 and calculate LR(k) from p(x_k | Z^k)!
track confirmation: LR(k) > P1/P0 → reset LR(0) = 1!
track deletion: LR(k) < (1 − P1)/(1 − P0) → possibly track re-initiation

SLIDE 67

DEMONSTRATION (simulated)

Exercise (voluntary): Simulate a detection process with a given PD and target measurements with a given R, and realize the track extraction procedure.

SLIDE 68

ABRAHAM WALD (1902-1950): Austro-Hungarian mathematician who contributed to decision theory, geometry, and econometrics. He founded the theory of economic equilibria at Oskar Morgenstern's institute in Vienna ("Berechnung der Ausschaltung von Saisonschwankungen", Springer Verlag, 1936), a basis of Game Theory: Morgenstern, John von Neumann, John Forbes Nash (1994: Nobel Prize with Reinhard Selten, Bonn University) → sensor management! Founder of statistical sequential analysis in WW II. In 1950 he gave a plenary talk at the International Congress of Mathematicians (ICM), Cambridge (Mass.): "Basic ideas of a general theory of statistical decision rules" (cf. 1900: Hilbert's 23 problems). His student and friend Jacob Wolfowitz (statistician, information theory) wrote the classical textbook "Coding Theorems of Information Theory" (1978), posthumously attacked by Ronald Fisher ("an incompetent book on statistics") and passionately defended by Jerzy Neyman, as eminent a statistician as Fisher.