Sequential Monte Carlo Methods: Particle Filter
Martin Ulmke, Head of Research Group Distributed Sensor Systems, Sensor Data and Information Fusion, Fraunhofer FKIE
Lecture: Introduction to Sensor Data Fusion, 12 December 2018

Contents
1. Dynamic State Estimation: problem setting; conditional probability density
2. Monte Carlo Methods
3. Sequential Bayesian Estimation: linear Gaussian systems; grid-based methods; non-linear, non-Gaussian systems
4. Sequential Monte Carlo Methods: "Sequential Importance Sampling" (SIS); "Sampling Importance Resampling" (SIR)
5. Multitarget Tracking: "combinatorial disaster"; probability hypothesis density
6. Extensions and Variants of Particle Filters
7. Summary
8. Literature
Dynamic State Estimation: Problem Setting

[Figure: state space vs. time t; unknown target trajectory from t = 0 to k; measurements at k = 1, 2, ...; false alarms and missed detections; probability density]

Target trajectory: X(t)
Measurements: Z_{1:k} = {z_1, ..., z_k}
Probability density: p(X_{0:k} | Z_{1:k}) with X_{0:k} ≡ {x_0, ..., x_k}
Dynamic State Estimation: Conditional Probability Density

The conditional probability density p(X_{0:k} | Z_{1:k}) contains our collective knowledge about the target states X_{0:k} = {x_0, ..., x_k} at the time instances t_j (j = 1, ..., k), given the measurements Z_{1:k} ≡ {z_1, ..., z_k} and prior knowledge, e.g., about the target dynamics.

Specific marginalized densities:

  Filtering:            p(x_k | Z_{1:k}) = ∫ p(X_{0:k} | Z_{1:k}) dX_{0:k−1}
  Retrodiction (j < k): p(x_j | Z_{1:k}) = ∫ p(X_{0:k} | Z_{1:k}) ∏_{i≠j} dx_i

Expectation values:

  E[f(X_{0:k})] = ∫ p(X_{0:k} | Z_{1:k}) f(X_{0:k}) dX_{0:k}

In general, these integrals are not analytically solvable ⇒ "Monte Carlo methods": replace the integral by a sum over weighted "random paths",

  E[f(X_{0:k})] ≈ ∑_{i=1}^N w_k^(i) f(X_{0:k}^(i))   ⇔   p(X_{0:k} | Z_{1:k}) ≈ ∑_{i=1}^N w_k^(i) δ(X_{0:k} − X_{0:k}^(i))
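The weighted-sample approximation of an expectation takes only a few lines; a minimal 1-D sketch with f(x) = x² and a standard-normal density p (both illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# "Random paths" drawn directly from p = N(0, 1); with direct sampling all
# weights are equal, w^(i) = 1/N.
x = rng.normal(0.0, 1.0, size=N)
w = np.full(N, 1.0 / N)

# E[f(X)] ≈ sum_i w^(i) f(x^(i)); here f(x) = x^2, whose exact value under
# N(0, 1) is 1
estimate = np.sum(w * x**2)
```

The Monte Carlo error of such an estimate shrinks like 1/√N, which is why the particle number N dominates both accuracy and cost.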
Monte Carlo Methods

[Figure: state space vs. time t; unknown target trajectory; measurements at k = 1, 2, ...; probability density approximated by particle trajectories]

Random path (trajectory): X_{0:k}^(i)
Weight: w_k^(i)

  p(X_{0:k} | Z_{1:k}) ≈ ∑_{i=1}^N w_k^(i) δ(X_{0:k} − X_{0:k}^(i))

How to determine X_{0:k}^(i) and w_k^(i)?
Monte Carlo Methods: Sampling

a) Simple sampling: X_{0:k}^(i) uniformly distributed over the state space,
     w_k^(i) ∝ p(X_{0:k}^(i) | Z_{1:k})
   → very inefficient

b) Perfect sampling (optimal sampling of the phase space): X_{0:k}^(i) ∼ p(X_{0:k}^(i) | Z_{1:k}),
     w_k^(i) = 1/N
   → sampling from p is difficult

c) Importance sampling: X_{0:k}^(i) ∼ q(X_{0:k}^(i) | Z_{1:k}),
     w_k^(i) ∝ p(X_{0:k}^(i) | Z_{1:k}) / q(X_{0:k}^(i) | Z_{1:k})
   → q arbitrary (in principle)

[Figure: sketches of the density p (and the importance density q) with the sample positions for each scheme]
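Scheme (c) follows directly from the weight formula; in this minimal sketch the target p = N(0, 1) and importance density q = N(1, 2²) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def p(x):       # target density (illustrative: standard normal)
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def q(x):       # importance density (illustrative: N(1, 2^2))
    return np.exp(-0.5 * ((x - 1.0) / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

# (c) importance sampling: x^(i) ~ q,  w^(i) ∝ p(x^(i)) / q(x^(i))
x = rng.normal(1.0, 2.0, size=N)
w = p(x) / q(x)
w /= w.sum()                      # normalize the weights

mean_est = np.sum(w * x)          # E[X] under p is 0
```

The estimate is unbiased for any q that covers the support of p, but its variance depends strongly on how well q matches p.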
Sequential Bayesian Estimation

Often, quantities at time step k can be calculated recursively from the former time step k − 1:

  p(X_{0:k} | Z_{1:k}) = p(z_k | X_{0:k}, Z_{1:k−1}) p(X_{0:k} | Z_{1:k−1}) / p(z_k | Z_{1:k−1})   (Bayes)
                       ∝ p(z_k | X_{0:k}, Z_{1:k−1}) p(x_k | X_{0:k−1}, Z_{1:k−1}) p(X_{0:k−1} | Z_{1:k−1})
                       = p(z_k | x_k) p(x_k | x_{k−1}) p(X_{0:k−1} | Z_{1:k−1})

where the last step uses that the measurement does not depend on the history and that the dynamics are Markovian:

  Measurement eq.:  z_k = h_k(x_k, u_k),      p(z_k | x_k) = ∫ δ(z_k − h_k(x_k, u_k)) p_u(u_k) du_k
  Markov dynamics:  x_k = f_k(x_{k−1}, v_k),  p(x_k | x_{k−1}) = ∫ δ(x_k − f_k(x_{k−1}, v_k)) p_v(v_k) dv_k

  u_k ∼ p_u(u): measurement noise;   v_k ∼ p_v(v): process noise
Sequential Bayesian Estimation

  p(X_{0:k} | Z_{1:k}) ∝ p(z_k | x_k) p(x_k | x_{k−1}) p(X_{0:k−1} | Z_{1:k−1})

Often, only the current estimate x_k is needed, not the history X_{0:k}. Integrating out X_{0:k−1} yields the recursive filter equations for time step k:

  Measurement update:  p(x_k | Z_{1:k}) ∝ p(z_k | x_k) p(x_k | Z_{1:k−1})
  Prediction:          p(x_k | Z_{1:k−1}) = ∫ dx_{k−1} p(x_k | x_{k−1}) p(x_{k−1} | Z_{1:k−1})

In general, this is only a conceptual solution; it is analytically solvable in special cases only.
Sequential Bayesian Estimation: Linear Gaussian Systems

  Target dynamics:  x_k = F_k x_{k−1} + v_k,  v_k ∼ N(0, D_k)
  Measurement eq.:  z_k = H_k x_k + u_k,      u_k ∼ N(0, R_k)

Then the conditional densities are normally distributed:

  p(x_{k−1} | Z_{1:k−1}) = N(x_{k−1}; x_{k−1|k−1}, P_{k−1|k−1})
  p(x_k | Z_{1:k−1})     = N(x_k; x_{k|k−1}, P_{k|k−1})
  p(x_k | Z_{1:k})       = N(x_k; x_{k|k}, P_{k|k})

with the Kalman filter equations (see Lecture II):

  x_{k|k−1} = F_k x_{k−1|k−1},                        P_{k|k−1} = F_k P_{k−1|k−1} F_k^T + D_k
  x_{k|k}   = x_{k|k−1} + W_k (z_k − H_k x_{k|k−1}),  P_{k|k} = P_{k|k−1} − W_k S_k W_k^T
  W_k = P_{k|k−1} H_k^T S_k^{−1},                     S_k = H_k P_{k|k−1} H_k^T + R_k
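The boxed recursion can be transcribed almost literally; a minimal sketch for a 1-D static-state model (the F, D, H, R values and the measurement sequence are made up for illustration):

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, F, D, H, R):
    """One prediction + update step of the Kalman filter equations above."""
    # Prediction
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + D
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance S_k
    W = P_pred @ H.T @ np.linalg.inv(S)   # gain W_k
    x_post = x_pred + W @ (z - H @ x_pred)
    P_post = P_pred - W @ S @ W.T
    return x_post, P_post

# Illustrative 1-D example: near-static state, noisy direct measurements
F = np.array([[1.0]]); D = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[10.0]])
for z in [1.2, 0.8, 1.1, 0.9]:
    x, P = kalman_step(x, P, np.array([z]), F, D, H, R)
```

After the four updates the estimate has moved close to the measurements and the posterior variance has shrunk well below the prior variance of 10.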
Sequential Bayesian Estimation: Grid-Based Methods

Finite number of discrete target states x_k^(i) (i = 1, ..., N). Let P(x_{k−1} = x_{k−1}^(i) | Z_{1:k−1}) ≡ w_{k−1|k−1}^(i) be given. Then:

  p(x_k | Z_{1:k−1}) = ∑_{i=1}^N w_{k|k−1}^(i) δ(x_k − x_k^(i))
  p(x_k | Z_{1:k})   = ∑_{i=1}^N w_{k|k}^(i) δ(x_k − x_k^(i))

with

  w_{k|k−1}^(i) = ∑_{j=1}^N w_{k−1|k−1}^(j) p(x_k^(i) | x_{k−1}^(j))
  w_{k|k}^(i)  ∝ w_{k|k−1}^(i) p(z_k | x_k^(i))

Optimal filter for arbitrary densities p(x_k^(i) | x_{k−1}^(j)) and p(z_k | x_k^(i)).
Approximation for continuous target states.
Not feasible for higher-dimensional systems!
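One prediction/update cycle of the grid recursion, on an illustrative 1-D grid with a random-walk transition and a Gaussian likelihood (all model choices are mine, for the sketch):

```python
import numpy as np

# Illustrative grid of N discrete states on a line
grid = np.linspace(-5.0, 5.0, 101)
N = len(grid)

# Prior weights w^(i)_{k-1|k-1}: uniform
w = np.full(N, 1.0 / N)

# Transition matrix p(x^(i)_k | x^(j)_{k-1}): short-range random walk
T = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.5) ** 2)
T /= T.sum(axis=0, keepdims=True)      # each column sums to 1

# Prediction: w^(i)_{k|k-1} = sum_j w^(j)_{k-1|k-1} p(x^(i)_k | x^(j)_{k-1})
w_pred = T @ w

# Update with a measurement z_k = 1.0 and a Gaussian likelihood
z = 1.0
lik = np.exp(-0.5 * ((z - grid) / 1.0) ** 2)
w_post = w_pred * lik
w_post /= w_post.sum()                 # normalize

estimate = np.sum(w_post * grid)       # posterior mean on the grid
```

The cost of the prediction step is O(N²) per time step, which is why the method breaks down when N grows exponentially with the state dimension.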
Sequential Bayesian Estimation: Non-Linear, Non-Gaussian Systems

  Target dynamics:  x_k = f_k(x_{k−1}, v_k),  v_k ∼ p_v(v)
  Measurement eq.:  z_k = h_k(x_k, u_k),      u_k ∼ p_u(u)

Non-linear target dynamics:
- pilot / driver behavior
- different maneuvers
- influence of terrain
- constraints (maximum velocity, roads, coastlines, ...)

Non-linear measurement equation:
- coordinate transformation
- integration over a state-space sector
- detection-target assignments

Non-Gaussian process or measurement noise.

In general, arbitrary multi-modal probability densities p(x_k | Z_{1:k}) are possible.
Sequential Bayesian Estimation: Non-Linear, Non-Gaussian Systems

Example: single target with P_d < 1 and false alarms (p_f ≠ 0).

The measurement at time step k consists of m_k detections: z_k = {z_k^(1), ..., z_k^(m_k)}.

  Measurement eq.:  z_k^(i) = h_k(x_k, u_k),  p_1(z_k^(i) | x_k) = ∫ δ(z_k^(i) − h_k(x_k, u_k)) p_u(u_k) du_k
  Markov dynamics:  x_k = f_k(x_{k−1}, v_k),  p(x_k | x_{k−1}) = ∫ δ(x_k − f_k(x_{k−1}, v_k)) p_v(v_k) dv_k

  u_k: measurement noise;  v_k: process noise

Summing over the possible detection ↔ target assignments gives the likelihood function:

  p(z_k | x_k) = ∏_{j=1}^{m_k} p_f(z_k^(j)) [ (1 − P_d)  (only false alarms)  +  P_d ∑_{i=1}^{m_k} p_1(z_k^(i) | x_k) / p_f(z_k^(i))  (one detection) ]

→ Multi-modal structure even for linear, Gaussian system equations.
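The assignment-sum likelihood is short to evaluate; in this sketch the uniform clutter density p_f = 1/V and the Gaussian p_1 are illustrative choices, and the function name is mine:

```python
import numpy as np

def likelihood(z_list, x, Pd=0.9, V=100.0, sigma=1.0):
    """p(z_k | x_k) for one target with Pd < 1 and false alarms, following
    the assignment sum above: either all detections are false alarms, or
    exactly one of them originates from the target."""
    pf = 1.0 / V                                      # clutter density p_f
    p1 = (np.exp(-0.5 * ((np.asarray(z_list) - x) / sigma) ** 2)
          / (sigma * np.sqrt(2.0 * np.pi)))           # p_1(z^(i) | x)
    prefactor = pf ** len(z_list)                     # prod_j p_f(z^(j))
    return prefactor * ((1.0 - Pd) + Pd * np.sum(p1 / pf))

# The likelihood is multi-modal in x: it peaks near every detection
z = [0.0, 5.0]                    # two detections
near = likelihood(z, 0.0)         # target state near one detection
far = likelihood(z, 20.0)         # target state far from all detections
```

Evaluating the function over a range of x shows one mode per detection plus a flat "all false alarms" floor, which is exactly the multi-modal structure the slide refers to.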
Sequential Monte Carlo Methods

Approximate p(X_{0:k} | Z_{1:k}) by weighted random paths X_{0:k}^(i):

  p(X_{0:k} | Z_{1:k}) ≈ ∑_{i=1}^N w_k^(i) δ(X_{0:k} − X_{0:k}^(i)),   X_{0:k}^(i) ∼ q(X_{0:k}^(i) | Z_{1:k}),   w_k^(i) ∝ p(X_{0:k}^(i) | Z_{1:k}) / q(X_{0:k}^(i) | Z_{1:k})

Reminder (sequential estimation):

  p(X_{0:k} | Z_{1:k}) ∝ p(z_k | x_k) p(x_k | x_{k−1}) p(X_{0:k−1} | Z_{1:k−1})

q is in principle arbitrary; the optimum would be q ≡ p. Ansatz:

  q(X_{0:k} | Z_{1:k}) = q(x_k | x_{k−1}, z_k) q(X_{0:k−1} | Z_{1:k−1})

Inserting p and q into w_k^(i) gives ...
Sequential Monte Carlo Methods: "Sequential Importance Sampling" (SIS)

Recursive calculation of the particle weights:

  w_k^(i) ∝ w_{k−1}^(i) p(z_k | x_k^(i)) p(x_k^(i) | x_{k−1}^(i)) / q(x_k^(i) | x_{k−1}^(i), z_k)

Integrating out X_{0:k−1} yields the filtered density:

  p(x_k | Z_{1:k}) ≈ ∑_{i=1}^N w_k^(i) δ(x_k − x_k^(i))

The "importance density" q(x_k | x_{k−1}^(i), z_k) is arbitrary.

SIS algorithm:
  Initialize {x_0^(i)}, {w_0^(i)}
  For k = 1, 2, ...
    Sample N "particles" x_k^(i) ∼ q(x_k | x_{k−1}^(i), z_k)
    Calculate {w_k^(i)}
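One SIS recursion with the Markov density as importance density, q(x_k | x_{k−1}, z_k) = p(x_k | x_{k−1}), so the weight update reduces to w_k ∝ w_{k−1} p(z_k | x_k); the random-walk dynamics and Gaussian likelihood are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000

# Initialize particles x_0^(i) and weights w_0^(i)
x = rng.normal(0.0, 1.0, size=N)
w = np.full(N, 1.0 / N)

# One SIS step: sample from the Markov density, then reweight with the
# likelihood of the measurement z_k.
z = 0.5
x = x + rng.normal(0.0, 0.3, size=N)     # x_k^(i) ~ p(x_k | x_{k-1}^(i))
w = w * np.exp(-0.5 * (z - x) ** 2)      # w_k ∝ w_{k-1} p(z_k | x_k^(i))
w /= w.sum()                             # normalize

mean_est = np.sum(w * x)                 # weighted filter estimate
```

Iterating this step without ever resampling is exactly the plain SIS algorithm of the box above, and it is where the degeneracy discussed next appears.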
Sequential Monte Carlo Methods: "Sequential Importance Sampling" (SIS)

Properties of SIS:
- converges ∼ 1/√N
- easy to implement
- easy to parallelize
- does not work (usually)!

Reason: "degeneracy". After a few iterations, almost all of the weight is concentrated on a few particles (or on a single one).
Sequential Monte Carlo Methods: "Degeneracy"

A measure of degeneracy is the estimated effective particle number

  N_eff = 1 / ∑_{i=1}^N (w_k^(i))²,   with 1 ≤ N_eff ≤ N and N_eff = N for w_k^(i) ≡ 1/N.

Mitigating the degeneracy problem:
- many, many particles (not always possible)
- a sensible choice of the "importance density" q(x_k | x_{k−1}^(i), z_k)
- "resampling"
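The effective particle number is a one-liner; a small sketch (function name is mine):

```python
import numpy as np

def effective_particle_number(w):
    """N_eff = 1 / sum_i (w^(i))^2 for normalized weights w^(i)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                  # make sure the weights are normalized
    return 1.0 / np.sum(w ** 2)

# Uniform weights give N_eff = N; one dominant weight gives N_eff ≈ 1
n_eff_uniform = effective_particle_number(np.full(100, 1.0 / 100))
n_eff_degenerate = effective_particle_number([0.97] + [0.001] * 30)
```

In practice, filters often trigger resampling only when N_eff drops below a threshold such as N/2, so the statistic doubles as a cheap health check.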
Sequential Monte Carlo Methods: Optimal "Importance Density"

  q(x_k | x_{k−1}^(i), z_k) = p(x_k | x_{k−1}^(i), z_k) = p(z_k | x_k) p(x_k | x_{k−1}^(i)) / p(z_k | x_{k−1}^(i))

Statistical weights:

  w_k^(i) ∝ w_{k−1}^(i) p(z_k | x_{k−1}^(i))   (independent of x_k^(i)!)
          = w_{k−1}^(i) ∫ dx'_k p(z_k | x'_k) p(x'_k | x_{k−1}^(i))

Drawbacks:
- How to sample x_k^(i) ∼ p(x_k | x_{k−1}^(i), z_k)?
- How to calculate the integral efficiently?

⇒ Applicable in special cases only.
Sequential Monte Carlo Methods: Choice of "Importance Density"

Example: Gaussian random mover in d dimensions.

[Figure: error vs. number of particles (10^1 ... 10^5) for the "Markov" importance density and for the optimal importance density, for dimensions d = 1 ... 16; and the particle number required for a given accuracy vs. dimension]

"Curse of dimensionality": the particle number required for a given accuracy grows rapidly with the state-space dimension.
Sequential Monte Carlo Methods: Resampling

Idea (same principle as "importance sampling"): replace the sample {x_k^(i)} with weights {w_k^(i)} by a new sample {x̃_k^(i)} with constant weights w̃_k^(i) ≡ 1/N.

Method (a "genetic" element):
- replicate particles with large weight
- eliminate particles with small weight
⇒ particle number in state i: N_i ∝ w_k^(i)

Remark: resampling can be done in O(N) operations.
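Systematic resampling is one common scheme that achieves the O(N) bound, using a single random offset and one merge pass over the cumulative weights (the lecture does not prescribe a specific scheme; this is one standard choice):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling in a single O(N) merge pass: particles with
    large weight are replicated, particles with small weight tend to be
    eliminated; the returned indices define an equally weighted sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    N = len(w)
    u = rng.random() / N                  # one random offset in [0, 1/N)
    indices = np.empty(N, dtype=int)
    cum, j = w[0], 0
    for i in range(N):                    # N ordered positions u + i/N
        while u > cum and j < N - 1:      # advance to the covering particle
            j += 1
            cum += w[j]
        indices[i] = j
        u += 1.0 / N
    return indices

rng = np.random.default_rng(3)
idx = systematic_resample([0.5, 0.3, 0.1, 0.05, 0.05], rng)
```

A particle carrying half the weight is guaranteed at least ⌊N/2⌋ copies, so the scheme has low variance compared to naive multinomial resampling.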
Sequential Monte Carlo Methods: "Sampling Importance Resampling" (SIR)

Simple application:
- "importance density" = "Markov density": q(x_k | x_{k−1}^(i), z_k) = p(x_k | x_{k−1}^(i))  ⇒  w_k^(i) ∝ w_{k−1}^(i) p(z_k | x_k^(i))
- "resampling" in each iteration  ⇒  w_k^(i) ∝ p(z_k | x_k^(i))

SIR algorithm:
  Initialize {x_0^(i)}, {w_0^(i)}
  For k = 1, 2, ...
    Generate new states according to the target dynamics: x_k^(i) = f_k(x_{k−1}^(i), v_k^(i))
    Calculate w_k^(i) according to the measurement equation
    Generate a new sample by "resampling"
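A complete SIR (bootstrap) filter for an illustrative scalar model: random-walk target dynamics and a direct, noisy position measurement (all parameters are made up for the sketch; multinomial resampling stands in for any resampling scheme):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2000

sigma_v, sigma_u = 0.2, 0.5                      # process / measurement noise
x_true = 0.0
x = rng.normal(0.0, 1.0, size=N)                 # initial particle cloud

for _ in range(20):
    x_true += rng.normal(0.0, sigma_v)           # simulate the target
    z = x_true + rng.normal(0.0, sigma_u)        # simulate the measurement
    # 1) generate new states according to the target dynamics
    x = x + rng.normal(0.0, sigma_v, size=N)
    # 2) calculate the weights according to the measurement equation
    w = np.exp(-0.5 * ((z - x) / sigma_u) ** 2)
    w /= w.sum()
    # 3) generate a new, equally weighted sample by resampling
    x = x[rng.choice(N, size=N, p=w)]

estimate = x.mean()                              # posterior mean ≈ x_true
```

Because the weights are reset every step, degeneracy is avoided, at the price of the impoverishment effect discussed on the next slide.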
Sequential Monte Carlo Methods: SIR Algorithm

[Figure: SIR cycle: propagation (target dynamics) {x_{k−1}^(i)} → {x_k^(i)}; weights (measurement equation) → {x_k^(i)}, {w_k^(i)}; resampling; propagation → {x_{k+1}^(i)}, {w_{k+1}^(i)}]

Properties:
- converges ∼ 1/√N
- easy to implement
- not so easy to parallelize (resampling)
- the measurement does not enter the propagation → possibly inefficient
- "impoverishment": for small process noise, all particles descend from the same ancestor
Multitarget Tracking

Joint multitarget probability density p(X_k | Z_{1:k}) for the joint state of all targets, X_k = (x_k^(1), ..., x_k^(n))^T: high-dimensional! (See Kastella, Mahler, Stone, 1995 ff.)

For a fixed target number n, formally the usual Bayesian filter equations hold:

  Prediction: p(X_{k+1} | Z_{1:k}) = ∫ p(X_{k+1} | X_k) p(X_k | Z_{1:k}) dX_k
  Filtering:  p(X_{k+1} | Z_{1:k+1}) ∝ p(Z_{k+1} | X_{k+1}) p(X_{k+1} | Z_{1:k})

Kinematic prior: potentially complex, correlated target dynamics (convoys, air combat, ...).
Likelihood: many assignment possibilities (exponential in n and m_k).
Methods: NN, PDAF, JPDAF, MHT, particle filter, ...
Multitarget Tracking: Multitarget Dynamics

Example: independent targets:

  p(X_{k+1} | X_k) = ∑_{(l_1, ..., l_n)} ∏_{j=1}^n p_k^t(x_{k+1}^{(l_j)} | x_k^{(j)})

(sum over all single-target assignments; p_k^t: single-target Markov density)

E.g., for Gauss-Markov dynamics x_{k+1} = F_{k+1} x_k + v_{k+1}:

  p_k^t(x_{k+1}^{(i)} | x_k^{(j)}) = N(x_{k+1}^{(i)}; F_{k+1} x_k^{(j)}, D_{k+1})
Multitarget Tracking: Multitarget Likelihood Function

Example: independent detections, uncorrelated clutter, P_D < 1:

  p(Z_k | X_k) = ∑_{j=0}^{min(m,n)} P_0(m, n, j) ∑_{m_1, ..., m_j} ∑_{n_1 < ... < n_j} ∏_{l=1}^j p_1(z_k^{(m_l)} | x_k^{(n_l)})

a weighted sum over all single-target / detection assignments, with

  P_0(m, n, j) = P_d^j (1 − P_d)^{n−j} e^{−λV} λ^{m−j} / (m − j)!

(V: size of the field of view; λ: false-alarm density; j: number of detected targets)

p_1: single-target detection likelihood function; e.g., for a linear measurement model z_k = H_k x_k + u_k:

  p_1(z_k^{(i)} | x_k^{(j)}) = N(z_k^{(i)}; H_k x_k^{(j)}, R_k)
Multitarget Tracking: "Combinatorial Disaster"

The number of possible assignments grows exponentially with the number of targets n and the number of detections m:

  N(m, n) = ∑_{j=0}^{min{m,n}} j! (m choose j) (n choose j)

[Figure: left, example with three targets and six detections, each report being either a target detection or a false alarm; right, number of assignment possibilities vs. number of reports (up to ~5 · 10^7 for 20 reports) for six and eight targets]
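The assignment count N(m, n) is easy to evaluate directly from the formula; a small sketch (the function name is mine):

```python
from math import comb, factorial

def n_assignments(m, n):
    """N(m, n) = sum_{j=0}^{min(m,n)} j! C(m, j) C(n, j): for every possible
    number j of detected targets, choose which j detections and which j
    targets are involved, and pair them up in j! ways."""
    return sum(factorial(j) * comb(m, j) * comb(n, j)
               for j in range(min(m, n) + 1))

three_targets_six_detections = n_assignments(6, 3)   # 229 possibilities
```

Evaluating the function for growing m and n reproduces the explosive growth shown in the figure.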
Multitarget Tracking: "Combinatorial Disaster"

Simple scenario with unknown target number:
- two groups of road targets with two targets each
- starting from different ends, crossing, turning, following
- road partially not visible ⇒ the number of detectable targets varies (0, 2, 4)
- P_D = 0.9, 0.8; 30 false alarms on average
- sequential ratio test for target-number estimation

[Figure: scenario map (x, y in m); true and estimated target number vs. scan number]

Extraction and tracking of a joint 4-target state (10,000 particles!)
Multitarget Tracking: Probability Hypothesis Density

Simplification: the probability hypothesis density (PHD) describes the probability that a target exists in a certain, small part of the state space. The identity of the targets and their true assignment to detections are often not of interest.

For the joint pdf of n identical targets, p(x_s^(1), ..., x_s^(n)):

  v(x_s | n) = n ∫ p(x_s, y_s^(1), ..., y_s^(n−1)) dy_s^(1) ... dy_s^(n−1)

  PHD: v(x_s) = ∑_{n=0}^∞ v(x_s | n) P(n)

The PHD has the dimension of a single-target density; P(n) is the cardinality distribution. Integrating over the state space,

  ∫ v(x_s) dx_s = ∑_{n=0}^∞ n P(n) = n̄   (target-number estimator)

If P(n) = 0 for n > 1: v(x_s) = P(1) p(x_s) (single-target pdf).
Multitarget Tracking: PHD Filter Equations

Iterative filter equations (Mahler 2003) under the assumptions:
- independent targets
- target-independent Poisson clutter
- Poisson-distributed cardinality distribution P(n)
- probabilities for target "birth" b(x_s) and "death" d(x_s)

  Prediction: v_{k|k−1}(x_s) = b(x_s) + ∫ p_{k|k−1}(x_s | y_s) v_{k−1|k−1}(y_s) (1 − d(y_s)) dy_s

  Update: v_{k|k}(x_s) = v_{k|k−1}(x_s) [ (1 − P_d(x_s)) + ∑_{z ∈ Z_k} P_d(x_s) p(z | x_s) / (λ + ∫ P_d(y_s) p(z | y_s) v_{k|k−1}(y_s) dy_s) ]

  Target number: n̄_{k|k} = ∫ dx_s v_{k|k}(x_s)
Multitarget Tracking: Properties of the PHD Filter Equations

- Multitarget tracking that avoids the "combinatorial disaster"
- The PHD v_k(x_s) has the dimension of a single target state
- Handles unknown and variable target numbers
- Non-linear filter equations, implemented via:
  - linear, Gaussian systems (Gaussian-mixture densities)
  - particle filters (Vo et al. 2005)
- Track extraction using clustering techniques
Multitarget Tracking: PHD-SMC Application

Variable target number in clutter (see Vo et al. 2005).
Extensions and Variants of Particle Filters

Avoiding or mitigating degeneracy and impoverishment:
- regularized particle filter
- auxiliary particle filter
- "Rao-Blackwellization"
- ...

Further applications:
- robotics: localization
- sensor management
- polymer and protein folding
- DNA sequence analysis
- statistical physics
- financial mathematics
- ...

Under various names:
- particle filter
- population Monte Carlo
- diffusion Monte Carlo
- transfer-matrix Monte Carlo
- projection Monte Carlo
- Green's function Monte Carlo
- "go with the winners"
- ...
Summary

Advantages of sequential Monte Carlo methods:
- non-linear system equations
- non-Gaussian noise
- rather easy implementation
- efficient implementation, ∼ O(N)
- dynamic adaptation and concentration on the statistically relevant parts of the state space

Therefore: use particle filters and forget the rest!?
Summary

... please don't, because:
- convergence is guaranteed only for N → ∞
- if their assumptions are fulfilled, Kalman and grid-based filters are optimal and hence preferable
- extended and unscented Kalman filters, Gaussian mixtures, etc. are often sufficiently precise while using less CPU time and storage

In those cases, the particle filter is still useful as a numerical "benchmark".
Literature

1. A. Doucet, N. de Freitas, and N. Gordon, eds., Sequential Monte Carlo Methods in Practice, Springer, 2001.
2. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, 2002.
3. B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, 2004.
4. B. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo methods for multi-target filtering with random finite sets," IEEE Trans. AES, vol. 41, no. 4, pp. 1224-1245, 2005.
5. R. Mahler, Statistical Multisource-Multitarget Information Fusion, Artech House, 2007.
6. O. Cappé, S. J. Godsill, and E. Moulines, "An overview of existing methods and recent advances in sequential Monte Carlo," Proceedings of the IEEE, vol. 95, no. 5, pp. 899-924, 2007.
7. A. Doucet and A. M. Johansen, "A tutorial on particle filtering and smoothing: fifteen years later," Chapter 8.1 in The Oxford Handbook of Nonlinear Filtering, D. Crisan and B. Rozovskii (eds.), 2011.
8. T. B. Schön, F. Gustafsson, and R. Karlsson, "The particle filter in practice," Chapter 8.4, ibid.
9. B. Ristic, Particle Filters for Random Set Models, Springer, 2013.