Bayesian Tracking: Basic Idea (PowerPoint PPT Presentation)
  1. Bayesian Tracking: Basic Idea. Iterative updating of conditional probability densities!
     - kinematic target state x_k at time t_k, accumulated sensor data Z^k
     - a priori knowledge: target dynamics models, sensor model, road maps
     - prediction: p(x_{k-1} | Z^{k-1}) → p(x_k | Z^{k-1}), using the dynamics model and road maps
     - filtering: p(x_k | Z^{k-1}) → p(x_k | Z^k), using the sensor data Z_k and the sensor model
     - retrodiction: p(x_{l-1} | Z^k) ← p(x_l | Z^k), using the dynamics model and the filtering output
     - finite mixtures: inherent ambiguity (data, model, road network)
     - optimal estimators: e.g. minimum mean squared error (MMSE)
     - initiation of the pdf iteration: multiple hypothesis track extraction
     Sensor Data Fusion - Methods and Applications, 5th Lecture on November 21, 2018 — slide 1

  2. • p(x_k | Z^{k-1}) is a prediction of the target state at time t_k based on all measurements in the past:
       p(x_k | Z^{k-1}) = ∫ dx_{k-1} p(x_k, x_{k-1} | Z^{k-1})   (marginal pdf, notion of a conditional pdf)
                        = ∫ dx_{k-1} p(x_k | x_{k-1}, Z^{k-1}) p(x_{k-1} | Z^{k-1})   (object dynamics × previous posterior: idea of an iteration!)
       often: p(x_k | x_{k-1}, Z^{k-1}) = p(x_k | x_{k-1})   (Markov)
       sometimes: p(x_k | x_{k-1}) = N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1})   (linear Gauss-Markov; F_{k|k-1} x_{k-1} deterministic, D_{k|k-1} random)
     • p(Z_k, m_k | x_k) ∝ ℓ(x_k; Z_k, m_k) describes what the current sensor output Z_k, m_k can say about the current target state x_k and is called the likelihood function.
       sometimes: ℓ(x_k; z_k) = N(z_k; H_k x_k, R_k)   (1 target, 1 measurement)
     • iteration formula:
       p(x_k | Z^k) = ℓ(x_k; Z_k, m_k) ∫ dx_{k-1} p(x_k | x_{k-1}) p(x_{k-1} | Z^{k-1})  /  ∫ dx_k ℓ(x_k; Z_k, m_k) ∫ dx_{k-1} p(x_k | x_{k-1}) p(x_{k-1} | Z^{k-1})
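The iteration formula on slide 2 can be checked numerically on a discrete grid. A minimal 1-D sketch, assuming scalar Gauss-Markov dynamics (F = 1, D = 1) and a single Gaussian measurement (z = 1.5, R = 2, H = 1); all numbers are illustrative, not from the lecture:

```python
import numpy as np

# discrete grid over the scalar state space
grid = np.linspace(-10.0, 10.0, 401)
dx = grid[1] - grid[0]

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# previous posterior p(x_{k-1} | Z^{k-1}), assumed N(0, 4)
prior = gauss(grid, 0.0, 4.0)

# prediction: p(x_k | Z^{k-1}) = ∫ dx_{k-1} p(x_k | x_{k-1}) p(x_{k-1} | Z^{k-1})
F, D = 1.0, 1.0
trans = gauss(grid[:, None], F * grid[None, :], D)  # trans[i, j] = p(x_k = grid[i] | x_{k-1} = grid[j])
pred = trans @ prior * dx

# filtering: multiply by the likelihood ℓ(x_k; z_k) = N(z_k; x_k, R) and renormalise
z, R = 1.5, 2.0
lik = gauss(grid, z, R)
post = lik * pred
post /= post.sum() * dx

post_mean = (grid * post).sum() * dx
```

With these numbers the prediction is N(0, 5), so the grid posterior mean should agree with the closed-form Kalman value 5/(5+2) · 1.5 ≈ 1.071.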

  3. Gaussian transition pdf: p(x_k | x_{k-1}, Z^{k-1}) = N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1}),
     with F_{k|k-1} the evolution matrix (describes deterministic motion) and D_{k|k-1} the dynamics covariance matrix (models random maneuvers).
     Gaussian posterior: p(x_{k-1} | Z^{k-1}) = N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1}).
     p(x_k | Z^{k-1}) = ∫ dx_{k-1} N(x_k; F_{k|k-1} x_{k-1}, D_{k|k-1}) N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1})   (dynamics model × posterior at t_{k-1})
                      = N(x_k; F_{k|k-1} x_{k-1|k-1}, F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^T + D_{k|k-1}) × ∫ dx_{k-1} N(x_{k-1}; ..., ...)   (exploit the product formula! the remaining integral equals 1 by normalization)
                      = N(x_k; x_{k|k-1}, P_{k|k-1}),  with  x_{k|k-1} := F_{k|k-1} x_{k-1|k-1}  and  P_{k|k-1} := F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^T + D_{k|k-1}.
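The closed-form prediction on slide 3 can be sketched against a Monte Carlo check: sample from the assumed posterior, propagate through the Gauss-Markov transition, and compare moments. F, D, and the posterior values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed constant-velocity evolution matrix
D = np.diag([0.1, 0.2])                  # assumed dynamics covariance
x_post = np.array([2.0, -1.0])           # assumed posterior mean at t_{k-1}
P_post = np.array([[1.0, 0.3], [0.3, 0.5]])

# closed form (product formula): N(x_k; F x_{k-1|k-1}, F P F^T + D)
x_pred = F @ x_post
P_pred = F @ P_post @ F.T + D

# Monte Carlo: sample x_{k-1} from the posterior, propagate with process noise
n = 200_000
samples = rng.multivariate_normal(x_post, P_post, n)
noise = rng.multivariate_normal(np.zeros(2), D, n)
propagated = samples @ F.T + noise
```

The sample mean and covariance of `propagated` should match `x_pred` and `P_pred` up to Monte Carlo error.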

  4. Kalman filter: linear Gaussian likelihood/dynamics, x_k = (r_k^T, ṙ_k^T, r̈_k^T)^T, Z^k = {z_k, Z^{k-1}}.
     initiation: p(x_0) = N(x_0; x_{0|0}, P_{0|0}); initial ignorance: P_{0|0} 'large'.
     prediction (dynamics model F_{k|k-1}, D_{k|k-1}):  N(x_{k-1}; x_{k-1|k-1}, P_{k-1|k-1}) → N(x_k; x_{k|k-1}, P_{k|k-1})
       x_{k|k-1} = F_{k|k-1} x_{k-1|k-1},   P_{k|k-1} = F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^T + D_{k|k-1}
     filtering (current measurement z_k, sensor model H_k, R_k):  N(x_k; x_{k|k-1}, P_{k|k-1}) → N(x_k; x_{k|k}, P_{k|k})
       ν_{k|k-1} = z_k − H_k x_{k|k-1}
       S_{k|k-1} = H_k P_{k|k-1} H_k^T + R_k
       W_{k|k-1} = P_{k|k-1} H_k^T S_{k|k-1}^{-1}   ('Kalman gain matrix')
       x_{k|k} = x_{k|k-1} + W_{k|k-1} ν_{k|k-1},   P_{k|k} = P_{k|k-1} − W_{k|k-1} S_{k|k-1} W_{k|k-1}^T
     retrodiction (filtering, prediction, dynamics model):  N(x_l; x_{l|k}, P_{l|k}) ← N(x_{l+1}; x_{l+1|k}, P_{l+1|k})
       W_{l|l+1} = P_{l|l} F_{l+1|l}^T P_{l+1|l}^{-1}
       x_{l|k} = x_{l|l} + W_{l|l+1} (x_{l+1|k} − x_{l+1|l}),   P_{l|k} = P_{l|l} + W_{l|l+1} (P_{l+1|k} − P_{l+1|l}) W_{l|l+1}^T
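The prediction / filtering / retrodiction formulas on slide 4 can be sketched directly in code. A minimal example for a 1-D constant-velocity target with position-only measurements; the matrices and measurement values are illustrative assumptions, not from the lecture:

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # evolution matrix F_{k|k-1}
D = np.diag([0.05, 0.05])                # dynamics covariance D_{k|k-1}
H = np.array([[1.0, 0.0]])               # sensor model: position only
R = np.array([[1.0]])                    # measurement covariance

def predict(x, P):
    return F @ x, F @ P @ F.T + D

def update(x_pred, P_pred, z):
    nu = z - H @ x_pred                      # innovation ν_{k|k-1}
    S = H @ P_pred @ H.T + R                 # innovation covariance S_{k|k-1}
    W = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain W_{k|k-1}
    return x_pred + W @ nu, P_pred - W @ S @ W.T

def retrodict(x_f, P_f, x_pred, P_pred, x_s, P_s):
    # one Rauch-Tung-Striebel step: smooth state l given smoothed state l+1
    W = P_f @ F.T @ np.linalg.inv(P_pred)
    return x_f + W @ (x_s - x_pred), P_f + W @ (P_s - P_pred) @ W.T

# forward pass over a few assumed position measurements
zs = [np.array([0.1]), np.array([1.2]), np.array([1.9]), np.array([3.1])]
x, P = np.zeros(2), np.eye(2) * 100.0    # initial ignorance: P_{0|0} 'large'
filt, preds = [], []
for z in zs:
    x_pred, P_pred = predict(x, P)
    preds.append((x_pred, P_pred))
    x, P = update(x_pred, P_pred, z)
    filt.append((x, P))

# backward pass (retrodiction)
smoothed = [filt[-1]]
for l in range(len(zs) - 2, -1, -1):
    x_s, P_s = retrodict(*filt[l], *preds[l + 1], *smoothed[0])
    smoothed.insert(0, (x_s, P_s))
```

Retrodiction never increases uncertainty: the smoothed position variance at the first step is at most the filtered one.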

  5. (figure-only slide; no text content)

  6. in practical applications: uncertainty about which dynamics model j_k out of a set of r alternatives is in effect at t_k (IMM: Interacting Multiple Models)

  7. Quite general: an agent switching between different modes of overall behavior (modes M1, M2, M3)

  8. (animation repeat of slide 7)

  9. Quite general: an agent switching between different modes of overall behavior; transition probabilities out of mode M1: P(1|1), P(2|1), P(3|1), with P(1|1) + P(2|1) + P(3|1) = 1

  10. Quite general: an agent switching between different modes of overall behavior; full set of transition probabilities P(i|j) between modes M1, M2, M3, with
      P(1|1) + P(2|1) + P(3|1) = 1,   P(1|2) + P(2|2) + P(3|2) = 1,   P(1|3) + P(2|3) + P(3|3) = 1

  11. A quite general mathematical structure: a graph, characterized by nodes (here: evolution models) and directed edges defining an adjacency matrix (here: the transition matrix P, a stochastic matrix: columns sum up to one).
      initial information on which model is currently in effect: p_k = (p_k^1, p_k^2, p_k^3)^T
      Markov propagation: p_k = P p_{k-1}, i.e.
        p_k = [ p(1|1) p(1|2) p(1|3) ; p(2|1) p(2|2) p(2|3) ; p(3|1) p(3|2) p(3|3) ] (p_{k-1}^1, p_{k-1}^2, p_{k-1}^3)^T
      Perron-Frobenius: the spectral radius of a stochastic matrix is 1, 1 is also an eigenvalue, and the corresponding eigenvector is positive.
      Exercise: consider the example P = [ 0.5 0.3 0.2 ; 0.2 0.4 0.4 ; 0.3 0.3 0.4 ] and calculate the invariant state (eigenvector for eigenvalue 1). Show numerically or mathematically that each initial state converges to the invariant state.
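The exercise on slide 11 can be done numerically: extract the eigenvector of the given stochastic matrix for eigenvalue 1, and check that repeated Markov propagation drives an arbitrary initial distribution to it.

```python
import numpy as np

# transition matrix from the exercise (columns sum to one)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])

# invariant state: eigenvector of P for eigenvalue 1, normalised to sum 1
vals, vecs = np.linalg.eig(P)
i = np.argmin(np.abs(vals - 1.0))
invariant = np.real(vecs[:, i])
invariant /= invariant.sum()

# convergence: propagate an arbitrary initial distribution p_k = P p_{k-1}
p = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    p = P @ p
```

For this particular matrix the invariant state works out to the uniform distribution (1/3, 1/3, 1/3); convergence is fast because the remaining eigenvalues are small in magnitude.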

  12. Excursus: Stochastic Characterization of Object Interrelations: Estimation and Tracking of Adjacency Matrices
      • Multiple object tracking: estimate from uncertain data Z at each time the kinematic state vector of all relevant objects: p(x | Z).
      • Sometimes of interest: interrelations between the tracked objects. Example: reachability between two objects (communications, mutual help).
      • Interrelations are completely described by the adjacency matrix X of a graph (nodes: tracked objects, matrix elements: properties of the interrelation).
      • Uncertainty of the sensor data (kinematics, attributes) z, Z: the adjacency matrix is a random matrix (matrix variate probability densities).
      • State to be estimated: kinematics x of all objects plus the adjacency matrix X. Based on the sensor data, the knowledge on x, X is contained in p(x, X | z, Z).
      • With suitable families of matrix variate densities and likelihood functions: Bayes!

  13. in practical applications: uncertainty about which dynamics model j_k out of a set of r alternatives is in effect at t_k (IMM: Interacting Multiple Models)!
      p(x_k, j_k | x_{k-1}, j_{k-1}) = p(x_k | j_k, x_{k-1}, j_{k-1}) p(j_k | x_{k-1}, j_{k-1})
                                     = p(x_k | x_{k-1}, j_k) p(j_k | j_{k-1})   (Markov)
                                     = N(x_k; F^{j_k}_{k|k-1} x_{k-1}, D^{j_k}_{k|k-1}) p(j_k | j_{k-1})   (dynamics model j_k × interaction)

  14. in practical applications: uncertainty about which dynamics model j_k out of a set of r alternatives is in effect at t_k (IMM: Interacting Multiple Models)
      p(x_k, j_k | x_{k-1}, j_{k-1}) = p(x_k | x_{k-1}, j_k) p(j_k | j_{k-1}) = N(x_k; F^{j_k}_{k|k-1} x_{k-1}, D^{j_k}_{k|k-1}) p(j_k | j_{k-1})   (Markov; dynamics model j_k × interaction)
      previous posterior written as a Gaussian mixture:
      p(x_{k-1} | Z^{k-1}) = Σ_{j_{k-1}=1}^{r} p(x_{k-1}, j_{k-1} | Z^{k-1}) = Σ_{j_{k-1}=1}^{r} p(j_{k-1} | Z^{k-1}) N(x_{k-1}; x^{j_{k-1}}_{k-1|k-1}, P^{j_{k-1}}_{k-1|k-1})
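Two of the ingredients on slide 14 can be sketched numerically: propagating the model probabilities p(j_k | j_{k-1}) through an assumed transition matrix, and collapsing the Gaussian mixture to a single Gaussian by moment matching (as used inside the IMM recursion). This is an illustrative fragment with r = 2 made-up models, not the full IMM algorithm:

```python
import numpy as np

# assumed mixture representation of the previous posterior (r = 2 models)
means = [np.array([0.0, 1.0]), np.array([0.5, -1.0])]   # x^{j}_{k-1|k-1}
covs = [np.eye(2), 2.0 * np.eye(2)]                     # P^{j}_{k-1|k-1}
w = np.array([0.7, 0.3])                                # p(j_{k-1} | Z^{k-1})

# assumed model transition matrix T[j_k, j_{k-1}] = p(j_k | j_{k-1}), columns sum to one
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# propagated model probabilities: p(j_k | Z^{k-1}) = Σ_{j_{k-1}} p(j_k | j_{k-1}) p(j_{k-1} | Z^{k-1})
w_pred = T @ w

# moment matching: mean and covariance of the Gaussian mixture
mix_mean = sum(wi * mi for wi, mi in zip(w, means))
mix_cov = sum(wi * (Pi + np.outer(mi - mix_mean, mi - mix_mean))
              for wi, mi, Pi in zip(w, means, covs))
```

The covariance of the collapsed Gaussian contains both the per-model covariances and the spread of the model means, which is exactly the "inherent ambiguity" of a finite mixture mentioned on slide 1.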
