Lecture 21: Motion and Tracking


  1. Lecture 21: Motion and Tracking. Thursday, Nov 29. Prof. Kristen Grauman.

  2. Detection vs. tracking
     …
     Tracking with dynamics: We use image measurements to estimate the position of the object, but also incorporate the position predicted by dynamics, i.e., our expectation of the object's motion pattern.

  3. Tracking with dynamics
     • Have a model of expected motion.
     • Given that, predict where objects will occur in the next frame, even before seeing the image.
     • Intent:
       – do less work looking for the object; restrict the search
       – improved estimates, since measurement noise is tempered by trajectory smoothness

  4. Tracking as inference: Bayes filters
     • Hidden state x_t
       – the unknown true parameters
       – e.g., actual position of the person we are tracking
     • Measurement y_t
       – our noisy observation of the state
       – e.g., detected blob's centroid
     • Can we calculate p(x_t | y_1, y_2, …, y_t)?
       – We want to recover the state from the observed measurements.

  5. States and observations
     • Hidden state: the list of parameters of interest.
     • Measurement: what we get to directly observe (in the images).

  6. Recursive estimation
     • Unlike a batch fitting process, decompose the estimation problem into
       – a part that depends on the new observation
       – a part that can be computed from previous history
     • For tracking, this is essential given the typical goal of real-time processing.
     • Example from last time: the running average.

  7. Tracking as inference
     • Recursive process:
       – Assume we have an initial prior that predicts the state in the absence of any evidence: P(X_0).
       – At the first frame, correct this given the value of Y_0 = y_0.
       – Given a corrected estimate for frame t:
         • Predict for frame t + 1
         • Correct for frame t + 1

  8. Tracking as inference
     • Prediction: Given the measurements we have seen up to this point, what state should we predict?
     • Correction: Now given the current measurement, what state should we predict?
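
Written out, these two distributions take the standard Bayes-filter form (not spelled out on this slide, but consistent with the independence assumptions on the next one):

```latex
% Prediction: propagate the previous posterior through the dynamics
P(X_t \mid y_0, \ldots, y_{t-1}) = \int P(X_t \mid X_{t-1})\, P(X_{t-1} \mid y_0, \ldots, y_{t-1})\, dX_{t-1}

% Correction: fold in the current measurement via Bayes' rule
P(X_t \mid y_0, \ldots, y_t) \propto P(y_t \mid X_t)\, P(X_t \mid y_0, \ldots, y_{t-1})
```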

  9. Independence assumptions
     • Only the immediate past state influences the current state.
     • Measurements at time t depend only on the current state.
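
Formally (standard notation, assumed here):

```latex
% Only the immediate past state influences the current state
P(X_t \mid X_0, \ldots, X_{t-1}) = P(X_t \mid X_{t-1})

% Measurements at time t depend only on the current state
P(Y_t \mid X_0, \ldots, X_t, Y_0, \ldots, Y_{t-1}) = P(Y_t \mid X_t)
```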

  10. Tracking as inference
      • The goal is then to
        – choose good models for the prediction and correction distributions
        – use the updates to compute the best estimate of the state
          • prior to seeing the measurement
          • after seeing the measurement

  11. Gaussian distributions, notation
      • x ~ N(μ, Σ): a random variable with a Gaussian probability distribution that has mean vector μ and covariance matrix Σ.
      • x and μ are d-dimensional; Σ is d × d.
      [Figures: the d = 1 and d = 2 cases.]
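
For reference, the density behind the x ~ N(μ, Σ) notation (standard, not written out on the slide):

```latex
p(x) = \frac{1}{(2\pi)^{d/2} \, |\Sigma|^{1/2}}
       \exp\!\left( -\tfrac{1}{2} (x - \mu)^{\top} \Sigma^{-1} (x - \mu) \right)
```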

  12. Linear dynamic model
      • Describes the a priori knowledge about:
        – System dynamics model: represents the evolution of the state over time, with noise:
          x_t ~ N(D x_{t-1}; Σ_d),   where x_t is n × 1 and D is n × n
        – Measurement model: at every time step we get a noisy measurement of the state:
          y_t ~ N(M x_t; Σ_m),   where y_t is m × 1 and M is m × n
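
As a concrete illustration (not from the slides), here is a minimal NumPy sketch that samples a trajectory and its measurements from such a linear dynamic model; the function name `simulate_lds` and the noise covariances passed in are assumptions:

```python
import numpy as np

def simulate_lds(D, M, Sigma_d, Sigma_m, x0, T, seed=0):
    """Sample states x_t ~ N(D x_{t-1}, Sigma_d) and measurements y_t ~ N(M x_t, Sigma_m)."""
    rng = np.random.default_rng(seed)
    n, m = D.shape[0], M.shape[0]
    xs, ys = np.zeros((T, n)), np.zeros((T, m))
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        # the state evolves linearly, perturbed by Gaussian process noise
        x = rng.multivariate_normal(D @ x, Sigma_d)
        # we only ever see a noisy linear projection of the state
        ys[t] = rng.multivariate_normal(M @ x, Sigma_m)
        xs[t] = x
    return xs, ys
```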

  13. Example: randomly drifting points
      x_t ~ N(D x_{t-1}; Σ_d),   y_t ~ N(M x_t; Σ_m)
      • Consider a stationary object, with state as position.
      • State evolution is described by the identity matrix, D = I.
      • Position is constant; the only motion is due to the random noise term.
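
In the illustrative code above, this example reduces to identity dynamics. The 2d position state, direct position measurement, and noise levels below are assumptions about the example, not given on the slide:

```python
import numpy as np

D = np.eye(2)                 # identity dynamics: the point does not move on its own
M = np.eye(2)                 # assume we measure the position directly (up to noise)
Sigma_d = 0.01 * np.eye(2)    # assumed drift (process) noise
Sigma_m = 0.05 * np.eye(2)    # assumed measurement noise

# xs, ys = simulate_lds(D, M, Sigma_d, Sigma_m, x0=np.zeros(2), T=100)
```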

  14. Example: constant velocity
      x_t ~ N(D x_{t-1}; Σ_d),   y_t ~ N(M x_t; Σ_m)
      • State vector x is 1d position and velocity.
      • Measurement y is position only.

      p_t = p_{t-1} + (Δt) v_{t-1} + ξ
      v_t = v_{t-1} + ζ

      x_t = [p_t, v_t]^T = D x_{t-1} + noise,   with  D = [[1, Δt], [0, 1]]

      y_t = [1  0] x_t + ξ = p_t + ξ,   so  M = [1  0]
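
In the same illustrative code, the constant-velocity model corresponds to the matrices below; the frame interval Δt and the noise values are assumptions:

```python
import numpy as np

dt = 1.0                               # assumed frame interval
D = np.array([[1.0, dt],               # p_t = p_{t-1} + dt * v_{t-1} (+ noise)
              [0.0, 1.0]])             # v_t = v_{t-1}                (+ noise)
M = np.array([[1.0, 0.0]])             # measurement is position only

Sigma_d = np.diag([1e-3, 1e-2])        # assumed process noise on (position, velocity)
Sigma_m = np.array([[0.05]])           # assumed measurement noise on position
```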

  15. [Figure: the state's position over time; plots of position and velocity; state and measurements. Figures from F&P.]

  16. [Figure: the state's position over time; position and velocity plotted against time; state and measurements. Figures from F&P.]

  17. [Figure: the state's position over time; measurements vs. state plotted against time. Figures from F&P.]

  18. Example: constant acceleration
      x_t ~ N(D x_{t-1}; Σ_d),   y_t ~ N(M x_t; Σ_m)
      • State is 1d position, velocity, and acceleration.
      • Measurement is position only.

      p_t = p_{t-1} + (Δt) v_{t-1} + ξ
      v_t = v_{t-1} + (Δt) a_{t-1} + ζ
      a_t = a_{t-1} + ε

      x_t = [p_t, v_t, a_t]^T = D x_{t-1} + noise,   with  D = [[1, Δt, 0], [0, 1, Δt], [0, 0, 1]]  and  M = [1  0  0]
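
In the illustrative code, the constant-acceleration model only changes the assumed matrices:

```python
import numpy as np

dt = 1.0                                   # assumed frame interval
D = np.array([[1.0, dt,  0.0],             # p_t = p_{t-1} + dt * v_{t-1}
              [0.0, 1.0, dt ],             # v_t = v_{t-1} + dt * a_{t-1}
              [0.0, 0.0, 1.0]])            # a_t = a_{t-1}
M = np.array([[1.0, 0.0, 0.0]])            # position is still the only measurement
```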

  19. [Figures: the state's position over time; the state's position and velocity.]

  20. Kalman filter as density propagation
      • The distribution shifts due to the object dynamics model.
      • Uncertainty increases due to the random component of the dynamics model.
      • The density peaks towards the observation.
      Figure from Isard & Blake 1998.

  21. Kalman filter as density propagation
      [Figure: the old belief is propagated to a new belief by the dynamics, which is then updated by the measurement into the new (corrected) belief.]
      Slide by S. Thrun and J. Kosecka, Stanford.

  22. Kalman filtering
      • Time update ("Predict"): know the corrected state from the previous time step, and all measurements up to the current one; update the distribution over the predicted state.
      • Measurement update ("Correct"): know the prediction of the state, receive the current measurement; update the distribution over the current state estimate.
      • Time advances (i++) and the cycle repeats.
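
A minimal NumPy sketch of one predict/correct cycle for the linear-Gaussian model above; the function names and argument conventions are assumptions, not from the slides:

```python
import numpy as np

def kalman_predict(mu, P, D, Sigma_d):
    """Time update: push the previous corrected estimate through the dynamics."""
    mu_pred = D @ mu                      # predicted mean
    P_pred = D @ P @ D.T + Sigma_d        # covariance grows by the process noise
    return mu_pred, P_pred

def kalman_correct(mu_pred, P_pred, y, M, Sigma_m):
    """Measurement update: fold in the new observation y."""
    S = M @ P_pred @ M.T + Sigma_m        # innovation covariance
    K = P_pred @ M.T @ np.linalg.inv(S)   # Kalman gain
    mu = mu_pred + K @ (y - M @ mu_pred)  # corrected mean
    P = (np.eye(len(mu_pred)) - K @ M) @ P_pred
    return mu, P
```

Calling kalman_predict and then kalman_correct once per frame, with that frame's detection as y, corresponds to the loop on the slide.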

  23. Kalman filtering
      • Linear models + Gaussian distributions work well (read: they simplify computation).
      • Gaussians are also represented compactly.
      • The filter alternates between prediction and correction.

  24. Kalman filter for 1d state
      • Dynamics model:     x_i ~ N(d x_{i-1}, σ_d²)
      • Measurement model:  y_i ~ N(m x_i, σ_m²)
      • Want to represent and update the state estimate (its mean and variance) over time.

  25. Notation shorthand: X_i^-, σ_i^- denote the predicted (pre-measurement) estimate of the state and its standard deviation; X_i^+, σ_i^+ denote the corrected (post-measurement) estimate.

  26. Kalman filtering
      • Time update ("Predict"): know the corrected state X_{i-1}^+, σ_{i-1}^+ from the previous time step, and all measurements up to the current one; update the distribution over the predicted state X_i^-, σ_i^-.
      • Measurement update ("Correct"): know the prediction of the state, X_i^-, σ_i^-; receive the current measurement; update the distribution over the current state estimate.
      • Time advances (i++) and the cycle repeats.

  27. Kalman filtering (the same predict/correct cycle diagram as slide 26, repeated as a build slide).

  28. Kalman filter for 1d state: prediction
      • The linear dynamic model defines the expected state evolution, with noise:
        x_i ~ N(d x_{i-1}, σ_d²)
      • Want to estimate the distribution for the next predicted state:
        P(X_i | y_0, …, y_{i-1}) = N(X_i^-, (σ_i^-)²)
        – Update the mean:      X_i^- = d X_{i-1}^+
          (the predicted mean depends on the state transition value, the constant d, and the mean of the previous state)
        – Update the variance:  (σ_i^-)² = σ_d² + (d σ_{i-1}^+)²
          (the variance depends on the uncertainty at the previous state and on the noise in the system's model of state evolution)
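
A one-function sketch of this 1d prediction step (names assumed):

```python
def predict_1d(x_prev_plus, sigma_prev_plus, d, sigma_d):
    """1d Kalman time update: map the corrected estimate at i-1 to the prediction at i."""
    x_minus = d * x_prev_plus                          # X_i^- = d * X_{i-1}^+
    var_minus = sigma_d**2 + (d * sigma_prev_plus)**2  # (sigma_i^-)^2 = sigma_d^2 + (d * sigma_{i-1}^+)^2
    return x_minus, var_minus**0.5
```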

  29. Kalman filtering (the same predict/correct cycle diagram once more, repeated as a build slide).
