Lecture 21: Motion and Tracking
Detection vs. tracking
Thursday, Nov 29, 2007
Prof. Kristen Grauman



Tracking with dynamics
• We use image measurements to estimate the position of an object, but we also incorporate the position predicted by dynamics, i.e., our expectation of the object's motion pattern.
• Have a model of expected motion. Given that, predict where the object will occur in the next frame, even before seeing the image.
• Intent: do less work looking for the object by restricting the search, and get improved estimates, since measurement noise is tempered by trajectory smoothness.

Tracking as inference: Bayes filters
• Hidden state x_t: the unknown true parameters, e.g., the actual position of the person we are tracking.
• Measurement y_t: our noisy observation of the state, e.g., a detected blob's centroid.
• Can we calculate p(x_t | y_1, y_2, ..., y_t)? We want to recover the state from the observed measurements.

Recursive estimation
• Unlike a batch fitting process, decompose the estimation problem into a part that depends on the new observation and a part that can be computed from the previous history.
• For tracking this is essential, given the typical goal of real-time processing. Example from last time: the running average background model (see the sketch below).

States and observations
• The hidden state is the list of parameters of interest.
• The measurement is what we get to directly observe (in the images).
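To make the recursive-estimation idea concrete, here is a minimal sketch contrasting a batch estimate with a recursive running-average update of the kind mentioned above. The weight alpha and the synthetic drifting signal are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def batch_mean(observations):
    """Batch estimate: needs the entire history every time."""
    return np.mean(observations)

def recursive_mean(prev_estimate, new_obs, alpha=0.05):
    """Recursive estimate: combine the part computed from previous history
    (prev_estimate) with the part that depends only on the new observation."""
    return (1.0 - alpha) * prev_estimate + alpha * new_obs

# Illustrative use on a made-up, slowly drifting signal, one frame at a time.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0, 0.1, size=200)) + 10.0

estimate = signal[0]
for obs in signal[1:]:
    estimate = recursive_mean(estimate, obs)   # constant work per frame
print("final recursive estimate:", estimate)
print("batch mean of all frames:", batch_mean(signal))
```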

Tracking as inference: recursive process
• Assume we have an initial prior that predicts the state in the absence of any evidence: P(X_0). At the first frame, correct this given the value of Y_0 = y_0.
• Given the corrected estimate for frame t: predict for frame t+1, then correct for frame t+1.

Prediction and correction
• Prediction: given the measurements we have seen up to this point, what state should we predict? This is the distribution prior to seeing the current measurement, P(X_t | y_0, ..., y_{t-1}).
• Correction: now given the current measurement, what state should we estimate? This is the distribution after seeing the measurement, P(X_t | y_0, ..., y_t).

Independence assumptions
• Only the immediate past state influences the current state.
• Measurements at time t depend only on the current state.

Tracking as inference: goal
• Choose a good model for the prediction and correction distributions.
• Use the updates to compute the best estimate of the state.

Gaussian distributions, notation
• x ~ N(μ, Σ): a random variable with a Gaussian probability distribution that has mean vector μ and covariance matrix Σ. Here x and μ are d-dimensional and Σ is d x d.

Linear dynamic model
• Describes the a priori knowledge about the system.
• Dynamics model: represents the evolution of the state over time, with noise: x_t ~ N(D x_{t-1}; Σ_d), where x_t is n x 1 and D is n x n.
• Measurement model: at every time step we get a noisy measurement of the state: y_t ~ N(M x_t; Σ_m), where y_t is m x 1 and M is m x n.
• (A small simulation sketch of this model follows below.)
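As a concrete illustration of the linear dynamic model, the sketch below samples a state trajectory from x_t ~ N(D x_{t-1}, Σ_d) and noisy measurements from y_t ~ N(M x_t, Σ_m). The particular matrices and noise levels are placeholder values chosen for the demo, not from the lecture.

```python
import numpy as np

def simulate_lds(D, M, Sigma_d, Sigma_m, x0, n_steps, seed=0):
    """Sample states x_t ~ N(D x_{t-1}, Sigma_d) and
    measurements y_t ~ N(M x_t, Sigma_m)."""
    rng = np.random.default_rng(seed)
    n, m = D.shape[0], M.shape[0]
    xs, ys = [], []
    x = x0.astype(float)
    for _ in range(n_steps):
        x = D @ x + rng.multivariate_normal(np.zeros(n), Sigma_d)
        y = M @ x + rng.multivariate_normal(np.zeros(m), Sigma_m)
        xs.append(x.copy())
        ys.append(y.copy())
    return np.array(xs), np.array(ys)

# Example: randomly drifting 2d point (D = I, position measured directly).
# Noise covariances are arbitrary demo values.
D = np.eye(2)
M = np.eye(2)
states, meas = simulate_lds(D, M,
                            Sigma_d=0.05 * np.eye(2),
                            Sigma_m=0.20 * np.eye(2),
                            x0=np.zeros(2), n_steps=100)
print(states.shape, meas.shape)   # (100, 2) (100, 2)
```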

Example: randomly drifting points (using x_t ~ N(D x_{t-1}; Σ_d), y_t ~ N(M x_t; Σ_m))
• Consider a stationary object, with the state being its position.
• State evolution is described by the identity matrix, D = I.
• The position is constant; the only motion is due to the random noise term: x_t = x_{t-1} + noise.

Example: constant velocity
• The state vector x is 1d position and velocity; the measurement y is position only.
  p_t = p_{t-1} + (Δt) v_{t-1} + ξ
  v_t = v_{t-1} + ζ
• In matrix form:
  x_t = [p_t; v_t] = D x_{t-1} + noise,   D = [1 Δt; 0 1]
  y_t = M x_t + noise,   M = [1 0]
• [Figures from F&P: the state's position over time; the state together with its measurements.]

Example: constant acceleration
• The state is 1d position, velocity, and acceleration; the measurement is position only.
  p_t = p_{t-1} + (Δt) v_{t-1} + ξ
  v_t = v_{t-1} + (Δt) a_{t-1} + ζ
  a_t = a_{t-1} + ε
• In matrix form:
  D = [1 Δt 0; 0 1 Δt; 0 0 1],   M = [1 0 0]
• [Figures from F&P: the state's position over time; state and measurements.]
• (The sketch below builds these D and M matrices.)
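The D and M matrices for the two examples above can be written out directly. The sketch below constructs them for an assumed time step Δt (the dt parameter and the demo values are illustrative, not from the slides).

```python
import numpy as np

def constant_velocity_model(dt=1.0):
    """State [p, v]: position integrates velocity; measure position only."""
    D = np.array([[1.0, dt],
                  [0.0, 1.0]])
    M = np.array([[1.0, 0.0]])
    return D, M

def constant_acceleration_model(dt=1.0):
    """State [p, v, a]: velocity integrates acceleration; measure position only."""
    D = np.array([[1.0, dt,  0.0],
                  [0.0, 1.0, dt ],
                  [0.0, 0.0, 1.0]])
    M = np.array([[1.0, 0.0, 0.0]])
    return D, M

# Noise-free propagation check with assumed dt = 1 and velocity 2:
D, M = constant_velocity_model(dt=1.0)
x = np.array([0.0, 2.0])            # start at p = 0, v = 2 (made-up values)
for _ in range(3):
    x = D @ x                       # position grows by 2 each step
print(x)                            # [6. 2.]
```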

Kalman filter as density propagation
• The density over the state shifts due to the object dynamics model, its uncertainty increases due to the random component of the dynamics, and it then peaks towards the observation when the measurement is incorporated. [Figure from Isard & Blake 1998.]

Kalman filtering: predict / correct cycle
• Time update ("Predict"): knowing the corrected state from the previous time step, and all measurements up to the current one, update the distribution over the predicted state.
• Measurement update ("Correct"): knowing the prediction of the state and receiving the current measurement, update the distribution over the current state estimate.
• Time then advances (i++) and the cycle repeats: old belief → new belief. [Slide by S. Thrun and J. Kosecka, Stanford.]

Kalman filtering
• Linear models + Gaussian distributions work well (read: they simplify computation; a small numerical check follows below).
• Gaussians are also represented compactly, by a mean and a covariance.
• Want to represent and update the prediction and correction distributions.

Kalman filter for a 1d state
• Dynamics model: x_i ~ N(d x_{i-1}, σ_d^2)
• Measurement model: y_i ~ N(m x_i, σ_m^2)
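One reason linear models plus Gaussians "simplify computation" is that the product of Gaussian densities is again (proportional to) a Gaussian, so every predicted and corrected belief can be summarized by just a mean and a variance. A small numerical check of this closure property in 1d; the particular means and variances are arbitrary test values.

```python
import numpy as np

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Two arbitrary 1d Gaussians (think: a prediction and a measurement likelihood).
mu1, var1 = 2.0, 1.5
mu2, var2 = 3.0, 0.5

# Closed-form product: another Gaussian with these parameters.
var_post = 1.0 / (1.0 / var1 + 1.0 / var2)
mu_post = var_post * (mu1 / var1 + mu2 / var2)

# Numerical check: normalize the pointwise product on a dense grid.
x = np.linspace(-10, 15, 20001)
prod = gaussian(x, mu1, var1) * gaussian(x, mu2, var2)
prod /= np.trapz(prod, x)

print("closed-form mean/var:", mu_post, var_post)
print("numerical   mean/var:", np.trapz(x * prod, x),
      np.trapz((x - mu_post) ** 2 * prod, x))
```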

Notation shorthand
• X_i^-, σ_i^-: mean and standard deviation of the predicted state at time i (before seeing the measurement).
• X_i^+, σ_i^+: mean and standard deviation of the corrected state at time i (after incorporating the measurement).

Kalman filter for a 1d state: prediction
• The linear dynamic model defines the expected state evolution, with noise: x_i ~ N(d x_{i-1}, σ_d^2).
• We want the distribution for the next predicted state: N(X_i^-, (σ_i^-)^2).
• Update the mean:  X_i^- = d X_{i-1}^+
  The predicted mean depends on the state transition value (constant d) and the mean of the previous corrected state.
• Update the variance:  (σ_i^-)^2 = σ_d^2 + (d σ_{i-1}^+)^2
  The variance depends on the uncertainty at the previous state and on the noise of the system's model of state evolution.

Kalman filter for a 1d state: correction
• The linear measurement model reflects how the state is mapped to measurements: y_i ~ N(m x_i, σ_m^2).
• We know the predicted state distribution: N(X_i^-, (σ_i^-)^2).
• We want to correct the distribution over the current state given the new measurement y_i.
• Update the mean:
  X_i^+ = ( X_i^- σ_m^2 + m y_i (σ_i^-)^2 ) / ( σ_m^2 + m^2 (σ_i^-)^2 )
  The corrected state estimate incorporates the current measurement, the predicted state, the measurement model, and their uncertainties.
• Update the variance:
  (σ_i^+)^2 = ( σ_m^2 (σ_i^-)^2 ) / ( σ_m^2 + m^2 (σ_i^-)^2 )
• Small measurement noise → rely on which term? Large measurement noise → rely on which term? (From the mean update: with small σ_m the corrected mean follows the measurement; with large σ_m it follows the prediction.)
• (The 1d predict/correct sketch below implements these updates.)
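A minimal sketch of the 1d predict/correct equations above, for the scalar model x_i ~ N(d x_{i-1}, σ_d^2), y_i ~ N(m x_i, σ_m^2). The parameter values and the simulated "true" trajectory in the demo are made up for illustration.

```python
import numpy as np

def predict_1d(X_prev, sigma_prev, d, sigma_d):
    """Time update: X_i^- = d X_{i-1}^+,  (sigma_i^-)^2 = sigma_d^2 + (d sigma_{i-1}^+)^2."""
    X_pred = d * X_prev
    var_pred = sigma_d ** 2 + (d * sigma_prev) ** 2
    return X_pred, np.sqrt(var_pred)

def correct_1d(X_pred, sigma_pred, y, m, sigma_m):
    """Measurement update using the slide's mean and variance formulas."""
    var_pred = sigma_pred ** 2
    denom = sigma_m ** 2 + (m ** 2) * var_pred
    X_corr = (X_pred * sigma_m ** 2 + m * y * var_pred) / denom
    var_corr = (sigma_m ** 2 * var_pred) / denom
    return X_corr, np.sqrt(var_corr)

# Demo with made-up parameters: true state decays as x_i = 0.9 x_{i-1},
# measurements are noisy copies of the state.
rng = np.random.default_rng(1)
d, m, sigma_d, sigma_m = 0.9, 1.0, 0.1, 0.5
x_true, X, sigma = 5.0, 5.0, 1.0
for i in range(20):
    x_true = d * x_true + rng.normal(0, sigma_d)
    y = m * x_true + rng.normal(0, sigma_m)
    X, sigma = predict_1d(X, sigma, d, sigma_d)
    X, sigma = correct_1d(X, sigma, y, m, sigma_m)
print("true:", round(x_true, 3), "estimate:", round(X, 3), "+/-", round(sigma, 3))
```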

Constant velocity model
• Recall this example: the state is 2d (position + velocity), the measurement is 1d (position only).
• [Figures: true states and measurements plotted as position over time.]
• Kalman filter processing, plotted as position over time:
  o  true state
  x  measurement
  *  predicted mean estimate
  +  corrected mean estimate
  bars: variance estimates

N-d Kalman filtering
• This generalizes to state vectors of any dimension.
• Update rules are given in F&P Alg 17.2 (see the sketch below).

Data association
• So far we have assumed the entire measurement (y) was the cue of interest for the state.
• But there are typically uninformative measurements too: clutter.
• Data association: the task of determining which measurements go with which tracks.
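For the n-d case, the same predict/correct cycle uses matrix updates with a Kalman gain; this sketch follows the standard form of the updates referenced above (F&P Alg 17.2 in spirit), applied to the constant velocity model. The noise covariances and initial belief are illustrative choices, not values from the lecture.

```python
import numpy as np

def kalman_predict(X, P, D, Sigma_d):
    """Time update: X^- = D X^+,  P^- = D P^+ D^T + Sigma_d."""
    return D @ X, D @ P @ D.T + Sigma_d

def kalman_correct(X, P, y, M, Sigma_m):
    """Measurement update via the Kalman gain K."""
    S = M @ P @ M.T + Sigma_m
    K = P @ M.T @ np.linalg.inv(S)
    X_new = X + K @ (y - M @ X)
    P_new = (np.eye(len(X)) - K @ M) @ P
    return X_new, P_new

# Constant velocity model: state [p, v], measure position only.
dt = 1.0                                   # assumed time step
D = np.array([[1.0, dt], [0.0, 1.0]])
M = np.array([[1.0, 0.0]])
Sigma_d = 0.01 * np.eye(2)                 # illustrative process noise
Sigma_m = np.array([[0.25]])               # illustrative measurement noise

rng = np.random.default_rng(2)
x_true = np.array([0.0, 1.0])              # start at p = 0 with v = 1
X, P = np.zeros(2), 10.0 * np.eye(2)       # vague initial belief
for t in range(30):
    x_true = D @ x_true + rng.multivariate_normal(np.zeros(2), Sigma_d)
    y = M @ x_true + rng.multivariate_normal(np.zeros(1), Sigma_m)
    X, P = kalman_predict(X, P, D, Sigma_d)
    X, P = kalman_correct(X, P, y, M, Sigma_m)
print("true state:", np.round(x_true, 2), " estimate:", np.round(X, 2))
```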
