Lecture 21: Motion and tracking
Thursday, Nov 29
- Prof. Kristen Grauman
Detection vs. tracking
Tracking with dynamics: we use image measurements to estimate the position of the object, and also incorporate the position predicted by dynamics.
Hidden state xt
Measurement yt
Can we calculate p(xt | y1, y2, …, yt) ?
Example from last time: running average
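The running average is a one-line recursive estimator. A minimal sketch, not from the slides; the smoothing factor `alpha` is an illustrative choice:

```python
def running_average(measurements, alpha=0.5):
    """Recursive running average: estimate <- (1 - alpha) * estimate + alpha * x."""
    estimate = measurements[0]
    history = [estimate]
    for x in measurements[1:]:
        estimate = (1 - alpha) * estimate + alpha * x
        history.append(estimate)
    return history

# A step from 0 to 1 is tracked with geometric lag:
print(running_average([0.0, 1.0, 1.0, 1.0]))  # [0.0, 0.5, 0.75, 0.875]
```

Each new estimate only needs the previous estimate and the current measurement, which is the same recursive structure the tracking-as-inference view formalizes.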
Linear dynamics model: x_t ~ N(D_t x_{t−1}, Σ_{d_t})
  – D_t: n×n, x_{t−1}: n×1, x_t: n×1
Linear measurement model: y_t ~ N(M_t x_t, Σ_{m_t})
  – M_t: m×n, x_t: n×1, y_t: m×1
Tracking as inference:
  – Prediction: P(x_t | y_0, …, y_{t−1})
  – Correction: P(x_t | y_0, …, y_t)
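The two linear-Gaussian models can be simulated directly. A minimal sketch, not from the lecture, using a constant-velocity D and a position-only M, both chosen for illustration:

```python
import random

# Constant-velocity dynamics for a 2-d state (position, velocity) with a 1-d
# position measurement. D, M and the noise levels are illustrative choices.
D = [[1.0, 1.0],   # position += velocity
     [0.0, 1.0]]   # velocity unchanged
M = [1.0, 0.0]     # observe position only

def step(x, sigma_d=0.1, sigma_m=0.5):
    """One step of x_t ~ N(D x_{t-1}, sigma_d^2 I), y_t ~ N(M x_t, sigma_m^2)."""
    pos = D[0][0] * x[0] + D[0][1] * x[1] + random.gauss(0, sigma_d)
    vel = D[1][0] * x[0] + D[1][1] * x[1] + random.gauss(0, sigma_d)
    y = M[0] * pos + M[1] * vel + random.gauss(0, sigma_m)
    return (pos, vel), y

x = (0.0, 1.0)  # start at position 0, moving with velocity 1
for t in range(5):
    x, y = step(x)
    print(t, x, y)
```

The tracker's job is the inverse problem: recover the hidden state x_t from the noisy measurements y_t alone.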
[Figure: state (position + velocity) and 1-d position measurements over time]
Figures from F&P
The Kalman filter: tracking with linear dynamics and Gaussian noise.
  – Dynamics model: x_t ~ N(D_t x_{t−1}, Σ_{d_t}); measurement model: y_t ~ N(M_t x_t, Σ_{m_t})
  – Predicted state: P(x_t | y_0, …, y_{t−1}); corrected state: P(x_t | y_0, …, y_t)
Both densities remain Gaussian, so only their means and covariances need updating.
[Figure: state’s position and velocity; state’s position over time]
Propagation of the state density: the distribution shifts due to the deterministic component of the dynamics model, uncertainty increases due to the random component, and the measurement peaks the density toward the observation.
Figure from Isard & Blake 1998
[Diagram: each measurement turns the current belief into a new belief]
Slide by S. Thrun and J. Kosecka, Stanford
The Kalman filter loop:
  – Predict: knowing the corrected state from the previous time step, and all measurements up to the current one, update the distribution over the predicted state.
  – Correct: knowing the prediction of the state, and receiving the current measurement, update the distribution over the current state estimate.
  – Time advances (i++), and the loop repeats: predict, correct, predict, correct, …
Kalman filter in 1d:
  – Dynamics model: x_i ~ N(d · x_{i−1}, σ_d²)
  – Measurement model: y_i ~ N(m · x_i, σ_m²)
The filter maintains two Gaussians per step: the predicted density P(x_i | y_0, …, y_{i−1}) = N(μ_i^−, (σ_i^−)²) and the corrected density P(x_i | y_0, …, y_i) = N(μ_i^+, (σ_i^+)²).
Prediction step:
  – Update mean: μ_i^− = d μ_{i−1}^+
  – Update variance: (σ_i^−)² = σ_d² + d² (σ_{i−1}^+)²
Predicted mean depends on the state transition value (constant d) and the mean of the previous state. Variance depends on the uncertainty at the previous state, and on the noise of the system’s model of state evolution.
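The prediction step is a direct transcription of the two formulas above; the function name and default noise value here are illustrative, not from the slides:

```python
def kalman_predict_1d(mu_prev, var_prev, d=1.0, var_d=0.1):
    """1-d Kalman prediction:
    mu_i^- = d * mu_{i-1}^+,  (sigma_i^-)^2 = sigma_d^2 + d^2 * (sigma_{i-1}^+)^2."""
    return d * mu_prev, var_d + d * d * var_prev

print(kalman_predict_1d(2.0, 1.0, d=1.0, var_d=0.5))  # (2.0, 1.5)
```

Note that prediction can only increase the variance: without a measurement, uncertainty grows.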
Correction step:
  – Update mean: μ_i^+ = (μ_i^− σ_m² + m y_i (σ_i^−)²) / (σ_m² + m² (σ_i^−)²)
  – Update variance: (σ_i^+)² = σ_m² (σ_i^−)² / (σ_m² + m² (σ_i^−)²)
The corrected state estimate incorporates the current measurement, the predicted state, the measurement model, and their uncertainties.
  – Small measurement noise: what do we rely on?
  – Large measurement noise: what do we rely on?
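The correction formulas can be checked numerically for the small-noise and large-noise cases; the function name and parameter values are illustrative:

```python
def kalman_correct_1d(mu_minus, var_minus, y, m=1.0, var_m=0.25):
    """1-d Kalman correction for a measurement y ~ N(m * x, sigma_m^2)."""
    denom = var_m + m * m * var_minus
    mu_plus = (mu_minus * var_m + m * y * var_minus) / denom
    var_plus = (var_m * var_minus) / denom
    return mu_plus, var_plus

# Small measurement noise: the corrected mean is pulled toward the measurement.
print(kalman_correct_1d(0.0, 1.0, y=2.0, var_m=0.01))
# Large measurement noise: the corrected mean stays near the prediction.
print(kalman_correct_1d(0.0, 1.0, y=2.0, var_m=100.0))
```

With var_m = 0.01 the corrected mean is about 1.98 (trust the measurement); with var_m = 100 it is about 0.02 (trust the prediction), answering the two questions about measurement noise.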
Kalman filter processing example: state is 2d (position + velocity), measurement is 1d (position).
[Plot: position p vs. time; x = measurement, * = predicted mean estimate, + = corrected mean estimate; bars show variance estimates]
http://www.dkimages.com/discover/previews/1002/50215713.JPG
http://www.cs.bu.edu/~betke/research/bats/
[Figure: the density P(x) drifts, spreads, and is peaked toward the observation, as before]
Figure from Isard & Blake 1998
What if we have several competing observations, say due to clutter? The posterior becomes multi-modal, which the Kalman filter's single Gaussian cannot represent.
[Figure: likelihoods P(y | x = skin) and P(y | x = not skin), i.e., the % of skin (or non-skin) pixels falling in each bin of RGB space]
Figure from Isard & Blake 1998
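The two likelihoods are just normalized color histograms. A minimal sketch with made-up training pixels and 8 bins per channel; all names and data here are illustrative:

```python
BINS = 8  # bins per channel, so RGB space has 8 x 8 x 8 bins

def bin_of(pixel):
    """Map an (r, g, b) pixel with channels in 0..255 to its histogram bin."""
    r, g, b = pixel
    return (r * BINS // 256, g * BINS // 256, b * BINS // 256)

def histogram(pixels):
    """Fraction of training pixels falling in each RGB bin."""
    counts = {}
    for p in pixels:
        counts[bin_of(p)] = counts.get(bin_of(p), 0) + 1
    return {b: c / len(pixels) for b, c in counts.items()}

# Tiny made-up training sets (a real system uses many labeled pixels).
p_skin = histogram([(220, 170, 140), (210, 160, 130), (225, 175, 150)])
p_not_skin = histogram([(30, 90, 200), (40, 100, 210), (20, 80, 190)])

def looks_like_skin(pixel, eps=1e-6):
    """Compare P(y | x=skin) against P(y | x=not skin) for one pixel."""
    b = bin_of(pixel)
    return p_skin.get(b, eps) > p_not_skin.get(b, eps)

print(looks_like_skin((215, 165, 135)))  # True
print(looks_like_skin((35, 95, 205)))    # False
```

In tracking, such a likelihood serves as the observation density used to weight hypotheses.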
How to represent and update these distributions?
Particle filtering (condensation), one time step:
  1. Start with weighted samples from the previous time step.
  2. Sample and shift each sample according to the dynamics model.
  3. Spread due to randomness; this is the effective prior density p(x_t | y_{t−1}).
  4. Weight the samples according to the observation density p(y_t | x_t).
  5. Arrive at the current estimate for the posterior p(x_t | y_t).
Figure from Isard & Blake 1998
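The five steps above can be sketched for a 1-d state. This is a simplified illustration, not Isard & Blake's exact algorithm; the random-walk dynamics, Gaussian observation density, and all parameter values are assumptions for the demo:

```python
import math
import random

def particle_filter_step(particles, weights, y, sigma_d=0.3, sigma_m=0.5):
    """One condensation-style update: resample by weight, shift with a
    random-walk dynamics model, then reweight by the observation density."""
    n = len(particles)
    resampled = random.choices(particles, weights=weights, k=n)    # step 1
    predicted = [x + random.gauss(0, sigma_d) for x in resampled]  # steps 2-3
    new_w = [math.exp(-0.5 * ((y - x) / sigma_m) ** 2)             # step 4
             for x in predicted]
    total = sum(new_w) or 1.0
    return predicted, [w / total for w in new_w]                   # step 5

random.seed(0)  # fixed seed so the demo is repeatable
particles = [random.uniform(-5, 5) for _ in range(200)]
weights = [1.0 / 200] * 200
for y in [1.0, 1.2, 1.1]:  # measurements near x = 1
    particles, weights = particle_filter_step(particles, weights, y)
estimate = sum(p * w for p, w in zip(particles, weights))
print(estimate)  # posterior mean lands near the measurements
```

Because the posterior is a weighted sample set rather than a single Gaussian, it can stay multi-modal when several competing observations are present.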
The green spheres correspond to the members of the sample set, where the size of the sphere is an indication of the sample weight. The red line is the measurement density function. http://www.robots.ox.ac.uk/~misard/condensation.html
The monitor is a distractor, so multiple hypotheses are necessary; the Kalman filter fails once it starts tracking the monitor. http://www.robots.ox.ac.uk/~vdg/dynamics.html Visual Dynamics Group, Dept. Engineering Science, University of Oxford, 1998
Switching between multiple motion models. http://www.robots.ox.ac.uk/~vdg/dynamics.html Visual Dynamics Group, Dept. Engineering Science, University of Oxford, 1998
http://www.cs.wisc.edu/graphics/Talks/Gleicher/2002/AnimByExample_files/frame.htm
Ren, Shakhnarovich, Hodgins, Pfister, and Viola, 2005.
3d joint positions: [x1 y1 z1, x2 y2 z2, … x20 y20 z20]
Ambiguities: same body configuration, different orientations; different body configurations, same observation.
Ren, Shakhnarovich, Hodgins, Pfister, and Viola, 2005.