

SLIDE 1

Lecture 21: Motion and tracking

Thursday, Nov 29

Prof. Kristen Grauman
SLIDE 2

Detection vs. tracking

… Tracking with dynamics: We use image measurements to estimate the position of the object, but also incorporate the position predicted by dynamics, i.e., our expectation of the object's motion pattern.

SLIDE 3

Tracking with dynamics

  • Have a model of expected motion
  • Given that, predict where objects will occur in the next frame, even before seeing the image
  • Intent:
    – do less work looking for the object; restrict the search
    – improved estimates, since measurement noise is tempered by trajectory smoothness

SLIDE 4

Tracking as inference: Bayes filters

  • Hidden state x_t
    – The unknown true parameters
    – E.g., actual position of the person we are tracking
  • Measurement y_t
    – Our noisy observation of the state
    – E.g., detected blob's centroid
  • Can we calculate p(x_t | y_1, y_2, …, y_t)?
    – Want to recover the state from the observed measurements

SLIDE 5

States and observations

Hidden state is the list of parameters of interest. Measurement is what we get to directly observe (in the images).

SLIDE 6

Recursive estimation

  • Unlike a batch fitting process, decompose the estimation problem into
    – a part that depends on the new observation
    – a part that can be computed from previous history
  • For tracking, essential given the typical goal of real-time processing.
  • Example from last time: running average (see the sketch below).
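For instance, a minimal sketch of the running-average idea (hypothetical names; `alpha` is an assumed smoothing weight), showing the split into a history part and a new-observation part:

```python
# Recursive (running-average) estimate: each update combines a part
# carried over from history with a part that depends on the new observation.
def update_estimate(prev_estimate, new_observation, alpha=0.1):
    return (1 - alpha) * prev_estimate + alpha * new_observation

estimate = 0.0
for y in [1.2, 0.9, 1.1, 1.0]:   # stream of noisy measurements
    estimate = update_estimate(estimate, y)
```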

SLIDE 7

Tracking as inference

  • Recursive process:
    – Assume we have an initial prior that predicts the state in the absence of any evidence: P(X_0)
    – At the first frame, correct this given the value of Y_0 = y_0
    – Given the corrected estimate for frame t:
      • Predict for frame t+1
      • Correct for frame t+1
SLIDE 8

Tracking as inference

  • Prediction:
    – Given the measurements we have seen up to this point, what state should we predict?
  • Correction:
    – Now, given the current measurement as well, what state should we estimate?
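In equation form, these two steps are the standard Bayes-filter updates (consistent with the independence assumptions on the next slide):

```latex
% Prediction: push the previous corrected estimate through the dynamics
P(x_t \mid y_0,\ldots,y_{t-1}) \;=\; \int P(x_t \mid x_{t-1})\, P(x_{t-1} \mid y_0,\ldots,y_{t-1})\, \mathrm{d}x_{t-1}

% Correction: reweight the prediction by the likelihood of the new measurement
P(x_t \mid y_0,\ldots,y_t) \;\propto\; P(y_t \mid x_t)\, P(x_t \mid y_0,\ldots,y_{t-1})
```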

SLIDE 9

Independence assumptions

  • Only the immediate past state influences the current state
  • Measurements at time t depend only on the current state
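Written out, these are the usual hidden-Markov-style assumptions:

```latex
% Only the immediate past state influences the current state:
P(x_t \mid x_0,\ldots,x_{t-1}) \;=\; P(x_t \mid x_{t-1})

% Measurements at time t depend only on the current state:
P(y_t \mid x_0,\ldots,x_t,\, y_0,\ldots,y_{t-1}) \;=\; P(y_t \mid x_t)
```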

SLIDE 10

Tracking as inference

  • Goal is then to
    – choose a good model for the prediction and correction distributions
    – use the updates to compute the best estimate of the state
      • prior to seeing the measurement
      • after seeing the measurement
SLIDE 11

Gaussian distributions, notation

    x ~ N(μ, Σ)

  • random variable with Gaussian probability distribution that has mean vector μ and covariance matrix Σ.
  • x and μ are d-dimensional, Σ is d x d.

[Figure: example Gaussian densities for d = 1 and d = 2.]

SLIDE 12

Linear dynamic model

  • Describes the a priori knowledge about
    – System dynamics model: represents the evolution of state over time, with noise:

        x_t ~ N(D x_{t−1}; Σ_d)        (D is n x n, x is n x 1)

    – Measurement model: at every time step we get a noisy measurement of the state:

        y_t ~ N(M x_t; Σ_m)            (M is m x n, y is m x 1)

SLIDE 13

Example: randomly drifting points

    x_t ~ N(D x_{t−1}; Σ_d)        y_t ~ N(M x_t; Σ_m)

  • Consider a stationary object, with state as position
  • State evolution is described by the identity matrix, D = I
  • Position is constant; the only motion is due to the random noise term.

SLIDE 14

Example: constant velocity

    x_t ~ N(D x_{t−1}; Σ_d)        y_t ~ N(M x_t; Σ_m)

  • State vector x is 1d position and velocity.
  • Measurement y is position only.

    p_t = p_{t−1} + (Δt) v_{t−1} + ξ
    v_t = v_{t−1} + ζ

    x_t = [p_t; v_t] = D x_{t−1} + noise,    D = [1 Δt; 0 1]
    y_t = M x_t + ξ = p_t + ξ,               M = [1 0]
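A small NumPy sketch of this model (the noise scales and Δt = 1 are assumptions), generating the kind of state and measurement sequences plotted on the next three slides:

```python
import numpy as np

dt = 1.0
D = np.array([[1.0, dt],
              [0.0, 1.0]])       # dynamics: p += dt * v, v stays constant
M = np.array([[1.0, 0.0]])       # measurement: observe position only
sigma_d, sigma_m = 0.05, 0.5     # assumed process / measurement noise levels

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])         # initial state: position 0, velocity 1
states, measurements = [], []
for t in range(50):
    x = D @ x + sigma_d * rng.standard_normal(2)   # x_t ~ N(D x_{t-1}, Sigma_d)
    y = M @ x + sigma_m * rng.standard_normal(1)   # y_t ~ N(M x_t, Sigma_m)
    states.append(x.copy())
    measurements.append(y.item())
```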

SLIDE 15

[Figure: the state's position over time; position and velocity plots.]

Figures from F&P

SLIDE 16

[Figure: the state's position over time; state and measurements.]

Figures from F&P

SLIDE 17

[Figure: the state's position over time; measurements vs. state.]

Figures from F&P

SLIDE 18

Example: constant acceleration

    x_t ~ N(D x_{t−1}; Σ_d)        y_t ~ N(M x_t; Σ_m)

  • State is 1d position, velocity, and acceleration.
  • Measurement is position only.

    p_t = p_{t−1} + (Δt) v_{t−1} + ξ
    v_t = v_{t−1} + (Δt) a_{t−1} + ζ
    a_t = a_{t−1} + ε

    x_t = [p_t; v_t; a_t] = D x_{t−1} + noise,    D = [1 Δt 0; 0 1 Δt; 0 0 1]
    y_t = M x_t + ξ = p_t + ξ,                    M = [1 0 0]
SLIDE 19

[Figure: the state's position and velocity; the state's position over time.]

SLIDE 20

Kalman filter as density propagation

  • Distribution shifts due to the object dynamics model
  • Increased uncertainty due to the random component of the dynamics model
  • Peak moves towards the observation

Figure from Isard & Blake 1998

SLIDE 21

Kalman filter as density propagation

[Diagram: the old belief is shifted by the dynamics, then combined with the measurement to form the new belief.]

Slide by S. Thrun and J. Kosecka, Stanford

SLIDE 22

Kalman filtering

Time update ("Predict"): know the corrected state from the previous time step, and all measurements up to the current one; update the distribution over the predicted state.

Measurement update ("Correct"): know the prediction of the state, receive the current measurement; update the distribution over the current state estimate.

Time advances: i++

SLIDE 23

Kalman filtering

  • Linear models + Gaussian distributions work well (read: simplify computation)
  • Gaussians are also represented compactly

[Diagram: alternating prediction and correction steps.]

SLIDE 24

Kalman filter for 1d state

  • Dynamic model:       x_i ~ N(d x_{i−1}, σ_d²)
  • Measurement model:   y_i ~ N(m x_i, σ_m²)
  • Want to represent and update the distribution P(x_i | y_0, …, y_i).

SLIDE 25

Notation shorthand

  • X_i^−, σ_i^−: mean and standard deviation of the predicted state at step i (before seeing y_i)
  • X_i^+, σ_i^+: mean and standard deviation of the corrected state at step i (after seeing y_i)

SLIDE 26

Kalman filtering

Time update ("Predict"): know the corrected state from the previous time step, X_{i−1}^+, σ_{i−1}^+, and all measurements up to the current one; update the distribution over the predicted state, X_i^−, σ_i^−.

Measurement update ("Correct"): know the prediction of the state, X_i^−, σ_i^−, receive the current measurement y_i; update the distribution over the current state estimate, X_i^+, σ_i^+.

Time advances: i++

SLIDE 27

Kalman filtering

(The prediction–correction cycle diagram again; the next slide details the time update.)

SLIDE 28

Kalman filter for 1d state: prediction

  • Linear dynamic model defines the expected state evolution, with noise:

        x_i ~ N(d x_{i−1}, σ_d²)

  • Want to estimate the distribution for the next predicted state:

        P(x_i | y_0, …, y_{i−1}) = N(X_i^−, (σ_i^−)²)

    – Update the mean:      X_i^− = d X_{i−1}^+
    – Update the variance:  (σ_i^−)² = σ_d² + (d σ_{i−1}^+)²

  • The predicted mean depends on the state transition value (constant d) and the mean of the previous state. The variance depends on the uncertainty at the previous state and the noise of the system's model of state evolution.

SLIDE 29

Kalman filtering

(The prediction–correction cycle diagram again; the next slide details the measurement update.)

SLIDE 30

Kalman filter for 1d state: correction

  • Linear model reflects how the state is mapped to measurements:

        y_i ~ N(m x_i, σ_m²)

  • Know the predicted state distribution:

        N(X_i^−, (σ_i^−)²)

  • Want to correct the distribution over the current state given the new measurement y_i:

        P(x_i | y_0, …, y_i) = N(X_i^+, (σ_i^+)²)

    – Update the mean:

        X_i^+ = (X_i^− σ_m² + m y_i (σ_i^−)²) / (σ_m² + m² (σ_i^−)²)

    – Update the variance:

        (σ_i^+)² = (σ_m² (σ_i^−)²) / (σ_m² + m² (σ_i^−)²)

  • The corrected state estimate incorporates the current measurement, the predicted state, the measurement model, and their uncertainties.
  • Small measurement noise: rely on what? Large measurement noise: rely on what?
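Putting the prediction step (slide 28) and the correction step (this slide) together, a minimal 1d Kalman filter sketch; the variable names mirror the slide notation, and d, m, and the noise values are assumptions:

```python
def kalman_1d(measurements, d=1.0, m=1.0, sigma_d=0.1, sigma_m=1.0,
              x0=0.0, sigma0=1.0):
    """Return the corrected means X_i^+ for a stream of 1d measurements."""
    x_plus, var_plus = x0, sigma0 ** 2
    estimates = []
    for y in measurements:
        # Predict (slide 28): push the previous corrected estimate
        # through the dynamics model.
        x_minus = d * x_plus
        var_minus = sigma_d ** 2 + (d ** 2) * var_plus
        # Correct (slide 30): blend prediction and measurement,
        # each weighted by the other's uncertainty.
        denom = sigma_m ** 2 + m ** 2 * var_minus
        x_plus = (x_minus * sigma_m ** 2 + m * y * var_minus) / denom
        var_plus = (sigma_m ** 2 * var_minus) / denom
        estimates.append(x_plus)
    return estimates
```

This also answers the slide's question: with small measurement noise σ_m, the corrected mean leans on the measurement; with large σ_m, it leans on the prediction.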

SLIDE 31

Constant velocity model

Recall this example:

  • State is 2d: position + velocity
  • Measurement is 1d: position

[Figure: state over time.]

SLIDE 32

Constant velocity model

[Figure: position vs. time, showing the state and the measurements.]

State is 2d: position + velocity. Measurement is 1d: position.

SLIDE 33

Constant velocity model

Kalman filter processing

[Figure: position vs. time. Legend: o = state, x = measurement, * = predicted mean estimate, + = corrected mean estimate; bars show variance estimates.]

SLIDE 34

Constant velocity model

(Kalman filter processing figure repeated from the previous slide.)

SLIDE 35

N-d Kalman filtering

  • This generalizes to state vectors of any dimension
  • Update rules in F&P Alg. 17.2 (stated below)
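For reference, the N-d updates have the same shape as the 1d case; these are the standard Kalman equations written in the slides' D/M notation (F&P Alg. 17.2 states them in essentially this form):

```latex
\text{Predict:}\quad
X_i^- = D\, X_{i-1}^+, \qquad
\Sigma_i^- = \Sigma_d + D\, \Sigma_{i-1}^+ D^\top

\text{Correct:}\quad
K_i = \Sigma_i^- M^\top \left( M \Sigma_i^- M^\top + \Sigma_m \right)^{-1}, \qquad
X_i^+ = X_i^- + K_i \left( y_i - M X_i^- \right), \qquad
\Sigma_i^+ = (I - K_i M)\, \Sigma_i^-
```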
SLIDE 36

Data association

  • We've assumed the entire measurement (y) was the cue of interest for the state
  • But there are typically uninformative measurements too: clutter.
  • Data association: the task of determining which measurements go with which tracks.

http://www.dkimages.com/discover/previews/1002/50215713.JPG

SLIDE 37

Data association (single object in clutter)

  • Global nearest neighbor
    – Choose to pay attention to the measurement with the highest probability given the predicted state
    – Can lead to tracking a non-existent object
  • Probabilistic approach (see the sketch below)
    – Weight the measurements by their probability given the predicted state
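A toy sketch contrasting the two strategies for a single track, with a Gaussian likelihood under the predicted state (all numbers hypothetical):

```python
import numpy as np

def likelihood(y, mean, var):
    # Gaussian density of measurement y under the predicted state.
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

candidates = np.array([4.8, 9.1, 5.3])   # detections; some may be clutter
pred_mean, pred_var = 5.0, 0.5           # predicted measurement distribution

probs = likelihood(candidates, pred_mean, pred_var)
nearest = candidates[np.argmax(probs)]   # global nearest neighbor: hard pick
weights = probs / probs.sum()            # probabilistic approach: soft weights
soft_y = weights @ candidates            # combine measurements by probability
```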

SLIDE 38

  • http://www.cs.bu.edu/~betke/research/bats/
SLIDE 39

Kalman filter limitations

  • Gaussian densities, linear dynamic model:
    + Simple updates, compact and efficient
    – But: unimodal distribution, only a single hypothesis
    – Restricted class of motions defined by the linear model

[Figure: P(x) for x ~ N(μ, Σ).]

SLIDE 40

Kalman filter as density propagation

  • Distribution shifts due to the object dynamics model
  • Increased uncertainty due to the random component of the dynamics model
  • Peak moves towards the observation

What if we have several competing observations, say due to clutter?

Figure from Isard & Blake 1998

SLIDE 41

Recall conditional densities from the skin detection example

[Figure: histograms of the % of skin pixels and % of non-skin pixels in each bin of RGB space: P(y | x = skin) and P(y | x = not skin).]

Measurement is feature y = [R G B]. Bayes' rule: P(skin | y) ∝ P(y | skin) P(skin)

SLIDE 42

Density propagation with non-Gaussian densities

How do we represent and update these distributions?

Figure from Isard & Blake 1998

SLIDE 43

Non-parametric representations for non-Gaussian densities

  • Can represent a distribution with a set of weighted samples ("particles")

SLIDE 44

Factored sampling (single frame)

Represent the posterior p(x|y) non-parametrically:

  • Sample points randomly from the prior density for the state, p(x).
  • Weight the samples according to p(y|x).

Figure from Isard & Blake 1998
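A single-frame factored-sampling sketch under assumed distributions (1d state, Gaussian prior, and a made-up likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
samples = rng.normal(0.0, 2.0, size=n)            # 1. sample from prior p(x)

y_obs = 1.5                                        # observed measurement
weights = np.exp(-0.5 * (y_obs - samples) ** 2)    # 2. weight by p(y|x)
weights /= weights.sum()

posterior_mean = weights @ samples                 # expectation under p(x|y)
```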
SLIDE 45

Particle filtering

  • Extend the idea of sampling to propagate densities over time (i.e., across frames in a video sequence).
  • At each time step, represent the posterior p(x_t | y_t) with a weighted sample set.
  • The previous time step's sample set p(x_t | y_{t−1}) is passed to the next time step as the effective prior.
  • A.k.a. survival of the fittest, sequential Monte Carlo filtering, Condensation [Isard & Blake 96].

SLIDE 46

Particle filtering: Condensation

  • Start with the weighted samples from the previous time step
  • Shift each sample according to the dynamics model
  • Spread due to randomness; this is the effective prior density p(x_t | y_{t−1})
  • Weight the samples according to the observation density
  • Arrive at the current estimate for the posterior p(x_t | y_t)

Figure from Isard & Blake 1998
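One full Condensation iteration following these steps; a sketch under the same assumptions as the factored-sampling example (1d state, linear-Gaussian drift, made-up observation density):

```python
import numpy as np

def condensation_step(samples, weights, y, d=1.0, sigma_d=0.3, rng=None):
    rng = rng or np.random.default_rng()
    n = len(samples)
    # Start from the weighted sample set of the previous time step:
    # resample according to the weights (survival of the fittest).
    samples = samples[rng.choice(n, size=n, p=weights)]
    # Shift each sample by the dynamics model, then spread it by the
    # model's randomness: the result represents p(x_t | y_{t-1}).
    samples = d * samples + sigma_d * rng.standard_normal(n)
    # Weight the samples by the observation density p(y_t | x_t).
    w = np.exp(-0.5 * (y - samples) ** 2)
    return samples, w / w.sum()   # weighted set approximating p(x_t | y_t)
```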

SLIDE 47–50

(Animation builds of the Condensation steps on the previous slide: sample and shift according to the dynamics model, spread due to its randomness, weight by the observation density, and arrive at the posterior p(x_t | y_t). Figure from Isard & Blake 1998.)

SLIDE 51

Particle filtering: Condensation

The green spheres correspond to the members of the sample set, where the size of a sphere indicates the sample's weight. The red line is the measurement density function. http://www.robots.ox.ac.uk/~misard/condensation.html

SLIDE 52

Particle filtering: what we need

  • Initialize according to the prior on the state, p(x_0)
  • Conditional density p(y|x) is defined; e.g., render the model according to state x, then compare the actual image with that rendering
  • Object dynamics p(x_t | x_{t−1})

SLIDE 53

Particle filtering

This matches our general picture of density propagation, and the prediction–correction cycle of tracking with dynamics.
slide-54
SLIDE 54

Condensation-based results Condensation based results

Monitor is a distractor, multiple hypotheses necessary. Kalman filter fails once it starts tracking the monitor. http://www.robots.ox.ac.uk/~vdg/dynamics.html Visual Dynamics Group, Dept. Engineering Science, University of Oxford 1998 1998

SLIDE 55

Condensation-based results

Switching between multiple motion models. http://www.robots.ox.ac.uk/~vdg/dynamics.html Visual Dynamics Group, Dept. Engineering Science, University of Oxford, 1998.

SLIDE 56

Issues

  • Initialization
    – Often done manually
  • Data association, multiple tracked objects
    – Occlusions
  • Deformable and articulated objects
  • Constructing accurate models of dynamics

Next, a brief look at an example-based technique for estimating pose and representing human motion dynamics…

SLIDE 57

http://www.cs.wisc.edu/graphics/Talks/Gleicher/2002/AnimByExample_files/frame.htm

SLIDE 58

http://www.cs.wisc.edu/graphics/Talks/Gleicher/2002/AnimByExample_files/frame.htm

SLIDE 59

Motion capture (mocap)

Collect pose data with active sensing: special markers, cameras.

http://www.cs.wisc.edu/graphics/Talks/Gleicher/2002/AnimByExample_files/frame.htm

SLIDE 60

Motion graphs

  • Graphics application:
    – Any walk on the graph is a valid motion
    – Can synthesize new animation:
      • Select motion clips from the graph
      • Reassemble them to form new motion
    – Maintains realism of motions, because clips retain subtle details of real motion
  • Vision application:
    – Non-parametric representation of human motion dynamics

SLIDE 61

Example-based pose estimation and animation

  • Build a two-character motion graph from examples of people dancing, with mocap
  • Populate a database with synthetically generated silhouettes in poses defined by mocap (behavior-specific dynamics)
  • Use discriminative silhouette features to identify similar examples in the database
  • Retrieve the pose stored for those similar examples to estimate the user's pose
  • Animate the user and a hypothetical partner

Ren, Shakhnarovich, Hodgins, Pfister, and Viola, 2005.

SLIDE 62

Overview

Ren, Shakhnarovich, Hodgins, Pfister, and Viola, 2005.

SLIDE 63

Pose parameters

3d joint positions: [x1 y1 z1, x2 y2 z2, … x20 y20 z20]

SLIDE 64

Rendering database examples

[Figure: the same body configuration rendered at different orientations; different body configurations at the same orientation.]

SLIDE 65

Possible silhouette features

SLIDE 66

Feature selection

  • Want to find features that are discriminative for overall orientation and specific body configuration
  • Use boosting to choose features that separate "similar" and "dissimilar" pairs well

SLIDE 67

Feature selection

Some features selected with AdaBoost, based on the paired classification task.

SLIDE 68

Two-character motion graph

  • Dancing partners' motions are highly correlated
  • Extend the motion graph to represent the partner's pose relative to the user's

SLIDE 69

Example-based pose estimation and animation

Ren, Shakhnarovich, Hodgins, Pfister, and Viola, 2005.

SLIDE 70
  • http://graphics.cs.cmu.edu/projects/swing/
SLIDE 71

  • Issues?

SLIDE 72

References

  • Conditional Density Propagation for Visual Tracking (CONDENSATION), M. Isard and A. Blake, IJCV 1998.
  • Lucas Kovar, Michael Gleicher, Frederic Pighin. Motion Graphs. ACM Transactions on Graphics 21(3) (Proceedings of SIGGRAPH 2002), July 2002.
  • L. Ren, G. Shakhnarovich, J. Hodgins, H. Pfister, P. Viola, "Learning Silhouette Features for Control of Human Motion", ACM Transactions on Graphics, 2005.