6.869 Computer Vision and Applications, Prof. Bill Freeman: Tracking (PowerPoint presentation)



Slide 1

6.869

Computer Vision and Applications

  • Prof. Bill Freeman

Tracking

– Density propagation
– Linear Dynamic models / Kalman filter
– Data association
– Multiple models

Readings: F&P Ch 17

Slide 2

Huttenlocher talk

Slide 3

Huttenlocher talk

Slide 4

Huttenlocher talk

Slide 5

Schedule

  • Thursday, April 28:

– Kalman filter, PS4 due.

  • Tuesday, May 3:

– Tracking articulated objects, Exam 2 out

  • Thursday, May 5:

– How to write papers & give talks, Exam 2 due

  • Tuesday, May 10:

– Motion microscopy, separating shading and paint (“fun things my group is doing”)

  • Thursday, May 12:

– 5-10 min. student project presentations, projects due.

Slide 6

Tracking Applications

  • Motion capture
  • Recognition from motion
  • Surveillance
  • Targeting
Slide 7

Things to consider in tracking

What are the

  • Real world dynamics
  • Approximate / assumed model
  • Observation / measurement process
Slide 8

Density propagation

  • Tracking == Inference over time
  • Much simplification is possible with linear dynamics and Gaussian probability models

Slide 9

Outline

  • Recursive filters
  • State abstraction
  • Density propagation
  • Linear Dynamic models / Kalman filter
  • Data association
  • Multiple models
Slide 10

Tracking and Recursive estimation

  • Real-time / interactive imperative.
  • Task: At each time point, re-compute the estimate of position or pose.

– At time n, fit the model to data from times 0…n
– At time n+1, fit the model to data from times 0…n+1

  • Repeat batch fit every time?
Slide 11

Recursive estimation

  • Decompose the estimation problem

– part that depends on the new observation
– part that can be computed from previous history

  • E.g., running average:

a_t = α a_{t−1} + (1 − α) y_t

  • Linear Gaussian models: Kalman Filter
  • First, general framework…
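The running average above can be sketched as a recursive filter in Python; the value of `alpha`, the initialization from the first sample, and the example inputs are illustrative assumptions, not part of the slides:

```python
# Recursive running average: a_t = alpha * a_{t-1} + (1 - alpha) * y_t.
# Each update needs only the previous estimate and the new observation,
# never the full history -- the essence of recursive estimation.

def running_average(observations, alpha=0.9):
    """Return the sequence of recursive estimates a_t."""
    estimates = []
    a = observations[0]  # assumed initialization: start at the first observation
    for y in observations:
        a = alpha * a + (1 - alpha) * y
        estimates.append(a)
    return estimates

# A constant signal passes through unchanged:
print(running_average([5.0, 5.0, 5.0]))  # -> [5.0, 5.0, 5.0]
```

The point of the decomposition: the loop body touches only `a` and `y`, so the cost per time step is constant instead of growing with the history, unlike a repeated batch fit.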
Slide 12

Tracking

  • Very general model:

– We assume there are moving objects, which have an underlying state X
– There are measurements Y, some of which are functions of this state
– There is a clock

  • at each tick, the state changes
  • at each tick, we get a new observation
  • Examples

– object is a ball; state is 3D position + velocity; measurements are stereo pairs
– object is a person; state is body configuration; measurements are frames; clock is in the camera (30 fps)

Slide 13

Three main issues in tracking

Slide 14

Simplifying Assumptions

Slide 15

Kalman filter graphical model

[Graphical model: a chain of hidden states x1 → x2 → x3 → x4, each state xi emitting an observation yi]

Slide 16

Tracking as induction

  • Assume data association is done

– we’ll talk about this later; a dangerous assumption

  • Do correction for the 0th frame
  • Assume we have a corrected estimate for the ith frame

– show we can do prediction for i+1, correction for i+1

Slide 17

Base case

Slide 18

Induction step

Given the corrected estimate P(x_i | y_0, …, y_i), predict the next state:

P(x_{i+1} | y_0, …, y_i) = ∫ P(x_{i+1} | x_i) P(x_i | y_0, …, y_i) dx_i

Slide 19

Update step

Given the prediction P(x_{i+1} | y_0, …, y_i) and a new measurement y_{i+1}, correct:

P(x_{i+1} | y_0, …, y_{i+1}) ∝ P(y_{i+1} | x_{i+1}) P(x_{i+1} | y_0, …, y_i)

Slide 20

Linear dynamic models

  • A linear dynamic model has the form

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

  • This is much, much more general than it looks, and extremely powerful.

Slide 21

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i}),  y_i ~ N(M_i x_i; Σ_{m_i})

Examples

  • Drifting points

– assume that the new position of the point is the old one, plus noise: D = Id

cic.nist.gov/lipman/sciviz/images/random3.gif
http://www.grunch.net/synergetics/images/random3.jpg

Slide 22

Constant velocity

  • We have

u_i = u_{i−1} + Δt v_{i−1} + ε_i
v_i = v_{i−1} + ς_i

– (the Greek letters denote noise terms)

  • Stack (u, v) into a single state vector:

(u, v)_i = [ 1  Δt ; 0  1 ] (u, v)_{i−1} + noise

– which is the form we had above:

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i}),  y_i ~ N(M_i x_i; Σ_{m_i})
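The constant-velocity dynamics can be sketched as a simulation; `dt` and the noise scales are illustrative assumptions, with `random.gauss` supplying the ε and ς noise terms:

```python
import random

# Constant-velocity dynamic model:
#   u_i = u_{i-1} + dt * v_{i-1} + eps_i   (position)
#   v_i = v_{i-1} + sigma_i                (velocity)
# i.e. x_i = D x_{i-1} + noise with D = [[1, dt], [0, 1]].

def cv_step(u, v, dt=1.0, pos_noise=0.1, vel_noise=0.1):
    """Advance the state (u, v) by one tick of the clock."""
    u_next = u + dt * v + random.gauss(0.0, pos_noise)
    v_next = v + random.gauss(0.0, vel_noise)
    return u_next, v_next

def simulate_cv(u0, v0, n_steps, **kwargs):
    """Return the state trajectory [(u_0, v_0), ..., (u_n, v_n)]."""
    states = [(u0, v0)]
    for _ in range(n_steps):
        states.append(cv_step(*states[-1], **kwargs))
    return states

# With zero noise the point moves in a straight line:
print(simulate_cv(0.0, 1.0, 3, pos_noise=0.0, vel_noise=0.0))
```

With nonzero noise the same code produces the wandering-but-roughly-linear tracks shown on the following slides.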

Slide 23

Constant Velocity Model

[Plots: position vs. time, velocity vs. time, and measurement/position vs. time]

Slide 24

Constant acceleration

  • We have

u_i = u_{i−1} + Δt v_{i−1} + ε_i
v_i = v_{i−1} + Δt a_{i−1} + ς_i
a_i = a_{i−1} + ξ_i

– (the Greek letters denote noise terms)

  • Stack (u, v, a) into a single state vector:

(u, v, a)_i = [ 1  Δt  0 ; 0  1  Δt ; 0  0  1 ] (u, v, a)_{i−1} + noise

– which is the form we had above:

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i}),  y_i ~ N(M_i x_i; Σ_{m_i})

Slide 25

Constant Acceleration Model

[Plots: position and velocity vs. time]

Slide 26

Periodic motion

Assume we have a point moving on a line with a periodic movement defined by a differential equation, e.g. the harmonic oscillator d²p/dt² = −p. With state defined as stacked position and velocity, u = (p, v), this can be written as du/dt = [ 0  1 ; −1  0 ] u, which fits the linear dynamic model form:

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

Slide 27

Periodic motion

Take a discrete approximation of the continuous dynamics du/dt = A u (e.g., forward Euler integration with Δt stepsize):

u_i = u_{i−1} + Δt A u_{i−1} = (Id + Δt A) u_{i−1}

so the dynamics matrix is D_{i−1} = Id + Δt A, giving

x_i ~ N(D_{i−1} x_{i−1}; Σ_{d_i}),  y_i ~ N(M_i x_i; Σ_{m_i})
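The forward-Euler discretization can be sketched as follows; the oscillator matrix A = [ 0 1 ; −1 0 ] and the stepsize are assumptions carried over from the harmonic-oscillator reading of this example, and the slow growth of the orbit is a known artifact of forward Euler:

```python
# Harmonic oscillator du/dt = A u with A = [[0, 1], [-1, 0]],
# discretized by forward Euler: u_i = (I + dt * A) u_{i-1},
# i.e. the dynamics matrix is D = [[1, dt], [-dt, 1]].

def euler_step(p, v, dt):
    """One forward-Euler step of the oscillator state (p, v)."""
    return p + dt * v, v - dt * p

def simulate_oscillator(p0, v0, dt, n_steps):
    """Integrate n_steps of the discretized oscillator."""
    p, v = p0, v0
    for _ in range(n_steps):
        p, v = euler_step(p, v, dt)
    return p, v

# The "energy" p^2 + v^2 grows by exactly (1 + dt^2) per step, so the
# discrete orbit spirals slowly outward instead of staying on a circle:
p, v = simulate_oscillator(1.0, 0.0, dt=0.1, n_steps=10)
print(p * p + v * v)  # slightly larger than the initial 1.0
```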

Slide 28

Higher order models

  • Independence assumption
  • Velocity and/or acceleration augmented position
  • Constant velocity model equivalent to

– velocity ==
– acceleration ==
– could also use , etc.

Slide 29

The Kalman Filter

  • Key ideas:

– Linear models interact uniquely well with Gaussian noise: make the prior Gaussian, and everything else is Gaussian, so the calculations are easy
– Gaussians are really easy to represent: once you know the mean and covariance, you’re done

Slide 30

Recall the three main issues in tracking

(Ignore data association for now)

Slide 31

The Kalman Filter

[figure from http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html]

Slide 32

The Kalman Filter in 1D

  • Dynamic Model
  • Notation:

– x̄ᵢ⁻ : predicted mean
– x̄ᵢ⁺ : corrected mean

Slide 33

The Kalman Filter

Slide 34

Prediction for 1D Kalman filter

  • The new state is obtained by

– multiplying the old state by a known constant
– adding zero-mean noise

  • Therefore, the predicted mean for the new state is

– the constant times the mean of the old state

  • The old state is a normal random variable, so for the predicted variance

– the old variance is multiplied by the square of the constant
– and the variance of the noise is added.

Slide 35

Slide 36

The Kalman Filter

Slide 37

Correction for 1D Kalman filter

Notice:

– if the measurement noise is small, we rely mainly on the measurement
– if it’s large, mainly on the prediction
– σ does not depend on y
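The 1D prediction and correction steps can be sketched together; the function and variable names are mine, with `d` and `m` the scalar dynamics and measurement constants and `sd`, `sm` the corresponding noise standard deviations:

```python
# 1D Kalman filter for the scalar model
#   x_i ~ N(d * x_{i-1}, sd^2),   y_i ~ N(m * x_i, sm^2)

def predict_1d(mean_plus, var_plus, d, sd):
    """Predicted (a priori) mean and variance from the previous corrected estimate."""
    mean_minus = d * mean_plus               # constant times the old mean
    var_minus = sd ** 2 + d ** 2 * var_plus  # noise variance plus d^2 times old variance
    return mean_minus, var_minus

def correct_1d(mean_minus, var_minus, y, m, sm):
    """Corrected (a posteriori) mean and variance given the measurement y."""
    gain = var_minus * m / (m ** 2 * var_minus + sm ** 2)
    mean_plus = mean_minus + gain * (y - m * mean_minus)  # weight the innovation
    var_plus = (1 - gain * m) * var_minus                 # note: does not depend on y
    return mean_plus, var_plus

# With tiny measurement noise the corrected mean follows the measurement:
mean, var = correct_1d(0.0, 1.0, y=2.0, m=1.0, sm=1e-6)
print(round(mean, 3))  # -> 2.0
```

With a large `sm` instead, `gain` approaches 0 and the corrected mean stays near the prediction, which is exactly the behaviour noted on the slide.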

Slide 38

Slide 39

Constant Velocity Model

[Plots: position and velocity vs. time]

Slide 40

[Plot: position vs. time]

Slide 41

[Plot: position vs. time]

Slide 42

[Plot: position vs. time. The o’s give state, the x’s measurements.]

Slide 43

[Plot: position vs. time. The o’s give state, the x’s measurements.]

Slide 44

Smoothing

  • Idea

– We don’t have the best estimate of state: what about the future?
– Run two filters, one moving forward, the other backward in time.
– Now combine the state estimates.

  • The crucial point here is that we can obtain a smoothed estimate by viewing the backward filter’s prediction as yet another measurement for the forward filter.
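Treating the backward filter’s prediction as another measurement of the same state amounts, in the scalar Gaussian case, to inverse-variance weighting of the two estimates; a minimal sketch under that assumption (function name is mine):

```python
def combine(mean_f, var_f, mean_b, var_b):
    """Fuse a forward estimate N(mean_f, var_f) with a backward one N(mean_b, var_b)."""
    w_f = 1.0 / var_f  # precision (inverse variance) of the forward estimate
    w_b = 1.0 / var_b  # precision of the backward estimate
    var = 1.0 / (w_f + w_b)               # combined variance is smaller than either
    mean = var * (w_f * mean_f + w_b * mean_b)
    return mean, var

# Two equally confident estimates average, and the variance halves:
print(combine(1.0, 4.0, 3.0, 4.0))  # -> (2.0, 2.0)
```

The shrinking variance is why the combined uncertainty ellipses on the following slides are tighter than either filter’s alone.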

Slide 45

[Plot: position vs. time. Forward estimates. The o’s give state, the x’s measurements.]

Slide 46

[Plot: position vs. time. Backward estimates. The o’s give state, the x’s measurements.]

Slide 47

[Plot: position vs. time. Combined forward-backward estimates. The o’s give state, the x’s measurements.]

Slide 48

n-D

Generalization to n-D is straightforward but more complex.

Slide 49

n-D

Generalization to n-D is straightforward but more complex.

Slide 50

n-D Prediction

Generalization to n-D is straightforward but more complex. Prediction:

  • Multiply the estimate at the prior time with the forward model: x̄ᵢ⁻ = Dᵢ₋₁ x̄ᵢ₋₁⁺
  • Propagate the covariance through the model and add new noise: Σᵢ⁻ = Dᵢ₋₁ Σᵢ₋₁⁺ Dᵢ₋₁ᵀ + Σ_dᵢ
Slide 51

n-D Correction

Generalization to n-D is straightforward but more complex. Correction:

  • Update the a priori estimate with the measurement to form the a posteriori estimate.

Slide 52

n-D correction

Find the linear filter on the innovations which minimizes the a posteriori error covariance

E[ (x − x̄⁺)(x − x̄⁺)ᵀ ]

K is the Kalman gain matrix. A solution is

Kᵢ = Σᵢ⁻ Mᵢᵀ ( Mᵢ Σᵢ⁻ Mᵢᵀ + Σ_mᵢ )⁻¹
Slide 53

Kalman Gain Matrix

As the measurement becomes more reliable, K weights the residual more heavily:

lim_{Σ_m → 0} Kᵢ = Mᵢ⁻¹

As the prior covariance approaches 0, measurements are ignored:

lim_{Σᵢ⁻ → 0} Kᵢ = 0
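The n-D recursion can be sketched for the constant-velocity model with a scalar position measurement (M = [1 0]), so the innovation covariance is a scalar and no matrix inversion is needed; `dt`, the process noise `q`, and the measurement variance `r` are illustrative assumptions, and the hand-expanded matrix products are specific to this 2-state model:

```python
# Kalman filter for the constant-velocity model:
#   state x = (u, v),  D = [[1, dt], [0, 1]],  measurement y = u + noise (M = [1, 0]).
# A 2x2 covariance is stored as [[a, b], [c, d]].

def kf_predict(mean, cov, dt, q):
    """Prediction: x- = D x+,  S- = D S+ D^T + q * I."""
    u, v = mean
    (a, b), (c, d) = cov
    mean_p = (u + dt * v, v)
    # D S D^T expanded by hand for D = [[1, dt], [0, 1]], plus process noise q * I:
    cov_p = [[a + dt * (b + c) + dt * dt * d + q, b + dt * d],
             [c + dt * d, d + q]]
    return mean_p, cov_p

def kf_correct(mean, cov, y, r):
    """Correction for a scalar position measurement with variance r (M = [1, 0])."""
    s = cov[0][0] + r                      # innovation covariance M S- M^T + r (a scalar)
    k0, k1 = cov[0][0] / s, cov[1][0] / s  # Kalman gain K = S- M^T / s (a 2-vector)
    resid = y - mean[0]                    # innovation y - M x-
    mean_c = (mean[0] + k0 * resid, mean[1] + k1 * resid)
    # (I - K M) S- expanded by hand:
    cov_c = [[(1 - k0) * cov[0][0], (1 - k0) * cov[0][1]],
             [cov[1][0] - k1 * cov[0][0], cov[1][1] - k1 * cov[0][1]]]
    return mean_c, cov_c

# A perfect measurement (r = 0) pins the position estimate to y, matching
# the gain limit K -> M^{-1} as the measurement noise goes to zero:
m, c = kf_correct((0.0, 1.0), [[1.0, 0.0], [0.0, 1.0]], y=2.0, r=0.0)
print(m)  # -> (2.0, 1.0)
```

Note how the velocity estimate also moves when `cov[1][0]` is nonzero: the cross-covariance lets a position measurement correct the unobserved velocity.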

Slide 54

Slide 55

[figure from http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html]

2-D constant velocity example from Kevin Murphy’s Matlab toolbox

Slide 56

2-D constant velocity example from Kevin Murphy’s Matlab toolbox

  • MSE of the filtered estimate is 4.9; of the smoothed estimate, 3.2.
  • Not only is the smoothed estimate better, but we know that it is better, as illustrated by the smaller uncertainty ellipses.
  • Note how the smoothed ellipses are larger at the ends, because these points have seen less data.
  • Also, note how rapidly the filtered ellipses reach their steady-state (“Riccati”) values.

[figure from http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html]

Slide 57

Data Association

In the real world, the measurements yᵢ contain clutter as well as data. E.g., match radar returns to a set of aircraft trajectories.

Slide 58

Data Association

Approaches:

  • Nearest neighbours

– choose the measurement with the highest probability given the predicted state
– popular, but can lead to catastrophe

  • Probabilistic Data Association

– combine measurements, weighting by probability given the predicted state
– gate using the predicted state
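The two approaches can be sketched for a scalar predicted state with a Gaussian likelihood; the 3-sigma gate, the variance, and the sample measurements are illustrative assumptions:

```python
import math

def likelihood(y, pred_mean, pred_var):
    """Gaussian density of measurement y under the predicted state."""
    return math.exp(-(y - pred_mean) ** 2 / (2 * pred_var)) / math.sqrt(2 * math.pi * pred_var)

def nearest_neighbour(measurements, pred_mean, pred_var):
    """Pick the single measurement with the highest probability under the prediction."""
    return max(measurements, key=lambda y: likelihood(y, pred_mean, pred_var))

def probabilistic_data_association(measurements, pred_mean, pred_var, gate=3.0):
    """Average gated measurements, weighted by their probability under the prediction."""
    sd = math.sqrt(pred_var)
    gated = [y for y in measurements if abs(y - pred_mean) <= gate * sd]  # gating step
    weights = [likelihood(y, pred_mean, pred_var) for y in gated]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, gated)) / total

# Prediction at 0.0; the clutter return at 5.0 is gated out, the rest are blended:
print(nearest_neighbour([0.2, -0.5, 5.0], 0.0, 1.0))  # -> 0.2
print(probabilistic_data_association([0.2, -0.5, 5.0], 0.0, 1.0))
```

Nearest-neighbour commits to one return (and can commit to the wrong one, the “catastrophe” above); PDA hedges by blending everything inside the gate.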

Slide 59

[Plot: position vs. time. Red: tracks of 10 drifting points. Blue, black: point being tracked.]

Slide 60

[Plot: position vs. time. Red: tracks of 10 drifting points. Blue, black: point being tracked.]

Slide 61

[Plot: position vs. time. Red: tracks of 10 drifting points. Blue, black: point being tracked.]

Slide 62

[Plot: position vs. time. Red: tracks of 10 drifting points. Blue, black: point being tracked.]

Slide 63

[Plot: position vs. time. Red: tracks of 10 drifting points. Blue, black: point being tracked.]

Slide 64

Abrupt changes

What if the environment is sometimes unpredictable? Do people move with constant velocity? Test several models of assumed dynamics and use the best.

Slide 65

Multiple model filters

Test several models of assumed dynamics

[figure from Welch and Bishop 2001]

Slide 66

MM estimate

Two models: Position (P), Position+Velocity (PV)

[figure from Welch and Bishop 2001]

Slide 67

P likelihood

[figure from Welch and Bishop 2001]

Slide 68

No lag

[figure from Welch and Bishop 2001]

Slide 69

Smooth when still

[figure from Welch and Bishop 2001]

Slide 70

Resources

  • Kalman filter homepage

http://www.cs.unc.edu/~welch/kalman/

  • Kevin Murphy’s Matlab toolbox:

http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html

Slide 71

Jepson, Fleet, and El-Maraghi tracker

Slide 72

Jepson, Fleet, and El-Maraghi tracker

Slide 73

Jepson, Fleet, and El-Maraghi tracker

Slide 74

Jepson, Fleet, and El-Maraghi tracker

Slide 75

Show videos