Lecture: Motion

Juan Carlos Niebles and Ranjay Krishna
Stanford Vision and Learning Lab, Stanford University
19-Nov-2019


SLIDE 1

Lecture: Motion

Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab

SLIDE 2

CS 131 Roadmap

(Course diagram: Pixels → Images → Segments → Videos → Web, covering convolutions, edges, descriptors, resizing, segmentation, clustering, recognition, detection, machine learning, motion, tracking, neural networks, and convolutional neural networks. This lecture covers motion and tracking in videos.)

SLIDE 3

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications

Reading: [Szeliski] Chapters: 8.4, 8.5

[Fleet & Weiss, 2005] http://www.cs.toronto.edu/pub/jepson/teaching/vision/2503/opticalFlow.pdf


SLIDE 5

From images to videos

  • A video is a sequence of frames captured over time
  • Now our image data is a function of space (x, y) and time (t)
SLIDE 6

Why is motion useful?


SLIDE 8

Optical flow

  • Definition: optical flow is the apparent motion of brightness patterns in the image
  • Note: apparent motion can be caused by lighting changes without any actual motion
    – Think of a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination

GOAL: Recover image motion at each pixel from optical flow

Source: Silvio Savarese

SLIDE 9

Picture courtesy of Selim Temizer - Learning and Intelligent Systems (LIS) Group, MIT

Optical flow

Vector field function of the spatio-temporal image brightness variations

SLIDE 10

Estimating optical flow

  • Given two subsequent frames, estimate the apparent motion field u(x, y), v(x, y) between them
  • Key assumptions
    – Brightness constancy: projection of the same point looks the same in every frame
    – Small motion: points do not move very far
    – Spatial coherence: points move like their neighbors

Frames: I(x, y, t) and I(x, y, t+1)

Source: Silvio Savarese

SLIDE 11

Key Assumptions: small motions

* Slide from Michael Black, CS143 2003

SLIDE 12

Key Assumptions: spatial coherence

* Slide from Michael Black, CS143 2003

SLIDE 13

Key Assumptions: Brightness Constancy

* Slide from Michael Black, CS143 2003

I(x, y, t) = I(x + u(x, y), y + v(x, y), t + 1)

SLIDE 14

  • Brightness Constancy Equation:

    I(x, y, t) = I(x + u, y + v, t + 1)

  • Linearizing the right side using a Taylor expansion:

    I(x + u, y + v, t + 1) ≈ I(x, y, t) + I_x · u + I_y · v + I_t

    where I_x and I_y are the image derivatives along x and y, and I_t is the image derivative along t.

  • Hence:

    I(x + u, y + v, t + 1) − I(x, y, t) ≈ I_x · u + I_y · v + I_t

    ∇I · [u v]^T + I_t = 0

The brightness constancy constraint

Source: Silvio Savarese

SLIDE 15

Filters used to find the derivatives

(Figure: the derivative filters used to compute I_x, I_y, and I_t.)
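As a concrete sketch of these derivative estimates (a minimal finite-difference version, not the exact kernels pictured on the slide; real pipelines smooth the images first):

```python
import numpy as np

def gradients(frame1, frame2):
    """Estimate I_x, I_y (central differences) and I_t (frame difference).

    A minimal sketch; implementations often also average the spatial
    derivatives over both frames.
    """
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Ix = np.zeros_like(f1)
    Iy = np.zeros_like(f1)
    Ix[:, 1:-1] = (f1[:, 2:] - f1[:, :-2]) / 2.0   # d/dx in the interior
    Iy[1:-1, :] = (f1[2:, :] - f1[:-2, :]) / 2.0   # d/dy in the interior
    It = f2 - f1                                   # d/dt between the frames
    return Ix, Iy, It

# a horizontal ramp shifted right by one pixel (u = 1, v = 0):
# shifting a ramp right by 1 is the same as subtracting 1 everywhere
a = np.tile(np.arange(8.0), (8, 1))
b = a - 1.0
Ix, Iy, It = gradients(a, b)
```

In the interior, I_x·1 + I_y·0 + I_t = 0, matching the brightness constancy constraint for a one-pixel rightward shift.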

SLIDE 16

The brightness constancy constraint

  • How many equations and unknowns per pixel?
    – One equation (this is a scalar equation!), two unknowns (u, v)

    ∇I · [u v]^T + I_t = 0

  • Can we use this equation to recover image motion (u, v) at each pixel?
    – If (u, v) satisfies the equation, so does (u + u', v + v') whenever ∇I · [u' v']^T = 0
    – The component of the flow perpendicular to the gradient ∇I (i.e., parallel to the edge) cannot be measured

Source: Silvio Savarese
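The one component that is measurable, the flow along the gradient, can be computed directly from the constraint. A minimal sketch (function name hypothetical):

```python
import numpy as np

def normal_flow(Ix, Iy, It):
    """Per-pixel flow component along the image gradient.

    From Ix*u + Iy*v + It = 0 only the projection of (u, v) onto the
    gradient is determined; the component along the edge is free.
    """
    g2 = Ix ** 2 + Iy ** 2
    scale = -It / np.maximum(g2, 1e-12)  # -It / |grad I|^2
    return scale * Ix, scale * Iy

# a vertical edge (Ix = 2, Iy = 0) with It = -4: normal flow is (2, 0);
# any (2, v') also satisfies the constraint -- the aperture problem
un, vn = normal_flow(np.array([[2.0]]), np.array([[0.0]]), np.array([[-4.0]]))
```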

SLIDE 17

The aperture problem

Source: Silvio Savarese

Actual motion

SLIDE 18

The aperture problem

Source: Silvio Savarese

SLIDE 19

The aperture problem

Source: Silvio Savarese

Perceived motion

SLIDE 20

The aperture problem

Actual motion

Source: Silvio Savarese

SLIDE 21

The aperture problem

Perceived motion

Source: Silvio Savarese

SLIDE 22

The barber pole illusion

http://en.wikipedia.org/wiki/Barberpole_illusion

SLIDE 23

The barber pole illusion

http://en.wikipedia.org/wiki/Barberpole_illusion

Source: Silvio Savarese

SLIDE 24

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications

Reading: [Szeliski] Chapters: 8.4, 8.5

[Fleet & Weiss, 2005] http://www.cs.toronto.edu/pub/jepson/teaching/vision/2503/opticalFlow.pdf

SLIDE 25

Solving the ambiguity…

  • How to get more equations for a pixel?
  • Spatial coherence constraint:
    – Assume the pixel's neighbors have the same (u, v)
    – If we use a 5x5 window, that gives us 25 equations per pixel

  • B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.

Source: Silvio Savarese

SLIDE 26

Lucas-Kanade flow

  • Overconstrained linear system:

Source: Silvio Savarese

SLIDE 27

Lucas-Kanade flow

  • Overconstrained linear system

The summations are over all pixels in the K x K window

Least squares solution for d given by

Source: Silvio Savarese
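The windowed least-squares solve on this slide can be sketched directly from the normal equations AᵀA d = Aᵀb (a minimal version with no solvability checks; function name hypothetical):

```python
import numpy as np

def lucas_kanade_pixel(Ix, Iy, It):
    """Solve one K x K window's normal equations A^T A d = A^T b.

    Ix, Iy, It hold the derivatives inside the window. A minimal
    sketch: no invertibility or conditioning checks (see the
    "conditions for solvability" slide).
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # 25 x 2 for K = 5
    b = -It.ravel()
    return np.linalg.solve(A.T @ A, A.T @ b)        # d = (u, v)

# a synthetic 5x5 window obeying Ix*u + Iy*v + It = 0 with (u, v) = (2, -1)
rng = np.random.default_rng(0)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(2.0 * Ix - 1.0 * Iy)
u, v = lucas_kanade_pixel(Ix, Iy, It)
```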

SLIDE 28

Conditions for solvability

  – Optimal (u, v) satisfies the Lucas-Kanade equation

When is this solvable?

  • AᵀA should be invertible
  • AᵀA should not be too small due to noise
    – eigenvalues λ1 and λ2 of AᵀA should not be too small
  • AᵀA should be well-conditioned
    – λ1/λ2 should not be too large (λ1 = larger eigenvalue)

Does this remind you of anything?

Source: Silvio Savarese
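These conditions translate into a small checkable predicate on M = AᵀA. The thresholds `tau` and `max_cond` below are illustrative choices, not values from the lecture:

```python
import numpy as np

def is_trackable(Ix, Iy, tau=0.01, max_cond=100.0):
    """Check the slide's solvability conditions on M = A^T A.

    The smaller eigenvalue must not be too small (noise) and the ratio
    lam1/lam2 must not be too large (well-conditioned).
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    lam2, lam1 = np.linalg.eigvalsh(A.T @ A)  # ascending: lam2 <= lam1
    return bool(lam2 > tau and lam1 / max(lam2, 1e-12) < max_cond)

flat = np.zeros((5, 5))                       # "flat" region: untrackable
rng = np.random.default_rng(1)
tx, ty = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))  # textured window
```

A textured window passes both tests, while a gradient-free window fails the eigenvalue test immediately.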

SLIDE 29

  • Eigenvectors and eigenvalues of AᵀA relate to edge direction and magnitude
    – The eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change
    – The other eigenvector is orthogonal to it

M = AᵀA is the second moment matrix! (Harris corner detector…)

Source: Silvio Savarese

SLIDE 30

Interpreting the eigenvalues

Classification of image points using the eigenvalues λ1, λ2 of the second moment matrix:

  • “Corner”: λ1 and λ2 are large, λ1 ~ λ2
  • “Edge”: λ1 >> λ2, or λ2 >> λ1
  • “Flat” region: λ1 and λ2 are small

Source: Silvio Savarese

SLIDE 31

Edge

  – gradients very large or very small
  – large λ1, small λ2

Source: Silvio Savarese

SLIDE 32

Low-texture region

– gradients have small magnitude

  – small λ1, small λ2

Source: Silvio Savarese

SLIDE 33

High-texture region

– gradients are different, large magnitudes

  – large λ1, large λ2

Source: Silvio Savarese

SLIDE 34

Errors in Lucas-Kanade

What are the potential causes of errors in this procedure?
  – We assumed AᵀA is easily invertible
  – We assumed there is not much noise in the image

  • When our assumptions are violated
    – Brightness constancy is not satisfied
    – The motion is not small
    – A point does not move like its neighbors
      • window size is too large
      • what is the ideal window size?

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 35

Improving accuracy

  • Recall our small motion assumption: the linearized brightness constancy is not exact
    – To do better, we need to add the higher order terms back in
    – This is a polynomial root finding problem
    – Can solve using Newton's method (out of scope for this class)
    – The Lucas-Kanade method does one iteration of Newton's method
  • Better results are obtained via more iterations

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 36

Iterative Refinement

  • Iterative Lucas-Kanade algorithm
    1. Estimate the velocity at each pixel by solving the Lucas-Kanade equations
    2. Warp I(t-1) towards I(t) using the estimated flow field
       – use image warping techniques
    3. Repeat until convergence

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 37

When do the optical flow assumptions fail?

In other words, in what situations does the displacement of pixel patches not represent physical movement of points in space?

  • 1. Well, TV is based on illusory motion

– the set is stationary yet things seem to move

  • 2. A uniform rotating sphere

– nothing seems to move, yet it is rotating

  • 3. Changing directions or intensities of lighting can make things seem to move

– for example, if the specular highlight on a rotating sphere moves.

  • 4. Muscle movement can make some spots on a cheetah move opposite to the direction of motion
    – ...and infinitely more breakdowns of optical flow

SLIDE 38

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications
SLIDE 39

Recap

  • Key assumptions (errors in Lucas-Kanade)
    – Small motion: points do not move very far
    – Brightness constancy: projection of the same point looks the same in every frame
    – Spatial coherence: points move like their neighbors

Source: Silvio Savarese

SLIDE 40

Revisiting the small motion assumption

  • Is this motion small enough?
    – Probably not; it's much larger than one pixel (2nd order terms dominate)
    – How might we solve this problem?

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 41

Reduce the resolution!

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 42

Coarse-to-fine optical flow estimation

(Figure: Gaussian pyramids of image 1 and image 2. A displacement of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels.)

Source: Silvio Savarese

SLIDE 43

Coarse-to-fine optical flow estimation

(Figure: Gaussian pyramid of image 1 at time t and of image 2 at time t+1. Run iterative L-K at the coarsest level; then, at each finer level, warp & upsample the flow and run iterative L-K again.)

Source: Silvio Savarese
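The coarse-to-fine control loop can be sketched as follows. Here `estimate_flow` is a placeholder for iterative Lucas-Kanade, and 2x2 block averaging stands in for a proper Gaussian pyramid (both simplifications are mine, not the lecture's):

```python
import numpy as np

def downsample(img):
    """2x2 block average -- a stand-in for one Gaussian pyramid level."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine(frame1, frame2, estimate_flow, levels=3):
    """The coarse-to-fine control loop from the slide.

    estimate_flow(f1, f2, u, v) refines an initial flow guess. Moving to
    a finer level, the flow is upsampled (nearest-neighbour) and doubled.
    """
    pyr1, pyr2 = [frame1], [frame2]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    u = np.zeros_like(pyr1[-1])
    v = np.zeros_like(pyr1[-1])
    for f1, f2 in zip(reversed(pyr1), reversed(pyr2)):
        if u.shape != f1.shape:  # upsample & double the coarser flow
            u = 2.0 * np.kron(u, np.ones((2, 2)))[:f1.shape[0], :f1.shape[1]]
            v = 2.0 * np.kron(v, np.ones((2, 2)))[:f1.shape[0], :f1.shape[1]]
        u, v = estimate_flow(f1, f2, u, v)
    return u, v

frame = np.zeros((16, 16))
keep = lambda f1, f2, u, v: (u, v)          # placeholder: leaves the guess alone
bump = lambda f1, f2, u, v: (u + 1.0, v)    # placeholder: adds 1 px per level
u0, v0 = coarse_to_fine(frame, frame, keep)
u1, v1 = coarse_to_fine(frame, frame, bump)
```

With the `bump` placeholder, 1 pixel found at the coarsest level is doubled at each of the two upsampling steps and incremented per level, illustrating how a large true displacement is accumulated from small per-level estimates.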

SLIDE 44

Optical Flow Results

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

SLIDE 45

Optical Flow Results

* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003

  • http://www.ces.clemson.edu/~stb/klt/
  • OpenCV
SLIDE 46

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications

Reading: [Szeliski] Chapters: 8.4, 8.5

[Fleet & Weiss, 2005] http://www.cs.toronto.edu/pub/jepson/teaching/vision/2503/opticalFlow.pdf

SLIDE 47

Horn-Schunck method for optical flow

  • The flow is formulated as a global energy function which should be minimized:

    E = ∬ [ (I_x u + I_y v + I_t)² + β (|∇u|² + |∇v|²) ] dx dy
SLIDE 48

Horn-Schunck method for optical flow

  • The flow is formulated as a global energy function which should be minimized:
  • The first part of the function is the brightness consistency term.
SLIDE 49

Horn-Schunck method for optical flow

  • The flow is formulated as a global energy function which should be minimized:
  • The second part is the smoothness constraint. It tries to make sure that the changes between pixels are small.

SLIDE 50

Horn-Schunck method for optical flow

  • The flow is formulated as a global energy function which should be minimized:
  • 𝛽 is a regularization constant. Larger values of 𝛽 lead to smoother flow.
SLIDE 51

Horn-Schunck method for optical flow

  • The flow is formulated as a global energy function which should be minimized:
  • This minimization can be solved by taking the derivative with respect to u and v, giving the following 2 equations:

SLIDE 52

Horn-Schunck method for optical flow

  • By taking the derivative with respect to u and v, we get the following 2 equations:

    I_x (I_x u + I_y v + I_t) − β Δu = 0
    I_y (I_x u + I_y v + I_t) − β Δv = 0

  • where Δ = ∂²/∂x² + ∂²/∂y² is the Laplace operator. In practice, it is measured using:

    Δu(x, y) ≈ ū(x, y) − u(x, y)

  • where ū(x, y) is the weighted average of u measured around (x, y).
SLIDE 53

Horn-Schunck method for optical flow

  • Now we substitute Δu ≈ ū − u and Δv ≈ v̄ − v in:

    I_x (I_x u + I_y v + I_t) − β (ū − u) = 0
    I_y (I_x u + I_y v + I_t) − β (v̄ − v) = 0

  • To get:

    (I_x² + β) u + I_x I_y v = β ū − I_x I_t
    I_x I_y u + (I_y² + β) v = β v̄ − I_y I_t

  • Which is linear in u and v and can be solved analytically for each pixel individually.
SLIDE 54

Iterative Horn-Schunck

  • But since the solution depends on the neighboring values of the flow field, it must be repeated once the neighbors have been updated.
  • So instead, we can iteratively solve for u and v using:

    u ← ū − I_x (I_x ū + I_y v̄ + I_t) / (β + I_x² + I_y²)
    v ← v̄ − I_y (I_x ū + I_y v̄ + I_t) / (β + I_x² + I_y²)
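These iterative updates can be sketched directly. A minimal version, assuming a 4-neighbour average for ū and v̄ and using β as the regularization weight:

```python
import numpy as np

def local_average(f):
    """4-neighbour average with replicate padding (the u-bar of the slides)."""
    p = np.pad(f, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

def horn_schunck(Ix, Iy, It, beta=1.0, n_iters=100):
    """Iterative Horn-Schunck: u, v start at zero, then each sweep applies
    u <- u_bar - Ix*(Ix*u_bar + Iy*v_bar + It) / (beta + Ix^2 + Iy^2)."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    for _ in range(n_iters):
        u_bar, v_bar = local_average(u), local_average(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = beta + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# uniform translation u = 1: Ix = 1, Iy = 0, It = -1 everywhere
Ix = np.ones((6, 6))
Iy = np.zeros((6, 6))
It = -np.ones((6, 6))
u, v = horn_schunck(Ix, Iy, It)
```

For this uniform input, each sweep halves the remaining error, so after 100 sweeps u has converged to 1 everywhere.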
SLIDE 55

What does the smoothness regularization do anyway?

  • It's a sum of squared terms (a Euclidean distance measure).
  • We're putting it in the expression to be minimized.
  • => In texture-free regions, there is no optical flow
  • => On edges, points will flow to the nearest points, solving the aperture problem.

(Figure: regularized flow vs. optical flow)

Slide credit: Sebastian Thrun

SLIDE 56

Dense Optical Flow with Michael Black’s method

  • Michael Black took the Horn-Schunck method one step further, starting from the regularization term, which is quadratic in the flow derivatives.
  • He replaced the quadratic penalty with a robust one that grows sub-quadratically for large values, so outliers and motion discontinuities are penalized less severely.
  • Why does this regularization work better?
SLIDE 57

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications
SLIDE 58

Recap

  • Key assumptions
    – Small motion: points do not move very far
    – Brightness constancy: projection of the same point looks the same in every frame
    – Spatial coherence: points move like their neighbors

Source: Silvio Savarese

SLIDE 59

Reminder: Gestalt – common fate

SLIDE 60

Motion segmentation

  • How do we represent the motion in this scene?

Source: Silvio Savarese

SLIDE 61

Motion segmentation

  • Break image sequence into “layers” each of which has a coherent (affine) motion
  • J. Wang and E. Adelson. Layered Representation for Motion Analysis. CVPR 1993.

Source: Silvio Savarese

SLIDE 62

Affine motion

  • The motion within each layer is affine:

    u(x, y) = a1 + a2 x + a3 y
    v(x, y) = a4 + a5 x + a6 y

  • Substituting into the brightness constancy equation:

    I_x · u + I_y · v + I_t ≈ 0

Source: Silvio Savarese

SLIDE 63

Affine motion

  • u(x, y) = a1 + a2 x + a3 y
    v(x, y) = a4 + a5 x + a6 y

  • Substituting into the brightness constancy equation:

    I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t ≈ 0

  • Each pixel provides 1 linear constraint in 6 unknowns
  • Least squares minimization:

    Err(a) = Σ [ I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t ]²

Source: Silvio Savarese
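This 6-parameter least-squares problem can be sketched with one design-matrix row per pixel (a minimal version; function name and coordinate convention are my choices):

```python
import numpy as np

def fit_affine_flow(Ix, Iy, It):
    """Least-squares estimate of the affine motion parameters a1..a6.

    Each pixel contributes one row of the linear constraint
    Ix*(a1 + a2 x + a3 y) + Iy*(a4 + a5 x + a6 y) = -It.
    """
    h, w = Ix.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cols = [Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y]
    A = np.stack([c.ravel() for c in cols], axis=1)  # (h*w) x 6
    a, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return a

# synthetic derivatives consistent with u = 1 + 0.1 x and v = -0.5
rng = np.random.default_rng(2)
Ix = rng.normal(size=(8, 8))
Iy = rng.normal(size=(8, 8))
yy, xx = np.mgrid[0:8, 0:8].astype(float)
It = -(Ix * (1.0 + 0.1 * xx) + Iy * (-0.5))
a = fit_affine_flow(Ix, Iy, It)
```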

SLIDE 64

How do we estimate the layers?

  • 1. Obtain a set of initial affine motion hypotheses
    – Divide the image into blocks and estimate affine motion parameters in each block by least squares
    – Eliminate hypotheses with high residual error
    – Map into motion parameter space
    – Perform k-means clustering on affine motion parameters
    – Merge clusters that are close and retain the largest clusters to obtain a smaller set of hypotheses to describe all the motions in the scene

Source: Silvio Savarese


SLIDE 66

How do we estimate the layers?

  • 1. Obtain a set of initial affine motion hypotheses
    – Divide the image into blocks and estimate affine motion parameters in each block by least squares
    – Eliminate hypotheses with high residual error
    – Map into motion parameter space
    – Perform k-means clustering on affine motion parameters
    – Merge clusters that are close and retain the largest clusters to obtain a smaller set of hypotheses to describe all the motions in the scene
  • 2. Iterate until convergence:
    – Assign each pixel to the best hypothesis
      • Pixels with high residual error remain unassigned
    – Perform region filtering to enforce spatial constraints
    – Re-estimate affine motions in each region

Source: Silvio Savarese

SLIDE 67

Example result

  • J. Wang and E. Adelson. Layered Representation for Motion Analysis. CVPR 1993.

Source: Silvio Savarese

SLIDE 68

What will we learn today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications
SLIDE 69

Uses of motion

  • Tracking features
  • Segmenting objects based on motion cues
  • Learning dynamical models
  • Improving video quality

    – Motion stabilization
    – Super-resolution

  • Tracking objects
  • Recognizing events and activities
SLIDE 70

Estimating 3D structure

Source: Silvio Savarese

SLIDE 71

Segmenting objects based on motion cues

  • Background subtraction
    – A static camera is observing a scene
    – Goal: separate the static background from the moving foreground

Source: Silvio Savarese
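A minimal sketch of this idea, using a per-pixel median over the frames as the static background model (my choice for the sketch; the lecture does not prescribe a specific model):

```python
import numpy as np

def background_subtract(frames, threshold=25.0):
    """Static-camera background subtraction.

    The background is the per-pixel median over all frames; a pixel is
    foreground when it differs from the background by more than the
    (illustrative) threshold. Returns one boolean mask per frame.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    background = np.median(stack, axis=0)
    return [np.abs(f - background) > threshold for f in stack]

# five frames of a static 100-valued scene; one frame has a bright object
frames = [np.full((4, 4), 100.0) for _ in range(5)]
frames[2] = frames[2].copy()
frames[2][1, 1] = 200.0
masks = background_subtract(frames)
```

The median tolerates a moving object that covers a pixel in only a minority of frames, which a plain mean would not.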

SLIDE 72

Segmenting objects based on motion cues

  • Motion segmentation

– Segment the video into multiple coherently moving objects

  • S. J. Pundlik and S. T. Birchfield, Motion Segmentation at Any Speed, Proceedings of the British Machine Vision Conference (BMVC) 2006

Source: Silvio Savarese

SLIDE 73

Z. Yin and R. Collins, "On-the-fly Object Modeling while Tracking," IEEE Computer Vision and Pattern Recognition (CVPR '07), Minneapolis, MN, June 2007.

Tracking objects

Source: Silvio Savarese

SLIDE 74

Synthesizing dynamic textures

SLIDE 75

Super-resolution

Example: A set of low quality images

Source: Silvio Savarese

SLIDE 76

Super-resolution

Each of these images looks like this:

Source: Silvio Savarese

SLIDE 77

Super-resolution

The recovery result:

Source: Silvio Savarese

SLIDE 78

  • D. Ramanan, D. Forsyth, and A. Zisserman. Tracking People by Learning their Appearance. PAMI 2007.

Tracker

Recognizing events and activities

Source: Silvio Savarese

SLIDE 79

Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, BMVC, Edinburgh, 2006.

Recognizing events and activities

SLIDE 80

Crossing – Talking – Queuing – Dancing – Jogging

  • W. Choi & K. Shahid & S. Savarese WMC 2010

Recognizing events and activities

Source: Silvio Savarese

SLIDE 81

  • W. Choi, K. Shahid, S. Savarese, "What are they doing?: Collective Activity Classification Using Spatio-Temporal Relationship Among People", 9th International Workshop on Visual Surveillance (VSWS09) in conjunction with ICCV 09

SLIDE 82

Human Event Understanding: From Actions to Tasks

  • A recent talk:

http://tv.vera.com.uy/video/55276

SLIDE 83

Optical flow without motion!

SLIDE 84

What have we learned today?

  • Optical flow
  • Lucas-Kanade method
  • Pyramids for large motion
  • Horn-Schunck method
  • Common fate
  • Applications

[Fleet & Weiss, 2005] http://www.cs.toronto.edu/pub/jepson/teaching/vision/2503/opticalFlow.pdf

Reading: [Szeliski] Chapters: 8.4, 8.5