

SLIDE 1

Schedule

  • Tuesday, May 10:
    – Motion microscopy, separating shading and paint
  • Thursday, May 12:
    – 5-10 min. student project presentations; projects due

SLIDE 2

Computer vision for photography

Bill Freeman Computer Science and Artificial Intelligence Laboratory, MIT

SLIDE 3

Multiple-exposure images by Marey

SLIDE 4

Edgerton

SLIDE 5

Computational photography

Update those revealing photographic techniques with digital methods:
  1) Shape-Time photography
  2) Motion microscopy
  3) Separating shading and paint

SLIDE 6

Shape-Time photography

Joint work with Hao Zhang, U.C. Berkeley

SLIDE 7

Panel labels: Video frames · Multiple-exposure · Shape-Time · Layer-By-Time

SLIDE 8

SLIDE 9

SLIDE 10

SLIDE 11

“how to sew”

SLIDE 12

SLIDE 13

Panel labels: Input sequence · “inside-out” Shape-Time composite

SLIDE 14

Insert pictures describing ZCam and initial results.

SLIDE 15

Motion Magnification

Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, Edward H. Adelson

SLIDE 16

Goal

A microscope for motion

You focus the microscope by specifying which motions to magnify, and by how much; the motion microscope then re-renders the input sequence with the desired motions magnified.

SLIDE 17

The naïve solution has artifacts

Panels: original sequence vs. naïve motion magnification (amplified dense Lucas-Kanade optical flow). Note the artifacts at occlusion boundaries.

SLIDE 18

Motion magnification flowchart

Input raw video sequence →
  1. Register frames
  2. Find feature point trajectories
  3. Cluster trajectories
  4. Interpolate dense optical flow
  5. Segment flow into layers
  6. Magnify selected layer, fill in textures
  7. Render magnified video sequence

Layer-based motion analysis
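The flow of data through the pipeline can be made explicit with a stub skeleton; every function name here is an illustrative placeholder (the individual steps are sketched on the slides that follow), and each stub just passes typed arrays through:

```python
import numpy as np

# Hypothetical skeleton of the motion-magnification pipeline. Each stage is a
# stub so the data flow between steps is explicit.

def register_frames(frames):                        # 1. stabilize camera motion
    return frames

def track_features(frames):                         # 2. trajectories, (N, T, 2)
    return np.zeros((0, len(frames), 2))

def cluster_trajectories(tracks):                   # 3. one label per trajectory
    return np.zeros(len(tracks), dtype=int)

def interpolate_dense_flow(tracks, labels, shape):  # 4. dense per-layer flow
    return np.zeros((*shape, 2))

def segment_layers(frames, flow):                   # 5. per-pixel layer labels
    return np.zeros(frames[0].shape[:2], dtype=int)

def magnify(frames, flow, layers, selected, alpha=10.0):  # 6-7. warp + render
    return frames

def motion_magnify(frames, selected_layer=1):
    frames = register_frames(frames)
    tracks = track_features(frames)
    labels = cluster_trajectories(tracks)
    flow = interpolate_dense_flow(tracks, labels, frames[0].shape[:2])
    layers = segment_layers(frames, flow)
    return magnify(frames, flow, layers, selected_layer)
```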

SLIDE 19

1 Video Registration

  • To find a reliable set of feature points that are “still” in the sequence:
    – Detect and track feature points
    – Estimate the affine motion from the reference frame to each of the other frames
    – Select feature points that are inliers through all the frames
    – Affine warping based on the inliers
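The registration step can be sketched as follows. `fit_affine` and `still_feature_mask` are illustrative names, and the simple fit-then-threshold inlier test stands in for whatever robust estimator the authors actually use; warping each frame by its fitted affine then registers the sequence:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map (2x3 matrix A) taking src points to dst."""
    X = np.hstack([src, np.ones((len(src), 1))])      # (N, 3) affine basis
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)       # (3, 2)
    return A.T                                        # (2, 3)

def still_feature_mask(ref_pts, frame_pts, tol=1.0):
    """Mask of feature points consistent with one affine motion in every frame.

    ref_pts: (N, 2) positions in the reference frame.
    frame_pts: iterable of (N, 2) corresponding positions in the other frames.
    A point survives only if its affine-prediction residual stays below
    `tol` pixels in all frames.
    """
    keep = np.ones(len(ref_pts), dtype=bool)
    for pts in frame_pts:
        A = fit_affine(ref_pts, pts)
        pred = ref_pts @ A[:, :2].T + A[:, 2]
        keep &= np.linalg.norm(pred - pts, axis=1) < tol
    return keep
```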

SLIDE 20

Inliers (red) and outliers (blue)

SLIDE 21

Registration results

SLIDE 22

2 Find feature point trajectories

  • An EM algorithm to find both the trajectory and the region of support for each feature point:
    – E-step: use the variance of the matching score to compute the weights of the neighboring pixels
    – M-step: track the feature point based on its region of support
  • The following feature points are pruned:
    – Occluded (matching error)
    – Textureless (motion coherence)
    – Static (zero motion)

SLIDE 23

Learned regions of support for features

SLIDE 24

Robust feature point tracking

SLIDE 25

Minimal SSD match to find feature point trajectories
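A minimal-SSD match can be sketched as a brute-force window search; the patch and search radii are illustrative parameters, and this omits the region-of-support weighting the EM step adds:

```python
import numpy as np

def ssd_match(prev, cur, pt, patch=2, search=4):
    """Track one feature by minimal sum-of-squared-differences (SSD).

    prev, cur: consecutive grayscale frames (2D arrays).
    pt: (row, col) integer feature position in `prev`.
    Compares the (2*patch+1)^2 window around pt with every candidate window
    within +/-search pixels in `cur`; returns the best-matching position.
    """
    r, c = pt
    ref = prev[r - patch:r + patch + 1, c - patch:c + patch + 1]
    best, best_pt = np.inf, pt
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr - patch < 0 or cc - patch < 0:
                continue                              # candidate off the frame
            win = cur[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1]
            if win.shape != ref.shape:
                continue                              # truncated at the border
            ssd = np.sum((win - ref) ** 2)
            if ssd < best:
                best, best_pt = ssd, (rr, cc)
    return best_pt
```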

SLIDE 26

Use EM to find regions of support and prune low-likelihood trajectories

SLIDE 27

3 Trajectory clustering

We need to cluster trajectories belonging to the same object, despite:
  – Points having different appearances
  – Very small motions, of varying amplitudes and directions

(Plots: vx and vy over time for example trajectories)

SLIDE 28

3 Compatibility function used to group feature point trajectories

  • ρn,m: compatibility between the nth and mth point trajectories.
  • vx(n,k): the displacement, relative to the reference frame, of the nth feature point in the kth frame.
  • Using the ρn,m compatibilities, cluster the point trajectories using normalized cuts.
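A minimal spectral sketch of this grouping: the slide names ρn,m but not its formula, so a Gaussian of trajectory distance is assumed here as the affinity, and the sign of the normalized Laplacian's second eigenvector gives a two-way split (the spectral relaxation behind normalized cuts), rather than a full normalized-cuts solver:

```python
import numpy as np

def split_trajectories(vel, sigma=0.5):
    """Two-way spectral split of feature-point trajectories.

    vel: (N, K) array, e.g. x-displacement of each of N trajectories over K
    frames. rho[n, m] is an assumed Gaussian affinity of trajectory distance.
    Returns 0/1 labels from the sign of the second eigenvector of the
    symmetric normalized Laplacian.
    """
    d2 = np.sum((vel[:, None, :] - vel[None, :, :]) ** 2, axis=2)
    rho = np.exp(-d2 / (2 * sigma ** 2))                     # affinity matrix
    d_inv_sqrt = 1.0 / np.sqrt(rho.sum(axis=1))
    lap = np.eye(len(vel)) - d_inv_sqrt[:, None] * rho * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)                         # ascending order
    return (vecs[:, 1] > 0).astype(int)                      # sign of Fiedler vector
```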

SLIDE 29

Clustering results

SLIDE 30

4 Dense optical flow field interpolation

  • For each layer (cluster), a dense (per-pixel) optical flow field is interpolated.
  • Use locally weighted linear regression to interpolate between feature point trajectories.

(Figures: clustered feature point trajectories; dense trajectories interpolated for each cluster)
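Locally weighted linear regression for one frame's flow can be sketched as follows; the Gaussian weighting kernel and its bandwidth are assumptions, as is the choice of an affine (linear-in-position) local model:

```python
import numpy as np

def lwlr_flow(pts, disp, query, sigma=3.0):
    """Dense flow at `query` pixels by locally weighted linear regression.

    pts: (N, 2) feature positions in one layer; disp: (N, 2) their
    displacements in this frame; query: (M, 2) pixel positions. For each
    query pixel, fit an affine model of displacement to the features,
    weighted by a Gaussian of distance, and evaluate it at that pixel.
    """
    out = np.zeros((len(query), 2))
    X = np.hstack([pts, np.ones((len(pts), 1))])        # (N, 3) affine basis
    for i, q in enumerate(query):
        w = np.exp(-np.sum((pts - q) ** 2, axis=1) / (2 * sigma ** 2))
        sw = np.sqrt(w)[:, None]                        # row-scaled least squares
        beta, *_ = np.linalg.lstsq(X * sw, disp * sw, rcond=None)  # (3, 2)
        out[i] = np.array([q[0], q[1], 1.0]) @ beta
    return out
```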

SLIDE 31

5 Segment flow into layers

  • Assign each pixel to a motion cluster layer, using four cues:
    – Motion likelihood
    – Color likelihood
    – Spatial connectivity
    – Temporal coherence
  • Energy minimization using graph cuts
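The energy being minimized pairs per-pixel (unary) cue costs with a spatial smoothness (Potts) term. The talk minimizes it with graph cuts; the toy sketch below evaluates the same kind of energy but uses simple iterated conditional modes (ICM) instead, since a graph-cut solver is not self-contained. The single `unary` cost stands in for the combined motion/color cues:

```python
import numpy as np

def icm_segment(unary, lam=1.0, iters=10):
    """Toy layer assignment: minimize unary cost + Potts spatial smoothness.

    unary: (H, W, L) per-pixel cost of each of L layer labels. The pairwise
    term charges `lam` for each pair of 4-neighbors with different labels.
    ICM greedily relabels pixels until no label changes.
    """
    h, w, L = unary.shape
    labels = np.argmin(unary, axis=2)           # initialize from unary term alone
    for _ in range(iters):
        changed = False
        for r in range(h):
            for c in range(w):
                costs = unary[r, c].copy()
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= rr < h and 0 <= cc < w:
                        # Potts penalty vs. each candidate label
                        costs += lam * (np.arange(L) != labels[rr, cc])
                new = int(np.argmin(costs))
                if new != labels[r, c]:
                    labels[r, c] = new
                    changed = True
        if not changed:
            break
    return labels
```

Unlike graph cuts, ICM only finds a local minimum, which is why the real system uses the stronger solver.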
SLIDE 32

Motion segmentation results

Note we have 2 special layers: the background layer (gray), and the outlier layer (black).

SLIDE 33

6, 7 Magnification, texture fill-in and rendering

  • Amplify the motion of the selected layers by warping the reference image pixels accordingly.
  • Render unselected layers without magnification.
  • Fill in holes revealed in the background layer using Efros-Leung texture synthesis.
  • Directly pass through the pixel values of the outlier layer.
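The amplification-and-warp step for one layer can be sketched as a forward splat of each layer pixel to its amplified position; the nearest-pixel rounding is an assumption (a real renderer would resample more carefully), and holes behind the moved layer are marked for the texture fill-in step:

```python
import numpy as np

def magnify_layer(image, mask, flow, alpha=10.0):
    """Warp a layer's pixels by alpha times their motion (forward splat).

    image: (H, W) float reference frame; mask: (H, W) bool, pixels of the
    selected layer; flow: (H, W, 2) per-pixel (dr, dc) displacement for this
    frame. Unselected pixels render unmagnified; holes opened behind the
    moved layer are left as NaN for a later texture-synthesis fill-in.
    """
    h, w = image.shape
    out = np.where(mask, np.nan, image)        # background stays; layer lifts off
    rs, cs = np.nonzero(mask)
    for r, c in zip(rs, cs):
        nr = int(round(r + alpha * flow[r, c, 0]))
        nc = int(round(c + alpha * flow[r, c, 1]))
        if 0 <= nr < h and 0 <= nc < w:
            out[nr, nc] = image[r, c]          # splat magnified layer pixel
    return out
```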
SLIDE 34

Summary of motion magnification steps

SLIDE 35

Results

  • Demo
SLIDE 36

Layered representation

SLIDE 37

SLIDE 38

Is the signal really in the video? (yes)

Panels: original vs. magnified (25 frames)

SLIDE 39

Swingset details

  • Beam bending
  • Proper handling of occlusions
  • Artifact

SLIDE 40

Bookshelf

44 frames, 480×640 pixels. Panels: original vs. magnified, shown over time.

SLIDE 41

Bookshelf deforms less, further from force

44 frames, 480×640 pixels. Panels: original vs. magnified, shown over time.

SLIDE 42

Outtakes from imperfect segmentations

SLIDE 43

Breathing Mike

Original sequence…

SLIDE 44

Breathing Mike

Feature points 4 clusters 2 clusters 8 clusters

SLIDE 45

Breathing Mike

Sequence after magnification…

SLIDE 46

Standing Mike

Sequence after magnification…

SLIDE 47

Crane

SLIDE 48

Crane

Feature points 4 clusters 2 clusters 8 clusters

SLIDE 49

Crane

Things can go horribly wrong sometimes…

SLIDE 50

What next

  • Continue improving the motion segmentation.
    – Motion magnification is “segmentation artifact amplification”: a good test bed.
  • Real applications
    – Videos of the inner ear
    – Connections with the mechanical engineering department
  • Generalization: amplifying small differences in motion.
    – What’s up with Tiger Woods’ golf swing, anyway?