Computational Photography, Si Lu, Spring 2018 (lecture slides)



SLIDE 1

Computational Photography

Si Lu

Spring 2018

http://web.cecs.pdx.edu/~lusi/CS510/CS510_Computational_Photography.htm

04/24/2018

SLIDE 2

Last Time
  • Relighting
    - Tone Mapping
    - HDR

SLIDE 3

Today
  • Panorama
    - Overview
    - Feature detection
    - Feature matching
  • Mid-term project presentation
    - Not a real mid-term
    - 8-minute presentation
    - Schedule: May 1 and 3

With slides by Prof. C. Dyer and K. Grauman

SLIDE 4

Panorama Building: History

Along the River During Ching Ming Festival, by Zhang Zeduan (1085-1145)

San Francisco from Rincon Hill, 1851, by Martin Behrman

SLIDE 5

Panorama Building: A Concise History
  • The state of the art and practice is good at assembling images into panoramas
    - Mid 90s: commercial players (e.g., QuicktimeVR)
    - Late 90s: robust stitchers (in research)
    - Early 00s: consumer stitching common
    - Mid 00s: automation

SLIDE 6

Stitching Recipe
  • Align pairs of images
  • Align all to a common frame
  • Adjust (Global) & Blend

SLIDE 7

Stitching Images Together

SLIDE 8

When do two images “stitch”?

SLIDE 9

Images can be transformed to match

SLIDE 10

Images are related by Homographies

SLIDE 11

Compute Homographies

SLIDE 12

Automatic Feature Points Matching
  • Match local neighborhoods around points
  • Use descriptors to efficiently compare: SIFT
    - [Lowe 04] is the most common choice

SLIDE 13

Stitching Recipe
  • Align pairs of images
  • Align all to a common frame
  • Adjust (Global) & Blend

SLIDE 14

Wide Baseline Matching
  • Images taken by cameras that are far apart make the correspondence problem very difficult
  • Feature-based approach: detect and match feature points in pairs of images

Credit: C. Dyer

SLIDE 15

Matching with Features
  • Detect feature points
  • Find corresponding pairs

Credit: C. Dyer

SLIDE 16

Matching with Features
  • Problem 1: detect the same point independently in both images
    - If a point is not detected in both images, there is no chance to match!
    - We need a repeatable detector

Credit: C. Dyer

SLIDE 17

Matching with Features
  • Problem 2: for each point, correctly recognize the corresponding point in the other image
    - We need a reliable and distinctive descriptor

Credit: C. Dyer

SLIDE 18

Properties of an Ideal Feature
  • Local: features are local, so robust to occlusion and clutter (no prior segmentation)
  • Invariant (or covariant) to many kinds of geometric and photometric transformations
  • Robust: noise, blur, discretization, compression, etc. do not have a big impact on the feature
  • Distinctive: individual features can be matched to a large database of objects
  • Quantity: many features can be generated for even small objects
  • Accurate: precise localization
  • Efficient: close to real-time performance

Credit: C. Dyer

SLIDE 19

Problem 1: Detecting Good Feature Points

Credit: C. Dyer

[Image from T. Tuytelaars ECCV 2006 tutorial]

SLIDE 20

Feature Detectors
  • Hessian
  • Harris
  • Lowe: SIFT (DoG)
  • Mikolajczyk & Schmid: Hessian/Harris-Laplacian/Affine
  • Tuytelaars & Van Gool: EBR and IBR
  • Matas: MSER
  • Kadir & Brady: Salient Regions
  • Others

Credit: C. Dyer

SLIDE 21

Harris Corner Point Detector
  • C. Harris, M. Stephens, “A Combined Corner and Edge Detector,” 1988

Credit: C. Dyer

SLIDE 22

Harris Detector: Basic Idea
  • We should recognize the point by looking through a small window
  • Shifting the window in any direction should give a large change in response

Credit: C. Dyer

SLIDE 23

Harris Detector: Basic Idea
  • “Flat” region: no change in any direction
  • “Edge”: no change along the edge direction
  • “Corner”: significant change in all directions

Credit: C. Dyer

SLIDE 24

Harris Detector: Derivation

Change of intensity for a (small) shift by $[u, v]$ in image $I$:

$$E(u, v) = \sum_{x, y} w(x, y) \, \left[ I(x + u, y + v) - I(x, y) \right]^2$$

Here $I(x + u, y + v)$ is the shifted intensity, $I(x, y)$ the original intensity, and $w(x, y)$ the weighting function: either a Gaussian, or 1 inside the window and 0 outside.

Credit: R. Szeliski

SLIDE 25

Harris Detector

Apply a 2nd-order Taylor series expansion:

$$E(u, v) \approx A u^2 + 2 C u v + B v^2 = \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} A & C \\ C & B \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$$

where

$$A = \sum_{x, y} w(x, y) \, I_x^2(x, y), \quad B = \sum_{x, y} w(x, y) \, I_y^2(x, y), \quad C = \sum_{x, y} w(x, y) \, I_x(x, y) \, I_y(x, y)$$

with image derivatives $I_x = \partial I(x, y) / \partial x$ and $I_y = \partial I(x, y) / \partial y$.

Credit: R. Szeliski

SLIDE 26

Harris Corner Detector

Expanding $E(u, v)$ in a 2nd-order Taylor series, we have, for small shifts $[u, v]$, a bilinear approximation:

$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

where $M$ is a 2 × 2 matrix computed from image derivatives:

$$M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

with $I_x = \partial I(x, y) / \partial x$ and $I_y = \partial I(x, y) / \partial y$. Note: the sum is computed over a small neighborhood around the given pixel.

Credit: R. Szeliski
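The matrix M above can be computed directly with NumPy. A minimal sketch, assuming a flat weighting w(x, y) = 1 inside the window (real detectors typically use a Gaussian window); the toy image and window size are made up for illustration:

```python
import numpy as np

def structure_tensor(I, x, y, half=2):
    """Harris structure tensor M at pixel (x, y): sums of Ix^2, Iy^2 and
    Ix*Iy over a (2*half+1)^2 window, with flat weighting w(x, y) = 1."""
    Iy, Ix = np.gradient(I.astype(float))          # image derivatives
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    A = np.sum(Ix[win] ** 2)                       # sum of w * Ix^2
    B = np.sum(Iy[win] ** 2)                       # sum of w * Iy^2
    C = np.sum(Ix[win] * Iy[win])                  # sum of w * Ix * Iy
    return np.array([[A, C], [C, B]])

# Toy image: a bright square whose top-left corner is at (8, 8)
I = np.zeros((16, 16))
I[8:, 8:] = 1.0

l1, l2 = np.linalg.eigvalsh(structure_tensor(I, 8, 8))   # at the corner
e1, e2 = np.linalg.eigvalsh(structure_tensor(I, 13, 8))  # on the edge
f1, f2 = np.linalg.eigvalsh(structure_tensor(I, 3, 3))   # in a flat region
```

On this toy image both eigenvalues come out large at the corner, only one is large on the edge, and both vanish in the flat region, matching the classification on the slides that follow.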

SLIDE 27

Harris Corner Detector

$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

Intensity change under a shifting window: eigenvalue analysis. Let $\lambda_1, \lambda_2$ be the eigenvalues of $M$. The ellipse $E(u, v) = \text{const}$ has its short axis, of length $\lambda_{\max}^{-1/2}$, along the direction of fastest change, and its long axis, of length $\lambda_{\min}^{-1/2}$, along the direction of slowest change.

Credit: R. Szeliski

SLIDE 28

Selecting Good Features

$\lambda_1$ and $\lambda_2$ both large (corner)

[Figure: image patch and its SSD surface]

Credit: C. Dyer

SLIDE 29

Selecting Good Features

Large $\lambda_1$, small $\lambda_2$ (edge)

[Figure: image patch and its SSD surface]

Credit: C. Dyer

SLIDE 30

Selecting Good Features

Small $\lambda_1$, small $\lambda_2$ (flat region)

[Figure: image patch and its SSD surface]

Credit: C. Dyer

SLIDE 31

Harris Corner Detector

Classification of image points using the eigenvalues of $M$:
  • “Corner”: $\lambda_1$ and $\lambda_2$ both large, $\lambda_1 \sim \lambda_2$; $E$ increases in all directions
  • “Flat” region: $\lambda_1$ and $\lambda_2$ are small; $E$ is almost constant in all directions
  • “Edge”: $\lambda_1 \gg \lambda_2$, or $\lambda_2 \gg \lambda_1$

Credit: C. Dyer

SLIDE 32

Harris Corner Detector

Measure of corner response:

$$R = \det M - k \, (\operatorname{trace} M)^2$$

where

$$\det M = \lambda_1 \lambda_2, \qquad \operatorname{trace} M = \lambda_1 + \lambda_2$$

and $k$ is an empirically-determined constant; e.g., $k = 0.05$.

Credit: C. Dyer

SLIDE 33

Harris Corner Detector
  • R depends only on the eigenvalues of M
  • R is large and positive for a corner (R > 0)
  • R is negative with large magnitude for an edge (R < 0)
  • |R| is small for a flat region

Credit: C. Dyer

SLIDE 34

Harris Corner Detector: Algorithm
  1. Find points with a large corner response function R (i.e., R > threshold)
  2. Take the points of local maxima of R (for localization) by non-maximum suppression

Credit: C. Dyer
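The two-step algorithm can be sketched end to end in NumPy. A minimal brute-force sketch, assuming a flat 5 × 5 window and a made-up threshold; the toy image has a single corner:

```python
import numpy as np

def harris_response(I, k=0.05, half=2):
    """Corner response R = det(M) - k * (trace M)^2 at every pixel,
    using a flat window and brute-force loops (fine for tiny images)."""
    Iy, Ix = np.gradient(I.astype(float))
    R = np.zeros(I.shape)
    for y in range(half, I.shape[0] - half):
        for x in range(half, I.shape[1] - half):
            win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            A = np.sum(Ix[win] ** 2)
            B = np.sum(Iy[win] ** 2)
            C = np.sum(Ix[win] * Iy[win])
            R[y, x] = (A * B - C * C) - k * (A + B) ** 2
    return R

def harris_corners(I, thresh=0.5):
    """Step 1: keep R > thresh; step 2: 3x3 non-maximum suppression."""
    R = harris_response(I)
    corners = []
    for y in range(1, R.shape[0] - 1):
        for x in range(1, R.shape[1] - 1):
            if R[y, x] > thresh and R[y, x] == R[y-1:y+2, x-1:x+2].max():
                corners.append((x, y))
    return corners

I = np.zeros((16, 16))
I[8:, 8:] = 1.0                 # one bright square -> one corner
print(harris_corners(I))        # a single detection near the square's corner
```

Thresholding alone leaves a blob of strong responses around the corner; the non-maximum suppression of step 2 is what collapses it to a single localized point.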

SLIDE 35

Harris Detector: Example

Credit: C. Dyer

SLIDE 36

Harris Detector: Example

Compute the corner response $R = \lambda_1 \lambda_2 - k (\lambda_1 + \lambda_2)^2$

Credit: C. Dyer

SLIDE 37

Harris Detector: Example

Find points with large corner response: R > threshold

Credit: C. Dyer

SLIDE 38

Harris Detector: Example

Take only the points of local maxima of R

Credit: C. Dyer

SLIDE 39

Harris Detector: Example

Credit: C. Dyer

SLIDE 40

Harris Detector: Example

Interest points extracted with Harris (~500 points)

Credit: C. Dyer

SLIDE 41

Harris Detector: Example

Credit: C. Dyer

SLIDE 42

Harris Detector: Summary
  • Average intensity change in direction $[u, v]$ can be expressed in bilinear form:

$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

  • Describe a point in terms of the eigenvalues of $M$; measure of corner response:

$$R = \lambda_1 \lambda_2 - k (\lambda_1 + \lambda_2)^2$$

  • A good (corner) point should have a large intensity change in all directions, i.e., R should be a large positive value

Credit: C. Dyer

SLIDE 43

Harris Detector Properties
  • Rotation invariance: the ellipse rotates, but its shape (i.e., the eigenvalues) remains the same
  • The corner response R is therefore invariant to image rotation

Credit: C. Dyer
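The rotation invariance can be checked numerically. A minimal sketch, assuming the flat-window Harris response and a made-up toy image: a 90-degree rotation swaps the roles of A and B and flips the sign of C in M, leaving det and trace, and hence R, unchanged:

```python
import numpy as np

def harris_R(I, x, y, k=0.05, half=2):
    """Corner response R = det(M) - k * (trace M)^2 at one pixel,
    with a flat (2*half+1)^2 window."""
    Iy, Ix = np.gradient(I.astype(float))
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    A, B = np.sum(Ix[win] ** 2), np.sum(Iy[win] ** 2)
    C = np.sum(Ix[win] * Iy[win])
    return (A * B - C * C) - k * (A + B) ** 2

I = np.zeros((16, 16))
I[8:, 8:] = 1.0                        # corner of a bright square near (8, 8)
R0 = harris_R(I, 9, 9)                 # response just inside the corner

# Rotate the image 90 degrees: pixel (x=9, y=9) maps to (x=9, y=6).
# The corner moves, but its response value does not.
R90 = harris_R(np.rot90(I), 9, 6)
```

Note that this only demonstrates invariance for an exact 90-degree rotation; for arbitrary angles, interpolation during rotation perturbs R slightly.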

SLIDE 44

Harris Detector Properties
  • But not invariant to image scale
    - At a fine scale, all points along a rounded corner are classified as edges
    - At a coarse scale, the same structure is classified as a corner

Credit: C. Dyer

SLIDE 45

Harris Detector Properties
  • Quality of the Harris detector under different scale changes
  • Repeatability rate = (# correct correspondences) / (# possible correspondences)
  • C. Schmid et al., “Evaluation of Interest Point Detectors,” IJCV 2000

Credit: C. Dyer

SLIDE 46

Invariant Local Features
  • Goal: detect the same interest points regardless of image changes due to translation, rotation, scale, or viewpoint

SLIDE 47

Models of Image Change
  • Geometry
    - Rotation
    - Similarity (rotation + uniform scale)
    - Affine (scale dependent on direction); valid for an orthographic camera and a locally planar object
  • Photometry
    - Affine intensity change (I → a I + b)

Credit: C. Dyer

SLIDE 48

SIFT Detector [Lowe ’04]
  • Difference-of-Gaussian (DoG) is an approximation of the Laplacian-of-Gaussian (LoG)

Lowe, D. G., “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, 60(2), pp. 91-110, 2004

Credit: C. Dyer
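The DoG is simply the difference of the same image blurred at two nearby scales. A minimal NumPy sketch; the 32 × 32 toy image and the sigmas 1.0 and 1.6 are made up for illustration:

```python
import numpy as np

def gaussian_blur(I, sigma):
    """Separable Gaussian blur via two 1-D convolutions (NumPy only)."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    blur_1d = lambda m: np.convolve(m, g, mode='same')
    out = np.apply_along_axis(blur_1d, 0, I.astype(float))  # blur columns
    return np.apply_along_axis(blur_1d, 1, out)             # then rows

# DoG = (image blurred at the larger scale) - (image blurred at the smaller)
I = np.zeros((32, 32))
I[12:20, 12:20] = 1.0          # a bright blob
dog = gaussian_blur(I, 1.6) - gaussian_blur(I, 1.0)
```

The DoG responds around blob-like structures: it is negative at the blob center (the wider blur removes more mass there) and positive just outside the boundary, and is zero in flat regions. SIFT searches for extrema of this response across both position and scale.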

SLIDE 49

SIFT Detector

Credit: C. Dyer

SLIDE 50

SIFT Detector

Credit: C. Dyer

SLIDE 51

SIFT Detector Algorithm Summary
  • Detect local maxima in position and scale of squared values of the difference-of-Gaussian
  • Fit a quadratic to surrounding values for sub-pixel and sub-scale interpolation
  • Output = list of (x, y, σ) points

[Figure: Gaussian pyramid construction: blur, resample, subtract]

Credit: C. Dyer

SLIDE 52

References on Feature Descriptors
  • A performance evaluation of local descriptors, K. Mikolajczyk and C. Schmid, IEEE Trans. PAMI 27(10), 2005
  • Evaluation of features detectors and descriptors based on 3D objects, P. Moreels and P. Perona, Int. J. Computer Vision 73(3), 2007

Credit: C. Dyer

SLIDE 53

Today
  • Panorama
    - Overview
    - Feature detection
    - Feature matching

With slides by Prof. C. Dyer and K. Grauman

SLIDE 54

Stitching Recipe
  • Align pairs of images
    - Feature Detection
    - Feature Matching
    - Homography Estimation
  • Align all to a common frame
  • Adjust (Global) & Blend
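The homography-estimation step in this recipe can be sketched with the classic Direct Linear Transform (DLT). A minimal NumPy sketch without point normalization or RANSAC; the ground-truth homography and the exact correspondences below are made up for illustration:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H with dst ~ H * src (homogeneous,
    up to scale) from >= 4 point correspondences, via the SVD null space."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = flattened H
    return H / H[2, 2]              # fix the arbitrary scale

def apply_h(H, p):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

# Made-up ground-truth homography and five exact correspondences
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.1, 1.2, 3.0],
                   [0.002, 0.001, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = [apply_h(H_true, p) for p in src]
H_est = estimate_homography(src, dst)   # recovers H_true
```

In a real stitcher, the correspondences come from feature matching and contain outliers, so the DLT is wrapped in RANSAC and the points are normalized first; this sketch shows only the core linear estimation.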

SLIDE 55

Invariant Local Features
  • Goal: detect the same interest points regardless of image changes due to translation, rotation, scale, or viewpoint

SLIDE 56

Feature Point Descriptors
  • After detecting points (and patches) in each image, the next question is: how do we match them?
  • A point descriptor should be:
    1. Invariant
    2. Distinctive

All the following slides are from Prof. C. Dyer’s relevant course, except those with explicit acknowledgement.

SLIDE 57

Local Features: Description
  1. Detection: identify the interest points
  2. Description: extract a feature vector for each interest point, e.g. $\mathbf{x}_1 = [x_1^{(1)}, \ldots, x_d^{(1)}]$ and $\mathbf{x}_2 = [x_1^{(2)}, \ldots, x_d^{(2)}]$
  3. Matching: determine the correspondence between descriptors in the two views

SLIDE 58

Geometric Transformations

e.g. scale, translation, rotation

SLIDE 59

Photometric Transformations

Figure from T. Tuytelaars ECCV 2006 tutorial

SLIDE 60

Raw Patches as Local Descriptors

The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector. But this is very sensitive to even small shifts or rotations.

SLIDE 61

Making Descriptors Invariant to Rotation
  • Find the local orientation: the dominant direction of the gradient
  • Compute the description relative to this orientation

1. K. Mikolajczyk, C. Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001
2. D. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. IJCV 2004

SLIDE 62

SIFT Descriptor: Select Major Orientation
  • Compute a histogram of local gradient directions, computed at the selected scale in the neighborhood of a feature point, relative to the dominant local orientation
  • Compute gradients within sub-patches, and compute a histogram of orientations using discrete “bins”
  • The descriptor is rotation and scale invariant, and also has some illumination invariance (why?)

SLIDE 63

SIFT Descriptor
  • Compute gradient orientation histograms on 4 x 4 neighborhoods over a 16 x 16 array of locations in scale space around each keypoint position, relative to the keypoint orientation, using thresholded image gradients from the Gaussian pyramid level at the keypoint’s scale
  • Quantize orientations to 8 values
  • 4 x 4 array of histograms
  • SIFT feature vector of length 4 x 4 x 8 = 128 values for each keypoint
  • Normalize the descriptor to make it invariant to intensity change

D. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints,” IJCV 2004
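One cell of this descriptor can be sketched in NumPy: an 8-bin gradient-orientation histogram, magnitude-weighted and normalized (the full descriptor concatenates 4 x 4 such cells into 128 values). The patch below is a made-up horizontal intensity ramp:

```python
import numpy as np

def orientation_histogram(patch, nbins=8):
    """8-bin gradient-orientation histogram for one SIFT cell: each pixel
    votes for its gradient direction, weighted by gradient magnitude;
    the histogram is then L2-normalized for illumination invariance."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # direction in [0, 2*pi)
    bins = np.minimum((ang * nbins / (2 * np.pi)).astype(int), nbins - 1)
    hist = np.zeros(nbins)
    np.add.at(hist, bins.ravel(), mag.ravel())      # magnitude-weighted votes
    return hist / (np.linalg.norm(hist) + 1e-12)

patch = np.tile(np.arange(8.0), (8, 1))   # ramp: gradient points right
h = orientation_histogram(patch)          # all votes land in bin 0 (angle ~ 0)
```

Because of the final normalization, an affine intensity change a*I + b (with a > 0) leaves the histogram unchanged, which is the illumination invariance noted on the slide.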

SLIDE 64

SLIDE 65

Feature Detection and Description Summary
  • Stable (repeatable) feature points can currently be detected that are invariant to rotation, scale, and affine transformations, but not to more general perspective and projective transformations
  • Feature point descriptors can be computed, but
    - they are noisy due to the use of differential operators
    - they are not invariant to projective transformations

SLIDE 66

Feature Matching

SLIDE 67

Wide-Baseline Feature Matching
  • Standard approach for pair-wise matching:
    - For each feature point in image A, find the feature point with the closest descriptor in image B

From Schaffalitzky and Zisserman ’02

SLIDE 68

Wide-Baseline Feature Matching
  • Compare the distance, d1, to the closest feature with the distance, d2, to the second-closest feature
  • Accept if d1/d2 < 0.6
    - If the ratio of distances is less than the threshold, keep the match
  • Why the ratio test?
    - It eliminates hard-to-match repeated features
    - Distances in SIFT descriptor space seem to be non-uniform
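The ratio test can be sketched as a brute-force matcher in NumPy. The 2-D descriptors below are made up for illustration: descriptor 0 has one clearly closest match, while descriptor 1 has two near-identical candidates and is rejected:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.6):
    """For each descriptor in A, find its two nearest neighbors in B and
    accept the closest only if d1 / d2 < ratio (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all of B
        j1, j2 = np.argsort(dists)[:2]               # nearest, second nearest
        if dists[j1] / dists[j2] < ratio:
            matches.append((i, j1))
    return matches

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.1],     # distinctive match for A[0]
              [5.0, 5.0],
              [0.0, 0.94],    # two ambiguous candidates for A[1]...
              [0.0, 1.05]])   # ...so the ratio test drops that match
print(ratio_test_match(A, B))   # only the match (0, 0) survives
```

Note that an absolute distance threshold would have kept both matches here; the ratio test specifically rejects the ambiguous one even though its nearest neighbor is very close.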

SLIDE 69

Feature Matching
  • Exhaustive search
    - For each feature in one image, look at all the features in the other image(s)
  • Hashing
    - Compute a short descriptor from each feature vector, or hash longer descriptors (randomly)
  • Nearest-neighbor techniques
    - k-d trees and their variants

SLIDE 70

Wide-Baseline Feature Matching
  • Because of the high dimensionality of features, approximate nearest neighbors are necessary for efficient performance
  • See the ANN package by Mount and Arya:

http://www.cs.umd.edu/~mount/ANN/
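The "hash longer descriptors (randomly)" idea from the previous slide can be sketched with random-hyperplane locality-sensitive hashing, a common approximate-nearest-neighbor trick. This illustrates the general hashing idea only; it is not the k-d-tree approach the ANN package implements:

```python
import numpy as np

def lsh_buckets(desc, n_bits=8, seed=0):
    """Random-hyperplane LSH: each descriptor gets an n_bits signature
    (the sign of its dot product with n_bits random hyperplanes); only
    descriptors that fall in the same bucket are compared when matching."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, desc.shape[1]))   # random hyperplanes
    bits = (desc @ planes.T > 0).astype(int)            # one sign bit per plane
    keys = bits @ (1 << np.arange(n_bits))              # pack bits into an int key
    buckets = {}
    for i, k in enumerate(keys):
        buckets.setdefault(int(k), []).append(i)
    return buckets

# SIFT-like 128-D vectors: two identical descriptors share a bucket,
# while the negated vector flips every sign bit and lands elsewhere.
v = np.arange(1.0, 129.0)
buckets = lsh_buckets(np.vstack([v, v, -v]))
```

Matching then only searches within each bucket instead of all of B, trading a small chance of missing the true nearest neighbor for a large speedup.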

SLIDE 71

Next Time
  • Panorama
    - Homography estimation
    - Blending
    - Multi-perspective panoramas