

  1. Lecture 5: Edges, Corners, Sampling, Pyramids Thursday, Sept 13

  2. Normalized cross correlation (figure: template and best match) • Normalized correlation: normalize for image-region brightness • Windowed correlation search: an inexpensive way to find a fixed-scale pattern • (Convolution = correlation if the filter is symmetric)
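A minimal NumPy sketch of this windowed search (the function name, the brute-force loops, and the small epsilon for numerical safety are my choices, not from the slides):

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide the template over the image and return the NCC score map.

    Each window and the template are mean-subtracted and divided by their
    norms, which makes the score insensitive to local brightness changes.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-8

    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            w_norm = np.sqrt((w ** 2).sum()) + 1e-8
            scores[i, j] = (w * t).sum() / (w_norm * t_norm)
    return scores

# The best match is the location with the highest score:
# i, j = np.unravel_index(scores.argmax(), scores.shape)
```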

  3. Filters and scenes

  4. Filters and scenes • Scenes have holistic qualities • Can represent scene categories with global texture • Use steerable filters, windowed for some limited spatial information • Model the likelihood of filter responses given the scene category as a mixture of Gaussians (and incorporate some temporal info…) [Torralba & Oliva, 2003] [Torralba, Murphy, Freeman, and Rubin, ICCV 2003]

  5. Steerable filters • Convolution is linear, so a filter of arbitrary orientation can be synthesized as a linear combination of “basis filters” • Interpolating the basis filter responses is more efficient than explicitly filtering at an arbitrary orientation [Freeman & Adelson, The Design and Use of Steerable Filters, PAMI 1991]

  6. Steerable filters (Freeman & Adelson, 1991): basis filters for the derivative of Gaussian; a filter at any orientation is written as a weighted sum of the x- and y-derivative basis filters
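A hedged sketch of the steering idea for first-order Gaussian derivatives (the filter size, sigma, and function names below are illustrative assumptions):

```python
import numpy as np

def gaussian_derivative_basis(size=9, sigma=1.5):
    """x- and y-derivative-of-Gaussian basis filters."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    gx = -x / sigma ** 2 * g          # dG/dx
    gy = -y / sigma ** 2 * g          # dG/dy
    return gx, gy

def steer(gx, gy, theta):
    """Derivative-of-Gaussian filter oriented at angle theta (radians).

    For first-order Gaussian derivatives the steering equation is
    G_theta = cos(theta) * G_x + sin(theta) * G_y; because convolution is
    linear, the filter *responses* can be interpolated the same way.
    """
    return np.cos(theta) * gx + np.sin(theta) * gy
```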

  7. Probability of the scene given global features [Torralba, Murphy, Freeman, and Rubin, ICCV 2003]

  8. [Torralba, Murphy, Freeman, and Rubin, ICCV 2003]

  9. Contextual priors • Use scene recognition → predict objects present • For object(s) likely to be present, predict locations based on similarity to previous images with the same place and that object

  10. Recognition results: scene category and specific place (black = right, red = wrong) [Torralba, Murphy, Freeman, and Rubin, ICCV 2003]

  11. Legend: blue solid circle = recognition with temporal info; black hollow circle = instantaneous recognition using the global feature only; cross = true location

  12. Image gradient The gradient of an image is ∇f = (∂f/∂x, ∂f/∂y). The gradient points in the direction of most rapid change in intensity. The gradient direction (orientation of the edge normal) is given by θ = tan⁻¹( (∂f/∂y) / (∂f/∂x) ), and the edge strength is given by the gradient magnitude ‖∇f‖ = sqrt( (∂f/∂x)² + (∂f/∂y)² ). Slide credit S. Seitz
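A small NumPy sketch of these quantities, using central differences (the function name is mine):

```python
import numpy as np

def gradient_magnitude_and_orientation(image):
    """Edge strength (gradient magnitude) and edge-normal orientation."""
    gy, gx = np.gradient(image.astype(float))   # central differences; axis 0 is the row (y) axis
    magnitude = np.hypot(gx, gy)                 # sqrt(gx^2 + gy^2)
    orientation = np.arctan2(gy, gx)             # direction of most rapid change
    return magnitude, orientation
```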

  13. Effects of noise Consider a single row or column of the image • Plotting intensity as a function of position gives a signal. Where is the edge? Slide credit S. Seitz

  14. Solution: smooth first. Where is the edge? Look for peaks in the derivative of the smoothed signal, d/dx (h ⋆ f)

  15. Derivative theorem of convolution: d/dx (h ⋆ f) = (d/dx h) ⋆ f. This saves us one operation: convolve once with the derivative of the Gaussian instead of smoothing and then differentiating. Slide credit S. Seitz
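A minimal 1-D illustration of this identity (the sigma, kernel radius, and step-edge test signal are my choices):

```python
import numpy as np

sigma = 2.0
x = np.arange(-8, 9, dtype=float)
g = np.exp(-x ** 2 / (2 * sigma ** 2))
g /= g.sum()                              # normalized Gaussian h
dg = -x / sigma ** 2 * g                  # its derivative, d/dx h

f = np.zeros(100)
f[50:] = 1.0                              # an ideal step edge at index 50

edge_a = np.gradient(np.convolve(f, g, mode="same"))  # smooth, then differentiate
edge_b = np.convolve(f, dg, mode="same")               # one convolution with d/dx h
# Away from the array borders, edge_a and edge_b agree closely and both
# peak at the edge location (index 50).
```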

  16. Laplacian of Gaussian Consider the Laplacian of Gaussian operator, (d²/dx² h) ⋆ f. Where is the edge? At the zero-crossings of the filtered signal (bottom graph)
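A 1-D sketch of locating the edge at a zero-crossing of the LoG-filtered signal (the kernel size, sigma, and test signal are assumptions):

```python
import numpy as np

sigma = 2.0
x = np.arange(-10, 11, dtype=float)
g = np.exp(-x ** 2 / (2 * sigma ** 2))
log_kernel = (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g   # d^2 g / dx^2

f = np.zeros(100)
f[50:] = 1.0                                # step edge at index 50
response = np.convolve(f, log_kernel, mode="same")

# Sign changes in the response mark candidate edges; in practice one also
# requires a large slope at the crossing to suppress spurious detections.
crossings = np.where(np.diff(np.sign(response)) != 0)[0]
```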

  17. 2D edge detection filters: Gaussian, derivative of Gaussian, Laplacian of Gaussian • ∇² is the Laplacian operator: ∇²h = ∂²h/∂x² + ∂²h/∂y² Slide credit S. Seitz

  18. The Canny edge detector original image (Lena)

  19. The Canny edge detector norm of the gradient

  20. The Canny edge detector thresholding

  21. Non-maximum suppression Check whether a pixel is a local maximum along the gradient direction, selecting a single maximum across the width of the edge • requires checking the interpolated pixels p and r Slide credit S. Seitz
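A simplified sketch of non-maximum suppression: instead of interpolating the sub-pixel values p and r, it quantizes the gradient direction to the nearest pair of neighbouring pixels (the function name and the 4-way quantization are my simplifications):

```python
import numpy as np

def non_maximum_suppression(magnitude, orientation):
    """Keep a pixel only if it is a local maximum along the gradient direction.

    orientation is arctan2(gy, gx), with axis 0 the row (y) axis, as in the
    gradient sketch above.
    """
    H, W = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = (np.rad2deg(orientation) + 180.0) % 180.0   # fold into [0, 180)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:            # gradient roughly horizontal
                n1, n2 = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                         # roughly 45 degrees
                n1, n2 = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            elif a < 112.5:                        # roughly vertical
                n1, n2 = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                                  # roughly 135 degrees
                n1, n2 = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            if magnitude[i, j] >= n1 and magnitude[i, j] >= n2:
                out[i, j] = magnitude[i, j]
    return out
```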

  22. The Canny edge detector thinning (non-maximum suppression)

  23. Predicting the next edge point Assume the marked point is an edge point. Then we construct the tangent to the edge curve (which is normal to the gradient at that point) and use this to predict the next points (here either r or s). (Forsyth & Ponce)

  24. Hysteresis Thresholding Reduces the probability of false contours and fragmented edges. Given the result of non-maximum suppression: for all edge points that remain, - locate the next unvisited pixel whose gradient magnitude > t_high - starting from that point, follow chains along the edge, adding connected points until the magnitude falls below t_low
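A sketch of this linking step as a breadth-first flood from the strong seeds (the function name, 8-connectivity, and the boolean bookkeeping are my implementation choices):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(magnitude, t_low, t_high):
    """Start chains at pixels above t_high and keep connected pixels
    as long as their magnitude stays above t_low."""
    H, W = magnitude.shape
    strong = magnitude >= t_high
    candidate = magnitude >= t_low
    edges = np.zeros((H, W), dtype=bool)

    queue = deque(zip(*np.nonzero(strong)))
    edges[strong] = True
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    if candidate[ni, nj] and not edges[ni, nj]:
                        edges[ni, nj] = True
                        queue.append((ni, nj))
    return edges
```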

  25. Edge detection by subtraction original

  26. Edge detection by subtraction smoothed (5x5 Gaussian)

  27. Edge detection by subtraction Why does this work? smoothed – original (scaled by 4, offset +128)

  28. Gaussian − delta function ≈ Laplacian of Gaussian: subtracting the original image from its smoothed version is the same as filtering with (Gaussian − delta function), which closely resembles a Laplacian of Gaussian
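A tiny sketch of the subtraction trick; it assumes scipy.ndimage is available for the Gaussian smoothing, and the sigma is an arbitrary choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edges_by_subtraction(image, sigma=1.0):
    """Return smoothed - original, i.e. the image filtered with
    (Gaussian - delta), an approximate Laplacian-of-Gaussian response;
    edges show up where the result changes sign."""
    img = image.astype(float)
    return gaussian_filter(img, sigma) - img
```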

  29. Causes of edges If the goal is image understanding, what do we want from an edge detector? Adapted from C. Rasmussen

  30. Learning good boundaries • Use ground truth (human-labeled) boundaries in natural images to learn good features • Supervised learning to optimize cue integration and filter scales, and to select feature types Work by D. Martin, C. Fowlkes, D. Tal, and J. Malik, Berkeley Segmentation Benchmark, 2001

  31. Human-marked segment boundaries [D. Martin et al. PAMI 2004]

  32. What features are responsible for perceived edges? Feature profiles (oriented energy, brightness, color, and texture gradients) along the patch’s horizontal diameter [D. Martin et al. PAMI 2004]

  33. What features are responsible for perceived edges?

  34. Learning good boundaries [D. Martin et al. PAMI 2004]

  35. Original / human-labeled / boundary-detection results. Berkeley Segmentation Database, D. Martin, C. Fowlkes, D. Tal, and J. Malik

  36. [D. Martin et al. PAMI 2004]

  37. Edge detection and corners • Partial derivative estimates in x and y fail to capture corners. Why do we care about corners?

  38. Case study: panorama stitching [Brown, Szeliski, and Winder, CVPR 2005]

  39. How do we build a panorama? • We need to match (align) images [Slide credit: Darya Frolova and Denis Simakov]

  40. Matching with Features • Detect feature points in both images

  41. Matching with Features • Detect feature points in both images • Find corresponding pairs

  42. Matching with Features • Detect feature points in both images • Find corresponding pairs • Use these pairs to align images

  43. Matching with Features • Problem 1: – Detect the same point independently in both images; if a point is missed in one image, there is no chance to match! We need a repeatable detector

  44. Matching with Features • Problem 2: – For each point, correctly recognize the corresponding one. We need a reliable and distinctive descriptor. More on this aspect later!

  45. Corner detection as an interest operator • We should easily recognize the point by looking through a small window • Shifting a window in any direction should give a large change in intensity

  46. Corner detection as an interest operator “flat” region: no change in all directions; “edge”: no change along the edge direction; “corner”: significant change in all directions. C. Harris, M. Stephens, “A Combined Corner and Edge Detector”, 1988

  47. Corner Detection M is a 2 × 2 matrix computed from image derivatives: M = Σ_{x,y} w(x,y) [ I_x²  I_xI_y ; I_xI_y  I_y² ], where I_x is the gradient with respect to x, I_y is the gradient with respect to y, and the sum runs over the image region (window) we are checking for a corner

  48. Corner Detection Eigenvectors of M encode the edge directions; eigenvalues of M encode the edge strength. λ_1, λ_2 – eigenvalues of M: the eigenvector with λ_max gives the direction of the fastest intensity change, the one with λ_min the direction of the slowest change

  49. Corner Detection Classification of image points using the eigenvalues of M: “Corner”: λ_1 and λ_2 are large, λ_1 ~ λ_2; E increases in all directions. “Edge”: λ_1 >> λ_2 (or λ_2 >> λ_1). “Flat” region: λ_1 and λ_2 are small; E is almost constant in all directions

  50. Harris Corner Detector Measure of corner response: R = det M − k (trace M)², where det M = λ_1 λ_2 and trace M = λ_1 + λ_2. This avoids computing the eigenvalues themselves. (k – empirical constant, k = 0.04–0.06)

  51. Harris Corner Detector • R depends only on the eigenvalues of M • R is large for a corner • R is negative with large magnitude for an edge • |R| is small for a flat region

  52. Harris Corner Detector • The Algorithm: – Find points with large corner response function R ( R > threshold) – Take the points of local maxima of R
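A compact sketch of this procedure in NumPy/SciPy; the Gaussian window sigma, the value of k, the relative threshold, and the 5×5 neighbourhood for local maxima are assumed parameters, not values from the slides:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(image, sigma=1.5, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    Gaussian-windowed second-moment matrix of the image gradients."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    Sxx = gaussian_filter(gx * gx, sigma)   # windowed sums of I_x^2, I_y^2, I_x I_y
    Syy = gaussian_filter(gy * gy, sigma)
    Sxy = gaussian_filter(gx * gy, sigma)
    det_m = Sxx * Syy - Sxy ** 2
    trace_m = Sxx + Syy
    return det_m - k * trace_m ** 2

def harris_corners(image, rel_threshold=0.01, k=0.05):
    """Corners = points where R is above a threshold and a local maximum."""
    R = harris_response(image, k=k)
    is_local_max = (R == maximum_filter(R, size=5))
    return np.argwhere((R > rel_threshold * R.max()) & is_local_max)
```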

  53. Harris Detector: Workflow

  54. Harris Detector: Workflow Compute corner response R

  55. Harris Detector: Workflow Find points with large corner response: R> threshold

  56. Harris Detector: Workflow Take only the points of local maxima of R

  57. Harris Detector: Workflow

  58. Harris Detector: Some Properties • Rotation invariance: the ellipse rotates but its shape (i.e. the eigenvalues) remains the same, so the corner response R is invariant to image rotation

  59. Harris Detector: Some Properties • Not invariant to image scale! At a fine scale, all points along the contour are classified as edges; at a coarser scale, the same structure is detected as a corner. More on interest operators/descriptors with invariance properties later.

  60. This image is too big to fit on the screen. How can we reduce it? How to generate a half-sized version?

  61. Image sub-sampling Throw away every other row and column to create a 1/2-size image (called image sub-sampling); repeating gives the 1/4 and 1/8 sizes. Slide credit: S. Seitz
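In array terms the operation is just striding (a one-line NumPy sketch; the function name is mine):

```python
import numpy as np

def subsample(image):
    """Throw away every other row and column: a half-size image."""
    return image[::2, ::2]

# Repeated sub-sampling gives the 1/4 and 1/8 size versions:
# quarter = subsample(subsample(image))
# eighth = subsample(quarter)
```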

  62. Image sub-sampling: 1/2, 1/4 (shown at 2x zoom), and 1/8 (shown at 4x zoom)

  63. Sampling • Continuous function → discrete set of values

  64. Undersampling • Information lost Figure credit: S. Marschner

  65. Undersampling • Looks just like lower frequency signal!

  66. Undersampling • Looks like higher frequency signal! Aliasing: higher-frequency information can appear as lower-frequency information
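A tiny numerical demonstration of aliasing (the sampling rate and frequencies are arbitrary choices): a 9 Hz sine sampled at 10 samples per second produces exactly the same samples as a 1 Hz sine.

```python
import numpy as np

fs = 10.0                           # sampling rate, samples per second
t = np.arange(0, 2, 1 / fs)         # two seconds of sample times
high = np.sin(2 * np.pi * 9 * t)    # 9 Hz signal, well above the Nyquist limit (5 Hz)
low = -np.sin(2 * np.pi * 1 * t)    # a 1 Hz signal
print(np.allclose(high, low))       # True: the samples are indistinguishable
```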
