SLIDE 1


Comparison of Local Feature Descriptors

Subhransu Maji

Department of EECS, University of California, Berkeley.

December 13, 2006

SLIDE 2

Outline

1. Introduction
   - Local Features
2. Benchmarks
   - Mikolajczyk's Dataset
   - Caltech 101 Dataset
3. Experiments and Results
   - Evaluation of Feature Detectors
   - Evaluation of Feature Descriptors
4. Future Work

SLIDE 3


Applications of Local Features

- Multi-camera scene reconstruction.
- Robust to backgrounds and occlusions.
- Compact representation of objects for matching, recognition and tracking.
- Lots of uses, lots of options. This work tries to address which features are suitable for which task, currently something of a black art!

SLIDE 4


Key properties of a good local feature

- Must be highly distinctive, i.e. have a low probability of a mismatch.
- Should be easy to extract.
- Invariance: a good local feature should be tolerant to
  - image noise,
  - changes in illumination,
  - uniform scaling,
  - rotation,
  - minor changes in viewing direction.

Question: how do we construct the local feature to achieve invariance to the above?

SLIDE 5


Various Feature Detectors

- Harris detector: finds points at a fixed scale.
- Harris-Laplace detector: uses the scale-adapted Harris function to localize points in scale space, then selects the points for which the Laplacian-of-Gaussian attains a maximum over scale.
- Hessian-Laplace detector: localizes points in space at the local maxima of the Hessian determinant and in scale at the local maxima of the Laplacian-of-Gaussian.
- Harris-Affine / Hessian-Affine detector: performs an affine adaptation of the Harris/Hessian-Laplace points using the second moment matrix.
- Maximally Stable Extremal Regions (MSER) detector: finds regions such that all pixels inside the region have either higher (bright extremal regions) or lower (dark extremal regions) intensity than all pixels on the outer boundary.
- Uniform detector (unif): selects 500 points uniformly on the edge maps by rejection sampling.
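As a rough illustration of how two of these detectors can be run in practice, here is a minimal Python sketch using OpenCV's built-in Harris and MSER implementations; the image path and parameter values are placeholders, not the settings used in this work.

```python
import cv2
import numpy as np

# Load a test image as grayscale (placeholder path).
gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# Harris detector: corner response at a fixed scale, thresholded to keep
# only the strongest points.
response = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)
harris_points = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs

# MSER detector: maximally stable extremal regions.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

print(f"{len(harris_points)} Harris points, {len(regions)} MSER regions")
```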

SLIDE 6


Various Feature Descriptors

- Scale Invariant Feature Transform (SIFT): a local image patch is divided into a grid (typically 4x4) and an orientation histogram is computed for each of these cells.
- Shape Contexts (SC): computes a histogram of the distance and orientation of other points relative to the interest point.
- Image Moments (mom): compute the descriptor from various higher-order image moments.
- Jet descriptors: essentially higher-order derivatives of the image at the interest point.
- Gradient Location and Orientation Histogram (GLOH): as the name suggests, constructs the feature from histograms of the location and orientation of points in a window around the interest point.
- Geometric Blur (GB): computes the average of the edge signal response over small transformations. Tunable parameters include the blur gradient (β = 1), the base blur (α = 0.5) and the scale multiplier (s = 9).
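For concreteness, a minimal sketch of extracting one of these descriptors (SIFT) with OpenCV follows; it assumes OpenCV ≥ 4.4, where SIFT ships in the main module, and uses a placeholder image path.

```python
import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT: detect interest points and compute a 128-dimensional descriptor
# (4x4 grid of 8-bin orientation histograms) for each of them.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")
```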

SLIDE 7


Example Detections

SLIDE 8


Evaluation Criteria

We want the features to be repeatable:

    repeatability = #correct matches / #ground-truth matches

Descriptor performance is reported as recall vs. 1-precision graphs:

    recall = #correct matches / #correspondences
    1 - precision = #false matches / (#false matches + #correct matches)

Correct matches are found by nearest-neighbour matching in the feature space; correspondences are obtained from the ground-truth matching.
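The following NumPy sketch shows how these quantities could be computed for one image pair once descriptors and ground-truth correspondences are available; the function name, variable names and the plain nearest-neighbour step are illustrative, not the exact evaluation code used here.

```python
import numpy as np

def descriptor_score(desc_a, desc_b, gt_pairs):
    """Recall and 1-precision for one image pair.
    desc_a, desc_b: (Na, d) and (Nb, d) descriptor arrays for the two images.
    gt_pairs: set of (i, j) index pairs given by the ground-truth correspondences."""
    # Correct matches are found by nearest-neighbour matching in descriptor space.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)

    correct = sum((i, int(j)) in gt_pairs for i, j in enumerate(nearest))
    false = len(nearest) - correct

    recall = correct / len(gt_pairs)                 # #correct / #correspondences
    one_minus_precision = false / (false + correct)  # #false / (#false + #correct)
    return recall, one_minus_precision
```

Sweeping a distance threshold on the accepted matches, rather than taking the single nearest neighbour, yields the full recall vs. 1-precision curve.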

SLIDE 9


Mikolajczyk’s Dataset

8 datasets, 6 images per dataset. Ground-truth homographies between the images are available.
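A minimal sketch of how such a ground-truth homography can be used to establish correspondences between detections in two images; here correspondence is decided by a simple point-distance test with an illustrative threshold, which is a simplification of the full benchmark protocol.

```python
import numpy as np

def gt_correspondences(pts1, pts2, H, max_dist=3.0):
    """pts1, pts2: (N, 2) and (M, 2) arrays of detected (x, y) locations.
    H: 3x3 ground-truth homography mapping image-1 points into image 2."""
    # Project image-1 points into image 2 using homogeneous coordinates.
    ones = np.ones((len(pts1), 1))
    proj = np.hstack([pts1, ones]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]

    # A pair (i, j) is a ground-truth correspondence if the projected point
    # lands within max_dist pixels of a detection in the second image.
    pairs = set()
    for i, p in enumerate(proj):
        d = np.linalg.norm(pts2 - p, axis=1)
        j = d.argmin()
        if d[j] < max_dist:
            pairs.add((i, int(j)))
    return pairs
```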

SLIDE 10


Caltech 101 Dataset

- 101 categories: man-made objects, motifs, animals and plants.
- A foreground mask is available for each image.
- Ground truth is obtained from a rough alignment of the contours: determine the scale and translation that maximize the area overlap of the contours (sketched below).
- Correspondence: features of the two images that lie within a threshold distance (10 pixels) under this transformation.
- Many classification techniques use the structure of the image for computing similarity, e.g. Shape Context based character recognition using TSP. The performance of these algorithms depends on detecting features at the right positions. Ideally we would want the descriptor performance to be better under such a softer notion of matching.
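A rough sketch of the alignment step: search a small grid of scales and translations for the one that maximizes the area overlap of two foreground masks. The grid ranges, the intersection-over-union score and the use of OpenCV resizing are illustrative choices, not the exact procedure used in this work.

```python
import cv2
import numpy as np

def align_masks(mask1, mask2, scales=np.linspace(0.8, 1.2, 9), shifts=range(-20, 21, 5)):
    """mask1, mask2: binary foreground masks (uint8 arrays of the same size).
    Returns the (scale, dx, dy) that maximizes the overlap of mask1 with mask2."""
    best_score, best_params = -1.0, None
    h, w = mask2.shape
    for s in scales:
        scaled = cv2.resize(mask1, None, fx=s, fy=s, interpolation=cv2.INTER_NEAREST)
        scaled = scaled[:h, :w]                      # crop to a common size (simple sketch)
        padded = np.zeros_like(mask2)
        padded[:scaled.shape[0], :scaled.shape[1]] = scaled
        for dx in shifts:
            for dy in shifts:
                shifted = np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
                inter = np.logical_and(shifted > 0, mask2 > 0).sum()
                union = np.logical_or(shifted > 0, mask2 > 0).sum()
                score = inter / union if union else 0.0
                if score > best_score:
                    best_score, best_params = score, (s, dx, dy)
    return best_params, best_score
```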

SLIDE 11


Best 8 and Worst 8

[Figure: example images from the 8 best categories (scores 0.9781, 0.9717, 0.9642, 0.9490, 0.9486, 0.9483, 0.9405, 0.9223) and the 8 worst categories (scores 0.7097, 0.6934, 0.6919, 0.6658, 0.6614, 0.6444, 0.6426, 0.3318).]

SLIDE 12


Example Ground Truth Matches

[Figure panels: Faces, car side, stop sign, Motorbikes.]

Figure: ground-truth matches, using the Harris-Affine detector with a distance threshold of 5 pixels.

SLIDE 13


Repeatability Results on Benchmarks

Mikolajczyk Dataset: MSER was generally the best, followed by Hessian-Affine. Hessian-Affine and Harris-Affine provide more regions than the other detectors, which is useful when matching scenes with occlusion and clutter.

Caltech 101 Dataset: Hessian-Affine, Hessian-Laplace, MSER and UNIF all perform equally well. Hessian-Affine is slightly better than the others in most cases. Almost any detector is equally good since the matching is softer.

SLIDE 14


Descriptor Performance on Mikolajczyk's Dataset

[Figure: recall vs. 1-precision curves for gb, sift, sc, spin, mom and jla on (1) bikes, (2) trees, (3) graffiti and (4) wall.]

SLIDE 15


Descriptor Performance on Mikolajczyk's Dataset

[Figure: recall vs. 1-precision curves for gb, sift, sc, spin, mom and jla on (5) bark, (6) boat, (7) leuven and (8) ubc.]

SLIDE 16


Descriptor Performance on Caltech 101

[Figure: recall vs. 1-precision curves for sift, sc, mom, gloh, gb and jet on the yin yang, Faces, Faces Easy and pizza categories.]

SLIDE 17


Descriptor Performance on Caltech 101

[Figure: recall vs. 1-precision curves for sift, sc, mom, gloh, gb and jet on the barrel, car side, stop sign and Motorbikes categories.]

SLIDE 18


Results on Benchmarks

Mikolajczyk Dataset:

1. SIFT and Shape Context do better on the wall and bark datasets.
2. Geometric Blur (GB) does better on the bikes and graf datasets.
3. The two are comparable on the ubc, leuven, boat and trees datasets.

Caltech 101 Dataset: GB, Shape Context and SIFT do the best in all cases. GLOH, which did the best on Mikolajczyk's dataset, performs poorly. In general, performance on Caltech 101 is much worse than on Mikolajczyk's dataset.

SLIDE 19


Some Observations

- The performance difference between SIFT and GB is significant in both cases 1 and 2 above.
- The performance of SIFT and SC is highly correlated.
- The performance of SIFT and GB is highly negatively correlated.
- Question: do SIFT and GB carry complementary information? When is one more useful than the other?
- SIFT does better when there is high texture. Does it incorporate high-frequency information better? More experiments required... (a quick way to check such correlations is sketched below).
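The correlation claims can be checked with a one-liner once per-category performance numbers are collected; a minimal sketch with hypothetical arrays (the values below are placeholders, not the measured results):

```python
import numpy as np

# Hypothetical per-category recall values for two descriptors (placeholders).
sift_recall = np.array([0.31, 0.12, 0.25, 0.08, 0.19])
gb_recall   = np.array([0.10, 0.27, 0.14, 0.30, 0.12])

# Pearson correlation of the two performance profiles across categories.
r = np.corrcoef(sift_recall, gb_recall)[0, 1]
print(f"correlation(SIFT, GB) = {r:.2f}")
```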

SLIDE 20


Future Work

- A more flexible notion of matching (rotations, non-rigid transformations, etc.) to incorporate more classes.
- Extend the analysis to different datasets such as PASCAL.
- A systematic study of the black art!

SLIDE 21


THANK YOU¹

¹ beamer rocks!!