Model Fitting

SLIDE 1

Model Fitting

Based on a lecture prepared by Tal Hassner

SLIDE 2

Sources

  • Scattered throughout the textbook…
SLIDE 3

Fitting: Motivation

  • We’ve learned how to detect edges, corners, blobs. Now what?
  • We would like to form a higher-level, more compact representation of the features in the image by grouping multiple features according to a simple model

(image: road sign at 9300 Harris Corners Pkwy, Charlotte, NC)

SLIDE 4

Source: K. Grauman

Fitting

  • Choose a parametric model to represent a set of features

(images: simple model: lines; simple model: circles; complicated model: car)

SLIDE 5

Fitting

  • Choose a parametric model to represent a set of features
  • Line, ellipse, spline, etc.
  • Three main questions:
  • What model represents this set of features best?
  • Which of several model instances gets which feature?
  • How many model instances are there?
  • Computational complexity is important
  • It is infeasible to examine every possible set of parameters and every possible combination of features

SLIDE 6

Fitting: Issues

  • Noise in the measured feature locations
  • Extraneous data: clutter (outliers), multiple lines
  • Missing data: occlusions

Case study: Line detection

SLIDE 7

Fitting: Issues

  • If we know which points belong to the line, how do we find the “optimal” line parameters?

  • Least squares
  • What if there are outliers?
  • RANSAC
  • What if there are many lines?
  • Voting methods: Hough transform
  • What if we’re not even sure it’s a line?
  • Model selection
SLIDE 8

Fitting a line to points “under noise”

SLIDE 9

Least squares line fitting

Data: (x1, y1), …, (xn, yn)
Line equation: yi = m xi + b
Find (m, b) to minimize:

E = Σ_{i=1..n} (yi − m xi − b)²

(figure: points (xi, yi) and line y = mx + b)

SLIDE 10

Least squares line fitting

Data: (x1, y1), …, (xn, yn)
Line equation: yi = m xi + b
Find (m, b) to minimize:

E = Σ_{i=1..n} (yi − m xi − b)²

In matrix form, with Y = [y1 … yn]ᵀ, X the n×2 matrix with rows (xi, 1), and B = (m, b)ᵀ:

E = ‖Y − XB‖² = (Y − XB)ᵀ(Y − XB) = YᵀY − 2(XB)ᵀY + (XB)ᵀ(XB)

dE/dB = 2XᵀXB − 2XᵀY = 0

Normal equations: XᵀXB = XᵀY (least squares solution to XB = Y)

(figure: points (xi, yi) and line y = mx + b)

SLIDE 11

In MATLAB

  • The solution to the system of equations XB = Y:

B = X\Y;
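The normal-equation solution above can also be sketched in pure Python. This is a minimal illustration, not the course code (the slides use MATLAB's `B = X\Y`); `fit_line` is a hypothetical helper name, and the closed-form expressions below are just the 2×2 normal equations solved by hand.

```python
# Minimal sketch (assumption: pure Python, no NumPy): closed-form least
# squares fit of y = m*x + b, i.e. the solution of the normal equations
# (X^T X) B = X^T Y written out for the 2x2 case.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1
m, b = fit_line(xs, ys)
print(m, b)  # → 2.0 1.0
```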

SLIDE 12

Problem with “vertical” least squares

  • Not rotation-invariant
  • Fails completely for vertical lines
SLIDE 13

Total least squares

Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi − d|

E = Σ_{i=1..n} (a xi + b yi − d)²

(figure: point (xi, yi), line ax + by = d, unit normal N = (a, b))

SLIDE 14

Total least squares

Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi − d|

Proof: (from Wikipedia)

E = Σ_{i=1..n} (a xi + b yi − d)²

(figure: point (xi, yi), line ax + by = d, unit normal N = (a, b))

SLIDE 15

Total least squares

Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi − d|
Find (a, b, d) to minimize the sum of squared perpendicular distances:

E = Σ_{i=1..n} (a xi + b yi − d)²

(figure: point (xi, yi), line ax + by = d, unit normal N = (a, b))

SLIDE 16

Total least squares

Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi − d|
Find (a, b, d) to minimize the sum of squared perpendicular distances:

E = Σ_{i=1..n} (a xi + b yi − d)²

∂E/∂d = −2 Σ_{i=1..n} (a xi + b yi − d) = 0  →  d = (a/n) Σ xi + (b/n) Σ yi = a x̄ + b ȳ

Substituting back:

E = Σ_{i=1..n} (a(xi − x̄) + b(yi − ȳ))² = ‖UN‖² = (UN)ᵀ(UN)

where N = (a, b)ᵀ and U is the n×2 matrix with rows (xi − x̄, yi − ȳ)

dE/dN = 2(UᵀU)N = 0

Solution to (UᵀU)N = 0, subject to ‖N‖² = 1: eigenvector of UᵀU associated with the smallest eigenvalue (least squares solution to homogeneous linear system UN = 0)

SLIDE 17

Total least squares

U = [ x1 − x̄   y1 − ȳ
       …         …
      xn − x̄   yn − ȳ ]

UᵀU = [ Σ(xi − x̄)²           Σ(xi − x̄)(yi − ȳ)
        Σ(xi − x̄)(yi − ȳ)    Σ(yi − ȳ)²        ]

second moment matrix

SLIDE 18

Total least squares

U = [ x1 − x̄   y1 − ȳ
       …         …
      xn − x̄   yn − ȳ ]

UᵀU = [ Σ(xi − x̄)²           Σ(xi − x̄)(yi − ȳ)
        Σ(xi − x̄)(yi − ȳ)    Σ(yi − ȳ)²        ]

second moment matrix

(figure: line through the centroid (x̄, ȳ) with unit normal N = (a, b); centered points (xi − x̄, yi − ȳ))
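The total least squares recipe above can be sketched in pure Python. This is a minimal illustration under stated assumptions: `tls_line` is a hypothetical helper name, and the smallest-eigenvalue eigenvector of the 2×2 second moment matrix is computed in closed form rather than with a linear algebra library.

```python
import math

# Minimal sketch (pure Python): total least squares line fit.
# Builds the 2x2 second moment matrix U^T U of the centered points and
# takes the eigenvector of its smallest eigenvalue as the unit normal
# N = (a, b); then d = a*mean_x + b*mean_y gives the line a*x + b*y = d.
def tls_line(pts):
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((x - mx) ** 2 for x, y in pts)
    syy = sum((y - my) ** 2 for x, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    # Smallest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 - math.sqrt(tr * tr / 4 - det)
    # Eigenvector for lam satisfies (sxx - lam)*a + sxy*b = 0
    a, b = sxy, lam - sxx
    if a == 0 and b == 0:          # axis-aligned data: sxy == 0
        a, b = (1.0, 0.0) if sxx < syy else (0.0, 1.0)
    norm = math.hypot(a, b)
    a, b = a / norm, b / norm
    return a, b, a * mx + b * my

# A perfectly vertical line, where "vertical" least squares fails:
a, b, d = tls_line([(2.0, 0.0), (2.0, 1.0), (2.0, 2.0)])
print(a, b, d)  # normal (±1, 0), d = ±2, i.e. the line x = 2
```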

SLIDE 19

Least squares: Robustness to noise

Least squares fit to the red points:

SLIDE 20

Least squares: Robustness to noise

Least squares fit with an outlier:

Problem: squared error heavily penalizes outliers

SLIDE 21

What happens when there are outliers?

SLIDE 22

RANSAC

  • Random sample consensus (RANSAC): very general framework for model fitting in the presence of outliers
  • M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol. 24, pp. 381-395, 1981.

SLIDE 23

Fitting a Line

Least squares fit

SLIDE 24

RANSAC

  • Select sample of m points at random

SLIDE 25

RANSAC

  • Select sample of m points at random
  • Calculate model parameters that fit the data in the sample

SLIDE 26

RANSAC

  • Select sample of m points at random
  • Calculate model parameters that fit the data in the sample
  • Calculate error function for each data point

SLIDE 27

RANSAC

  • Select sample of m points at random
  • Calculate model parameters that fit the data in the sample
  • Calculate error function for each data point
  • Select data that support current hypothesis

SLIDE 28

RANSAC

  • Select sample of m points at random
  • Calculate model parameters that fit the data in the sample
  • Calculate error function for each data point
  • Select data that support current hypothesis

SLIDE 29

RANSAC for line fitting

Repeat N times:

  • Draw s points uniformly at random
  • Fit line to these s points
  • Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
  • If there are d or more inliers, accept the line and refit using all inliers
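The loop above can be sketched in pure Python. This is a hedged illustration, not the reference algorithm: `ransac_line` is a hypothetical helper, the sample size is s = 2 (the minimum for a line), and the final refit over all inliers is omitted for brevity.

```python
import random

# Minimal sketch (pure Python) of the RANSAC loop for line fitting:
# s = 2 sampled points define a candidate line a*x + b*y = d (unit
# normal), inliers lie within distance t, and the largest consensus
# set of size >= d wins.
def ransac_line(points, n_iters=100, t=0.1, d=5, seed=0):
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(n_iters):
        (x0, y0), (x1, y1) = rng.sample(points, 2)
        a, b = y1 - y0, x0 - x1                 # normal to the segment
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        a, b = a / norm, b / norm
        dist0 = a * x0 + b * y0
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] - dist0) < t]
        if len(inliers) >= d and len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, dist0), inliers
    return best_line, best_inliers

# 20 points on y = x plus two gross outliers:
pts = [(i * 0.1, i * 0.1) for i in range(20)] + [(0.0, 5.0), (2.0, -3.0)]
line, inliers = ransac_line(pts)
print(len(inliers))  # → 20 (the two outliers are rejected)
```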

SLIDE 30

Choosing the parameters

  • Initial number of points s
  • Typically minimum number needed to fit the model
  • Distance threshold t
  • Choose t so probability for inlier is p (e.g. 0.95)
  • Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ²
  • Number of iterations N
  • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e)

proportion of outliers e

s    5%  10%  20%  25%  30%  40%   50%
2     2    3    5    6    7   11    17
3     3    4    7    9   11   19    35
4     3    5    9   13   17   34    72
5     4    6   12   17   26   57   146
6     4    7   16   24   37   97   293
7     4    8   20   33   54  163   588
8     5    9   26   44   78  272  1177

Source: M. Pollefeys
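The table above follows from requiring that at least one of the N samples of s points is outlier-free with probability p, which gives N = log(1 − p) / log(1 − (1 − e)^s). A minimal sketch (`ransac_iters` is a hypothetical helper name):

```python
import math

# Number of RANSAC iterations N so that, with probability p, at least
# one sample of s points is outlier-free, given outlier ratio e.
# Reproduces the table above (which uses p = 0.99).
def ransac_iters(s, e, p=0.99):
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

print(ransac_iters(2, 0.50))  # → 17
print(ransac_iters(4, 0.30))  # → 17
print(ransac_iters(8, 0.50))  # → 1177
```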

SLIDE 31

Choosing the parameters

  • Initial number of points s
  • Typically minimum number needed to fit the model
  • Distance threshold t
  • Choose t so probability for inlier is p (e.g. 0.95)
  • Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ²
  • Number of iterations N
  • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e)
  • Consensus set size d
  • Should match expected inlier ratio

Source: M. Pollefeys

SLIDE 32

RANSAC pros and cons

  • Pros
  • Simple and general
  • Applicable to many different problems
  • Often works well in practice
  • Cons
  • Lots of parameters to tune
  • Can’t always get a good initialization of the model based on the minimum number of samples

  • Sometimes too many iterations are required
  • Can fail for extremely low inlier ratios
  • We can often do better than brute-force sampling
SLIDE 33

What happens when there is more than one line?

SLIDE 34

Voting schemes

  • Let each feature vote for all the models that are compatible with it
  • Hopefully the noise features will not vote consistently for any single model
  • Missing data doesn’t matter as long as there are enough features remaining to agree on a good model

SLIDE 35

Hough transform

  • An early type of voting scheme
  • General outline:
  • Discretize parameter space into bins
  • For each feature point in the image, put a vote in every bin in the parameter space that could have generated this point
  • Find bins that have the most votes

P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959

Image space Hough parameter space

SLIDE 36

Parameter space representation

  • A line in the image corresponds to a point in Hough space

Image space Hough parameter space

Source: S. Seitz

SLIDE 37

Parameter space representation

  • What does a point (x0, y0) in the image space map to in the Hough space?

Image space Hough parameter space

SLIDE 38

Parameter space representation

  • What does a point (x0, y0) in the image space map to in the Hough space?

Image space Hough parameter space

SLIDE 39

Parameter space representation

  • What does a point (x0, y0) in the image space map to in the Hough space?
  • Answer: the solutions of b = –x0m + y0
  • This is a line in Hough space

Image space Hough parameter space

SLIDE 40

Parameter space representation

  • Where is the line that contains both (x0, y0) and (x1, y1)?

Image space Hough parameter space

(x0, y0) (x1, y1) b = –x1m + y1

SLIDE 41

Parameter space representation

  • Where is the line that contains both (x0, y0) and (x1, y1)?
  • It is the intersection of the lines b = –x0m + y0 and b = –x1m + y1

Image space Hough parameter space

(x0, y0) (x1, y1) b = –x1m + y1

SLIDE 42

Line Detection by Hough Transform

(figure: image space (x, y) and parameter space (m, c); accumulator array A(m, c) with votes)

Algorithm:

  • Quantize parameter space (m, c)
  • Create accumulator array A(m, c)
  • Set A(m, c) = 0 for all (m, c)
  • For each image edge (xi, yi): increment A(m, c) = A(m, c) + 1 for every (m, c) that lies on the line c = yi – m xi
  • Find local maxima in A(m, c)

SLIDE 43
Parameter space representation

  • Problems with the (m,b) space:
  • Unbounded parameter domain
  • Vertical lines require infinite m
  • Quick solution?

SLIDE 44
Parameter space representation

  • Problems with the (m,b) space:
  • Unbounded parameter domain
  • Vertical lines require infinite m
  • Alternative: polar representation

x cos θ + y sin θ = ρ

Each point will add a sinusoid in the (θ, ρ) parameter space

SLIDE 45

Algorithm outline

  • Initialize accumulator H to all zeros
  • For each edge point (x, y) in the image:

For θ = 0 to 180
    ρ = x cos θ + y sin θ
    H(θ, ρ) = H(θ, ρ) + 1
end

  • Find the value(s) of (θ, ρ) where H(θ, ρ) is a local maximum
  • The detected line in the image is given by ρ = x cos θ + y sin θ

(figure: accumulator with axes θ, ρ)
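The accumulator loop above can be sketched in pure Python. This is a minimal illustration, not production code: `hough_lines` is a hypothetical helper, θ is quantized to whole degrees, ρ to integer bins, and a dictionary stands in for the 2-D accumulator array.

```python
import math

# Minimal sketch of the Hough accumulator loop, using the polar line
# parameterization rho = x*cos(theta) + y*sin(theta).
def hough_lines(edge_points, rho_max):
    H = {}                                   # (theta_deg, rho_bin) -> votes
    for x, y in edge_points:
        for theta in range(180):
            t = math.radians(theta)
            rho = round(x * math.cos(t) + y * math.sin(t))
            if -rho_max <= rho <= rho_max:
                H[(theta, rho)] = H.get((theta, rho), 0) + 1
    return H

# Ten points on the vertical line x = 3 (theta = 0, rho = 3); a vertical
# line is no problem in the polar parameterization.
pts = [(3, y) for y in range(10)]
H = hough_lines(pts, rho_max=20)
print(H[(0, 3)], max(H.values()))  # → 10 10: the true bin gets every vote
```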

SLIDE 46

Basic illustration

(figure: features | votes)

SLIDE 47

Other shapes

SLIDE 48

Other shapes

(figure: square | circle)

SLIDE 49

Several lines

SLIDE 50

A more complicated image

http://ostatic.com/files/images/ss_hough.jpg

SLIDE 51

In MATLAB

[H,T,R] = hough(BW);
% The following function finds no more than 5 peaks in the Hough matrix H
P = houghpeaks(H,5);
% Extract the line segments (not part of the algorithm described, but useful)
lines = houghlines(BW,T,R,P);
% Each line i can then be plotted by:
for i = 1:numel(lines)
    xy = [lines(i).point1; lines(i).point2];
    plot(xy(:,1), xy(:,2));
end

SLIDE 52

Effect of noise

(figure: features | votes)

SLIDE 53

Effect of noise

(figure: features | votes)

Peak gets fuzzy and hard to locate

SLIDE 54

Effect of noise

  • Number of votes for a line of 20 points with increasing noise:

SLIDE 55

Random points

Uniform noise can lead to spurious peaks in the array

(figure: features | votes)

SLIDE 56

Random points

  • As the level of uniform noise increases, the maximum number of votes increases too:

SLIDE 57

Dealing with noise

  • Choose a good grid / discretization
  • Too coarse: large votes obtained when too many different lines correspond to a single bucket
  • Too fine: miss lines because some points that are not exactly collinear cast votes for different buckets
  • Increment neighboring bins (smoothing in accumulator array)
  • Try to get rid of irrelevant features
  • Take only edge points with significant gradient magnitude
SLIDE 58

Incorporating image gradients

  • Recall: when we detect an edge point, we also know its gradient direction
  • But this means that the line is uniquely determined!
  • Modified Hough transform:

For each edge point (x, y)
    θ = gradient orientation at (x, y)
    ρ = x cos θ + y sin θ
    H(θ, ρ) = H(θ, ρ) + 1
end

SLIDE 59

Finding Circles by Hough Transform

Equation of circle: (xi − a)² + (yi − b)² = r²

If radius is known: use accumulator array A(a, b) (2D Hough space)

SLIDE 60

If the radius R is known…

SLIDE 61

Finding Circles by Hough Transform

Equation of circle: (xi − a)² + (yi − b)² = r²

If radius is not known: 3D Hough space! Use accumulator array A(a, b, r)

What is the surface in the Hough space?

SLIDE 62

If the radius is not known…

  • The parameter space grows: a, b, R

(figure: 3D parameter space with axes a, b, r)

SLIDE 63

Using Gradient Information

  • Gradient information can save a lot of computation:

Edge location (xi, yi) and edge direction φi → need to increment only one point in the accumulator!

Assume radius is known:

a = x − r cos φ
b = y − r sin φ
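The single-vote update can be illustrated with a tiny sketch (assumption: the gradient direction φ points from the center toward the edge point; if it points the other way the signs flip, and in practice both candidates can be voted for). `center_vote` is a hypothetical helper name.

```python
import math

# With known radius r and gradient direction phi at an edge point (x, y),
# only one accumulator cell is voted for: the candidate circle center
# a = x - r*cos(phi), b = y - r*sin(phi).
def center_vote(x, y, phi, r):
    return x - r * math.cos(phi), y - r * math.sin(phi)

# Point on a circle of radius 2 centered at (5, 5): at angle 0 the edge
# point is (7, 5) and the outward gradient direction is phi = 0.
a, b = center_vote(7.0, 5.0, 0.0, 2.0)
print(a, b)  # → 5.0 5.0
```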

SLIDE 64

Beyond “simple” shapes

SLIDE 65

Generalized Hough transform

  • We want to find a shape defined by its boundary points and a reference point a
  • D. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, Pattern Recognition 13(2), 1981, pp. 111-122.

SLIDE 66

Generalized Hough Transform

  • Model shape NOT described by an equation
SLIDE 67

Generalized Hough Transform

  • Model shape NOT described by an equation
SLIDE 68

Generalized Hough Transform

Find object center (xc, yc) given edges (xi, yi, φi):

  • Create accumulator array A(xc, yc)
  • Initialize: A(xc, yc) = 0 for all (xc, yc)
  • For each edge point (xi, yi, φi), for each table entry r_k^i, compute:
      xc = xi + r_k^i cos(α_k^i)
      yc = yi + r_k^i sin(α_k^i)
  • Increment accumulator: A(xc, yc) = A(xc, yc) + 1
  • Find local maxima in A(xc, yc)

SLIDE 69

Generalized Hough transform

  • Assumption: translation is the only transformation here, i.e., orientation and scale are fixed
  • How can we generalize the idea to a model that can be rotated or scaled?

Source: K. Grauman

SLIDE 70

SLIDE 71

Hough transform: Discussion

  • Pros
  • Can deal with non-locality and occlusion
  • Can detect multiple instances of a model
  • Some robustness to noise: noise points unlikely to contribute consistently to any single bin
  • Cons
  • Complexity of search time increases exponentially with the number of model parameters
  • Non-target shapes can produce spurious peaks in parameter space
  • It’s hard to pick a good grid size
  • Hough transform vs. RANSAC
SLIDE 72

Fitting points to clusters

SLIDE 73

Fitting points to clusters

  • Clustering
  • Segmentation
  • K-means
SLIDE 74

Motivation

We want to partition the image into regions by color

SLIDES 75-82 (images only)
SLIDE 83

Results

(figure: original | K=5 | K=10)

SLIDE 84

K-means clustering

  • Want to minimize the sum of squared Euclidean distances between points xi and their nearest cluster centers mk:

D(X, M) = Σ_{cluster k} Σ_{point i in cluster k} (xi − mk)²

Algorithm:

  • Randomly initialize K cluster centers
  • Iterate until convergence:
  • Assign each data point to the nearest center
  • Recompute each cluster center as the mean of all points assigned to it
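The algorithm above can be sketched in pure Python for 1-D data (e.g. gray-level intensities). This is a minimal illustration under stated assumptions: `kmeans_1d` is a hypothetical helper name, and convergence is declared when the assignment stops changing.

```python
import random

# Minimal sketch of the k-means loop: assign each point to its nearest
# center, recompute each center as the mean of its assigned points,
# repeat until the assignment stops changing.
def kmeans_1d(points, k, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assignment = None
    while True:
        new_assignment = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                          for p in points]
        if new_assignment == assignment:
            return centers, assignment
        assignment = new_assignment
        for j in range(k):
            members = [p for p, a in zip(points, assignment) if a == j]
            if members:               # keep the old center if a cluster empties
                centers[j] = sum(members) / len(members)

# Two well-separated groups of intensities:
pts = [0.0, 0.1, 0.2, 9.0, 9.1, 9.2]
centers, assignment = kmeans_1d(pts, k=2)
print(sorted(round(c, 2) for c in centers))  # → [0.1, 9.1]
```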

SLIDE 85

PUTTING IT ALL TOGETHER
Example: a car detection system

SLIDE 86

Implicit shape models: Training

  • 1. Build codebook of patches around extracted interest points using k-means clustering
  • B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004

SLIDE 87

Implicit shape models: Training

  • 1. Build codebook of patches around extracted interest points using k-means clustering
  • 2. Map the patch around each interest point to the closest codebook entry

SLIDE 88

Implicit shape models: Training

  • 1. Build codebook of patches around extracted interest points using k-means clustering
  • 2. Map the patch around each interest point to the closest codebook entry
  • 3. For each codebook entry, store all positions it was found, relative to object center

SLIDE 89

Implicit shape models: Testing

1. Given test image, extract patches, match to codebook entry
2. Cast votes for possible positions of object center
3. Search for maxima in voting space
4. Extract weighted segmentation mask based on stored masks for the codebook occurrences

SLIDE 90

So what did we see today?

SLIDE 91

Summary

  • Fitting points to a line
  - Least squares (two versions)
  - RANSAC
  - Hough transform
  • Hough transform for circles and general shapes
  • Clustering points into groups
  - k-means clustering
  • An example of a car detection algorithm
SLIDE 92

Sources for the slides

  • Apart from those credited explicitly, the slides are based on those of:

  • Svetlana Lazebnik
  • Ondřej Chum
  • Jason Lawrence
  • Szymon Rusinkiewicz
  • Harvey Rhody
  • Utkarsh Sinha