

SLIDE 1

Global Shape Matching

Statistical Geometry Processing

Winter Semester 2011/2012

SLIDE 2

Rigid Global Matching

SLIDE 3

Iterated Closest Points (ICP)

Problems

  • Need good initialization
  • Non-convex problem
  • Runs into local minima

Deformable shape matching

  • Even worse: bad initialization even more problematic
  • Reason: more degrees of freedom

Part A

(stays fixed)

Part B

(moves, rotation & translation)

SLIDE 4

Global Matching

How to assemble the bunny (globally)? Pipeline (rough sketch):

  • Feature detection
  • Feature descriptors
  • Spectral validation

SLIDE 5

Feature Detection

Feature points (keypoints)

  • Regions that can be identified locally
  • “Bumps”, i.e. points with maximum curvature
  • “curvature” ∈ {𝜆1, 𝜆2, ½(𝜆1 + 𝜆2), 𝜆1 ⋅ 𝜆2}

  • Mean/principal curvature most stable

(𝜆2 often inaccurate when computed by least-squares fitting)

  • “SIFT” features – compute bumps at multiple scales:

– With different radii
– Search for maxima in 3D surface-scale space

  • Output: list of keypoints
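The detection step above can be sketched as a local-maximum filter over per-point curvature (a minimal illustration; `detect_keypoints`, the brute-force neighborhood search, and the input format are assumptions, not the lecture's implementation):

```python
import numpy as np

def detect_keypoints(points, curvature, radius):
    """Keep indices of points whose curvature is a strict local maximum
    within a spherical neighborhood (brute-force distance test)."""
    points = np.asarray(points, dtype=float)
    curvature = np.asarray(curvature, dtype=float)
    keypoints = []
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        neighbors = (d > 0) & (d < radius)
        # isolated points count as maxima of their (empty) neighborhood
        if not neighbors.any() or curvature[i] > curvature[neighbors].max():
            keypoints.append(i)
    return keypoints
```

A k-d tree would replace the brute-force search for real point clouds; "SIFT"-like multi-scale detection repeats this with several radii.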

SLIDE 6

Bunny Curvature

Stanford Bunny

[Figure: dense point cloud colored by principal curvature 1, principal curvature 2, mean curvature, Gaussian curvature; courtesy of Martin Bokeloh]

SLIDE 7

Descriptors

Feature descriptors:

  • Rotation invariant description of local neighborhood

(within scale of the feature point)

  • Translation already fixed by feature point
  • Used to find match candidates
  • Not 100% reliable (typically 3x – 5x outlier ratio)

SLIDE 8

Descriptors

Rotation invariant descriptors:

  • Curvatures 𝜆1, 𝜆2 , derived properties
  • Curvature histograms in spherical neighborhood
  • Pairwise distances
  • “d2-Histograms”: Histogram of pairwise distance within sphere
  • Histogram of distances to medial axis
  • Spin images
  • Use surface normal
  • Cut-out sphere
  • Rotate geometry around sphere and splat into “spin-image”
  • Spherical harmonics power spectrum, Zernike descriptors
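As one concrete example from the list, a d2-style histogram can be sketched as follows (function name, bin count, and normalization are illustrative assumptions). It is rotation invariant because it only uses pairwise distances:

```python
import numpy as np

def d2_histogram(points, center, radius, bins=8):
    """Histogram of pairwise distances between surface samples inside
    a sphere around a feature point."""
    points = np.asarray(points, dtype=float)
    inside = points[np.linalg.norm(points - center, axis=1) <= radius]
    # all pairwise distances among the cut-out samples
    d = np.linalg.norm(inside[:, None, :] - inside[None, :, :], axis=2)
    d = d[np.triu_indices(len(inside), k=1)]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 2.0 * radius))
    return hist / max(hist.sum(), 1)  # normalize against sampling density
```

Rotating the geometry leaves the descriptor unchanged, which is exactly the invariance the slide asks for.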

SLIDE 9

Correspondence Validation

We have:

  • Candidate matches
  • But every keypoint matches about 5 others on average
  • At most one of these is correct

Validation Criterion:

  • Euclidean distance should be preserved
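The criterion amounts to a simple pairwise test (a sketch; argument names and tolerance are illustrative):

```python
import numpy as np

def pair_consistent(p_src, p_dst, q_src, q_dst, tol=1e-2):
    """Two candidate correspondences (p_src -> p_dst) and (q_src -> q_dst)
    can coexist under a rigid motion only if they preserve the Euclidean
    distance between the two source points."""
    d_source = np.linalg.norm(np.subtract(p_src, q_src))
    d_target = np.linalg.norm(np.subtract(p_dst, q_dst))
    return abs(d_source - d_target) <= tol
```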

SLIDE 10

Invariants

Rigid Matching

  • Invariant: Euclidean distances are preserved

SLIDE 11

Branch and Bound

Simple Algorithm:

  • Branch-and-bound [Gelfand et al. 2005]
  • Fix correspondences, prune all incompatible ones

(i.e., violation of Euclidean distance)

  • Try all possibilities

Efficiency:

  • Efficient for sparse (widely spaced) features
  • Only few combinations work
  • Possibly exponential for dense features

(try many equivalent solutions)
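A minimal backtracking version of this idea (a sketch in the spirit of the slide, not the [Gelfand et al. 2005] implementation; it returns the largest distance-consistent assignment):

```python
import numpy as np

def match_bnb(src, dst, tol=1e-6):
    """Search over correspondence assignments, pruning any branch that
    violates Euclidean distance preservation."""
    src = [np.asarray(p, dtype=float) for p in src]
    dst = [np.asarray(p, dtype=float) for p in dst]
    best = []

    def consistent(assign, i, j):
        return all(abs(np.linalg.norm(src[i] - src[k]) -
                       np.linalg.norm(dst[j] - dst[l])) <= tol
                   for k, l in assign)

    def recurse(i, assign, used):
        nonlocal best
        if i == len(src):
            if len(assign) > len(best):
                best = list(assign)
            return
        recurse(i + 1, assign, used)            # branch: leave i unmatched
        for j in range(len(dst)):
            if j not in used and consistent(assign, i, j):
                recurse(i + 1, assign + [(i, j)], used | {j})

    recurse(0, [], set())
    return best
```

With sparse features few branches survive the pruning; with dense features the search can degenerate to exponential time, as the slide warns.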

SLIDE 12

Alternatives

Alternatives: We will look at

  • Spectral matching
  • Randomized search

Further alternatives:

  • Loopy belief propagation

(“Correlated Correspondences”, Anguelov 2005).

  • Quadratic assignment heuristics

Important:

  • Structure: Pairwise optimization problem

SLIDE 13

Isometric Matching

SLIDE 14

Invariants

Intrinsic Matching

  • Invariants: All geodesic distances are preserved

SLIDE 15

Invariants

Intrinsic Matching

  • Preservation of geodesic distances

(“intrinsic distances”)

  • Approximation
  • Cloth is almost unstretchable
  • Skin does not stretch a lot
  • Most living objects show approximately isometric surfaces
  • Accepted model for deformable shape matching
  • In cases where one subject is presented in different poses
  • Across different subjects: other assumptions necessary
  • Then: global matching is an open problem
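Under this model, validating a set of correspondences only needs precomputed geodesic distance matrices on both shapes (a sketch; the matrix inputs and tolerance are assumptions):

```python
def isometric_consistent(corrs, geo_src, geo_dst, tol=0.05):
    """corrs is a list of (i, j) index pairs; geo_src/geo_dst are
    geodesic distance matrices (e.g. from Dijkstra on the mesh graph).
    Every pair of correspondences must preserve geodesic distance."""
    return all(abs(geo_src[i][k] - geo_dst[j][l]) <= tol
               for a, (i, j) in enumerate(corrs)
               for (k, l) in corrs[a + 1:])
```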

SLIDE 16

Feature Based Matching

Quadratic Assignment Model

SLIDE 17

Problem Statement

Deformable Matching

  • Two shapes: original, deformed
  • How to establish correspondences?
  • Looking for global optimum
  • Arbitrary pose

Assumption

  • Approximately isometric

deformation

[data set: S. König, TU Dresden]

SLIDE 18

Algorithm

Feature-Matching

  • Detect feature points
  • Local matching: potential correspondences
  • Global filtering: correct subset

SLIDE 19

Algorithm

Feature-Matching

  • Detect feature points
  • Local matching: potential correspondences
  • Global filtering: correct subset
  • Maxima of Gaussian curvature
  • Locally unique descriptors

SLIDE 20

Algorithm

Feature-Matching

  • Detect feature points
  • Local matching: potential correspondences
  • Global filtering: correct subset
  • Maxima of Gaussian curvature
  • Locally unique descriptors
  • Curvature histograms
  • Heat-kernels, geodesic waves

SLIDE 21

Algorithm

Feature-Matching

  • Detect feature points
  • Local matching: potential correspondences
  • Global filtering: correct subset
  • Curvature histograms
  • Heat-kernels, geodesic waves
  • Quadratic assignment
  • Spectral relaxation [Leordeanu et al. 05]
  • RANSAC
  • Maxima of Gaussian curvature
  • Locally unique descriptors

SLIDE 22

Quadratic Assignment

Most difficult part: Global filtering

  • Find a consistent subset
  • Pairwise consistency:
  • Correspondence pair must preserve intrinsic distance
  • Maximize number of pairwise consistent pairs
  • Quadratic assignment (in general: NP-hard)

SLIDE 23

Quadratic Assignment Model

Quadratic Assignment

  • n potential

correspondences

  • Each one can be

turned on or off

  • Label with variables xi
  • Compatibility score:

(incomplete model; details later)

P_match(x_1, …, x_n) = ∏_{i=1..n} P_single(x_i) ⋅ ∏_{i,j=1..n, i≠j} P_compatible(x_i, x_j),   x_i ∈ {0, 1}

[Figure: candidate correspondences with x_j = 1 (on), x_i = 0 (off)]

SLIDE 24

Quadratic Assignment Model

Quadratic Assignment

  • Compatibility score:
  • Singletons:

Descriptor match

P_match(x_1, …, x_n) = ∏_{i=1..n} P_single(x_i) ⋅ ∏_{i,j=1..n, i≠j} P_compatible(x_i, x_j),   x_i ∈ {0, 1}

SLIDE 25

Quadratic Assignment Model

Quadratic Assignment

  • Compatibility score:
  • Singletons:

Descriptor match

  • Doubles:

Compatibility

P_match(x_1, …, x_n) = ∏_{i=1..n} P_single(x_i) ⋅ ∏_{i,j=1..n, i≠j} P_compatible(x_i, x_j),   x_i ∈ {0, 1}

SLIDE 26

Quadratic Assignment Model

Quadratic Assignment

  • Matrix notation:
  • Quadratic scores are encoded in matrix D
  • Linear scores are encoded in vector s

log P_match(x_1, …, x_n) = ∑_{i=1..n} log P_single(x_i) + ∑_{i,j=1..n, i≠j} log P_compatible(x_i, x_j) = xᵀD x + xᵀs
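A toy numeric check of this matrix form (D, s, and x below are made-up values; in practice D holds the pairwise log-probabilities and s the descriptor terms):

```python
import numpy as np

def log_match_score(x, D, s):
    """log P_match in matrix form: quadratic term x^T D x plus the
    linear term x^T s, for a binary on/off vector x."""
    x = np.asarray(x, dtype=float)
    return float(x @ D @ x + x @ s)
```

For example, with D = [[0, 1], [1, 0]] and s = [2, 3], switching both correspondences on gives a quadratic term of 2 and a linear term of 5.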

SLIDE 27

Quadratic Assignment Model

Quadratic Assignment

  • Task: find optimal binary vector x

Regularization:

  • No trivial solution x = 0

Examples

  • As many “1”s as possible without exceeding an error threshold

  • Fixed norm of x-vector

SLIDE 28

Spectral Matching

Simple & Effective Approximation:

  • Spectral matching [Leordeanu & Hebert 05]
  • Form compatibility matrix:

A = ( a_11 a_12 a_13
      a_21 a_22 a_23
      a_31 a_32 a_33 )

Diagonal: descriptor match. Off-diagonal: pairwise compatibility. All entries within [0..1] = [no match ... perfect match]

SLIDE 29

Spectral Matching

Approximate largest clique:

  • Compute eigenvector with largest eigenvalue
  • Maximizes Rayleigh quotient:
  • “Best yield” for bounded norm
  • The more consistent pairs (rows of 1s), the better
  • Approximates largest clique
  • Implementation
  • For example: power iteration

x* = argmax_x (xᵀA x / ‖x‖²)

SLIDE 30

Spectral Matching

Post-processing

  • Greedy quantization
  • Select largest remaining entry, set it to 1
  • Set all entries to 0 that are not pairwise consistent

with current set

  • Iterate until all entries are quantized

In practice...

  • This algorithm turns out to work quite well.
  • Very easy to implement
  • Limited to (approx.) quadratic assignment model
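Putting the power iteration and the greedy quantization together (a sketch of the method of [Leordeanu & Hebert 05]; the `consistent` callback and iteration count are assumptions):

```python
import numpy as np

def spectral_matching(A, consistent, iters=200):
    """Principal eigenvector of the compatibility matrix A via power
    iteration, then greedy quantization to a binary, pairwise-consistent
    selection. consistent(i, j) decides pairwise compatibility."""
    n = A.shape[0]
    v = np.ones(n)
    for _ in range(iters):                  # power iteration
        v = A @ v
        v = v / np.linalg.norm(v)
    chosen, remaining = [], v.copy()
    while remaining.max() > 0:
        i = int(remaining.argmax())         # largest remaining entry -> 1
        chosen.append(i)
        remaining[i] = 0.0
        for j in range(n):                  # drop inconsistent candidates
            if remaining[j] > 0 and not consistent(i, j):
                remaining[j] = 0.0
    return sorted(chosen)
```

On a compatibility matrix with one strong clique (rows of 1s), the eigenvector concentrates on the clique and the quantization recovers it.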

SLIDE 31

Spectral Matching Example

Application to Animations

  • Feature points:

Geometric MLS-SIFT features [Li et al. 2005]

  • Descriptors:

Curvature & color ring histograms

  • Global Filtering:

Spectral matching

  • Pairwise animation matching:

Low precision passive stereo data

[Data set: Christian Theobald, Implementation: Martin Bokeloh]

SLIDE 32

RANSAC and Forward Search

SLIDE 33

Random Sampling Algorithms

Estimation subject to outliers:

  • We have candidate

correspondences

  • But most of them are bad
  • Standard vision problem
  • Standard tools:

Ransac & forward search

SLIDE 34

RANSAC

“Standard” RANSAC line fitting example:

  • Randomly pick two points
  • Verify how many others fit
  • Repeat many times and pick the best one (most matches)

[Figure: data; pick 2 points at random; repeat on data]
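The three steps map directly to code (a sketch of textbook RANSAC line fitting; trial count, tolerance, and seed are arbitrary choices):

```python
import random
import numpy as np

def ransac_line(points, trials=200, tol=0.1, seed=0):
    """Repeatedly pick two random points, count how many others lie
    within tol of the implied line, keep the hypothesis with the most
    inliers."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(trials):
        i, j = rng.sample(range(len(pts)), 2)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm      # unit normal of the line
        dist = np.abs((pts - pts[i]) @ n)       # point-line distances
        inliers = dist <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```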

SLIDE 35

Forward Search

Forward Search:

  • RANSAC variant
  • Like RANSAC, but refine the model by “growing”

  • Pick best match, then recalculate
  • Repeat until threshold is reached

[Figure: start → iteration → iteration ... → result]

SLIDE 36

RANSAC/FWS Algorithm

Idea

  • Starting correspondence
  • Add more correspondences that are consistent
  • Preserve intrinsic distances
  • Importance sampling algorithm

Advantages

  • Efficient (small initial set)
  • General (arbitrary criteria)

SLIDE 37

Ransac/FWS Details

Algorithm: Simple Idea

  • Select correspondences with probability proportional to

their plausibility

  • First correspondence: Descriptors
  • Second: Preserve distance (distribution peaks)
  • Third: Preserve distance (even fewer choices)

...

  • Rapidly becomes deterministic
  • Repeat multiple times (typ.: 100x)
  • Choose the largest solution (largest number of correspondences)
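One inner run of this procedure, with the probabilistic sampling replaced by a deterministic greedy pass, can be sketched as (function name and inputs are illustrative):

```python
def forward_search(seed, candidates, d_src, d_dst, tol=1e-6):
    """Start from a seed correspondence and keep adding candidate
    correspondences (i, j) that preserve distances to everything already
    accepted. d_src/d_dst are distance matrices on the two shapes."""
    accepted = [seed]
    for (i, j) in candidates:
        if j in {l for _, l in accepted}:       # target already used
            continue
        if all(abs(d_src[i][k] - d_dst[j][l]) <= tol for (k, l) in accepted):
            accepted.append((i, j))
    return accepted
```

The actual algorithm samples candidates in proportion to descriptor and distance plausibility, repeats the run many times (typically 100), and keeps the largest result.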

SLIDE 38

Ransac/FWS Details

Provably Efficient:

  • Theoretically efficient (details later)
  • Faster in practice (using descriptors)

Flexible:

  • In later iterations (> 3 correspondences), allow for outlier

geodesics

  • Can handle topological noise

SLIDE 39

Forward Search Algorithm

Forward Search

  • Add correspondences incrementally
  • Compute match probabilities given the information

already decided on

  • Iterate until no more matches can be found that meet a certain error threshold

  • Outer Loop:
  • Iterate the algorithm with random choices
  • Pick the best (i.e., largest) solution

SLIDE 40

Forward Search Algorithm

Step 1:

  • Start with one correspondence
  • Target side importance sampling:

prefer good descriptor matches

  • Optional source side imp. sampl: prefer unique descriptors

[Figure: source and target shapes with descriptor matching scores]

SLIDE 41

Forward Search Algorithm

Step 2:

  • Compute “posterior” incorporating geodesic distance
  • Target side importance sampling:

sample according to descriptor match × distance score

  • Again: optional source side imp. sampl: prefer unique descriptors

[Figure: source and target with posterior (distance)]

SLIDE 42

Forward Search Algorithm

Step 2:

  • Compute “posterior” incorporating geodesic distance
  • Target side importance sampling:

sample according to descriptor match × distance score

  • Again: optional source side imp. sampl: prefer unique descriptors

[Figure: source and target with posterior (distance & descriptors)]

SLIDE 43

Forward Search Algorithm

Step 3:

  • Same as step 2, continue sampling...

[Figure: source and target with posterior (distance & descriptors)]

SLIDE 44

Forward Search Algorithm

Step 3:

  • Same as step 2, continue sampling...

[Figure: source and target with posterior (distance & descriptors)]

SLIDE 45

Forward Search Algorithm

Source side:

  • Match all descriptors, compute entropy
  • Choose minimum entropy features for start
  • Subsequent features: consider the entropy of all matches in addition

[Figure: source and target with posterior (distance & descriptors)]

SLIDE 46

Another View

Landmark Coordinates

  • Distances to already established points give a charting of the manifold

SLIDE 47

Results

[data sets: Stanford 3D Scanning Repository / Carsten Stoll]

SLIDE 48

Results: Topological Noise

[Figure: spectral quadratic assignment [Leordeanu et al. 05] vs. RANSAC algorithm [Tevs et al. 09]]

SLIDE 49

Complexity

SLIDE 50

How expensive is all of this?

Cost analysis:

  • How many rounds of sampling are necessary?

Constraints [Lipman et al. 2009]:

  • Assume disc or sphere topology
  • An isometric mapping is in particular a conformal

mapping

  • A conformal mapping is determined by 3 point-to-point

correspondences

SLIDE 51

How expensive is it..?

First correspondence:

  • Worst case: n trials (n feature points)
  • In practice: k ≪ n good descriptor matches (typically k ≈ 5–20)

Second correspondence:

  • Worst case: n trials, expected: √n trials
  • In practice: very few (due to

descriptor matching, maybe 1-3)

Last match:

  • At most two matches
SLIDE 52

Costs...

Overall costs:

  • Worst case: O(n²) matches to explore
  • Typical: O(n^1.5) matches to explore

Randomization:

  • Exploring m items costs expected O(m log m) trials
  • Worst case bound of O(n² log n) trials
  • Asymptotically sharp: O(c) times more trials shrink the failure probability to O(exp(−c²))

SLIDE 53

Costs...

Surface discretization:

  • Assume ε-sampling of the manifold (no features): O(ε⁻²) sample points
  • Worst case: O(ε⁻⁴ log ε⁻¹) sample correspondences for finding a match with accuracy ε
  • Expected: O(ε⁻³ log ε⁻¹)

In practice:

  • Importance sampling by descriptors is very effective
  • Typically: Good results after 100 iterations
  • Entropy-based planning: 1–10 iterations

SLIDE 54

General Case

Numerical errors:

  • Noisy surfaces, imprecise features: reflected in probability

maps (we know how little we might know)

Topological noise:

  • Use robust constraint potentials
  • For example: account for 5 best matches only

Topologically complex cases:

  • No analysis beyond disc/spherical topology
  • However: the algorithm will work in the general case

(potentially, at additional costs)

SLIDE 55

Other Application: Symmetry Detection

SLIDE 56

Symmetry Detection

[data set: M. Wacker, HTW Dresden]

SLIDE 57

Symmetry Detection

[data sets: IKG, Leibniz University Hannover / M. Wacker, HTW Dresden]

SLIDE 58

Rigid, Isometric, Relaxed Isometric

[Figure: rigid vs. isometric vs. relaxed isometric matching]

SLIDE 59

Learning Correspondences

SLIDE 60

Objective

Window Variants

SLIDE 61

Objective

User: a few sparse sketches. Find similar elements.

SLIDE 62

Learning a Matching Model

Learning a matching model

  • Learn descriptors
  • Learn geometric relations

SLIDE 63

Energy Function

Markov Chain Model

  • Global optimum: Belief propagation
  • Symmetry: Enumerate local optima

P(𝐲_1, …, 𝐲_l) = (1/a) ∏_{i=1..l} Φ_i(𝐲_i) ⋅ ∏_{i=1..l−1} Ψ_i(𝐲_i, 𝐲_{i+1})
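On a chain, belief propagation reduces to the max-product (Viterbi) recursion; a minimal sketch with Φ and Ψ given as dense score arrays (shapes and names are assumptions):

```python
import numpy as np

def chain_map(phi, psi):
    """Max-product belief propagation on a chain: phi is an (l x s)
    array of unary scores, psi an (l-1 x s x s) array of pairwise
    scores; returns the globally optimal label sequence."""
    l, s = phi.shape
    msg = np.log(phi[0])
    back = np.zeros((l, s), dtype=int)
    for t in range(1, l):
        # scores[a, b]: best score ending in label b via predecessor a
        scores = msg[:, None] + np.log(psi[t - 1]) + np.log(phi[t])[None, :]
        back[t] = scores.argmax(axis=0)
        msg = scores.max(axis=0)
    labels = [int(msg.argmax())]
    for t in range(l - 1, 0, -1):           # backtrack the optimum
        labels.append(int(back[t][labels[-1]]))
    return labels[::-1]
```

For the symmetry case the same recursion is rerun while enumerating local optima instead of keeping only the single best path.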

SLIDE 64

Result: Single-Class Learning

Window Variants

SLIDE 65

Result: Multi-Class Learning

SLIDE 66

Results: Ludwigskirche