SLIDE 1

Smoothed Analysis of ICP

David Arthur and Sergei Vassilvitskii (Stanford University)

SLIDE 2

Matching Datasets

Problem: Given two point sets A and B, translate A to best match B.

SLIDE 3

ICP: Iterative Closest Point

Problem: Given two point sets A and B, translate A to best match B. Example: [Figure: example point sets A and B.]


SLIDE 5

ICP: Iterative Closest Point

Problem: Given two point sets A and B, translate A to best match B. Example: which translation is best? ICP scores a candidate translation x by the sum of squared distances to nearest neighbors, and seeks

$$\min_x \; \varphi(x) = \sum_{a \in A} \big\| a + x - N_B(a + x) \big\|_2^2,$$

where N_B(·) maps a point to its nearest neighbor in B.

SLIDE 6

ICP: Iterative Closest Point

Given A, B with |A| = |B| = n:

  • 1. Begin with some translation x_0.
  • 2. Compute N_B(a + x_i) for each a ∈ A.
  • 3. Fix N_B(·) and compute the optimal translation

$$x_{i+1} = \frac{1}{|A|} \sum_{a \in A} \big( N_B(a + x_i) - a \big).$$

Steps 2 and 3 repeat until nothing changes. [Figure: point sets A and B during one iteration.]
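Below is a minimal Python sketch of this loop, for concreteness. The function name, the brute-force nearest-neighbor search, and the stopping test are illustrative assumptions, not details from the talk:

```python
import numpy as np

def icp_translation(A, B, x0, max_iters=1000, tol=1e-9):
    """Translation-only ICP. A, B: (n, d) point arrays; x0: initial translation (d,)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        # Step 2: nearest neighbor N_B(a + x) for each a in A (brute force).
        dists = np.linalg.norm((A + x)[:, None, :] - B[None, :, :], axis=2)
        nn = B[np.argmin(dists, axis=1)]
        # Step 3: with N_B fixed, the optimal translation is the mean of N_B(a + x) - a.
        x_new = (nn - A).mean(axis=0)
        if np.linalg.norm(x_new - x) < tol:  # fixed point: the assignments are stable
            return x_new
        x = x_new
    return x
```

Each iteration re-minimizes φ with the assignments held fixed, so φ never increases; the question analyzed in this talk is how many iterations the alternation can take.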



SLIDE 12

Notes on ICP

  • Accuracy: bad in the worst case.
  • Time to converge: ICP never repeats an assignment function N_B(·), which gives a trivial bound of O(n^n) iterations.
  • Better bounds? [ESE, SoCG 2006]: an Ω(n log n) lower bound in 2d and an O(dn^2)^d upper bound.
  • We tighten the bounds and show an Ω(n^2/d)^d lower bound.
  • But ICP runs very fast in practice, and the worst-case bounds don't do it justice.


SLIDE 16

When Worst Case is Too Bad

The theoretician's dilemma: an algorithm with horrible worst-case guarantees (unbounded competitive ratio, exponential running time, ...), but one that is widely used in practice. So, from worst case to what? Best case? Average case? Smoothed analysis (Spielman & Teng '01).


SLIDE 18

Smoothed Analysis

What is smoothed analysis? Add some random noise to the input, then look at the worst-case expected complexity. How do we add the random noise? This is easy in geometric settings ("Let P be a set of n points in general position..."): perturb each point by N(0, σ).
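As a concrete sketch of the smoothing step (assuming, as is standard, that N(0, σ) means independent Gaussian noise of standard deviation σ in each coordinate):

```python
import numpy as np

def smooth(points, sigma, rng=None):
    """Perturb each point independently by coordinate-wise Gaussian noise N(0, sigma)."""
    rng = rng if rng is not None else np.random.default_rng()
    return points + rng.normal(scale=sigma, size=points.shape)
```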

SLIDE 19

Notes on ICP

  • Accuracy: bad in the worst case.
  • Time to converge: never repeats N_B(·), so at most O(n^n) iterations; an O(dn^2)^d upper bound and our Ω(n^2/d)^d lower bound.
  • But ICP runs very fast in practice, and the worst-case bounds don't do it justice.
  • Theorem: the smoothed complexity of ICP is n^{O(1)} · (Diam/σ)^2.

SLIDE 20

Proof of Theorem

Outline: bound the minimal potential drop that occurs in every step. Two cases:

  • 1. A small number of points change their NN assignments ⇒ bound the potential drop from recomputing the translation.
  • 2. A large number of points change their NN assignments ⇒ bound the potential drop from the new nearest-neighbor assignments.

In both cases, quantify how "general" the general position obtained after smoothing really is.


SLIDE 24

Proof: Part I

Warm-up: If every point is perturbed by N(0, σ), then the minimum distance between points is at least ε with probability 1 − n^2 (ε/σ)^d.

Proof: Consider two points p and q. Fix the position of p. The random perturbation of q will put it at least ε away from p with probability 1 − (ε/σ)^d; a union bound over all pairs gives the claim.

Easy generalization: Consider sets of up to k points, P = {p_1, p_2, ..., p_k} and Q = {q_1, q_2, ..., q_k}. Then

$$\Big\| \sum_{p \in P} p \;-\; \sum_{q \in Q} q \Big\| \;\geq\; \varepsilon$$

with probability 1 − n^{2k} (ε/σ)^d. We will take k = O(d) and ε = σ/poly(n).
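Where the (ε/σ)^d factor comes from, as a sketch: the density of a d-dimensional Gaussian N(0, σ) is bounded, so the perturbed q lands in any fixed ε-ball with only small probability (constants absorbed into the O, as on the slide):

```latex
\Pr\big[\|q - p\| \le \varepsilon\big]
  \;\le\; \operatorname{vol}\big(B(p,\varepsilon)\big) \cdot (2\pi\sigma^2)^{-d/2}
  \;=\; \frac{\pi^{d/2}\,\varepsilon^d}{\Gamma(d/2+1)} \cdot \frac{1}{(2\pi\sigma^2)^{d/2}}
  \;=\; O(\varepsilon/\sigma)^d .
```

A union bound over the at most n^2 pairs (or n^{2k} choices of the sets P and Q) then gives the stated success probabilities.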


SLIDE 26

Proof: Part I (cont.)

Recall:

$$x_{i+1} = \frac{1}{|A|} \sum_{a \in A} \big( N_B(a + x_i) - a \big).$$

If only k points changed their NN assignments, then with high probability ‖x_{i+1} − x_i‖ ≥ ε/n.

  • Fact. For any set S with mean c(S), and any point y:

$$\sum_{s \in S} \| s - y \|^2 \;=\; |S| \cdot \| c(S) - y \|^2 \;+\; \sum_{s \in S} \| s - c(S) \|^2.$$

Thus the total potential drops by at least n · (ε/n)^2 = ε^2/n.
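The Fact is the standard bias-variance identity; a one-line sketch of the proof, writing c for c(S) and expanding the square (the cross term vanishes because c is the mean of S):

```latex
\sum_{s \in S}\|s - y\|^2
  = \sum_{s \in S}\big\|(s - c) + (c - y)\big\|^2
  = \sum_{s \in S}\|s - c\|^2 + |S|\,\|c - y\|^2
    + 2\,(c - y)\cdot \underbrace{\textstyle\sum_{s \in S}(s - c)}_{=\,0}.
```

Applied with S the set of vectors N_B(a + x_i) − a and y = x_i, moving to the new mean x_{i+1} drops the potential by |S| · ‖x_{i+1} − x_i‖^2 ≥ n(ε/n)^2, matching the slide.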

SLIDE 27

Proof: Part II

Suppose many points change their NN assignments. What could go wrong? [Figure: example point sets A and B.]



SLIDE 31

Proof: Part II (cont.)

What can we say about the points? Every active point in A must be near the bisector of two points in B, so the translation vector must lie in a narrow slab around that bisector. For a different active point the slab has a different orientation, and the translation vector must lie in that slab as well. [Figure: two slabs of different orientations constraining the translation.]

SLIDE 33

Proof: Part II (cont.)

But if the slabs are narrow, then because of the perturbation their orientations will appear random. Intuitively, we do not expect a large (ω(d)) number of such slabs to have a common intersection. Thus we can bound the minimum slab width from below.


SLIDE 35

Proof: Finish

  • Theorem. With probability 1 − 2p, ICP will finish after at most O(d n^{11} (D/σ)^2 p^{−2/d}) iterations.

Since ICP always runs in at most O(dn^2)^d iterations, we can take p = O(dn^2)^{−d} to show that the smoothed complexity is polynomial. Many union bounds produce the n^{11} factor, but the bound is linear in d!
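A worked step for the substitution, under the bound as reconstructed above (all constants hidden in the O-notation):

```latex
p = O(dn^2)^{-d}
\;\Longrightarrow\;
p^{-2/d} = O(dn^2)^2 = O(d^2 n^4),
\quad\text{so}\quad
O\!\big(d\,n^{11}(D/\sigma)^2\,p^{-2/d}\big) = O\!\big(d^3 n^{15} (D/\sigma)^2\big),
```

while the failure event contributes at most 2p · O(dn^2)^d = O(1) expected iterations, so the expected number of iterations is polynomial in n, d, and D/σ.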

SLIDE 36

Other Geometric Heuristics?

k-means method: a popular iterative clustering algorithm, similar in spirit to ICP. Worst-case upper bound: O(n^{kd}) iterations. We show a smoothed upper bound of n^{O(k)}: polynomial in the dimension, consistent with empirical evidence. Big open question: can we push this to n^{O(1)}? (Conjecture: yes.)
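For comparison with the ICP sketch above, here is one Lloyd's (k-means) step: the same alternate-and-average structure, with k cluster means in place of the single translation (a minimal sketch; names are illustrative):

```python
import numpy as np

def lloyd_step(points, centers):
    """One k-means (Lloyd's) iteration: assign points, then recompute means."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Assignment step: nearest center for each point (analogue of computing N_B).
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # Update step: each center moves to the mean of its cluster (analogue of the
    # optimal-translation step); empty clusters keep their old center.
    new_centers = centers.copy()
    for j in range(len(centers)):
        members = points[labels == j]
        if len(members) > 0:
            new_centers[j] = members.mean(axis=0)
    return new_centers, labels
```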

SLIDE 37

Conclusion

We showed that worst-case ICP suffers from the curse of dimensionality, but smoothed ICP is linear in the number of dimensions. Similar results hold for the k-means (Lloyd's) method. The techniques center on analyzing the separation obtained from the smoothing perturbation.

SLIDE 38

Thank You