SLIDE 1

Provably Good Implicit MLS Surfaces

Nikola Milosavljevic, CS 468, Fall 2005

SLIDE 2

Implicit MLS Surfaces

◮ Zero-level-set of a function I(x) over R³
◮ Fixed points of a projection operator
◮ Weighted sum of signed distances:

  I(x) = Σ_p n_pᵀ(x − p) · θ_p(x) / Σ_p θ_p(x)

◮ Extremal surfaces:

  I(x) = n(x) · ∂e_MLS(y, n(x))/∂y |_{y=x}

(Figure: projection iterates x0, x1, x2, x3, x4 converging to the surface; sample p with normal n_p.)
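The "fixed points of a projection operator" view can be sketched as a damped Newton iteration that moves a point along the gradient of I until I vanishes. This is an illustrative sketch, not the papers' exact operator; the unit sphere used in the check is a toy stand-in for an MLS implicit.

```python
import numpy as np

def project(x, I, grad_I, steps=50, tol=1e-10):
    """Move x toward the zero set of I along grad_I; fixed points of
    this map satisfy I(x) = 0 (a sketch of the projection view)."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        g = grad_I(x)
        x = x - I(x) * g / np.dot(g, g)  # damped Newton step
        if abs(I(x)) < tol:
            break
    return x

# Toy implicit: the unit sphere, I(x) = ||x|| - 1.
I = lambda x: np.linalg.norm(x) - 1.0
grad_I = lambda x: x / np.linalg.norm(x)
y = project(np.array([2.0, 0.0, 0.0]), I, grad_I)
```

For this toy implicit the iteration lands on the sphere in a single step; for a genuine MLS implicit the same loop would be run with the weighted-sum I(x) and its gradient.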

SLIDE 3

MLS Surfaces with Guarantees

◮ Not discussed so far
  ◮ What is the “ground truth” surface?
  ◮ How do the samples arise?
  ◮ How good is the reconstruction?
◮ In this talk
  ◮ A notion of the original surface
  ◮ A model of sampling and noise
  ◮ Study the behavior of MLS surfaces
◮ The Holy Grail
  ◮ Geometric accuracy
  ◮ Correct topology
  ◮ Smoothness
  ◮ Fast (quadratic) convergence
  ◮ Efficient (local) computation

SLIDE 4

Problem Statement

◮ Original smooth, closed surface Σ
◮ Given conditions on
  ◮ Sampling density
  ◮ Normal estimates
  ◮ Noise
◮ Design an implicit function I(x) whose zero set recovers Σ

(Figure: surface Σ and its point-cloud sample P.)

SLIDE 5

Outline

◮ R. Kolluri (U.C. Berkeley), “Provably Good Moving Least Squares”, SODA 2005.
  ◮ Globally uniform sampling + normals
  ◮ Correct topology
  ◮ Smoothness
◮ T. Dey, J. Sun (Ohio State), “An Adaptive MLS Surface for Reconstruction with Guarantees”, SGP 2005.
  ◮ Feature-sensitive sampling + normals
  ◮ Correct topology
  ◮ “Smoothness”

SLIDE 6

Sampling Conditions

◮ Medial axis
  ◮ Points with multiple closest points on Σ
◮ Local feature size, lfs(·)
  ◮ Distance from x ∈ Σ to the medial axis
◮ ε-sampling
  ◮ Every point x̃ ∈ Σ has a sample p at most ε·lfs(x̃) away
◮ No oversampling, |B(x, ε·lfs(x̃))| ≤ α

(Figure: point x with closest surface point x̃, nearby sample p, and the ball of radius ε·lfs(x̃).)
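Given lfs values, both conditions are mechanical to verify (computing lfs itself requires the medial axis, so the values below are supplied by hand). A minimal sketch with an illustrative helper:

```python
import numpy as np

def check_eps_sampling(surface_pts, lfs_vals, samples, eps, alpha):
    """Check density (a sample within eps*lfs(x) of every surface point
    x) and no-oversampling (at most alpha samples in that ball).
    lfs values are assumed precomputed, not derived here."""
    samples = np.asarray(samples, dtype=float)
    dense, sparse = True, True
    for x, f in zip(np.asarray(surface_pts, dtype=float), lfs_vals):
        d = np.linalg.norm(samples - x, axis=1)
        r = eps * f
        dense &= bool(d.min() <= r)          # density condition
        sparse &= bool((d <= r).sum() <= alpha)  # no oversampling
    return dense, sparse

# Toy data: the unit circle (lfs = 1 everywhere), sampled uniformly.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
dense, sparse = check_eps_sampling(pts, np.ones(100), pts, eps=0.2, alpha=10)
```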

SLIDE 7

Typical Proof Outline

◮ Step 1:
  ◮ Analyze I(x)
  ◮ Localize the zero-set
  ◮ Spurious zero-crossings?
◮ Implication: Reconstruction is geometrically close

(Figure: a thin shell around Σ separating the regions where I(x) > 0 and I(x) < 0.)

SLIDE 8

Typical Proof Outline

◮ Step 2:
  ◮ Analyze ∇I(x) close to the surface
  ◮ Show that ∇I(x) ≠ 0
  ◮ Show that I(x) is strictly monotonic in the direction normal to Σ, i.e. n_x̃ · ∇I(x) > 0
◮ Implications:
  ◮ Reconstructed surface is a manifold
  ◮ Normal directions define a homeomorphism

SLIDE 9

Typical Proof Outline

◮ A common technique
  ◮ Bound the influence of points farther than a suitably chosen threshold radius
  ◮ The actual radius depends on the quantity that is being evaluated
  ◮ Inside — reliable
  ◮ Outside — negligible
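The thresholding idea can be sketched directly: evaluate the weighted signed-distance sum using only samples inside the ball B(x, r), on the grounds that the Gaussian weight makes everything outside negligible. This is an illustrative sketch, not the papers' exact estimator.

```python
import numpy as np

def I_truncated(x, P, N, eps, r):
    """Weighted signed-distance sum restricted to samples within radius
    r of x; the Gaussian weight makes farther samples negligible."""
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(P - x, axis=1)
    near = d <= r
    w = np.exp(-d[near] ** 2 / eps ** 2)
    sd = np.einsum('ij,ij->i', N[near], x - P[near])  # n_p^T (x - p)
    return float(np.dot(sd, w) / w.sum())

# Toy data: samples on the plane z = 0 with upward normals; the signed
# distance of x = (0, 0, 0.1) should come out as 0.1.
g = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(g, g)
P = np.stack([X.ravel(), Y.ravel(), np.zeros(X.size)], axis=1)
N = np.tile(np.array([0.0, 0.0, 1.0]), (P.shape[0], 1))
val = I_truncated([0.0, 0.0, 0.1], P, N, eps=0.3, r=1.0)
```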

SLIDE 10

An MLS Surface for a Uniformly Sampled PCD

SLIDE 11

Assumptions

◮ Uniform sampling, ||x − p|| ≤ ε
  ◮ Assume lfs(x) ≥ 1 everywhere
  ◮ Smallest features determine density
◮ No oversampling, |B(x, ε)| ≤ α
◮ Noise, ||p − p̃|| ≤ ε²
◮ Normal estimates, ∠(n_p, n_p̃) ≤ ε

SLIDE 12

Proposed MLS Surface

◮ Weighted sum of signed distances:

  I(x) = Σ_p n_pᵀ(x − p) · θ_p(x) / Σ_p θ_p(x),  θ_p(x) = (1/α_p) · exp(−||x − p||²/ε²)

(Figure: sample p with normal n_p and evaluation point x.)
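A direct sketch of this implicit function, dropping the per-sample 1/α_p normalization (an illustrative simplification), tested on a toy sphere cloud where the weighted average of the signed distances n_pᵀ(x − p) behaves as expected:

```python
import numpy as np

def mls_implicit(x, P, N, eps):
    """Gaussian-weighted average of the signed distances n_p^T (x - p).
    The per-sample 1/alpha_p factor is omitted in this sketch."""
    x = np.asarray(x, dtype=float)
    diff = x - P
    w = np.exp(-np.sum(diff ** 2, axis=1) / eps ** 2)
    sd = np.einsum('ij,ij->i', N, diff)
    return float(np.dot(sd, w) / w.sum())

# Toy point cloud: the unit sphere with exact outward normals.
rng = np.random.default_rng(0)
P = rng.normal(size=(500, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
N = P.copy()
inside = mls_implicit([0.0, 0.0, 0.0], P, N, eps=0.3)   # negative
outside = mls_implicit([2.0, 0.0, 0.0], P, N, eps=0.3)  # positive
```

At the center every signed distance is exactly −1, so the weighted average is −1 regardless of the weights; outside, the weights concentrate on the nearest samples, whose signed distances are positive.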

SLIDE 13

Analysis of I(x)

◮ Can show that all zero-crossings are within δ of Σ
◮ Fix x far away (farther than δ) from the surface
◮ Influence threshold r = d(x, Σ) + δ + ε
◮ If p is a nearby sample, n_pᵀ(x − p) = d(x, Σ)·(1 + O(ε)) + O(ε²)
◮ n_pᵀ(x − p) and d(x, Σ) have the same sign, provided δ = O(ε)

(Figure: point x at distance d(x, Σ) from the surface, with threshold radius r = d(x, Σ) + δ + ε.)

SLIDE 14

Analysis of I(x)

Far away points

◮ Divide the “distant space” into spherical shells of thickness ε
◮ The number of samples in the i-th shell is O(i²) (uniform sampling)
◮ The influence decays as O(exp(−i²))
◮ The overall influence is O(r · (r²/ε²) · exp(−r²/ε²))

(Figure: shells of radius r_i = r + i·ε around x, with r = d(x, Σ) + δ + ε.)
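A quick numeric check of the shell argument: with roughly (r/ε + i)² samples in shell i, each weighted by exp(−(r/ε + i)²), the tail sum is dominated by its first term and collapses as r/ε grows. The constants here are placeholders for the O(·) bounds.

```python
import math

def shell_tail(r_over_eps, n_shells=200):
    """Upper-bound the influence of distant samples: ~(r/eps + i)^2
    samples in shell i, each with weight exp(-(r/eps + i)^2)."""
    return sum((r_over_eps + i) ** 2 * math.exp(-(r_over_eps + i) ** 2)
               for i in range(n_shells))

# The tail shrinks rapidly as the threshold radius grows.
tails = [shell_tail(t) for t in (2.0, 3.0, 4.0)]
```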

SLIDE 15

Analysis of ∇I(x)

◮ Fix x close (within δ) to the surface
◮ Show n_x̃ · ∇I(x) > 0, where x̃ ∈ Σ is closest to x
◮ Influence threshold r = √((d(x, Σ) + ε)² + ε²) = O(ε)
◮ Far away points negligible

(Figure: point x near the surface, closest point x̃, gradient ∇I(x), and shells r_i = r + i·ε.)

SLIDE 16

Analysis of ∇I(x)

◮ Nearby points’ contribution to the gradient vector
◮ Signed distance functions: Σ_p Σ_q θ_p(x) θ_q(x) · n_p
◮ Change of weights: Σ_p Σ_q θ_p(x) θ_q(x) · O(n_pᵀ(x − p)) · (p − q)
◮ The normals n_p are close to n_x̃!

(Figure: sample p with normal n_p, gradient ∇I(x), point x, and closest point x̃.)

SLIDE 17

Uniform Case: Conclusions

◮ The zero set of I is confined to the δ = O(ε) thickening
  ◮ The reconstruction is geometrically accurate
◮ Whenever I(x) = 0, ∇I(x) ≠ 0
  ◮ The reconstruction is locally flat
◮ The gradient lines provide a “morphing function”
  ◮ The reconstruction is topologically correct

SLIDE 18

An MLS Surface for an Adaptively Sampled PCD

SLIDE 19

Motivation

◮ Allow variations in sampling density according to local feature size
◮ Requires an adaptive Gaussian kernel
  ◮ Uniform sampling: kernel width ε
  ◮ Does not work for adaptive sampling

SLIDE 20

Adaptive Gaussian Kernel

◮ Adapt to lfs(x̃)? θ_p(x) ∼ exp(−O(||x − p||² / (ε² lfs(x̃)²)))
  ◮ Bias toward small features
◮ Adapt to lfs(p̃)? θ_p(x) ∼ exp(−O(||x − p||² / (ε² lfs(p̃)²)))
  ◮ Influence may not decrease with distance
SLIDE 21

Adaptive Gaussian Kernel

◮ Solution: adapt to √(lfs(p̃) · lfs(x̃))

  θ_p(x) = exp(−√2 · ||x − p||² / (ε² lfs(x̃) lfs(p̃)))

◮ Note: not smooth!
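A sketch of this kernel; the exact constant in the exponent follows the slide as reconstructed (the √2 placement is an assumption). Because the width scales with the geometric mean of the two feature sizes, the weight still decreases monotonically with distance for fixed lfs values:

```python
import numpy as np

def adaptive_theta(x, p, lfs_x, lfs_p, eps):
    """Adaptive Gaussian weight whose width scales with the geometric
    mean sqrt(lfs(x~) * lfs(p~)). The sqrt(2) constant is taken from
    the slide as reconstructed and is an assumption."""
    d2 = float(np.sum((np.asarray(x, float) - np.asarray(p, float)) ** 2))
    return float(np.exp(-np.sqrt(2.0) * d2 / (eps ** 2 * lfs_x * lfs_p)))

# With fixed feature sizes, influence decays with distance.
w_near = adaptive_theta([0.0, 0.0, 0.0], [0.5, 0.0, 0.0], 1.0, 1.0, eps=0.5)
w_far = adaptive_theta([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 1.0, 1.0, eps=0.5)
```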
SLIDE 22

Other Assumptions

◮ No oversampling, |B(x, ε·lfs(x̃))| ≤ α
◮ Noise magnitude at most ε²·lfs(x̃)
◮ Good normal estimates, ∠(n_p, n_p̃) ≤ ε

SLIDE 23

Analysis

◮ Extend the proofs for an adaptive thickening of width δ·lfs(x̃)
◮ Fix a point x and a threshold radius r
◮ Bounding the influence of far away points
  ◮ Small distant features
◮ Reliability of nearby points
  ◮ Small nearby features

(Figure: point x at distance d(x, Σ) from the surface, threshold radius r, thickening width δ.)

SLIDE 24

Influence of Far Away Points

◮ Subdivide into cubes, accumulate the counts bottom-up
◮ Size of top-level cubes: O(ε·lfs(x̃))
◮ Stop subdivision when lfs(c_k) ≥ (ε/2^k)·lfs(x̃)
◮ Apply “no oversampling” to the leaves
◮ The i-th shell may contain more than O(i²) samples, but still the total contribution is O(i² exp(−i²))
◮ Total contribution to I(x) and n_x̃ · ∇I(x): O(poly(r/ε) · exp(−r²/ε²))

SLIDE 25

Influence of Nearby Points

◮ Works only for d(x, Σ) ≤ 0.1·lfs(x̃)
◮ If x is at least δ·lfs(x̃) from the surface, with δ = 0.3ε, the sign of I(x) is correct
◮ Can claim I(x) ≠ 0 only for 0.3ε ≤ d(x, Σ)/lfs(x̃) ≤ 0.1

(Figure: point x at distance d(x, Σ) from the surface, threshold radius r, band width δ.)

SLIDE 26

Influence of Nearby Points

◮ Works only if x is not too far from the surface
◮ May have fake zero-crossings outside 0.1·lfs(x̃)!

(Figures: reconstructions under uniform sampling vs. adaptive sampling.)

SLIDE 27

Remarks

◮ Delaunay-based estimation of normals and medial axis
◮ Maxima layers for standard PMLS surfaces
◮ Comparison with other projection methods