Provably Good Implicit MLS Surfaces
Nikola Milosavljevic, CS 468, Fall 2005
Implicit MLS Surfaces
◮ Zero-level-set of a function I(x) over R³
◮ Fixed points of a projection operator
◮ Weighted sum of signed distances
  I(x) = (Σ_p n_pᵀ(x − p) · θ_p(x)) / (Σ_p θ_p(x))
◮ Extremal surfaces
  I(x) = n(x) · ∂e_MLS(y, n(x))/∂y |_{y=x}
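The weighted-sum-of-signed-distances form above can be sketched in a few lines of NumPy. This is a minimal illustration, not the papers' implementation: the weight θ_p is taken here to be a plain unnormalized Gaussian, and the names `mls_implicit` and `eps` are assumptions of this sketch.

```python
import numpy as np

def mls_implicit(x, points, normals, eps=0.3):
    """Weighted sum of signed distances to the samples' tangent planes.

    `points` (n, 3) holds the samples p, `normals` (n, 3) their unit
    normals n_p; a plain Gaussian weight stands in for theta_p here.
    """
    d = x - points                                   # offsets x - p
    w = np.exp(-np.sum(d * d, axis=1) / eps**2)      # theta_p(x)
    signed = np.sum(normals * d, axis=1)             # n_p^T (x - p)
    return np.sum(signed * w) / np.sum(w)
```

On samples of the plane z = 0 with upward normals, n_pᵀ(x − p) equals the z-coordinate of x for every p, so the weighted average reproduces the exact signed distance.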
MLS Surfaces with Guarantees
◮ Not discussed so far
  ◮ What is the “ground truth” surface?
  ◮ How do the samples arise?
  ◮ How good is the reconstruction?
◮ In this talk
  ◮ A notion of the original surface
  ◮ A model of sampling and noise
  ◮ Study the behavior of MLS surfaces
◮ The Holy Grail
  ◮ Geometric accuracy
  ◮ Correct topology
  ◮ Smoothness
  ◮ Fast (quadratic) convergence
  ◮ Efficient (local) computation
Problem Statement
◮ Original smooth, closed surface Σ
◮ Given conditions on
  ◮ Sampling density
  ◮ Normal estimates
  ◮ Noise
◮ Design an implicit function I(x) whose zero set recovers Σ
Outline
◮ R. Kolluri (U.C. Berkeley), “Provably Good Moving Least Squares”, SODA 2005.
  ◮ Globally uniform sampling + normals
  ◮ Correct topology
  ◮ Smoothness
◮ T. Dey, J. Sun (Ohio State), “An Adaptive MLS Surface for Reconstruction with Guarantees”, SGP 2005.
  ◮ Feature-sensitive sampling + normals
  ◮ Correct topology
  ◮ “Smoothness”
Sampling Conditions
◮ Medial axis
  ◮ Points with multiple closest points on Σ
◮ Local feature size, lfs(·)
  ◮ Distance from x ∈ Σ to the medial axis
◮ ǫ-sampling
  ◮ Every x̃ ∈ Σ has a sample p within distance ǫ lfs(x̃)
  ◮ No oversampling, |B(x, ǫ lfs(x̃))| ≤ α
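The ǫ-sampling condition can be checked mechanically once local feature sizes are known. The sketch below assumes lfs has been precomputed elsewhere (extracting the medial axis itself is a separate problem); `check_eps_sampling` is a name chosen for this illustration.

```python
import numpy as np

def check_eps_sampling(surface_pts, lfs, samples, eps):
    """Verify the eps-sampling condition: every surface point x~ must
    have a sample within eps * lfs(x~).  `lfs` holds precomputed local
    feature sizes, one per surface point."""
    for x, f in zip(surface_pts, lfs):
        if np.min(np.linalg.norm(samples - x, axis=1)) > eps * f:
            return False
    return True
```

For the unit sphere the medial axis is the single center point, so lfs ≡ 1 everywhere, which makes it a convenient test case.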
Typical Proof Outline
◮ Step 1:
  ◮ Analyze I(x)
  ◮ Localize the zero-set
  ◮ Spurious zero-crossings?
◮ Implication: Reconstruction is geometrically close
Typical Proof Outline
◮ Step 2:
  ◮ Analyze ∇I(x) close to the surface
  ◮ Show that ∇I(x) ≠ 0
  ◮ Show that I(x) is strictly monotonic in the direction normal to Σ
  ñ_x · ∇I(x) > 0
◮ Implications:
  ◮ Reconstructed surface is a manifold
  ◮ Normal directions define a homeomorphism
Typical Proof Outline
◮ A common technique
  ◮ Bound the influence of points farther than a suitably chosen threshold
  ◮ The actual radius depends on the quantity that is being evaluated
  ◮ Inside — reliable
  ◮ Outside — negligible
An MLS Surface for a Uniformly Sampled PCD
Assumptions
◮ Uniform sampling, ||x − p|| ≤ ǫ
  ◮ Assume lfs(x) ≥ 1 everywhere
  ◮ Smallest features determine density
◮ No oversampling, |B(x, ǫ)| ≤ α
◮ Noise, ||p − p̃|| ≤ ǫ²
◮ Normal estimates, ∠(n_p, ñ_p) ≤ ǫ
Proposed MLS Surface
◮ Weighted sum of signed distances
  I(x) = (Σ_p n_pᵀ(x − p) · θ_p(x)) / (Σ_p θ_p(x))
  θ_p(x) = (1/α_p) e^(−||x − p||²/ǫ²)
Analysis of I(x)
◮ Can show that all zero-crossings are within δ of Σ
◮ Fix x far away (farther than δ) from the surface
◮ Influence threshold r = d(x, Σ) + δ + ǫ
◮ If p is a nearby sample, n_pᵀ(x − p) = d(x, Σ) · (1 + O(ǫ)) + O(ǫ²)
◮ n_pᵀ(x − p) and d(x, Σ) have the same sign, provided δ = O(ǫ)
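The nearby-sample estimate n_pᵀ(x − p) = d(x, Σ)(1 + O(ǫ)) + O(ǫ²) is easy to sanity-check numerically. The setup below is a hypothetical one chosen for this sketch: the unit sphere, a query point x at distance d outside the surface, and a sample p at angular distance a ≈ ǫ from x's closest surface point.

```python
import numpy as np

def tangent_plane_distance(x, p, n_p):
    """n_p^T (x - p): signed distance from x to the tangent plane at p."""
    return float(np.dot(n_p, x - p))

# Unit sphere: x sits at distance d outside the surface along the x-axis;
# p is a sphere sample at angle a from x's closest surface point (1, 0, 0).
d, a = 0.05, 0.05
x = np.array([1.0 + d, 0.0, 0.0])
p = np.array([np.cos(a), np.sin(a), 0.0])
# On the unit sphere the outward normal at p is p itself.
err = abs(tangent_plane_distance(x, p, p) - d)
```

Here err works out to (1 + d)(1 − cos a), which is within the predicted d·O(ǫ) + O(ǫ²) error budget.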
Analysis of I(x)
Far away points
◮ Divide the “distant space” into spherical shells of thickness ǫ, with radii r_i = r + i · ǫ
◮ The number of samples in the i-th shell is O(i²) (uniform sampling)
◮ The influence decays as O(e^(−i²))
◮ The overall influence is O((r³/ǫ²) · e^(−r²/ǫ²))
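The shell-counting argument can be reproduced numerically: summing (count per shell) × (Gaussian weight) over shells shows how sharply the tail decays. This is a back-of-the-envelope sketch; `tail_influence` and its constants are choices made here, not the paper's.

```python
import numpy as np

def tail_influence(r, eps, n_shells=200):
    """Crude upper bound on the total Gaussian weight of samples beyond
    radius r: shell i has inner radius r_i = r + i*eps, O((r_i/eps)^2)
    samples under uniform sampling, each weighted by exp(-r_i^2/eps^2)."""
    total = 0.0
    for i in range(n_shells):
        ri = r + i * eps
        total += (ri / eps) ** 2 * np.exp(-((ri / eps) ** 2))
    return total
```

Already at r = 5ǫ the first shell dominates with roughly 25·e⁻²⁵ ≈ 3.5·10⁻¹⁰, far below the e⁻¹ weight of a sample at distance ǫ, which is why distant points can be ignored.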
Analysis of ∇I(x)
◮ Fix x close (within δ) to the surface
◮ Show ñ_x · ∇I(x) > 0, where x̃ ∈ Σ is the point closest to x
◮ Influence threshold r = √((d(x, Σ) + ǫ)² + ǫ²) = O(ǫ)
◮ Far away points are negligible
Analysis of ∇I(x)
◮ Nearby points’ contribution to the gradient vector
  ◮ Signed distance terms: Σ_p Σ_q θ_p(x) θ_q(x) · n_p
  ◮ Change-of-weights terms: Σ_p Σ_q θ_p(x) θ_q(x) · O(n_pᵀ(x − p)) · (p − q)
◮ The normals n_p are close to ñ_x!
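The positivity claim ñ_x · ∇I(x) > 0 can be spot-checked with a central finite difference along the surface normal. The sketch below assumes the same plain-Gaussian I(x) used earlier; `normal_derivative` and the plane test case are illustrative choices, not the paper's proof.

```python
import numpy as np

def mls_implicit(x, points, normals, eps=0.3):
    """I(x): Gaussian-weighted sum of signed distances (assumed kernel)."""
    d = x - points
    w = np.exp(-np.sum(d * d, axis=1) / eps**2)
    return np.sum(np.sum(normals * d, axis=1) * w) / np.sum(w)

def normal_derivative(x, n_tilde, points, normals, eps=0.3, h=1e-5):
    """Central difference approximating n~_x . grad I(x)."""
    return (mls_implicit(x + h * n_tilde, points, normals, eps)
            - mls_implicit(x - h * n_tilde, points, normals, eps)) / (2 * h)
```

For samples of the plane z = 0 with upward normals, I(x) reduces to the z-coordinate, so the derivative along (0, 0, 1) should be exactly 1: strictly positive, as the monotonicity argument requires.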
Uniform Case: Conclusions
◮ The zero set of I is confined to the
δ = O(ǫ) thickening
◮ The reconstruction is
geometrically accurate
◮ Whenever I(x) = 0, ∇I(x) ≠ 0
◮ The reconstruction is locally flat
◮ The gradient lines provide a
“morphing function”
◮ The reconstruction is
topologically correct
An MLS Surface for an Adaptively Sampled PCD
Motivation
◮ Allow variations in sampling
density according to local feature size
◮ Requires adaptive Gaussian
kernel
◮ Uniform sampling: kernel
width ǫ
◮ Does not work for
adaptive sampling
Adaptive Gaussian Kernel
◮ Adapt to lfs(x̃)?  θ_p(x) ∼ e^(−O(||x − p||² / (ǫ² lfs(x̃)²)))
  ◮ Bias toward small features
◮ Adapt to lfs(p̃)?  θ_p(x) ∼ e^(−O(||x − p||² / (ǫ² lfs(p̃)²)))
  ◮ Influence may not decrease with distance
Adaptive Gaussian Kernel
◮ Solution: Adapt to √(lfs(p̃) lfs(x̃)):
  θ_p(x) = exp(−√2 · ||x − p||² / (ǫ² lfs(x̃) lfs(p̃)))
◮ Note: not smooth!
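The adaptive kernel above is a one-liner once lfs values are available. This sketch takes the √2 constant from the slide at face value (the exact constant is not essential to the behavior); `adaptive_weight` and its argument names are choices made here.

```python
import numpy as np

def adaptive_weight(x, p, lfs_x, lfs_p, eps=0.1):
    """Gaussian weight whose width is set by the geometric mean of the
    local feature sizes at x~ and p~, so it shrinks symmetrically when
    either end of the pair sits in a finely detailed region."""
    d2 = float(np.sum((np.asarray(x) - np.asarray(p)) ** 2))
    return np.exp(-np.sqrt(2.0) * d2 / (eps**2 * lfs_x * lfs_p))
```

The weight is 1 at zero distance, strictly decreases with distance for fixed feature sizes, and shrinks further as either lfs value gets smaller, which is exactly the feature-sensitivity the slide motivates.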
Other Assumptions
◮ No oversampling, |B(x, ǫ lfs(x̃))| ≤ α
◮ Noise magnitude at most ǫ² lfs(x̃)
◮ Good normal estimates, ∠(n_p, ñ_p) ≤ ǫ
Analysis
◮ Extend the proofs to an adaptive thickening of width δ lfs(x̃)
◮ Fix a point x and a threshold radius r
◮ Bounding the influence of far away points
  ◮ Small distant features
◮ Reliability of nearby points
  ◮ Small nearby features
Influence of Far Away Points
◮ Subdivide into cubes, accumulate the counts bottom-up
◮ Size of top-level cubes: O(ǫ lfs(x̃))
◮ Stop subdivision when lfs(c_k) ≥ (ǫ/2^k) lfs(x̃)
◮ Apply “no oversampling” to the leaves
◮ The i-th shell may contain more than O(i²) samples, but the total contribution is still O(i² e^(−i²))
◮ Total contribution to I(x) and ñ_x · ∇I(x): O(poly(r/ǫ) · exp(−r²/ǫ²))
Influence of Nearby Points
◮ Works only for d(x, Σ) ≤ 0.1 lfs(x̃), i.e., x close to the surface
◮ If x is at least δ lfs(x̃) = 0.3ǫ lfs(x̃) away from the surface, the sign is correct
◮ Can claim I(x) ≠ 0 only for 0.3ǫ ≤ d(x, Σ)/lfs(x̃) ≤ 0.1
Influence of Nearby Points
◮ Works only if x is not too far from the surface
◮ May have fake zero-crossings outside the 0.1 lfs(x̃) neighborhood of Σ