  1. Provably Good Implicit MLS Surfaces Nikola Milosavljevic CS 468, Fall 2005

  2. Implicit MLS Surfaces
  ◮ Zero-level-set of a function I(x) over R³
  ◮ Fixed points of a projection operator
  ◮ Weighted sum of signed distances:

      I(x) = Σ_p n_pᵀ(x − p) · θ_p(x) / Σ_p θ_p(x)

  ◮ Extremal surfaces:

      I(x) = n(x) · ∂e_MLS(y, n(x))/∂y |_{y = x}
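The weighted sum of signed distances above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `theta` is any weight function, and the sphere data and kernel width are made up for the example.

```python
import numpy as np

def mls_implicit(x, points, normals, theta):
    """Weighted sum of signed distances:
    I(x) = sum_p n_p^T (x - p) * theta_p(x) / sum_p theta_p(x)."""
    diffs = x - points                        # the vectors x - p, shape (n, 3)
    signed = np.sum(normals * diffs, axis=1)  # n_p^T (x - p) for each sample p
    w = theta(x, points)                      # weights theta_p(x)
    return np.dot(signed, w) / np.sum(w)

# Toy example: samples on the unit sphere with outward normals n_p = p.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
gauss = lambda x, P: np.exp(-np.sum((x - P) ** 2, axis=1) / 0.2 ** 2)

print(mls_implicit(np.array([0.0, 0.0, 1.2]), pts, pts, gauss) > 0)  # outside: I > 0
print(mls_implicit(np.array([0.0, 0.0, 0.8]), pts, pts, gauss) < 0)  # inside: I < 0
```

The sign of I separates inside from outside, which is exactly the property the later slides localize and analyze.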

  3. MLS Surfaces with Guarantees
  ◮ Not discussed so far:
    ◮ What is the “ground truth” surface?
    ◮ How do the samples arise?
    ◮ How good is the reconstruction?
  ◮ In this talk:
    ◮ A notion of the original surface
    ◮ A model of sampling and noise
    ◮ A study of the behavior of MLS surfaces
  ◮ The Holy Grail:
    ◮ Geometric accuracy
    ◮ Correct topology
    ◮ Smoothness
    ◮ Fast (quadratic) convergence
    ◮ Efficient (local) computation

  4. Problem Statement
  ◮ Original smooth, closed surface Σ
  ◮ Given conditions on Σ:
    ◮ Sampling density
    ◮ Normal estimates
    ◮ Noise
  ◮ Design an implicit function I(x) whose zero set recovers Σ

  5. Outline
  ◮ R. Kolluri (U.C. Berkeley), “Provably Good Moving Least Squares”, SODA 2005
    ◮ Globally uniform sampling + normals
    ◮ Correct topology
    ◮ Smoothness
  ◮ T. Dey, J. Sun (Ohio State), “An Adaptive MLS Surface for Reconstruction with Guarantees”, SGP 2005
    ◮ Feature-sensitive sampling + normals
    ◮ Correct topology
    ◮ “Smoothness”

  6. Sampling Conditions
  ◮ Medial axis: points with multiple closest points on Σ
  ◮ Local feature size lfs(·): distance from x ∈ Σ to the medial axis
  ◮ ε-sampling: every point x̃ ∈ Σ has a sample p at distance at most ε·lfs(x̃)
  ◮ No oversampling: |B(x, ε·lfs(x̃))| ≤ α
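The ε-sampling condition is easy to check on a toy curve. For a circle the medial axis is its center, so lfs is constant and equal to the radius; the helper name and the data below are invented for the illustration.

```python
import numpy as np

def is_eps_sample(surface_pts, samples, lfs, eps):
    """Check the eps-sampling condition: every surface point x
    has a sample within distance eps * lfs(x)."""
    for x in surface_pts:
        d = np.min(np.linalg.norm(samples - x, axis=1))
        if d > eps * lfs(x):
            return False
    return True

# Dense stand-in for the circle of radius 2 (medial axis = center, lfs = 2).
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
circle = 2.0 * np.c_[np.cos(t), np.sin(t)]
samples = circle[::50]          # 20 roughly uniform samples
lfs = lambda x: 2.0             # constant local feature size on a circle

print(is_eps_sample(circle, samples, lfs, eps=0.2))   # sparse sampling suffices
print(is_eps_sample(circle, samples, lfs, eps=0.05))  # this eps is too strict
```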

  7. Typical Proof Outline
  ◮ Step 1:
    ◮ Analyze I(x)
    ◮ Localize the zero set between the regions where I(x) > 0 and I(x) < 0
    ◮ Any spurious zero-crossings?
  ◮ Implication: the reconstruction is geometrically close

  8. Typical Proof Outline
  ◮ Step 2:
    ◮ Analyze ∇I(x) close to the surface
    ◮ Show that ∇I(x) ≠ 0
    ◮ Show that I(x) is strictly monotonic in the direction normal to Σ: n_x̃ · ∇I(x) > 0
  ◮ Implications:
    ◮ The reconstructed surface is a manifold
    ◮ Normal directions define a homeomorphism

  9. Typical Proof Outline
  ◮ A common technique: bounding the influence of points farther than a suitably chosen threshold
    ◮ The actual radius depends on the quantity being evaluated
    ◮ Inside — reliable
    ◮ Outside — negligible

  10. An MLS Surface for a Uniformly Sampled PCD

  11. Assumptions
  ◮ Uniform sampling: ||x − p|| ≤ ε
  ◮ Assume lfs(x) ≥ 1 everywhere (the smallest features determine the density)
  ◮ No oversampling: |B(x, ε)| ≤ α
  ◮ Noise: ||p − p̃|| ≤ ε²
  ◮ Normal estimates: ∠(n_p, n_p̃) ≤ ε

  12. Proposed MLS Surface
  ◮ Weighted sum of signed distances:

      I(x) = Σ_p n_pᵀ(x − p) · θ_p(x) / Σ_p θ_p(x),   θ_p(x) = (1/α) e^{−||x − p||²/ε²}
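With this Gaussian weight, the zero set of I hugs the sampled surface. A quick numerical sketch (toy noise-free sphere data, made-up kernel width; the 1/α factor cancels between numerator and denominator, so it is omitted) locates the zero crossing along a radial line by bisection:

```python
import numpy as np

# Samples on the unit sphere with outward normals (n_p = p on the sphere).
rng = np.random.default_rng(1)
P = rng.normal(size=(2000, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
EPS = 0.1  # kernel width; the 1/alpha factor cancels in I

def I(x):
    w = np.exp(-np.sum((x - P) ** 2, axis=1) / EPS ** 2)
    signed = np.sum(P * (x - P), axis=1)   # n_p^T (x - p)
    return np.dot(signed, w) / np.sum(w)

# Bisect I along a radial line: the zero crossing should sit near radius 1.
lo, hi = 0.8, 1.2                          # I(0.8 d) < 0, I(1.2 d) > 0
d = np.array([0.0, 0.0, 1.0])
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if I(mid * d) < 0 else (lo, mid)
print(abs(0.5 * (lo + hi) - 1.0) < 0.05)   # zero set close to the sphere
```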

  13. Analysis of I(x)
  ◮ Can show that all zero-crossings are within δ of Σ
  ◮ Fix x far away (farther than δ) from the surface
  ◮ Influence threshold: r = d(x, Σ) + δ + ε
  ◮ If p is a nearby sample (within r):

      n_pᵀ(x − p) = d(x, Σ) · (1 + O(ε)) + O(ε²)

  ◮ n_pᵀ(x − p) and d(x, Σ) have the same sign, provided δ = O(ε)

  14. Analysis of I(x): Far Away Points
  ◮ Divide the “distant space” into spherical shells of thickness ε: r_i = r + i·ε
  ◮ The number of samples in the i-th shell is O(i²) (uniform sampling)
  ◮ The influence decays as O(e^{−i²})
  ◮ The overall influence is O( r · (r²/ε²) · e^{−r²/ε²} )
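The shell argument can be sanity-checked numerically: the per-shell sample count grows only polynomially while the Gaussian weight decays like e^{−i²}, so the series converges and the tail beyond the first few shells is negligible. The cutoff index below is an arbitrary choice for the demonstration.

```python
import math

# Per-shell bound: O(i^2) samples, each with weight at most e^{-i^2}.
terms = [i ** 2 * math.exp(-(i ** 2)) for i in range(1, 100)]
total = sum(terms)
tail = sum(terms[4:])   # contribution of the shells with i >= 5

print(total < 1.0)           # the whole series sums to O(1)
print(tail / total < 1e-8)   # shells past the threshold are negligible
```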

  15. Analysis of ∇I(x)
  ◮ Fix x close (within δ) to the surface
  ◮ Show n_x̃ · ∇I(x) > 0, where x̃ ∈ Σ is the point closest to x
  ◮ Influence threshold: r = √( (d(x, Σ) + ε)² + ε² ) = O(ε)
  ◮ Far away points are negligible

  16. Analysis of ∇I(x)
  ◮ Nearby points’ contribution to the gradient vector ∇I(x):
  ◮ Signed distance functions:

      Σ_p Σ_q θ_p(x) θ_q(x) · n_p

  ◮ Change of weights:

      Σ_p Σ_q θ_p(x) θ_q(x) · O(n_pᵀ(x − p)) · (p − q)

  ◮ The normals n_p are close to n_x̃!
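The claim n_x̃ · ∇I(x) > 0 in the thin shell can be probed with finite differences. This is a toy check on noise-free sphere data (invented radii and shell width), not the paper's argument:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(size=(2000, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)  # unit-sphere samples, n_p = p
EPS = 0.1

def I(x):
    w = np.exp(-np.sum((x - P) ** 2, axis=1) / EPS ** 2)
    return np.dot(np.sum(P * (x - P), axis=1), w) / np.sum(w)

def grad_I(x, h=1e-5):
    # Central-difference approximation of the gradient of I.
    e = np.eye(3)
    return np.array([(I(x + h * e[k]) - I(x - h * e[k])) / (2 * h)
                     for k in range(3)])

# Test points within a thin shell around the sphere; for a sphere the
# normal at the closest surface point x~ is the radial direction.
ok = True
for x in rng.normal(size=(20, 3)):
    x /= np.linalg.norm(x)
    x *= rng.uniform(0.97, 1.03)   # within delta = 0.03 of the surface
    n = x / np.linalg.norm(x)      # n_xtilde: radial direction
    ok = ok and (n @ grad_I(x) > 0)
print(ok)  # I is strictly increasing along the normal direction
```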

  17. Uniform Case: Conclusions
  ◮ The zero set of I is confined to the δ = O(ε) thickening of Σ
    ◮ The reconstruction is geometrically accurate
  ◮ Whenever I(x) = 0, ∇I(x) ≠ 0
    ◮ The reconstruction is locally flat
  ◮ The gradient lines provide a “morphing function”
    ◮ The reconstruction is topologically correct

  18. An MLS Surface for an Adaptively Sampled PCD

  19. Motivation
  ◮ Allow variations in sampling density according to local feature size
  ◮ Requires an adaptive Gaussian kernel
    ◮ Uniform sampling: kernel width ε
    ◮ Does not work for adaptive sampling

  20. Adaptive Gaussian Kernel
  ◮ Adapt to lfs(x̃)?

      θ_p(x) ~ e^{−O( ||x − p||² / (ε² lfs(x̃)²) )}

    ◮ Bias toward small features
  ◮ Adapt to lfs(p̃)?

      θ_p(x) ~ e^{−O( ||x − p||² / (ε² lfs(p̃)²) )}

    ◮ Influence may not decrease with distance

  21. Adaptive Gaussian Kernel
  ◮ Solution: adapt to √( lfs(x̃) · lfs(p̃) )

      θ_p(x) = exp( −2·||x − p||² / (ε² lfs(x̃) lfs(p̃)) )

  ◮ Note: not smooth!
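A toy comparison shows why the geometric-mean kernel is preferred over adapting to lfs(p̃) alone: a distant sample near a large feature can outweigh a nearby sample, while the symmetric kernel keeps influence decreasing with distance. The lfs values and ε below are invented for the demonstration.

```python
import math

EPS = 0.5

def theta_p_only(d, lfs_p):
    # Kernel adapted to lfs(p~) alone.
    return math.exp(-d ** 2 / (EPS ** 2 * lfs_p ** 2))

def theta_sym(d, lfs_x, lfs_p):
    # Kernel adapted to sqrt(lfs(x~) * lfs(p~)).
    return math.exp(-2 * d ** 2 / (EPS ** 2 * lfs_x * lfs_p))

lfs_x = 0.1                 # x~ sits on a small feature
near = (0.05, 0.1)          # (distance, lfs(p~)): nearby sample, small feature
far = (1.0, 10.0)           # distant sample near a large feature

print(theta_p_only(*far) > theta_p_only(*near))   # far sample dominates: bad
print(theta_sym(far[0], lfs_x, far[1])
      < theta_sym(near[0], lfs_x, near[1]))       # symmetric kernel fixes it
```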

  22. Other Assumptions
  ◮ No oversampling: |B(x, ε·lfs(x̃))| ≤ α
  ◮ Noise magnitude at most ε²·lfs(x̃)
  ◮ Good normal estimates: ∠(n_p, n_p̃) ≤ ε

  23. Analysis
  ◮ Extend the proofs to an adaptive thickening of width δ·lfs(x̃)
  ◮ Fix a point x and a threshold radius r
  ◮ Bounding the influence of far away points
    ◮ Small distant features
  ◮ Reliability of nearby points
    ◮ Small nearby features

  24. Influence of Far Away Points
  ◮ Subdivide into cubes, accumulate the counts bottom-up
  ◮ Size of top-level cubes: O(ε·lfs(x̃))
  ◮ Stop subdivision when lfs(c_k) ≥ (ε/2^k)·lfs(x̃)
  ◮ Apply “no oversampling” to the leaves
  ◮ The i-th shell may contain more than O(i²) samples, but the total contribution is still O(i² e^{−i²})
  ◮ Total contribution to I(x) and n_x̃ · ∇I(x):

      O( poly(r/ε) · exp(−r²/ε²) )

  25. Influence of Nearby Points
  ◮ Works only for d(x, Σ) ≤ 0.1·lfs(x̃)
  ◮ If x is at least (δ = 0.3ε)·lfs(x̃) away from the surface, the sign of I(x) is correct
  ◮ Can claim I(x) ≠ 0 only for 0.3ε·lfs(x̃) ≤ d(x, Σ) ≤ 0.1·lfs(x̃)

  26. Influence of Nearby Points
  ◮ Works only if x is not too far from the surface
  ◮ May have fake zero-crossings beyond 0.1·lfs(x̃)!
  (Figures: uniform sampling vs. adaptive sampling)

  27. Remarks
  ◮ Delaunay-based estimation of normals and the medial axis
  ◮ Maxima layers for standard PMLS surfaces
  ◮ Comparison with other projection methods
