

SLIDE 1

Fast and Robust Normal Estimation for Point Clouds with Sharp Features

Alexandre Boulch & Renaud Marlet

University Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ParisTech

Symposium on Geometry Processing 2012

1/37

SLIDE 2

Normal estimation

Normal estimation for point clouds Our method Experiments Conclusion

2/37

SLIDE 3

Normal Estimation

Normal estimation for point clouds Our method Experiments Conclusion

3/37

SLIDE 4

Data

Point clouds from photogrammetry or laser acquisition:

◮ may be noisy

Figure: sensitivity to noise vs. robustness to noise.

4/37

SLIDE 5

Data

Point clouds from photogrammetry or laser acquisition:

◮ may be noisy
◮ may have outliers

Figure: regression plane at P.

4/37

SLIDE 6

Data

Point clouds from photogrammetry or laser acquisition:

◮ may be noisy
◮ may have outliers
◮ most often have sharp features

Figure: smoothed sharp features vs. preserved sharp features.

4/37

SLIDE 7

Data

Point clouds from photogrammetry or laser acquisition:

◮ may be noisy
◮ may have outliers
◮ most often have sharp features
◮ may be anisotropic

Figure: sensitivity to anisotropy vs. robustness to anisotropy.

4/37

SLIDE 8

Data

Point clouds from photogrammetry or laser acquisition:

◮ may be noisy
◮ may have outliers
◮ most often have sharp features
◮ may be anisotropic
◮ may be huge (more than 20 million points)

4/37

SLIDE 9

Normal Estimation

Normal estimation for point clouds Our method Experiments Conclusion

5/37

SLIDE 10

Basics of the method (2D case here for readability)

Let P be a point and NP be its neighborhood.

6/37

SLIDE 11

Basics of the method (2D case here for readability)

Let P be a point and NP be its neighborhood. We consider two cases:

◮ P lies on a planar surface

6/37

SLIDE 13

Basics of the method (2D case here for readability)

Let P be a point and NP be its neighborhood. We consider two cases:

◮ P lies on a planar surface
◮ P lies next to a sharp feature

6/37

SLIDE 15

Basics of the method (2D case here for readability)

Let P be a point and NP be its neighborhood. We consider two cases:

◮ P lies on a planar surface
◮ P lies next to a sharp feature

If Area(N1) > Area(N2), picking a pair of points in N1 × N1 is more probable than in N2 × N2, and pairs in N1 × N2 lead to “random” normals.

6/37

SLIDE 16

Basics of the method (2D case here for readability)

Main Idea

Draw as many primitives as necessary to estimate the normal distribution, and then the most probable normal.

◮ Discretize the problem

Figure: normal directions at P.

N.B. We compute the normal direction, not orientation.

7/37

SLIDE 17

Basics of the method (2D case here for readability)

Main Idea

Draw as many primitives as necessary to estimate the normal distribution, and then the most probable normal.

◮ Discretize the problem
◮ Fill a Hough accumulator

Figure: normal directions at P.

N.B. We compute the normal direction, not orientation.

7/37

SLIDE 18

Basics of the method (2D case here for readability)

Main Idea

Draw as many primitives as necessary to estimate the normal distribution, and then the most probable normal.

◮ Discretize the problem
◮ Fill a Hough accumulator
◮ Select the best normal

Figure: normal directions at P.

N.B. We compute the normal direction, not orientation.

7/37
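The three steps can be sketched in the 2D setting of the slides. This is an illustrative sketch only, not the authors' implementation: in 2D a "plane" is the line through two picked points, so we draw pairs of neighbors, vote each line's normal direction into an angular accumulator over [0, π), and return the center of the most voted bin (the helper name `estimate_normal_2d` is made up).

```python
import math
import random

def estimate_normal_2d(neighbors, n_bins=36, n_draws=500, seed=0):
    """Randomized Hough estimate of the normal direction at a point,
    given its 2D neighborhood (a list of (x, y) tuples).

    Repeatedly pick two distinct neighbors, compute the normal of the
    line they span, and vote for its direction (no orientation) in a
    discretized accumulator over [0, pi)."""
    rng = random.Random(seed)
    votes = [0] * n_bins
    for _ in range(n_draws):
        (x1, y1), (x2, y2) = rng.sample(neighbors, 2)
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            continue  # degenerate pair, no vote
        # The normal is perpendicular to (dx, dy); fold into [0, pi)
        theta = math.atan2(dx, -dy) % math.pi
        votes[int(theta / math.pi * n_bins) % n_bins] += 1
    best = max(range(n_bins), key=votes.__getitem__)
    return (best + 0.5) * math.pi / n_bins  # center of the winning bin
```

On points sampled on a horizontal line, the estimate lands near π/2 (a vertical normal), up to the bin width.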

SLIDE 19

Robust Randomized Hough Transform

◮ $T$: number of primitives picked after $T$ iterations
◮ $T_{min}$: number of primitives to pick
◮ $M$: number of bins of the accumulator
◮ $\hat p_m$: empirical mean of bin $m$
◮ $p_m$: theoretical mean of bin $m$

8/37

SLIDE 20

Robust Randomized Hough Transform

Global upper bound

$T_{min}$ such that:

$$P\Big(\max_{m \in \{1,\dots,M\}} |\hat p_m - p_m| \le \delta\Big) \ge \alpha$$

From Hoeffding's inequality, for a given bin:

$$P(|\hat p_m - p_m| \ge \delta) \le 2\exp(-2\delta^2 T_{min})$$

Considering the whole accumulator (union bound):

$$T_{min} \ge \frac{1}{2\delta^2} \ln\Big(\frac{2M}{1-\alpha}\Big)$$

9/37
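The bound is a one-liner to evaluate (illustrative parameter values below, not taken from the paper; δ, α and M are as defined on the previous slides):

```python
import math

def t_min(delta, alpha, n_bins):
    """Smallest number of primitives guaranteeing, with probability at
    least alpha, that every empirical bin mean is within delta of its
    theoretical mean (Hoeffding inequality + union bound over n_bins)."""
    return math.ceil(math.log(2 * n_bins / (1 - alpha)) / (2 * delta ** 2))
```

For example, `t_min(0.05, 0.95, 100)` gives 1659 draws; the bound grows only logarithmically with the number of bins.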

SLIDE 21

Robust Randomized Hough Transform

Confidence Interval

Idea: if the same bin gets picked often enough, we want to stop drawing primitives. From the Central Limit Theorem, we can stop if:

$$\hat p_{m_1} - \hat p_{m_2} \ge 2\sqrt{\frac{1}{T}}$$

i.e. the confidence intervals of the two most voted bins do not intersect (confidence level 95%).

10/37
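A sketch of this stopping test (the helper name `can_stop` is made up; `votes` are the accumulator counts and `n_drawn` is the number of primitives drawn so far, i.e. T):

```python
import math

def can_stop(votes, n_drawn):
    """Early-stopping test: stop once the 95% confidence intervals of the
    two most voted bins no longer intersect,
    i.e. p_hat_m1 - p_hat_m2 >= 2 * sqrt(1 / T)."""
    if n_drawn == 0:
        return False
    top = sorted(votes, reverse=True)
    p1, p2 = top[0] / n_drawn, top[1] / n_drawn
    return p1 - p2 >= 2 * math.sqrt(1 / n_drawn)
```

A dominant bin triggers the stop early; a near-tie keeps drawing primitives up to Tmin.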

SLIDE 22

Accumulator

Our primitives are plane directions (defined by two angles). We use the accumulator of Borrmann et al. (3D Research, 2011).

◮ Fast computation
◮ Bins of similar area

11/37
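The equal-area flavor of such an accumulator can be imitated with a simple latitude-band layout. This is a sketch in the spirit of Borrmann et al.'s design, not their actual data structure: bands of equal angular height, each holding a number of longitude cells proportional to the cosine of its latitude.

```python
import math

def band_layout(n_phi):
    """Split the sphere of directions into n_phi latitude bands of equal
    height; scale each band's number of longitude cells by cos(latitude)
    so that every bin spans a similar area."""
    cells = []
    for i in range(n_phi):
        lat = (i + 0.5) / n_phi * math.pi - math.pi / 2  # band center
        cells.append(max(1, round(2 * n_phi * math.cos(lat))))
    return cells
```

Equatorial bands get many cells, polar bands very few, which keeps the per-bin area roughly constant.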

SLIDE 23

Discretization issues

The use of a discrete accumulator may be a cause of error.

12/37

SLIDE 25

Discretization issues

The use of a discrete accumulator may be a cause of error.

Solution

Iterate the algorithm using randomly rotated accumulators.

12/37
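In the 2D setting of the figures, "rotating" the accumulator amounts to shifting its bin boundaries by a random angular offset. A toy sketch of the iteration (both helper names are made up):

```python
import math
import random

def binned_estimate(angles, n_bins, offset):
    """Vote angles (in [0, pi)) into n_bins shifted by `offset`; return
    the center of the winning bin, mapped back to [0, pi)."""
    votes = [0] * n_bins
    for a in angles:
        votes[int(((a + offset) % math.pi) / math.pi * n_bins) % n_bins] += 1
    best = max(range(n_bins), key=votes.__getitem__)
    return ((best + 0.5) * math.pi / n_bins - offset) % math.pi

def rotated_estimates(angles, n_bins=36, n_rot=5, seed=0):
    """Repeat the vote with n_rot random accumulator rotations (random
    offsets in 2D), yielding one estimate per rotation."""
    rng = random.Random(seed)
    return [binned_estimate(angles, n_bins, rng.uniform(0, math.pi))
            for _ in range(n_rot)]
```

Each rotation places bin boundaries differently, so averaging or clustering the per-rotation estimates damps the discretization error.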

SLIDE 26

Normal Selection

Normal directions obtained by rotation of the accumulator:

◮ Mean over all the normals
◮ Best confidence
◮ Mean over best cluster

13/37
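The third strategy ("mean over best cluster") can be sketched as follows, under simplifying assumptions: the candidates (one per accumulator rotation) are unit vectors, clusters are seeded greedily at a candidate, and the tolerance angle `a_cluster` is the parameter listed later in the deck. The helper name is made up.

```python
import math

def mean_over_best_cluster(normals, a_cluster):
    """Find the largest cluster of candidate normals whose angle to the
    cluster seed is below a_cluster (direction, not orientation), and
    return the normalized mean of that cluster."""
    def angle(u, v):
        d = abs(sum(a * b for a, b in zip(u, v)))  # |dot|: ignore orientation
        return math.acos(max(-1.0, min(1.0, d)))
    # Seed = candidate with the most close-by candidates
    best = max(normals,
               key=lambda s: sum(angle(s, n) < a_cluster for n in normals))
    cluster = [n for n in normals if angle(best, n) < a_cluster]
    mean = [sum(c[i] for c in cluster) / len(cluster) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in mean)) or 1.0
    return [c / norm for c in mean]
```

Averaging only the dominant cluster keeps the estimate sharp when a minority of rotations voted for the "wrong" side of a feature.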

SLIDE 27

Dealing with anisotropy

The robustness to anisotropy depends on the way we select the planes (triplets of points).

Figure: sensitivity to anisotropy vs. robustness to anisotropy.

14/37

SLIDE 28

Random point selection among nearest neighbors

Dealing with anisotropy

The triplets are randomly selected among the K nearest neighbors. Fast, but cannot deal with anisotropy.

15/37
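A minimal sketch of this selection, and of turning a triplet into a plane normal via a cross product (plain 3D tuples assumed; the helper names are made up):

```python
import math
import random

def triplet_normal(p, q, r):
    """Unit normal of the plane through three 3D points (cross product
    of two edge vectors)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n)) or 1.0  # degenerate -> zero vector
    return [c / norm for c in n]

def knn_triplet_normals(knn_points, n_draws, seed=0):
    """Vote candidates: normals of planes through random triplets drawn
    among the K nearest neighbors (fast, but density-biased)."""
    rng = random.Random(seed)
    return [triplet_normal(*rng.sample(knn_points, 3)) for _ in range(n_draws)]
```

Because the draw follows the sampling density, a densely scanned side of a feature dominates the vote, which is exactly the anisotropy problem the next slides address.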

SLIDE 29

Uniform point selection on the neighborhood ball

Dealing with anisotropy

◮ Pick a point Q in the neighborhood ball

16/37

SLIDE 30

Uniform point selection on the neighborhood ball

Dealing with anisotropy

◮ Pick a point Q in the neighborhood ball
◮ Consider a small ball around Q

16/37

SLIDE 31

Uniform point selection on the neighborhood ball

Dealing with anisotropy

◮ Pick a point Q in the neighborhood ball
◮ Consider a small ball around Q
◮ Pick a point randomly in the small ball

16/37

SLIDE 32

Uniform point selection on the neighborhood ball

Dealing with anisotropy

◮ Pick a point Q in the neighborhood ball
◮ Consider a small ball around Q
◮ Pick a point randomly in the small ball
◮ Iterate to get a triplet

Deals with anisotropy, but at a high computation cost.

16/37
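One draw of this selection can be sketched as follows. This is a naive sketch (rejection sampling for Q, a linear scan for the small ball; the helper name is made up); a real implementation would use a spatial index.

```python
import math
import random

def uniform_ball_pick(points, center, radius, small_radius, rng):
    """Draw a location Q uniformly in the neighborhood ball of `center`,
    then return a random cloud point within `small_radius` of Q
    (or None if the small ball contains no point)."""
    while True:  # rejection-sample Q uniformly in the ball
        q = [rng.uniform(-radius, radius) for _ in range(3)]
        if sum(c * c for c in q) <= radius * radius:
            break
    q = [center[i] + q[i] for i in range(3)]
    near = [p for p in points
            if sum((p[i] - q[i]) ** 2 for i in range(3)) <= small_radius ** 2]
    return rng.choice(near) if near else None
```

Because Q is uniform over the ball rather than over the samples, dense and sparse regions of the cloud are picked with equal probability, which is what buys robustness to anisotropy.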

SLIDE 33

Cube discretization of the neighborhood ball

Dealing with anisotropy

◮ Discretize the neighborhood ball

17/37

SLIDE 34

Cube discretization of the neighborhood ball

Dealing with anisotropy

◮ Discretize the neighborhood ball
◮ Pick a cube

17/37

SLIDE 35

Cube discretization of the neighborhood ball

Dealing with anisotropy

◮ Discretize the neighborhood ball
◮ Pick a cube
◮ Pick a point randomly in this cube

17/37

SLIDE 36

Cube discretization of the neighborhood ball

Dealing with anisotropy

◮ Discretize the neighborhood ball
◮ Pick a cube
◮ Pick a point randomly in this cube
◮ Iterate to get a triplet

Good compromise between speed and robustness to anisotropy.

17/37
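One draw of the cube-discretized selection can be sketched as below (illustrative only: the bucketing is rebuilt on every call for clarity, and the helper name is made up; the grid would be built once per neighborhood in practice).

```python
import random
from collections import defaultdict

def cube_pick(points, center, radius, c, rng):
    """Bucket the neighborhood into a c x c x c grid over its bounding
    cube, pick a non-empty cube uniformly, then a random point inside
    it. Dense regions no longer dominate the draw."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(min(c - 1, max(0, int((p[i] - center[i] + radius)
                                          / (2 * radius) * c)))
                    for i in range(3))
        buckets[key].append(p)
    cube = rng.choice(list(buckets))      # uniform over non-empty cubes
    return rng.choice(buckets[cube])      # uniform inside the chosen cube
```

On a cloud with 100 points on one side and 1 point on the other, each side is picked about half the time, illustrating the compromise between speed and robustness to anisotropy.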

SLIDE 37

Normal Estimation

Normal estimation for point clouds Our method Experiments Conclusion

18/37

SLIDE 38

Methods used for comparison

◮ Regression

  ◮ Hoppe et al. (SIGGRAPH, 1992): plane fitting
  ◮ Cazals & Pouget (SGP, 2003): jet fitting

Table: plane fitting and jet fitting rated on noise, outliers, sharp features, anisotropy, speed.

19/37
SLIDE 39

Methods used for comparison

◮ Regression

  ◮ Hoppe et al. (SIGGRAPH, 1992): plane fitting
  ◮ Cazals & Pouget (SGP, 2003): jet fitting

◮ Voronoï diagram

  ◮ Dey & Goswami (SCG, 2004): NormFet

Table: plane fitting, jet fitting and NormFet rated on noise, outliers, sharp features, anisotropy, speed.

19/37
SLIDE 40

Methods used for comparison

◮ Regression

  ◮ Hoppe et al. (SIGGRAPH, 1992): plane fitting
  ◮ Cazals & Pouget (SGP, 2003): jet fitting

◮ Voronoï diagram

  ◮ Dey & Goswami (SCG, 2004): NormFet

◮ Sample Consensus Models

  ◮ Li et al. (Computers & Graphics, 2010)

Table: plane fitting, jet fitting, NormFet and Sample Consensus rated on noise, outliers, sharp features, anisotropy, speed.

19/37
SLIDE 41

Precision

Two error measures:

◮ Root Mean Square (RMS):

$$RMS = \sqrt{\frac{1}{|C|} \sum_{P \in C} \angle(n_{P,ref}, n_{P,est})^2}$$

◮ Root Mean Square with threshold (RMS_τ):

$$RMS_\tau = \sqrt{\frac{1}{|C|} \sum_{P \in C} v_P^2}$$

where $v_P = \angle(n_{P,ref}, n_{P,est})$ if $\angle(n_{P,ref}, n_{P,est}) < \tau$, and $v_P = \frac{\pi}{2}$ otherwise.

More suited for sharp features.

20/37
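Both measures are straightforward to compute from per-point angular errors (a sketch; `errors` is assumed to hold the angles between reference and estimated normals, in radians):

```python
import math

def rms(errors):
    """Plain root mean square of angular errors (radians)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rms_tau(errors, tau):
    """RMS with threshold: errors above tau are clamped to pi/2, so a
    normal that misses a sharp feature entirely pays a fixed, large
    penalty instead of an averaged-out one."""
    v = [e if e < tau else math.pi / 2 for e in errors]
    return math.sqrt(sum(x * x for x in v) / len(v))
```

With one badly wrong normal among good ones, RMS_τ is markedly larger than RMS, which is why it discriminates better near sharp features.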

SLIDE 42

Visual on error distances

Same RMS, different RMS_τ

21/37

SLIDE 43

Precision (with noise)

Precision for a uniformly sampled cube, depending on noise.

22/37

SLIDE 44

Precision (with noise and anisotropy)

Precision for a corner with anisotropy, depending on noise.

23/37

SLIDE 45

Computation time

Computation time for a sphere, as a function of the number of points.

24/37

SLIDE 46

Robustness to outliers

Noisy model (0.2%) + 100% of outliers.

25/37

SLIDE 47

Robustness to outliers

Noisy model (0.2%) + 200% of outliers.

26/37

SLIDE 48

Robustness to anisotropy

27/37

SLIDE 49

Preservation of sharp features

28/37

SLIDE 50

Robustness to “natural” noise, outliers and anisotropy

Point cloud created by photogrammetry.

29/37

SLIDE 51

Normal Estimation

Normal estimation for point clouds Our method Experiments Conclusion

30/37

SLIDE 52

Conclusion

Table: plane fitting, jet fitting, NormFet, Sample Consensus and our method rated on noise, outliers, sharp features, anisotropy, speed.

Compared to state-of-the-art methods that preserve sharp features, our normal estimator is:

◮ at least as precise
◮ at least as robust to noise and outliers
◮ almost 10x faster
◮ robust to anisotropy

31/37

SLIDE 53

Code available

Web site: https://sites.google.com/site/boulchalexandre

Two versions under the GPL license:

◮ for the Point Cloud Library (http://pointclouds.org)
◮ for CGAL (http://www.cgal.org)

32/37

SLIDE 54

Computation time

Model (# vertices)    Tmin=700, nrot=5        Tmin=300, nrot=2
                      w/o interv.  with       w/o interv.  with
Armadillo (173k)      21 s         20 s       3 s          3 s
Dragon (438k)         55 s         51 s       8 s          7 s
Buddha (543k)         1.1 min      1 min      10 s         10 s
Circ. Box (701k)      1.5 min      1.3 min    13 s         12 s
Omotondo (998k)       2 min        1.2 min    18 s         10 s
Statuette (5M)        11 min       10 min     1.5 min      1.4 min
Room (6.6M)           14 min       8 min      2.3 min      1.6 min
Lucy (14M)            28 min       17 min     4 min        2.5 min

33/37

SLIDE 55

Parameters

◮ K or r: number of neighbors or neighborhood radius,
◮ Tmin: number of primitives to explore,
◮ nφ: parameter defining the number of bins,
◮ nrot: number of accumulator rotations,
◮ c: presampling or discretization factor (anisotropy only),
◮ acluster: tolerance angle (mean over best cluster only).

34/37

SLIDE 56

Efficiency

35/37

SLIDE 57

Influence of the neighborhood size

Figure panels: K = 20, K = 200, K = 400.

36/37

SLIDE 58

Precision (with noise)

Precision for a uniformly sampled cube, depending on noise.

37/37