SLIDE 1

Learning to Group and Label Fine-Grained Shape Components

Xiaogang Wang, Bin Zhou, Haiyue Fang, Xiaowu Chen, Qinping Zhao, Kai Xu

SLIDE 2

Motivation

Handlebar Front fork Frame Wheel Seat Pedal Chain Fender Gear Chainguard

SLIDE 3

Challenges

  • Highly fine-grained
  • The size of components varies significantly
  • Highly inconsistent across different shapes
SLIDE 7

Contributions

  • A new problem of segmentation of stock 3D models with pre-existing, highly fine-grained components
  • A novel solution of part hypothesis generation and characterization
  • A benchmark for multi-component labeling with component-wise ground-truth labels

SLIDE 8

Related Work

SLIDE 9

Mesh segmentation

Learning 3D Mesh Segmentation and Labeling. Kalogerakis et al. SIGGRAPH 2010.
Co-Segmentation of 3D Shapes via Subspace Clustering. Hu et al. CGF 2012.

Limited by hand-designed features!

SLIDE 10

Point cloud segmentation

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Qi et al. CVPR 2017.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Qi et al. NIPS 2017.

Cannot handle fine-grained parts!

SLIDE 11

Multi-view projective segmentation

Projective Analysis for 3D Shape Segmentation. Wang et al. SIGGRAPH 2013.
3D Shape Segmentation with Projective Convolutional Networks. Kalogerakis et al. CVPR 2017.

Suffers from self-occlusion!

SLIDE 12

Segmentation of multi-component models

Learning Hierarchical Shape Segmentation and Labeling from Online Repositories. Yi et al. SIGGRAPH 2017.

Requires a scene graph!

SLIDE 13

Method

SLIDE 14

Pipeline

SLIDE 15

Grouping Strategy

  • Center Distance
  • Group Size
  • Geometric Contact
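The three cues above can be folded into a single sampling score that steers the bottom-up grouping toward nearby, compact, touching component groups. A minimal sketch, assuming Gaussian falloffs and a multiplicative combination (the function name, default weights, and exact form are illustrative, not the paper's formulation):

```python
import math

def grouping_score(center_dist, group_size, contact_area,
                   sigma_d=0.5, sigma_s=0.3, w_contact=1.0):
    """Score a candidate merge of two component groups.

    Combines the three grouping cues from the slide. The Gaussian
    falloffs, the multiplicative combination, and all parameter
    names here are illustrative assumptions, not the paper's formula.
    """
    # Cue 1: prefer groups whose centers are close together.
    d_term = math.exp(-center_dist ** 2 / (2 * sigma_d ** 2))
    # Cue 2: prefer compact groups (size relative to the whole shape).
    s_term = math.exp(-group_size ** 2 / (2 * sigma_s ** 2))
    # Cue 3: reward actual geometric contact between the groups.
    c_term = w_contact * contact_area
    return d_term * s_term * (1.0 + c_term)
```

Merges would then be sampled with probability proportional to this score, so implausible hypotheses are rarely generated.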
SLIDE 19

Sampling Results

SLIDE 20

Sampling Results

Part hypothesis quality vs. hypothesis count.

SLIDE 21

Sampling Results

Comparison to baselines (GMM-based and CNN-based).

SLIDE 22

Pipeline

SLIDE 23

Classifying and Ranking
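In this stage each part hypothesis receives a label and a confidence score from the network, and the hypotheses are ranked by that score. As a rough sketch, high-confidence, non-conflicting hypotheses can be kept greedily; the greedy covering rule and data layout below are illustrative assumptions, not the paper's exact procedure (which resolves conflicts jointly in the CRF stage):

```python
def rank_hypotheses(hypotheses):
    """Greedily keep high-confidence, non-conflicting part hypotheses.

    Each hypothesis is a (component_ids, label, score) triple, where
    `score` is the classifier's confidence. This greedy covering rule
    is an illustrative assumption; the paper instead resolves
    conflicts jointly via a higher-order CRF.
    """
    selected, covered = [], set()
    # Visit hypotheses from most to least confident.
    for comps, label, score in sorted(hypotheses, key=lambda h: -h[2]):
        # Keep a hypothesis only if none of its components are taken.
        if covered.isdisjoint(comps):
            selected.append((comps, label, score))
            covered.update(comps)
    return selected
```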


Classifying and Ranking

| Method              | Vehicle | Bicycle | Chair | Cabinet | Plane | Lamp | Motor | Helicopter | Living room | Office |
|---------------------|---------|---------|-------|---------|-------|------|-------|------------|-------------|--------|
| Ours (local only)   | 50.4    | 52.4    | 60.4  | 68.6    | 61.3  | 73.5 | 60.4  | 78.5       | 62.7        | 54.8   |
| Ours (local+global) | 69.2    | 67.3    | 68.6  | 75.4    | 69.1  | 79.2 | 67.2  | 82.6       | 68.3        | 76.4   |
| Ours (all)          | 73.7    | 68.1    | 74.3  | 78.7    | 76.5  | 88.3 | 71.7  | 83.3       | 66.1        | 65.4   |


SLIDE 29

Pipeline

SLIDE 30

Labeling via Higher-order CRF

[Factor graph over components 1–4: unary potentials φ(x_i) on each component; higher-order potentials ψ(h) over the part hypotheses h₁ = {1, 2, 3} and h₂ = {2, 3, 4}.]
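A higher-order CRF of this kind can be written in a generic form using the slide's symbols (a sketch; the exact potential definitions are in the paper):

```latex
E(\mathbf{x}) \;=\; \sum_{i} \phi(x_i) \;+\; \sum_{h \in \mathcal{H}} \psi(\mathbf{x}_h)
```

where \(x_i\) is the label of component \(i\), \(\phi(x_i)\) is a unary potential derived from the hypothesis classifier, and \(\psi(\mathbf{x}_h)\) is a higher-order potential over all components in a hypothesis \(h\) (e.g., \(h_1 = \{1,2,3\}\)) that softly penalizes label disagreement within the hypothesis. The final labeling is the assignment minimizing \(E(\mathbf{x})\).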


SLIDE 32

Experiments

SLIDE 33
Experiments

  • Benchmark dataset
  • Labeling results
  • Labeling performance
  • Parameter analysis

SLIDE 34

Benchmark Dataset

  • 1019 models
  • 8 object categories
  • 2 scene categories

SLIDE 35

Animated playback of result images (3× speed)

SLIDE 36
Experiments

  • Benchmark dataset
  • Labeling results
  • Labeling performance
  • Parameter analysis

SLIDE 37

Labeling results: Input / Ground truth (GT) / Ours

SLIDE 38
SLIDE 39
Experiments

  • Benchmark dataset
  • Labeling results
  • Labeling performance
  • Parameter analysis

SLIDE 40

Experiment Results

Comparison with three baseline methods:
  • Random forest
  • CNN-based component classification
  • CNN-based hypothesis generation

SLIDE 41

Experiment Results

Comparison with three baseline methods (Random forest, CNN-based component classification, CNN-based hypothesis generation):

| Method                    | Vehicle | Bicycle | Chair | Cabinet | Plane | Lamp | Motor | Helicopter | Living room | Office |
|---------------------------|---------|---------|-------|---------|-------|------|-------|------------|-------------|--------|
| Baseline (Random Forest)  | 54.7    | 58.9    | 62.4  | 65.9    | 53.5  | 63.3 | 65.9  | 52.8       | 47.7        | 63.5   |
| Baseline (CNN Classifier) | 48.9    | 63.8    | 70.75 | 63.3    | 68.9  | 81.2 | 67.4  | 78.5       | 51.2        | 63.9   |
| Baseline (CNN Hypo. Gen.) | 56.3    | 51.9    | 68.5  | 45.7    | 58.5  | 71.1 | 53.1  | 72.2       | 58.6        | 65.1   |
| Ours (all)                | 73.7    | 68.1    | 74.3  | 78.7    | 76.5  | 88.3 | 71.7  | 83.3       | 66.1        | 65.4   |

SLIDE 42

Experiment Results

Comparison with four state-of-the-art methods:

| Method                      | Vehicle | Bicycle | Chair | Cabinet | Plane | Lamp | Motor | Helicopter | Living room | Office |
|-----------------------------|---------|---------|-------|---------|-------|------|-------|------------|-------------|--------|
| PointNet [Qi et al. 2017]   | 24.3    | 30.6    | 68.6  | 21.0    | 47.2  | 46.3 | 35.8  | 32.6       | –           | –      |
| PointNet++ [Qi et al. 2017] | 51.7    | 53.8    | 69.3  | 62.0    | 53.9  | 79.8 | 62.2  | 79.3       | –           | –      |
| Guo et al. [2015]           | 27.1    | 25.2    | 34.2  | 68.8    | 38.6  | 79.1 | 41.6  | 80.1       | 33.7        | 28.5   |
| Yi et al. [2017a]           | 65.2    | 63.0    | 61.9  | 70.6    | 59.3  | 82.2 | 67.5  | 78.9       | 56.6        | 68.6   |
| Ours (all)                  | 73.7    | 68.1    | 74.3  | 78.7    | 76.5  | 88.3 | 71.7  | 83.3       | 66.1        | 65.4   |

SLIDE 43
Experiments

  • Benchmark dataset
  • Labeling results
  • Labeling performance
  • Parameter analysis

SLIDE 44

Labeling performance without confidence score


| Method           | Vehicle | Bicycle | Chair | Cabinet | Plane | Lamp | Motor | Helicopter | Living room | Office |
|------------------|---------|---------|-------|---------|-------|------|-------|------------|-------------|--------|
| Ours (w/o score) | 71.5    | 66.8    | 72.5  | 76.5    | 71.4  | 87.6 | 70.7  | 81.2       | 63.3        | 60.1   |
| Ours (all)       | 73.7    | 68.1    | 74.3  | 78.7    | 76.5  | 88.3 | 71.7  | 83.3       | 66.1        | 65.4   |

SLIDE 45

Labeling performance vs. part hypothesis count

SLIDE 46

Conclusion

  • A new problem of segmentation of off-the-shelf 3D models

with highly fine-grained components. And a benchmark with component-wise ground-truth labels

  • A novel solution of part hypothesis generation based on a

bottom-up hierarchical grouping process

  • A deep neural network is trained to encode part hypothesis,

rather than components

  • A higher order potential adopts a soft constraint, providing

more degree of freedom in optimal labeling search.

SLIDE 47

Limitations and Future Work

  • Only groups the components but does NOT segment them
  • Part hypotheses overlap significantly for shapes with concavities
  • Future work: extend hypotheses to hierarchical segmentation, and integrate the CRF into the deep neural network

SLIDE 48

E-mail:

wangxiaogang@buaa.com.cn

Code&Dataset:

https://github.com/wangxiaogang866/fglabel

SLIDE 49

Parameter Kc

| Method         | Vehicle | Bicycle | Chair | Cabinet | Plane | Lamp | Motor | Helicopter | Living room | Office |
|----------------|---------|---------|-------|---------|-------|------|-------|------------|-------------|--------|
| Ours (Kc = 1)  | 52.0    | 43.2    | 63.5  | 62.0    | 47.6  | 76.5 | 41.7  | 42.4       | 54.6        | 70.7   |
| Ours (Kc = 3)  | 56.5    | 49.9    | 67.0  | 66.6    | 55.4  | 84.0 | 51.7  | 43.4       | 63.1        | 70.1   |
| Ours (Kc = 5)  | 59.3    | 54.9    | 70.5  | 69.6    | 59.8  | 86.3 | 55.3  | 50.7       | 64.7        | 68.9   |
| Ours (Kc = 10) | 62.0    | 61.9    | 72.6  | 74.1    | 68.6  | 86.9 | 62.4  | 75.6       | 66.6        | 66.1   |
| Ours (all)     | 73.7    | 68.1    | 74.3  | 78.7    | 76.5  | 88.3 | 71.7  | 83.3       | 66.1        | 65.4   |
