SLIDE 1

Cutting-Plane Training of Non-associative Markov Network for 3D Point Cloud Segmentation

Roman Shapovalov, Alexander Velizhev Lomonosov Moscow State University

Hangzhou, May 18, 2011

SLIDE 2

Semantic segmentation of point clouds

  • LIDAR point cloud without color information
  • Class label for each point
SLIDE 3

System workflow

graph construction → feature computation → CRF inference → segmentation
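The graph-construction step above can be sketched as a k-nearest-neighbor graph over the 3D points. This is a minimal brute-force illustration (the function name `knn_graph` and the choice of k-NN connectivity are my assumptions, not taken from the slides):

```python
import math

def knn_graph(points, k=2):
    """Connect each 3D point to its k nearest neighbors (brute force).

    Returns an undirected edge set usable as the CRF graph.
    O(n^2) per point -- for illustration only, not for real LIDAR clouds.
    """
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # deduplicate (i,j)/(j,i)
    return edges

points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
print(sorted(knn_graph(points, k=1)))  # → [(0, 1), (0, 2), (1, 3)]
```

A real system would use a spatial index (k-d tree or octree) instead of the quadratic scan.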

SLIDE 4

Non-associative CRF

node features \( \mathbf{x}_i \), edge features \( \mathbf{x}_{ij} \); point labels \( y_i \)

\[
\mathbf{y}^{*} = \arg\max_{\mathbf{y}} \sum_{i \in N} \psi_i(\mathbf{x}_i, y_i) + \sum_{(i,j) \in E} \psi_{ij}(\mathbf{x}_{ij}, y_i, y_j)
\]

  • Associative CRF: \( \psi_{ij}(\mathbf{x}_{ij}, k, k) \ge \psi_{ij}(\mathbf{x}_{ij}, y_i, y_j) \)
  • Our model: no such constraints!

[Shapovalov et al., 2010]
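To make the "no such constraints" point concrete, here is a toy scoring function over arbitrary potential tables (the names `score`, `node_pot`, `edge_pot` are illustrative, not from the paper). The edge table below rewards *different* labels at its endpoints, which an associative model would forbid:

```python
def score(labels, node_pot, edge_pot, edges):
    """Negative energy of a labeling: sum of node and edge potentials.

    Potentials are arbitrary lookup tables, so an edge may reward a pair
    of *different* labels -- no associativity constraint is imposed.
    """
    s = sum(node_pot[i][y] for i, y in enumerate(labels))
    s += sum(edge_pot[(i, j)][(labels[i], labels[j])] for (i, j) in edges)
    return s

# two nodes, one edge whose potential favors the label pair (0, 1)
node_pot = [[0.5, 0.2], [0.1, 0.6]]
edge_pot = {(0, 1): {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 0.2}}
edges = [(0, 1)]
best = max(((a, b) for a in (0, 1) for b in (0, 1)),
           key=lambda y: score(y, node_pot, edge_pot, edges))
print(best)  # → (0, 1): the edge pulls the endpoints toward different labels
```

This matters for point clouds: boundaries such as "building next to ground" are frequent, and an associative model cannot reward them.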

SLIDE 5

CRF training

  • parametric model
  • parameters need to be learned!

CRF inference

SLIDE 6

Structured learning

  • Linear model: node and edge potentials are linear in the parameters,
    \( \psi_i(\mathbf{x}_i, y_i) = \mathbf{w}_{n, y_i}^{T} \mathbf{x}_i \), \quad \( \psi_{ij}(\mathbf{x}_{ij}, y_i, y_j) = \mathbf{w}_{e, y_i, y_j}^{T} \mathbf{x}_{ij} \)
  • CRF negative energy: \( -E(\mathbf{x}, \mathbf{y}) = \mathbf{w}^{T} \Psi(\mathbf{x}, \mathbf{y}) \), where \( \Psi \) stacks the node and edge features by class
  • Find \( \mathbf{w} \) such that the ground-truth labeling scores highest:
    \( \mathbf{y} = \arg\max_{\bar{\mathbf{y}}} \mathbf{w}^{T} \Psi(\mathbf{x}, \bar{\mathbf{y}}) \), i.e. \( \mathbf{w}^{T} \Psi(\mathbf{x}, \mathbf{y}) \ge \mathbf{w}^{T} \Psi(\mathbf{x}, \bar{\mathbf{y}}) \) for all \( \bar{\mathbf{y}} \ne \mathbf{y} \)

[Anguelov et al., 2005; and a lot more]
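The "linear in the parameters" claim can be sketched by stacking each node's features into the block of its assigned class, so the whole score is a single dot product \( \mathbf{w}^{T} \Psi(\mathbf{x}, \mathbf{y}) \). Edge features are omitted for brevity, and all names below are illustrative:

```python
def joint_features(x_nodes, labels, n_classes):
    """Joint feature map Psi(x, y): each node's feature vector is added
    into the block of its assigned class, making the total score linear
    in one weight vector w (sketch; edge features omitted)."""
    d = len(x_nodes[0])
    psi = [0.0] * (n_classes * d)
    for x, y in zip(x_nodes, labels):
        for f, v in enumerate(x):
            psi[y * d + f] += v
    return psi

def linear_score(w, x_nodes, labels, n_classes):
    """w^T Psi(x, y)."""
    return sum(wi * pi for wi, pi in
               zip(w, joint_features(x_nodes, labels, n_classes)))

x_nodes = [[1.0, 0.0], [0.0, 1.0]]          # two points, two features each
w = [1.0, 0.0, 0.0, 1.0]                    # class-0 block | class-1 block
print(linear_score(w, x_nodes, (0, 1), 2))  # → 2.0
```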

SLIDE 7

Structured loss

  • Define the joint feature map \( \Psi(\mathbf{x}, \mathbf{y}) \)
  • Define a structured loss, for example the Hamming loss:
    \( \Delta(\mathbf{y}, \bar{\mathbf{y}}) = \sum_{i \in N} [y_i \ne \bar{y}_i] \)
  • Find \( \mathbf{w} \) such that the margin scales with the loss:
    \( \mathbf{w}^{T} \Psi(\mathbf{x}, \mathbf{y}) \ge \mathbf{w}^{T} \Psi(\mathbf{x}, \bar{\mathbf{y}}) + \Delta(\mathbf{y}, \bar{\mathbf{y}}) \) for all \( \bar{\mathbf{y}} \ne \mathbf{y} \)
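The two losses compared in the results can be sketched directly. The slides do not specify the exact balancing scheme, so `balanced_loss` below uses inverse class frequency as one plausible choice (an assumption, not the authors' formula):

```python
from collections import Counter

def hamming_loss(y_true, y_pred):
    """Delta(y, y'): number of mislabeled points."""
    return sum(a != b for a, b in zip(y_true, y_pred))

def balanced_loss(y_true, y_pred):
    """Weight each error by the inverse frequency of its true class, so
    mislabeling a rare class costs as much as mislabeling a common one.
    (Inverse-frequency weighting is assumed, not taken from the slides.)"""
    freq = Counter(y_true)
    return sum(1.0 / freq[a] for a, b in zip(y_true, y_pred) if a != b)

y_true = [0, 0, 0, 1]                       # class 1 is rare
print(hamming_loss(y_true, [0, 0, 0, 0]))   # → 1, same as one class-0 error
print(balanced_loss(y_true, [0, 0, 0, 0]))  # → 1.0, vs. 1/3 for a class-0 error
```

Under the Hamming loss, ignoring the rare class entirely is cheap; a balanced loss makes that mistake expensive, which is exactly the effect shown in the recall charts below.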

SLIDE 8

Cutting-plane training

  • A lot of constraints (\( K^n \), one per labeling):
    \( \mathbf{w}^{T} \Psi(\mathbf{x}, \mathbf{y}) \ge \mathbf{w}^{T} \Psi(\mathbf{x}, \bar{\mathbf{y}}) + \Delta(\mathbf{y}, \bar{\mathbf{y}}), \;\; \forall \bar{\mathbf{y}} \)
  • Maintain a working set
  • Add iteratively the most violated one:
    \( \bar{\mathbf{y}}^{*} = \arg\max_{\bar{\mathbf{y}}} \, \mathbf{w}^{T} \Psi(\mathbf{x}, \bar{\mathbf{y}}) + \Delta(\mathbf{y}, \bar{\mathbf{y}}) \)
  • Polynomial complexity
  • SVMstruct implementation [Joachims, 2009]
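The separation oracle at the heart of the loop can be sketched as loss-augmented inference. The exhaustive argmax below only works at toy sizes; real systems, including SVMstruct, plug in efficient inference instead. All names are illustrative:

```python
import itertools

def most_violated(w, y_true, n_classes, score, loss):
    """Separation oracle: the labeling maximizing score + loss is the
    most violated constraint (exhaustive search -- toy sizes only)."""
    n = len(y_true)
    return max(itertools.product(range(n_classes), repeat=n),
               key=lambda y: score(w, y) + loss(y_true, y))

# toy instance: with zero weights the loss term dominates, so the oracle
# returns the labeling that disagrees with y_true everywhere
score = lambda w, y: sum(w[c] for c in y)
loss = lambda yt, yp: sum(a != b for a, b in zip(yt, yp))
print(most_violated([0.0, 0.0], (0, 0), 2, score, loss))  # → (1, 1)
```

Cutting-plane training repeatedly calls this oracle, adds the returned labeling to the working set, and re-solves the QP over that small set, which is why only polynomially many constraints are ever touched.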

SLIDE 9

Results

(figure: segmentation by [Munoz et al., 2009] vs. our method)

SLIDE 10

Results: balanced loss better than the Hamming one

(chart: Ground, Building, Tree, and G-mean recall for SVM-HAM vs. SVM-RBF; y-axis 0.1–1.0)
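The G-mean column in these charts is the geometric mean of the per-class recalls; a quick sketch shows why a single weak class drags it down sharply:

```python
def g_mean(recalls):
    """Geometric mean of per-class recalls: sensitive to the worst class."""
    p = 1.0
    for r in recalls:
        p *= r
    return p ** (1.0 / len(recalls))

print(round(g_mean([0.9, 0.9, 0.9]), 3))  # → 0.9
print(round(g_mean([0.9, 0.9, 0.1]), 3))  # ≈ 0.433 -- one weak class dominates
```

This is why G-mean, unlike plain average recall, rewards the balanced loss in the comparison above.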

SLIDE 11

Results: RBF better than linear

(chart: Ground, Building, Tree, and G-mean recall for SVM-LIN vs. SVM-RBF; y-axis 0.5–1.0)

SLIDE 12

Results: fails at very small classes

(chart: Ground, Vehicle, Tree, and Pole f-scores for [Munoz, 2009], SVM-LIN, and SVM-RBF; annotation: 0.2% of the training set)

SLIDE 13

Analysis

  • Advantages:

    – more flexible model
    – accounts for class imbalance
    – allows kernelization

  • Disadvantages:

    – really slow (esp. with kernels)
    – learns small/underrepresented classes badly