

SLIDE 1

Introduction Semantic segmentation Accuracy evaluation Conclusions

A new metric for evaluating semantic segmentation: leveraging global and contour accuracy

Eduardo Fernandez-Moral1, Renato Martins1, Denis Wolf2, and Patrick Rives1

1 Lagadic team, Inria Sophia Antipolis - Méditerranée, France.

2 University of São Paulo - ICMC/USP, Brazil.

24/09/2017

E. Fernandez-Moral et al.

A new metric for evaluating semantic segmentation

SLIDE 2

Table of contents

1. Introduction

2. Semantic segmentation (CNN models; Training data)

3. Accuracy evaluation (Comparison of metrics; New BJ metric)

4. Conclusions

SLIDE 3

Introduction

SLIDE 4

Context: semantic-based urban navigation

Create a semantic, textured 3D mesh of the environment to support guidance and automatic navigation for different types of agents (Stereopolis-II)

SLIDE 5

Context: semantic-based urban navigation

Create semantic, textured 3D meshes of the environment to support guidance and automatic navigation for different types of agents

SLIDE 6

Context: semantic-based urban navigation

Create semantic, textured 3D meshes of the environment to support guidance and automatic navigation for different types of agents

SLIDE 7

Context: semantic-based urban navigation

The problem of semantic segmentation consists of assigning a class label to each pixel of a given image:

Source: A. Geiger et al., Vision meets Robotics: The KITTI Dataset. IJRR 2013

  • G. Ros et al., Vision-based Offline-Online Perception Paradigm for Autonomous Driving. WACV 2015
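The per-pixel labeling described above can be sketched in a few lines of NumPy; the score tensor below is a made-up stand-in for the class-probability map a segmentation network would output:

```python
import numpy as np

# Hypothetical class scores for a tiny 2x3 image and three classes
# (0 = road, 1 = car, 2 = sky); shape (H, W, C). Made-up numbers.
scores = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]],
    [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.1, 0.2, 0.7]],
])

# Semantic segmentation associates a class label with each pixel:
labels = scores.argmax(axis=-1)
print(labels)
# [[0 1 2]
#  [0 0 2]]
```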
SLIDE 8

Context: semantic-based urban navigation

The problem of semantic segmentation consists of assigning a class label to each pixel of a given image:

Source: M. Cordts et al., The Cityscapes Dataset for Semantic Urban Scene Understanding. CVPR 2016

SLIDE 9

Semantic segmentation approaches

Traditional approaches: classification of hand-crafted visual features (e.g. SIFT), taking into account their spatial distribution and local neighborhood:

Support Vector Machines (SVM), Random Forests (RF), Conditional Random Fields (CRF)

SLIDE 10

Semantic segmentation approaches

Traditional approaches: classification of hand-crafted visual features (e.g. SIFT), taking into account their spatial distribution and local neighborhood:

Support Vector Machines (SVM), Random Forests (RF), Conditional Random Fields (CRF)

Convolutional Neural Networks (CNN): used for feature extraction and classification; faster and more accurate than traditional methods:

Encoder-decoder CNNs, CNN + CRF, CNN cascades

SLIDE 11

Our work

Our work explores the problem of semantic segmentation from accurate RGB-D images. We evaluate different network models and input-data combinations, and we analyze different semantic segmentation metrics, with particular interest in object boundary segmentation.

SLIDE 12

Semantic segmentation

SLIDE 13

Encoder-Decoder CNN

SegNet1: Encoder-decoder architecture

1 Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. “SegNet: A deep convolutional encoder-decoder architecture for image segmentation.” In: IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).

SLIDE 14

Encoder-Decoder CNN

SegNet1: Encoder-decoder architecture SegNet2: double pipeline SegNet (for color and geometry)

1 Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. “SegNet: A deep convolutional encoder-decoder architecture for image segmentation.” In: IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).

SLIDE 15

Encoder-Decoder CNN

CEDCNN: compact model focused on real-time

SLIDE 16

Encoder-Decoder CNN

CEDCNN: compact model focused on real-time CEDCNN2: double pipeline CEDCNN (for color and geometry)

SLIDE 17

Training data

Trained and tested on public urban datasets: Virtual KITTI2 and KITTI3. Results verified on: Cityscapes4 and our own data.

2 Adrien Gaidon et al. “Virtual worlds as proxy for multi-object tracking analysis.” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4340–4349.

3 Andreas Geiger et al. “Vision meets robotics: The KITTI dataset.” In: The International Journal of Robotics Research 32.11 (2013), pp. 1231–1237.

4 Marius Cordts et al. “The Cityscapes dataset for semantic urban scene understanding.” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.

SLIDE 18

Data preprocessing

RGB, raw depth, elevation map, surface normals
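A rough sketch of the surface-normal preprocessing (our simplification: the depth map is treated as a height field and differentiated directly; a complete pipeline would first back-project the pixels using the camera intrinsics):

```python
import numpy as np

def normals_from_depth(depth):
    """Approximate per-pixel surface normals from a depth map
    (illustrative height-field approximation)."""
    dz_dv, dz_du = np.gradient(depth)  # derivatives along rows, columns
    n = np.dstack([-dz_du, -dz_dv, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)  # unit length

# Depth increasing left to right: a plane tilted along the u axis.
depth = np.tile(np.arange(5, dtype=float), (5, 1))
normals = normals_from_depth(depth)  # interior normals ~ (-1, 0, 1)/sqrt(2)
```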

SLIDE 19

Accuracy evaluation

SLIDE 20

Metrics

Traditional metrics: Global accuracy (GA), F1-measure, Jaccard index (intersection over union, IoU)

JI = TP / (TP + FN + FP)

TP = true positives, FP = false positives, TN = true negatives, FN = false negatives

5 Gabriela Csurka et al. “What is a good evaluation measure for semantic segmentation?” In: BMVC, 2013.

SLIDE 21

Metrics

Traditional metrics: Global accuracy (GA), F1-measure, Jaccard index (intersection over union, IoU)

JI = TP / (TP + FN + FP)

TP = true positives, FP = false positives, TN = true negatives, FN = false negatives

Boundary metrics: Total boundary accuracy (TO), Jaccard index boundary (TJ), Boundary F-measure (BF)5

BF_c = 2 · P_c · R_c / (P_c + R_c)

P_c = class precision, R_c = class recall

5 Gabriela Csurka et al. “What is a good evaluation measure for semantic segmentation?” In: BMVC, 2013.
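As a concrete illustration of the Jaccard index defined above, a minimal per-class computation on toy label maps (the helper and the arrays are ours, for illustration only):

```python
import numpy as np

def jaccard_index(pred, gt, cls):
    """Per-class Jaccard index (IoU): JI = TP / (TP + FN + FP)."""
    p, g = (pred == cls), (gt == cls)
    tp = np.logical_and(p, g).sum()   # pixels labeled cls in both maps
    fp = np.logical_and(p, ~g).sum()  # predicted cls, but not in GT
    fn = np.logical_and(~p, g).sum()  # cls in GT, but missed
    denom = tp + fn + fp
    return tp / denom if denom else float("nan")

gt   = np.array([[0, 0, 1], [0, 1, 1]])
pred = np.array([[0, 1, 1], [0, 1, 0]])
print(jaccard_index(pred, gt, 1))  # TP=2, FP=1, FN=1 -> 0.5
```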

SLIDE 22

Boundary metrics

Simple contour score (TO, TJ)

SLIDE 23

Boundary metrics

Simple contour score (TO, TJ)
Distance between contours (BF)

(Figure: ground-truth vs. prediction contours)

SLIDE 24

A new metric for both global and contour accuracy

Boundary Jaccard index (BJ):

BJ_c = (TP_c^{Bgt} + TP_c^{Bps}) / (TP_c^{Bgt} + TP_c^{Bps} + FP_c + FN_c)   (1)

where

TP_c^{Bgt} = Σ_{x ∈ B_c^{gt}} z, with z = 1 − (d(x, S_c^{ps}) / θ)² if d(x, S_c^{ps}) < θ, and z = 0 otherwise   (2)

FN_c = |B_c^{gt}| − TP_c^{Bgt}   (3)

TP_c^{Bps} = Σ_{x ∈ B_c^{ps}} z, with z = 1 − (d(x, S_c^{gt}) / θ)² if d(x, S_c^{gt}) < θ, and z = 0 otherwise   (4)

FP_c = |B_c^{ps}| − TP_c^{Bps}   (5)
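Equations (1)-(5) can be implemented directly with a Euclidean distance transform; the sketch below is our illustrative version, assuming the binary boundary maps B_c^{gt} and B_c^{ps} have already been extracted from the label images:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_jaccard(b_gt, b_ps, theta=2.0):
    """Boundary Jaccard (BJ) for one class, following Eqs. (1)-(5).

    b_gt, b_ps: boolean boundary maps of the ground truth and the
    prediction; theta: the distance threshold of the metric.
    """
    # d(x, S): distance from every pixel to the nearest boundary
    # pixel of the other map.
    d_to_ps = distance_transform_edt(~b_ps)
    d_to_gt = distance_transform_edt(~b_gt)

    def soft_tp(boundary, dist):
        d = dist[boundary]
        # z = 1 - (d/theta)^2 if d < theta, else 0
        return np.where(d < theta, 1.0 - (d / theta) ** 2, 0.0).sum()

    tp_bgt = soft_tp(b_gt, d_to_ps)       # Eq. (2)
    fn = b_gt.sum() - tp_bgt              # Eq. (3)
    tp_bps = soft_tp(b_ps, d_to_gt)       # Eq. (4)
    fp = b_ps.sum() - tp_bps              # Eq. (5)
    num = tp_bgt + tp_bps
    return num / (num + fp + fn)          # Eq. (1)
```

With identical boundary maps every distance is zero, so BJ = 1; a contour shifted by one pixel with theta = 2 still earns partial credit (z = 0.75 per pixel) rather than counting as a plain error.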

SLIDE 25

A new metric for both global and contour accuracy

Comparison of class-wise metrics: JI, BF and the proposed BJ

SLIDE 26

Inference

Inference results

SLIDE 27

Class-wise accuracy for different models and input data

Network architectures: SegNet, basic-SegNet and CEDCNN; SegNet2, FuseNet and CEDCNN2.

Input data: color (RGB), depth (D), normals (N), elevation (E), and combinations: RGBD, ND, NE, RGB-D, RGB-NE, RGB-ND.

SLIDE 28

Average class-wise metrics for different network architectures and input data

Average class-wise recall

SLIDE 29

Average class-wise metrics for different network architectures and input data

Average class-wise recall

SLIDE 30

Average class-wise metrics for different network architectures and input data

Average class-wise recall Average class-wise JI

SLIDE 31

Average class-wise metrics for different network architectures and input data

Average class-wise recall Average class-wise JI

SLIDE 32

Average class-wise metrics for different network architectures and input data

Average class-wise recall Average class-wise JI

Remark: the different approaches cannot be compared without quantifying the relevance of each class

SLIDE 33

Average per-image, class-wise metrics for different network architectures and input data

Average global and boundary metrics

SLIDE 34

Average per-image, class-wise metrics for different network architectures and input data

Average global and boundary metrics

Remark: different metrics do not agree when ranking different solutions

SLIDE 35

Spearman’s rank correlation of different metrics

It measures the similarity between pairs of metrics when ranking different solutions: ρ ∈ [−1, 1]
SLIDE 36

Spearman’s rank correlation of different metrics

It measures the similarity between pairs of metrics when ranking different solutions: ρ ∈ [−1, 1]
Table: Spearman rank correlation among JI (IoU), BF3 and BJ3

      BF3    BJ3
JI    0.90   0.99
BF3          0.90
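Spearman's ρ between two metrics can be computed with `scipy.stats.spearmanr` over the scores each metric assigns to a set of candidate solutions; the numbers below are made up for illustration, not the paper's results:

```python
from scipy.stats import spearmanr

# Hypothetical per-model scores under two metrics (made-up numbers).
ji_scores = [0.62, 0.55, 0.71, 0.48, 0.66]
bj_scores = [0.58, 0.50, 0.69, 0.45, 0.60]

# rho = 1 when both metrics induce the same ranking of the models.
rho, _ = spearmanr(ji_scores, bj_scores)
print(rho)  # 1.0: the two metrics rank all five models identically
```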
SLIDE 37

Conclusions

SLIDE 38

Conclusions

CNN models for semantic segmentation:
• The accuracy improves by splitting different types of data (e.g. color and depth) into different encoder-decoder sub-networks.
• More compact CNN architectures may achieve almost the same accuracy at a significantly reduced cost.

Preprocessing depth information:
• The combination of raw depth plus surface normals improves the accuracy of semantic segmentation.

SLIDE 39

Conclusions

New metric BJ:
• It considers both global and boundary classification.
• It is robust to unbalanced class frequencies.

Limitations:
• BJ has a high computational cost, which makes it unsuitable for training.
• Recent approaches based on CNN cascades already boost the quality of contour segmentation.

SLIDE 40

Questions?