

SLIDE 1

Deep Learning Retinal Vessel Segmentation From a Single Annotated Example

Praneeth Sadda1, John A. Onofrey1, Xenophon Papademetris1,2

1 Department of Radiology and Biomedical Imaging, Yale University 2 Department of Biomedical Engineering, Yale University

SLIDE 2

Semantic Segmentation

Bagci et al. 2014
Garcia-Peraza-Herrera et al. 2014
Stahl et al. 2004

SLIDE 3

FCNN

SLIDE 4

Fundamental Issue of Supervised Learning

  • Data is easy to acquire
  • Data is difficult to label
SLIDE 5

Many Datasets are Partially Labeled

(Figure: a fully labeled dataset vs. a partially labeled dataset)

SLIDE 6

FCNN

SLIDE 7

Style Transfer

Zhu et al. 2017

SLIDE 8

Synthetic Image Generation

Unlabeled Image + Labeled Image → Labeled Synthetic Image

SLIDE 9

Proposed Workflow

Partially Labeled Dataset → Generation of Synthetic Examples → Training with Fully-Labeled Data
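The workflow above can be sketched at a high level as follows. This is a minimal illustration, not the authors' code: `style_transfer` and `train_fcnn` are hypothetical callables standing in for the CycleGAN style-transfer stage and the FCNN training stage. The idea is that restyling the one labeled "template" image to match the appearance of each unlabeled image yields synthetic images that can reuse the template's label map, producing a fully labeled training set.

```python
def train_from_partial_labels(labeled, unlabeled, style_transfer, train_fcnn):
    """Sketch of the proposed workflow (assumed interpretation of this slide).

    labeled:        a (template_image, template_label) pair
    unlabeled:      a list of unlabeled images
    style_transfer: hypothetical callable restyling the template toward an
                    unlabeled image's appearance
    train_fcnn:     hypothetical callable training an FCNN on (image, label) pairs
    """
    template_image, template_label = labeled
    # Each synthetic image inherits the template's label map.
    synthetic = [(style_transfer(template_image, u), template_label)
                 for u in unlabeled]
    dataset = [labeled] + synthetic
    return train_fcnn(dataset)
```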

FCNN

SLIDE 10

CycleGAN

SLIDE 11

CycleGANs

$\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$

$F(G(x)) \approx x, \qquad G(F(y)) \approx y$
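The cycle-consistency term can be written as a short function. This is a minimal numeric sketch, not the authors' implementation: `G` and `F` are stand-ins for the two CycleGAN generators, and the loss simply measures how far a round trip through both mappings lands from the starting image.

```python
import numpy as np

def cycle_consistency_loss(G, F, xs, ys):
    """L1 cycle-consistency loss for mappings G: X -> Y and F: Y -> X.

    Penalizes F(G(x)) differing from x and G(F(y)) differing from y,
    matching the loss on this slide. G and F are hypothetical callables
    (e.g. the two CycleGAN generators).
    """
    forward = np.mean([np.abs(F(G(x)) - x).sum() for x in xs])
    backward = np.mean([np.abs(G(F(y)) - y).sum() for y in ys])
    return forward + backward
```

With perfect generators the round trip is the identity and the loss is zero; any deviation contributes its L1 distance.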

SLIDE 12

Methods

  • Provide a manual ground truth segmentation for a single “template” image.
  • Divide images into patches.
  • Train one CycleGAN for each unlabeled image (~10 hours per image) using a patchwise approach.
  • Transfer the style of the template image using a patchwise approach.
  • Train an FCNN on the style-transferred images using a patchwise approach.

SLIDE 13

Synthetic Image Generation

SLIDE 14

Results

(Figure: segmentation comparison — physician rater; FCNN trained with one labeled example; FCNN trained with one labeled and 19 unlabeled examples; FCNN trained with 20 labeled examples)

SLIDE 15

Results

Training Dataset            Sensitivity    Specificity    Accuracy
20 Labeled                  0.60 ± 0.10    0.98 ± 0.01    0.94 ± 0.01
1 Labeled + 19 Unlabeled    0.62 ± 0.10    0.95 ± 0.01    0.93 ± 0.02
1 Labeled                   0.86 ± 0.04    0.53 ± 0.06    0.56 ± 0.05
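The pattern in the table — high sensitivity but low accuracy for the 1-labeled model — follows from pixel accuracy being a prevalence-weighted mix of sensitivity and specificity. Vessel pixels are a small fraction of a fundus image, so specificity dominates. The ~10% vessel prevalence used below is an assumption for illustration, not a number from the slides.

```python
def accuracy_from_rates(sensitivity, specificity, prevalence):
    """Pixel accuracy as a prevalence-weighted combination of sensitivity
    (true-positive rate on vessel pixels) and specificity (true-negative
    rate on background pixels). prevalence is the assumed fraction of
    vessel pixels (~0.10 here is an illustrative guess)."""
    return prevalence * sensitivity + (1 - prevalence) * specificity
```

At 10% prevalence this reproduces the table's trend: the 1-labeled row's 0.53 specificity pulls its accuracy down to roughly 0.56 despite 0.86 sensitivity.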

SLIDE 16

Conclusion

  • Style transfer can be used to exploit the information in unlabeled examples for supervised learning of semantic segmentation.

  • For segmenting