Introducing Space and Time in Local Feature-Based Endomicroscopic Image Retrieval



SLIDE 1

Introducing Space and Time in Local Feature-Based Endomicroscopic Image Retrieval

January 25, 2010

Barbara André
Supervision: Tom Vercauteren, Nicholas Ayache

B. André, MSR-INRIA Workshop 2010

SLIDE 2

Outline

  • 1. Introduction
  • 2. The Bag-of-Visual-Words Method
  • 3. Introducing Spatial Information
  • 4. Introducing Temporal Information
  • 5. Conclusion

SLIDE 3

1. Introduction

Medical Context

  • Colonic polyp
  • pCLE: probe-based Confocal Laser Endomicroscopy

SLIDE 4

pCLE Principle

  • Laser source; frame rate: 12 Hz
  • Oscillating mirror at 4 kHz; galvanometric mirror
  • Avalanche photodiode
  • Fiber bundle scanned by the laser
  • Surface scanned: up to 600 × 600 μm

SLIDE 5

Explore the Entire GI Tract

  • Esophagus
  • Stomach
  • Duodenum and small intestine
  • Bile duct
  • Colon
  • Rectum

SLIDE 6

1. Introduction

Clinical Need

Differentiate benign from neoplastic (pathological) tissue.

  • Nuclei and membranes are not visible, so the nucleo-cytoplasmic ratio cannot be assessed.
  • Instead: a combination of local texture and shape features in pCLE images?

Structures of interest: crypt, goblet cell.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 7

1. Introduction

CBIR Concept

Database query returning: Colon Benign, Colon Benign, Colon Neoplastic.

Courtesy of Dr. Caroline Loeser (Yale University, New Haven, USA), Pr. Charles Lightdale (Columbia-Presbyterian MC, New York, USA), and Pr. Michael Wallace (Mayo Clinic, Jacksonville, USA).

SLIDE 8

1. Introduction

State of the Art in Computer Vision

  • Texture classes of the UIUCTex dataset [1]
  • Database: 25 classes, 500 images
  • CBIR method: Bag-of-Visual-Words
  • Classification accuracy = 98.7 %

[1] Zhang et al., IJCV 2007

SLIDE 9

Methodology

Two classes: Benign and Neoplastic.

SLIDE 10

Outline

  • 1. Introduction
  • 2. The Bag-of-Visual-Words Method
  • 3. Introducing Spatial Information
  • 4. Introducing Temporal Information
  • 5. Conclusion

SLIDE 11

2. The Bag-of-Visual-Words Method

BVW Pipeline, step 1: Salient Region Detector

From an image I, detect salient regions.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 12

2. The Bag-of-Visual-Words Method

BVW Pipeline, step 2: Salient Region Description

Each region is given an invariant description: a 128-dimensional SIFT [1] feature vector (v1, v2, ..., v128). The bag of features of an image is the set of its region descriptions.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 13

2. The Bag-of-Visual-Words Method

BVW Pipeline, step 3: Clustering

"Visual words are clusters": the N bags of features of the N training images are pooled into "one bag for all images", then clustered in the feature space (e.g. SIFT space) into K visual words (W1, ..., WK).
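The clustering step can be sketched with a plain k-means in NumPy. This is an illustrative stand-in, not the talk's implementation: the random descriptors replace real SIFT bags, and the function names are made up for the example.

```python
import numpy as np

def build_vocabulary(bags_of_features, K, n_iter=20, seed=0):
    """Pool the bags of features of all training images and run a
    plain k-means; the K cluster centres are the visual words."""
    rng = np.random.default_rng(seed)
    X = np.vstack(bags_of_features)                  # one bag for all images
    centres = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        # assign each descriptor to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre as the mean of its cluster
        for k in range(K):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    return centres                                   # (W1, ..., WK)

# Stand-in for N = 10 training images, 50 SIFT descriptors (128-D) each.
rng = np.random.default_rng(1)
bags = [rng.random((50, 128)) for _ in range(10)]
visual_words = build_vocabulary(bags, K=8)
```

In practice the vocabulary size K is a tuning parameter: too small and distinct cell patterns collapse into one word, too large and the signatures become sparse and noisy.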

SLIDE 14

2. The Bag-of-Visual-Words Method

BVW Pipeline, step 4: Mapping to Visual Words

Each region of the image is mapped to its nearest visual word among (W1, ..., WK). The image signature is the histogram counting the number of occurrences of each visual word 1, 2, 3, ..., K.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA
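The mapping step amounts to a nearest-centre assignment followed by a histogram. A minimal sketch, assuming `visual_words` is any K × 128 array of cluster centres (the arrays below are illustrative stand-ins):

```python
import numpy as np

def image_signature(descriptors, visual_words):
    """Map each region descriptor to its nearest visual word and count
    occurrences: the K-bin histogram is the image signature."""
    K = len(visual_words)
    d = np.linalg.norm(descriptors[:, None, :] - visual_words[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return np.bincount(labels, minlength=K)

rng = np.random.default_rng(2)
visual_words = rng.random((8, 128))      # stand-in dictionary
descriptors = rng.random((50, 128))      # stand-in bag of features
sig = image_signature(descriptors, visual_words)
```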

SLIDE 15

2. The Bag-of-Visual-Words Method

BVW Pipeline, step 5: Similarity Measure

d(I1, I2) = χ²(Signature(I1), Signature(I2))

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA
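The χ² similarity can be sketched as below. This uses one common form of the χ² distance between normalized histograms; the exact normalization used in the talk is not specified, so treat it as an assumption.

```python
import numpy as np

def chi2_distance(sig1, sig2, eps=1e-10):
    """Chi-square distance between two image signatures (histograms),
    after normalizing each to sum to 1."""
    p = sig1 / (sig1.sum() + eps)
    q = sig2 / (sig2.sum() + eps)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

s1 = np.array([10.0, 5.0, 0.0, 3.0])
s2 = np.array([9.0, 6.0, 1.0, 2.0])
d_same = chi2_distance(s1, s1)   # identical signatures give distance 0
d_diff = chi2_distance(s1, s2)
```

The χ² distance is a standard choice for bag-of-words histograms because it down-weights differences in heavily populated bins relative to rarely populated ones.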

SLIDE 16

2. The Bag-of-Visual-Words Method

BVW Pipeline

Off-line, on the training images: 1. Region Detector; 2. Region Descriptor; 3. Clustering; 4. Mapping to Visual Words. This yields the Visual Word Dictionary & Image Signatures.

On-line, on the INPUT query image: 1. Region Detector; 2. Region Descriptor; 4. Mapping to Visual Words; 5. Similarity Measures. OUTPUT: the most similar images.

Courtesy of Pr. Michael Wallace, Mayo Clinic, USA

SLIDE 17

Outline

  • 1. Introduction
  • 2. The Bag-of-Visual-Words Method
  • 3. Introducing Spatial Information
  • 4. Introducing Temporal Information
  • 5. Conclusion

SLIDE 18

3. Introducing Space

From Sparse to Dense Detector

A sparse detector is inconsistent over time, and clinically relevant information is densely distributed, hence dense region detection.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 19

3. Introducing Space

Bi-Scale Disc Description

  • Dense regular grid with disc overlap
  • Large discs: groups of cells
  • Small discs: individual cells

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 20

3. Introducing Space

Introducing Space

Observation: cellular architecture is essential to establish a diagnosis.

Assumption: the spatial relationships between local features are statistically the same in images with similar appearance.

Courtesy of Pr. Michael Wallace, Mayo Clinic, Jacksonville, USA

SLIDE 21

3. Introducing Space

Introducing Space

Idea: use a spatial-relationship feature, the co-occurrence matrix of visual words. For an image I with K visual words, M is of size K × K, with M(i, j) = Proba(wi adjacent to wj in I).
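On a dense regular grid of labeled regions, the co-occurrence matrix can be sketched as follows. Here "adjacent" is taken to mean 4-connected grid neighbors, an assumption on my part since the slide does not define adjacency:

```python
import numpy as np

def cooccurrence_matrix(word_grid, K):
    """K x K matrix M with M[i, j] proportional to the probability that
    visual word wi is adjacent to wj in the image (4-connectivity)."""
    M = np.zeros((K, K))
    H, W = word_grid.shape
    for y in range(H):
        for x in range(W):
            for dy, dx in ((0, 1), (1, 0)):       # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < H and nx < W:
                    a, b = word_grid[y, x], word_grid[ny, nx]
                    M[a, b] += 1
                    M[b, a] += 1                  # adjacency is symmetric
    return M / M.sum()                            # normalize to probabilities

grid = np.array([[0, 1], [1, 2]])                 # tiny 2x2 grid of word labels
M = cooccurrence_matrix(grid, K=3)
```

By construction M is symmetric and sums to one, so each entry can be read as the probability of observing that pair of adjacent words in the image.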

SLIDE 22

3. Introducing Space

Spatial Features

S(I) = W · M

where M is the co-occurrence matrix and W is a discriminant linear combination learned in a supervised way (LDA) to separate the benign space feature from the neoplastic space feature.

For two images I1 and I2 with matrices M1 and M2, the space feature difference is E = | S(I1) − S(I2) |.

SLIDE 23

3. Introducing Space

Retrieval Pipeline

Off-line, on the training images: 1. Region Detector; 2. Region Descriptor; 3. Clustering; 4. Mapping to Visual Words. This yields the Visual Word Dictionary & Image Signatures.

On-line, on the INPUT query image: 1. Region Detector; 2. Region Descriptor; 4. Mapping to Visual Words; 5. Similarity Measures; Nearest Neighbors; Outlier Rejection by a threshold on E = | S(output) − S(input) |. OUTPUT: the most similar images.
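The on-line stage, nearest-neighbor retrieval followed by outlier rejection, can be sketched as below. The signature distance here is Euclidean for brevity (the talk uses χ²), the scalar space features and labels are stand-ins, and only the threshold E ≤ 2 follows the slides.

```python
import numpy as np

def retrieve(query_sig, query_S, db_sigs, db_S, db_labels, k=4, E_max=2.0):
    """Find the k nearest neighbors by signature distance, then reject
    any neighbor whose space-feature difference E = |S_out - S_in|
    exceeds the threshold E_max."""
    dists = np.linalg.norm(db_sigs - query_sig, axis=1)   # stand-in distance
    nearest = np.argsort(dists)[:k]
    kept = [i for i in nearest if abs(db_S[i] - query_S) <= E_max]
    votes = [db_labels[i] for i in kept]
    return kept, votes

rng = np.random.default_rng(3)
db_sigs = rng.random((20, 8))
db_S = rng.uniform(0.0, 5.0, 20)      # scalar space features (stand-in)
db_labels = ["Benign" if i % 2 else "Neoplastic" for i in range(20)]
kept, votes = retrieve(db_sigs[0], db_S[0], db_sigs, db_S, db_labels)
```

The class vote is then the majority label among the kept neighbors, which is how the 75 % and 100 % votes on the result slides are obtained.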

SLIDE 24

3. Introducing Space

Results: Benign Query

Query and 4 nearest neighbors; Benign vote = 100 %.

SLIDE 25

3. Introducing Space

Results: Benign Query

Query and 4 nearest neighbors, with E = 0.8, 1.8, 1.0, 4.0; Benign vote = 75 %. The neighbor with E = 4.0 > 2 is rejected as an outlier, raising the benign vote to 100 %.

SLIDE 26

3. Introducing Space

Results: Neoplastic Query

Query and 4 nearest neighbors; Neoplastic vote = 100 %.

SLIDE 27

3. Introducing Space

Results: Neoplastic Query

Query and 4 nearest neighbors, with E = 2.1, 1.4, 0.6, 0.9; Neoplastic vote = 75 %. The neighbor with E = 2.1 > 2 is rejected as an outlier, raising the neoplastic vote to 100 %.

SLIDE 28

3. Introducing Space

Method Comparison

Database: 52 videos, 1036 images, 2 classes. Leave-n-out cross-validation:

  • State of the art [1]: Accuracy = 66.7 % (compared with 98.7 % on the UIUCTex texture dataset)
  • Our proposed method: Accuracy = 78.2 %, Sensitivity = 79.1 %, Specificity = 77.1 %

[1] Leung et al., IJCV 2001.
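The reported figures follow the standard definitions over the two classes, with neoplastic taken as the positive class (an assumption, since the slide does not say which class is positive). A minimal sketch, on toy confusion-matrix counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Toy counts for illustration only (not the study's confusion matrix).
acc, sens, spec = binary_metrics(tp=79, fp=23, tn=77, fn=21)
```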

SLIDE 29

3. Introducing Space

Results: Benign Query

Query and 4 nearest neighbors; Benign vote = 0 %. A rare benign variety?

SLIDE 30

3. Introducing Space

Results: Neoplastic Query

Query and 4 nearest neighbors; Neoplastic vote = 25 %. Too small a field of view?

SLIDE 31

Outline

  • 1. Introduction
  • 2. The Bag-of-Visual-Words Method
  • 3. Introducing Spatial Information
  • 4. Introducing Temporal Information
  • 5. Conclusion

SLIDE 32

4. Introducing Time

Introducing Time

Problem: discriminative patterns may be only partially visible in still images.

Why create mosaics from pCLE data? Because the pCLE probe moves on the epithelium, the sequence shows viewpoint changes but little real dynamics.

Idea: combine image retrieval and mosaicing.

SLIDE 33

4. Introducing Time

Introducing Time

"Mosaicing projects the temporal dimension of a video onto one larger image of higher resolution."

Vercauteren et al., MIA 2006.

SLIDE 34

Mosaics: Benign Query

Query and 4 nearest neighbors; Benign vote = 100 %.

SLIDE 35

Mosaics: Neoplastic Query

Query and 4 nearest neighbors; Neoplastic vote = 100 %.

SLIDE 36

4. Introducing Time

Method Comparison

Database: 52 videos, 1036 images, 66 mosaics, 2 classes. Leave-n-out cross-validation.

SLIDE 37

Outline

  • 1. Introduction
  • 2. The Bag-of-Visual-Words Method
  • 3. Introducing Spatial Information
  • 4. Introducing Temporal Information
  • 5. Conclusion

SLIDE 38

5. Conclusion

Methodological Contributions

  • Including spatial information
  • Considering temporal information
  • First attempt to classify endomicroscopic videos using CBIR
  • Genericity: various endomicroscopic retrieval applications; multiclass image classification

SLIDE 39

5. Conclusion

Perspectives

  • Enrich the training database to evaluate both contributions
  • Try other databases, on other organs and pathologies
  • More robust validation with the aid of medical expertise
  • Use 2D + t data to exploit biological dynamics

SLIDE 40

Questions?

SLIDE 41

References