
SLIDE 1

Introduction Methodology Experimental study Future works

Multicriteria 3D PET image segmentation

Application to oncology and cancer lesion extraction

Francisco Javier Alvarez Padilla, Éloïse Grossiord, Barbara Romaniuk, Benoît Naegel, Camille Kurtz, Hugues Talbot, Laurent Najman, Romain Guillemot, Dimitri Papathanassiou, Nicolas Passat

IPTA 2015 – Orléans (France)

November 5, 2015


SLIDE 2

Outline

1. Introduction
2. Methodology: Multicriteria 3D PET image segmentation
3. Experimental study on oncology and cancer lesion extraction
4. Future works


SLIDE 4

Big Picture: Medical imaging & Image analysis

Image interpretation: diagnosis decision making
- The number of medical images is exploding (> 200 images per exam)
- Substantial variation in interpretation between physicians
- Largely an unassisted process

Medical image interpretation is subjective and accuracy varies widely

Objective: integrate computer-based assistance
- To detect / segment / classify anatomical and pathological structures of interest (lesions, organs, etc.) in medical images
- To help physicians standardize the interpretation of medical images

SLIDE 5

Focus: Oncology & Nuclear medicine

PET-CT imaging
- Positron Emission Tomography (PET) constitutes the gold standard for image-based diagnosis and patient follow-up in cancer
- PET scans use radiopharmaceuticals (FDG, FNa) to create images of tracer uptake, informing about the metabolic activity of lesions
- In contrast to other 3D imaging modalities (MRI, CT), PET images have:
  - low spatial resolution
  - acquisition / reconstruction / anatomical artifacts

Figure: (left) PET image and (right) CT image.


SLIDE 7

Focus: Oncology & Nuclear medicine

Analysis of lesions from PET images
- PET images are mostly processed via basic approaches, such as fixed / adaptive thresholding
- The Standardized Uptake Value (SUV) constitutes the gold standard, despite its limitations [Buvat, 2007]
- Segmentation strategies recently emerged for detecting / extracting lesions:
  - they rely on intensity-based approaches (e.g., thresholding, region-growing)
  - they can lead to inaccurate results where lesions are mixed up with hyperfixating organs
- Other approaches also intend to embed additional information (e.g., shape priors, anatomical context)

In all these strategies, the priors are limited in number, defined beforehand, and considered a priori as correct, thus constituting hard parameters.

Figure: ROI generated by a threshold-based algorithm in a 3D PET image.


SLIDE 9

Purpose

Hypothesis
- There is no consensus on the most relevant criteria (features) for segmenting lesions in PET images

Purpose
- To learn the expert knowledge carried by their behaviour when analysing 3D PET images, in order to identify an arbitrary number of relevant quantitative criteria
- To use this knowledge to reproduce the expert behaviour in interactive and robust lesion segmentation strategies

Proposed method: Multicriteria 3D PET image segmentation
- Step 1. Knowledge extraction from examples: a classification approach relying on example-based learning strategies, allowing for interactive example definition and, more generally, incremental refinement
- Step 2. Knowledge-based segmentation: a segmentation approach relying on efficient (morphological) hierarchical segmentation, allowing vectorial attribute handling


SLIDE 11

Outline

2. Methodology: Multicriteria 3D PET image segmentation
   - Knowledge extraction from examples
   - Knowledge-based segmentation


SLIDE 15

Knowledge extraction step

Objective: to automatically learn, from examples defined in 3D PET volumes, discriminative imaging criteria to guide the segmentation for extracting lesions, by building a predictive / classification model.

Step 1. Build a learning database
- Allow the experts to select positive (lesions) and negative (hyperfixating organs) examples in PET images
- This task is carried out by experts using an interactive 3D stereoscopic visualization approach based on multimodal (PET-CT) imaging

Step 2. Train a classifier
- Extract from this learning database the most relevant imaging criteria (features) to separate lesions from hyperfixating organs
- This task relies on supervised classification (decision trees, C4.5) to build a 3-class model: lesions, hyperfixating organs & other imaging areas

The classification model is used in the segmentation step to select, from PET image segmentation results, the ROIs that could correspond to lesions for cancer detection.
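The training step above can be sketched as follows. This is a minimal illustration on synthetic feature vectors, not the study's data; scikit-learn's `DecisionTreeClassifier` implements CART rather than the C4.5 algorithm named on the slide, but plays the same role of a supervised 3-class decision-tree model, and the feature names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-node feature vectors [contrast, volume, elongation];
# the numbers below are synthetic, chosen only to make the classes separable.
rng = np.random.default_rng(0)

def sample(n, contrast, volume, elongation):
    return np.column_stack([rng.normal(contrast, 0.5, n),
                            rng.normal(volume, 0.5, n),
                            rng.normal(elongation, 0.05, n)])

X = np.vstack([sample(30, 8, 2, 0.8),   # lesions: contrasted, compact
               sample(30, 8, 6, 0.3),   # hyperfixating organs: large, elongated
               sample(30, 1, 1, 0.5)])  # other imaging areas: low contrast
y = ["lesion"] * 30 + ["organ"] * 30 + ["other"] * 30

# 3-class decision-tree model (CART here; the slides use C4.5)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
```

The fitted tree exposes which features drive its splits (`model.feature_importances_`), mirroring the "best features" analysis reported later in the talk.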


SLIDE 17

Hierarchical segmentation guided by criteria

Component-tree [Salembier et al., 1998]
- The component-tree (max-tree) is a hierarchical image representation
- It can be used to develop efficient (quasi-linear time) segmentation procedures
- It works by decomposing the image into basic elements, each being associated with one or several attributes
- It is well adapted to images where the structures of interest correspond to locally maximal values, such as PET images [Grossiord et al., 2015]
- It does not modify the contours of the segmented structures, a desirable property for images with fuzzy borders, such as PET images

Figure: a grey-level input image and its component-tree.

SLIDE 18

Hierarchical segmentation guided by criteria

Component-tree construction [Salembier et al., 1998]
It consists of thresholding the image at each grey-level value (level-sets):
- Each connected component of a level-set of the image → a node of the tree
- An inclusion relation between two (distinct) connected components of successive threshold sets → an edge of the tree
- The extremal nodes (leaves) of the tree correspond to the areas of highest intensity

Figure: a grey-level input image, its component-tree, and the five level-sets obtained by successive thresholding; each connected component is labeled by a letter (A, B, ...).
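The level-set construction described above can be sketched naively in Python (assuming `scipy` is available): threshold at every grey level, take connected components, and link components of successive level-sets by inclusion. This quadratic sketch is for illustration only; it is not the quasi-linear algorithm of [Salembier et al., 1998].

```python
import numpy as np
from scipy.ndimage import label

def component_tree(img):
    """Naively build a max-tree by thresholding at every grey level.

    Returns (nodes, parent): nodes is a list of (level, frozenset of voxel
    coordinates); parent[i] is the index of the enclosing node at the
    previous level (None for the root).
    """
    nodes, parent = [], []
    prev = []  # indices of the nodes created at the previous (lower) level
    for level in sorted(np.unique(img)):
        lbl, n = label(img >= level)  # connected components of the level-set
        cur = []
        for k in range(1, n + 1):
            pixels = frozenset(zip(*np.nonzero(lbl == k)))
            nodes.append((int(level), pixels))
            idx = len(nodes) - 1
            # the parent is the unique component of the previous level-set
            # that includes this one (inclusion of successive level-sets)
            parent.append(next((j for j in prev if pixels <= nodes[j][1]), None))
            cur.append(idx)
        prev = cur
    return nodes, parent
```

On a tiny image with two diagonal maxima at grey level 2, the tree has one root (level 0), one intermediate node (level 1) and two leaves (level 2), the leaves being the two locally maximal regions the slide mentions.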

SLIDE 19

Hierarchical segmentation guided by criteria

Multicriteria-guided segmentation
Step 1. Computation of the component-tree of the 3D PET image
Step 2. Automatic selection of the nodes corresponding to lesions
- Each node of the tree corresponds to a region of the image, with specific spatial and intensity properties
- The criteria (features) of each region can then be computed and stored as a vectorial attribute at the corresponding node
- The classification model (trained previously) is used to discard the nodes not corresponding to lesions (discriminating lesions from hyperfixating organs)
Step 3. Segmentation result reconstruction
- The segmentation result can be reconstructed from the remaining set of nodes, preserving the initial intensities of the PET image in the segmented areas
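Steps 2 and 3 can be illustrated on top of a node list like the one built in the component-tree sketch: a predicate (standing in for the trained classifier) keeps or discards nodes, and the image is rebuilt from the kept nodes with its original grey levels. `filter_and_reconstruct` and its `keep` predicate are illustrative names, not the paper's API.

```python
import numpy as np

def filter_and_reconstruct(shape, nodes, keep):
    """Reconstruct a segmentation from selected component-tree nodes.

    `nodes` is a list of (level, pixel_set) pairs; `keep` decides, from a
    node's attributes, whether it may correspond to a lesion. Original
    intensities are preserved: each kept pixel receives the highest level
    among the kept nodes containing it.
    """
    out = np.zeros(shape, dtype=int)
    for level, pixels in nodes:
        if keep(level, pixels):
            for p in pixels:
                out[p] = max(out[p], level)
    return out
```

For example, keeping only nodes with level >= 2 reconstructs just the bright region, at its original intensity, and zeroes everything else.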

SLIDE 23

Interactive example definition

Objective: to build ROIs from 3D PET images to feed the learning process, which needs to be trained with multicriteria values of lesions and hyperfixating organs. An interactive approach generates examples corresponding to component-tree nodes.

Step 1. Determining regions of interest in images
- Manual definition of spheres on the 3D PET image volume
- Use of multimodal imaging, namely by coupling the visualization of PET and CT images, and of 3D stereoscopic visualization (3D lesions in context)

Step 2. Generating examples that correspond to nodes of a component-tree
- Translation of the ROIs into the closest nodes of the component-tree
- Use of efficient algorithms with linear-time complexity [Passat et al., 2011]

Figure: multimodal visualization of CT / PET images: a CT image with a PET (FDG) tissue-based color rendering showing a pulmonary tumor. The two images are fused (volume rendering) and visualized in 3D (autostereoscopic) with the MINT software (https://mint.univ-reims.fr).
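Translating an ROI into the closest component-tree node can be illustrated with a naive overlap search; `closest_node` is a hypothetical helper that scores candidates with the Jaccard index, whereas [Passat et al., 2011] describe linear-time algorithms for this task.

```python
def closest_node(nodes, roi):
    """Pick the component-tree node best matching an ROI.

    `nodes` is a list of (level, voxel_set) pairs and `roi` a set of voxel
    coordinates (e.g. a rasterized sphere). The Jaccard index is a naive
    matching score used here purely for illustration.
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(range(len(nodes)), key=lambda i: jaccard(nodes[i][1], roi))
```

The expert's sphere rarely coincides exactly with a node's support, so the best-overlap node stands in for the ROI in the learning database.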


SLIDE 25

Dataset

Image database
- Dataset of 3D PET / CT images obtained from N = 12 patients, acquired with a Gemini-Dual (Philips) PET / CT camera
- Standard protocol for cancer imaging: PET acquisition, 3 minutes per bed position, from the pelvis to the base of the skull, one hour after a peripheral intravenous injection of 5 MBq / kg of 18F-FDG, in patients fasted for at least 6 hours; and low-dose CT without contrast agent
- PET images were reconstructed with the RAMLA 3D algorithm, with CT-based attenuation correction (spatial resolution approximately 4 × 4 × 4 mm3)

SLIDE 26

Potential criteria for PET segmentation

Multicriteria PET segmentation
The proposed methodology allows us to involve a set of criteria of arbitrary size:
1. spectral attributes: contrast (difference between the extremal voxel values)
2. spatial attributes: area (number of voxels), volume (number of voxels weighted by their values)
3. mixed attributes: volumic contrast (volume × contrast)
4. geometrical attributes: ratios between the eigenvalues of the inertia matrix [Westenberg et al., 2007] (including compactness / elongation characterization)
5. spatial attributes: coordinates of the barycenter

All of these attributes can be interpreted numerically.
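The attributes above can be sketched for a single node as follows. The formulas are plausible reconstructions (in particular the elongation ratio derived from the inertia-matrix eigenvalues); the paper's exact definitions may differ, and `node_attributes` is an illustrative name.

```python
import numpy as np

def node_attributes(coords, values):
    """Compute a vectorial attribute for one component-tree node.

    `coords` is an (n, 3) array of voxel coordinates and `values` the
    corresponding voxel intensities.
    """
    area = len(values)                             # number of voxels
    contrast = float(values.max() - values.min())  # spectral attribute
    volume = float(values.sum())                   # intensity-weighted size
    barycenter = coords.mean(axis=0)
    centered = coords - barycenter
    inertia = centered.T @ centered / area         # 3x3 inertia (covariance) matrix
    eig = np.sort(np.linalg.eigvalsh(inertia))     # ascending eigenvalues
    # ratio of the smallest to the largest eigenvalue: close to 1 for
    # compact (spherical) nodes, close to 0 for elongated ones
    elongation = eig[0] / eig[-1] if eig[-1] > 0 else 1.0
    return {"area": area, "contrast": contrast, "volume": volume,
            "volumic_contrast": volume * contrast,
            "elongation": elongation, "barycenter": barycenter}
```

Stacking these dictionaries over all nodes yields the multi-dimensional attribute vectors that feed the classifier.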

SLIDE 27

Learning of the classification model

Knowledge extraction step
- To populate the learning database with multicriteria values of lesions, hyperfixating organs, and other non-relevant areas extracted from the N PET images
- For each image, a nuclear radiologist employed interactive segmentation to generate positive and negative examples (nodes of a component-tree)
- Multi-dimensional attribute vectors associated to these nodes were automatically computed, resulting in a training database of 1385 instances

Learning of the classification model
- Same database for training and evaluating the decision tree model: leave-one-patient-out (LOPO) cross-validation strategy
- Model accuracy: 90.83% of instances were correctly classified (Kappa ≈ 0.817)
- Best features: mean grey-level, contrast, and compactness / elongation are the best criteria to separate lesions from hyperfixating organs

Classification results (rows: original instances; columns: predicted class):

  original \ predicted          lesions   hyperfix. organs   "non-relevant" areas
  lesions (913)                     837                 65                     11
  hyperfixating organs (112)         50                 62                      0
  "non-relevant" areas (360)          1                  0                    359
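A LOPO split keeps every node of one patient out of the training set at each fold, so that no patient contributes to both sides of a split. A minimal stdlib sketch (scikit-learn's `LeaveOneGroupOut` provides the same behaviour):

```python
def lopo_splits(patient_ids):
    """Leave-one-patient-out cross-validation.

    `patient_ids` gives, for each instance, the patient it comes from.
    Yields (held_out_patient, train_indices, test_indices), one fold per
    patient.
    """
    for held_out in sorted(set(patient_ids)):
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield held_out, train, test
```

With the N = 12 patients of the study, this yields 12 folds; the accuracy and Kappa reported above would be aggregated over those folds.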

SLIDE 28

Experimental results

Experiments on a physical phantom
- The first experiment was made with a physical phantom filled with 70 MBq of 18F
- It contains six spherical "lesions" (ø 10, 13, 17, 22, 28 and 37 mm)
- Segmentation of the image is based on the most discriminative criteria computed during the learning step
- The component-tree of the image is pruned, based on a thresholding of the vectorial attributes of the nodes
- After this process, only the nodes corresponding to the spheres of interest are kept; the segmented image is then reconstructed from the filtered tree

Figure: original phantom image (2D slice) and segmentation result.

SLIDE 29

Experimental results

Experiments on real PET images
Following a LOPO approach, each image is processed by using the training set computed on all the other images:
1. Component-tree computation (and computation of the node vectorial attributes)
2. Selection of the nodes of interest, based on thresholding of the retained criteria
3. 3-class node classification, in order to discard the false positives

Figure: PET image with manual delineation of lesions, and segmentation result (in red).

Qualitative evaluation
- Intensity and shape criteria are adapted to correctly detect a majority of lesions
- False positives and negatives are also present (hyperfixating organs)



SLIDE 32

Future works

Improving the learning step

- "One-shot learning" approaches rely on the principle of knowledge transfer, which encapsulates prior knowledge of learned categories and allows for learning from minimal training examples [Fei-Fei et al., 2006]
- "Active learning" approaches, in which the learning algorithm interactively queries the user to obtain the categories of new data points, are well adapted to dealing with unbalanced datasets [Ertekin et al., 2007]

Multicriteria segmentation

- Intensity and geometrical / morphological criteria will be complemented by structural and relational criteria [Bloch, 2005], derived from anatomical information available in CT or MRI images
- Multimodal images, namely PET / CT or MRI / PET, will be considered
- These morphological and functional images will be processed in a unified way, based on recent extensions of the hierarchical framework considered in this study [Kurtz et al., 2014]

SLIDE 33

References I

Bloch, I. (2005). Fuzzy spatial relationships for image processing and interpretation: A review. Image Vision Comput, 23:89–110.

Buvat, I. (2007). Understanding the limitations of SUV. Med Nucl, 46:165–172.

Ertekin, S. E., Huang, J., Bottou, L., and Giles, C. L. (2007). Learning on the border: Active learning in imbalanced data classification. In CIKM, Proc., pages 127–136.

Fei-Fei, L., Fergus, R., and Perona, P. (2006). One-shot learning of object categories. IEEE T Pattern Anal, 28:594–611.

Grossiord, É., Talbot, H., Passat, N., et al. (2015). Hierarchies and shape-space for PET image segmentation. In ISBI, Proc., pages 1118–1121.

SLIDE 34

References II

Kurtz, C., Naegel, B., and Passat, N. (2014). Connected filtering based on multivalued component-trees. IEEE T Image Process, 23:5152–5164.

Passat, N., Naegel, B., Rousseau, F., et al. (2011). Interactive segmentation based on component-trees. Pattern Recogn, 44:2539–2554.

Salembier, P., Oliveras, A., and Garrido, L. (1998). Anti-extensive connected operators for image and sequence processing. IEEE T Image Process, 7:555–570.

Westenberg, M. A., Roerdink, J. B. T. M., and Wilkinson, M. H. F. (2007). Volumetric attribute filtering and interactive visualization using the max-tree representation. IEEE T Image Process, 16:2943–2952.