Addressing The False Negative Problem of Deep Learning MRI Reconstruction Models by Adversarial Attacks and Robust Training (PowerPoint Presentation)


SLIDE 1

Paper #28

Kaiyang Cheng*, Francesco Calivà*, Rutwik Shah, Misung Han, Sharmila Majumdar, Valentina Pedoia

Addressing The False Negative Problem of Deep Learning MRI Reconstruction Models by Adversarial Attacks and Robust Training

SLIDE 2

MIDL 2020 · victorcheng21@Berkeley.edu

Disclosure

I have no financial interests or relationships to disclose with regard to the subject matter of this presentation.

Funding source: This project was supported by R00AR070902 (VP) and R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health (NIH-NIAMS).

SLIDE 3

Outline

  • Motivation
  • False negative problem in accelerated MRI reconstruction
  • Adversarial examples
  • FNAF attack
  • Adversarial robustness training
  • FNAF robust training
  • Experimental results
  • Conclusions
SLIDE 4

Adversarial Examples in Medical Imaging Analysis

SLIDE 5

Adversarial Examples in Medical Imaging Analysis

SLIDE 6

IID Machine Learning vs Adversarial Machine Learning

IID (average case): $\mathbb{E}_{(x,y)\sim D}\,[L(\theta,\, x,\, y)]$

Adversarial (worst case): $\mathbb{E}_{(x,y)\sim D}\,[\max_{\delta\in\Delta} L(\theta,\, x+\delta,\, y)]$
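The gap between the two objectives can be made concrete with a toy model. Below is a minimal numpy sketch (our own illustration, not from the slides): for a linear model with squared loss, the inner maximization over an $\ell_\infty$ ball of radius `eps` has a closed form, so the worst-case risk is always at least the average-case risk.

```python
import numpy as np

# Average-case vs. worst-case risk for a linear model f(x) = w . x with
# squared loss. For |delta|_inf <= eps, the inner maximum has the closed
# form (|w.x - y| + eps * ||w||_1)^2. All values here are illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=5)                    # fixed "model" weights
X = rng.normal(size=(100, 5))             # inputs x
y = X @ w + 0.1 * rng.normal(size=100)    # noisy targets

eps = 0.05
residual = X @ w - y
avg_case = np.mean(residual ** 2)                                   # E[L(theta, x, y)]
worst_case = np.mean((np.abs(residual) + eps * np.abs(w).sum()) ** 2)  # E[max_delta L]
```

Because each per-sample worst-case loss upper-bounds the clean loss, `worst_case >= avg_case` always holds.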

SLIDE 7

Accelerated MRI Reconstruction

[Figure: under-sampled k-space vs. fully-sampled k-space, and reconstruction methods]

SLIDE 8

fastMRI results: loss of a meniscal tear

SLIDE 9

The False Negative Phenomenon

SLIDE 10

Two hypotheses for the false negative problem:

1) The information of small abnormality features is completely lost in the under-sampling process.
2) The information of small abnormality features is not completely lost; instead, it is attenuated and lies in the tail of the task distribution, and is therefore rare.

SLIDE 11

FNAF: false-negative adversarial feature


A small but perceptible feature that is present in the ground-truth MRI but disappears in the reconstructed MRI.


SLIDE 12

Adversarial Examples and Attacks

$\max_{\delta\in\Delta} L(\theta,\, x+\delta,\, y)$
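In practice the inner maximization is approximated iteratively. The following is a sketch of a PGD-style attack on a stand-in linear model (the model, step size, and ball radius are our illustrative assumptions, not the paper's setup):

```python
import numpy as np

def pgd_attack(w, x, y, eps=0.1, alpha=0.02, steps=20):
    """Projected gradient ascent on L(theta, x+delta, y) within |delta|_inf <= eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = 2.0 * (w @ (x + delta) - y) * w   # d/d(delta) of the squared loss
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

rng = np.random.default_rng(1)
w = rng.normal(size=8)    # stand-in "model parameters" theta
x = rng.normal(size=8)
y = w @ x + 0.5           # target with a nonzero residual
delta = pgd_attack(w, x, y)

clean_loss = (w @ x - y) ** 2
attacked_loss = (w @ (x + delta) - y) ** 2
```

Each step moves `delta` in the sign of the loss gradient and projects back into the $\ell_\infty$ ball, so the attacked loss can only grow.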

SLIDE 13

Adversarial Examples and Attacks

SLIDE 14

[Figure: FNAF attack and robust-training pipeline: an adversarial feature is added to the fully-sampled MRI, under-sampled, and passed through the encoder-decoder reconstruction network; the attack loss is computed on the region of the inserted feature.]

FNAF Attack

Standard adversarial attack objective:

$\max_{\delta\in\Delta} L(\theta,\, x+\delta,\, y)$

FNAF attack objective (the feature is also added to the target):

$\max_{\delta'\in\Delta} L(\theta,\, x+\delta,\, y+\delta')$

with $\delta = U(\delta')$, where $U(x) = \mathcal{F}^{-1}(M\,\mathcal{F}(x))$ is the under-sampling operator (mask $M$ applied in k-space) and $L$ is an MSE computed on the region of the inserted feature.
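As an illustration of the FNAF idea, the sketch below inserts a small feature into a synthetic "fully-sampled" image, applies the masked-Fourier under-sampling operator, and uses zero-filled reconstruction as a stand-in for the trained U-Net; the attack loss is the MSE on the feature region. The image size, mask, and feature are our assumptions, not the paper's values.

```python
import numpy as np

def undersample(img, mask):
    """U(x) = IFFT(M * FFT(x)): keep only the masked k-space lines."""
    return np.fft.ifft2(mask * np.fft.fft2(img)).real

H = W = 64
mask = np.zeros((H, W))
mask[:, ::4] = 1.0                    # illustrative 4x Cartesian under-sampling mask
y = np.zeros((H, W))                  # synthetic fully-sampled "image"
delta = np.zeros((H, W))
delta[30:33, 30:33] = 1.0             # small inserted feature (candidate FNAF)

# Zero-filled reconstruction stands in for f_theta; the paper attacks a U-Net.
recon = undersample(y + delta, mask)
region = delta > 0
attack_loss = np.mean((recon[region] - (y + delta)[region]) ** 2)
# A large loss on the feature region means the feature was lost in reconstruction.
```

With this mask the small block is strongly attenuated by under-sampling, so the loss on the feature region is large: exactly the failure mode a FNAF exposes.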

SLIDE 15

Under-sampling information preservation

[Figure: FNAF attack pipeline, as in the previous slide.]

A candidate FNAF must survive the under-sampling process, i.e. the under-sampled perturbation must remain detectable:

$L(x + \delta,\, x) > \epsilon$

SLIDE 16

Adversarial robustness training

[Figure: FNAF robust-training pipeline: FNAF examples are generated on the fly and used as training data.]

Robust training loss:

$\mathbb{E}_{(x,y)\sim D}\,[\max_{\delta\in\Delta} L(\theta,\, x+\delta,\, y)]$
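A sketch of this min-max training loop on a stand-in linear model: a one-step inner attack alternated with an outer gradient step. All hyperparameters and the model are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 8))          # training inputs
w_true = rng.normal(size=8)
Y = X @ w_true                        # training targets
w = np.zeros(8)                       # model to be robustly trained
eps, lr = 0.1, 0.01

for _ in range(200):
    for x, y in zip(X, Y):
        # inner maximization: one-step (FGSM-style) attack within |delta|_inf <= eps
        grad_delta = 2.0 * (w @ x - y) * w
        delta = eps * np.sign(grad_delta)
        # outer minimization: gradient step on the worst-case loss
        residual = w @ (x + delta) - y
        w -= lr * 2.0 * residual * (x + delta)

clean_loss = np.mean((X @ w - Y) ** 2)   # robustly trained model, evaluated cleanly
```

The one-step inner attack is a cheap approximation of the full inner maximization; the paper's FNAF robust training generates FNAF examples instead of generic perturbations.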
SLIDE 17

Experimental Results

SLIDE 18

[Figure: qualitative reconstruction results, panels A-H]

Qualitative Results

The top row (A-D) shows a "failed" FNAF attack; the bottom row (E-H) shows a "successful" one. Column 1: under-sampled zero-filled images. Column 2: fully-sampled ground-truth images. Column 3: U-Net reconstructions. Column 4: FNAF-robust U-Net reconstructions. FNAF reconstruction adversarial losses: (C) 0.000229, (G) 0.00110, (D) 9.73 × 10⁻⁵, (H) 0.000449.

SLIDE 19

Information Preservation (IP)

! " + $, " > '

SLIDE 20

FNAF Attack Loss vs. IP Loss

SLIDE 21


FNAF Location Distribution and Transferability

FNAF location distribution within the 120×120 center crop of the image for (A) U-Net, (B) I-RIM, and (C) FNAF-robust U-Net. Transferability: FNAF examples generated against U-Net and applied to I-RIM achieve an 89.48% attack success rate.

SLIDE 22

Reconstruction of Real-world Abnormalities

(A) Ground truth: small cartilage lesion in the femur. (B) U-Net: the area of the cartilage lesion is poorly defined and resembles an increase in signal intensity. (C) FNAF-robust U-Net: the cartilage lesion is preserved, but less clearly.

SLIDE 23

Limitations

  • The FNAF attack hit rate was defined heuristically
  • The attack's inner maximization carries no optimality guarantee and can be computationally expensive
  • Adversarial training is only empirically robust
  • Evaluation on real-world abnormalities was limited
SLIDE 24

Conclusions and Future directions

  • Two hypotheses

1) The information of small abnormality features is completely lost in the under-sampling process.
2) The information of small abnormality features is not completely lost; instead, it is attenuated and lies in the tail of the task distribution, and is therefore rare.

  • Address our limitations
  • Robustness in other medical imaging tasks
SLIDE 25

Acknowledgements

Funding source: This project was supported by R00AR070902 (VP) and R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health (NIH-NIAMS).

Sharmila Majumdar's Lab: Misung Han, Claudia Iriondo
Valentina Pedoia's Lab: Francesco Calivà, Rutwik Shah

victorcheng21@berkeley.edu