Addressing The False Negative Problem of Deep Learning MRI Reconstruction Models by Adversarial Attacks and Robust Training (Paper #28)
Kaiyang Cheng*, Francesco Calivà*, Rutwik Shah, Misung Han, Sharmila Majumdar, Valentina Pedoia
MIDL 2020 2 victorcheng21@Berkeley.edu
Disclosure
I have no financial interests or relationships to disclose with regard to the subject matter of this presentation. Funding source This project was supported by R00AR070902 (VP), R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health, (NIH-NIAMS).
Outline
- Motivation
- False negative problem in accelerated MRI reconstruction
- Adversarial examples
- FNAF attack
- Adversarial robustness training
- FNAF robust training
- Experimental results
- Conclusions
Adversarial Examples in Medical Imaging Analysis
IID Machine Learning vs Adversarial Machine Learning
IID (average case): $\mathbb{E}_{(x,y)\sim D}\left[L(f, x, y)\right]$
Adversarial (worst case): $\mathbb{E}_{(x,y)\sim D}\left[\max_{\delta \in \Delta} L(f, x + \delta, y)\right]$
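The contrast between the two objectives can be sketched with a toy model (illustrative only, not from the paper); random search stands in for the gradient-based inner maximization:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(f, x, y):
    """Squared-error loss L(f, x, y)."""
    return float(np.mean((f(x) - y) ** 2))

def worst_case_loss(f, x, y, eps=0.1, n_trials=200):
    """Approximate the inner max over delta in an L-inf ball of radius eps
    by random search (a cheap stand-in for gradient-based maximization)."""
    best = loss(f, x, y)
    for _ in range(n_trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        best = max(best, loss(f, x + delta, y))
    return best

f = lambda x: 2.0 * x                     # toy "model"
data = [(x, 2.0 * x) for x in (rng.normal(size=4) for _ in range(16))]

avg = float(np.mean([loss(f, x, y) for x, y in data]))             # IID objective
adv = float(np.mean([worst_case_loss(f, x, y) for x, y in data]))  # adversarial
# the model is exact on clean data, so avg == 0, yet adv > 0
```

The model achieves zero average-case loss while its worst-case loss stays strictly positive, which is exactly the gap the adversarial view exposes.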
[Figure: reconstruction methods map under-sampled k-space to fully-sampled k-space]
Accelerated MRI Reconstruction
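As a hypothetical sketch of the accelerated setting (not the paper's fastMRI pipeline), zero-filled reconstruction simply inverse-transforms the masked k-space:

```python
import numpy as np

def undersample(image, keep_frac=0.25, seed=0):
    """Keep a random subset of k-space rows (a simple Cartesian mask)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape[0]) < keep_frac
    return kspace * mask[:, None], mask

def zero_filled_recon(masked_kspace):
    """Baseline reconstruction: inverse FFT of the zero-filled k-space."""
    return np.abs(np.fft.ifft2(masked_kspace))

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0               # toy square "anatomy"

masked_kspace, mask = undersample(phantom)
recon = zero_filled_recon(masked_kspace)
err = float(np.mean((recon - phantom) ** 2))  # aliasing makes this nonzero
```

Learned reconstruction models such as the U-Net discussed here take this aliased zero-filled image (or the masked k-space) as input and predict the fully-sampled image.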
FastMRI results: loss of meniscal tear
The False Negative Phenomenon
Two hypotheses for the false negative problem: 1) the information of small abnormality features is completely lost through the under-sampling process; 2) the information is not completely lost, but is attenuated and lies in the tail of the task distribution, hence is rare.
FNAF: false-negative adversarial feature
A small but perceptible feature that is present in the ground-truth MRI but disappears upon reconstruction.
Adversarial Examples and Attacks
$\max_{\delta \in \Delta} L(f, x + \delta, y)$
[Figure: FNAF robust training pipeline. A FNAF perturbation is added to the fully-sampled MRI and passed through the under-sampling operator; an encoder-decoder network predicts the reconstructed MRI with the FNAF. The attack loss and the reconstruction loss, both computed against the fully-sampled MRI, combine into the robust training loss.]
FNAF Attack
The FNAF attack extends the standard adversarial objective

$\max_{\delta \in \Delta} L(f, x + \delta, y)$

by injecting the feature into the ground truth as well:

$\max_{\delta' \in \Delta} L(f, x + \delta, y + \delta'), \qquad \delta = u(\delta'), \qquad u(\cdot) = \mathcal{F}^{-1}\left(M \odot \mathcal{F}(\cdot)\right)$

where $u$ is the under-sampling operator ($\mathcal{F}$ the Fourier transform, $M$ the k-space mask) and $L$ is the MSE between the reconstruction $\hat{y}$ and the perturbed ground truth $y + \delta'$.
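A minimal sketch of the attack, assuming a random k-space row mask and an identity stand-in for the reconstruction model; random search over patch locations hedges for the paper's gradient-based optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
mask = rng.random(H) < 0.25               # hypothetical k-space row mask M

def u(x):
    """Under-sampling operator u(x) = F^-1(M . F(x))."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * mask[:, None]))

def recon_model(x_under):
    """Identity stand-in for the learned reconstruction network f."""
    return x_under

def attack_loss(y, delta_prime):
    """MSE between the reconstruction of the perturbed, under-sampled
    input and the perturbed ground truth y + delta'."""
    recon = recon_model(u(y + delta_prime))
    return float(np.mean((recon - (y + delta_prime)) ** 2))

def fnaf_random_search(y, amp=0.5, patch=5, n_trials=100):
    """Maximize the attack loss over small square features."""
    best_loss, best_delta = -1.0, None
    for _ in range(n_trials):
        delta_prime = np.zeros_like(y)
        r, c = rng.integers(0, H - patch, size=2)
        delta_prime[r:r + patch, c:c + patch] = amp
        l = attack_loss(y, delta_prime)
        if l > best_loss:
            best_loss, best_delta = l, delta_prime
    return best_loss, best_delta

y = np.zeros((H, W))                      # toy ground truth
loss_val, delta_prime = fnaf_random_search(y)
# the dropped k-space rows prevent exact recovery, so loss_val > 0
```

A successful attack is a small feature whose reconstruction error stays high, i.e. the model fails to reproduce the feature it was shown.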
Under-sampling information preservation
$\ell(y + \delta,\, y) > \epsilon$
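A sketch of the IP check, assuming $\ell$ is taken as the MSE between the under-sampled inputs with and without the injected feature (the mask and threshold here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random(64) < 0.25              # hypothetical k-space row mask

def u(x):
    """Under-sampling operator F^-1(M . F(x))."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * mask[:, None]))

def information_preserved(y, delta, eps=1e-8):
    """IP criterion l(y + delta, y) > eps: the injected feature must leave
    a measurable trace after under-sampling, otherwise no reconstruction
    method could possibly recover it."""
    return float(np.mean((u(y + delta) - u(y)) ** 2)) > eps

y = np.zeros((64, 64))
delta = np.zeros_like(y)
delta[30:35, 30:35] = 0.5                 # small bright square feature
preserved = information_preserved(y, delta)
```

Only features that pass this check count as valid FNAFs; a feature that vanishes under $u$ supports hypothesis 1 rather than a model failure.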
Adversarial robustness training
$\mathbb{E}_{(x,y)\sim D}\left[\max_{\delta \in \Delta} L(f, x + \delta, y)\right]$
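The min-max training loop can be sketched on a toy scalar regression (illustrative only; the paper trains U-Net and I-RIM reconstruction models):

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_step(w, x, y, lr=0.1, eps=0.1, n_trials=20):
    """One step of min_w E[max_delta L]: random search approximates the
    inner max, then gradient descent on the loss at the worst x + delta.
    Toy scalar model f(x) = w * x with squared error."""
    best_delta, best_loss = 0.0, -1.0
    for _ in range(n_trials):               # inner maximization
        d = rng.uniform(-eps, eps)
        l = (w * (x + d) - y) ** 2
        if l > best_loss:
            best_loss, best_delta = l, d
    xa = x + best_delta                     # outer minimization
    grad = 2.0 * (w * xa - y) * xa          # d/dw of (w*xa - y)^2
    return w - lr * grad

w = 0.0
for _ in range(100):
    x = rng.uniform(0.5, 1.5)
    w = robust_step(w, x, 2.0 * x)          # true relationship: y = 2x
# w settles near 2 despite worst-case input perturbations
```

Each outer step descends the loss at the worst-case perturbation found so far, which is the standard adversarial-training recipe the FNAF robust training follows.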
Experimental Results
Qualitative Results
The top row (A-D) shows a "failed" FNAF attack; the bottom row (E-H) shows a "successful" FNAF attack. Column 1: under-sampled zero-filled images. Column 2: fully-sampled ground-truth images. Column 3: U-Net reconstructed images. Column 4: FNAF-robust U-Net reconstructed images. FNAF reconstruction adversarial losses: (C) 0.000229, (G) 0.00110, (D) $9.73 \times 10^{-5}$, (H) 0.000449.
Information Preservation (IP)
$\ell(y + \delta,\, y) > \epsilon$
FNAF Attack Loss vs. IP Loss
FNAF Location Distribution and Transferability
FNAF location distribution within the 120x120 center crop of the image for (A) U-Net, (B) I-RIM, (C) FNAF-robust U-Net. Taking FNAF examples generated against U-Net and applying them to I-RIM yields an 89.48% attack rate, i.e. the attacks transfer across models.
Real-world Abnormalities reconstruction
(A) Ground truth: small cartilage lesion in the femur. (B) U-Net: the cartilage lesion area is not well defined and resembles increased signal intensity. (C) FNAF-robust U-Net: the cartilage lesion is preserved but less clear.
Limitations
- The FNAF attack hit rate was defined heuristically
- The attack's inner maximization has no optimality guarantee and can be computationally expensive
- Adversarial training is only empirically robust, with no certified guarantees
- Evaluation on real-world abnormalities was limited
Conclusions and Future directions
- Two hypotheses
1) The information of small abnormality features is completely lost through the under-sampling process; 2) the information is not completely lost, but is attenuated and lies in the tail of the task distribution, hence is rare
- Address our limitations
- Robustness in other medical imaging tasks