How intelligent is artificial intelligence? On the surprising and mysterious secrets of deep learning


  1. How intelligent is artificial intelligence? – On the surprising and mysterious secrets of deep learning. Vegard Antun (UiO), Anders C. Hansen (Cambridge, UiO). Joint work with: B. Adcock (SFU), M. Colbrook (Cambridge), N. Gottschling (Cambridge), C. Poon (Bath), F. Renna (Porto). 22 May 2019.

  2. What could possibly go wrong? AI replacing standard algorithms

  3. Mathematical setup: image reconstruction in medical imaging
     ◮ x ∈ C^N is the true image (interpreted as a vector).
     ◮ A ∈ C^(m×N) is the measurement matrix (m < N).
     ◮ y = Ax are the measurements.
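As an aside, a minimal NumPy sketch of this measurement model, assuming a subsampled discrete Fourier matrix as A (the single-coil MRI case); the sizes and the random sampling pattern are illustrative only:

```python
import numpy as np

N = 32 * 32                                  # pixels in the vectorized image
m = N // 4                                   # m < N: 25% subsampling
rng = np.random.default_rng(0)

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # stand-in image x

F = np.fft.fft(np.eye(N), norm="ortho")      # full unitary DFT matrix
rows = rng.choice(N, size=m, replace=False)  # sampling pattern (Omega)
A = F[rows, :]                               # measurement matrix: m Fourier rows
y = A @ x                                    # the measurements y = Ax
x_bp = A.conj().T @ y                        # back projection A* y
```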

  4. [Figure: true image x with sampling pattern Ω and back projection |x̃₁| = |A*y| (MRI); sinogram y = Ax with filtered back projection (FBP) x̂₂ = By (CT)]

  6. Image reconstruction methods
     ◮ Deep learning approach: for a given training set {x₁, …, xₙ}, train a neural network f : C^m → C^N such that ‖f(Axᵢ) − xᵢ‖ ≪ ‖A*xᵢ − xᵢ‖.
     ◮ Sparse regularization: minimize ‖Wz‖ℓ¹ over z ∈ C^N subject to ‖Az − y‖ℓ² ≤ η.
     [Figure: image x and its coefficients Wx for W = wavelets and W = ∇]
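A minimal sketch of the sparse regularization route, solving the unconstrained form min_z ½‖Az − y‖² + λ‖z‖₁ by ISTA with W = identity for brevity (the slide's constrained, wavelet-weighted problem is handled analogously); the step size rule is the standard 1/‖A‖² choice, and λ is illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*||.||_1, valid for complex entries."""
    mag = np.maximum(np.abs(z), 1e-12)
    return np.where(mag > t, (1 - t / mag) * z, 0)

def ista(A, y, lam=0.01, n_iter=200):
    """ISTA for min_z 0.5*||Az - y||^2 + lam*||z||_1  (W = identity here;
    for W = wavelets, threshold Wz and map back with W*)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/||A||^2 step ensures descent
    z = A.conj().T @ y                       # warm start: back projection A* y
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ z - y)      # gradient of the data-fidelity term
        z = soft_threshold(z - step * grad, step * lam)
    return z
```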

  7. [Figure: sparse regularization reconstructions with DB4 wavelets and TV]

  8. Typical sparse regularization result. Let A ∈ C^(m×N) with m < N and y = Ax + e, with ‖e‖₂ ≤ η. Let W ∈ C^(N×N) be unitary, and suppose that AW⁻¹ satisfies the restricted isometry property in levels (RIPL). Then any minimizer x̂ of

        minimize ‖Wz‖₁ over z ∈ C^N subject to ‖Az − y‖ ≤ η

     satisfies

        ‖x̂ − x‖₂ ≲ σ_{s,M}(Wx)₁/√s + η,

     where σ_{s,M}(Wx)₁ = inf{ ‖Wx − z‖₁ : z is (s,M)-sparse }.

  10. Neural network image reconstruction approaches
     ◮ Pure denoisers. Train a neural network φ to learn the noise: f(y) = A*y − φ(A*y).
     ◮ Data consistent denoisers. Train n networks φᵢ, i = 1, …, n, and ensure that the final image is consistent with your data:
        1: Pick α ∈ [0, 1].
        2: Set ỹ₁ = y.
        3: for i = 1, …, n do
        4:    x̃ᵢ = A*ỹᵢ − φᵢ(A*ỹᵢ)
        5:    ŷ = Ax̃ᵢ
        6:    ỹᵢ₊₁ = αŷ + (1 − α)y   (enforce data consistency)
        7: Return: x̃ₙ.
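A minimal sketch of the data-consistent denoising loop above, with the trained networks φᵢ stood in by an arbitrary list of callables; the names and the default α are assumptions, not the authors' code:

```python
import numpy as np

def data_consistent_reconstruction(A, y, denoisers, alpha=0.5):
    """Data-consistent denoising (slide 10): alternate a learned denoiser
    with a relaxation back toward the measured data y."""
    y_tilde = y.copy()
    x_tilde = None
    for phi in denoisers:                      # phi_i: trained denoisers
        bp = A.conj().T @ y_tilde              # back projection A* y_i
        x_tilde = bp - phi(bp)                 # subtract the estimated artefacts
        y_hat = A @ x_tilde                    # re-measure the current estimate
        y_tilde = alpha * y_hat + (1 - alpha) * y   # enforce data consistency
    return x_tilde
```

With α = 0 every pass re-denoises the original measurements y; with α = 1 the loop trusts the re-measured estimate ŷ completely.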

  12. Neural network image reconstruction approaches
     ◮ Learn the physics. Do not warm start your network with A*; rather, learn f(yᵢ) = xᵢ, i = 1, …, n, directly.
     ◮ Unravel n steps of a sparse regularization solver. Learn λᵢ, Kᵢ and Ψᵢ for i = 1, …, n:
        1: x̃₁ = A*y
        2: for i = 1, …, n do
        3:    x̃ᵢ₊₁ = x̃ᵢ − (Kᵢ)ᵀΨᵢ(Kᵢx̃ᵢ) + λᵢA*(Ax̃ᵢ − y)
        4: Return: x̃ₙ₊₁.
        (omitting some details here)
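A minimal sketch of the unrolled iteration, with the learned filters Kᵢ, nonlinearities Ψᵢ and weights λᵢ passed in as plain arrays and callables; this illustrates the structure only, not the MRI-VN implementation:

```python
import numpy as np

def unrolled_reconstruction(A, y, Ks, Psis, lams):
    """n unrolled solver steps (slide 12): a learned regularization term
    (K_i)^T Psi_i(K_i x) plus a weighted data-fidelity term A*(Ax - y)."""
    x = A.conj().T @ y                          # x_1 = A* y
    for K, Psi, lam in zip(Ks, Psis, lams):     # learned per-iteration parameters
        reg = K.T @ Psi(K @ x)                  # (K_i)^T Psi_i(K_i x_i)
        fid = lam * (A.conj().T @ (A @ x - y))  # lam_i A*(A x_i - y)
        x = x - reg + fid                       # update as on the slide
    return x
```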

  13. Networks considered
     ◮ AUTOMAP: low resolution images, 60% subsampling, single coil MRI.
       B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen and M. S. Rosen, 'Image reconstruction by domain-transform manifold learning', Nature, vol. 555, no. 7697, p. 487, Mar. 2018.
     ◮ DAGAN: medium resolution, 20% subsampling, single coil MRI.
       G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, F. Liu, S. Arridge, J. Keegan, Y. Guo et al., 'DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction', IEEE Transactions on Medical Imaging, 2017.
     ◮ Deep MRI: medium resolution, 33% subsampling, single coil MRI.
       J. Schlemper, J. Caballero, J. V. Hajnal, A. Price and D. Rueckert, 'A deep cascade of convolutional neural networks for MR image reconstruction', in International Conference on Information Processing in Medical Imaging, Springer, 2017, pp. 647–658.

  14. Networks considered
     ◮ Ell 50 and Med 50 (FBPConvNet): CT or any Radon transform based inverse problem, with 50 uniformly spaced lines.
       K. H. Jin, M. T. McCann, E. Froustey and M. Unser, 'Deep convolutional neural network for inverse problems in imaging', IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
     ◮ MRI-VN: medium to high resolution, parallel MRI with 15 coil elements and 15% subsampling.
       K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock and F. Knoll, 'Learning a variational network for reconstruction of accelerated MRI data', Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3055–3071, 2018.

  15. How to measure image quality?
     Panels: (1) base image; (2) translated; (3) add ε = 0.32 to all pixels; (4) noisy; (5) another bird; (6) different image.

     Image         (1)   (2)      (3)      (4)      (5)      (6)
     ℓ²-distance   0     215.04   204.80   167.26   216.44   193.15

     Figure from: M. Lohne, Parseval Reconstruction Networks, Master's thesis, UiO, 2019.
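The table's point, that the ℓ² distance barely separates a harmless translation from a genuinely different image, is easy to reproduce; a toy sketch with synthetic data (numbers will differ from the table's):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.random((64, 64))                # stand-in "base image"
shifted = np.roll(base, 3, axis=1)         # the same image, translated
different = rng.random((64, 64))           # an unrelated image

# A small translation and a completely different image can sit at a
# comparable l2 distance from the base image.
print("translated:", np.linalg.norm(base - shifted))
print("different: ", np.linalg.norm(base - different))
```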

  16. Three types of instabilities
     (1) Instabilities with respect to tiny perturbations, that is, ỹ = A(x + r) with ‖r‖ very small.
     (2) Instabilities with respect to small structural changes: a detail such as a tumour may not be captured in the reconstructed image.
     (3) Instabilities with respect to changes in the number of samples: having more information should increase performance.
     V. Antun, F. Renna, C. Poon, B. Adcock, A. Hansen, 'On instabilities of deep learning in image reconstruction – Does AI come at a cost?' (arXiv, 2019).

  17. Finding tiny perturbations. Try to maximize

        Q_x(r) = ½‖f(A(x + r)) − f(Ax)‖²ℓ² − (λ/2)‖r‖²ℓ²,  λ > 0,

     using a gradient ascent procedure.
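A minimal sketch of this gradient ascent, assuming f is a differentiable PyTorch model and A a real matrix; step size, λ and the iteration count are illustrative:

```python
import torch

def find_perturbation(f, A, x, lam=1.0, step=1e-2, n_iter=500):
    """Gradient ascent on Q_x(r) = 0.5*||f(A(x+r)) - f(Ax)||^2
    - (lam/2)*||r||^2 (slide 17). Assumes f is a differentiable model."""
    with torch.no_grad():
        fx = f(A @ x)                          # reference reconstruction f(Ax)
    r = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_iter):
        q = 0.5 * (f(A @ (x + r)) - fx).pow(2).sum() \
            - 0.5 * lam * r.pow(2).sum()
        g, = torch.autograd.grad(q, r)         # dQ/dr via autodiff
        with torch.no_grad():
            r += step * g                      # ascend Q_x to grow the artefact
    return r.detach()
```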

  18. Tiny perturbation – Deep MRI net. [Figure: |x| and |f(Ax)|]

  19. Tiny perturbation – Deep MRI net. [Figure: |x + r₁| and |f(A(x + r₁))|]

  20. Tiny perturbation – Deep MRI net. [Figure: |x + r₂| and |f(A(x + r₂))|]

  21. Tiny perturbation – Deep MRI net. [Figure: |x + r₃| and |f(A(x + r₃))|]

  22. Tiny perturbation – Deep MRI net. [Figure: state-of-the-art (SoA) sparse regularization reconstructions from Ax and from A(x + r₃)]

  23. Tiny perturbation – AUTOMAP. [Figure: |x|, |f(Ax)|, and SoA reconstruction from Ax]

  24. Tiny perturbation – AUTOMAP. [Figure: |x + r₁|, |f(A(x + r₁))|, and SoA reconstruction from A(x + r₁)]

  25. Tiny perturbation – AUTOMAP. [Figure: |x + r₂|, |f(A(x + r₂))|, and SoA reconstruction from A(x + r₂)]

  26. Tiny perturbation – AUTOMAP. [Figure: |x + r₃|, |f(A(x + r₃))|, and SoA reconstruction from A(x + r₃)]

  27. Tiny perturbation – AUTOMAP. [Figure: |x + r₄|, |f(A(x + r₄))|, and SoA reconstruction from A(x + r₄)]

  28. Finding tiny perturbations. What if we instead tried to maximize

        Q_x(r) = ½‖f(A(x + r)) − x‖²ℓ² − (λ/2)‖r‖²ℓ²,  λ > 0,

     that is, measure the deviation from the true image x rather than from f(Ax)?
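In the sketch above, this variant changes only the reference point of the objective, from the clean reconstruction f(Ax) to the true image x:

```python
import torch

def q_true_image(f, A, x, r, lam):
    """Slide 28's variant of Q_x: measure deviation from the true image x
    rather than from the unperturbed reconstruction f(Ax)."""
    return 0.5 * (f(A @ (x + r)) - x).pow(2).sum() - 0.5 * lam * r.pow(2).sum()
```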

  29. Tiny perturbation – AUTOMAP. [Figure: |x|, |f(Ax)|, and SoA reconstruction from Ax]

  30. Tiny perturbation – AUTOMAP. [Figure: |x + r₁|, |f(A(x + r₁))|, and SoA reconstruction from A(x + r₁)]

  31. Tiny perturbation – AUTOMAP. [Figure: |x + r₂|, |f(A(x + r₂))|, and SoA reconstruction from A(x + r₂)]

  32. Tiny perturbation – AUTOMAP. [Figure: |x + r₃|, |f(A(x + r₃))|, and SoA reconstruction from A(x + r₃)]

  33. Tiny perturbation – AUTOMAP. [Figure: |x + r₄|, |f(A(x + r₄))|, and SoA reconstruction from A(x + r₄)]

  34. Tiny perturbation – MRI-VN. [Figure: original x and x + r₁]

  35. Tiny perturbation – MRI-VN. [Figure: f(Ax) and f(A(x + r₁)), zoomed]

  36. Tiny perturbation – MRI-VN. [Figure: SoA reconstructions from Ax and A(x + r₁), zoomed]

  37. Tiny perturbation – Med 50. [Figure: original x and x + r₁]

  38. Tiny perturbation – Med 50. [Figure: f(Ax) and f(A(x + r₁)), zoomed]

  39. Tiny perturbation – Med 50. [Figure: SoA reconstructions from Ax and A(x + r₁), zoomed]

  40. Small structural change – Ell 50. [Figure]

  41. Small structural change – Ell 50. [Figure: f(Ax) and SoA reconstruction from Ax]

  42. Small structural change – DAGAN. [Figure]

  43. Small structural change – DAGAN. [Figure: f(Ax) and SoA reconstruction from Ax]

  44. Small structural change – Deep MRI. [Figure]

  45. Small structural change – Deep MRI. [Figure: f(Ax) and SoA reconstruction from Ax]

  46. Adding more samples. [Figure: reconstructions as the number of samples increases, for Ell 50/Med 50, DAGAN, MRI-VN and Deep MRI]

  47. Summary so far
     ◮ Tiny perturbations lead to a myriad of different artefacts.
     ◮ Variety in the failure to recover structural changes.
     ◮ Must networks be retrained for every subsampling pattern?
     ◮ Universality – instabilities regardless of architecture?
     ◮ Rare events? – Empirical tests are needed.

  48. Can we fix it?
     ◮ Computational power is increasing: we can train and test at a substantially higher rate than just a few years ago.
     ◮ The datasets are growing.
     ◮ Increased knowledge about good learning techniques.
