VS-Net: Variable Splitting Network for Accelerated Parallel MRI Reconstruction




  1. VS-Net: Variable Splitting Network for Accelerated Parallel MRI Reconstruction
  Jinming Duan†1,2, Jo Schlemper†2,3, Chen Qin2, Cheng Ouyang2, Wenjia Bai2, Carlo Biffi2, Ghalib Bello2, Ben Statton2, Declan P. O'Regan2, Daniel Rueckert2
  1 School of Computer Science, University of Birmingham, UK; 2 Imperial College London, UK; 3 Hyperfine Research, CT, USA. † contributed equally.

  2. Motivation:
  ◮ A slow MR imaging process leads to low patient throughput, patient discomfort, image artifacts, etc.
  ◮ Compressed sensing and parallel MRI are often used to speed up acquisition
  ◮ Parallel MRI is currently common in clinical practice
  ◮ Deep learning based approaches [Schlemper, et al.; Hammernik, et al.; Yan, et al.] have shown great promise
  ◮ The aim is to combine parallel MRI, compressed sensing and deep learning

  8. General CS parallel MRI model:
  Technically, one can reconstruct a clean image m ∈ C^N by minimizing

    \min_m \frac{\lambda}{2} \sum_{i=1}^{n_c} \| D F S_i m - y_i \|_2^2 + R(m),   (1)

  where
  ◮ λ is a weighting parameter balancing the data term against R(m)
  ◮ y_i ∈ C^M (M < N) is the undersampled k-space of the i-th coil (n_c coils)
  ◮ D ∈ R^{M×N} is the sampling matrix that zeros out entries not acquired
  ◮ F ∈ C^{N×N} is the Fourier transform matrix
  ◮ S_i ∈ C^{N×N} is the i-th coil sensitivity, precomputed from the fully sampled k-space centre using ESPIRiT [Uecker, et al.]
  ◮ R(m) is a regularization term, e.g. TV, TGV, wavelet L1, etc.
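The forward model D F S_i and the objective in (1) are easy to prototype with FFTs. Below is a minimal NumPy sketch (not the authors' code); the toy shapes, the Cartesian `mask` standing in for D, and the random sensitivities are illustrative assumptions:

```python
import numpy as np

def forward(m, S, mask):
    """Apply D F S_i to image m for every coil: mask the 2D FFT of S_i * m."""
    # S has shape (n_c, H, W); broadcasting applies each coil sensitivity.
    return mask * np.fft.fft2(S * m, norm="ortho")

def cs_objective(m, S, mask, y, lam, reg):
    """Evaluate lambda/2 * sum_i ||D F S_i m - y_i||_2^2 + R(m)."""
    residual = forward(m, S, mask) - y
    return 0.5 * lam * np.sum(np.abs(residual) ** 2) + reg(m)

# Toy example: 4 coils, 32x32 image, every other k-space line sampled.
rng = np.random.default_rng(0)
H = W = 32
S = rng.standard_normal((4, H, W)) + 1j * rng.standard_normal((4, H, W))
mask = np.zeros((H, W))
mask[::2, :] = 1.0                                  # D: keep every other row
m_true = rng.standard_normal((H, W)) + 1j * rng.standard_normal((H, W))
y = forward(m_true, S, mask)                        # noiseless measurements
print(cs_objective(m_true, S, mask, y, lam=1.0, reg=lambda m: 0.0))  # → 0.0
```

With noiseless data the data term vanishes at the true image, which is a quick sanity check of the forward operator.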

  15. Variable splitting optimization:
  We introduce splitting variables u ∈ C^N and {x_i ∈ C^N}_{i=1}^{n_c}, converting (1) into

    \min_{m, u, x_i} \frac{\lambda}{2} \sum_{i=1}^{n_c} \| D F x_i - y_i \|_2^2 + R(u) \quad \text{s.t.} \quad m = u, \; S_i m = x_i, \; \forall i \in \{1, 2, \dots, n_c\}.

  We add the constraints back via the penalty function method and minimize

    \min_{m, u, x_i} \frac{\lambda}{2} \sum_{i=1}^{n_c} \| D F x_i - y_i \|_2^2 + R(u) + \frac{\alpha}{2} \sum_{i=1}^{n_c} \| x_i - S_i m \|_2^2 + \frac{\beta}{2} \| u - m \|_2^2.   (2)

  To minimize (2), we alternately optimize m, u and x_i via the three subproblems:

    u^{k+1} = \arg\min_u \frac{\beta}{2} \| u - m^k \|_2^2 + R(u)
    x_i^{k+1} = \arg\min_{x_i} \frac{\lambda}{2} \sum_{i=1}^{n_c} \| D F x_i - y_i \|_2^2 + \frac{\alpha}{2} \sum_{i=1}^{n_c} \| x_i - S_i m^k \|_2^2   (3)
    m^{k+1} = \arg\min_m \frac{\alpha}{2} \sum_{i=1}^{n_c} \| x_i^{k+1} - S_i m \|_2^2 + \frac{\beta}{2} \| u^{k+1} - m \|_2^2

  Here k ∈ {1, ..., n_it} denotes the k-th iteration; α and β are the introduced penalty weights.
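The alternating scheme in (3) is just a loop over three subproblem solvers. A minimal sketch, with the solvers passed in as placeholder callables (the function names are hypothetical, not from the paper):

```python
def variable_splitting(m0, prox_R, solve_x, solve_m, n_it):
    """Alternate over the u-, x_i- and m-subproblems of Eq. (3).

    prox_R : solves the u-subproblem (a denoising step with weight beta)
    solve_x: solves the decoupled x_i-subproblems (data consistency)
    solve_m: solves the m-subproblem (combines u^{k+1} and the x_i^{k+1})
    """
    m = m0
    for k in range(n_it):
        u = prox_R(m)       # u^{k+1} = argmin_u beta/2 ||u - m^k||^2 + R(u)
        x = solve_x(m)      # x_i^{k+1}: per-coil, closed form in k-space
        m = solve_m(u, x)   # m^{k+1}: weighted average of u^{k+1} and x_i^{k+1}
    return m
```

In VS-Net this loop is what gets unrolled: each pass becomes one network block, with the u-step realized by a learned denoiser.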

  19. Variable splitting optimization:
  An optimal solution (m^*) may be found by iterating over u^{k+1}, x_i^{k+1} and m^{k+1} using

    u^{k+1} = \text{denoiser}(m^k)
    x_i^{k+1} = F^{-1} \big( (\lambda D^T D + \alpha I)^{-1} (\alpha F S_i m^k + \lambda D^T y_i) \big), \; \forall i \in \{1, 2, \dots, n_c\}   (4)
    m^{k+1} = \big( \beta I + \alpha \sum_{i=1}^{n_c} S_i^H S_i \big)^{-1} \big( \beta u^{k+1} + \alpha \sum_{i=1}^{n_c} S_i^H x_i^{k+1} \big)

  ◮ The 1st equation is an image denoising solver (i.e. a denoiser)
  ◮ The 2nd equation is a closed-form, point-wise and coil-wise data consistency step
  ◮ The 3rd equation is a closed-form, point-wise weighted average
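Because D^T D and the sums of S_i^H S_i are diagonal, all three updates in (4) reduce to point-wise operations and fit in a few lines of NumPy. The sketch below uses an identity denoiser as a stand-in for the learned u-step; shapes and names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

F  = lambda z: np.fft.fft2(z, norm="ortho")    # Fourier transform per coil
Fi = lambda z: np.fft.ifft2(z, norm="ortho")   # inverse Fourier transform

def x_step(m, S, mask, y, lam, alpha):
    """x_i^{k+1} = F^{-1}((lam D^T D + alpha I)^{-1}(alpha F S_i m + lam D^T y_i)).
    In k-space D^T D is diagonal (the 0/1 mask), so the inverse is point-wise."""
    k = (alpha * F(S * m) + lam * mask * y) / (lam * mask + alpha)
    return Fi(k)

def m_step(u, x, S, alpha, beta):
    """m^{k+1} = (beta I + alpha sum_i S_i^H S_i)^{-1}(beta u + alpha sum_i S_i^H x_i).
    sum_i S_i^H S_i is also diagonal, so again no matrix inverse is needed."""
    num = beta * u + alpha * np.sum(np.conj(S) * x, axis=0)
    den = beta + alpha * np.sum(np.abs(S) ** 2, axis=0)
    return num / den

def vs_net_iteration(m, S, mask, y, lam, alpha, beta, denoiser=lambda m: m):
    u = denoiser(m)                        # learned CNN in VS-Net; identity here
    x = x_step(m, S, mask, y, lam, alpha)  # per-coil data consistency
    return m_step(u, x, S, alpha, beta)    # point-wise weighted average
```

A quick sanity check of the closed forms: with a fully sampled mask, y_i = F S_i m, and the identity denoiser, one iteration leaves m unchanged.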
