

  1. Deep Inversion: Autoencoders for Learned Regularization of Inverse Problems. Yoeri Boink*†, Srirang Manohar†, Leonie Zeune*‡, Leon Terstappen‡, Stephan van Gils*, Christoph Brune*. Biomedical Photonic Imaging†, Medical Cell Biophysics‡, Applied Mathematics*. Imaging and ML, IHP Paris, Apr 3, 2019.

  2. OUTLINE
  1. Robustness of learned primal-dual (L-PD) reconstruction in photoacoustic tomography (PAT)
  2. Functional learning with an unrolled gradient descent (GD) scheme: guaranteed convergence and stability!
  3. Learning latent representations of data space and image space via a variational autoencoder (VAE)

  3. VARIATIONAL METHODS AND DEEP LEARNING. [Diagram: classical inverse problems (partial differential equations, measurements, parameter estimation, hidden parameters) set against deep variational networks and complex data science, with the open questions: generalization? regularization theory?]

  4. VARIATIONAL METHODS AND DEEP LEARNING. Deep residual neural networks are connected to partial differential equations. [Diagram pairing variational concepts (norms, differential operators, vector fields, scale-space/harmonic analysis, regularization theory, inverse problems, multimodality, time-dependent modeling, nonconvex functions) with deep-learning concepts (activation functions such as ReLU and sigmoid, convolutions per layer, scattering networks, GANs, VAEs, residual/skip connections, generalization properties, multiple populations?).] Chen, Pock - Trainable Nonlinear Reaction Diffusion (2016); Chen et al. - Neural Ordinary Differential Equations (2018); Ciccone et al. - Stable Deep Networks from Non-Autonomous DEs (2018); Mallat - Understanding deep convolutional networks (2016); Haber, Ruthotto - Stable architectures for deep neural networks (2018).

  5. MATHEMATICS OF DEEP LEARNING. [Diagram: deep learning (DL) framed by architecture, optimization and regularization, with the keywords uncertainty, bi-level, parameters, non-convex, accurate, large-scale, stable, real-time, reproducible, training data, generalization.] Vidal et al. - Mathematics of Deep Learning (2018).

  6. DEEPER INSIGHTS INTO DEEP INVERSION

  7. 4TU PROPOSAL PRECISION MEDICINE 2018 (natural, performance, robust, simple).
  Challenges: C1: sparse/big data; C2: multi-dimensionality, heterogeneity; C3: non-linearity; C4: super-resolution.
  Deep learning for model-driven imaging:
  ▪ "classical" physics-based reconstruction enriched by deep learning → latent operator parameters, robustness
  ▪ "black-box" deep learning enriched by physical constraints → more natural, latent data structures, robustness

  8. FINDING NEEDLES IN A HAYSTACK

  9. FINDING NEEDLES IN A HAYSTACK

  10. Denoising and Scale Analysis by Local Diffusion
  • given a noisy input
  • denoise the image via the total variation (TV) flow → time-discrete case: solve in every step the ROF [Rudin et al., 1992] problem
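The ROF problem referred to on this slide is rendered as an image and is missing from the transcript; as a reminder, in its standard form (which may differ in notation from the slide), one time-discrete TV-flow step reads:

```latex
% One implicit Euler step of the TV flow u_t = -p, p \in \partial TV(u),
% written as the ROF (Rudin-Osher-Fatemi, 1992) problem with step size \tau:
\begin{align*}
  u^{k+1} &= \operatorname*{arg\,min}_{u}\ \frac{1}{2\tau}\,\lVert u - u^{k}\rVert_{2}^{2} + \mathrm{TV}(u),
  \qquad u^{0} = f \ \text{(noisy input)},\\
  \mathrm{TV}(u) &= \int_{\Omega} \lvert \nabla u \rvert \,\mathrm{d}x .
\end{align*}
```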

  11. Denoising and Scale Analysis by Local Diffusion. Idea: a solution of the nonlinear eigenvalue problem is transformed into a peak in the spectral domain → the time point t at which such an eigenfunction is completely removed depends on the size and height of the disc and therefore acts as a scale indicator. [Gilboa 2013, 2014], [Horesh, Gilboa 2015], [Burger et al. 2015, 2016], [Aujol, Gilboa, Papadakis 2015, 2017]
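A minimal sketch of the spectral construction behind this slide, in the standard notation of the cited spectral TV literature (the notation is mine, not taken from the slide):

```latex
% TV eigenfunctions and the spectral transform:
\begin{align*}
  &\text{nonlinear eigenvalue problem:} && \lambda\,u \in \partial\,\mathrm{TV}(u),\\
  &\text{TV flow:} && \partial_t u(t) = -p(t),\quad p(t)\in\partial\,\mathrm{TV}(u(t)),\quad u(0)=f,\\
  &\text{spectral response:} && \phi(t) = t\,\partial_{tt} u(t).
\end{align*}
% A disc-shaped eigenfunction of radius r and height h shrinks linearly under the flow
% and vanishes at a finite extinction time proportional to h*r, so phi(t) shows a peak
% there; the peak position is the scale indicator mentioned on the slide.
```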

  12. Contrast Invariance of L1-TV Denoising
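For reference, the L1-TV model meant here, in its standard form (the formula is not in the transcript):

```latex
% L1-TV denoising: TV regularisation with an L1 data-fidelity term.
\begin{equation*}
  \min_{u}\ \lVert u - f \rVert_{1} + \lambda\,\mathrm{TV}(u).
\end{equation*}
% Unlike the quadratic ROF fidelity, the L1 fidelity yields contrast invariance:
% depending on its scale relative to lambda, a structure is either kept with its
% full contrast or removed entirely.
```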

  13. Zeune et al. - Multiscale Segmentation via Bregman Distances and Nonlinear Spectral Analysis (2017)

  14. CNN CLASSIFICATION FOR CANCER-ID. [Diagram: input image with fluorescent channels → convolution network with three max-pooling layers (32, 64, 128 filters) → fully-connected layers → output: CTC probability vs. no-CTC probability (example outputs: 0.9722 / 0.0278 and 0.010 / 0.990).]
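A minimal PyTorch sketch of a classifier along the lines of this slide: three convolution + max-pooling blocks with 32, 64 and 128 filters, followed by fully connected layers and a two-class (CTC / no CTC) output. The number of input fluorescence channels, the image size, the kernel sizes and the FC width are assumptions, not taken from the slide.

```python
import torch
import torch.nn as nn

class CTCClassifier(nn.Module):
    """CNN with three conv + max-pool blocks (32, 64, 128 filters) and an FC head."""
    def __init__(self, in_channels: int = 3, img_size: int = 64):  # assumed values
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # block 1: 32 filters
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # block 2: 64 filters
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # block 3: 128 filters
        )
        feat_dim = 128 * (img_size // 8) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 256), nn.ReLU(),               # fully connected layers
            nn.Linear(256, 2),                                 # CTC / no-CTC logits
        )

    def forward(self, x):
        return torch.softmax(self.classifier(self.features(x)), dim=1)

# Example: class probabilities for a batch of fluorescence images.
probs = CTCClassifier()(torch.rand(4, 3, 64, 64))   # shape (4, 2)
```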

  15. AUTOENCODERS AND GRADIENT FLOWS
  • assume we have a trained autoencoder f with tied weights
  • x and f(x) live in the same vector space, i.e. the reconstruction residual is a vector field G(x) pointing from x to the reconstruction f(x)
  • under some (fulfilled) assumptions, G(x) is the gradient field of an energy E(x)
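The formulas on this slide are rendered as images; a sketch of the statement in standard notation, with the tied-weight convention and the sign choice being mine:

```latex
% Autoencoder f = D \circ E with tied weights (decoder weights are the transpose of
% the encoder weights, W_D = W_E^T, is the usual convention).
\begin{align*}
  G(x) &:= f(x) - x \quad\text{(vector field from } x \text{ to } f(x)\text{)},\\
  G(x) &= -\nabla E(x) \quad\Longrightarrow\quad \dot{x} = G(x)\ \text{is a gradient flow that decreases } E .
\end{align*}
```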

  16. Convolutional AE trained on noise-free data (6000 training samples, 1000 test samples). Architecture: 3 convolution + pooling blocks (32 filters in total) for encoder and decoder → 4963 parameters. [Figure: reconstructions of the ground truth (GT) from inputs with no noise and with noise of std. dev. 0.05, 0.1, 0.2 and 0.5.]
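A PyTorch sketch of a small convolutional autoencoder in the spirit of slides 16-18: three convolution + pooling blocks in the encoder and three upsampling + convolution blocks in the decoder. The split of the "32 filters in total" across layers is an assumption, so the parameter count will not match the reported 4963 exactly.

```python
import torch
import torch.nn as nn

class SmallConvAE(nn.Module):
    """Small convolutional autoencoder: 3 conv+pool blocks down, 3 conv+upsample blocks up.
    Channel widths (4, 8, 16) are an assumed split; the slide only states '32 filters in total'."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        def down(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        def up(cin, cout):
            return nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())
        self.encoder = nn.Sequential(down(in_channels, 4), down(4, 8), down(8, 16))
        self.decoder = nn.Sequential(up(16, 8), up(8, 4),
                                     nn.Upsample(scale_factor=2),
                                     nn.Conv2d(4, in_channels, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SmallConvAE()
print(sum(p.numel() for p in model.parameters()))  # rough size check
x_hat = model(torch.rand(1, 1, 64, 64))            # reconstruct a test image
```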

  17. Convolutional AE trained on noisy data (std. dev. 0.2), 6000 training samples, 1000 test samples. Architecture: 3 convolution + pooling blocks (32 filters in total) for encoder and decoder → 4963 parameters. [Figure: reconstructions of the ground truth (GT) from inputs with no noise and with noise of std. dev. 0.05, 0.1, 0.2 and 0.5.]

  18. Convolutional AE trained on noise-free data (6000 training samples, 1000 test samples). Architecture: 3 convolution + pooling blocks (32 filters in total) for encoder and decoder → 4963 parameters. [Figure: reconstructions for the ground truth (GT), no noise, a different radius, and a different radius when trained on the correct radius.]

  19. DEEP LEARNING OF CIRCULATING TUMOR CELLS (Leonie Zeune). Zeune et al. - Deep learning for tumor cell classification (2019)

  20. PHOTOACOUSTIC BREAST IMAGING (illustrations by Sjoukje Schoustra, BMPI group, University of Twente). [Figure: angiogenesis (Folkman, 1996); schematic with laser, breast, tumour and ultrasound detectors.]

  21. PHOTOACOUSTIC BREAST IMAGING (illustrations by Sjoukje Schoustra, BMPI group, University of Twente). [Figure: angiogenesis (Folkman, 1996); laser pulse, breast, tumour, ultrasound detectors.] Photoacoustic effect: light absorption → temperature rise → expansion → pressure rise → ultrasound wave → signals → reconstruction.

  22. PHOTOACOUSTIC TOMOGRAPHY. [Figure: angiogenesis (Folkman, 1996); 2D slice-based imaging with rotating fibres and sensor array.¹] We make use of a projection model with calibration² (the PAT operator). ¹ Van Es, Vlieg, Biswas, Hondebrink, Van Hespen, Moens, Steenbergen, Manohar - Coregistered photoacoustic and ultrasound tomography of healthy and inflamed human interphalangeal joints (2015). ² Wang, Xing, Zeng, Chen - Photoacoustic imaging with deconvolution (2004).
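The calibrated projection model itself appears only as an image on the slide; as a placeholder, the reconstruction problem treated on the following slides has the generic linear form below (notation is mine):

```latex
% Generic linear form of the calibrated PAT projection model:
% x = initial pressure image, y = measured sensor data, A = PAT operator.
\begin{equation*}
  y = A\,x + \varepsilon, \qquad A : X \to Y, \quad \varepsilon\ \text{measurement noise}.
\end{equation*}
```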

  23. INVERSE RECONSTRUCTION METHODS
  ▪ Direct reconstruction: filtered backprojection (FBP): data → FBP (using the operator) → reconstruction
  ▪ Iterative reconstruction: total variation (TV), solved with PDHG: data → PDHG (using the operator) → reconstruction
  ▪ Learned post-processing: U-Net³: data → FBP (using the operator) → U-Net → reconstruction
  ³ Jin, McCann, Froustey, Unser - Deep Convolutional Neural Network for Inverse Problems in Imaging (2017)
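A sketch of the iterative TV reconstruction named in the list above, written as a saddle-point problem and solved with the standard PDHG (Chambolle-Pock) iterations; the parameter choices used on the slide are not in the transcript:

```latex
% TV-regularised reconstruction, written as a saddle-point problem with K = (A; \nabla):
\begin{equation*}
  \min_{x}\ \tfrac{1}{2}\lVert A x - y\rVert_{2}^{2} + \alpha\,\mathrm{TV}(x)
  \;=\; \min_{x}\,\max_{q}\ \langle K x, q\rangle - F^{*}(q) + G(x),
\end{equation*}
% where F collects the data-fidelity and TV terms and G is, e.g., zero or a
% nonnegativity constraint. Standard PDHG updates with \sigma\tau\lVert K\rVert^{2}\le 1,\ \theta=1:
\begin{align*}
  q^{n+1} &= \mathrm{prox}_{\sigma F^{*}}\bigl(q^{n} + \sigma K \bar{x}^{n}\bigr),\\
  x^{n+1} &= \mathrm{prox}_{\tau G}\bigl(x^{n} - \tau K^{*} q^{n+1}\bigr),\\
  \bar{x}^{n+1} &= x^{n+1} + \theta\,(x^{n+1} - x^{n}).
\end{align*}
```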

  24. MODEL-BASED REGULARISED RECONSTRUCTION: CHOOSING FUNCTION SPACES? Rudin, Osher, Fatemi - Nonlinear total variation based noise removal algorithms (1992); Bredies, Kunisch, Pock - Total Generalised Variation (2010); Kingsbury - The dual-tree complex wavelet transform: a new efficient tool for image restoration and enhancement (1998); Boink, Lagerwerf, Steenbergen, van Gils, Manohar, Brune - A framework for directional and higher-order reconstruction in photoacoustic tomography (2018).

  25. FROM MODEL-DRIVEN TO DATA-DRIVEN
  ▪ No regularisation parameter;
  ▪ Better robustness to noise;
  ▪ Faster reconstruction.
  However,
  ▪ No proven stability or convergence.
  Meinhardt, Möller, Hazirbas, Cremers - Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems (2017); Adler, Öktem - Learned Primal-Dual Reconstruction (2017); Hauptmann, Lucka, Betcke, Huynh, Cox, Beard, Ourselin, Arridge - Model based learning for accelerated, limited-view 3D photoacoustic tomography (2017)

  26. [Figure: schematic with a CNN; the remaining slide content is graphical.]

  27. MODEL CHOICES: PAT SIMULATION AND RECONSTRUCTION NETWORK
  Resolution 1.5625 mm; number of pixels 192x192; number of sensors 32.
                                   'large' network   small network
  # primal-dual iterations (N)           10                5
  # primal/dual channels (k)              5                2
  # hidden layers                         2                2
  # channels in hidden layers            32               32
  activation functions                 ReLU             ReLU
  filter size convolutions              3x3              3x3
  Training data: 768 training images, 192 test images, scaled between 0 and 1. DRIVE dataset: Staal, Abramoff, Niemeijer, Viergever, Ginneken - Ridge-based vessel segmentation in color images of the retina (2004).
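A hedged PyTorch sketch of the learned primal-dual network these hyperparameters refer to (in the style of Adler, Öktem 2017, cited on slide 25): N unrolled iterations, k primal/dual channels, two hidden 3x3 convolution layers with 32 channels and ReLU per update block. The forward operator and its adjoint are passed in as functions; initialisations and other details are assumptions, not taken from the slide.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, hidden=32):
    """Two hidden 3x3 conv layers (32 channels, ReLU) followed by a linear 3x3 output conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
        nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        nn.Conv2d(hidden, out_ch, 3, padding=1),
    )

class LearnedPrimalDual(nn.Module):
    """Unrolled primal-dual scheme with learned update blocks (sketch)."""
    def __init__(self, fwd_op, adj_op, n_iter=10, n_channels=5):
        super().__init__()
        self.A, self.At = fwd_op, adj_op        # forward operator and its adjoint
        self.k = n_channels
        # dual update sees [h, A(f_1), y]; primal update sees [f, A^T(h_1)]
        self.dual_nets = nn.ModuleList(conv_block(n_channels + 2, n_channels) for _ in range(n_iter))
        self.primal_nets = nn.ModuleList(conv_block(n_channels + 1, n_channels) for _ in range(n_iter))

    def forward(self, y):
        b = y.shape[0]
        f = torch.zeros(b, self.k, *self.At(y).shape[-2:], device=y.device)   # primal memory
        h = torch.zeros(b, self.k, *y.shape[-2:], device=y.device)            # dual memory
        for dual_net, primal_net in zip(self.dual_nets, self.primal_nets):
            h = h + dual_net(torch.cat([h, self.A(f[:, :1]), y], dim=1))
            f = f + primal_net(torch.cat([f, self.At(h[:, :1])], dim=1))
        return f[:, :1]                                                       # reconstruction

# Usage sketch: an identity "operator" stands in for the calibrated PAT projection model.
A = lambda x: x
At = lambda y: y
net = LearnedPrimalDual(A, At, n_iter=5, n_channels=2)   # the 'small' network settings
recon = net(torch.rand(1, 1, 192, 192))
```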
