

  1. Image Restoration Using DNN Hila Levi & Eran Amar Images were taken from: http://people.tuebingen.mpg.de/burger/neural_denoising/

  2. Agenda Domain Expertise vs. End-to-End optimization; Image Denoising and Inpainting: task definition and previous work, presentation of the NNs, experiments and results, conclusions; Image Super Resolution (SR). For the rest of the talk, Neural Network will be written as NN.

  3. Domain Expertise vs. End-to-End optimization How can Neural Networks be utilized for algorithmic challenges? One possible approach is to combine the network with existing well-engineered algorithms ("physically" or via better initialization). On the other hand, there is a "pure" learning approach that looks at the NN as a "black box": one builds a network with some (possibly customized) architecture and lets it optimize its parameters jointly, in an end-to-end manner. In this talk we discuss these two approaches for the task of image restoration.

  4. Image Denoising

  5. Introduction - Image Denoising
  ● Goal - mapping a noisy image to a noise-free image.
  ● Motivation - additive noise and image degradation are probable results of many acquisition channels and compression methods.
  ● The most common and easily simulated noise is Additive White Gaussian (AWG) noise. There is an abundance of more complicated noise types:
  ○ Salt and pepper noise
  ○ Gaussianizable noise types: Poisson noise & Rice-distributed noise (will see examples later)
  ○ Strip noise
  ○ JPEG quantization artifacts

  6. Previous Work Numerous and diverse (non-NN) approaches:
  ● Selectively smoothing parts of the noisy image.
  ● Careful shrinkage of wavelet coefficients.
  ● Dictionary based: approximate noisy patches with a sparse combination of elements from a pre-learned dictionary (trained on a noise-free database), for instance KSVD.
  ● "Non-local statistics" of images: different patches in the same image are often similar in appearance, for example BM3D.

  7. KSVD
  ● Relies on the assumption that natural images admit a sparse decomposition over a redundant dictionary.
  ● In general, KSVD is an iterative procedure used to learn the dictionaries. In this talk, we refer to the denoising algorithm based on KSVD simply as "KSVD" (more details about dictionary-based methods in the SR part of this talk).
  ● Achieved great results in image denoising and inpainting.
  Based on: M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322, 2006.
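  As a rough illustration of dictionary-based patch denoising, the sketch below learns a redundant dictionary from clean patches and sparse-codes noisy patches over it. It uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for the K-SVD solver; the patch size, dictionary size and sparsity settings are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(noisy, clean_training_images, patch_size=(8, 8)):
    # Learn a redundant dictionary from patches of noise-free training images.
    train_patches = np.concatenate([
        extract_patches_2d(img, patch_size, max_patches=2000)
        for img in clean_training_images
    ]).reshape(-1, patch_size[0] * patch_size[1])
    dico = MiniBatchDictionaryLearning(
        n_components=144, alpha=1.0,
        transform_algorithm="omp", transform_n_nonzero_coefs=5,
    )
    dico.fit(train_patches - train_patches.mean(axis=1, keepdims=True))

    # Sparse-code the noisy patches over the dictionary and reconstruct the image.
    noisy_patches = extract_patches_2d(noisy, patch_size)
    flat = noisy_patches.reshape(len(noisy_patches), -1)
    means = flat.mean(axis=1, keepdims=True)
    code = dico.transform(flat - means)
    denoised = (code @ dico.components_ + means).reshape(noisy_patches.shape)
    return reconstruct_from_patches_2d(denoised, noisy.shape)
```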

  8. BM3D
  ● BM3D = Block-Matching and 3D filtering, suggested first in 2007.
  ● Given a 2D square block, it finds all similar 2D blocks and "groups" them together as a 3D array, then performs collaborative filtering (a method the authors designed) on the group to obtain a noise-free 2D estimate.
  ● Estimates of overlapping pixels are averaged.
  ● Gives state-of-the-art results.
  Based on: K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080-2095, 2007.

  9. How to Evaluate a Denoising Technique?
  ● In this part, we will focus on PSNR.
  ● Peak Signal-to-Noise Ratio, expressed on a logarithmic decibel scale. Higher is better.
  ● If I is the original noise-free image and K is the noisy approximation, then PSNR = 10 · log10( MAX_I² / MSE(I, K) ), where MAX_I is the maximum pixel value and MSE is the mean squared error between I and K.
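  A minimal sketch of the PSNR computation above in Python, assuming 8-bit grayscale images stored as NumPy arrays (the function name is ours):

```python
import numpy as np

def psnr(clean, noisy, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means a better reconstruction."""
    mse = np.mean((clean.astype(np.float64) - noisy.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```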

  10. Pure NN approach: MLP
  ● Based on the work of: Harold C. Burger, Christian J. Schuler, and Stefan Harmeling. Image denoising: Can plain Neural Networks compete with BM3D? (June 2012 - the dawn of Neural Networks).
  ● The idea is to learn a Multi-Layer Perceptron (MLP), which is simply a feed-forward network, to map noisy grayscale image patches onto cleaner patches.

  11. Mathematical Formulation Formally, the MLP is a nonlinear function that maps a vector-valued input to a vector-valued output. For a 3-layer network it can be written as f(x) = b3 + W3 · tanh(b2 + W2 · tanh(b1 + W1 · x)), where the Wi are weight matrices, the bi are bias vectors, and the function tanh() operates coordinate-wise.
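  A minimal NumPy sketch of this 3-layer forward pass (the layer sizes below are illustrative, and the random parameters only stand in for trained weights):

```python
import numpy as np

def mlp_forward(x, params):
    """params = (W1, b1, W2, b2, W3, b3); x is a flattened noisy patch."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.tanh(W1 @ x + b1)   # first hidden layer
    h2 = np.tanh(W2 @ h1 + b2)  # second hidden layer
    return W3 @ h2 + b3         # denoised patch estimate (linear output layer)

# Example with random parameters for a 17x17 patch and 2047 hidden units.
rng = np.random.default_rng(0)
d, h = 17 * 17, 2047
params = (rng.normal(0, 0.05, (h, d)), np.zeros(h),
          rng.normal(0, 0.05, (h, h)), np.zeros(h),
          rng.normal(0, 0.05, (d, h)), np.zeros(d))
clean_estimate = mlp_forward(rng.normal(size=d), params)
```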

  12. Training Techniques
  ● Loss function (mean squared error between the output patch and the clean patch), minimized with Stochastic Gradient Descent and the backpropagation algorithm.
  ● Common NN tricks:
  ○ Data normalization (to have zero mean)
  ○ Weight initialization from a normal distribution
  ○ Learning-rate division (in each layer, the learning rate was divided by the number of incoming connections to that layer)
  ● Implemented on GPUs to allow large-scale experiments.

  13. Training and Testing Dataset
  Training: pairs of noisy and clean patches. Given a clean image, the noisy image was generated by applying AWG noise (std=25). Two main sources of clean images:
  ● Berkeley Segmentation dataset (small dataset, ~200 images)
  ● LabelMe dataset (large dataset, ~150,000 images)
  Testing: a standard test set of 11 common images ("Lena", "Barbara", etc.).
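  A minimal sketch of generating a (noisy, clean) training pair as described above, i.e. cropping a clean patch and adding white Gaussian noise with std 25 (the 17x17 patch size matches the architecture strings used later; the pixel range and helper name are our assumptions):

```python
import numpy as np

def make_training_pair(clean_image, patch_size=17, sigma=25.0, rng=np.random.default_rng()):
    h, w = clean_image.shape
    y, x = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    clean_patch = clean_image[y:y + patch_size, x:x + patch_size].astype(np.float64)
    noisy_patch = clean_patch + rng.normal(0.0, sigma, clean_patch.shape)  # AWG noise
    return noisy_patch.ravel(), clean_patch.ravel()  # flattened input / target vectors
```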

  14. Architecture Variations The specific variation of the network is defined by a string with the structure ?-??-?x???:
  ● First, a letter S or L indicating the size of the training set.
  ● Then, a number denoting the patch size.
  ● Then the number of hidden layers, followed by the size of the layers (all of them are of the same size).
  For example: L-17-4x2047.
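  A small illustrative parser for this naming scheme (the function and field names are ours):

```python
def parse_architecture(spec):
    """E.g. 'L-17-4x2047' -> large training set, 17x17 patches, 4 hidden layers of 2047 units."""
    size_code, patch, layers = spec.split("-")
    n_hidden, width = layers.split("x")
    return {
        "training_set": "large" if size_code == "L" else "small",
        "patch_size": int(patch),
        "hidden_layers": int(n_hidden),
        "units_per_layer": int(width),
    }

print(parse_architecture("L-17-4x2047"))
# {'training_set': 'large', 'patch_size': 17, 'hidden_layers': 4, 'units_per_layer': 2047}
```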

  15. Improvement During Training PSNR on the "Barbara" and "Lena" images was tested after every 2 million training examples.

  16. Competing with State of the Art

  17. Competing with State of the Art (cont.)

  18. Noise Levels & "Agnostic" Testing
  ● The MLP was trained for a fixed noise level (std=25).
  ● Testing was done on different noise levels.
  ● The other algorithms have to be supplied with the noise level of the given image.

  19. Mixed Noise Levels for Training
  ● To overcome that, the MLP was trained on several noise levels ("std" from 0 to 105 in steps of 5).
  ● The amount of noise was given during training as an additional input parameter.
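  A minimal sketch of the idea of feeding the noise level as an extra input: the std is appended to the flattened noisy patch as one additional feature (this is our illustration of the idea, not necessarily the paper's exact encoding):

```python
import numpy as np

def make_noise_aware_input(clean_patch, rng=np.random.default_rng()):
    sigma = rng.choice(np.arange(0, 106, 5))  # std drawn from {0, 5, ..., 105}
    noisy_patch = clean_patch + rng.normal(0.0, sigma, clean_patch.shape)
    # Append the noise level as one extra input feature for the MLP.
    return np.append(noisy_patch.ravel(), sigma), clean_patch.ravel()
```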

  20. Handling Different Types of Noise Rice-distributed noise and Poisson noise can be handled by transforming the input image so that the noise becomes AWG-like, and applying the MLP to the transformed image.
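  As an illustration of such a Gaussianizing (variance-stabilizing) transform, the sketch below uses the Anscombe transform, a standard choice for Poisson noise; the slide does not name the specific transform used, so this is only an example.

```python
import numpy as np

def anscombe(x):
    """Maps Poisson-noisy data to approximately unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (more accurate unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Pipeline sketch: Gaussianize, run the AWG-trained denoiser, invert the transform.
# denoised = inverse_anscombe(mlp_denoise(anscombe(poisson_noisy_image)))
```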

  21. Handling Different Types of Noise (2) In most cases it is more difficult, or even impossible, to find Gaussianizing transforms. MLPs allow us to effectively learn a denoising algorithm for a given noise type, provided that the noise can be simulated (no need to redesign the network).

  22. Handling Different Types of Noise (3)
  ● Strip noise:
  ○ Contains a structure.
  ○ No canonical denoising algorithm, so BM3D was used for comparison.
  ● Salt & pepper noise:
  ○ Noisy values are not correlated with the original image values.
  ○ Median filtering as a baseline for comparison (see the sketch after this list).
  ● JPEG quantization artifacts:
  ○ Due to the image compression (blocky image and loss of edge clarity).
  ○ Not random, but completely determined by the input.
  ○ Compared against the common method for handling JPEG artifacts (re-application of JPEG).
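  A minimal sketch of the median-filtering baseline mentioned for salt & pepper noise, using SciPy (the 3x3 neighborhood size is an assumption):

```python
from scipy.ndimage import median_filter

def salt_pepper_baseline(noisy_image, size=3):
    """Replace each pixel by the median of its size x size neighborhood."""
    return median_filter(noisy_image, size=size)
```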

  23.

  24. The Power of Pure Learning Achieved state-of-the-art results. Key ingredients for success:
  ● The capacity of the network should be large enough (in terms of layers and units)
  ● Large patch size
  ● Huge training set (tens of millions of examples)
  However, the best MLP performs well only with respect to a single noise level. The attempt to overcome this (training on mixed noise levels) improved generalization to different noise levels, but still achieved less than the original version on the fixed noise level.

  25. Image Inpainting

  26. Blind / Non-Blind Image Inpainting
  ● Goal - recovering missing pixel values or removing sophisticated patterns from the image.
  ● Non-blind vs. blind: known vs. unknown locations of the corrupted pixels.
  ● Some image denoising algorithms can be applied (with minor modifications) to non-blind image inpainting and achieve state-of-the-art results.
  ● Blind inpainting is a much harder problem; previous methods imposed strong assumptions on the inputs.

  27. Exploiting Domain Expertise
  ● Previous works based on Sparse Coding techniques perform well in practice, despite being linear.
  ● It was suggested that non-linear "deep" models might achieve superior performance.
  ● Multi-layer NNs are such deep models.
  ● Junyuan Xie, Linli Xu and Enhong Chen suggested, in Image Denoising and Inpainting with Deep Neural Networks (2012), to combine "sparse" with "deep". We now present their work.

  28. DA, SDA and SSDA
  ● A Denoising Autoencoder (DA) is a 2-layer NN that tries to reconstruct the original input given a noisy estimate of it.
  ● Used in several other Machine Learning fields.
  ● Concatenating multiple DAs gives a Stacked Denoising Autoencoder (SDA).
  ● The authors proposed a sparsity-induced Stacked Denoising Autoencoder (SSDA).

  29. Single DA - Mathematical Formulation
  ● Noise/clean relation:
  ● Learning objective:
  ● Layers formulation:
  ● Activation function:
  ● Loss function:
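  A standard single-DA formulation matching the labels above, written as a sketch under the usual conventions (the notation and the regularization term are ours; the paper's exact form and weights may differ):

```latex
\begin{align*}
\text{Noise/clean relation:}\quad & y = \eta(x) \quad\text{($y$ is a corrupted observation of the clean patch $x$)}\\
\text{Learning objective:}\quad & \theta^{*} = \arg\min_{\theta}\; L(\mathcal{D};\theta)\\
\text{Layers formulation:}\quad & h(y) = \sigma(W y + b), \qquad \hat{x}(y) = \sigma(W' h(y) + b')\\
\text{Activation function:}\quad & \sigma(z) = \frac{1}{1 + e^{-z}} \quad\text{(applied coordinate-wise)}\\
\text{Loss function:}\quad & L(\mathcal{D};\theta) = \frac{1}{N}\sum_{i=1}^{N} \big\lVert x_i - \hat{x}(y_i)\big\rVert_2^2 \;+\; \lambda\,\lVert \theta \rVert_2^2
\end{align*}
```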

  30. SSDA Formulation - how to make it "sparse"?
  ● Each DA is pre-trained w.r.t. a sparsity-inducing loss function (see the sketch below).
  ● Parameters used:
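  As a sketch of one common sparsity-inducing loss for DA pre-training (a KL-divergence penalty on the mean hidden activations; this is our illustration and may differ from the paper's exact form):

```latex
% \hat{\rho}_j: mean activation of hidden unit j over the training set,
% \rho: small target activation level, \beta: weight of the sparsity penalty.
L_{\text{sparse}}(\mathcal{D};\theta)
  = L(\mathcal{D};\theta) + \beta \sum_{j}\mathrm{KL}\!\left(\rho \,\Vert\, \hat{\rho}_j\right),
\qquad
\mathrm{KL}\!\left(\rho \,\Vert\, \hat{\rho}_j\right)
  = \rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}
```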
