Efficient Neural Networks for Image Restoration Yulun Zhang - - PowerPoint PPT Presentation
Efficient Neural Networks for Image Restoration
Yulun Zhang
Supervisor: Prof. Yun Fu
SMILE Lab, Northeastern University, Boston, US
Research summary
Deep convolutional neural networks for image restoration:
1. Residual dense network [CVPR-2018]: comparable state-of-the-art performance with far fewer parameters.
2. Residual channel attention network [ECCV-2018]: a very deep network with channel attention.
3. Residual non-local attention network for image restoration [ICLR-2019]: mixed channel and spatial attention.
Research status
Prior methods: SRCNN, FSRCNN, VDSR, EDSR, MemNet, SRResNet.
Feature extraction in HR space. Limitations: increased computational complexity; blurs the original LR inputs.
Feature extraction in LR space. Limitations: neglects hierarchical features in the LR feature space; feature extraction stays local; very deep and wide networks are hard to train.
Challenges: hard to recover lost details; hard to train very deep and wide networks; objects appear at various scales, angles of view, and aspect ratios.
Residual Dense Network for Image Super-Resolution (CVPR-2018)
Method (figure): the RDN pipeline. Shallow feature extraction (two convs) feeds a chain of residual dense blocks (Block 1 … Block D); the block outputs are concatenated and fused by a 1x1 conv (global feature fusion), combined with the shallow features via global residual learning, then passed through the upscale module and a final conv to map LR to HR.
Residual dense block (figure): within Block d, each Conv+ReLU layer takes the concatenation of all preceding layers' outputs (contiguous memory with Block d-1); a 1x1 conv performs local feature fusion, and local residual learning adds the block input back before passing features on to Block d+1.
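The residual dense block described above can be sketched in PyTorch as follows. This is a minimal sketch: the channel count, growth rate, and layer count are illustrative defaults, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Sketch of an RDB: C densely connected Conv+ReLU layers with growth
    rate G, 1x1-conv local feature fusion, and local residual learning."""
    def __init__(self, channels=64, growth=32, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + c * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for c in range(num_layers)
        )
        # Local feature fusion: 1x1 conv back down to `channels`
        self.lff = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # Dense (contiguous-memory) connections: each layer sees all
            # preceding feature maps, concatenated along the channel axis.
            feats.append(layer(torch.cat(feats, dim=1)))
        # Local residual learning: add the block input back after fusion.
        return x + self.lff(torch.cat(feats, dim=1))

block = ResidualDenseBlock()
out = block(torch.randn(1, 64, 24, 24))  # output keeps the input shape
```

Because local feature fusion restores the original channel count, RDBs can be chained freely and their outputs concatenated for global feature fusion.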
Study of D, C, and G.
D denotes the number of RDBs, C the number of Conv layers per RDB, and G the growth rate. Analysis: RDN allows deeper and wider networks, from which more hierarchical features are extracted for higher performance.
Ablation Investigation.
Ablation study of contiguous memory (CM), local residual learning (LRL), and global feature fusion (GFF). Analysis: the quantitative and visual results demonstrate the effectiveness and benefits of the proposed CM, LRL, and GFF.
Visual Results with BI Degradation Model.
Visual Results with BD Degradation Model.
Visual Results with DN Degradation Model.
More results about image restoration
arXiv-2018: Residual Dense Network for Image Restoration. https://arxiv.org/abs/1812.10477
Motivations for our next work (ECCV-2018-RCAN)
Less GPU memory: a wide network can consume too much GPU memory (4 GPUs, or 1 GPU with batch splitting).
Smaller model size: further reduce the number of network parameters (CVPRW-17-EDSR: 43M; CVPR-18-RDN: 22M).
Better performance: a very deep network should achieve better performance.
Image super-resolution using very deep residual channel attention networks (ECCV-2018)
Limitations of previous methods
Whether deeper networks can further contribute to image SR, and how to construct very deep trainable networks, remains to be explored. Deepest networks for image SR so far: ICCV-2017-MemNet_M10R10_212C64, CVPRW-2017-EDSR. Previous networks lack the ability to discriminate across feature channels, which hinders the representational power of deep networks.
Network architecture (figure): the residual in residual (RIR) structure stacks G residual groups RG-1 … RG-G, wrapped by a long skip connection. Each residual group contains B residual channel attention blocks RCAB-1 … RCAB-B with a short skip connection; features F_{g-1} enter group g and F_g leaves it, and the deep feature F_DF feeds the upscale module and a final conv to reconstruct HR from LR.
Contributions
We propose the very deep residual channel attention network (RCAN) for highly accurate image SR.
We propose the residual in residual (RIR) structure to construct very deep trainable networks. The long and short skip connections in RIR bypass abundant low-frequency information and let the main network learn more informative features.
We propose a channel attention (CA) mechanism to adaptively rescale features by modeling interdependencies among feature channels.
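The RIR structure can be sketched as two nested residual levels. In this sketch, plain Conv-ReLU-Conv blocks stand in for the RCABs described later in the deck, and the group/block counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv3x3(channels):
    return nn.Conv2d(channels, channels, 3, padding=1)

class ResidualGroup(nn.Module):
    """B residual blocks followed by a conv, with a short skip connection."""
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(conv3x3(channels), nn.ReLU(inplace=True), conv3x3(channels))
            for _ in range(num_blocks)
        )
        self.tail = conv3x3(channels)

    def forward(self, x):
        res = x
        for blk in self.blocks:
            res = res + blk(res)   # block-level residual (an RCAB in the paper)
        return x + self.tail(res)  # short skip connection

class ResidualInResidual(nn.Module):
    """G residual groups wrapped by one long skip connection, so the main
    path can focus on learning high-frequency residual information."""
    def __init__(self, channels=64, num_groups=3, num_blocks=4):
        super().__init__()
        self.groups = nn.Sequential(
            *(ResidualGroup(channels, num_blocks) for _ in range(num_groups))
        )
        self.tail = conv3x3(channels)

    def forward(self, x):
        return x + self.tail(self.groups(x))  # long skip connection

rir = ResidualInResidual()
out = rir(torch.randn(1, 64, 12, 12))  # shape-preserving trunk
```

The skip connections at both levels give gradients short paths through the trunk, which is what makes depths of hundreds of layers trainable.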
Convergence analyses with RIR
Channel attention (figure): global pooling H_GP squeezes the H×W×C input X_{g,b} to a 1×1×C channel descriptor; a channel-downscaling conv W_D (to 1×1×C/r, reduction ratio r), ReLU, a channel-upscaling conv W_U (back to 1×1×C), and a sigmoid gate f produce per-channel weights, which rescale the features by element-wise product; an element-wise sum carries F_{g,b-1} into F_{g,b}.
z_c = H_GP(x_c) = 1/(H×W) · Σ_{i=1..H} Σ_{j=1..W} x_c(i, j), where x_c(i, j) is the value at position (i, j) of the c-th feature map x_c.
Figures: channel attention module; residual channel attention block (RCAB).
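The channel attention module and RCAB can be sketched as follows. The reduction ratio r=16 is a common choice; the channel widths here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global pooling H_GP squeezes HxWxC to 1x1xC; W_D (reduction r),
    ReLU, W_U, and a sigmoid gate f produce per-channel weights."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # H_GP: H x W x C -> 1 x 1 x C
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),  # W_D
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # W_U
            nn.Sigmoid(),                                   # f
        )

    def forward(self, x):
        return x * self.gate(self.pool(x))  # channel-wise rescaling

class RCAB(nn.Module):
    """Residual channel attention block: Conv-ReLU-Conv, CA, short residual."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels, reduction),
        )

    def forward(self, x):
        return x + self.body(x)  # F_{g,b} = F_{g,b-1} + rescaled residual

out = RCAB()(torch.randn(1, 64, 10, 10))  # shape-preserving
```

The sigmoid keeps each channel weight in (0, 1), so the attention can only attenuate channels, letting the network suppress uninformative features while the residual path preserves the input.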
Figure. Channel attention visualization: low- and high-level CAs with their feature maps. c and s denote the channel index and attention weight; the listed (c, s) pairs range from s=0.0009 (c=48) up to s=0.9998 (c=23).