Blind Image Deblurring Using Dark Channel Prior




  1. Blind Image Deblurring Using Dark Channel Prior. Jinshan Pan 1,2,3, Deqing Sun 2,4, Hanspeter Pfister 2, and Ming-Hsuan Yang 3. 1 Dalian University of Technology, 2 Harvard University, 3 UC Merced, 4 NVIDIA

  2. Overview: Blurred image captured in low-light conditions

  3. Overview: Restored image

  4. Overview: Blurred image

  5. Overview: Restored image

  6. Overview. Our goal: a generic method that
  • achieves state-of-the-art performance on natural images
  • gives strong results on specific scenes (e.g., saturated, text, and face images)
  • needs no edge selection for natural-image deblurring
  • needs no engineering effort to incorporate domain knowledge for specific-scenario deblurring

  7. Blur Process. Blur is uniform and spatially invariant: B = I ∗ k + n, where B is the blurred image, I the sharp image, k the blur kernel, and n the noise.

  8. Blur Process. Blur is uniform and spatially invariant: B = I ∗ k + n, where ∗ is the convolution operator.
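The blur model above can be sketched numerically; the image size, kernel, and noise level below are illustrative, and SciPy's `convolve2d` stands in for the convolution operator:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
I = rng.random((32, 32))                 # toy sharp image I
k = np.ones((5, 5)) / 25.0               # uniform blur kernel k (non-negative, sums to 1)
n = 0.01 * rng.standard_normal(I.shape)  # additive noise n

# B = I * k + n: blurring is a convolution with the kernel plus noise
B = convolve2d(I, k, mode="same", boundary="symm") + n
```

Because k is non-negative and sums to 1, each blurred pixel is a weighted average of sharp pixels, which is the fact the dark channel analysis later exploits.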

  9. Challenge. Blind image deblurring is challenging: both the sharp image and the blur kernel are unknown.

  10. Ill-Posed Problem: many different (sharp image, kernel) pairs can explain the same blurred image.

  11. Ill-Posed Problem

  12. Related Work. Probabilistic approach: p(I, k | B) ∝ p(B | I, k) p(I) p(k), i.e. the posterior combines the likelihood p(B | I, k) with a prior on the latent image I and a prior on the blur kernel k; B denotes the blurred image.

  13. Related Work. Blur kernel prior: kernels are positive and sparse. [Figure: histogram of p(k); most elements are near zero, a few can be large.] Shan et al., SIGGRAPH 2008

  14. Related Work. Sharp image statistics [Fergus et al., SIGGRAPH 2006; Levin et al., CVPR 2009; Shan et al., SIGGRAPH 2008; …]. [Figure: histogram of image gradients, log number of pixels.]

  15. Related Work. Parametric models of the gradient log-probability: Gaussian −I², Laplacian −|I|, hyper-Laplacian −|I|^0.5 and −|I|^0.25. Derivative distributions in natural images are sparse: log p(I) = −∑_i |∇I_i|^α, with α < 1. Levin et al., SIGGRAPH 2007, CVPR 2009

  16. Related Work. The MAP_{I,k} framework: p(I, k | B) ∝ p(B | I, k) p(I) p(k), and maximizing the posterior,
  (I, k) = argmax_{I,k} p(I, k | B) = argmin_{I,k} { λ‖B − I ∗ k‖² + φ(I) + ψ(k) },
  where φ and ψ are the negative log-priors on I and k.

  17. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]: for the two (latent image, kernel) pairs shown, the posterior assigns the higher probability to the wrong pair, P(pair 1) > P(pair 2), even though the second pair is the correct one.

  18. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]: does ∑_i |∇I_i^sharp|^α < ∑_i |∇I_i^blurred|^α hold, i.e. is the sparse-derivative cost actually smaller for the sharp image?

  19. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]. [Figure: red windows mark where p(sharp I) > p(blurred I), shown for 15×15, 25×25, and 45×45 windows, using simple derivative filters [-1,1], [-1;1] and FoE filters (Roth & Black).]

  20. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009], with blur kernel k = [0.5, 0.5] and cost ∑ |d|^0.5 over the derivatives d:
  • Step edge: the sharp edge has a single derivative of 1, so its cost is |1|^0.5 = 1; the blurred edge has two derivatives of 0.5, so its cost is |0.5|^0.5 + |0.5|^0.5 ≈ 1.41 > 1. The sharp step edge is cheaper: P(step edge) > P(blurred step edge).
  • Impulse: the sharp impulse has two derivatives of 1, so its cost is 1 + 1 = 2; the blurred impulse has two derivatives of 0.5, so its cost is ≈ 1.41 < 2. The blurred impulse is cheaper: P(impulse) < P(blurred impulse).
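The arithmetic on this slide can be checked directly; the sketch below evaluates the hyper-Laplacian cost ∑|d|^0.5 over finite differences, with the toy signals written out as short lists:

```python
def cost(signal, alpha=0.5):
    """Sum of |d|^alpha over the finite differences d of a 1-D signal."""
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return sum(d ** alpha for d in diffs)

# Step edge: sharp cost 1, blurred (by k = [0.5, 0.5]) cost ~1.41 -> sharp is cheaper.
step, blurred_step = [0, 1, 1], [0, 0.5, 1]

# Impulse: sharp cost 2, blurred cost ~1.41 -> the *blurred* impulse is cheaper.
impulse, blurred_impulse = [0, 1, 0], [0, 0.5, 0.5, 0]
```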

  21. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]: P(sharp real image) < P(blurred real image). On a real image, ∑_i |∇I_i|^0.5 = 5.8 for the sharp version but 4.5 for the blurred one, so the blurred image is cheaper. Noise and texture behave as impulses, and the total derivative contrast is reduced by blur.

  22. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]. Maximum marginal probability estimation, via marginalized probability [Levin et al., CVPR 2011] or variational Bayes [Fergus et al., SIGGRAPH 2006], estimates the kernel alone by marginalizing over I:
  p(k | B) ∝ p(B | k) p(k) = ∫ p(B, I | k) p(k) dI = ∫ p(B | I, k) p(I) p(k) dI,
  in contrast to MAP_{I,k}: p(I, k | B) ∝ p(B | I, k) p(I) p(k).

  23. Related Work. The MAP_{I,k} paradox [Levin et al., CVPR 2009]. Maximum marginal probability estimation: marginalized probability [Levin et al., CVPR 2011], variational Bayes [Fergus et al., SIGGRAPH 2006]. [Figure: optimization surface over a single pixel intensity, Maximum a Posteriori (MAP) vs. Variational Bayes.] Marginalization is computationally expensive.

  24. Related Work. Priors that favor clear images [Krishnan et al., CVPR 2011; Pan et al., CVPR 2014; Michaeli and Irani, ECCV 2014]: E(clear image) < E(blurred image). These priors are effective for specific image classes, such as natural or text images, but do not generalize well.

  25. Related Work. MAP_{I,k} with edge selection [Cho and Lee, SIGGRAPH Asia 2009; Xu and Jia, ECCV 2010; …]. Main idea: E(clear) < E(blurred) in sharp-edge regions [Levin et al., CVPR 2009]. Advantages and limitations: fast and effective in practice, but these methods explicitly try to recover sharp edges using heuristic image filters and usually fail when sharp edges are not available.

  26. Related Work. MAP_{I,k} with edge selection (extension): exemplar-based methods [Sun et al., ICCP 2013; HaCohen et al., ICCV 2013; Pan et al., ECCV 2014] are computationally expensive.

  27. Our Work • Dark channel prior • Theoretical analysis • Efficient numerical solver • Applications

  28. Convolution and Dark Channel. Dark channel [He et al., CVPR 2009]:
  D(I)(x) = min_{y ∈ N(x)} ( min_{c ∈ {r,g,b}} I^c(y) ),
  i.e. the minimum intensity over the color channels within a patch N(x) around pixel x.
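The definition above translates directly into code; this is a plain, unoptimized sketch (channel-wise minimum followed by a sliding-window minimum, with edge padding at the borders as an assumption):

```python
import numpy as np

def dark_channel(img, patch):
    """D(I)(x) = min over the patch N(x) of (min over channels c of I^c(y)).
    img: H x W x 3 array; patch: odd window size."""
    m = img.min(axis=2)                  # min over r, g, b at each pixel
    r = patch // 2
    padded = np.pad(m, r, mode="edge")   # replicate borders (assumption)
    H, W = m.shape
    out = np.empty_like(m)
    for i in range(H):
        for j in range(W):               # min over the patch around (i, j)
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```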

  29. Convolution and Dark Channel. Convolution:
  B(x) = ∑_{z ∈ Ω_k} I(x + [s/2] − z) k(z),
  where Ω_k is the domain of the blur kernel, s is the kernel size, and [·] is the rounding operator; the kernel satisfies k(z) ≥ 0 and ∑_{z ∈ Ω_k} k(z) = 1.

  30. Convolution and Dark Channel. Proposition 1: let N(x) denote a patch centered at pixel x with the same size as the blur kernel. Then
  B(x) ≥ min_{y ∈ N(x)} I(y).
  [Toy example: a 3×3 uniform kernel with entries 1/9 averages each 3×3 patch, so every blurred pixel is at least the minimum of its patch.]

  31. Convolution and Dark Channel.
  • Property 1: let D(B) and D(I) denote the dark channels of the blurred and clear images; then D(B)(x) ≥ D(I)(x).
  • Property 2: let Ω denote the domain of an image I. If there exist pixels x ∈ Ω such that I(x) = 0, then ||D(B)||₀ > ||D(I)||₀.
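Proposition 1 and Property 2 can be verified numerically on a synthetic example; the image, kernel size, and small helper below are illustrative, and the dark channel here clamps its window at the image border:

```python
import numpy as np
from scipy.signal import convolve2d

def dark_channel(img, patch):
    """Channel-min followed by a patch-min (window clamped to the image)."""
    m, r = img.min(axis=2), patch // 2
    H, W = m.shape
    return np.array([[m[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].min()
                      for j in range(W)] for i in range(H)])

rng = np.random.default_rng(0)
I = rng.random((16, 16, 3))
I[4:6, 4:6, :] = 0.0                      # some truly dark pixels, so Property 2 applies
k = np.ones((5, 5)) / 25.0                # uniform blur kernel
B = np.stack([convolve2d(I[..., c], k, mode="same", boundary="symm")
              for c in range(3)], axis=2)

# Proposition 1 (interior pixels): each blurred value is a kernel-weighted
# average of sharp values, hence no smaller than the patch minimum.
Bv = convolve2d(I[..., 0], k, mode="valid")
mins = np.array([[I[i:i + 5, j:j + 5, 0].min() for j in range(12)]
                 for i in range(12)])

# Property 2: averaging lifts the zeros, so D(B) has more nonzero entries than D(I).
```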

  32. Convolution and Dark Channel. [Figure: average number of dark pixels vs. intensity for blurred and clear images; statistics computed on a dataset with 3,200 examples.]

  33. Convolution and Dark Channel. [Figure: clear vs. blurred examples.] Blurred images have less sparse dark channels than clear images.

  34. Proposed Method. Our model adds the dark channel prior to the standard deblurring model:
  min_{I,k} ||I ∗ k − B||²₂ + γ||k||²₂ + μ||∇I||₀ + λ||D(I)||₀.
  How to solve it? The L0 norm and the non-linear min operator make this difficult.
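As a concrete reading of the objective, here is a grayscale sketch that evaluates the model's energy for a given (I, k) pair; the weights are illustrative defaults, not the paper's tuned values, and the dark channel is a simple clamped min filter:

```python
import numpy as np
from scipy.signal import convolve2d

def energy(I, k, B, gamma=2.0, mu=0.01, lam=0.01, patch=3):
    """||I*k - B||_2^2 + gamma*||k||_2^2 + mu*||grad I||_0 + lam*||D(I)||_0."""
    fidelity = np.sum((convolve2d(I, k, mode="same", boundary="symm") - B) ** 2)
    # L0 norm of the gradient: count nonzero finite differences
    grad_l0 = np.count_nonzero(np.diff(I, axis=0)) + np.count_nonzero(np.diff(I, axis=1))
    # dark channel via a clamped min filter (grayscale)
    r = patch // 2
    H, W = I.shape
    D = np.array([[I[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].min()
                   for j in range(W)] for i in range(H)])
    return fidelity + gamma * np.sum(k ** 2) + mu * grad_l0 + lam * np.count_nonzero(D)
```

For a piecewise-constant image with a dark background, both L0 terms are small, which is exactly what this prior rewards.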

  35. Optimization. Algorithm skeleton: alternately solve
  min_k ||I ∗ k − B||²₂ + γ||k||²₂
  min_I ||I ∗ k − B||²₂ + μ||∇I||₀ + λ||D(I)||₀.
  The L0 norm is handled with the half-quadratic splitting method; the non-linear min operator with a linear approximation.

  36. Optimization. Update the latent image I:
  min_I ||I ∗ k − B||²₂ + μ||∇I||₀ + λ||D(I)||₀.
  Half-quadratic splitting [Xu et al., SIGGRAPH Asia 2011; Pan et al., CVPR 2014] introduces auxiliary variables g ≈ ∇I and u ≈ D(I):
  min_{I,u,g} ||I ∗ k − B||²₂ + α||∇I − g||²₂ + β||D(I) − u||²₂ + μ||g||₀ + λ||u||₀,
  solved by alternating minimization:
  min_I ||I ∗ k − B||²₂ + β||D(I) − u||²₂ + α||∇I − g||²₂
  min_{u,g} α||∇I − g||²₂ + β||D(I) − u||²₂ + μ||g||₀ + λ||u||₀.

  37. Optimization. Update the latent image I: the u, g sub-problem
  min_{u,g} α||∇I − g||²₂ + β||D(I) − u||²₂ + μ||g||₀ + λ||u||₀
  splits because u and g are independent:
  min_u β||D(I) − u||²₂ + λ||u||₀
  min_g α||∇I − g||²₂ + μ||g||₀,
  with the closed-form solutions
  u = D(I) where |D(I)|² ≥ λ/β, and u = 0 otherwise;
  g = ∇I where |∇I|² ≥ μ/α, and g = 0 otherwise.
  Related papers: [Xu et al., SIGGRAPH Asia 2011; Xu et al., CVPR 2013; Pan et al., CVPR 2014]
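The closed-form solutions above are elementwise hard thresholding; a sketch (the function name is mine):

```python
import numpy as np

def l0_threshold(v, quad_weight, l0_weight):
    """Elementwise minimizer of  quad_weight*(v - x)^2 + l0_weight*||x||_0.
    Setting x = v costs l0_weight; setting x = 0 costs quad_weight*v^2;
    so keep v exactly where v^2 >= l0_weight / quad_weight."""
    return np.where(v ** 2 >= l0_weight / quad_weight, v, 0.0)

# u-subproblem: u = l0_threshold(D_I, beta, lam)
# g-subproblem: g = l0_threshold(grad_I, alpha, mu)
```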

  38. Optimization. Update the latent image I: the I sub-problem
  min_I ||I ∗ k − B||²₂ + β||D(I) − u||²₂ + α||∇I − g||²₂
  still contains the non-linear min operator. Our observation: D(I) = M I, where M is a selection matrix. Let y = argmin_{z ∈ N(x)} I(z); then
  M(x, z) = 1 if z = y, and 0 otherwise.

  39. Optimization. I sub-problem: compute M from the intermediate image I and its dark channel D(I). [Toy example: visualization of Mᵀ and Mᵀu for an intermediate image.]
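The selection matrix M can be built explicitly for a small grayscale image; this is a dense, unoptimized sketch, and clamping the patch at the image border is my assumption:

```python
import numpy as np

def min_selection_matrix(I, patch):
    """Build M so that D(I) = M @ I.ravel(): row x of M has a single 1 at the
    argmin pixel of the patch N(x) around pixel x."""
    H, W = I.shape
    r = patch // 2
    M = np.zeros((H * W, H * W))
    for i in range(H):
        for j in range(W):
            i0, j0 = max(0, i - r), max(0, j - r)
            win = I[i0:i + r + 1, j0:j + r + 1]          # clamped patch
            a, b = np.unravel_index(np.argmin(win), win.shape)
            M[i * W + j, (i0 + a) * W + (j0 + b)] = 1.0  # select the patch argmin
    return M
```

With M fixed, D(I) becomes linear in I, so the I sub-problem reduces to a least-squares problem that admits a closed-form solution.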

  40. Experimental Results • Natural image deblurring • Specific scenes: text images, face images, low-light images • Non-uniform image deblurring
