
Covariant LEAst-square Re-fitting for image restoration, N. Papadakis (slide presentation)

Covariant LEAst-square Re-fitting for image restoration. N. Papadakis (1), joint work with C.-A. Deledalle (2), J. Salmon (3) & S. Vaiter (4). (1) CNRS, Institut de Mathématiques de Bordeaux; (2) University of California San Diego; (3) Université de Montpellier,



2. Invariant LEAst-square Re-fitting / Invariant Re-fitting [Deledalle, P. & Salmon 2015]
- Piecewise affine mapping $y \mapsto \hat x(y)$, where $\hat x(y) = \operatorname{argmin}_x F(\Phi x, y) + \lambda G(x)$.
- Jacobian of the estimator: $(J_{\hat x(y)})_{ij} = \partial \hat x(y)_i / \partial y_j$.
- Model subspace (the tangent space of the mapping): $\mathcal M_{\hat x(y)} = \hat x(y) + \mathrm{Im}[J_{\hat x(y)}]$.
- Invariant least-square re-fitting: $\mathcal R^{\mathrm{inv}}\hat x(y) = \operatorname{argmin}_{x \in \mathcal M_{\hat x(y)}} \|\Phi x - y\|^2$.
(Figure: 1-D illustration of the mapping and its tangent space.)
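For the ℓ1-analysis problems considered on the next slides, this model subspace has a concrete description (a sketch of the standard characterization, consistent with the support $\hat I$ introduced on the following slide; the slide itself does not spell it out):

\[
  \hat I = \{\, i \in [1,m] : |\Gamma \hat x(y)|_i > 0 \,\}, \qquad
  \mathcal M_{\hat x(y)} = \{\, x : (\Gamma x)_i = 0 \ \ \forall i \notin \hat I \,\},
\]
\[
  \mathcal R^{\mathrm{inv}} \hat x(y) \;=\; \operatorname{argmin}_{x \,:\, (\Gamma x)_{\hat I^c} = 0} \ \|\Phi x - y\|^2 ,
\]

i.e. the re-fitting is a least-squares fit constrained to signals sharing the (generalized) support of the biased estimate.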

2. Invariant LEAst-square Re-fitting / Practical Re-fitting for ℓ1-analysis minimization
$\hat x(y) = \operatorname{argmin}_x \tfrac12\|\Phi x - y\|^2 + \lambda\|\Gamma x\|_1$   (1)
Remark: $\Gamma = \mathrm{Id}$ gives the LASSO; $\Gamma = [\nabla_x, \nabla_y]^\top$ gives anisotropic TV.
Numerical stability issue:
- The solution is piecewise constant [Strong & Chan 2003, Caselles et al. 2009].
- Re-fitting $\hat x(y)$ requires its support $\hat I = \{\, i \in [1, m] \ \text{s.t.}\ |\Gamma\hat x|_i > 0 \,\}$.
- In practice, $\hat x$ is only approximated through a converging sequence $\hat x^k \approx \hat x$.
- Unfortunately, the support $\hat I^k$ of the iterate $\hat x^k$ is a poor approximation of $\hat I$.
- Illustration for anisotropic TV denoising ($\Phi = \mathrm{Id}$). (Figure: blurry observation y, biased estimate x̂^k, re-fitting of x̂^k, re-fitting of x̂.)


2. Invariant LEAst-square Re-fitting / Practical Re-fitting for ℓ1-analysis minimization
Proposed approach for problem (1):
- Provided $\mathrm{Ker}\,\Phi \cap \mathrm{Ker}\,\Gamma = \{0\}$, one has for (1) [Vaiter et al. 2016]:
  $\mathcal R^{\mathrm{inv}}\hat x(y) = J_y[y]$, with $J_y = \frac{\partial\hat x(y)}{\partial y}\big|_{y}$.
- Interpretation: $\mathcal R^{\mathrm{inv}}\hat x(y)$ is the derivative of $\hat x(y)$ in the direction of $y$.
- Algorithm: compute $\tilde x^k$, by the chain rule, as the derivative of $\hat x^k(y)$ in the direction of $y$.
- Question: does $\tilde x^k$ converge towards $\mathcal R^{\mathrm{inv}}\hat x(y)$? Yes, at least for the ADMM or Chambolle-Pock sequences.


2. Invariant LEAst-square Re-fitting / Implementation for anisotropic TV
$\hat x(y) = \operatorname{argmin}_x \tfrac12\|y - x\|^2 + \lambda\|\Gamma x\|_1$
- Primal-dual implementation [Chambolle and Pock 2011], run jointly with its differentiated version (tilde variables):
  $z^{k+1} = \mathrm{Proj}_{B_\lambda}(z^k + \sigma\Gamma x^k)$,  $\tilde z^{k+1} = P_{z^k+\sigma\Gamma x^k}(\tilde z^k + \sigma\Gamma\tilde x^k)$,
  $x^{k+1} = \frac{x^k + \tau(y - \Gamma^\top z^{k+1})}{1+\tau}$,  $\tilde x^{k+1} = \frac{\tilde x^k + \tau(y - \Gamma^\top\tilde z^{k+1})}{1+\tau}$.
- Projection: $\mathrm{Proj}_{B_\lambda}(z)_i = z_i$ if $|z_i| \le \lambda$, and $\lambda\,\mathrm{sign}(z_i)$ otherwise.
- Its differential, applied coordinate-wise at the pre-projection point $z = z^k + \sigma\Gamma x^k$: $P_z = \mathrm{Id}$ if $|z_i| \le \lambda + \beta$, and $0$ otherwise.
- Theorem: the sequence $\tilde x^k$ converges to the re-fitting $\mathcal R^{\mathrm{inv}}\hat x(y)$ of $\hat x(y)$, for all $\beta > 0$ such that $\beta < \sigma|\Gamma\hat x(y)|_i$ for all $i \in \hat I$.
- Complexity: twice that of the Chambolle-Pock algorithm.
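A minimal NumPy sketch of this joint scheme for the denoising case ($\Phi = \mathrm{Id}$, $\Gamma = \nabla$ with forward differences); the helper names, step sizes and the default β below are my own choices, not taken from the slides:

```python
import numpy as np

def grad(x):
    """Forward-difference gradient of a 2-D image; returns an array of shape (2, H, W)."""
    gx = np.zeros_like(x); gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy = np.zeros_like(x); gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return np.stack([gx, gy])

def div(z):
    """Discrete divergence, the negative adjoint of grad."""
    gx, gy = z
    dx = np.zeros_like(gx); dx[0, :] = gx[0, :]
    dx[1:-1, :] = gx[1:-1, :] - gx[:-2, :]; dx[-1, :] = -gx[-2, :]
    dy = np.zeros_like(gy); dy[:, 0] = gy[:, 0]
    dy[:, 1:-1] = gy[:, 1:-1] - gy[:, :-2]; dy[:, -1] = -gy[:, -2]
    return dx + dy

def tv_aniso_refit(y, lam, beta=1e-3, n_iter=500, sigma=0.25, tau=0.25):
    """Joint Chambolle-Pock iterations for anisotropic TV denoising (plain variables)
    and their differentiated counterpart in the direction y (tilde variables)."""
    x, z = y.copy(), np.zeros((2,) + y.shape)
    xt, zt = y.copy(), np.zeros((2,) + y.shape)
    for _ in range(n_iter):
        w, wt = z + sigma * grad(x), zt + sigma * grad(xt)   # pre-projection points
        z = np.clip(w, -lam, lam)                            # Proj onto B_lambda, componentwise
        zt = np.where(np.abs(w) <= lam + beta, wt, 0.0)      # its differential P_w
        x = (x + tau * (y + div(z))) / (1.0 + tau)           # note Gamma^T z = -div(z)
        xt = (xt + tau * (y + div(zt))) / (1.0 + tau)
    return x, xt    # biased estimate x_hat(y) and its re-fitting
```

Calling `x, x_refit = tv_aniso_refit(y, lam)` returns the biased TV estimate and its invariant re-fitting in a single pass, at roughly twice the cost of plain Chambolle-Pock, as stated on the slide.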

2. Invariant LEAst-square Re-fitting / Anisotropic TV: illustration. (Figure: y, x̂(y), R^inv x̂(y).)

2. Invariant LEAst-square Re-fitting / Anisotropic TV: bias-variance trade-off. (Figure: quadratic cost as a function of the regularization parameter λ for the original x̂(y) and the re-fitted R x̂(y), with optimum and sub-optimum choices marked; image panels compare anisotropic TV and CLEAR, with PSNR/SSIM values 22.45/0.416, 24.80/0.545, 27.00/0.807, 28.89/0.809 and 28.28/0.823.)


2. Invariant LEAst-square Re-fitting / Anisotropic TV: bias-variance trade-off (second example). (Figure: image panels comparing anisotropic TV and CLEAR, with PSNR/SSIM values 22.84/0.312, 35.99/0.938, 23.96/0.694, 38.22/0.935 and 46.42/0.986.)

2. Invariant LEAst-square Re-fitting / Example: anisotropic TGV. (Figure: x_0, y, x̂(y), R^inv x̂(y).)


2. Invariant LEAst-square Re-fitting / Example: isotropic TV
- Restoration model for the image y: $\hat x(y) = \operatorname{argmin}_x \tfrac12\|y - x\|^2 + \lambda\|\nabla x\|_{1,2}$.
- The model subspace of isotropic TV is the same as that of anisotropic TV: signals whose gradients share their support with $\nabla\hat x(y)$.
- (Figure: noise-free image, noisy data y, x̂(y), R^inv x̂(y).) The support is not sparse, so noise is re-injected. Illustration done with an ugly standard discretization of isotropic TV (i.e. neither Condat's nor the Chambolle-Pock one).

2. Invariant LEAst-square Re-fitting / Limitations
- Model subspace: only captures linear invariances w.r.t. small perturbations of y.
- Jacobian matrix: captures desirable covariant relationships between the entries of y and the entries of x̂(y) that should be preserved [Deledalle, P., Salmon and Vaiter, 2017, 2019].

3. Covariant LEAst-square Re-fitting (outline: Introduction to Re-fitting; Invariant LEAst-square Re-fitting; Covariant LEAst-square Re-fitting; Practical considerations and experiments; Conclusions)

3. Covariant LEAst-square Re-fitting / Least-square Re-fitting
General problem: $\hat x(y) = \operatorname{argmin}_x F(\Phi x, y) + \lambda G(x)$, with $\Phi$ a linear operator and $F$, $G$ convex.
Desirable properties of a re-fitting operator ($h \in \mathcal H_{\hat x}$ iff):
  1. $h \in \mathcal M_{\hat x(y)}$;
  2. affine map: $h(y) = A y + b$;
  3. preservation of covariants: $J_{h(y)} = \rho\, J_{\hat x(y)}$;
  4. coherence: $h(\Phi\hat x(y)) = \hat x(y)$.
Covariant LEAst-square Re-fitting: $\mathcal R^{\mathrm{cov}}\hat x(y) = \operatorname{argmin}_{h \in \mathcal H_{\hat x}} \tfrac12\|\Phi\, h(y) - y\|^2$.

3. Covariant LEAst-square Re-fitting / Covariant LEAst-square Re-fitting
Proposition: the covariant re-fitting has an explicit formulation
  $\mathcal R^{\mathrm{cov}}\hat x(y) = \hat x(y) + \rho J(y - \Phi\hat x(y)) = \operatorname{argmin}_{h \in \mathcal H_{\hat x}} \tfrac12\|\Phi\, h(y) - y\|^2$,
where, for $\delta = y - \Phi\hat x(y)$: $\rho = \frac{\langle\Phi J\delta,\,\delta\rangle}{\|\Phi J\delta\|^2}$ if $\Phi J\delta \neq 0$, and $\rho = 1$ otherwise.
Properties:
- If $\Phi J$ is an orthogonal projector, then $\rho = 1$ and $\Phi\,\mathcal R^{\mathrm{cov}}\hat x(y) = \Phi\,\mathcal R^{\mathrm{inv}}\hat x(y)$.
- If $F$ is convex, $G$ is convex and 1-homogeneous, and $\hat x(y) = \operatorname{argmin}_x F(\Phi x - y) + G(x)$, then $J\Phi\hat x(y) = \hat x(y)$ a.e., so that $\mathcal R^{\mathrm{cov}}\hat x(y) = (1-\rho)\hat x(y) + \rho J y$.
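The value of ρ can be read as a one-dimensional least-squares step (a short derivation consistent with the argmin formulation above; it is not spelled out on the slide): for candidates of the form $h_t(y) = \hat x(y) + t\,J\delta$, the data-fit term becomes a quadratic in $t$,

\[
  \tfrac12\,\|\Phi(\hat x(y) + t\,J\delta) - y\|^2 \;=\; \tfrac12\,\| t\,\Phi J\delta - \delta \|^2 ,
\]

which is minimized at

\[
  t^\star \;=\; \frac{\langle \Phi J\delta,\, \delta\rangle}{\|\Phi J\delta\|^2} \;=\; \rho
  \qquad (\text{whenever } \Phi J\delta \neq 0).
\]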

3. Covariant LEAst-square Re-fitting / Statistical interpretation
Theorem (bias reduction): if $\Phi J$ is an orthogonal projector, or if $\rho$ satisfies technical conditions, then
  $\|\Phi(\mathbb E[D_{\hat x}(Y)] - x_0)\|^2 \;\le\; \|\Phi(\mathbb E[T_{\hat x}(Y)] - x_0)\|^2$,
i.e. the de-biased (re-fitted) estimator $D_{\hat x}$ has a smaller projected bias than the original estimator $T_{\hat x}$.

3. Covariant LEAst-square Re-fitting / Example: isotropic TV. (Figure: noise-free image, noisy data y, x̂(y), R^inv x̂(y), R^cov x̂(y).)

3. Covariant LEAst-square Re-fitting / Why not iterating, as in boosting approaches?
- Differentiable estimator w.r.t. y: $\tilde x^0 = \hat x(y) = \operatorname{argmin}_x \|x - y\|^2 + \lambda\|\nabla x\|_{1,2}$.
- Bregman iterations: $\tilde x^{k+1} = \operatorname{argmin}_x \|x - y\|^2 + \lambda D_{\|\nabla\cdot\|_{1,2}}(x, \tilde x^k)$.
- Convergence: $\tilde x^k \to y$.

3. Covariant LEAst-square Re-fitting / Why not iterating, as in boosting approaches? (continued)
- Differentiable estimator w.r.t. y: $\tilde x^0 = \hat x(y) = \operatorname{argmin}_x \|x - y\|^2 + \lambda\|\nabla x\|_{1,2}$.
- Covariant iterations: $\tilde x^{k+1}(y) = \tilde x^k(y) + \rho J(y - \Phi\tilde x^k(y))$.
- Convergence: $\tilde x^k(y) \to \mathcal R^{\mathrm{inv}}\hat x(y)$.

4. Practical considerations and experiments (outline: Introduction to Re-fitting; Invariant LEAst-square Re-fitting; Covariant LEAst-square Re-fitting; Practical considerations and experiments; Conclusions)

4. Practical considerations and experiments / How to compute the covariant re-fitting?
- Explicit expression: $\mathcal R^{\mathrm{cov}}\hat x(y) = \hat x(y) + \rho J\delta$, with $J = \partial\hat x(y)/\partial y$, $\delta = y - \Phi\hat x(y)$, and $\rho = \frac{\langle\Phi J\delta,\,\delta\rangle}{\|\Phi J\delta\|^2}$ if $\Phi J\delta \neq 0$, $\rho = 1$ otherwise.
- Main issue: being able to compute $J\delta$.

4. Practical considerations and experiments / Application of the Jacobian matrix to a vector
Algorithmic differentiation:
- Iterative algorithm to obtain x̂(y): $x^{k+1} = \psi(x^k, y)$.
- Differentiation in the direction δ: $\tilde x^{k+1} = \partial_1\psi(x^k, y)\,\tilde x^k + \partial_2\psi(x^k, y)\,\delta$, so that $J_{x^k(y)}\delta = \tilde x^k$ (see the sketch below).
- Joint estimation of $x^k$ and $J_{x^k(y)}\delta$; doubles the computational cost.
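As a concrete instance of this recipe (an illustration of mine using ISTA for the LASSO, i.e. Γ = Id, rather than the primal-dual scheme used in the deck): the iteration map is $\psi(x, y) = S_{\tau\lambda}(x + \tau\Phi^\top(y - \Phi x))$, and its differentiation in a direction δ keeps only the coordinates that survive the soft-threshold.

```python
import numpy as np

def soft(u, c):
    """Soft-thresholding S_c(u) = sign(u) * max(|u| - c, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - c, 0.0)

def ista_jvp(Phi, y, delta, lam, n_iter=2000):
    """Run ISTA for the LASSO, x^{k+1} = psi(x^k, y), jointly with its forward-mode
    differentiation in the direction delta; returns x_hat(y) and J_{x^k(y)} delta."""
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2     # step size 1 / ||Phi||^2
    x = np.zeros(Phi.shape[1])
    xt = np.zeros_like(x)                       # differentiated iterate (the tilde sequence)
    for _ in range(n_iter):
        u = x + tau * Phi.T @ (y - Phi @ x)     # pre-threshold point
        ut = xt + tau * Phi.T @ (delta - Phi @ xt)
        x = soft(u, tau * lam)
        xt = np.where(np.abs(u) > tau * lam, ut, 0.0)   # chain rule through S_{tau*lam}
    return x, xt

# With delta = y this computes J_y[y], i.e. the invariant re-fitting of the LASSO estimate.
```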

4. Practical considerations and experiments / Application of the Jacobian matrix to a vector (continued)
Finite-difference based differentiation:
- x̂(y) can be any black-box algorithm.
- Directional derivative w.r.t. the direction δ: $J_{\hat x(y)}\delta \approx \frac{\hat x(y + \varepsilon\delta) - \hat x(y)}{\varepsilon}$.
- Requires applying the algorithm twice.

4. Practical considerations and experiments / Computation of the re-fitting
Covariant LEAst-square Re-fitting: $\mathcal R^{\mathrm{cov}}\hat x(y) = \hat x(y) + \rho J\delta$, with $\delta = y - \Phi\hat x(y)$ and $\rho = \frac{\langle J\delta,\,\delta\rangle}{\|J\delta\|_2^2}$.
Two steps, with any denoising algorithm:
  1. Apply the algorithm to y to get x̂(y) and set $\delta = y - \Phi\hat x(y)$.
  2. Compute $J\delta$ (with algorithmic or finite-difference based differentiation).
One step, for absolutely 1-homogeneous regularizers: the re-fitting simplifies to $\mathcal R^{\mathrm{cov}}\hat x(y) = (1-\rho)\hat x(y) + \rho J y$; estimate x̂(y) and $Jy$ jointly with the differentiated algorithm.

4. Practical considerations and experiments / One-step implementation: anisotropic TV
- Model with 1-homogeneous regularizer: $\hat x(y) = \operatorname{argmin}_x \tfrac12\|y - x\|^2 + \lambda\|\nabla x\|_1$.
- Primal-dual implementation [Chambolle and Pock 2011], run jointly with its differentiated version:
  $z^{k+1} = \mathrm{Proj}_{B_\lambda}(z^k + \sigma\nabla x^k)$,  $\tilde z^{k+1} = P_{z^k+\sigma\nabla x^k}(\tilde z^k + \sigma\nabla\tilde x^k)$,
  $x^{k+1} = \frac{x^k + \tau(y + \mathrm{div}(z^{k+1}))}{1+\tau}$,  $\tilde x^{k+1} = \frac{\tilde x^k + \tau(y + \mathrm{div}(\tilde z^{k+1}))}{1+\tau}$.
- Projection: $\mathrm{Proj}_{B_\lambda}(z)_i = z_i$ if $|z_i| \le \lambda$, $\lambda\,\mathrm{sign}(z_i)$ otherwise; its differential $P_z = \mathrm{Id}$ if $|z_i| \le \lambda + \beta$, $0$ otherwise.
- Then $x^k \to \hat x(y)$ and $\tilde x^k = J_{x^k} y \to J_{\hat x}\, y$.
- $J$ is an orthogonal projector, so $\mathcal R^{\mathrm{cov}}\hat x(y) = \mathcal R^{\mathrm{inv}}\hat x(y) = J_{\hat x}\, y$.

4. Practical considerations and experiments / One-step implementation: isotropic TV
- Model with 1-homogeneous regularizer: $\hat x(y) = \operatorname{argmin}_x \tfrac12\|y - x\|^2 + \lambda\|\nabla x\|_{1,2}$.
- Primal-dual implementation [Chambolle and Pock 2011], run jointly with its differentiated version:
  $z^{k+1} = \mathrm{Proj}_{B_\lambda}(z^k + \sigma\nabla x^k)$,  $\tilde z^{k+1} = P_{z^k+\sigma\nabla x^k}(\tilde z^k + \sigma\nabla\tilde x^k)$,
  $x^{k+1} = \frac{x^k + \tau(y + \mathrm{div}(z^{k+1}))}{1+\tau}$,  $\tilde x^{k+1} = \frac{\tilde x^k + \tau(y + \mathrm{div}(\tilde z^{k+1}))}{1+\tau}$.
- Projection (pixel-wise on the 2-vectors $z_i$): $\mathrm{Proj}_{B_\lambda}(z)_i = z_i$ if $\|z_i\|_2 \le \lambda$, and $\lambda\, z_i / \|z_i\|_2$ otherwise.
- Its differential: $P_{z} = \mathrm{Id}$ if $\|z_i\|_2 \le \lambda + \beta$, and $\frac{\lambda}{\|z_i\|_2}\big(\mathrm{Id} - \frac{z_i z_i^\top}{\|z_i\|_2^2}\big)$ otherwise.
- Then $x^k \to \hat x(y)$ and $\tilde x^k = J_{x^k} y \to \tilde x$ (see the NumPy sketch below).
- Covariant re-fitting: $\mathcal R^{\mathrm{cov}}\hat x(y) = (1-\rho)\hat x + \rho\tilde x$, with $\rho = \frac{\langle\tilde x - \hat x,\; y - \hat x\rangle}{\|\tilde x - \hat x\|_2^2}$.
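A small NumPy sketch of this pixel-wise projection and of its differential (here z has shape (2, H, W), one 2-vector per pixel); it slots into the same joint primal-dual loop as the anisotropic sketch earlier, and the function names and default β are assumptions of mine:

```python
import numpy as np

def proj_iso(z, lam):
    """Pixel-wise projection of z (shape (2, H, W)) onto l2 balls of radius lam."""
    nz = np.sqrt((z ** 2).sum(axis=0, keepdims=True))            # ||z_i||_2 at each pixel
    return z * np.minimum(1.0, lam / np.maximum(nz, 1e-12))

def proj_iso_diff(w, v, lam, beta=1e-3):
    """Differential of proj_iso at the pre-projection point w, applied to v:
    identity inside the (slightly enlarged) ball, (lam/||w||)(Id - w w^T/||w||^2) outside."""
    nw = np.maximum(np.sqrt((w ** 2).sum(axis=0, keepdims=True)), 1e-12)
    dot = (w * v).sum(axis=0, keepdims=True)                     # <w_i, v_i> at each pixel
    outside = (lam / nw) * (v - w * dot / nw ** 2)
    return np.where(nw <= lam + beta, v, outside)

# In the joint loop:  z = proj_iso(w, lam)  and  zt = proj_iso_diff(w, wt, lam),
# with w = z + sigma * grad(x) and wt = zt + sigma * grad(xt), as on the slide.
```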

4. Practical considerations and experiments / Inpainting with isotropic TV. (Figure: y, x̂(y), R^cov x̂(y); attenuated structures; residual lost structures ‖x̂(y) − x_0‖², ‖R^cov x̂(y) − x_0‖².)

4. Practical considerations and experiments / Extension to chrominance [Pierre, Aujol, Deledalle, P., 2017]. (Figure: noise-free image, noisy image, denoised image.)

4. Practical considerations and experiments / Extension to chrominance [Pierre, Aujol, Deledalle, P., 2017]. (Figure: noise-free image, noisy image, re-fitting.)

4. Practical considerations and experiments / Two-step implementation: Non-Local Means
- Model without a 1-homogeneous regularizer: $\hat x(y)_i = \frac{\sum_j w^y_{ij}\, y_j}{\sum_j w^y_{ij}}$, with $w^y_{i,j} = \exp\!\big(-\|\mathcal P_i y - \mathcal P_j y\|^2 / h^2\big)$.
- Algorithm: differentiate the NLM code. Re-fitting with algorithmic differentiation:
  1. Run the NLM code to get x̂(y) and set $\delta = y - \hat x(y)$.
  2. Run the differentiated NLM code in the direction δ to get $J\delta$.
  3. Set $\rho = \frac{\langle J\delta,\,\delta\rangle}{\|J\delta\|_2^2}$.
  4. Covariant re-fitting: $\mathcal R^{\mathrm{cov}}\hat x(y) = \hat x(y) + \rho J\delta$.

4. Practical considerations and experiments / Non-Local Means. (Figure: y, x̂(y), R^cov x̂(y); attenuated structures; residual lost structures ‖x̂(y) − x_0‖², ‖R^cov x̂(y) − x_0‖².)

4. Practical considerations and experiments / Non-Local Means: bias-variance trade-off. (Figure: quadratic cost as a function of the filtering parameter h for the original x̂(y) and the re-fitted R x̂(y), with optimum and sub-optimum choices marked; image panels compare Non-Local Means and CLEAR, with PSNR/SSIM values 22.18/0.397, 25.07/0.564, 26.62/0.724, 30.12/0.815 and 29.20/0.823.)


4. Practical considerations and experiments / Bias-variance trade-off: Non-Local Means (second example). (Figure: image panels comparing Non-Local Means and CLEAR, with PSNR/SSIM values 22.18/0.847, 24.68/0.914, 24.64/0.910, 27.04/0.946 and 27.29/0.955.)

4. Practical considerations and experiments / Two-step implementation for a black-box algorithm
- Denoising algorithm: $y \mapsto \hat x(y)$.
- Re-fitting with finite differences (see the sketch below):
  1. $\delta = y - \hat x(y)$
  2. $J\delta = \frac{\hat x(y + \varepsilon\delta) - \hat x(y)}{\varepsilon}$
  3. $\rho = \frac{\langle J\delta,\,\delta\rangle}{\|J\delta\|_2^2}$
  4. $\mathcal R^{\mathrm{cov}}\hat x(y) = \hat x(y) + \rho J\delta$
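A minimal sketch of this two-step wrapper for the denoising case ($\Phi = \mathrm{Id}$); here `denoise` stands for any black-box denoiser y ↦ x̂(y), and the default ε is an assumption of mine:

```python
import numpy as np

def clear_refit(denoise, y, eps=0.01):
    """Covariant re-fitting (CLEAR) of a black-box denoiser via finite differences,
    following the four steps on the slide (denoising case, Phi = Id)."""
    x_hat = denoise(y)                                    # biased estimate x_hat(y)
    delta = y - x_hat                                     # 1: residual
    J_delta = (denoise(y + eps * delta) - x_hat) / eps    # 2: directional derivative J delta
    nrm2 = float(np.vdot(J_delta, J_delta))
    rho = float(np.vdot(J_delta, delta)) / nrm2 if nrm2 > 0 else 1.0   # 3
    return x_hat + rho * J_delta                          # 4: R^cov x_hat(y)

# Example with a (hypothetical) denoiser:
#   from scipy.ndimage import gaussian_filter
#   x_refit = clear_refit(lambda u: gaussian_filter(u, sigma=2.0), y)
```

Any of the denoisers on the following slides (BM3D, DDID, DnCNN, FFDNet) can be plugged in as `denoise`.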

4. Practical considerations and experiments / BM3D [Dabov et al. 2007, Lebrun 2012]. (Figure: quadratic cost as a function of the filtering parameter log γ for the original x̂(y) and the re-fitted R[x̂](y), with optimum and sub-optimum choices marked; image panels compare BM3D and CLEAR, with PSNR/SSIM values 22.17/0.528, 25.00/0.7523, 26.92/0.861, 30.29/0.920 and 29.41/0.918.)

4. Practical considerations and experiments / DDID [Knaus & Zwicker 2013]. (Figure: quadratic cost as a function of the filtering parameter log γ; image panels compare DDID and CLEAR, with PSNR/SSIM values 22.16/0.452, 26.33/0.716, 26.60/0.721, 31.02/0.858 and 29.91/0.845.)

4. Practical considerations and experiments / DnCNN [Zhang et al., 2017], a residual network that learns the noise to remove. (Figures: DnCNN vs. CLEAR results at noise levels 25, 50 and 150.) At noise level 150, there is no interesting structural information left to recover from the noise model.

4. Practical considerations and experiments / FFDNet [Zhang et al., 2018], a network that learns the denoised image for Gaussian noise of variance in [0; 75]. (Figures: FFDNet vs. CLEAR results at noise levels 25, 50 and 150.)

5. Conclusions (outline: Introduction to Re-fitting; Invariant LEAst-square Re-fitting; Covariant LEAst-square Re-fitting; Practical considerations and experiments; Conclusions)


5. Conclusions / Conclusions
Covariant LEAst-square Re-fitting:
- Corrects part of the bias of restoration models.
- No additional parameter.
- Stability over a larger range of parameters.
- Doubles the computational cost.
When should re-fitting be used?
- With differentiable estimators: not with algorithms involving quantization.
- When the regularization prior is adapted to the data.
- When the data range must be respected: oceanography, radiotherapy, ...

5. Conclusions / Main related references
- [Brinkmann, Burger, Rasch and Sutour] Bias-reduction in variational regularization. JMIV, 2017.
- [Deledalle, P. and Salmon] On debiasing restoration algorithms: applications to total-variation and nonlocal-means. SSVM, 2015.
- [Deledalle, P., Salmon and Vaiter] CLEAR: Covariant LEAst-square Re-fitting. SIAM SIIMS, 2017.
- [Deledalle, P., Salmon and Vaiter] Refitting solutions with block penalties. SSVM, 2019.
- [Osher, Burger, Goldfarb, Xu and Yin] An iterative regularization method for total variation-based image restoration. SIAM MMS, 2005.
- [Romano and Elad] Boosting of image denoising algorithms. SIAM SIIMS, 2015.
- [Talebi, Zhu and Milanfar] How to SAIF-ly boost denoising performance. IEEE TIP, 2013.
- [Vaiter, Deledalle, Peyré, Fadili and Dossal] The degrees of freedom of partly smooth regularizers. Annals of the Institute of Statistical Mathematics, 2016.

5. Conclusions / Sign in TV models [Brinkmann et al. 2016]
- Differentiable estimator w.r.t. y: $\hat x(y) = \operatorname{argmin}_x \|x - y\|^2 + \lambda\|\nabla x\|_{1,2}$.
- Orientation preservation: $\mathcal O_{\hat x(y)} = \{\, x \mid (\nabla x)_i = \alpha_i (\nabla\hat x)_i, \ \forall i \in \mathcal I \,\}$.
- Infimal convolution of Bregman divergences: $\tilde x_{\mathrm{ICB}} = \operatorname{argmin}_{x \in \mathcal O_{\hat x(y)}} \|x - y\|^2$.
