

  1. Space-variant Generalized Gaussian Regularization for Image Restoration
Alessandro Lanza, Serena Morigi, Monica Pragliola, Fiorella Sgallari
University of Bologna
Computational Methods for Inverse Problems in Imaging Workshop, July 16-18, 2018, Como

  2. Outline
• Image restoration;
• Non-convex regularization: the TV_p-ℓ_2 model;
• Novel contribution: the space-variant TV^sv_{p,α}-ℓ_q model: motivation; main idea;
• Numerical evidence;
• Conclusions and future work.

  3. Image Restoration
Direct problem: u (original image) → blur and/or noise → g = Φ(u) (observed image).
Inverse problem: g (observed image) → deblur and/or denoise → u = Φ⁻¹(g) (original image).

  4. Degradation Model
Continuous formulation:
$$ g(x) = (k * u)(x) + n(x) = \int_\Omega k(x-y)\, u(y)\, dy + n(x), \qquad x \in \Omega \subset \mathbb{R}^2, $$
where k is a known space-invariant blur kernel, n is the noise function, whose distribution is known, u is the original image function and g is the corrupted image function.
Discrete formulation:
$$ g = Ku + n, \qquad g, u, n \in \mathbb{R}^d, \quad K \in \mathbb{R}^{d \times d}, $$
where K is a large, sparse, structured matrix, whose structure depends on the boundary conditions. In general, K is very ill-conditioned.
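As a rough illustration, the sketch below simulates the discrete degradation model g = Ku + n in Python, assuming a Gaussian blur standing in for the operator K and AWGN; the image size, blur width and noise level are illustrative choices, not values from the talk.

```python
# Minimal sketch of the degradation model g = Ku + n (assumptions:
# Gaussian blur as the space-invariant kernel k, AWGN as the noise n).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

u = rng.random((64, 64))                 # stand-in for the original image
Ku = gaussian_filter(u, sigma=2.0)       # (k * u): space-invariant blur
n = 0.05 * rng.standard_normal(u.shape)  # AWGN with sigma = 0.05
g = Ku + n                               # observed image
```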

  5. Variational Methods
$$ u^* \leftarrow \arg\min_{u \in \mathbb{R}^d} \left\{ F(u, g, K) + \mu\, R(u) \right\} $$
• F is the fidelity term, set according to the noise:
$$ F(u, g, K) = \|Ku - g\|_q^q, \qquad q = 1, 2; $$
q = 2 for Gaussian noise, q = 1 for impulse noise.
• µ > 0 is the regularization parameter.

  6. Variational Methods
• R is the regularization term, set according to the properties of the image:
Tikhonov regularizer:
$$ R(u) = \sum_{i=1}^{d} \|(\nabla u)_i\|_2^2 $$
Total Variation regularizer [1]:
$$ R(u) = \sum_{i=1}^{d} \|(\nabla u)_i\|_2 $$
Figure: original (a) and corrupted image (b); edges recovered by Tikhonov (c) and by TV (d).
[1] Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 1992.
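For concreteness, a small sketch of the two discrete regularizers, assuming forward differences with replicated boundaries for the gradient operator (a common but not the only discretization):

```python
# Discrete Tikhonov and TV regularizers; (grad u)_i collects the
# horizontal and vertical forward differences at pixel i.
import numpy as np

def gradient(u):
    gx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return gx, gy

def tikhonov(u):
    gx, gy = gradient(u)
    return float(np.sum(gx**2 + gy**2))           # sum_i ||(grad u)_i||_2^2

def total_variation(u):
    gx, gy = gradient(u)
    return float(np.sum(np.sqrt(gx**2 + gy**2)))  # sum_i ||(grad u)_i||_2
```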

  7. The TV_p-ℓ_2 Model [2]
$$ u^* \leftarrow \arg\min_{u \in \mathbb{R}^d} \left\{ \sum_{i=1}^{d} \|(\nabla u)_i\|_2^p + \frac{\mu}{2} \|Ku - g\|_2^2 \right\} $$
Regularization:
p > 1: advantage: convexity; disadvantage: no sparsity enhancement.
p < 1: advantage: sparsity enhancement; disadvantage: non-convexity.
[2] Lanza, A., Morigi, S., Sgallari, F.: Constrained TV_p-ℓ_2 Model for Image Restoration. Journal of Scientific Computing, 2016.

  8. Deriving the TV_p-ℓ_2 Model
Let us introduce π(u) as the prior probability density function (pdf), π(g | u, K) as the likelihood pdf and π(u | g, K) as the posterior pdf.
Maximum A Posteriori (MAP) estimation: resorting to Bayes' formula, we have
$$ \max_{u \in \mathbb{R}^d} \pi(u \mid g, K) \;\Leftrightarrow\; \max_{u \in \mathbb{R}^d} \log \pi(u \mid g, K) \;\Leftrightarrow\; \max_{u \in \mathbb{R}^d} \log\left( \pi(u)\, \pi(g \mid u, K) \right) \;\Leftrightarrow\; \min_{u \in \mathbb{R}^d} \left\{ -\log \pi(g \mid u, K) - \log \pi(u) \right\} $$
Fidelity term ← likelihood; regularization term ← prior.

  9. Gradient Norm Distribution
Prior: u is a Markov Random Field:
$$ \pi(u) = \frac{1}{Z} \exp\left( -\alpha \sum_{i=1}^{d} V_{c_i}(u) \right), $$
where V_{c_i} is a function depending only on the clique of neighbors of pixel i, i = 1, ..., d. Setting V_{c_i}(u) = ||(∇u)_i||_2, we have
$$ \pi(u) = \frac{1}{Z} \exp\left( -\alpha \sum_{i=1}^{d} \|(\nabla u)_i\|_2 \right) = \frac{1}{Z} \exp\left( -\alpha\, \mathrm{TV}(u) \right), $$
i.e. the norm of the gradient at each pixel is distributed according to an exponential distribution:
$$ \pi(u; \alpha) = \begin{cases} \alpha\, e^{-\alpha u}, & u \ge 0, \\ 0, & u < 0. \end{cases} $$

  10. Gradient Norm Distribution
Exponential:
$$ \pi(u; \alpha) = \alpha\, e^{-\alpha u}, \qquad u > 0 $$
Half generalized Gaussian:
$$ \pi(u; \alpha, p) = \frac{\alpha p}{\Gamma(1/p)}\, e^{-(\alpha u)^p}, \qquad u > 0 $$
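The two densities are easy to compare numerically; a short sketch with illustrative parameters (α = 1 and p = 0.7 are arbitrary choices, not values from the talk):

```python
# Exponential vs. half generalized Gaussian (hGGD) densities on u > 0.
import numpy as np
from scipy.special import gamma

def pdf_exponential(u, alpha=1.0):
    return alpha * np.exp(-alpha * u)

def pdf_half_ggd(u, alpha=1.0, p=0.7):
    # the hGGD reduces to the exponential when p = 1
    return (alpha * p / gamma(1.0 / p)) * np.exp(-(alpha * u) ** p)

u = np.linspace(1e-3, 5.0, 200)
print(pdf_exponential(u)[:3], pdf_half_ggd(u)[:3])
```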

  11. The Half Generalized Gaussian Distribution
Hence, the prior turns into
$$ \pi(u) = \frac{1}{Z} \exp\left( -\alpha \sum_{i=1}^{d} \|(\nabla u)_i\|_2^p \right) = \frac{1}{Z} \exp\left( -\alpha\, \mathrm{TV}_p(u) \right). $$
Likelihood (AWGN):
$$ \pi(g \mid u; K) = \prod_{i=1}^{d} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( \frac{-(Ku - g)_i^2}{2\sigma^2} \right) = \frac{1}{W} \exp\left( \frac{-\|Ku - g\|_2^2}{2\sigma^2} \right). $$

  12. The Half Generalized Gaussian Distribution
Recalling Bayes' formula,
$$ \max_{u \in \mathbb{R}^d} \pi(u \mid g, K) \;\longrightarrow\; \min_{u \in \mathbb{R}^d} \left\{ -\log \pi(g \mid u, K) - \log \pi(u) \right\}, $$
the following model is derived:
TV_p-ℓ_2:
$$ u^* \leftarrow \arg\min_{u \in \mathbb{R}^d} \left\{ \sum_{i=1}^{d} \|(\nabla u)_i\|_2^p + \frac{\mu}{2} \|Ku - g\|_2^2 \right\}. $$
Note: the shape parameter p of the hGGD is automatically estimated.
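One step the slides leave implicit is how µ collects the prior and noise parameters; a short check (my own filling of the step, under the AWGN likelihood and hGGD prior above):
$$ -\log \pi(u \mid g, K) = \alpha \sum_{i=1}^{d} \|(\nabla u)_i\|_2^p + \frac{1}{2\sigma^2} \|Ku - g\|_2^2 + \text{const}, $$
and dividing by α > 0, which does not change the minimizer, yields the model above with µ = 1/(ασ²).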

  13. TV^sv_{p,α}: Motivation
Figure: test image (a) with its global gradient-norm histogram (b) and a zoom (c); a smooth region (d) with its local histogram (e) and a zoom (f); a texture region (g) with its local histogram (h) and a zoom (i). The local histograms of smooth and textured regions differ markedly, suggesting that a single global pair (p, α) cannot fit the whole image.

  14. TV^sv_{p,α}: Main Idea
• Introduce a space-variant prior;
• Estimate automatically the parameters characterizing the prior;
• Consider, besides AWGN, impulse noise such as AWLN and salt-and-pepper noise.
Likelihood (AWLN):
$$ \pi(g \mid u; K) = \prod_{i=1}^{d} \frac{1}{2\beta} \exp\left( \frac{-|Ku - g|_i}{\beta} \right) = \frac{1}{W} \exp\left( \frac{-\|Ku - g\|_1}{\beta} \right). $$
Fidelity (SPN): in order to promote sparsity of the residual Ku − g, the ℓ_0 pseudo-norm of the residual is a popular choice. Nevertheless, in general the ℓ_1 norm is adopted:
$$ F(u, g, K) = \|Ku - g\|_1. $$

  15. TV^sv_{p,α}: Main Idea
Non-stationary Markov Random Field prior [3]:
$$ \pi(u) = \frac{1}{Z} \exp\left( -\sum_{i=1}^{d} \alpha_i \|(\nabla u)_i\|_2^{p_i} \right) = \frac{1}{Z} \exp\left( -\mathrm{TV}^{\mathrm{sv}}_{p,\alpha}(u) \right). $$
TV^sv_{p,α}-ℓ_q, q ∈ {1, 2}:
$$ u^* \leftarrow \arg\min_{u \in \mathbb{R}^d} \left\{ \sum_{i=1}^{d} \alpha_i \|(\nabla u)_i\|_2^{p_i} + \frac{\mu}{q} \|Ku - g\|_q^q \right\} $$
[3] Lanza, A., Morigi, S., Pragliola, M., Sgallari, F.: Space-variant Generalized Gaussian Regularization for Image Restoration. Computer Methods in Biomechanics and Biomedical Engineering, 2018.

  16. Automatic Parameter Estimation
The procedure adopted to estimate the global p is particularly successful when a large number of samples is available.
• m_i = ||(∇u)_i||_2, i = 1, ..., d;
• N_i^s: symmetric square neighborhood of pixel i of size s ∈ {3, 5, ...}.
We have [4]:
$$ p_i = h^{-1}(\rho_i), \qquad \rho_i = \left( \sum_{j \in N_i^s} |m_j| \right)^2 \Bigg/ \left( \mathrm{card}(N_i^s) \sum_{j \in N_i^s} m_j^2 \right), \qquad i = 1, \dots, d, $$
where
$$ h(z) = \frac{\Gamma^2(2/z)}{\Gamma(1/z)\, \Gamma(3/z)}. $$
[4] Sharifi, K., Leon-Garcia, A.: Estimation of shape parameter for generalized Gaussian distributions in subband decompositions of video. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 5, Iss. 1, Feb 1995.
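A sketch of how the local estimate p_i = h⁻¹(ρ_i) might be computed, assuming the gradient-magnitude image m as input and inverting h numerically on an assumed bracket p ∈ [0.1, 10] (the slides do not specify the inversion method):

```python
# Local shape-parameter estimation p_i = h^{-1}(rho_i) over s x s
# neighborhoods (sketch; bracket [0.1, 10] and clipping are assumptions).
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.optimize import brentq
from scipy.special import gamma

def h(z):
    return gamma(2.0 / z) ** 2 / (gamma(1.0 / z) * gamma(3.0 / z))

def estimate_p(m, s=5):
    # local sample moments: mean(|m|)^2 / mean(m^2) estimates rho_i
    mean_abs = uniform_filter(np.abs(m), size=s)
    mean_sq = uniform_filter(m ** 2, size=s)
    rho = mean_abs ** 2 / np.maximum(mean_sq, 1e-12)
    lo, hi = 0.1, 10.0
    rho = np.clip(rho, h(lo) + 1e-9, h(hi) - 1e-9)  # keep rho in h's range
    p = np.empty_like(rho)
    for idx, r in np.ndenumerate(rho):              # h is monotone on [lo, hi]
        p[idx] = brentq(lambda z: h(z) - r, lo, hi)
    return p
```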

  17. Automatic Parameter Estimation
Once p_i for a pixel is estimated, the local likelihood function is given by
$$ L(\alpha; p_i; x_1, \dots, x_n) = \prod_{j=1}^{n} \frac{\alpha p_i}{\Gamma(1/p_i)} \exp\left( -(\alpha x_j)^{p_i} \right) = \left( \frac{\alpha p_i}{\Gamma(1/p_i)} \right)^n \exp\left( -\sum_{j=1}^{n} (\alpha x_j)^{p_i} \right). $$
The value of the local scale parameter is obtained by solving the following optimization problem:
$$ \alpha_i = \arg\max_{\alpha} \log L(\alpha; p_i; x_1, \dots, x_n). $$
By imposing the first-order optimality condition, we have:
$$ \alpha_i = \left( \frac{p_i}{n} \sum_{j=1}^{n} x_j^{p_i} \right)^{-1/p_i}. $$
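The closed-form scale estimate translates directly into code (sketch; x is assumed to hold the n local gradient norms and p_i the shape parameter from the previous step):

```python
# Closed-form ML estimate of the local scale parameter alpha_i.
import numpy as np

def estimate_alpha(x, p_i):
    n = x.size
    return ((p_i / n) * np.sum(np.abs(x) ** p_i)) ** (-1.0 / p_i)
```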

  18. Alternating Direction Method of Multipliers [5]
Original problem:
$$ u^* \leftarrow \arg\min_{u} \left\{ f_1(u) + f_2(Du) \right\} $$
Variable splitting:
$$ \{u^*, z^*\} \leftarrow \arg\min_{u, z} \left\{ f_1(u) + f_2(z) \right\}, \qquad \text{s.t. } z = Du $$
Augmented Lagrangian:
$$ \mathcal{L}(u, z; \lambda) = f_1(u) + f_2(z) - \langle \lambda, z - Du \rangle + \frac{\beta}{2} \|z - Du\|_2^2 $$
Saddle-point problem: find (u*, z*; λ*) such that
$$ \mathcal{L}(u^*, z^*; \lambda) \le \mathcal{L}(u^*, z^*; \lambda^*) \le \mathcal{L}(u, z; \lambda^*), \qquad \forall\, (u, z; \lambda). $$
Subproblems:
$$ (u^{(k+1)}, z^{(k+1)}) \leftarrow \arg\min_{u, z} \mathcal{L}(u, z; \lambda^{(k)}), \qquad \lambda^{(k+1)} \leftarrow \lambda^{(k)} - \beta \left( z^{(k+1)} - Du^{(k+1)} \right). $$
[5] Boyd, S., et al.: Distributed optimization and statistical learning via the alternating direction method of multipliers. 2011.
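In code, the generic scheme might look like the sketch below, with the joint (u, z) minimization replaced by the usual one-at-a-time alternating sweep; `solve_u` and `solve_z` are hypothetical caller-supplied subproblem solvers:

```python
# Generic ADMM skeleton for min f1(u) + f2(Du) (sketch). The joint
# minimization over (u, z) on the slide is replaced by one alternating
# sweep, as is standard in ADMM.
def admm(solve_u, solve_z, D, z0, lam0, beta, iters=100):
    z, lam = z0, lam0
    for _ in range(iters):
        u = solve_u(z, lam)              # u-subproblem
        z = solve_z(D @ u, lam)          # z-subproblem
        lam = lam - beta * (z - D @ u)   # dual ascent step
    return u, z, lam
```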

  19. Alternating Direction Method of Multipliers
Constrained form:
$$ \{u^*, r^*, t^*\} \leftarrow \arg\min_{u, r, t} \left\{ \sum_{i=1}^{d} \alpha_i \|t_i\|_2^{p_i} + \frac{\mu}{q} \|r\|_q^q \right\}, \qquad q = 1, 2, $$
$$ \text{s.t. } r = Ku - g, \qquad t = Du $$
Augmented Lagrangian:
$$ \mathcal{L}(u, r, t; \lambda_r, \lambda_t) = \sum_{i=1}^{d} \alpha_i \|t_i\|_2^{p_i} + \frac{\mu}{q} \|r\|_q^q - \langle \lambda_t, t - Du \rangle + \frac{\beta_t}{2} \|t - Du\|_2^2 - \langle \lambda_r, r - (Ku - g) \rangle + \frac{\beta_r}{2} \|r - (Ku - g)\|_2^2 $$
Equivalent problem: the restored image u* is such that
$$ (u^*, r^*, t^*; \lambda_r^*, \lambda_t^*) \leftarrow \arg \underbrace{\max_{\lambda_r, \lambda_t}}_{\text{dual ascent}} \; \underbrace{\min_{u, r, t}}_{\text{primal descent}} \; \mathcal{L}(u, r, t; \lambda_r, \lambda_t) $$

  20. Subproblems for ADMM
One variable at a time is updated, while the others are kept fixed at their values from the previous iteration step:
$$ r^{(k+1)} \leftarrow \arg\min_{r \in V} \mathcal{L}(u^{(k)}, r, t^{(k)}; \lambda_r^{(k)}, \lambda_t^{(k)}), \qquad \text{closed form} $$
$$ t^{(k+1)} \leftarrow \arg\min_{t \in Q} \mathcal{L}(u^{(k)}, r^{(k+1)}, t; \lambda_r^{(k)}, \lambda_t^{(k)}), \qquad \text{closed form} $$
$$ u^{(k+1)} \leftarrow \arg\min_{u \in V} \mathcal{L}(u, r^{(k+1)}, t^{(k+1)}; \lambda_r^{(k)}, \lambda_t^{(k)}), \qquad \text{linear system} $$
$$ \lambda_r^{(k+1)} \leftarrow \lambda_r^{(k)} - \beta_r \left( r^{(k+1)} - (Ku^{(k+1)} - g) \right), \qquad \lambda_t^{(k+1)} \leftarrow \lambda_t^{(k)} - \beta_t \left( t^{(k+1)} - Du^{(k+1)} \right). $$
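As an example of what "closed form" means here, consider the r-subproblem; the derivation below is my own, consistent with the Lagrangian on the previous slide. For q = 2 the first-order optimality condition is linear in r, and for q = 1 the subproblem reduces to componentwise soft-thresholding.

```python
# Closed-form r-updates (sketch). With v = Ku - g, the r-subproblem is
#   min_r (mu/q)||r||_q^q - <lam_r, r - v> + (beta_r/2)||r - v||_2^2.
import numpy as np

def update_r_q2(Ku, g, lam_r, mu, beta_r):
    # q = 2: mu*r - lam_r + beta_r*(r - v) = 0
    v = Ku - g
    return (lam_r + beta_r * v) / (mu + beta_r)

def update_r_q1(Ku, g, lam_r, mu, beta_r):
    # q = 1: soft-threshold v + lam_r/beta_r at level mu/beta_r
    w = (Ku - g) + lam_r / beta_r
    return np.sign(w) * np.maximum(np.abs(w) - mu / beta_r, 0.0)
```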
