
SLIDE 1

Space-variant Generalized Gaussian Regularization for Image Restoration

Alessandro Lanza, Serena Morigi, Monica Pragliola, Fiorella Sgallari

University of Bologna Computational Methods for Inverse Problems in Imaging Workshop July 16-18, 2018, Como

SLIDE 2

Outline

  • Image restoration;
  • Non-convex regularization: the TVp-ℓ2 model;
  • Novel contribution: the space-variant TV^SV_{p,α}-ℓq model: motivation and main idea;
  • Numerical evidence;
  • Conclusions and future work.
SLIDE 3

Image Restoration

Direct problem:   u  −→  [blur and/or noise]  −→  g = Φ(u)
  (original image → observed image)

Inverse problem:  g  −→  [deblur and/or denoise]  −→  u = Φ⁻¹(g)
  (observed image → original image)
SLIDE 4

Degradation model

Continuous formulation:

  g(x) = (k ∗ u)(x) + n(x) = ∫_Ω k(x − y) u(y) dy + n(x),   x ∈ Ω ⊂ R²,

where k is a known space-invariant blur kernel, n is the noise function, whose distribution is known, u is the original image function and g is the corrupted image function.

Discrete formulation:

  g = Ku + n,   g, u, n ∈ R^d,  K ∈ R^{d×d},

where K is a large, sparse, structured matrix whose structure depends on the boundary conditions. In general, K is very ill-conditioned.
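The discrete degradation model g = Ku + n can be sketched as follows. This is a minimal illustration under assumptions not stated on the slide (periodic boundary conditions, a Gaussian PSF, AWGN with a chosen standard deviation); K is never formed explicitly, since under periodic BCs the product Ku is a circular convolution, computed here via the FFT.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian blur kernel (illustrative choice of k)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(u, psf, noise_std):
    """g = K u + n with periodic boundary conditions.

    Under periodic BCs the blur matrix K is block-circulant, so K u is a
    circular convolution, evaluated in the Fourier domain."""
    pad = np.zeros(u.shape)
    s = psf.shape[0]
    pad[:s, :s] = psf
    # re-centre the PSF at the (0, 0) corner so the convolution is not shifted
    pad = np.roll(pad, (-(s // 2), -(s // 2)), axis=(0, 1))
    Ku = np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(u)))
    return Ku + noise_std * rng.standard_normal(u.shape)

# toy piecewise-constant image: a bright square on a dark background
u = np.zeros((64, 64))
u[24:40, 24:40] = 1.0
g = degrade(u, gaussian_psf(9, 1.5), noise_std=0.02)
```

Since the kernel sums to one, the blur preserves the mean intensity; only the zero-mean noise perturbs it.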

SLIDE 5

Variational Methods

  u* ← arg min_{u ∈ R^d} { F(u, g, K) + µ R(u) }

  • F fidelity term, set according to the noise:

      F(u, g, K) = ‖Ku − g‖_q^q,   q ∈ {1, 2};

    q = 2 for Gaussian noise, q = 1 for impulse noise.

  • µ > 0 regularization parameter.
SLIDE 6

Variational Methods

  u* ← arg min_{u ∈ R^d} { F(u, g, K) + µ R(u) }

  • R regularization term, set according to the properties of the image:

      Regularizer         R(u)
      Tikhonov            Σ_{i=1}^{d} ‖(∇u)_i‖_2²
      Total Variation¹    Σ_{i=1}^{d} ‖(∇u)_i‖_2

Figure: Original (a) and corrupted image (b), edges by Tikhonov (c) and TV (d).

¹ Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 1992.
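The two regularizers of the table can be compared on a toy pair of images. This sketch (forward differences with periodic boundary conditions are our own convention, not taken from the slides) shows why Tikhonov blurs edges while TV preserves them: for a sharp step and a smooth ramp of the same total height, TV charges both the same, whereas Tikhonov penalizes the step far more.

```python
import numpy as np

def grad(u):
    """Forward-difference discrete gradient with periodic BCs:
    returns a (2, m, n) array of vertical/horizontal differences."""
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def tikhonov(u):
    # R(u) = sum_i ||(grad u)_i||_2^2
    return np.sum(grad(u) ** 2)

def tv(u):
    # R(u) = sum_i ||(grad u)_i||_2   (isotropic TV)
    g = grad(u)
    return np.sum(np.sqrt(np.sum(g ** 2, axis=0)))

# a sharp edge and a smooth ramp of the same total height
step = np.tile(np.repeat([0.0, 1.0], 16), (32, 1))
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
```

Here tv(step) ≈ tv(ramp), while tikhonov(step) is roughly twice tikhonov(ramp): the quadratic penalty prefers spreading a jump over many pixels, which is exactly the edge-smearing seen in panel (c).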
SLIDE 7

The TVp-ℓ2 model²

  u* ← arg min_{u ∈ R^d} { (µ/2) ‖Ku − g‖_2² + Σ_{i=1}^{d} ‖(∇u)_i‖_2^p }

Regularization:

                   p > 1                     p < 1
  Advantages       convexity                 sparsity-enhancing
  Disadvantages    not sparsity-enhancing    non-convexity

² Lanza, A., Morigi, S., Sgallari, F.: Constrained TVp-ℓ2 Model for Image Restoration. Journal of Scientific Computing, 2016.

SLIDE 8

Deriving the TVp-ℓ2 model

Let us introduce π(u) as the prior probability density function (pdf), π(g|u, K) as the likelihood pdf and π(u|g, K) as the posterior pdf.

Maximum A Posteriori (MAP) Estimation. Resorting to Bayes' formula, we have:

  max_{u ∈ R^d} π(u|g, K)  ⇔  max_{u ∈ R^d} log π(u|g, K)
                           ⇔  max_{u ∈ R^d} log( π(u) π(g|u, K) )
                           ⇔  min_{u ∈ R^d} { −log π(g|u, K) − log π(u) },

where −log π(g|u, K) is the fidelity term (← likelihood) and −log π(u) is the regularization term (← prior).

SLIDE 9

Gradient norm distribution

Prior: u is a Markov Random Field:

  π(u) = (1/Z) Π_{i=1}^{d} exp( −α V_{c_i}(u) ) = (1/Z) exp( −α Σ_{i=1}^{d} V_{c_i}(u) ),

where V_{c_i} is a function depending only on the clique of neighbors of pixel i, i = 1, ..., d.

Setting V_{c_i} = ‖(∇u)_i‖_2, we have

  π(u) = (1/Z) exp( −α Σ_{i=1}^{d} ‖(∇u)_i‖_2 ) = (1/Z) exp( −α TV(u) ),

i.e. the norms of the gradients at each pixel are distributed according to an exponential distribution:

  π(u; α) = α e^{−αu} for u ≥ 0,   π(u; α) = 0 for u < 0.

SLIDE 10

Gradient norm distribution

  Exponential:                 π(u; α)    = α e^{−αu},                  u ≥ 0
  half Generalized Gaussian:   π(u; α, p) = (αp / Γ(1/p)) e^{−(αu)^p},  u ≥ 0
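A quick numerical sanity check of the half generalized Gaussian density (the parameter values below are arbitrary, not from the slides): it integrates to 1 over [0, ∞), and for p = 1 it reduces exactly to the exponential density of the previous slide.

```python
import math
import numpy as np

def hgg_pdf(u, alpha, p):
    """Half generalized Gaussian density on u >= 0:
    pi(u; alpha, p) = alpha * p / Gamma(1/p) * exp(-(alpha*u)**p)."""
    return alpha * p / math.gamma(1.0 / p) * np.exp(-(alpha * u) ** p)

u = np.linspace(0.0, 50.0, 200001)
f = hgg_pdf(u, alpha=0.8, p=0.7)
# trapezoidal quadrature of the density: should be ~1 (the tail beyond
# u = 50 is negligible for these parameters)
mass = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))

# p = 1 recovers the exponential: alpha * exp(-alpha * u)
expo = hgg_pdf(u, alpha=0.8, p=1.0)
```

The substitution t = (αu)^p turns the integral into Γ(1/p)/(αp), which the prefactor cancels; the shape parameter p only reshapes the density, it never changes its total mass.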

SLIDE 11

The half Generalized Gaussian distribution

Hence, the prior turns into

  π(u) = (1/Z) exp( −α Σ_{i=1}^{d} ‖(∇u)_i‖_2^p ) = (1/Z) exp( −α TVp(u) ).

Likelihood (AWGN):

  π(g|u; K) = Π_{i=1}^{d} ( 1/(√(2π) σ) ) exp( −(Ku − g)_i² / (2σ²) )
            = (1/W) exp( −‖Ku − g‖_2² / (2σ²) ).
SLIDE 12

The half Generalized Gaussian distribution

Recalling Bayes' formula,

  max_{u ∈ R^d} π(u|g, K)  −→  min_{u ∈ R^d} { −log π(g|u, K) − log π(u) },

the following model is derived:

TVp-ℓ2:

  u* ← arg min_{u ∈ R^d} { (µ/2) ‖Ku − g‖_2² + Σ_{i=1}^{d} ‖(∇u)_i‖_2^p }.

Note: the shape parameter p of the hGGD is automatically estimated.

SLIDE 13

TV^SV_{p,α}: Motivation

Figure: test image (a); global histogram (b), with zoom (c); smooth region (d), with local histogram (e) and zoom (f); texture region (g), with local histogram (h) and zoom (i).

SLIDE 14

TV^SV_{p,α}: Main Idea

  • Introduce a space-variant prior;
  • Estimate automatically the parameters characterizing the prior;
  • Besides AWGN, consider also impulse noise, such as AWLN and salt-and-pepper noise (SPN).

Likelihood (AWLN):

  π(g|u; K) = Π_{i=1}^{d} ( 1/(2β) ) exp( −|Ku − g|_i / β ) = (1/W) exp( −‖Ku − g‖_1 / β ).

Fidelity (SPN): in order to promote sparsity of the noise, the ℓ0 pseudo-norm of the residual Ku − g is a popular choice. Nevertheless, in general the ℓ1 norm is adopted:

  F(u, g, K) = ‖Ku − g‖_1.

SLIDE 15

TV^SV_{p,α}: Main Idea

Non-stationary Markov Random Field prior³:

  π(u) = (1/Z) exp( −Σ_{i=1}^{d} α_i ‖(∇u)_i‖_2^{p_i} ) = (1/Z) exp( −TV^SV_{p,α}(u) ).

TV^SV_{p,α}-ℓq, q ∈ {1, 2}:

  u* ← arg min_{u ∈ R^d} { (µ/q) ‖Ku − g‖_q^q + Σ_{i=1}^{d} α_i ‖(∇u)_i‖_2^{p_i} }

³ Lanza, A., Morigi, S., Pragliola, M., Sgallari, F.: Space-variant Generalized Gaussian Regularization for Image Restoration. Computer Methods in Biomechanics and Biomedical Engineering, 2018.

SLIDE 16

Automatic Parameter Estimation

The procedure adopted to estimate the global p is particularly successful when a large number of samples is available.

  • m_i = ‖(∇u)_i‖_2, i = 1, ..., d;
  • N_i^s symmetric square neighborhood of pixel i of size s ∈ {3, 5, ...}.

We have⁴:

  p_i = h⁻¹(ρ_i),   ρ_i = card(N_i^s) · ( Σ_{j ∈ N_i^s} m_j² ) / ( Σ_{j ∈ N_i^s} |m_j| )²,   i = 1, ..., d,

where

  h(z) = Γ(1/z) Γ(3/z) / Γ²(2/z).

⁴ Sharifi, K., Leon-Garcia, A.: Estimation of shape parameter for generalized Gaussian distributions in subband decompositions of video. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 5, Iss. 1, Feb 1995.
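The shape estimator above can be sketched in a few lines. This is a hypothetical implementation: the bisection used to invert h and the gamma-power sampler used for the sanity check are our own choices, not part of the slides; only the ratio ρ and the function h come from the Sharifi–Leon-Garcia estimator.

```python
import math
import numpy as np

def h(z):
    """h(z) = Gamma(1/z) * Gamma(3/z) / Gamma(2/z)^2, i.e. the ratio
    E[m^2] / E[|m|]^2 for a generalized Gaussian with shape z."""
    return math.gamma(1 / z) * math.gamma(3 / z) / math.gamma(2 / z) ** 2

def h_inv(rho, lo=0.05, hi=10.0, iters=100):
    """Invert h by bisection: h is strictly decreasing on (0, inf)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) > rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate_p(m):
    """Shape estimate from samples m (in the slide, the gradient norms
    in the neighborhood N_i^s of pixel i)."""
    m = np.asarray(m, dtype=float)
    rho = m.size * np.sum(m ** 2) / np.sum(np.abs(m)) ** 2
    return h_inv(rho)

# sanity check: if T ~ Gamma(1/p, 1) then T**(1/p) is half-GG with alpha = 1
rng = np.random.default_rng(1)
p_true = 0.8
m = rng.gamma(shape=1 / p_true, scale=1.0, size=200_000) ** (1 / p_true)
p_hat = estimate_p(m)
```

In the space-variant model this would be run once per pixel on its neighborhood; with the small sample sizes of a 3×3 or 5×5 window the estimate is of course much noisier than in this large-sample check.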

SLIDE 17

Automatic Parameter Estimation

Once p_i for a pixel is estimated, the local likelihood function is given by

  L(α, p_i; x_1, ..., x_n) = Π_{j=1}^{n} ( αp_i / Γ(1/p_i) ) exp( −(αx_j)^{p_i} )
                           = ( αp_i / Γ(1/p_i) )^n exp( −Σ_{j=1}^{n} (αx_j)^{p_i} ).

The value of the local scale parameter is obtained by solving the following optimization problem:

  α_i = arg max_α { log L(α, p_i; x_1, ..., x_n) }.

By imposing the first-order optimality condition, we have:

  α_i = ( (p_i / n) Σ_{j=1}^{n} x_j^{p_i} )^{−1/p_i}.
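The closed-form maximizer can be checked numerically. This sketch (the gamma-power sampler and the grid search are our own devices for the check, not from the slides) draws synthetic half-GG samples with α = 1 and verifies that the first-order-optimality formula matches a brute-force maximization of the log-likelihood.

```python
import math
import numpy as np

def neg_log_lik(alpha, p, x):
    """-log L(alpha, p; x) for the half generalized Gaussian."""
    n = x.size
    return -(n * math.log(alpha * p / math.gamma(1 / p))
             - np.sum((alpha * x) ** p))

def alpha_mle(p, x):
    """Closed form from d/d(alpha) log L = n/alpha - p*alpha**(p-1)*sum(x**p) = 0:
    alpha = ( (p/n) * sum_j x_j**p ) ** (-1/p)."""
    return (p / x.size * np.sum(x ** p)) ** (-1.0 / p)

rng = np.random.default_rng(2)
p = 0.7
# half-GG samples with alpha = 1 via the gamma-power construction
x = rng.gamma(shape=1 / p, scale=1.0, size=50_000) ** (1 / p)
a_star = alpha_mle(p, x)

# brute force: the closed form should beat every point on a grid around it
grid = np.linspace(0.5 * a_star, 2.0 * a_star, 401)
best = grid[np.argmin([neg_log_lik(a, p, x) for a in grid])]
```

Since E[x^p] = 1/p for the unit-scale half-GG, the closed form recovers α ≈ 1 here, and the grid minimizer lands on the grid point nearest to it.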

SLIDE 18

Alternating Directions Method of Multipliers⁵

Original problem:      u* ← min_u { f1(u) + f2(Du) }

Variable splitting:    {u*, z*} ← min_{u,z} { f1(u) + f2(z) },  s.t.  z = Du

Augmented Lagrangian:  L(u, z; λ) = f1(u) + f2(z) − ⟨λ, z − Du⟩ + (β/2) ‖z − Du‖_2²

Saddle point problem: find (u*, z*; λ*) such that

  L(u*, z*; λ) ≤ L(u*, z*; λ*) ≤ L(u, z; λ*),   ∀ (u, z; λ).

Subproblems:

  (u^{(k+1)}, z^{(k+1)}) ← arg min_{u,z} L(u, z; λ^{(k)}),
  λ^{(k+1)} ← λ^{(k)} − β ( z^{(k+1)} − Du^{(k+1)} ).

⁵ Boyd, S. et al.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
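The generic scheme above can be illustrated on a toy instance. This is not the paper's space-variant model: it is a minimal sketch with f1(u) = ½‖u − g‖², f2 = µ‖·‖_1 and D the 1-D forward-difference operator (i.e. 1-D TV denoising), where the z-subproblem has the classical soft-thresholding closed form and the u-subproblem is a small linear system.

```python
import numpy as np

def admm_tv1d(g, mu, beta=1.0, iters=300):
    """Minimal ADMM sketch for min_u 0.5*||u - g||^2 + mu*||D u||_1,
    with D = forward differences, via the splitting z = D u."""
    n = g.size
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # (n-1) x n difference matrix
    A = np.eye(n) + beta * D.T @ D                 # u-update system matrix
    z = np.zeros(n - 1)
    lam = np.zeros(n - 1)
    u = g.copy()
    for _ in range(iters):
        # u-step: quadratic in u -> linear system
        u = np.linalg.solve(A, g + D.T @ (beta * z - lam))
        # z-step: prox of mu*||.||_1 -> soft-thresholding
        v = D @ u + lam / beta
        z = np.sign(v) * np.maximum(np.abs(v) - mu / beta, 0.0)
        # dual ascent on the multiplier
        lam = lam + beta * (D @ u - z)
    return u

# usage: denoise a noisy step signal
rng = np.random.default_rng(3)
clean = np.repeat([0.0, 1.0], 50)
noisy = clean + 0.1 * rng.standard_normal(100)
rec = admm_tv1d(noisy, mu=0.5)
```

The three sub-updates mirror the structure of the next slides: closed-form prox steps for the split variables, one linear system for u, and a gradient-ascent update of the multiplier.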

SLIDE 19

Alternating Directions Method of Multipliers

Constrained form:

  {u*, r*, t*} ← min_{u,r,t} { (µ/q) ‖r‖_q^q + Σ_{i=1}^{d} α_i ‖t_i‖_2^{p_i} },   q ∈ {1, 2},
  s.t.  r = Ku − g,  t = Du.

Augmented Lagrangian:

  L(u, r, t; λr, λt) = Σ_{i=1}^{d} α_i ‖t_i‖_2^{p_i} + (µ/q) ‖r‖_q^q
                       − ⟨λt, t − Du⟩ + (βt/2) ‖t − Du‖_2²
                       − ⟨λr, r − (Ku − g)⟩ + (βr/2) ‖r − (Ku − g)‖_2².

Equivalent problems: the restored image u* is such that

  (u*, r*, t*; λr*, λt*) ← arg max_{λr,λt} min_{u,r,t} L(u, r, t; λr, λt)

(dual ascent in λr, λt; primal descent in u, r, t).
SLIDE 20

Subproblems for ADMM

One variable at a time is updated, while the others are fixed at their values from the previous iteration step:

  r^{(k+1)} ← arg min_r L(u^{(k)}, r, t^{(k)}; λr^{(k)}, λt^{(k)})          closed form
  t^{(k+1)} ← arg min_t L(u^{(k)}, r^{(k+1)}, t; λr^{(k)}, λt^{(k)})        closed form
  u^{(k+1)} ← arg min_u L(u, r^{(k+1)}, t^{(k+1)}; λr^{(k)}, λt^{(k)})      linear system

  λr^{(k+1)} ← λr^{(k)} − βr ( r^{(k+1)} − (Ku^{(k+1)} − g) ),
  λt^{(k+1)} ← λt^{(k)} − βt ( t^{(k+1)} − Du^{(k+1)} ).

SLIDE 21

Subproblems for ADMM

u^{(k+1)} ← arg min_u L(u, r^{(k+1)}, t^{(k+1)}; λr^{(k)}, λt^{(k)}), a linear system:

  ( DᵀD + (βr/βt) KᵀK ) u = Dᵀ( t^{(k+1)} − (1/βt) λt^{(k)} ) + (βr/βt) Kᵀ( r^{(k+1)} − (1/βr) λr^{(k)} + g ).

The coefficient matrix has full rank; in fact,

  Ker(DᵀD) ∩ Ker(KᵀK) = {0}.

  Periodic BC → DFT,   Reflective BC → DCT,   Anti-Reflective BC → DST.
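The "Periodic BC → DFT" entry can be made concrete: under periodic boundary conditions both DᵀD and KᵀK are block-circulant, hence diagonalized by the 2-D DFT, and the linear system is solved by one pointwise division in the Fourier domain. The following is an illustrative sketch (the toy horizontal blur kernel and the sizes are our own assumptions), checked against the same operators applied in the spatial domain.

```python
import numpy as np

def fft_solve(rhs, psf_pad, c):
    """Solve (D^T D + c K^T K) u = rhs under periodic boundary conditions.
    D: forward-difference gradient; K: circular convolution with the
    image-sized, origin-centred kernel psf_pad; c = beta_r / beta_t."""
    m, n = rhs.shape
    # eigenvalues of D^T D: |F(dx)|^2 + |F(dy)|^2 for the difference stencils
    dx = np.zeros((m, n)); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros((m, n)); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    eig_DtD = np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2
    eig_KtK = np.abs(np.fft.fft2(psf_pad)) ** 2    # eigenvalues of K^T K
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (eig_DtD + c * eig_KtK)))

# consistency check against the operators applied in the spatial domain
rng = np.random.default_rng(4)
u_true = rng.standard_normal((32, 32))

def DtD(u):
    """D^T D u with periodic BCs (a negated 5-point Laplacian)."""
    dxu = np.roll(u, -1, 1) - u
    dyu = np.roll(u, -1, 0) - u
    return (np.roll(dxu, 1, 1) - dxu) + (np.roll(dyu, 1, 0) - dyu)

def K(u):
    """Toy symmetric horizontal blur with kernel [0.25, 0.5, 0.25]."""
    return 0.5 * u + 0.25 * np.roll(u, 1, 1) + 0.25 * np.roll(u, -1, 1)

psf = np.zeros((32, 32)); psf[0, 0] = 0.5; psf[0, 1] = psf[0, -1] = 0.25
c = 2.0
rhs = DtD(u_true) + c * K(K(u_true))
u_rec = fft_solve(rhs, psf, c)
```

Note how the full-rank condition of the slide shows up here: eig_DtD vanishes only at the zero frequency, where eig_KtK equals the squared kernel sum, so the Fourier-domain denominator is strictly positive.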

SLIDE 22

Numerical tests: AWGN

(a) original, (b) corrupted (BSNR = 20), (c) TV^SV_{p,α}-L2; (d) p-map, (e) α-map.

  geometric   TV-L2   TVp-L2   TV^SV_{p,α}-L2
  ISNR        7.77    7.92     8.60

SLIDE 23

Numerical tests: SPN

(a) original, (b) TV-L1 (18.55), (c) TVp-L1 (19.10), (d) TV^SV_p-L1 (21.14), (e) corrupted, (f) zoom of (b), (g) zoom of (c), (h) zoom of (d).

Figure: Example (SPN): restoration (ISNR in brackets) of the test image aneurism (https://medpix.nlm.nih.gov) corrupted by a γ = 0.1 level noise.

SLIDE 24

Numerical tests: AWLN

(a) original, (b) corrupted, (c) TV^SV_{p,α}-L1; (d) original, (e) corrupted (BSNR = 10), (f) TV^SV_{p,α}-L1.

          lungs                               ecography
          TV-L2   TVp-L2   TV^SV_{p,α}-L2    TV-L2   TVp-L2   TV^SV_{p,α}-L2
  ISNR    6.20    6.80     8.32              5.93    6.40     8.32

SLIDE 25

Conclusions and future work

  • We proposed a local version of the TVp-ℓ2 model with an automatic procedure for the estimation of the local parameters. Furthermore, we introduced the TV^SV_{p,α}-ℓ1 model, more suitable when dealing with impulse noise.

As future work, we are going to:

  • Substitute the norm of the gradient at each pixel with the gradient itself, resorting to a Multivariate Generalized Gaussian distribution, in order to take into account the orientation of textures and edges in the restoration process;
  • Adopt other algorithms for solving the optimization problem (e.g. Majorize-Minimize).

SLIDE 26

Thank you for your attention!