
SLIDE 1

Space-variant directional regularisation for image restoration problems

Luca Calatroni

CMAP, École Polytechnique, CNRS. Joint work with M. Pragliola, A. Lanza, F. Sgallari (University of Bologna). Computational Imaging workshop, Semester program in Computer Vision.

ICERM, March 18–22, 2019, Providence, USA

SLIDE 2

Introduction

SLIDE 3

Imaging inverse problems

Inverse problem formulation

Given f, seek u such that f = Ku + n, where K (known) models blur and n is the noise in the data. Task: compute a reconstruction u*. Different viewpoints...

SLIDE 4

Imaging inverse problems

Inverse problem formulation

Given f, seek u such that f = Ku + n, where K (known) models blur and n is the noise in the data. Task: compute a reconstruction u*. Different viewpoints...

Statistics: maximise the posterior,
$$ u^* \in \operatorname*{argmax}_u\, P(u; f, K), $$
and use Bayes' formula:
$$ u^* \in \operatorname*{argmax}_u\, \frac{P(u)\,P(f, K; u)}{P(f)}. $$

⟹ Optimisation: MAP estimation,
$$ u^* \in \operatorname*{argmin}_u\, -\log\big(P(u)\,P(f, K; u)\big), $$
and variational regularisation:
$$ u^* \in \operatorname*{argmin}_u\, R(u) + \mu\,\Phi(Ku; f). $$
Discrete/continuous framework.

⟹ PDEs: evolution problem,
$$ u_t = -\nabla R(u) - \mu\,\nabla\Phi(Ku; f), \qquad u(0) = f, \quad \text{b.c.}, $$
and get u* as the steady state of the system above.

Analogies:
  • P(u), R(u) encode prior assumptions on u (topic of this talk!)
  • P(f, K; u), Φ(Ku; f) describe the noise statistics


SLIDE 6

Tailored image regulariser?

Image model adapted to local features?


SLIDE 7

Statistical viewpoint: Markov Random Fields

Standard image priors are based on stationary Markov Random Field modelling:
$$ P(u) = \frac{1}{Z}\prod_{i=1}^{n}\exp\big(-\alpha\,V_{N_i}(u)\big) = \frac{1}{Z}\exp\Big(-\alpha\sum_{i=1}^{n}V_{N_i}(u)\Big), $$
where α > 0, N_i is the clique around pixel i and V_{N_i} is the Gibbs potential on N_i.


SLIDE 9

Statistical viewpoint: Markov Random Fields

Standard image priors are based on stationary Markov Random Field modelling:
$$ P(u) = \frac{1}{Z}\exp\big(-\alpha\,\mathrm{TV}(u)\big) = \frac{1}{Z}\exp\Big(-\alpha\sum_{i=1}^{n}\|(\nabla u)_i\|_2\Big), $$
where α > 0 and V_{N_i}(u) = ‖(∇u)_i‖₂.

Interpretation

q := ‖∇u‖₂ is locally distributed as an α-half-Laplacian:
$$ P(q_i; \alpha) = \begin{cases} \alpha\,\exp(-\alpha q_i) & \text{for } q_i \ge 0, \\ 0 & \text{for } q_i < 0, \end{cases} $$
and α describes image scales.

A one-parameter family describing all pixel features... too restrictive?


SLIDE 11

Variational and PDE approach: TV regularisation

$$ \mathrm{TV}(u) = \sum_{i=1}^{n} \|(\nabla u)_i\|_2 $$

Variational model:
$$ E(u) = \mathrm{TV}(u) + \frac{\lambda}{2}\,\|u - f\|_2^2 $$

PDE model (non-linear diffusion):
$$ u_t = p + \lambda (f - u), \qquad p \in \partial\,\mathrm{TV}(u) $$

Seek u fitting the data with low TV: edge preservation & noise removal¹!

[Figure: (a) f, (b) Tikhonov, (c) TV. An edge-preserving non-linear diffusion PDE...]

¹ Rudin, Osher, Fatemi, '92
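To make the TV-L2 model above concrete, here is a minimal NumPy sketch (mine, not from the talk): instead of the gradient flow it runs Chambolle's dual projection algorithm ('04) for the same minimisation problem, with grad/div a standard discrete adjoint pair; lam, tau and iters are illustrative choices.

```python
import numpy as np

def grad(u):
    # forward differences with homogeneous Neumann boundary
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad, so that <grad u, p> = -<u, div p>
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d

def tv_denoise(f, lam=10.0, tau=0.125, iters=200):
    # Chambolle's projection algorithm for min_u TV(u) + (lam/2)||u - f||_2^2;
    # tau <= 1/8 guarantees convergence of the dual fixed-point iteration
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - lam * f)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px, py = (px + tau * gx) / denom, (py + tau * gy) / denom
    return f - div(px, py) / lam
```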

SLIDE 12

Introduction

Describing anisotropy


SLIDE 15

Modelling anisotropy

Anisotropy operators

For all x ∈ Ω, let λ = (λ₁, λ₂) : Ω → R² be a positive vector field and let θ : Ω → [0, 2π) describe the local orientation. Define
$$ \Lambda(x) := \begin{pmatrix} \lambda_1(x) & 0 \\ 0 & \lambda_2(x) \end{pmatrix}, \qquad R_{\theta(x)} := \begin{pmatrix} \cos\theta(x) & \sin\theta(x) \\ -\sin\theta(x) & \cos\theta(x) \end{pmatrix}. $$

M_{λ,θ} := Λ R_θ is the anisotropic metric, and W_{λ,θ} := M_{λ,θ}ᵀ M_{λ,θ}.

Note: if z(x) := (cos θ(x), sin θ(x)), then
$$ M_{\lambda,\theta}\,\nabla u(x) = \begin{pmatrix} \lambda_1(x)\,\partial_{z(x)} u(x) \\ \lambda_2(x)\,\partial_{z(x)^{\perp}} u(x) \end{pmatrix} $$
⟹ a locally weighted gradient along z (see the sketch below).

Using this formalism:
  • anisotropic diffusion: u_t = div(W_{λ,θ} ∇u);
  • corresponding directional energy: E_{λ,θ}(u) := ∫ ‖M_{λ,θ} ∇u‖² dx.
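As a small illustration of these operators (a sketch under my own discretisation choices, not code from the talk), the following NumPy snippet applies R_θ and Λ to the discrete gradient, evaluates the directional energy E_{λ,θ}, and takes one explicit step of the anisotropic diffusion u_t = div(W_{λ,θ}∇u); the per-pixel fields theta, lam1, lam2 are assumed given.

```python
import numpy as np

def rotated_gradient(u, theta):
    # R_theta grad u = (d_z u, d_{z^perp} u) for z = (cos theta, sin theta)
    gx, gy = np.gradient(u)
    d_z = np.cos(theta) * gx + np.sin(theta) * gy
    d_zp = -np.sin(theta) * gx + np.cos(theta) * gy
    return d_z, d_zp

def directional_energy(u, theta, lam1, lam2):
    # E_{lambda,theta}(u) = sum_x || M_{lambda,theta} grad u(x) ||^2
    d_z, d_zp = rotated_gradient(u, theta)
    return np.sum((lam1 * d_z)**2 + (lam2 * d_zp)**2)

def aniso_diffusion_step(u, theta, lam1, lam2, dt=0.1):
    # one explicit Euler step of u_t = div(W grad u), with W = M^T M assembled per pixel
    gx, gy = np.gradient(u)
    c, s = np.cos(theta), np.sin(theta)
    w11 = (lam1 * c)**2 + (lam2 * s)**2
    w12 = (lam1**2 - lam2**2) * c * s
    w22 = (lam1 * s)**2 + (lam2 * c)**2
    jx, jy = w11 * gx + w12 * gy, w12 * gx + w22 * gy   # flux W grad u
    return u + dt * (np.gradient(jx)[0] + np.gradient(jy)[1])
```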


SLIDE 17

Directional regularisation for imaging: previous work

  • Statistics of natural images: generalised Gaussian PDFs (Mallat, '89), Laplace (Green, '92), non-stationary MRFs & optimisation (Lanza, Morigi, Pragliola, Sgallari, '16, '18), ...;
  • Directional variational regularisation: variable exponent (Chen, Levine, Rao, '06), DT(G)V (Bayram, '12, Kongskov, Dong, Knudsen, '17, Parisotto, Schönlieb, Masnou, '18), ...;
  • Application to inverse problems: limited-angle tomography (Tovey, Benning et al., '19), ...;
  • PDEs: structure-tensor modelling (Weickert, '98; see the sketch below):
$$ \begin{cases} u_t = \operatorname{div}\!\big(D(J_\rho(\nabla u_\sigma))\,\nabla u\big) & \text{in } \Omega \times (0, T], \\ \big\langle D(J_\rho(\nabla u_\sigma))\,\nabla u,\, n \big\rangle = 0 & \text{on } \partial\Omega, \\ u(0, x) = f(x) & \text{in } \Omega, \end{cases} $$
    where, for convolution kernels K_ρ, K_σ,
$$ J_\rho(\nabla u_\sigma) := K_\rho * \big(\nabla u_\sigma \otimes \nabla u_\sigma\big), \qquad u_\sigma := K_\sigma * u, $$
    and D is a smooth, symmetric diffusion tensor;
  • Consistent numerical schemes (Fehrenbach, Mirebeau, '14, ...).

Common problem: tailored regularisation adapted to local image orientation/structures?
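For the structure-tensor bullet above, here is a minimal computation of J_ρ(∇u_σ) and the local orientation it encodes, assuming Gaussian kernels for K_σ and K_ρ (a sketch, not the talk's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(u, sigma=1.0, rho=3.0):
    # u_sigma = K_sigma * u, here with a Gaussian K_sigma
    u_sigma = gaussian_filter(u, sigma)
    gx, gy = np.gradient(u_sigma)
    # J_rho = K_rho * (grad u_sigma (outer) grad u_sigma), smoothed componentwise
    j11 = gaussian_filter(gx * gx, rho)
    j12 = gaussian_filter(gx * gy, rho)
    j22 = gaussian_filter(gy * gy, rho)
    # orientation of the dominant eigenvector of the 2x2 tensor
    theta = 0.5 * np.arctan2(2 * j12, j11 - j22)
    # coherence in [0, 1]: (lambda_1 - lambda_2) / (lambda_1 + lambda_2)
    coherence = np.sqrt((j11 - j22)**2 + 4 * j12**2) / (j11 + j22 + 1e-12)
    return theta, coherence
```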

SLIDE 18

Introduction

Variational space-variant regularisation

SLIDE 19

Space-variant and directional regularisation models

Discrete formulation. Let Ω be the image domain with |Ω| = n.
$$ \mathrm{TV}(u) = \sum_{i=1}^{n} \|(\nabla u)_i\|_2 $$

SLIDE 20

Space-variant and directional regularisation models

Discrete formulation. Let Ω be the image domain with |Ω| = n.
$$ \mathrm{TV}_p(u) = \sum_{i=1}^{n} \|(\nabla u)_i\|_2^{p} $$

Enforcing sparsity:
  • p = 2: Tikhonov regularisation (Tikhonov, Arsenin, '77);
  • p = 1: Total Variation (Rudin, Osher, Fatemi, '92);
  • 0 < p < 1: non-convex regularisation (Hintermüller, Wu, Valkonen, '13, '15, Nikolova, Ng, Tam, '10).

SLIDE 21

Space-variant and directional regularisation models

Discrete formulation. Let Ω be the image domain with |Ω| = n.
$$ \mathrm{TV}^{\mathrm{sv}}_{p}(u) = \sum_{i=1}^{n} \|(\nabla u)_i\|_2^{p_i} $$

Space-variant modelling:
  • 1 ≤ p_i ≤ 2: convex, space-variant regularisation (Blomgren, Chan, Mulet, Wong, '97, Chen, Levine, Rao, '06);
  • p_i ∈ (0, 2]: non-convex space-variant regularisation (Lanza, Morigi, Pragliola, Sgallari, '16, '18).

SLIDE 22

Space-variant and directional regularisation models

Discrete formulation. Let Ω be the image domain with |Ω| = n.
$$ \mathrm{DTV}_p(u) = \sum_{i=1}^{n} \big\|M_{\lambda_i, \theta_i}(\nabla u)_i\big\|_2^{p}, \qquad \theta_i \in [0, 2\pi), $$
where M_{λ_i,θ_i} = Λ_i R_{θ_i} as before.

Directional modelling:
  • p = 2: anisotropic diffusion (Weickert, '98);
  • p = 1: Directional Total (Generalised) Variation for a dominant direction θ_i ≡ θ̄ (Kongskov, Dong, Knudsen, '17) and inverse problems (Tovey, Benning et al., '19).

Combine (possibly non-convex) space-variance AND directional modelling?

SLIDE 23

A flexible directional & space-variant regularisation

SLIDE 24

A flexible directional & space-variant regularisation

Statistical motivation


SLIDE 26

Bivariate Generalised Gaussian Distribution (BGGD) prior

Idea: model locally the joint distribution of (∇u)_i in a more flexible way:
$$ P\big((\nabla u)_i; p_i, \Sigma_i\big) = \frac{1}{2\pi\,|\Sigma_i|^{1/2}}\;\frac{p_i}{\Gamma(2/p_i)\,2^{2/p_i}}\, \exp\Big(-\tfrac{1}{2}\big((\nabla u)_i^{T}\,\Sigma_i^{-1}\,(\nabla u)_i\big)^{p_i/2}\Big), $$
where:
  • Γ is the Gamma function;
  • Σ_i are gradient covariance matrices.
(The one-dimensional GGD with shape parameter β = 2/p is the scalar analogue.)

Gaussian case

p_i ≡ 2: standard bivariate Gaussian distribution with pixel-wise covariance matrix Σ_i.

Image prior: P(u) = ∏_{i=1}^{n} P((∇u)_i; p_i, Σ_i).

Via the MAP estimate, derive the variational space-variant, directional regulariser...
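For reference, a direct NumPy/SciPy transcription of this density (a sketch; the function name and array conventions are mine). For p = 2 it reduces to the bivariate Gaussian, which is a quick sanity check.

```python
import numpy as np
from scipy.special import gamma

def bggd_pdf(g, p, Sigma):
    # g: (N, 2) array of gradient samples; returns P(g; p, Sigma) per sample
    Sigma_inv = np.linalg.inv(Sigma)
    q = np.einsum('ni,ij,nj->n', g, Sigma_inv, g)      # g^T Sigma^{-1} g
    norm = p / (2 * np.pi * np.sqrt(np.linalg.det(Sigma))
                * gamma(2 / p) * 2 ** (2 / p))
    # for p = 2: norm = 1 / (2 pi |Sigma|^{1/2}) and the exponent is -q/2 (Gaussian)
    return norm * np.exp(-0.5 * q ** (p / 2))
```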

SLIDE 27

A new space-variant, directional TV regulariser

By defining R_{θ_i} and Λ_i from Σ_i = R_{θ_i}ᵀ Λ_i² R_{θ_i}, we find:
$$ \mathrm{DTV}^{\mathrm{sv}}_{p}(u) := \sum_{i=1}^{n} \big\|\Lambda_i R_{\theta_i} (\nabla u)_i\big\|_2^{p_i}, \qquad p_i \in (0, 2], \;\; \theta_i \in [0, 2\pi). $$

DTVsv_p-L2 image restoration model (LC, Lanza, Pragliola, Sgallari, '18)

We aim to solve
$$ \min_u \Big\{ \mathrm{DTV}^{\mathrm{sv}}_{p}(u) + \frac{\mu}{2}\,\|Ku - f\|_2^2 \Big\}, \qquad \mu > 0, $$
for image reconstruction under Gaussian noise.

Highly flexible: more degrees of freedom to describe natural images!
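Once the parameter maps are available, evaluating the regulariser is a few lines; a minimal sketch (not the authors' implementation), assuming per-pixel arrays p, theta, lam1, lam2 shaped like u:

```python
import numpy as np

def dtv_sv_p(u, p, theta, lam1, lam2):
    # DTVsv_p(u) = sum_i || Lambda_i R_theta_i (grad u)_i ||_2^{p_i}
    gx, gy = np.gradient(u)
    d_z = np.cos(theta) * gx + np.sin(theta) * gy      # derivative along z
    d_zp = -np.sin(theta) * gx + np.cos(theta) * gy    # derivative along z^perp
    norms = np.sqrt((lam1 * d_z)**2 + (lam2 * d_zp)**2)
    return np.sum(norms ** p)
```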

SLIDE 28

A flexible directional & space-variant regularisation

Automated parameter estimation


SLIDE 30

ML approach for parameter estimation

For any pixel i, Σ_i is s.p.d.:
$$ \Sigma_i = \begin{pmatrix} \sigma_1 & \sigma_3 \\ \sigma_3 & \sigma_2 \end{pmatrix} \quad \text{with} \quad \sigma_1 > 0, \qquad |\Sigma_i| = \sigma_1 \sigma_2 - \sigma_3^2 > 0. $$
Four parameters per pixel: σ₁, σ₂, σ₃ and p.

Maximum likelihood² approach from a collection of N samples around pixel i: reformulation as a constrained problem using polar coordinates (ϱ, φ) in the σ₁–σ₃ plane for Σ_i:

$$ (p^*, \varphi^*, \varrho^*) \in \operatorname*{argmin}_{\bar{C}}\; F(p, \varphi, \varrho), $$
where F is the (scale-profiled) negative log-likelihood,
$$ F(p, \varphi, \varrho) := N \log\!\left( \frac{\pi\,\Gamma\!\big(\tfrac{2}{p} + 1\big)}{\sqrt{1 - \varrho^2}} \Big(\frac{p}{2N}\Big)^{2/p} \right) + \frac{2N}{p} + \frac{2N}{p}\,\log\!\left( \sum_{j=1}^{N} \Big( (1 + \varrho\cos\varphi)\,(\nabla u)_{j,1}^2 + (1 - \varrho\cos\varphi)\,(\nabla u)_{j,2}^2 - 2\varrho\sin\varphi\,(\nabla u)_{j,1}(\nabla u)_{j,2} \Big)^{p/2} \right), $$
and C̄ := {(p, φ, ϱ) : p ∈ [ε̄, p̄], φ ∈ [0, 2π], ϱ ∈ [0, 1 − ε]}, after pre-processing.

² see Sharifi, Leon-Garcia, '95, Song, '06, Pascal, Bombrun, Tourneret, Berthoumieu, '13
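As a sketch of this estimation step (mine, not the closed-form reformulation above, where the scale of Σ_i is profiled out analytically): one can minimise the BGGD negative log-likelihood directly over (p, φ, ϱ) plus an explicit log-scale, with box constraints mimicking C̄; L-BFGS-B is an illustrative solver choice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, G):
    # params: shape p, polar coordinates (phi, rho) and a log-scale for Sigma^{-1}
    p, phi, rho, log_s = params
    A = np.array([[1 + rho * np.cos(phi), -rho * np.sin(phi)],
                  [-rho * np.sin(phi), 1 - rho * np.cos(phi)]]) / np.exp(log_s)
    q = np.einsum('ni,ij,nj->n', G, A, G)          # g^T Sigma^{-1} g, with Sigma^{-1} = A
    logdet_Sigma = 2 * log_s - np.log(1 - rho**2)  # det A = (1 - rho^2) exp(-2 log_s)
    N = G.shape[0]
    return (N * (np.log(2 * np.pi) + 0.5 * logdet_Sigma + gammaln(2 / p)
                 + (2 / p) * np.log(2) - np.log(p))
            + 0.5 * np.sum(q ** (p / 2)))

def fit_bggd(G, eps=1e-3, p_max=2.0):
    # box-constrained ML over (p, phi, rho, log-scale), mimicking the set C-bar
    res = minimize(neg_loglik, x0=np.array([1.0, 0.0, 0.1, 0.0]), args=(G,),
                   method='L-BFGS-B',
                   bounds=[(eps, p_max), (0.0, 2 * np.pi), (0.0, 1 - eps), (-20.0, 20.0)])
    return res.x   # (p*, phi*, rho*, log_s*)
```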

SLIDE 31

ML parameter selection results

TEST: single BGGD with fixed parameters (p, φ, ϱ).

  • Unbiased estimator; empirical variance and RMSE go to 0 as N → +∞.

[Figure 2: relative bias, empirical variance and relative RMSE of the estimate p* as functions of the sample size N, compared against a synthetic BGGD. Analogous plots for φ* and ϱ*.]

SLIDE 32

ML parameter selection results

TEST: single BGGD with fixed parameters (p, φ, ϱ).

  • Unbiased estimator; empirical variance and RMSE go to 0 as N → +∞.

From (p*_i, φ*_i, ϱ*_i), get θ_i, the eigenvalues {e₁, e₂}_i and the eigenvectors {v₁, v₂}_i of Σ_i:
$$ e_1 = 1 + \varrho =: a^2, \qquad e_2 = 1 - \varrho =: b^2. $$

[Figure: anisotropy BGGD ellipses in the D_h–D_v plane, with semi-axes a, b along the eigenvectors v₁, v₂ and orientation θ.]

SLIDE 33

ML parameter selection results

  • Unbiased estimator; empirical variance and RMSE go to 0 as N → +∞.
  • The functional form of the gradient PDF adapts to local image structures.

[Figure: (a) image, (b) estimated PDF, (c) level lines in the D_h–D_v plane. Test for an edge pixel; neighbourhood size 11 × 11; estimated p* = 0.07, θ = −177.82°.]

SLIDE 34

ML parameter selection results

  • Unbiased estimator; empirical variance and RMSE go to 0 as N → +∞.
  • The functional form of the gradient PDF adapts to local image structures.

[Figure: (a) image, (b) estimated PDF, (c) level lines in the D_h–D_v plane. Test for a corner pixel; neighbourhood size 11 × 11; estimated p* = 0.07, θ = −72.49°.]

SLIDE 35

ML parameter selection results

  • Unbiased estimator; empirical variance and RMSE go to 0 as N → +∞.
  • The functional form of the gradient PDF adapts to local image structures.

[Figure: estimated parameter maps: (a) anisotropy, (b) p map, (c) e₁ map, (d) θ map.]


SLIDE 37

A flexible directional & space-variant regularisation

Image reconstruction


SLIDE 39

Well-posedness of the model

$$ \min_u\; \underbrace{\sum_{i=1}^{n} \big\|\Lambda_i R_{\theta_i} (\nabla u)_i\big\|_2^{p_i}}_{\text{non-convex}} \;+\; \underbrace{\frac{\mu}{2}\,\|Ku - f\|_2^2}_{\text{convex}}, \qquad \mu > 0, \;\; p_i \in (0, 2], \;\; \theta_i \in [0, 2\pi) $$

Proposition

The DTVsv_p-L2 functional is continuous, bounded from below by zero and coercive, hence it admits global minimisers. Uniqueness holds if p_i > 1 for all i.

Based on a general result on the sum of proper, l.s.c. and coercive functions (see, e.g., Ciak, PhD thesis, '15).



SLIDE 42

Optimisation via ADMM

$$ \min_u\; \sum_{i=1}^{n} \big\|\Lambda_i R_{\theta_i} t_i\big\|_2^{p_i} + \frac{\mu}{2}\,\|r\|_2^2, \qquad \mu > 0, \quad \text{with } t := Du, \;\; r := Ku - f. $$

Augmented Lagrangian:
$$ \mathcal{L}(u, r, t; \rho_r, \rho_t) := \sum_{i=1}^{n} \big\|\Lambda_i R_{\theta_i} t_i\big\|_2^{p_i} + \frac{\mu}{2}\,\|r\|_2^2 - \langle \rho_t,\, t - Du \rangle + \frac{\beta_t}{2}\,\|t - Du\|_2^2 - \langle \rho_r,\, r - (Ku - f) \rangle + \frac{\beta_r}{2}\,\|r - (Ku - f)\|_2^2, $$
with β_r, β_t > 0 and ρ_r ∈ Rⁿ, ρ_t ∈ R²ⁿ.

SLIDE 43

Optimisation via ADMM

Solve the saddle-point problem: find (u*, r*, t*; ρ*_r, ρ*_t) ∈ (Rⁿ × Rⁿ × R²ⁿ) × (Rⁿ × R²ⁿ) such that
$$ \mathcal{L}(u^*, r^*, t^*; \rho_r, \rho_t) \;\le\; \mathcal{L}(u^*, r^*, t^*; \rho_r^*, \rho_t^*) \;\le\; \mathcal{L}(u, r, t; \rho_r^*, \rho_t^*). $$


SLIDE 45

Optimisation via ADMM

ADMM iteration, for k ≥ 0:
$$ \begin{aligned} u^{(k+1)} &\leftarrow \operatorname*{argmin}_{u \in \mathbb{R}^n}\; \mathcal{L}(u, r^{(k)}, t^{(k)}; \rho_r^{(k)}, \rho_t^{(k)}) && \text{(linear system)} \\ r^{(k+1)} &\leftarrow \operatorname*{argmin}_{r \in \mathbb{R}^n}\; \mathcal{L}(u^{(k+1)}, r, t^{(k)}; \rho_r^{(k)}, \rho_t^{(k)}) && \text{(discrepancy)} \\ t^{(k+1)} &\leftarrow \operatorname*{argmin}_{t \in \mathbb{R}^{2n}}\; \mathcal{L}(u^{(k+1)}, r^{(k+1)}, t; \rho_r^{(k)}, \rho_t^{(k)}) && (*) \\ \rho_r^{(k+1)} &\leftarrow \rho_r^{(k)} - \beta_r \big( r^{(k+1)} - (Ku^{(k+1)} - f) \big) && \\ \rho_t^{(k+1)} &\leftarrow \rho_t^{(k)} - \beta_t \big( t^{(k+1)} - Du^{(k+1)} \big) && \end{aligned} $$

(*) Non-convex proximal step!

Proposition

Problem (*) is well-posed. The computation of the non-convex proximal map can be reduced to a one-dimensional constrained optimisation problem.

SLIDE 46

Pseudo-code

Algorithm 1: ADMM scheme for DTVsv_p-L2

inputs: observed image f ∈ Rⁿ, noise level σ > 0
parameters: discrepancy parameter τ ≃ 1, ADMM parameters β_r, β_t > 0
output: reconstruction u* ∈ Rⁿ

1. Initialisation:
2.   estimate the model parameters p_i, R_{θ_i}, Λ_i, i = 1, ..., n, by the ML approach
3.   set δ := τσ√n, u^(0) = f, r^(0) = Ku^(0) − f, t^(0) = Du^(0), ρ_r^(0) = ρ_t^(0) = 0, k = 0
4. while not converged do (ADMM)
5.   update the primal variables u, r, t
6.   update the dual variables ρ_r, ρ_t
7.   k = k + 1
8. end while
9. u* = u^(k+1)

Parameter choice:
  • The regularisation parameter µ is chosen by the discrepancy principle ‖Ku − f‖₂ ≤ δ.
  • β_r, β_t are set manually.

Empirical convergence is observed, even in this non-convex regime. Proof?
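A compact Python skeleton of this scheme (a sketch, not the authors' code): for readability it drops the directional weights Λ_i R_{θ_i}, i.e. it implements the TVsv_p special case, and solves the radial non-convex prox by a crude per-pixel grid search where the paper reduces it to a one-dimensional problem. K, Kt, D, Dt are assumed to be callables implementing the operators and their adjoints, and p is the per-pixel exponent map (a NumPy array).

```python
import numpy as np

def solve_cg(A, b, x0, iters=30):
    # plain conjugate gradients for the SPD system A(x) = b, with A a callable
    x, r = x0.copy(), b - A(x0)
    d, rs = r.copy(), np.vdot(b - A(x0), b - A(x0))
    for _ in range(iters):
        Ad = A(d)
        alpha = rs / np.vdot(d, Ad)
        x, r = x + alpha * d, r - alpha * Ad
        rs_new = np.vdot(r, r)
        d, rs = r + (rs_new / rs) * d, rs_new
    return x

def prox_radial(vn, p, beta, ngrid=64):
    # crude grid search for argmin_{xi in [0, vn]} xi^p + (beta/2)(xi - vn)^2
    s = np.linspace(0.0, 1.0, ngrid)
    xi = vn[..., None] * s
    obj = xi ** p[..., None] + 0.5 * beta * (xi - vn[..., None]) ** 2
    best = np.argmin(obj, axis=-1)
    return np.take_along_axis(xi, best[..., None], axis=-1)[..., 0]

def admm_tvsv_p(f, K, Kt, D, Dt, p, mu, beta_r=1.0, beta_t=1.0, iters=100):
    u = f.copy()
    r = K(u) - f
    t = D(u)                       # stacked gradient, shape (..., 2)
    rho_r, rho_t = np.zeros_like(r), np.zeros_like(t)
    for _ in range(iters):
        # u-step: (beta_t D^T D + beta_r K^T K) u = D^T(beta_t t - rho_t) + K^T(beta_r (r + f) - rho_r)
        rhs = Dt(beta_t * t - rho_t) + Kt(beta_r * (r + f) - rho_r)
        u = solve_cg(lambda x: Dt(beta_t * D(x)) + Kt(beta_r * K(x)), rhs, u)
        # r-step: closed-form quadratic minimisation
        r = (rho_r + beta_r * (K(u) - f)) / (mu + beta_r)
        # t-step: per-pixel non-convex prox of ||.||_2^{p_i} at v = Du + rho_t / beta_t,
        # whose minimiser is radial (colinear with v)
        v = D(u) + rho_t / beta_t
        vn = np.linalg.norm(v, axis=-1)
        xi = prox_radial(vn, p, beta_t)
        t = v * (xi / np.maximum(vn, 1e-12))[..., None]
        # dual updates, as on the slide
        rho_r = rho_r - beta_r * (r - (K(u) - f))
        rho_t = rho_t - beta_t * (t - D(u))
    return u
```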

SLIDE 47

Numerical results: Barbara image

TEST: Barbara image with increasing degradation.


SLIDE 48

Numerical results: Barbara image

TEST: Barbara image with increasing degradation.

[Figure 3: observed image f and estimated p, e₁ and θ maps. The image is corrupted by AWGN and Gaussian blur (band = 9, σ = 2) with BSNR = 10 dB; zoom size 471 × 361.]

  • Parameter maps estimated on a 7 × 7 neighbourhood.
  • In the case of large noise: pre-processing by a few (5) iterations of TV.

SLIDE 49

Numerical results: Barbara image

TEST: Barbara image with increasing degradation.

[Figure: reconstructions by TV-L2, TVp-L2 (p = 0.92), TVsv_p-L2 and DTVsv_p-L2.]

Better preservation of texture and details!

SLIDE 50

Numerical results: Barbara image

TEST: Barbara image with increasing degradation.

ISNR values for decreasing BSNR:

BSNR   TV-L2   TVp-L2   TVsv_p-L2   DTVsv_p-L2
 20     2.46    3.14      3.23        3.61
 15     1.74    1.99      2.14        2.79
 10     1.59    2.02      2.13        2.90

$$ \mathrm{BSNR}(u^*, u) := 10 \log_{10} \frac{\|Ku - \overline{Ku}\|_2^2}{\|u^* - Ku\|_2^2}, \qquad \mathrm{ISNR}(f, u, u^*) := 10 \log_{10} \frac{\|f - u\|_2^2}{\|u^* - u\|_2^2} $$
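Both metrics are straightforward to compute; a short sketch (assuming the overbar denotes the mean of Ku, and reading the first argument of BSNR as the blurred, noisy observation, so the denominator is the noise energy):

```python
import numpy as np

def isnr(f, u, u_star):
    # ISNR(f, u, u*) = 10 log10( ||f - u||^2 / ||u* - u||^2 )
    return 10 * np.log10(np.sum((f - u)**2) / np.sum((u_star - u)**2))

def bsnr(obs, u, K):
    # BSNR = 10 log10( ||Ku - mean(Ku)||^2 / ||obs - Ku||^2 ),
    # with obs the blurred and noisy observation and K a callable blur operator
    Ku = K(u)
    return 10 * np.log10(np.sum((Ku - Ku.mean())**2) / np.sum((obs - Ku)**2))
```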

SLIDE 51

Numerical results: Barbara image

TEST: Barbara image with increasing degradation.

SSIM values for decreasing BSNR:

BSNR   TV-L2   TVp-L2   TVsv_{p,α}-L2   DTVsv_p-L2
 20     0.80    0.83       0.83           0.85
 15     0.74    0.75       0.77           0.80
 10     0.65    0.68       0.69           0.74

SLIDE 52

Numerical results: texture image

TEST: texture image with increasing degradation.


SLIDE 53

Numerical results: texture image

TEST: texture image with increasing degradation.

[Figure: observed image f and estimated p, e₁ and θ maps. The image is corrupted by AWGN and Gaussian blur (band = 9, σ = 2) with BSNR = 10 dB; zoom size 500 × 500.]

  • Parameter maps estimated on a 3 × 3 neighbourhood.
  • In the case of large noise: pre-processing by a few (5) iterations of TV.

SLIDE 54

Numerical results: texture image

TEST: texture image with increasing degradation.

[Figure: reconstructions by TV-L2, TVp-L2 (p = 0.71), TVsv_p-L2 and DTVsv_p-L2.]

SLIDE 55

Numerical results: texture image

TEST: texture image with increasing degradation.

ISNR values for decreasing BSNR (BSNR and ISNR as defined above):

BSNR   TV-L2   TVp-L2   TVsv_p-L2   DTVsv_p-L2
 20     2.07    2.43      2.53        2.78
 15     1.83    2.06      2.26        2.56
 10     0.94    1.55      1.86        2.45

SLIDE 56

Numerical results: texture image

TEST: texture image with increasing degradation.

SSIM values for decreasing BSNR:

BSNR   TV-L2   TVp-L2   TVsv_{p,α}-L2   DTVsv_p-L2
 20     0.78    0.79       0.80           0.81
 15     0.76    0.77       0.78           0.79
 10     0.70    0.72       0.74           0.76

SLIDE 57

Conclusions

SLIDE 58

Conclusions

Take-home messages:
  • BGGD for a flexible description of natural image statistics.
  • Variational space-variant, directional regularisation adapting to local image structures (upon ML parameter estimation).
  • Efficient ADMM optimisation (non-convex proximal step). Results show improved texture reconstruction.

Outlook:
  • Applications to inverse problems with a different measurement/image space (e.g. tomography → Rob's talk)?
  • Optimisation: theoretical guarantees for non-convex ADMM? Other algorithms?

  • L. Calatroni, A. Lanza, M. Pragliola, F. Sgallari, Space-variant anisotropic regularisation and automated parameter selection for image restoration problems, SIAM Journal on Imaging Sciences, in press, 2019.

SLIDE 59

Thank you for your attention! Questions? luca.calatroni@polytechnique.edu
