Textured Image Deconvolution and Decomposition, by Duy Hoang Thai (PowerPoint presentation)



SLIDE 1

Textured Image Deconvolution and Decomposition

Duy Hoang Thai, joint work with David Banks

SLIDE 2

Inverse Problem in Imaging [1]:

[Diagram: inverse problem ↔ approximation theory (sparsity); image processing via optimization (PDE), Fourier/wavelet analysis, and statistics (stochastic processes).]

e.g. segmentation, denoising, deblurring, ...

• The underlying technique in image processing is the inverse problem, which links to approximation theory.

• There are three large fields in inverse problems: optimization, harmonic analysis, and statistics.

• "Sparsity" is a fundamental concept in approximation theory.

SLIDE 3

Ex: approximation of a function

    f ≈ b + f_seg + ε + v

where b is a bias field, f_seg is piecewise constant, ε is a residual, and v is texture.

[Figure: original image and its components (1)-(7).]

The purpose of preprocessing is to obtain good features, and approximation theory is at the heart of this step:
• Image segmentation focuses on the piecewise constant component (2).
• Texture analysis relates to directional texture (4)-(7).
• Image denoising relates to (1)-(7), except noise (3).

SLIDE 4

Non-texture vs. texture: there are three classes of images.

(a) Homogeneous (b) Texture (c) Homogeneous & Texture

SLIDE 5

Overview:

  • 1. Motivation
  • 2. Directional mean curvature for image deconvolution and decomposition (DMCDD)
  • 3. Function space & filter banks in sampling theory
  • 4. Conclusion

SLIDE 6
  • 1. Motivation for Image Reconstruction

Example: a forensic examiner tries to match different ballistic images (from the same gun) which are captured with a cheap microscope. The feature used for matching is texture information. ⇒ A cheap microscope produces noise and blur in the image.

Motivation:

• Deconvolution recovers the true underlying signal from a noisy and blurred image.

• Decomposition extracts good features for post-processing, e.g. pattern recognition.

SLIDE 7
  • 2. Directional mean curvature for image deconvolution and decomposition

(DMCDD)

Notation:

• A bounded domain: Ω = { k = [k₁, k₂] ∈ [0, m−1] × [0, n−1] } ⊂ Z²

• The Euclidean space (space of matrices): X = R^{|Ω|}

• A discrete image (size m × n): f[k] : Ω → R⁺, i.e. a matrix f ∈ X. Ex: the Barbara image (size 512 × 512).

SLIDE 8
  • 2. DMCDD

(a) original image f₀ (b) observed image f (c) reconstructed f_re, MSE = 0.057

Model:

    f = h ∗ (u + v + ρ) + ε,  where f₀ = u + v + ρ

• h: a blur operator (known)
• u: piecewise smooth component
• v: texture
• ρ: small-scale objects (residual)
• ε: noise
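As an aside, this forward model is easy to simulate; here is a minimal numpy sketch (circular convolution via the FFT stands in for h ∗ ·, and the toy components and PSF below are invented for illustration):

```python
import numpy as np

def observe(parts, kernel, noise_sigma=0.0, rng=None):
    """Simulate f = h * (u + v + rho) + eps with circular convolution.

    parts:  iterable of arrays (u, v, rho) summing to the true image f0.
    kernel: blur PSF h, same shape as the image, indexed from the origin.
    """
    f0 = sum(parts)
    # circular convolution via the FFT: h * f0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(f0)))
    eps = 0.0 if rng is None else rng.normal(0.0, noise_sigma, f0.shape)
    return blurred + eps

# toy 8x8 example: piecewise-constant u, oscillatory texture v, empty residual
m = n = 8
u = np.zeros((m, n)); u[:, n // 2:] = 1.0                    # step edge
v = 0.1 * np.cos(2 * np.pi * np.arange(n) / 4) * np.ones((m, 1))
rho = np.zeros((m, n))
h = np.zeros((m, n)); h[0, 0] = 0.5; h[0, 1] = 0.25; h[0, -1] = 0.25  # sums to 1
f = observe([u, v, rho], h)                                   # noiseless, blurred
```

With a kernel that sums to 1, circular convolution preserves the image sum, which gives a quick sanity check on the simulation.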

SLIDE 9
  • 2. DMCDD

• Our strategy is to find the functions (u, v, ε, ρ) by minimizing an objective function.

• The measures for these functions are defined in function spaces.

⇒ This is a regularization problem in a Banach space (in the discrete setting)!

SLIDE 10
  • 2. The DMCDD model: general idea of the proposed DMCDD model

[Diagram of the proposed model:]
• Piecewise smooth u: directional mean curvature (a high-order PDE that reduces the staircase effect).
• Texture v: directional G-norm and spatial sparsity.
• Identity condition; bounded condition in the curvelet domain via the Legendre-Fenchel transform (Dantzig selector).

Note: this model can be explained by the Bayesian framework and Lévy processes.

SLIDE 11
  • 2. DMCDD: functional spaces [2, 6]

To minimize the cost function, we have constraints depending on different function spaces:

[Diagram: Besov space (space of wavelet coefficients), bounded variation, G-space (dual of a Besov space), Hilbert space.]

• The dual pairs are (Ḃ^1_{1,1}, Ḃ^{-1}_{∞,∞}) and (BV, G).

• Piecewise smooth u is usually measured in BV or Ḃ^1_{1,1}.

• Oscillatory components, e.g. texture v or noise ε, are measured in L², G, or Ḃ^{-1}_{∞,∞}.

Example: the TV-L² model (Rudin-Osher-Fatemi) for f = u + ε, ε ~ N(0, σ²):

    min_{u ∈ BV} { ‖∇u‖_{L¹} + (μ/2) ‖f − u‖²_{L²} }

  • The ROF model produces the "staircase effect". We need a higher-order approach!
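To make the energy above concrete, here is a minimal 1-D numpy sketch (the TV term is smoothed with a small eps so plain gradient descent applies; the parameter values are illustrative, and this is not the solver used in these slides):

```python
import numpy as np

def rof_denoise(f, mu=10.0, eps=0.05, step=0.01, iters=2000):
    """Gradient descent on a smoothed ROF energy for a 1-D periodic signal:
        E(u) = sum_k sqrt((Du)_k^2 + eps^2) + (mu/2) * ||f - u||_2^2,
    where (Du)_k = u_{k+1} - u_k is the forward difference."""
    u = f.copy()
    for _ in range(iters):
        du = np.roll(u, -1) - u                  # forward difference Du
        w = du / np.sqrt(du ** 2 + eps ** 2)     # derivative of the smoothed |.|
        grad_tv = -(w - np.roll(w, 1))           # chain rule: minus divergence of w
        u -= step * (grad_tv - mu * (f - u))     # descend on E
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 30)           # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = rof_denoise(noisy)
```

On a piecewise-constant signal the result is denoised, but the recovered plateaus tend to show the spurious steps that motivate the higher-order model.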

SLIDE 12
  • 2. DMCDD: space for piecewise smooth u [7]

Piecewise smooth u is measured by directional mean curvature (DMC)-norm.

Innovation model (Unser-Tafti 2001).

Assumption: the original signal is not sparse, but it is sparse in some transform domain.
• Multiple directions produce a smoother signal than the 2 directions in MC.
• DMC yields a higher-order PDE than BV and reduces the staircase effect.

SLIDE 13
  • 2. DMCDD: space for texture v

Texture isn't well represented in the Fourier domain. In function space, we use the discrete directional G_S-norm (approximation version) [2],

    ‖v‖_{G_S} = inf { Σ_{s=0}^{S−1} ‖g_s‖_{ℓ∞} : v = div_S g⃗,  g⃗ = [g_s]_{s=0}^{S−1} ∈ X^S },

to minimize the cost function.

SLIDE 14
  • 2. DMCDD: space for residual ⇢ & noise ✏

To handle residual and noise, we bound the ℓ∞-norm in curvelet space by a constant ν:

    ‖C{ε}‖_{ℓ∞} ≤ ν,  ε ∈ X.

This is because the oscillatory components do not have small norms in L²(Ω) or L¹(Ω). The constraint ‖C{·}‖_{ℓ∞} ≤ ν is similar to the Dantzig selector [5].

[Figure: (d) wavelet, with the dual pair (Ḃ^1_{1,1}, Ḃ^{-1}_{∞,∞}); (e) curvelet, with the dual pair (‖C{·}‖_{ℓ1}, ‖C{·}‖_{ℓ∞}).]

Signal representation in curvelets is sparser than in wavelets, i.e. the signal is smoother in the spatial domain, because the subbands are more refined.
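Such a bounded condition amounts to a projection onto an ℓ∞-ball in a transform domain. A minimal sketch, with the orthonormal 2-D FFT standing in for the curvelet transform C (a curvelet implementation is not assumed here):

```python
import numpy as np

def project_curvelet_linf(x, nu):
    """Project x onto { e : ||C{e}||_inf <= nu }, with the orthonormal 2-D FFT
    standing in for the curvelet transform C (illustrative substitution)."""
    c = np.fft.fft2(x, norm="ortho")
    mag = np.abs(c)
    scale = np.minimum(1.0, nu / np.maximum(mag, 1e-12))  # clip large coefficients
    return np.real(np.fft.ifft2(c * scale, norm="ortho"))

rng = np.random.default_rng(1)
noise = rng.standard_normal((16, 16))
nu = 0.5
noise_proj = project_curvelet_linf(noise, nu)
```

Because the transform is invertible, re-transforming the projection shows every coefficient clipped to magnitude at most ν.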

SLIDE 15
  • 2. DMCDD:

[Diagram: observed image f = h ∗ (piecewise smooth u + texture v + residual ρ) + noise ε, with the texture component measured in the directional G-norm and the residual and noise handled by bounded conditions in curvelet space.]

To estimate the functions (u, v, ρ, ε), we solve the DMCDD model

    min_{(u,v,ρ,ε) ∈ X⁴}  ‖d_L(u)‖_{ℓ1} + μ₁ ‖v‖_{G_S} + μ₂ ‖v‖_{ℓ1}
    s.t.  f = h ∗ (u + v + ρ) + ε,  ‖C{ρ}‖_{ℓ∞} ≤ ν_ρ,  ‖C{ε}‖_{ℓ∞} ≤ ν_ε,

where d_L(u) = div_L( [∇⁺_L u, 1] / |[∇⁺_L u, 1]| ) is the directional mean curvature, by the augmented Lagrangian method, the alternating direction method of multipliers, and iterative shrinkage/thresholding algorithms [4].

SLIDE 16
  • 2. Numerical result: Texture is well recovered on the Barbara image.

(a) Original image f₀ (b) Blurred image f = f₀ ∗ h (c) f_re = u + v + ρ, MSE = 0.016 (d) u (e) v (very sparse) (f) ρ (g) ε = 0

SLIDE 17
  • 2. Numerical result: Barbara image

The previous slide shows the image reconstructed from the original Barbara image. Note:

• Since there is no noise in the observed image (b), the noise in (g) is set to 0 because of the bounded norm in curvelet space, ‖C{ε}‖_{ℓ∞} ≤ ν_ε, which is different from regularization in the ℓ²-norm.

• The texture is very sparse, with 29.59% nonzero coefficients.

SLIDE 18
  • 3. Approximation scheme: from sampling theory to function space

[Figure: (a) the system in function space (lowpass and N-th highpass channels with downsampling by 2); (b) the wavelet counterpart.]

Approximation by function space:

• The system contains a shrinkage operator and "frame functions" H(z), Φ(z), Ψ̃_l(z), Ψ_l(z), Ξ(z), Θ̃_l(z), Θ_l(z), which are defined in the Fourier domain and satisfy the unity condition.

• These "frame functions" are combined in this system as in sampling theory in a Banach space (with the ℓ¹-norm).

• These "frame functions" are similar to wavelet-like operators [3].

SLIDE 19
  • 3. Filter banks in the DMCDD model:

Figure: Ψ^{L,c}(z) = Σ_{l=0}^{L−1} Ψ̃^{L,c}_l(z) Ψ^L_l(z).

SLIDE 20
  • 3. Comparison between multi-directional mean curvature and MC

SLIDE 21
  • 3. Comparison between multi-directional mean curvature and MC

Directional mean curvature is better because:

• When the number of directions is increased, the bandwidth of the lowpass shrinks. This ensures there is no texture or residual left in the piecewise smooth component u.

• The shrinkage operator removes small coefficients in the highpass by a constant depending on the Lagrange multiplier, i.e. the reconstructed image is smoother.

Solution of the piecewise smooth component u (at iteration τ): [diagram linking the lowpass (an interpolant in sampling theory), the frame functions and their duals (~ biorthogonal wavelets), the shrinkage operator (~ sampling theory with the L¹ norm), and a constant depending on the Lagrange multiplier, due to the high-order approach.]
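The shrinkage operator mentioned above has a standard one-line closed form (soft-thresholding); a minimal sketch:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the elementwise minimizer of t*|d| + (1/2)(d - x)^2.
    Coefficients with |x| <= t are removed; the rest are pulled toward 0 by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
shrink(x, 1.0)   # -> [-1, 0, 0, 0, 0.5] elementwise
```

It is the proximal operator of t·|·|, which is how ℓ¹ terms are typically handled componentwise inside splitting methods like the one here.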

SLIDE 22
  • 4. Conclusion
  • 1. This research proposes a new method for image recovery. The method uses constrained minimization in appropriate spaces to achieve good texture recovery, a small mean square error, and no staircase effect, by extending two-directional mean curvature to a multidirectional one (a high-order PDE approach).
  • 2. We also establish the link between sampling theory and function space, which is similar to wavelet-like operators.

This research is based on two projects at SAMSI:
• D.H. Thai and D. Banks, Directional Mean Curvature for Textured Image Deconvolution and Decomposition (ongoing).
• D.H. Thai and L. Mentch, Multiphase Segmentation for Simultaneously Homogeneous and Textural Images (in submission).

SLIDE 23

Why do we need some complicated models?

Because we'd like to solve some complicated problems, e.g. extracting a fingerprint pattern from a noisy background.

SLIDE 24

References

[1] O. Scherzer, editor. Handbook of Mathematical Methods in Imaging. Springer, New York, NY, USA, 2011.

[2] Y. Meyer. Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures. American Mathematical Society, Boston, MA, USA, 2001.

[3] I. Khalidov and M. Unser. From Differential Equations to the Construction of New Wavelet-Like Bases. IEEE Transactions on Signal Processing, 54(4):1256-1267, 2006.

[4] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.

[5] E. Candès and T. Tao. The Dantzig Selector: Statistical Estimation When p Is Much Larger than n. The Annals of Statistics, 35(6):2313-2351, 2007.

[6] J.-F. Aujol and A. Chambolle. Dual Norms and Image Decomposition Models. International Journal of Computer Vision, 63(1):85-104, 2005.

[7] W. Zhu, X.C. Tai, and T. Chan. Augmented Lagrangian method for a mean curvature based image denoising model. Inverse Problems and Imaging, 7(4):1409-1432, 2013.

[8] D.H. Thai and C. Gottschlich. Directional Global Three-part Image Decomposition (accepted).

[9] D.H. Thai and C. Gottschlich. Global variational method for fingerprint segmentation by three-part decomposition (accepted).

SLIDE 25

“A picture is worth a thousand words” & Thank you!

(a) f, δ = 25 (b) v_bin (c) u (d) v (e) ε

SLIDE 26
  • 2. DMCDD (III):

Let

    |r⃗| − ⟨y⃗, r⃗⟩_{ℓ2} = 0,  r⃗ = [∇⁺_L u, 1],  d = div_L t⃗,  t⃗ = y⃗,  w⃗ = g⃗.

The augmented Lagrangian method is

    min_{(u,v,ρ,ε,d,r⃗,t⃗,y⃗,w⃗,g⃗) ∈ X^{5+3(L+1)+2S}}  L(· ; λ₁, λ⃗₂, λ₃, λ⃗₄, λ₅, λ⃗₆, λ₇)    (1)

with the Lagrange function

    L(· ; ·) = ‖d‖_{ℓ1} + μ₁ Σ_{s=0}^{S−1} ‖w_s‖_{ℓ∞} + μ₂ ‖v‖_{ℓ1} + G*(ρ/ν_ρ) + G*(ε/ν_ε) + R*(y⃗)
      + ⟨β₁ + λ₁, |r⃗| − ⟨y⃗, r⃗⟩_{ℓ2}⟩_{ℓ2}
      + (β₂/2) Σ_{l=0}^{L−1} ‖r_l − ∂⁺_l u + λ_{2l}/β₂‖²_{ℓ2} + (β₂/2) ‖r_L − 1 + λ_{2L}/β₂‖²_{ℓ2}
      + (β₃/2) ‖d − div_L t⃗ + λ₃/β₃‖²_{ℓ2}
      + (β₄/2) ‖t⃗ − y⃗ + λ⃗₄/β₄‖²_{ℓ2}
      + (β₅/2) ‖f − h ∗ (u + v + ρ) − ε + λ₅/β₅‖²_{ℓ2}
      + (β₆/2) Σ_{s=0}^{S−1} ‖w_s − g_s + λ_{6s}/β₆‖²_{ℓ2}
      + (β₇/2) ‖v − div_S g⃗ + λ₇/β₇‖²_{ℓ2}

SLIDE 27
  • 2. DMCDD (IV):

The alternating direction method of multipliers for (1) over iterations τ = 1, 2, ... is

    (u^(τ), v^(τ), ρ^(τ), ε^(τ), d^(τ), r⃗^(τ), t⃗^(τ), y⃗^(τ), w⃗^(τ), g⃗^(τ))
      = argmin L(· ; λ₁^(τ−1), λ⃗₂^(τ−1), λ₃^(τ−1), λ⃗₄^(τ−1), λ₅^(τ−1), λ⃗₆^(τ−1), λ₇^(τ−1))

  • 1. Solve 10 sub-problems:
  • The "u-problem": fix v, ρ, ε, d, r⃗, t⃗, y⃗, w⃗, g⃗ and u^(τ) = argmin L(u ; ·)
  • The "v-problem": fix u, ρ, ε, d, r⃗, t⃗, y⃗, w⃗, g⃗ and v^(τ) = argmin L(v ; ·)
  • The "d-problem": fix u, v, ρ, ε, r⃗, t⃗, y⃗, w⃗, g⃗ and d^(τ) = argmin L(d ; ·)
  • The "r⃗-problem": fix u, v, ρ, ε, d, t⃗, y⃗, w⃗, g⃗ and r⃗^(τ) = argmin L(r⃗ ; ·)
  • The "t⃗-problem": fix u, v, ρ, ε, d, r⃗, y⃗, w⃗, g⃗ and t⃗^(τ) = argmin L(t⃗ ; ·)
  • The "y⃗-problem": fix u, v, ρ, ε, d, r⃗, t⃗, w⃗, g⃗ and y⃗^(τ) = argmin L(y⃗ ; ·)
  • The "w⃗-problem": fix u, v, ρ, ε, d, r⃗, t⃗, y⃗, g⃗ and w⃗^(τ) = argmin L(w⃗ ; ·)
  • The "g⃗-problem": fix u, v, ρ, ε, d, r⃗, t⃗, y⃗, w⃗ and g⃗^(τ) = argmin L(g⃗ ; ·)
  • The "ρ-problem": fix u, v, ε, d, r⃗, t⃗, y⃗, w⃗, g⃗ and ρ^(τ) = argmin L(ρ ; ·)
  • The "ε-problem": fix u, v, ρ, d, r⃗, t⃗, y⃗, w⃗, g⃗ and ε^(τ) = argmin L(ε ; ·)
  • 2. Update the Lagrange multipliers λ₁, λ⃗₂, λ₃, λ⃗₄, λ₅, λ⃗₆, λ₇.
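The same alternating pattern can be seen end-to-end on a much smaller problem. This sketch (not the authors' solver) applies a shrinkage d-step, a Fourier-domain u-step, and a multiplier update to 1-D TV denoising, min_u ‖Du‖₁ + (μ/2)‖f − u‖², with the single split d = Du; all parameter values are illustrative:

```python
import numpy as np

def admm_tv(f, mu=20.0, beta=5.0, iters=300):
    """ADMM sketch for min_u ||Du||_1 + (mu/2)||f - u||^2 with the split d = Du,
    where D is the periodic forward difference (a circulant operator)."""
    n = f.size
    Dhat = np.exp(2j * np.pi * np.arange(n) / n) - 1.0        # eigenvalues of D
    u, d, lam = f.copy(), np.zeros(n), np.zeros(n)
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(iters):
        Du = np.roll(u, -1) - u
        d = shrink(Du - lam / beta, 1.0 / beta)               # d-problem: shrinkage
        b = np.fft.fft(d + lam / beta)
        # u-problem: (mu*I + beta*D^T D) u = mu*f + beta*D^T (d + lam/beta)
        u = np.real(np.fft.ifft((mu * np.fft.fft(f) + beta * np.conj(Dhat) * b)
                                / (mu + beta * np.abs(Dhat) ** 2)))
        lam = lam + beta * (d - (np.roll(u, -1) - u))         # multiplier update
    return u

f = np.repeat([0.0, 1.0], 25) + 0.05 * np.random.default_rng(2).standard_normal(50)
u = admm_tv(f)
```

The u-step is closed-form here because D is circulant, mirroring how the directional operators above diagonalize in the Fourier domain.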

SLIDE 28
  • 3. Variational Analysis & Filter Banks: wavelet-like operator [3]


Frame functions at directions l = 0, ..., L−1, with D_l(z) = cos(πl/L)(z₂ − 1) + sin(πl/L)(z₁ − 1) and D_l(z⁻¹) = cos(πl/L)(z₂⁻¹ − 1) + sin(πl/L)(z₁⁻¹ − 1):

    Φ^{L,c25}(z) = 1 / ( c25 Σ_{l'=0}^{L−1} |D_{l'}(z)|² + |H(z)|² )   →F⁻¹→  φ^{L,c25}[k]

    Ψ̃^{L,c25}_l(z) = c25 D_l(z⁻¹) Φ^{L,c25}(z)   →F⁻¹→  ψ̃^{L,c25}_l[k] = −c25 ∂⁻_l φ^{L,c25}[k]

    Ψ^L_l(z) = D_l(z)   →F⁻¹→  ψ^L_l[k] = ∂⁺_l δ[k]

    Ξ^{L,c34}_l(z) = 1 / (1 + c34 |D_l(z)|²)   →F⁻¹→  ξ^{L,c34}_l[k] = [1 − c34 ∂⁻_l ∂⁺_l]⁻¹ δ[k]

    Θ^{L,c34}_l(z) = c34 D_l(z) / (1 + c34 |D_l(z)|²)   →F⁻¹→  θ^{L,c34}_l[k] = c34 ∂⁺_l ξ^{L,c34}_l[k]

    Θ̃^L_l(z) = D_l(z⁻¹)   →F⁻¹→  θ̃^L_l[k] = −∂⁻_l δ[k]

The unity conditions:

    |H(z)|² Φ^{L,c25}(z) + Σ_{l=0}^{L−1} Ψ̃^{L,c25}_l(z) Ψ^L_l(z) = 1,
    with Φ^{L,c25}(e^{j0}) = 1/|H(e^{j0})|² and Ψ̃^{L,c25}_l(e^{j0}) = Ψ^L_l(e^{j0}) = 0,

and

    Ξ^{L,c34}_a(z) + Θ^{L,c34}_a(z) Θ̃^L_a(z) = 1,
    with Ξ^{L,c34}_a(e^{j0}) = 1 and Θ^{L,c34}_a(e^{j0}) = Θ̃^L_a(e^{j0}) = 0.
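The first unity condition can be checked numerically. A small numpy sketch that builds these frame functions on a frequency grid (the lowpass H below is an arbitrary toy choice, since the identity holds for any H):

```python
import numpy as np

# frequency grid; H is a toy separable lowpass (any H works for the identity)
L, c = 4, 1.0
w1, w2 = np.meshgrid(np.linspace(-np.pi, np.pi, 64), np.linspace(-np.pi, np.pi, 64))
z1, z2 = np.exp(1j * w1), np.exp(1j * w2)
H = (1 + z1 ** -1) * (1 + z2 ** -1) / 4

# directional difference responses D_l(z) = cos(pi*l/L)(z2 - 1) + sin(pi*l/L)(z1 - 1)
D = [np.cos(np.pi * l / L) * (z2 - 1) + np.sin(np.pi * l / L) * (z1 - 1)
     for l in range(L)]

Phi = 1.0 / (c * sum(np.abs(Dl) ** 2 for Dl in D) + np.abs(H) ** 2)  # lowpass frame fn
Psi_tilde = [c * np.conj(Dl) * Phi for Dl in D]                      # dual highpass
Psi = D                                                              # highpass

unity = np.abs(H) ** 2 * Phi + sum(pt * p for pt, p in zip(Psi_tilde, Psi))
# unity equals 1 at every frequency, up to rounding
```

The identity holds pointwise because the numerator of each summand is exactly one term of Φ's denominator.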

SLIDE 29
  • 4. Multiscale Filter Banks:

Given h(·) = δ(·), a multiscale projection of f ∈ X onto the spaces V^(int) ⊥ W^([il]) is

    f[k] = P_V(f)[k] + Σ_{i=0}^{I−1} Σ_{l=0}^{L−1} P_{W_{il}}(f)[k]
         = Σ_{n∈Z²} f[n] φ[k − n] + Σ_{i=0}^{I−1} Σ_{l=0}^{L−1} Σ_{t∈Z²} ⟨f, ψ_{il}[· − t]⟩_{ℓ2} ψ̃_{il}[k − t]

    →F→  F(z) = F(z) Φ(z) + Σ_{i=0}^{I−1} Σ_{l=0}^{L−1} F(z) Ψ*_{il}(z) Ψ̃_{il}(z)

• The frame elements of V and W in the Fourier (z) domain:

    Φ(z) = (1/I) Σ_{i=0}^{I−1} Φ^{int}(z^{a^i}),  Ψ*_{il}(z) = (1/I) Ψ_l(z^{a^i}),  Ψ̃_{il}(z) = Ψ̃_l(z^{a^i}).

• The unity condition: Φ(z) + Σ_{i=0}^{I−1} Σ_{l=0}^{L−1} Ψ*_{il}(z) Ψ̃_{il}(z) = 1.

• The wavelet coefficients: d_{il}[t] = ⟨f, ψ_{il}[· − t]⟩_{ℓ2}  →F→  D_{il}(z) = F(z) Ψ*_{il}(z).

SLIDE 30

Spectrum of the filter banks at different scales. Figure: visualization of the discrete multiscale filter banks of DMCDD.

SLIDE 31
  • 2. Directional mean curvature for image deconvolution and decomposition

(DMCDD) Given: a discrete image (size m × n) f[k] : Ω → R⁺ and the Euclidean space X = R^{|Ω|} with Ω = { k = [k₁, k₂] ∈ [0, m−1] × [0, n−1] } ⊂ Z².

Directional forward/backward difference operators, with (∂±_l)* = −∂∓_l:

    ∂±_l f = cos(πl/L) ∂±_x f + sin(πl/L) ∂±_y f
    →F→  ±[cos(πl/L)(z₂^{±1} − 1) + sin(πl/L)(z₁^{±1} − 1)] F(z)

Directional gradient and divergence operators, with (∇±_L)* = −div∓_L:

    ∇±_L f = [∂±_l f]_{l=0}^{L−1}   and   div±_L g⃗ = Σ_{l=0}^{L−1} ∂±_l g_l.

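These operators are simple to implement with periodic boundaries. A minimal numpy sketch (the assignment of array axes to the x and y coordinates is an assumption) that also checks the adjoint relation (∇⁺_L)* = −div⁻_L:

```python
import numpy as np

def d_plus(f, l, L):
    """Forward directional difference, periodic boundary."""
    return (np.cos(np.pi * l / L) * (np.roll(f, -1, axis=0) - f)
            + np.sin(np.pi * l / L) * (np.roll(f, -1, axis=1) - f))

def d_minus(f, l, L):
    """Backward directional difference, periodic boundary."""
    return (np.cos(np.pi * l / L) * (f - np.roll(f, 1, axis=0))
            + np.sin(np.pi * l / L) * (f - np.roll(f, 1, axis=1)))

def grad_L(f, L):
    """Directional gradient: the stack [d_plus(f, l)] for l = 0..L-1."""
    return [d_plus(f, l, L) for l in range(L)]

def div_L(g, L):
    """Directional divergence: sum of backward differences of the components."""
    return sum(d_minus(g[l], l, L) for l in range(L))

# adjoint check: <grad_L f, g> == <f, -div_L g>
rng = np.random.default_rng(3)
L = 4
f = rng.standard_normal((8, 8))
g = [rng.standard_normal((8, 8)) for _ in range(L)]
lhs = sum(np.sum(gl * dl) for gl, dl in zip(g, grad_L(f, L)))
rhs = np.sum(f * (-div_L(g, L)))
```

Pairing the forward-difference gradient with the backward-difference divergence is what makes the discrete adjoint identity hold exactly.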