(MULTIPLE + NOISE) REMOVAL BLIND DECONVOLUTION CONCLUSIONS

Signal analysis using sparse representation and proximal optimization methods

Mai Quyen PHAM

GIPSA-Lab, 11 rue des mathématiques, 38402 Saint Martin d'Hères

04 November 2016

Contents

◮ (Multiple + noise) removal: formulation, algorithm, results
◮ Blind deconvolution: formulation, algorithm, results
◮ Conclusions

(Multiple + noise) removal

Seismic multiple reflection

◮ Towed streamer with hydrophones

[Figure: marine seismic acquisition. Solid blue: primaries; dashed red: multiple reflection disturbances.]

(Multiple + noise) removal strategies

Measurements: (∀n ∈ {0, …, N − 1})  z(n) = y(n) + s(n) + b(n), where y(n) is the primary, s(n) the multiples, and b(n) the noise.

Which strategy for restoring the primary signal y(n), corrupted by the unknown multiples s(n), plus noise b(n)?

◮ Methodology for adaptive primary/multiple separation based on approximate templates
◮ Variational approach
◮ Proximal methods to solve the resulting optimization problem

Multi-model

◮ J models r_j^(n) are known (available)
◮ Imperfect in time, amplitude and frequency
◮ Assumption: the models are linked to s̄(n) through time-varying FIR filters:

  s̄(n) = Σ_{j=0}^{J−1} Σ_{p=p′}^{p′+P_j−1} h̄_j^(n)(p) r_j^(n−p)

where
◮ h̄_j^(n): unknown impulse response of the filter corresponding to model j and time n (P_j tap coefficients)
◮ p′ ∈ {−P_j + 1, …, 0}
◮ New definition: P = Σ_{j=0}^{J−1} P_j.
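The time-varying multi-model filtering above can be sketched numerically. The following snippet (with illustrative sizes N, J, P and random templates and taps, all hypothetical) evaluates s̄(n) = Σ_j Σ_p h̄_j^(n)(p) r_j^(n−p), zero-padding the templates outside the record:

```python
import numpy as np

# Direct evaluation of the multi-model synthesis
#   s_bar(n) = sum_j sum_p h_j^(n)(p) r_j^(n-p),
# with zero-padding outside {0, ..., N-1}.  Sizes and values are illustrative.
rng = np.random.default_rng(0)
N, J, P = 128, 2, 5              # samples, templates, taps per filter
p0 = -(P - 1)                    # p' in {-P_j + 1, ..., 0}

r = rng.standard_normal((J, N))           # templates r_j^(n)
h = 0.1 * rng.standard_normal((J, N, P))  # taps h_j^(n)(p), one row per time n

def synthesize(r, h, p0):
    J, N, P = h.shape
    s = np.zeros(N)
    for n in range(N):
        for j in range(J):
            for k in range(P):
                p = p0 + k
                if 0 <= n - p < N:        # zero-pad outside the record
                    s[n] += h[j, n, k] * r[j, n - p]
    return s

s_bar = synthesize(r, h, p0)
```

As a sanity check, when each filter reduces to a unit tap at p = 0, the synthesis reduces to the sum of the templates.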

Template

[Figure: magnitude versus time of the first model r_0, the second model r_1, and the multiple s̄.]

[Figure: observed image and template in time–space coordinates, alongside the primary signal.]

Problem reformulation

  z (observed signal) = R h̄ (filtered templates) + ȳ (primary) + b (noise)

where
◮ s̄ = Σ_{j=0}^{J−1} R_j h̄_j = R h̄ = (s̄(0), …, s̄(N−1))^⊤
◮ R = [R_0 · · · R_{J−1}], where each R_j is a block-diagonal matrix
◮ h̄ = (h̄_0^⊤, …, h̄_{J−1}^⊤)^⊤
◮ h̄_j = (h̄_j^(0)(p′), …, h̄_j^(0)(p′ + P_j − 1), …, h̄_j^(N−1)(p′), …, h̄_j^(N−1)(p′ + P_j − 1))^⊤
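The matrix form s̄ = R h̄ can be checked against the direct filtering formula on a small example. This sketch (single template J = 1, illustrative sizes) builds R explicitly, which is only practical for tiny N and P:

```python
import numpy as np

# Building the linear operator R mapping stacked filter taps h to the
# multiple signal s = R h, for a single template (J = 1).  Block row n of R
# contains the template samples r(n - p) for the P taps active at time n.
rng = np.random.default_rng(0)
N, P = 32, 4
p0 = -(P - 1)                                  # p' offset, as on the slides
r = rng.standard_normal(N)                     # template r^(n)

R = np.zeros((N, N * P))
for n in range(N):
    for k in range(P):
        p = p0 + k
        if 0 <= n - p < N:
            R[n, n * P + k] = r[n - p]

h = 0.1 * rng.standard_normal(N * P)           # stacked taps h^(n)(p)
s = R @ h                                      # matrix form

# Direct evaluation of s(n) = sum_p h^(n)(p) r(n - p) for comparison.
s_direct = np.zeros(N)
for n in range(N):
    for k in range(P):
        p = p0 + k
        if 0 <= n - p < N:
            s_direct[n] += h[n * P + k] * r[n - p]
```

In practice R is applied matrix-free; the explicit matrix is only for verification.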

Estimation of y

Assumption: ȳ is a realization of a random vector Y whose probability density is

  (∀y ∈ R^N)  f_Y(y) ∝ exp(−ϕ(Fy)),

where F ∈ R^{K×N} is a linear operator. ϕ is chosen separable:

  (∀x = (x_k)_{1≤k≤K} ∈ R^K)  ϕ(x) = Σ_{k=1}^{K} ϕ_k(x_k),

where, for all k ∈ {1, …, K}, ϕ_k : R → ]−∞, +∞].

Estimation: filter h and noise b

◮ Assumption: h̄ is a realization of a random vector H whose probability density can be expressed as (∀h ∈ R^{NP}) f_H(h) ∝ exp(−ρ(h)); H is independent of Y.
◮ Assumption: b is a realization of a random vector B of probability density (∀b ∈ R^N) f_B(b) ∝ exp(−ψ(b)); B is assumed to be independent of Y and H.

MAP estimation of (y, h):

  minimize_{y∈R^N, h∈R^{NP}}  ψ(z − Rh − y) + ϕ(Fy) + ρ(h),

where ψ(z − Rh − y) is the fidelity term (linked to the noise), ϕ(Fy) the a priori on the signal, and ρ(h) the a priori on the filters.

Problem to be solved

◮ Difficulty: choosing good regularization parameters
◮ Proposed: use a constrained minimization problem instead:

  minimize_{y∈R^N, h∈R^{NP}}  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h)

About convex set D

  minimize_{y∈R^N, h∈R^{NP}}  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h)

with ι_D(x) = 0 if x ∈ D, +∞ otherwise.

◮ F ∈ R^{K×N}: analysis frame operator
◮ {K_l | l ∈ {1, …, L}} ⊂ {1, …, K}
◮ D = D_1 × · · · × D_L with D_l = {(x_k)_{k∈K_l} | Σ_{k∈K_l} ϕ_l(x_k) ≤ β_l}, where, for all l ∈ {1, …, L}, β_l ∈ ]0, +∞[ and ϕ_l : R → [0, +∞[ is a lower-semicontinuous convex function.
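When ϕ_l = |·|, each D_l is an ℓ1-ball, whose projection Π_{D_l} has a well-known sort-based solution. A minimal sketch (assuming real coefficient blocks; the function name is ours):

```python
import numpy as np

# Projection onto the l1-ball {x : sum_k |x_k| <= beta}, i.e. the set D_l
# obtained when phi_l = |.| on a block of frame coefficients.
def project_l1_ball(v, beta):
    if np.abs(v).sum() <= beta:
        return v.copy()                         # already feasible
    u = np.sort(np.abs(v))[::-1]                # magnitudes, descending
    cumsum = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (cumsum - beta) / ks > 0)[0][-1]
    theta = (cumsum[rho] - beta) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

Within the constrained formulation, Π_D applies this kind of projection blockwise on the frame coefficients Fy.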

About convex set C

  minimize_{y∈R^N, h∈R^{NP}}  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h)

with C = C_1 ∩ C_2 ∩ C_3.

◮ C_1 = {h ∈ R^{PN} : ρ(h) = Σ_{j=0}^{J−1} ρ_j(h_j) ≤ τ}, with one of:
  – ρ_j(h_j) = ‖h_j‖²_{ℓ_2} = Σ_{n=0}^{N−1} Σ_{p=p′}^{p′+P_j−1} |h_j^(n)(p)|²
  – ρ_j(h_j) = ‖h_j‖_{ℓ_1} = Σ_{n=0}^{N−1} Σ_{p=p′}^{p′+P_j−1} |h_j^(n)(p)|
  – ρ_j(h_j) = ‖h_j‖_{ℓ_{1,2}} = Σ_{n=0}^{N−1} ( Σ_{p=p′}^{p′+P_j−1} |h_j^(n)(p)|² )^{1/2}
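The three candidate penalties can be written directly on a filter block stored as an (N, P_j) array of taps; a minimal sketch with illustrative values:

```python
import numpy as np

# The three candidate a priori functions rho_j on a filter block h_j, stored
# as an (N, P) array of taps h_j^(n)(p).
def rho_l2_sq(hj):
    # squared l2 norm over all taps
    return float(np.sum(hj ** 2))

def rho_l1(hj):
    # l1 norm over all taps
    return float(np.sum(np.abs(hj)))

def rho_l12(hj):
    # mixed l1,2 norm: l2 norm across the taps p, summed over time n
    return float(np.sum(np.sqrt(np.sum(hj ** 2, axis=1))))
```

Note the ordering ρ_{ℓ1,2}(h_j) ≤ ρ_{ℓ1}(h_j) always holds, since an ℓ2 norm never exceeds the ℓ1 norm of the same block.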

Hard constraints on the filters C_2, C_3

  minimize_{y∈R^N, h∈R^{NP}}  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h)

with C = C_1 ∩ C_2 ∩ C_3. Assumption: slow variations of the filters along time:

  (∀(j, n, p))  |h_j^(n+1)(p) − h_j^(n)(p)| ≤ ε_{j,p}.

For computational reasons, this is enforced as h ∈ C_2 ∩ C_3, where

  C_2 = {h | ∀p, ∀n ∈ {0, …, N/2 − 1} : |h^(2n+1)(p) − h^(2n)(p)| ≤ ε_p}
  C_3 = {h | ∀p, ∀n ∈ {1, …, (N − 1)/2} : |h^(2n)(p) − h^(2n−1)(p)| ≤ ε_p}
Proximity operator

Definition. Let ϕ be a lower semicontinuous convex function. For all x ∈ R^N, prox_ϕ(x) is the unique minimizer of y ↦ ϕ(y) + ½‖x − y‖².

Examples:
◮ For C a non-empty closed convex subset of R^N,
  prox_{ι_C}(x) = argmin_{y∈R^N} ι_C(y) + ½‖x − y‖² = argmin_{y∈C} ‖x − y‖² = Π_C(x),
  the projection operator onto C.
◮ (∀x ∈ R)  prox_{λ|·|²}(x) = x / (1 + 2λ)  ("Wiener" filter)
◮ (∀x ∈ R)  prox_{λ|·|}(x) = sign(x) max(|x| − λ, 0)  (shrinkage / soft-thresholding operator)
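Both scalar examples are one-liners; the sketch below implements them elementwise (the function names are ours), and the defining minimization property can be checked numerically on a grid:

```python
import numpy as np

# The two scalar proximity operators from the slide, applied elementwise:
# prox of lambda|.|^2 (a "Wiener"-type scaling) and prox of lambda|.|
# (soft-thresholding).
def prox_abs_sq(x, lam):
    return x / (1.0 + 2.0 * lam)

def prox_abs(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Checking the definition: for x = 1.3 and λ = 0.5, the minimizer of λ|y| + ½(x − y)² over a fine grid coincides with prox_abs(1.3, 0.5) = 0.8.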

Proposed algorithm

  minimize_{y∈R^N, h∈R^{NP}}  Ψ(y, h) + ι_D(Fy) + ι_C(h)

◮ Ψ : R^{N+NP} → R : (y, h) ↦ ψ(z − Rh − y) is convex and differentiable with µ-Lipschitzian gradient (µ ∈ ]0, +∞[), i.e.
  (∀(u, v) ∈ R^{2(N+NP)})  ‖∇Ψ(u) − ∇Ψ(v)‖ ≤ µ‖u − v‖
◮ (∀i ∈ N)  γ[i] ∈ [ǫ, (1 − ǫ)/β], where β = µ + √(‖F‖² + 3) and ǫ ∈ ]0, 1/(β + 1)[

Use the M+LFBF algorithm [Combettes and Pesquet, 2012].

Algorithm M+LFBF [Combettes and Pesquet, 2012]

Set y[0] ∈ R^N, h[0] ∈ R^{NP}, v[0] ∈ R^K, u[0] ∈ R^{NP}
for i = 0, 1, … do
  Gradient computation:
    (s_1[i], t_1[i]) = (y[i], h[i]) − γ[i] ( ∇Ψ(y[i], h[i]) + (F*v[i], u[i]) )
  Projection computation:
    s_2[i] = v[i] + γ[i] F y[i],   w_1[i] = s_2[i] − γ[i] Π_D( (γ[i])^{−1} s_2[i] )
    t_2[i] = u[i] + γ[i] h[i],   w_2[i] = t_2[i] − γ[i] Π_C( (γ[i])^{−1} t_2[i] )
  Averaging:
    q_1[i] = w_1[i] + γ[i] F s_1[i],   v[i+1] = v[i] − s_2[i] + q_1[i]
    q_2[i] = w_2[i] + γ[i] t_1[i],   u[i+1] = u[i] − t_2[i] + q_2[i]
  Update:
    (y[i+1], h[i+1]) = (y[i], h[i]) − γ[i] ( ∇Ψ(s_1[i], t_1[i]) + (F*w_1[i], w_2[i]) )
end for
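A heavily simplified single-block instance of this iteration can be run on a toy problem. The sketch below minimizes ½‖x − z‖² subject to ‖Dop x‖_∞ ≤ β, with a finite-difference operator Dop standing in for the frame F and a box standing in for the set D (so Π_D is a clip); all names, sizes and constants are illustrative, not the thesis setting:

```python
import numpy as np

# Toy single-block instance of the M+LFBF iteration:
#   minimize 0.5 * ||x - z||^2   subject to   ||Dop x||_inf <= beta.
rng = np.random.default_rng(0)
N, beta = 50, 0.1
z = np.cumsum(rng.standard_normal(N))          # "observed" signal

Dop = lambda x: np.diff(x)                     # forward differences
def Dt(v):                                     # adjoint of Dop
    out = np.zeros(N)
    out[:-1] -= v
    out[1:] += v
    return out

proj_box = lambda u: np.clip(u, -beta, beta)   # projection onto the box
grad = lambda x: x - z                         # gradient of the data term

mu, normD = 1.0, 2.0                           # Lipschitz const.; ||Dop|| <= 2
gamma = 0.9 / (mu + normD)                     # step inside the allowed range

x, v = z.copy(), np.zeros(N - 1)
for _ in range(5000):
    s1 = x - gamma * (grad(x) + Dt(v))         # gradient step
    s2 = v + gamma * Dop(x)
    w1 = s2 - gamma * proj_box(s2 / gamma)     # projection step (Moreau)
    q1 = w1 + gamma * Dop(s1)                  # averaging
    v = v - s2 + q1
    x = x - gamma * (grad(s1) + Dt(w1))        # update
```

At convergence x is (approximately) the projection of z onto the feasible set, so its finite differences respect the bound β.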


Synthetic data

◮ Primary y: size 512 × 512, plus multiples and noise (σ = 0.08)
◮ Observed image z: SNR = 1.13 dB, SSIM = 0.16
◮ Reconstructed image by [Ventosa et al., 2012]: SNR = 2.38 dB, SSIM = 0.13
◮ Our method: SNR = 17.00 dB, SSIM = 0.74

Summary

ℓ1 error (×10⁻²) of the estimated primaries:

                          σ = 0.04   σ = 0.08   σ = 0.16
y − z                       3.88       6.89      13.09
[Ventosa et al., 2012]      5.38       7.87      13.36
ρj = ℓ2     OR-basis        1.53       2.27       3.34
            SI frame(∗)     1.19       1.69       2.42
            M-band          1.07       1.41       1.96
ρj = ℓ1     OR-basis        1.66       2.33       3.37
            SI frame(∗)     1.23       1.70       2.39
            M-band          1.14       1.47       2.00
ρj = ℓ1,2   OR-basis        1.51       2.25       3.32
            SI frame(∗)     1.10       1.58       2.32
            M-band          0.95       1.31       1.87

Comparison of the estimated primaries with the proposed 2D version(∗) using three different 2D wavelet transforms, three noise levels, and three a priori functions ρj ∈ {ℓ2, ℓ1, ℓ1,2}, against [Ventosa et al., 2012].

Real data

◮ Seismic data with a partially appearing primary: size 400 × 400
◮ Crop of the recorded seismic data z: size 256 × 256
◮ [Figures: reconstructed images by [Ventosa et al., 2012] and by our method.]

Blind deconvolution

Degradation model

Direct model: y = h ∗ x + b, where x is the signal, h the (unknown) kernel, and b the noise.

Which strategy for restoring the signal x corrupted by an unknown kernel h, plus noise b?

◮ Majorize-Minimize approach
◮ Block Coordinate Variable Metric Forward-Backward algorithm
◮ Smooth approximation of the ℓ1/ℓ2 function, efficient for sparse blind deconvolution problems

Seismic blind deconvolution

[Figure: an enlarged portion from the 25 to 30 meter locations on the bench (http://www.kgs.ku.edu/Geophysics/OFR/2004/OFR04 41/).]

[Figure: reflection seismograms showing primary reflections only [Al-Sadi, 1980].]

Smoothed ℓ1/ℓ2 sparsity measures

Common assumption: x has a sparse representation.
Question: which sparsity measure should be used?
Theoretically: the ℓ0 measure [Donoho et al., 1995]. Usually: the ℓ1 measure.

We propose a smoothed ℓ1/ℓ2 sparsity measure:

  ϕ(x) = log( (ℓ_{1,α}(x) + β) / ℓ_{2,η}(x) ),

where (α, β, η) ∈ ]0, +∞[³, and
◮ ℓ_{1,α}(x) = Σ_{n=1}^{N} ( √(x_n² + α²) − α )
◮ ℓ_{2,η}(x) = √( Σ_{n=1}^{N} x_n² + η² )
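A direct transcription of the smoothed measure (with illustrative smoothing parameters; the function names are ours):

```python
import numpy as np

# Smoothed l1/l2 sparsity measure
#   phi(x) = log((l_{1,alpha}(x) + beta) / l_{2,eta}(x)),
# built from the smoothed norms on the slide.
def l1_smooth(x, alpha):
    return np.sum(np.sqrt(x ** 2 + alpha ** 2) - alpha)

def l2_smooth(x, eta):
    return np.sqrt(np.sum(x ** 2) + eta ** 2)

def phi(x, alpha=1e-3, beta=1e-3, eta=1e-3):
    return np.log((l1_smooth(x, alpha) + beta) / l2_smooth(x, eta))
```

As expected of an ℓ1/ℓ2-type measure, ϕ scores a sparse vector lower than a dense one of the same ℓ2 norm, and it is nearly scale-invariant for small smoothing parameters.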

Related approaches:
◮ Convex regularization: blind image recovery via an alternating proximal algorithm [Bolte et al., 2010]; blind video restoration [Abboud et al., 2014]
◮ Non-convex regularization: image blind deconvolution [Krishnan, 2011]; the SOOT algorithm [Repetti et al., 2015], with a new convergence proof

Observation model

  y (observed signal) = h (original blur) ∗ x (reflectivity) + b (noise)

where
◮ y ∈ R^N: observed signal
◮ x ∈ R^N: unknown sparse original seismic signal (reflectivity)
◮ h ∈ R^S: unknown original blur kernel
◮ b ∈ R^N: additive noise, a realization of a zero-mean white Gaussian noise with variance σ²
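The model is easy to simulate. The sketch below draws a sparse reflectivity, a Ricker-like wavelet (our choice for illustration, not necessarily the kernel used in the experiments), and white Gaussian noise:

```python
import numpy as np

# Simulating the observation model y = h * x + b: sparse reflectivity x,
# a Ricker-like blur kernel h, and white Gaussian noise b.
rng = np.random.default_rng(0)
N, S, sigma = 784, 41, 0.01

x = np.zeros(N)                                  # sparse reflectivity
spikes = rng.choice(N, size=30, replace=False)
x[spikes] = rng.uniform(-1.0, 1.0, size=30)

t = np.linspace(-1.0, 1.0, S)                    # Ricker-like wavelet
h = (1.0 - 20.0 * t ** 2) * np.exp(-10.0 * t ** 2)
h /= np.abs(h).sum()

b = sigma * rng.standard_normal(N + S - 1)       # white Gaussian noise
y = np.convolve(x, h) + b                        # full convolution
```

The full convolution has length N + S − 1; in the blind setting, only y is observed and both x and h must be recovered.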

Optimization problem

Find  (x̂, ĥ) ∈ Argmin_{(x,h)∈R^{N+S}}  G(x, h) = F(x, h) + R(x, h),

where
◮ F : R^{N+S} → R is differentiable and has an L-Lipschitz gradient on dom R
◮ R : R^{N+S} → ]−∞, +∞] is proper and lower semicontinuous
◮ G is coercive, i.e. lim_{‖z‖→+∞} G(z) = +∞, and not necessarily convex.

Majorize-Minimize Algorithm

Find  ẑ ∈ Argmin_{z∈R^{N+S}}  R(z) + F(z).

MM Algorithm:  (∀k ∈ N)  z_{k+1} ∈ Argmin_{z∈R^{N+S}}  R(z) + Q(z, z_k),

where Q(·, z̃) is a quadratic majorant of F at z̃:

  Q(z, z̃) = F(z̃) + (z − z̃)^⊤ ∇F(z̃) + ½ (z − z̃)^⊤ A(z̃) (z − z̃),

with, for all z ∈ R^{N+S}, F(z) ≤ Q(z, z̃) and F(z̃) = Q(z̃, z̃).

Definition. Let z̃ ∈ R^{N+S}. The proximity operator of R relative to the metric induced by a symmetric positive-definite matrix A ∈ R^{(S+N)×(S+N)} is

  prox_{A,R}(z̃) = argmin_{z∈R^{N+S}}  R(z) + ½ (z − z̃)^⊤ A (z − z̃).
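A one-dimensional instance makes the MM scheme concrete: minimizing F(x) = ½(x − y)² + λ√(x² + α²) with the standard quadratic majorant of √(·² + α²) (curvature 1/√(x_k² + α²) at x_k), each MM step has a closed form. This is our illustrative example, not the thesis criterion:

```python
import numpy as np

# 1D Majorize-Minimize iteration for
#   F(x) = 0.5 * (x - y)^2 + lam * sqrt(x^2 + alpha^2).
# The majorant sqrt(x^2+a^2) <= w + (x^2 - x_k^2) / (2w), w = sqrt(x_k^2+a^2),
# follows from concavity of sqrt; minimizing the surrogate is closed-form.
y, lam, alpha = 2.0, 1.0, 1e-2
F = lambda x: 0.5 * (x - y) ** 2 + lam * np.sqrt(x ** 2 + alpha ** 2)

x = 0.0
values = [F(x)]
for _ in range(100):
    w = np.sqrt(x ** 2 + alpha ** 2)   # majorant curvature lam / w at x
    x = y * w / (w + lam)              # exact minimizer of the surrogate
    values.append(F(x))
```

The cost values decrease monotonically, as the MM principle guarantees, and the iterates approach the smoothed soft-threshold value near y − λ = 1.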

Proposed criterion

  minimize_{(x,h)∈R^{N+S}}  G(x, h) = F(x, h) + R(x, h)

◮ F(x, h) = ρ(x, h) (data fidelity term) + λϕ(x) (regularization term), with ρ(x, h) = ½‖h ∗ x − y‖²
◮ R(x, h) = R_1(x) + R_2(h)
◮ R_1(x) = ι_{[x_min, x_max]^N}(x), with (x_min, x_max) ∈ ]0, +∞[²
◮ R_2(h) = ι_C(h), with C = {h ∈ [h_min, h_max]^S | ‖h‖ ≤ δ}, for (h_min, h_max, δ) ∈ ]0, +∞[³

Quadratic majorants

Proposition. For every (x, h) ∈ R^N × R^S, let

  A_1(x, h) = ( L_1(h) + 9λ/(8η²) ) I_N + ( λ / (ℓ_{1,α}(x) + β) ) A_{ℓ_{1,α}}(x),
  A_2(x, h) = L_2(x) I_S,

where A_{ℓ_{1,α}}(x) = Diag( ((x_n² + α²)^{−1/2})_{1≤n≤N} ), and L_1(h) (resp. L_2(x)) is a Lipschitz constant of ∇_1ρ(·, h) (resp. ∇_2ρ(x, ·)). Then A_1(x, h) (resp. A_2(x, h)) satisfies the majoration condition for F(·, h) at x (resp. F(x, ·) at h).

SOOT algorithm

For every k ∈ N, let J_k ∈ N*, I_k ∈ N*, and let (γ_x^{k,j})_{0≤j≤J_k−1} and (γ_h^{k,i})_{0≤i≤I_k−1} be positive sequences. Initialize with x_0 ∈ dom R_1 and h_0 ∈ dom R_2.

Iterations: for k = 0, 1, …
  x_{k,0} = x_k,  h_{k,0} = h_k
  For j = 0, …, J_k − 1:
    x̃_{k,j} = x_{k,j} − γ_x^{k,j} A_1(x_{k,j}, h_k)^{−1} ∇_1F(x_{k,j}, h_k)
    x_{k,j+1} = prox_{(γ_x^{k,j})^{−1} A_1(x_{k,j}, h_k), R_1}( x̃_{k,j} )
  x_{k+1} = x_{k,J_k}
  For i = 0, …, I_k − 1:
    h̃_{k,i} = h_{k,i} − γ_h^{k,i} A_2(x_{k+1}, h_{k,i})^{−1} ∇_2F(x_{k+1}, h_{k,i})
    h_{k,i+1} = prox_{(γ_h^{k,i})^{−1} A_2(x_{k+1}, h_{k,i}), R_2}( h̃_{k,i} )
  h_{k+1} = h_{k,I_k}
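A heavily simplified block-coordinate sketch in the same spirit (variable metrics replaced by Lipschitz-constant scalings, box constraints enforced by clipping, and the ℓ1/ℓ2 term omitted) alternates projected-gradient updates on x and h; everything here is illustrative, not the SOOT implementation:

```python
import numpy as np

# Simplified alternating projected-gradient scheme for
#   F(x, h) = 0.5 * ||h * x - y||^2,  x in [0,1]^N,  h in [0,1]^S.
rng = np.random.default_rng(0)
N, S = 200, 11
x_true = np.zeros(N)
x_true[rng.choice(N, 12, replace=False)] = rng.uniform(0.2, 1.0, 12)
h_true = np.hanning(S)
h_true /= h_true.sum()
y = np.convolve(x_true, h_true)

def grad_x(x, h):   # gradient of F with respect to x (correlation with h)
    return np.convolve(np.convolve(x, h) - y, h[::-1], mode='valid')

def grad_h(x, h):   # gradient of F with respect to h (correlation with x)
    return np.convolve(np.convolve(x, h) - y, x[::-1], mode='valid')

F = lambda x, h: 0.5 * np.sum((np.convolve(x, h) - y) ** 2)

x = np.clip(y[:N], 0.0, 1.0)          # crude initializations
h = np.full(S, 1.0 / S)
vals = [F(x, h)]
for _ in range(300):
    Lx = max(np.abs(h).sum() ** 2, 1e-8)   # Lipschitz bound for grad_x
    x = np.clip(x - grad_x(x, h) / Lx, 0.0, 1.0)
    Lh = max(np.abs(x).sum() ** 2, 1e-8)   # Lipschitz bound for grad_h
    h = np.clip(h - grad_h(x, h) / Lh, 0.0, 1.0)
    vals.append(F(x, h))
```

Each block step is a projected gradient step with a step size of one over a Lipschitz bound, so the cost is nonincreasing, mirroring the monotonicity the SOOT analysis establishes for the full scheme.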

Convergence results

Theorem. Let (x_k)_{k∈N} and (h_k)_{k∈N} be sequences generated by the SOOT algorithm, and assume:
1. There exists (ν, ν̄) ∈ ]0, +∞[² such that, for all k ∈ N,
   (∀j ∈ {0, …, J_k − 1})  ν I_N ⪯ A_1(x_{k,j}, h_k) ⪯ ν̄ I_N,
   (∀i ∈ {0, …, I_k − 1})  ν I_S ⪯ A_2(x_{k+1}, h_{k,i}) ⪯ ν̄ I_S.
2. There exists (γ, γ̄) ∈ ]0, +∞[² such that, for all k ∈ N, j ∈ {0, …, J_k − 1} and i ∈ {0, …, I_k − 1}, (γ_x^{k,j}, γ_h^{k,i}) ∈ [γ, 2 − γ̄]².
3. R_1, R_2 are semi-algebraic functions.
Then (x_k, h_k)_{k∈N} converges to a critical point (x̂, ĥ) of G, and (G(x_k, h_k))_{k∈N} is a nonincreasing sequence converging to G(x̂, ĥ).

Results (1D)

[Figures: reflectivity x̄ (N = 784), observed signal y, and blur kernel (S = 41): originals and estimates.]

◮ Blur kernel estimate: [Krishnan, 2011]: SNR = 19.12 dB, time = 41 s; SOOT: SNR = 20.98 dB, time = 14 s
◮ Reflectivity estimate: [Krishnan, 2011]: SNR = 8.60 dB; SOOT: SNR = 11.44 dB
◮ Errors: [Krishnan, 2011]: ℓ1 = 5.5 × 10⁻³; SOOT: ℓ1 = 4.2 × 10⁻³

Summary

Noise level (σ)                        0.01    0.02    0.03
Observation error    ℓ2 (×10⁻²)        7.14    7.35    7.68
                     ℓ1 (×10⁻²)        2.85    3.44    4.09
Signal error
  [Krishnan, 2011]   ℓ2 (×10⁻²)        1.23    1.66    1.84
                     ℓ1 (×10⁻³)        3.79    4.69    5.30
  SOOT algorithm     ℓ2 (×10⁻²)        1.09    1.63    1.83
                     ℓ1 (×10⁻³)        3.42    4.30    4.85
Kernel error
  [Krishnan, 2011]   ℓ2 (×10⁻²)        1.88    2.51    3.21
                     ℓ1 (×10⁻²)        1.44    1.96    2.53
  SOOT algorithm     ℓ2 (×10⁻²)        1.62    2.26    2.93
                     ℓ1 (×10⁻²)        1.22    1.77    2.31
Time (s)
  [Krishnan, 2011]                     106     61      56
  SOOT algorithm                       56      22      18

Comparison between [Krishnan, 2011] and the SOOT algorithm for the x and h estimates (Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.5 GHz, Matlab 8).

Image blind deconvolution

◮ Original image: size 300 × 300
◮ Observed image: SNR = 13.90 dB, SSIM = 0.92
◮ Reconstructed image by [Krishnan, 2011]: SNR = 15.19 dB, SSIM = 0.94, time = 2 min
◮ SOOT: SNR = 16.06 dB, SSIM = 0.94, time = 7 s

Image blind deconvolution (second example)

◮ Observed image: size 512 × 512, SNR = 19.66 dB, SSIM = 0.93
◮ Reconstructed image by [Krishnan, 2011]: SNR = 20.96 dB, SSIM = 0.95, time = 7 min
◮ SOOT: SNR = 22.80 dB, SSIM = 0.97, time = 45 s

Conclusions

Contributions:
◮ A generic methodology to impose sparsity and regularity properties through constrained adaptive filtering in a transformed domain.
◮ Versatility of the proposed optimization framework, which permits different strategies for sparse modeling and adaptation constraints.
◮ Smooth parametric approximations to the ℓ1/ℓ2 norm ratio.
◮ Convergence results both on iterates and function values.
◮ Blocks updated according to a flexible quasi-cyclic rule.
◮ Acceleration of the convergence thanks to the choice of the matrices (A_{jℓ}(x_ℓ))_{ℓ∈N} based on the MM principle.
◮ Application to sparse blind deconvolution.
◮ Results demonstrated on seismic reflectivity, acoustic source localization, and biological/medical image restoration.

Software:
◮ SOOT blind deconvolution: http://lc.cx/soot