
SLIDE 1

INTRODUCTION TF INPAINTING TENSOR FACTORIZATION CONCLUSIONS

From the modeling of direct problems in image processing to the resolution of inverse problems

Caroline Chaux Joint work with M. Krémé, V. Emiya, B. Torrésani Joint work with X. Vu, N. Thirion-Moreau, S. Maire

CNRS and Aix-Marseille Univ.

1 / 42

SLIDE 2

Contents

INTRODUCTION
  ◮ On the importance of modeling
  ◮ What about solving inverse problems?
TF INPAINTING
  ◮ Why?
  ◮ Low-rankness property of the STFT
  ◮ TF phase inpainting
  ◮ Simulations
TENSOR FACTORIZATION
  ◮ 3D fluorescence spectroscopy
  ◮ A proximal approach for NTF
  ◮ Experiments
  ◮ Real case: water monitoring
CONCLUSIONS
  ◮ Conclusions

SLIDE 3

From modeling to resolution

Object of interest y → System D_α → Observation z = D_α(y)

α: system parameters (e.g. noise)

How to recover y from z? → Solving inverse problems, with method parameters (e.g. regularization)

SLIDE 10

Inverse problem formulation

Recovering the original (unknown) data from distorted observations.

What? ♣ Formulating the inverse problem as a minimization problem
  ◮ Variational approach;
  ◮ Statistical approach (MAP).

How? ♣ minimize_{y}  f_1(y) + f_2(y)
  where f_1 is a data-fidelity term and f_2 a regularization term.

SLIDE 11

Minimization problems

◮ Standard problem:
  minimize_{y ∈ R^N}  f_1(y) + f_2(y)
  (f_1: fidelity, f_2: regularization).
◮ Taking into account several regularizations (P − 1 terms):
  minimize_{y ∈ R^N}  f_1(y) + Σ_{p=2}^{P} f_p(y).
◮ Introducing linear operators (F_p)_{p ∈ {1,...,P}}:
  minimize_{y ∈ R^N}  Σ_{p=1}^{P} f_p(F_p y).
◮ For large-size problems, or for other reasons, it can be interesting to work on data blocks y^(p) of size L_p, with y = (y^(p))_{p=1}^{P}:
  minimize_{y ∈ R^N}  Σ_{p=1}^{P} f_p(y^(p)).
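As a concrete instance of the standard problem above, here is a minimal forward-backward (proximal gradient) sketch in numpy, with a quadratic fidelity f_1 and an ℓ1 regularization f_2 whose prox is soft-thresholding. The operator A, the data z and the parameter lam are illustrative choices, not taken from the talk.

```python
import numpy as np

# Forward-backward sketch for minimize_y f1(y) + f2(y), with
# f1(y) = 0.5*||Ay - z||^2 (fidelity) and f2(y) = lam*||y||_1
# (sparsity regularization, prox = soft-thresholding).
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
y_true = np.zeros(128)
y_true[rng.choice(128, 5, replace=False)] = 1.0
z = A @ y_true                                   # observation
lam = 0.1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # step 1/L, L = ||A||^2

y = np.zeros(128)
for _ in range(500):
    g = y - gamma * (A.T @ (A @ y - z))          # gradient step on f1
    y = np.sign(g) * np.maximum(np.abs(g) - gamma * lam, 0.0)  # prox of f2

residual = np.linalg.norm(A @ y - z)
```

With a step below 1/L, the objective f_1 + f_2 decreases monotonically, which is easy to check against the starting point y = 0.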

SLIDE 12

Some proximal approaches

◮ Parallel ProXimal Algorithm + (PPXA+) [Pesquet, Pustelnik, 2012] ◮ Generalized Forward-Backward [Raguet et al., 2012] ◮ M+SFBF [Briceño-Arias, Combettes, 2011] ◮ M+LFBF [Combettes, Pesquet, 2011] ◮ FB based algorithms [Chambolle, Pock, 2011],[V˜

u,2013],[Condat,2013]

◮ Proximal Alternating Linearized Minimization (PALM) [Bolte et al.,

2014]

◮ An accelerated projection gradient based algorithm [Zhang et al., 2016] ◮ Block-Coordinate Variable Metric Forward-Backward (BC-VMFB) algorithm [Chouzenoux et al., 2016]

SLIDE 14

Motivation

Inpainting problem:

  min_{Y ∈ C^{F×T}}  ½ ‖M ⊙ (X − Y)‖²_F + λ ‖Y‖_*,  where λ > 0.
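This model can be attacked by proximal gradient, since the prox of λ‖·‖_* is singular-value soft-thresholding (SVT). Below is a real-valued toy sketch with illustrative sizes and parameters, not the talk's data.

```python
import numpy as np

# Proximal-gradient sketch for min_Y 0.5*||M o (X - Y)||_F^2 + lam*||Y||_*
# where the prox of the nuclear norm soft-thresholds the singular values.
def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.random((20, 3)) @ rng.random((3, 30))          # low-rank target
M = (rng.random(X.shape) < 0.7).astype(float)          # 70% observed entries
lam = 0.1

Y = np.zeros_like(X)
for _ in range(200):
    grad = M * (Y - X)                                 # gradient of the fidelity
    Y = svt(Y - grad, lam)                             # step size 1 (M is binary)

err = np.linalg.norm(Y - X) / np.linalg.norm(X)
```

Since the masked quadratic term has Lipschitz constant 1, a unit step guarantees a monotone decrease of the objective.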

SLIDE 15

Initial point

Figure: Spectrogram of the Glockenspiel, composed of about 50 spectral peaks distributed on 15 occurrences of 8 notes. (Axes: time in s, frequency in Hz, magnitude in dB.)

◮ How can the intuition of low-rankness of spectrograms be extended to complex-valued time-frequency matrices?
◮ What is a rank-one matrix, or more generally a rank-r matrix, in the time-frequency plane?
◮ Do time-frequency matrices of real-world sounds have good low-rank approximations?

SLIDE 16

STFT definitions

(K × N)-STFT, band-pass convention:
  S^(K×N)_BP[k, n] = Σ_m s[t_n + m] w[m] e^{−2iπ ν_k m}

(K × N)-STFT, low-pass convention:
  S^(K×N)_LP[k, n] = Σ_m s[m] w[m − t_n] e^{−2iπ ν_k m}

where (w[m])_{m ∈ L} ∈ C^L denotes the window, ν_k, k ∈ K, is a discrete frequency and t_n, n ∈ N, a discrete time.

Relation between conventions:
  ∀k ∈ K, ∀n ∈ Z,   S_LP(k, n) = S_BP(k, n) × e^{−2iπ ν_k t_n}
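The relation between the two conventions can be checked numerically. The sketch below builds both fully redundant STFT matrices by brute force with circular indexing; taking ν_k = k/L and t_n = n is an assumption of this toy setup, not a detail from the slides.

```python
import numpy as np

# Numerical check of S_LP[k,n] = S_BP[k,n] * exp(-2i*pi*nu_k*t_n)
# for a fully redundant STFT (L frequencies, L times), nu_k = k/L, t_n = n.
L = 16
rng = np.random.default_rng(0)
s = rng.standard_normal(L) + 1j * rng.standard_normal(L)
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
m = np.arange(L)

S_bp = np.array([[np.sum(s[(n + m) % L] * w * np.exp(-2j*np.pi*k*m/L))
                  for n in range(L)] for k in range(L)])
S_lp = np.array([[np.sum(s * w[(m - n) % L] * np.exp(-2j*np.pi*k*m/L))
                  for n in range(L)] for k in range(L)])

phase = np.exp(-2j*np.pi*np.outer(np.arange(L), np.arange(L))/L)  # e^{-2i*pi*k*n/L}
assert np.allclose(S_lp, S_bp * phase)
```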

SLIDE 17

Factorization of STFT matrices

∀k, n:
  S_BP[k, n] = Σ_m s[n + m] w[m] e^{−2iπ km/L}
  S_LP[k, n] = Σ_m s[m] w[m − n] e^{−2iπ km/L}

(L × L)-STFT (full redundancy, K = L = N): for any signal s ∈ C^L and window w ∈ C^L, we have
  S_BP = E diag(w) E^{−1} diag(ŝ) E   and   S_LP = E diag(s) E^{−1} diag(ŵ) E
where E = (e^{−2iπ kt/L})_{k ∈ L, t ∈ L} is the Fourier matrix and ŝ, ŵ denote the DFTs of s and w.

SLIDE 18

Rank-r STFT matrices

Band-pass convention: if w ∈ C^L is a window that does not vanish, i.e., ∀k ∈ L, w[k] ≠ 0, then rank(S_BP) = ‖ŝ‖₀.
⇒ The set of rank-r STFT matrices in the band-pass convention is composed of the signals that are a sum of r pure complex exponentials at Fourier frequencies.

Low-pass convention: if w ∈ C^L is a window such that ŵ does not vanish, i.e., ∀k ∈ L, ŵ[k] ≠ 0, then rank(S_LP) = ‖s‖₀.
⇒ The set of rank-r STFT matrices in the low-pass convention is composed of the signals that are a sum of r Diracs at integer times.
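A quick numerical sanity check of the band-pass statement, at toy sizes; the Hamming window is chosen because it never vanishes, as the statement requires.

```python
import numpy as np

# A signal that is a sum of r complex exponentials at exact Fourier
# frequencies should give a fully redundant band-pass STFT matrix of
# rank r (here r = 3), provided the window never vanishes.
L = 32
m = np.arange(L)
w = 0.54 - 0.46 * np.cos(2 * np.pi * m / L)          # nonvanishing (Hamming)
freqs = [3, 11, 20]                                   # r = 3 Fourier bins
s = sum(np.exp(2j * np.pi * f * m / L) for f in freqs)

S_bp = np.array([[np.sum(s[(n + m) % L] * w * np.exp(-2j*np.pi*k*m/L))
                  for n in range(L)] for k in range(L)])

print(np.linalg.matrix_rank(S_bp))  # 3 = number of active Fourier frequencies
```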

SLIDE 19

Analysis of low-rank STFT matrices

Context: signal of length L = 128 composed of a sum of Nc = 6 complex sinusoids at exact Fourier frequencies (5 close frequencies and 1 isolated frequency).
Results: rank(S_BP) = Nc while rank(S_LP) is higher.

Figure: Analysis with a Gaussian window: singular values of STFT matrices, magnitude and energy spectrograms.

SLIDE 20

Analysis of low-rank STFT matrices

Context: rank vs. number of components Nc (frequencies drawn randomly among exact Fourier frequencies), signal length L = 64.

Figure: Rank of several types of time-frequency matrices vs. number of sinusoids in the signal.

Results: rank(S_BP) = Nc while rank(S_LP) is higher. The rank of the STFT matrix is lower than that of the related spectrograms.

SLIDE 21

Formulation of the phase inpainting problem

Gabor atoms (STFT): a_{t,ν}[n] = w[n − t h] e^{2ıπ ν n / F}, for t ∈ {0, ..., T − 1} and ν ∈ {0, ..., F − 1}, with w the window and h the hop size.

Known binary mask: m ∈ {0, 1}^{F×T}. Observations: b ∈ C^{F×T}:
  ◮ b(m): fully known coefficients
  ◮ b(¬m): known magnitudes

Proposition (our contribution): Find x ∈ C^N s.t.
  ⟨x, a_{t,ν}⟩ = b[t, ν],    ∀(t, ν) ∈ supp(m)
  |⟨x, a_{t,ν}⟩| = b[t, ν],  ∀(t, ν) ∈ supp(¬m)

SLIDE 22

STFT with some missing data

◮ Missing data in TF plane = Missing phases ie magnitudes assumed to be known. ◮ What about the quality of the reconstructed signal with 30% of missing phases in its spectrogram ?

Original glockenspiel Reconstructed signal

◮ What about putting random phases ? RPI reconstruction:

15 / 42

slide-23
SLIDE 23

INTRODUCTION TF INPAINTING TENSOR FACTORIZATION CONCLUSIONS

STFT with some missing data

◮ Missing data in TF plane = Missing phases ie magnitudes assumed to be known. ◮ What about the quality of the reconstructed signal with 30% of missing phases in its spectrogram ?

Original glockenspiel Reconstructed signal

◮ What about putting random phases ? RPI reconstruction: ◮ Phases are very important

SLIDE 24

Algorithm 1: Griffin and Lim for phase inpainting (GLI)

Require: b: observations; m: binary mask; niter: number of iterations;
         STFT and STFT⁻¹: operators related to {a_{t,ν}}
Random initialization ϕ0 of missing phases:
  ϕ ← m ◦ ∠b + (1 − m) ◦ ϕ0   and   y(0) ← b ◦ exp(ıϕ)
for i ∈ {1, 2, ..., niter} do
  z(i) ← STFT( STFT⁻¹( y(i−1) ) )
  ϕ(i) ← m ◦ ∠b + (1 − m) ◦ ∠z(i)
  y(i) ← b ◦ exp(ıϕ(i))
end for
return STFT⁻¹(y(niter))

Audio: original signal, RPI reconstruction, GLI reconstruction.
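A compact sketch of Algorithm 1, using scipy's `stft`/`istft` as the analysis/synthesis pair; these are not the exact Gabor operators of the slides, and the function name `gli` is ours.

```python
import numpy as np
from scipy.signal import stft, istft

def gli(b, m, niter=50, nperseg=128, seed=0):
    # b: complex STFT observations (magnitudes known everywhere, phases
    # known where m == 1); m: binary mask. Returns the inpainted signal.
    rng = np.random.default_rng(seed)
    phi = m * np.angle(b) + (1 - m) * rng.uniform(0, 2 * np.pi, b.shape)
    y = np.abs(b) * np.exp(1j * phi)
    for _ in range(niter):
        _, x = istft(y, nperseg=nperseg)       # back to the time domain
        _, _, z = stft(x, nperseg=nperseg)     # project on consistent STFTs
        phi = m * np.angle(b) + (1 - m) * np.angle(z)
        y = np.abs(b) * np.exp(1j * phi)       # restore the known magnitudes
    return istft(y, nperseg=nperseg)[1]
```

When all phases are known (m full of ones), the iteration leaves y unchanged and the signal is recovered exactly, which gives a simple correctness check.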

SLIDE 25

PhaseLift for phase inpainting (PLI)

Phase inpainting problem (non-convex): find x ∈ C^N s.t.
  ⟨x, a_{t,ν}⟩ = b[t, ν],    ∀(t, ν) ∈ supp(m)
  |⟨x, a_{t,ν}⟩| = b[t, ν],  ∀(t, ν) ∈ supp(¬m)

Lifting X = x x^H:
  min_{X ∈ C^{N×N}} Rank(X) s.t.
    Trace(A_{(t,ν),(t′,ν′)} X) = b[t′, ν′] b̄[t, ν],  ∀(t, ν) ∈ supp(m)
    Trace(A_{(t′,ν′),(t′,ν′)} X) = b²[t′, ν′],        ∀(t′, ν′) ∈ supp(¬m)
    X ⪰ 0 (positive semidefinite (PSD) matrix)

Relaxation¹ (rank replaced by trace):
  min_{X ∈ C^{N×N}} Trace(X) s.t. the same constraints.

¹ TFOCS: Templates for convex cone problems with applications to sparse signal recovery, S. Becker, E. J. Candès and M. Grant, 2010.
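The key point of the lifting step is that the quadratic magnitude constraint |⟨x, a⟩|² becomes linear in the lifted variable X = x x^H: trace(a a^H X) = |⟨x, a⟩|². A small numerical illustration (toy vectors, not the talk's data):

```python
import numpy as np

# Check the lifting identity trace(a a^H x x^H) = |<x, a>|^2.
rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.outer(x, x.conj())                 # lifted variable X = x x^H
A = np.outer(a, a.conj())                 # measurement matrix a a^H
lhs = np.trace(A @ X).real                # linear in X
rhs = np.abs(np.vdot(a, x)) ** 2          # |<x, a>|^2, quadratic in x
assert np.isclose(lhs, rhs)
```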

SLIDE 26

PhaseCut for phase inpainting (PCI)

Find x ∈ C^N s.t.
  ⟨x, a_{t,ν}⟩ = b[t, ν],    ∀(t, ν) ∈ supp(m)
  |⟨x, a_{t,ν}⟩| = b[t, ν],  ∀(t, ν) ∈ supp(¬m)

Splitting: find x ∈ C^N, u ∈ C^{F×T} s.t.
  A x = Diag(c) u
  u[t, ν] = e^{ı∠b[t,ν]},  ∀(t, ν) ∈ supp(m)
  |u[t, ν]| = 1,           ∀(t, ν)

Lifting and relaxation²:
  min_{U ∈ C^{(F×T)²}} Trace(U Γ) s.t.
    Diag(U) = 1
    U[(t, ν), (t′, ν′)] = (b[t, ν]/|b[t, ν]|) (b̄[t′, ν′]/|b[t′, ν′]|),  ∀(t, ν), (t′, ν′) ∈ supp(m)
    U ⪰ 0

² Block coordinate descent methods for semidefinite programming. Z. Wen, D. Goldfarb and K. Scheinberg, 2012.

SLIDE 27

Data

◮ A signal composed of a mixture of 3 signals: s = (a) + (b) + (c), with (a) an ascending chirp and (b) a descending chirp. [Figure: the three components; time in samples vs. normalized frequency]
◮ Signal spectrogram (T = 16 and F = 32)

SLIDE 28

Phase inpainting problem

[Spectrogram] → find x (GLI, PCI, PLI): reconstruction from a random mask with holes of width one.
[Spectrogram and mask] → find x (GLI, PCI, PLI): reconstruction from a random mask with larger holes.

Reconstruction error up to a global phase:
  E_dB(x, x̂) = 20 log₁₀ ( min_θ ‖x − e^{ıθ} x̂‖₂ / ‖x‖₂ )
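The inner minimization over the global phase θ has a closed form, θ* = ∠⟨x̂, x⟩, which gives a direct implementation of E_dB (the function name `error_db` is ours):

```python
import numpy as np

def error_db(x, xhat):
    # optimal global phase: theta* = angle(<xhat, x>)
    theta = np.angle(np.vdot(xhat, x))
    return 20 * np.log10(np.linalg.norm(x - np.exp(1j * theta) * xhat)
                         / np.linalg.norm(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(32) + 1j * rng.standard_normal(32)
xhat = np.exp(0.7j) * x + 1e-12 * rng.standard_normal(32)  # exact up to phase
```

A reconstruction that is exact up to a global phase then scores a very large negative value in dB, no matter the phase offset.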

SLIDE 29

Reconstruction error: random holes of size one

[Figure: error (dB) vs. ratio of missing phases, for RPI, GLI, PCI, PLI]

  • Perfect reconstruction below 40% of missing phases by GLI and PCI
  • PLI works very well but is not perfect
  • Above 40%, PLI and PCI perform better than GLI, and the performance is even better for PLI

SLIDE 30

Reconstruction error: large randomly distributed holes

[Figure: error (dB) vs. hole width (1 to 9), 30% missing phases, for RPI, GLI, PCI, PLI]

  • Bad reconstruction by GLI
  • Good performance for the SDP methods: PLI and PCI

SLIDE 31

3D fluorescence spectroscopy

◮ Problem: identifying dissolved fluorescent substances in water solutions ◮ Method: fluorescence spectroscopy technique ◮ Data acquisition:

Spectro- photometer Water sample

λex λem

10 20 30 40 50 60 70 80 90 100 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45

λem

23 / 42

slide-32
SLIDE 32

INTRODUCTION TF INPAINTING TENSOR FACTORIZATION CONCLUSIONS

3D fluorescence spectroscopy

◮ Problem: identifying dissolved fluorescent substances in water solutions ◮ Method: fluorescence spectroscopy technique ◮ Data acquisition:

Spectro- photometer Water sample

λex λem

λem λex

FEEM

10 20 30 40 50 60 70 80 90 100 5 10 15 20 25 30 35 40 45

23 / 42

slide-33
SLIDE 33

INTRODUCTION TF INPAINTING TENSOR FACTORIZATION CONCLUSIONS

3D fluorescence spectroscopy

◮ Problem: identifying dissolved fluorescent substances in water solutions ◮ Method: fluorescence spectroscopy technique ◮ Data acquisition:

Spectro- photometer . . . Water sample (i) Water sample (1)

λex λem

λem λex

FEEM

10 20 30 40 50 60 70 80 90 100 5 10 15 20 25 30 35 40 45

. . .

λem λex

FEEM

10 20 30 40 50 60 70 80 90 100 5 10 15 20 25 30 35 40 45

23 / 42

slide-34
SLIDE 34

INTRODUCTION TF INPAINTING TENSOR FACTORIZATION CONCLUSIONS

3D fluorescence spectroscopy

◮ Problem: identifying dissolved fluorescent substances in water solutions ◮ Method: fluorescence spectroscopy technique ◮ Data acquisition:

Spectro- photometer . . . Water sample (i) Water sample (1)

λex λem

λem λex

FEEM

10 20 30 40 50 60 70 80 90 100 5 10 15 20 25 30 35 40 45

. . .

λem λex

FEEM

10 20 30 40 50 60 70 80 90 100 5 10 15 20 25 30 35 40 45

Fluorescent compounds?

SLIDE 35

3D fluorescence spectroscopy and tensors

Beer-Lambert's law: the data tensor factorizes into emission spectra, excitation spectra and concentrations over the experiments:

  T = Σ_{r=1}^{R} ā_r^(1) ∘ ā_r^(2) ∘ ā_r^(3),   ∘: outer product

[Figure: the three factors: emission spectra (λ_em), excitation spectra (λ_ex), concentrations]

SLIDE 36

(Canonical) Polyadic Decomposition (CPD)

Tensor form [Harshman, 1970]:

  T = Σ_{r=1}^{R} ā_r^(1) ∘ ā_r^(2) ∘ ... ∘ ā_r^(N)   (each term is a rank-1 tensor)
    = [[ Ā^(1), Ā^(2), ..., Ā^(N) ]]

where R is the tensor rank, ā_r^(n) ∈ R^{I_n} are the loading vectors and Ā^(n) ∈ R^{I_n × R} the loading matrices, ∀n ∈ {1, 2, ..., N}.
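The tensor form can be made concrete with numpy outer products (toy dimensions, N = 3):

```python
import numpy as np

# Build a rank-R CP tensor as a sum of R rank-1 outer products, then
# check it against the compact einsum form [[A1, A2, A3]].
rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 5, 6, 3
A1, A2, A3 = (rng.random((d, R)) for d in (I1, I2, I3))

T = np.zeros((I1, I2, I3))
for r in range(R):
    # outer product a_r^(1) o a_r^(2) o a_r^(3)
    T += np.einsum('i,j,k->ijk', A1[:, r], A2[:, r], A3[:, r])

assert np.allclose(T, np.einsum('ir,jr,kr->ijk', A1, A2, A3))
```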

SLIDE 37

Canonical Polyadic Decomposition (CPD) (2)

Scalar form:
  t_{i_1 ... i_N} = Σ_{r=1}^{R} ā^(1)_{i_1 r} ā^(2)_{i_2 r} ... ā^(N)_{i_N r}

Matrix form [Cichocki, 2009]:
  T^(n)_{I_n, I_−n} = Ā^(n) (Z^(−n))ᵀ,   n ∈ {1, ..., N}

where T^(n)_{I_n, I_−n} ∈ R₊^{I_n × I_−n} is the matrix obtained by unfolding T in the n-th mode, with I_−n = I_1 ... I_N / I_n, and, for all n ∈ {1, ..., N},
  Z^(−n) = Ā^(N) ⊙ ... ⊙ Ā^(n+1) ⊙ Ā^(n−1) ⊙ ... ⊙ Ā^(1) ∈ R₊^{I_−n × R},
⊙: Khatri-Rao product.
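The matrix form can be verified numerically for a 3-way tensor: with a Fortran-order (Kolda-style) unfolding, T_(1) = Ā^(1) (Ā^(3) ⊙ Ā^(2))ᵀ. A sketch (the helper name `khatri_rao` is ours):

```python
import numpy as np

I, J, K, R = 4, 5, 6, 3
rng = np.random.default_rng(1)
A, B, C = (rng.random((d, R)) for d in (I, J, K))

# rank-R CP tensor T = sum_r a_r o b_r o c_r
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def khatri_rao(X, Y):
    # column-wise Kronecker product: column r is kron(x_r, y_r)
    return np.einsum('kr,jr->kjr', X, Y).reshape(-1, X.shape[1])

# mode-1 unfolding identity: T_(1) = A (C ⊙ B)^T
T1 = T.reshape(I, -1, order='F')
assert np.allclose(T1, A @ khatri_rao(C, B).T)
```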

SLIDE 38

Objective: tensor decomposition

  • Input: observed tensor T
  • Output: estimated loading factors ā_r^(n), for all n ∈ {1, ..., N}

Constraint:
◮ Loading factors ā_r^(n) entrywise nonnegative

Difficulties:
◮ Large-dimension tensors
◮ Rank R unknown → needs to be estimated (overestimation problems)

SLIDE 39

Proximal algorithm for CP decomposition

  T = Σ_{r=1}^{R} ā_r^(1) ∘ ... ∘ ā_r^(N) = [[ Ā^(1), ..., Ā^(N) ]]

Tensor structure: naturally leads to considering N blocks corresponding to the loading matrices A^(1), ..., A^(N).

Proposed optimization problem:
  minimize_{A^(n) ∈ R^{I_n × R}, n ∈ {1,...,N}}  F(A^(1), ..., A^(N)) + R_1(A^(1)) + ... + R_N(A^(N))

SLIDE 40

Proximal algorithm for tensor decomposition

Inputs: initial loading matrices A^(n)[0], stepsize γ, iteration k = 0.
Repeat until a stopping criterion is reached:
  1. choose randomly a block n ∈ {1, 2, 3} to be updated;
  2. compute the partial gradient ∇_n[k] and the preconditioner P^(n)[k];
  3. gradient step then proximal step:
     A^(n)[k+1] = prox_{γ⁻¹P^(n)[k], R_n}( A^(n)[k] − γ ∇_n[k] ⊘ P^(n)[k] ),
     other blocks unchanged at iteration k;
  4. k ← k + 1.
Outputs: estimated loading matrices A^(n).

Figure: BC-VMFB algorithm for CPD.

SLIDE 41

Fidelity term

◮ F(A^(1), ..., A^(N)): quadratic data fidelity term
  F(A^(1), ..., A^(N)) = ½ ‖T − [[A^(1), ..., A^(N)]]‖²_F = ½ ‖T^(n)_{I_n, I_−n} − A^(n) (Z^(−n))ᵀ‖²_F

◮ Gradient matrices of F with respect to A^(n), ∀n = 1, ..., N:
  ∇_n F(A^(1), ..., A^(N)) = −(T^(n)_{I_n, I_−n} − A^(n) (Z^(−n))ᵀ) Z^(−n)
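The gradient formula can be checked against finite differences for a small 3-way tensor (toy sizes, mode n = 1):

```python
import numpy as np

# Finite-difference check of grad_1 F = -(T(1) - A Z^T) Z, Z = C ⊙ B.
rng = np.random.default_rng(0)
I, J, K, R = 3, 4, 5, 2
A, B, C = (rng.random((d, R)) for d in (I, J, K))
T = np.einsum('ir,jr,kr->ijk',
              rng.random((I, R)), rng.random((J, R)), rng.random((K, R)))

def F(A_):
    return 0.5 * np.sum((T - np.einsum('ir,jr,kr->ijk', A_, B, C)) ** 2)

Z = np.einsum('kr,jr->kjr', C, B).reshape(-1, R)      # Khatri-Rao C ⊙ B
T1 = T.reshape(I, -1, order='F')                      # mode-1 unfolding
G = -(T1 - A @ Z.T) @ Z                               # closed-form gradient

eps = 1e-6
G_num = np.zeros_like(A)
for i in range(I):
    for r in range(R):
        E = np.zeros_like(A)
        E[i, r] = eps
        G_num[i, r] = (F(A + E) - F(A - E)) / (2 * eps)  # central difference

assert np.allclose(G, G_num, atol=1e-5)
```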

SLIDE 42

Regularization terms

◮ R_n(A^(n)): block-dependent penalty terms enforcing sparsity and nonnegativity

  R_n(A^(n)) = Σ_{i_n=1}^{I_n} Σ_{r=1}^{R} ρ_n(a^(n)_{i_n r}),   ∀n ∈ {1, ..., N}

where the loading matrices A^(n) = (a^(n)_{i_n r})_{(i_n, r) ∈ {1,...,I_n}×{1,...,R}} and

  ρ_n(ω) = α^(n) |ω|^{π^(n)}  if η^(n)_min ≤ ω ≤ η^(n)_max,   +∞ otherwise,

with α^(n) ∈ ]0, +∞[, π^(n) ∈ N*, η^(n)_min ∈ [−∞, +∞[ and η^(n)_max ∈ [η^(n)_min, +∞].

⇒ regularization parameters are block dependent but constant within a block

SLIDE 43

Preconditioning

◮ Preconditioner matrix P for the n-th block, ∀n ∈ {1, . . . , N} P(n)(A(1), . . . , A(N)) = A(n)(Z(−n)⊤Z(−n)) ⊘ A(n) ∀n ∈ {1, . . . , N}, A(n) must be non zero ⊘: Hadamard entry-wise division (Preconditioning: extension of the one used in NMF [Lee and Seung,

2001])

SLIDE 44

Proximity operator

Proximity operator of R_n associated with P^(n):

  prox_{γ[k]⁻¹ P^(n)[k], R_n}(y) = ( prox_{γ[k]⁻¹ p_i^(n)[k], ρ_n}(y_i) )_{i ∈ {1,...,R I_n}}

for all y = (y_i)_{i ∈ {1,...,R I_n}} ∈ R^{R I_n}, where, ∀i ∈ {1, ..., R I_n} and ∀υ ∈ R,

  prox_{γ[k]⁻¹ p_i^(n)[k], ρ_n}(υ) = min( η^(n)_max, max( η^(n)_min, prox_{γ[k] α^(n) (p_i^(n)[k])⁻¹ |·|^{π^(n)}}(υ) ) )

(separable structure, diagonal preconditioning matrices, componentwise calculation).

Example: prox_{ρ_n}(υ) for υ ∈ [−2, 22], [η^(n)_min, η^(n)_max] = [0, 1.5], α^(n) = 2 and π^(n) ∈ {1, 2, 3}. [Figure: the three prox curves]
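For π^(n) = 1 the inner prox is soft-thresholding, so the componentwise formula above becomes "soft-threshold then clip". A sketch with the slide's example parameters, assuming a unit preconditioner entry p_i = 1 (the function name `prox_rho` is ours), checked against a brute-force grid minimization:

```python
import numpy as np

# Componentwise prox for pi = 1 (weighted l1 + box constraint):
# soft-threshold, then clip to [eta_min, eta_max], as in the formula.
def prox_rho(v, gamma=1.0, alpha=2.0, eta_min=0.0, eta_max=1.5):
    t = gamma * alpha                                     # l1 threshold
    soft = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)    # prox of alpha*|.|
    return np.clip(soft, eta_min, eta_max)

# brute-force check: prox_rho(v) should minimize 0.5*(u - v)^2 + rho(u)
u = np.linspace(0.0, 1.5, 100001)                         # feasible set
for v in (0.8, 3.0, -1.0):
    obj = 0.5 * (u - v) ** 2 + 2.0 * np.abs(u)
    assert abs(u[np.argmin(obj)] - prox_rho(v)) < 1e-3
```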

SLIDE 46

Experiments on simulated data

◮ Simulated tensor T : (uni or bimodal type) emission and excitation spectra, R = 5 ◮ Simulated observed tensor: T = T + B, B: white Gaussian noise ◮ 2 considered cases:

  • 1. 3D tensor: T ∈ R100×100×100

+

+ Noiseless case: no noise added, R = 6 (overestimation)

  • 2. 4D tensor: T ∈ R100×100×100×100

+

+ Noisy case: SNR = 18.46 dB, R = 7 (overestimation)

◮ Error measure:

  • 1. Signal to Noise Ratio defined as SNR = 20 log10

T F T −T F

  • 2. Estimation error: E1 = 10 log10

N

n=1

A(n)(1:R)−¯ A(n)1 N

n=1 ¯

A(n)1

  • 3. Over-factoring error E2 = 10 log10
  • R

r=R+1

a(1)

r

  • . . . ◦

a(N)

r

1

  • 34 / 42
SLIDE 47

Numerical results - 3D tensor

Noisy case:
  Elapsed time (s)               BC-VMFB        N-way          fast HALS
  for 50 iterations              0.2            11             0.5
  to reach stopping conditions   75             8              8
  (E1, E2) in dB                 (-11.2, -409)  (-12.5, 30.6)  (-12.5, 30.6)

Noiseless case:
  Elapsed time (s)               BC-VMFB        N-way          fast HALS
  to reach stopping conditions   74             80             3.7
  (E1, E2) in dB                 (-15, -409)    (-8.7, 31.7)   (-6.1, 31.7)

Computation time comparison: BC-VMFB (with penalty), N-way [Bro, 1997] and fast HALS [Phan et al., 2013], using the same initial value.

BC-VMFB: fastest computation time per iteration; smallest estimation error E1 (noisy case) and smallest over-factoring error E2 (both cases).

SLIDE 48

Visual results: 3D tensor, noiseless case

[Figure: reference maps and the maps (λ_ex vs. λ_em) estimated by BC-VMFB, N-way and fHALS]

Penalized BC-VMFB, α = 0.05

SLIDE 49

Visual results: 4D tensor, noisy case

λex Ref.

300 400

λex

300 400

λex

300 400

λex

300 400

λex

300 400

λex

300 400

λem λex

300 400 500 300 400

BC−VMFB λem

300 400 500

N−way λem

300 400 500

fHALS λem

300 400 500 10 20 30 40 50 60 70 37 / 42

SLIDE 50

Computer simulation: real experimental data - water monitoring to detect pollutants

◮ Data were acquired automatically every 3 minutes during a 10-day monitoring campaign performed on water extracted from an urban river ⇒ tensor of size 36 × 111 × 2594.
◮ The excitation wavelengths range from 225 nm to 400 nm with a 5 nm bandwidth, whereas the emission wavelengths range from 280 nm to 500 nm with a 2 nm bandwidth.
◮ The FEEMs have been pre-processed using Zepp's method (negative values were set to 0).
◮ During this experiment, a contamination with diesel oil appeared 7 days after the beginning of the monitoring.

SLIDE 51

Results: assuming that R = 4

[Figure: the four estimated FEEMs (λ_ex vs. λ_em) for the penalized BC-VMFB algorithm and for Bro's N-way algorithm]

SLIDE 52

Results: concentrations

[Figure: normalized concentrations of the four components vs. experiment index, for the penalized BC-VMFB algorithm and Bro's N-way algorithm]

Case R = 4

SLIDE 53

Concentrations estimated by BC-VMFB

[Figure: normalized concentrations of the four components vs. experiment index, with an arrow marking Day 7, when the contamination appears]

Case R = 4

SLIDE 55

Conclusions and perspectives

◮ Inverse problems studied from model to resolution through parameterization.
◮ Performance studies on simulated data but also on real data.
◮ Elaboration of efficient methods based on wavelets, optimization and proximal algorithms.
◮ More efficient methods should be developed for TF inpainting.
◮ Real-data preprocessing should be directly incorporated into the optimization problems.

Thank you!