

SLIDE 1

Global solution to non-convex optimization problems involving an approximate ℓ0 penalization

Arthur Marmin In collaboration with: Jean-Christophe Pesquet and Marc Castella

Center for Visual Computing, CentraleSupélec, INRIA, Université Paris-Saclay

9th October 2020

Arthur Marmin GdR MIA 09/10/2020 1 / 22

SLIDE 2-4

Motivation

Need for accurate data acquisition of sparse data.

Example in chromatography (joint work with IFPEN):
- Mixture of chemical components
- Find the components (peak locations)
- Find their concentrations (peak amplitudes)

Sensor degradation:
- Peak enlargement
- Nonlinear distortion (e.g. saturation)
- Noise
- Only a sub-sampled signal is available (e.g. when performing a long acquisition in a limited time)

SLIDE 5

Application to Chromatography

[Figure: chromatogram over time index 0-50, showing the original signal, the altered signal, and the noisy output]

SLIDE 6-10

Main Issues

Common approach: minimize a criterion
Criterion = fit to observations + sparsity-promoting regularizer

Issues:
- How to model the nonlinearity? → Linearizing the model yields only a limited solution.
- How to promote sparsity efficiently (i.e. how to approximate ℓ0)? → A convex approximation (such as ℓ1) biases the solution; nonconvex approximations mean settling for suboptimal solutions (local minimizers).

SLIDE 11

Summary

1. Model and criterion
2. Rational formulation of the problem
3. Solving the rational optimization problem
4. Complexity of the relaxation

SLIDE 12-16

Modelling Decimation and Nonlinear Degradation

Goal: retrieve x from y

Observation model: y = D(Φ(h ∗ x) + w), where
- y = observed signal of size U
- x = initial sparse discrete signal of size T
- h = impulse response of the convolution filter, of length L
- Φ = nonlinear function (e.g. saturation)
- w = white noise
- D = decimation operator
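The whole degradation chain above can be simulated in a few lines. The sketch below is illustrative, not from the talk (the spike positions, filter taps, and noise level are made up); it only assumes the saturation Φ(t) = t/(0.3 + |t|) used later in the talk and the convention that Dα deletes every α-th sample.

```python
import random

def fir(h, x):
    # causal FIR filtering: (h * x)_t = sum_l h_l x_{t-l}, output length len(x)
    return [sum(h[l] * x[t - l] for l in range(len(h)) if 0 <= t - l < len(x))
            for t in range(len(x))]

def phi(t, c=0.3):
    # saturation nonlinearity Phi(t) = t / (c + |t|)
    return t / (c + abs(t))

def decimate(v, alpha):
    # D_alpha: delete the entries whose 1-based index is a multiple of alpha
    return [vt for i, vt in enumerate(v, start=1) if i % alpha]

random.seed(0)
T, alpha = 20, 2
x = [0.0] * T
x[4], x[12] = 1.0, 0.6                       # sparse spike train (the unknown)
h = [0.5, 0.3, 0.2]                          # hypothetical filter, L = 3
clean = [phi(v) for v in fir(h, x)]          # Phi(h * x)
y = decimate([c + random.gauss(0.0, 0.01) for c in clean], alpha)
assert len(y) == T - T // alpha              # D2 keeps half of the samples
```

Note how the saturation caps the large spike well below 1, which is exactly why a linearized model struggles on this data.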

SLIDE 17-19

Examples of Decimation

Decimation Dα → delete the elements indexed by multiples of α:
Dα((v_t)_{1≤t≤T}) = (v_{∆(u,α)})_{1≤u≤U}
where ∆(u, α) is the decimation index; D∞ is the identity operator.

Example: D2(v1, v2, v3, v4) = (v1, v3) = (v_{∆(1,2)}, v_{∆(2,2)})

SLIDE 20-22

Criterion for Signal Reconstruction

Criterion to minimize:
J* = min_{x∈R^T} J(x),   J(x) = f_y(x) + R_λ(x)
with f_y the data-fidelity term and R_λ the penalization:
f_y(x) = ‖y − D(Φ(h ∗ x))‖²₂ = ‖y − D(Φ(Hx))‖²₂

Φ is rational and applied element-wise, e.g. the saturation function Φ: t ↦ t / (0.3 + |t|).

[Figure: plot of the saturation function Φ on [−1, 1]]

Expanding coordinate-wise:
f_y(x) = Σ_{u=1}^{U} ( y_u − Φ( Σ_{l=1}^{L} h_l x_{∆(u,α)−l+1} ) )² = Σ_{u=1}^{U} g_u(x_{∆(u,α)−L+1}, …, x_{∆(u,α)})
→ a sum of rational functions g_u.
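A minimal numerical sketch of the data-fidelity term, under assumptions: the helper names are invented, the FIR convolution is taken causal of length T, and decimation is applied to the whole filtered signal (a simplification of the ∆(u, α) indexing).

```python
def phi(t, c=0.3):
    # assumed saturation Phi(t) = t / (c + |t|)
    return t / (c + abs(t))

def fir(h, x):
    # causal FIR filtering, output of length len(x)
    return [sum(h[l] * x[t - l] for l in range(len(h)) if 0 <= t - l < len(x))
            for t in range(len(x))]

def decimate(v, alpha):
    # D_alpha: drop the entries whose 1-based index is a multiple of alpha
    return [vt for i, vt in enumerate(v, start=1) if i % alpha]

def data_fidelity(y, x, h, alpha):
    # f_y(x) = || y - D(Phi(h * x)) ||_2^2
    model = decimate([phi(v) for v in fir(h, x)], alpha)
    return sum((yu - mu) ** 2 for yu, mu in zip(y, model))

x = [0.0, 1.0, 0.0, 0.0, 0.5, 0.0]           # ground-truth sparse signal
h = [0.6, 0.4]                                # hypothetical filter
y = decimate([phi(v) for v in fir(h, x)], 2)  # noiseless observations
assert data_fidelity(y, x, h, 2) == 0.0       # perfect fit at the true signal
```

Because Φ is rational, each residual term is a rational function of a small window of entries of x, which is what the g_u notation captures.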

SLIDE 23-30

Choice of Sparsity-Promoting Regularizer

Ideal regularizer: R_λ(x) = λ ℓ0(x). It leads to an intricate optimization problem → use surrogates.

Separable approximation: R_λ(x) = Σ_{t=1}^{T} Ψ_λ(x_t), with Ψ_λ unbiased, continuous, and sparsity-promoting.

Examples:
- Capped ℓp: Ψ_λ(x) = |x|^p 1{|x|≤λ} + λ^p 1{|x|>λ}
- MCP: Ψ_λ(x) = (λ|x| − x²/(2γ)) 1{|x|≤γλ} + (γλ²/2) 1{|x|>γλ}
- SCAD: Ψ_λ(x) = λ|x| 1{|x|≤λ} − ((x² − 2γλ|x| + λ²)/(2(γ−1))) 1{λ<|x|≤γλ} + ((γ+1)λ²/2) 1{|x|>γλ}
- CEL0: Ψ_λ(x) = λ − (γ²/2)(|x| − √(2λ)/γ)² 1{|x| ≤ √(2λ)/γ}

Many such Ψ_λ are piecewise rational functions.
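The four surrogates can be coded directly from the formulas above; the sketch below checks the two properties that make them approximate ℓ0 (zero at the origin, constant for large |x|). The parameter values (λ, γ, p) are illustrative choices, not from the talk.

```python
import math

def capped_lp(x, lam, p=1.0):
    # capped l_p: |x|^p below the cap lam, constant lam^p above
    return min(abs(x) ** p, lam ** p)

def mcp(x, lam, gamma):
    # minimax concave penalty (two pieces)
    a = abs(x)
    return lam * a - a * a / (2 * gamma) if a <= gamma * lam else gamma * lam * lam / 2

def scad(x, lam, gamma):
    # smoothly clipped absolute deviation (three pieces)
    a = abs(x)
    if a <= lam:
        return lam * a
    if a <= gamma * lam:
        return -(a * a - 2 * gamma * lam * a + lam * lam) / (2 * (gamma - 1))
    return (gamma + 1) * lam * lam / 2

def cel0(x, lam, gamma):
    # continuous exact l0 relaxation
    a = abs(x)
    thr = math.sqrt(2 * lam) / gamma
    return lam - (gamma ** 2 / 2) * (a - thr) ** 2 if a <= thr else lam

# each surrogate vanishes at 0 and flattens to a constant for large |x|,
# mimicking the l0 penalty
for pen in (lambda t: capped_lp(t, 1.0),
            lambda t: mcp(t, 1.0, 3.0),
            lambda t: scad(t, 1.0, 3.7),
            lambda t: cel0(t, 1.0, 2.0)):
    assert abs(pen(0.0)) < 1e-12
    assert abs(pen(10.0) - pen(20.0)) < 1e-12
```

Each of these is piecewise rational in x, which is the key structural fact exploited in the rational formulation that follows.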

SLIDE 31

Choice of Sparsity-Promoting Regularizer

Resulting optimization problem, writing Ψ_λ piecewise with rational pieces ζ_i on the intervals [σ_{i−1}, σ_i):
J(x) = Σ_{u=1}^{U} g_u(x_{∆(u,α)−L+1}, …, x_{∆(u,α)}) + Σ_{t=1}^{T} Σ_{i=1}^{I} ζ_i(x_t) 1{σ_{i−1} ≤ x_t < σ_i}

SLIDE 32

Examples of Continuous Rational Approximations of ℓ0

[Figure: continuous rational approximations of ℓ0 plotted on [−4, 4], with values ranging from 0 to about 1.2]

SLIDE 33-35

Rational Formulation of the Problem

Replace the indicator functions with binary variables: z^(i) = 1{σ_i ≤ x}, so that 1{σ_{i−1} ≤ x < σ_i} = z^(i−1)(1 − z^(i)).

Add a polynomial constraint to ensure each z^(i) takes the correct value:
(z^(i) − 0.5)(x − σ_i) ≥ 0

Add a polynomial constraint to ensure each z^(i) is binary:
(z^(i))² − z^(i) = 0
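The three identities above can be checked exhaustively on a small grid. The breakpoints σ_i below are hypothetical, and the convention z^(0) = 1 (i.e. σ_0 = −∞) is assumed so the first interval is covered.

```python
def z_of(x, sigmas):
    # z^(i) = 1{sigma_i <= x}, with the convention z^(0) = 1 (sigma_0 = -infinity)
    return {0: 1, **{i: int(x >= s) for i, s in enumerate(sigmas, start=1)}}

sigmas = [-1.0, 0.0, 1.0]                    # hypothetical breakpoints sigma_1..sigma_I
bounds = [float("-inf")] + sigmas
for x in (-2.0, -0.5, 0.5, 2.0):
    z = z_of(x, sigmas)
    for i in range(1, len(sigmas) + 1):
        # indicator of [sigma_{i-1}, sigma_i) recovered as z^(i-1) (1 - z^(i))
        assert z[i - 1] * (1 - z[i]) == int(bounds[i - 1] <= x < bounds[i])
    for i in range(len(sigmas) + 1):
        assert z[i] ** 2 - z[i] == 0         # binarity constraint
    for i in range(1, len(sigmas) + 1):
        assert (z[i] - 0.5) * (x - sigmas[i - 1]) >= 0   # sign constraint
```

The sign constraint works because (z − 0.5) is +0.5 or −0.5 depending on z, so the product is nonnegative exactly when z agrees with the side of σ_i on which x lies.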

SLIDE 36

Rational Formulation of the Problem

Final rational optimization problem:

minimize over (x, z):
Σ_{u=1}^{U} g_u(x_{∆(u,α)−L+1}, …, x_{∆(u,α)}) + Σ_{t=1}^{T} Σ_{i=1}^{I} ζ_i(x_t) z_t^(i−1) (1 − z_t^(i))

subject to, for all (i, t) ∈ ⟦0, I⟧ × ⟦1, T⟧:
(z_t^(i))² − z_t^(i) = 0
(z_t^(i) − 0.5)(x_t − σ_{i+1}) ≥ 0

SLIDE 37-39

From Polynomial Optimization to a Problem of Measures

Polynomial case:
min_{x∈K} p(x) = inf_{µ∈M₊(K)} ∫_K p(x) µ(dx)   s.t.   ∫_K µ(dx) = 1

Rational case:
min_{x∈K} p(x)/q(x) = inf_{µ∈M₊(K)} ∫_K p(x) µ(dx)   s.t.   ∫_K q(x) µ(dx) = 1

K ⊂ R^T is the constraint set → a basic closed semi-algebraic set, assumed compact (bound conditions are added on the signal). M₊(K) is the set of positive measures supported on K.
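The rational-case trick is easy to verify on an atomic measure: placing mass 1/q(x0) on a single atom x0 satisfies the normalization ∫ q dµ = 1 and makes the objective equal p(x0)/q(x0). The polynomials below are made up for the check.

```python
def p(x):
    # hypothetical numerator polynomial
    return x ** 2 + 1.0

def q(x):
    # hypothetical denominator, positive on the feasible set K
    return x ** 2 + 2.0

def integrate(f, measure):
    # integral of f against a weighted atomic measure {atom: weight}
    return sum(w * f(x) for x, w in measure.items())

x0 = 0.7
mu = {x0: 1.0 / q(x0)}                        # mass 1/q(x0) on the single atom x0
assert abs(integrate(q, mu) - 1.0) < 1e-12    # normalization: ∫ q dmu = 1
assert abs(integrate(p, mu) - p(x0) / q(x0)) < 1e-12   # objective equals p/q at x0
```

Minimizing over all such normalized measures therefore recovers min p/q over K, without ever dividing by q.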

SLIDE 40

From a Problem of Measures to a Problem of Moments

From measure to moments:
p(x) = Σ_{α∈N^T} p_α x^α = Σ_{α∈N^T} p_{α₁…α_T} x₁^{α₁} … x_T^{α_T}
⟹ ∫_K p(x) µ(dx) = Σ_{α∈N^T} p_α ∫_K x^α µ(dx) = Σ_{α∈N^T} p_α v_α

The problem is linear in µ (or in v). Optimize over v, the moments of x: v_α = ∫_K x^α µ(dx).

- Infinite number of moments → truncate up to a degree 2k: |α| ≤ 2k
- Ensure v contains moments representing a positive measure → add linear matrix inequalities (LMI)
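The key point, that the integral of a polynomial is linear in the moments, can be checked on a discrete measure (the polynomial and atoms below are arbitrary examples):

```python
coeffs = {0: 1.0, 1: -2.0, 3: 0.5}            # hypothetical p(x) = 1 - 2x + 0.5 x^3
atoms = {0.2: 0.3, 1.0: 0.7}                  # discrete probability measure

def p(x):
    return sum(c * x ** a for a, c in coeffs.items())

direct = sum(w * p(x) for x, w in atoms.items())                     # ∫ p dmu
vs = {a: sum(w * x ** a for x, w in atoms.items()) for a in coeffs}  # moments v_a
linear = sum(c * vs[a] for a, c in coeffs.items())                   # Σ_a p_a v_a
assert abs(direct - linear) < 1e-12
```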

SLIDE 41

Convexification using Lasserre's Hierarchy

SDP problem of order k in Lasserre's hierarchy:
P*_k = min_{v∈R^m} p^⊤ v
subject to:
C − Σ_{i=1}^{m} v_i A_i ∈ S^n₊
f − G^⊤ v = 0   (moment constraints)
with C ∈ S^n, A_i ∈ S^n for all i ∈ ⟦1, m⟧, p ∈ R^m, f ∈ R^{n_l}, G ∈ R^{m×n_l}.

Convergence of the hierarchy: P*_k ≤ P*_{k+1} and lim_{k→∞} P*_k = J*.
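The LMI in this SDP encodes a concrete fact: the moments of any genuine positive measure form a positive semidefinite moment matrix M[i][j] = v_{i+j}. A small univariate check, with a made-up three-atom probability measure (three distinct atoms make the order-2 moment matrix positive definite, so the leading-principal-minor test applies):

```python
def moments(atoms, up_to):
    # v_a = ∫ x^a dmu for a weighted atomic measure {atom: weight}
    return [sum(w * x ** a for x, w in atoms.items()) for a in range(up_to + 1)]

def det3(m):
    # determinant of a 3x3 matrix (cofactor expansion along the first row)
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

atoms = {-1.0: 0.2, 0.0: 0.3, 1.0: 0.5}      # hypothetical probability measure
v = moments(atoms, 4)                         # moments up to degree 2k = 4
M = [[v[i + j] for j in range(3)] for i in range(3)]   # moment matrix of order k = 2
# positive definiteness via leading principal minors
assert v[0] > 0
assert v[0] * v[2] - v[1] ** 2 > 0            # this minor is the variance
assert det3(M) > 0
```

The hierarchy runs this logic in reverse: it optimizes over truncated vectors v constrained only by these LMIs, so each order k gives a lower bound P*_k on the true minimum.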

SLIDE 42-43

From a Sum of Rational Functions to a Problem of Moments

A single rational function:
min_{x∈K} p(x)/q(x) = inf_{µ∈M₊(K)} ∫_K p(x) µ(dx)   s.t.   ∫_K q(x) µ(dx) = 1

A sum of rational functions:
min_{x∈K} p₁(x)/q₁(x) + p₂(x)/q₂(x) = inf_{(µ₁,µ₂)∈(M₊(K))²} ∫_K p₁(x) µ₁(dx) + ∫_K p₂(x) µ₂(dx)
subject to:
∫_K q₁(x) µ₁(dx) = 1,   ∫_K q₂(x) µ₂(dx) = 1
(∀α ∈ N^n)   ∫_K x^α q₁(x) µ₁(dx) = ∫_K x^α q₂(x) µ₂(dx)

SLIDE 44

Using Sparsity of the Data

The reconstruction criterion is exactly such a sum of rational functions (the final rational optimization problem of Slide 36, with its binarity and sign constraints).

- Introduce U measures (µ_u)_{u∈⟦1,U⟧} and T measures (ν_t)_{t∈⟦1,T⟧}
- Add extra moment constraints to link the different moments
- Sparsity gives a block-diagonal structure to C and the A_i in the SDP

SLIDE 45

Global Optimality Guarantee

Comparison with classical local algorithms:
- Linear model (Φ = Id), to be fair to the other methods
- Penalization = SCAD, no decimation, 100 realizations
- The symmetry (parity) of the penalization is used to decrease the computational load

[Figure: criterion value (range 3-6) over 100 test realizations for FB, IRL1, CD, and our method]

SLIDE 46

Reconstruction Quality of Sparse Signal

Comparison with iLASSO (linearized LASSO followed by IHT):
- Nonlinear model
- Penalization = SCAD, decimation = D2

[Figure: observed signal y; original signal x0; signal reconstructed with our method; signal reconstructed with iLASSO]

SLIDE 47

Impact of Decimation on Computational Time

Time (s):
  T   L |      D∞     D4    D2
 50   3 |      18     12     8
100   3 |      45     30    18
200   3 |      88     65    42
 50   4 |    1285    522   219
100   4 |    9871   4468  1356
 50   5 |    5559   2269   620
100   5 | Overload  22354  5105

Chemistry application → filter length L ≃ 5

SLIDE 48-49

Examples of Problem Sizes after Convexification

Asymptotic dimensions:
- With decimation: m = O(U L^{2k} + T I^{2k}),  n = O(U L^k + T I^k)
- Without decimation: m = O(T(L^{2k} + I^{2k})),  n = O(T(L^k + I^k))
Usually L > I → m and n are decreased by the factor T/U.

  T   L  k |  m (D∞)  m (D2) |  n (D∞)  n (D2)
100   3  3 |    9600    5400 |   10000    7500
100   4  3 |   22200   11700 |   14500    9750
100   3  4 |   18100    9850 |   19000   14250
100   4  4 |   51100   26350 |   30500   20000
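The effect of decimation on the SDP size can be sketched from the leading-order terms alone (constants dropped; the numeric values of T, L, I, k below are illustrative and do not reproduce the table):

```python
def sdp_size(T, U, L, I, k):
    # leading-order SDP dimensions (constants dropped):
    # m ~ U L^{2k} + T I^{2k},  n ~ U L^k + T I^k
    m = U * L ** (2 * k) + T * I ** (2 * k)
    n = U * L ** k + T * I ** k
    return m, n

T, L, I, k = 100, 4, 2, 3                    # hypothetical sizes
m_full, n_full = sdp_size(T, T, L, I, k)     # no decimation: U = T
m_dec, n_dec = sdp_size(T, T // 2, L, I, k)  # D2 decimation: U = T/2
assert m_dec < m_full and n_dec < n_full     # decimation shrinks the SDP
```

Since L > I usually, the U L^{2k} term dominates m, so halving U with D2 roughly halves the SDP, consistent with the timing table above.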

SLIDE 50-52

Conclusion

- We provide a reconstruction method that handles both a nonlinear transformation and exact continuous approximations of ℓ0.
- We extend Lasserre's framework for polynomial optimization to piecewise rational criteria, with guaranteed global optimality.
- A study of the complexity of the problem and simulations show that our method applies to moderate-size signals.

SLIDE 53

Related Articles

- A. Marmin, M. Castella, J.-C. Pesquet, and L. Duval, "Signal reconstruction from sub-sampled and nonlinearly distorted observations", in Proc. Eur. Signal Image Process. Conf. (EUSIPCO), Rome, Italy, Sep. 2018, pp. 1970-1974.
- M. Castella, J.-C. Pesquet, and A. Marmin, "Rational optimization for nonlinear reconstruction with approximate ℓ0 penalization", IEEE Trans. Signal Process., vol. 67, no. 6, pp. 1407-1417, Mar. 2019.
- A. Marmin, M. Castella, and J.-C. Pesquet, "How to globally solve non-convex optimization problems involving an approximate ℓ0 penalization", in Proc. Int. Conf. Acoust. Speech Signal Process. (ICASSP), Brighton, United Kingdom, 2019, pp. 5601-5605.
- A. Marmin, M. Castella, and J.-C. Pesquet, "Sparse signal reconstruction with a sign oracle", in Proc. Signal Processing with Adaptive Sparse Structured Representations (SPARS) Workshop, 2019.