Deep Unfolding of a Proximal Interior Point Method for Image Restoration - PowerPoint PPT Presentation


SLIDE 1

Deep Unfolding of a Proximal Interior Point Method for Image Restoration

M.-C. Corbineau¹

in collaboration with C. Bertocchi², E. Chouzenoux¹, J.-C. Pesquet¹, M. Prato²

¹ Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
² Università di Modena e Reggio Emilia, Modena, Italy

19 November 2019, Workshop on Regularisation for Inverse Problems and Machine Learning, Jussieu, Paris


SLIDES 2-5

Motivation

Inverse problem in imaging:

$$y = D(Hx)$$

where y ∈ R^m is the observed image, D the degradation model, H ∈ R^{m×n} the linear observation model, and x ∈ R^n the original image.

Variational methods:

$$\underset{x \in C}{\text{minimize}}\; f(Hx, y) + \lambda R(x)$$

where f: R^m × R^m → R is the data-fitting term, R: R^n → R the regularization function, and λ > 0 the regularization weight.

✓ Incorporate prior knowledge about the solution and enforce desirable constraints
✗ No closed-form solution → advanced algorithms
✗ Estimation of λ and tuning of the algorithm parameters → time-consuming

Deep-learning methods:

✓ Generic and very efficient architectures
✗ Pre-processing step: solve an optimization problem → estimate the regularization parameter
✗ Black box, no theoretical guarantees

→ Combine the benefits of both approaches: unfold a proximal interior point algorithm.

SLIDE 6

Deep Unfolding

• Examples
  • Sparse coding: FISTA [Gregor and LeCun, 2010], ISTA [Kamilov and Mansour, 2016]
  • Compressive sensing: ISTA [Zhang and Ghanem, 2018], ADMM [Sun et al., 2016]

• Principle (see the code sketch at the end of this slide)

Iterative solver:
  for k = 0, 1, ...
    x_{k+1} = A(x_k, θ_k), with θ_k the hyperparameters
  Estimate: x* = lim_{k→∞} x_k

Unfolded algorithm:
  for k = 0, 1, ..., K − 1
    x_{k+1} = A(x_k, L_k^{(θ)}(x_k)), with L_k^{(θ)} a layer estimating the hyperparameters
  Estimate: x* = x_K

• Operators and functions included in A can be learned

✓ Gradient backpropagation and training are simpler
✗ Link to the original algorithm is weakened

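To make the principle concrete, here is a minimal NumPy sketch (our illustration, not the authors' code) of the same update map A used both as a classical iterative solver and as an unfolded, depth-K network; the least-squares data-fitting term, the gradient-step solver, and all variable names are assumptions made for the example.

```python
# A minimal sketch (hypothetical gradient-descent solver) contrasting an
# iterative solver with its unfolded, depth-K counterpart. Here A is one
# gradient step for min ||Hx - y||^2 / 2 and theta is the stepsize.
import numpy as np

def A(x, theta, H, y):
    """One algorithm iteration: a gradient step on the data-fitting term."""
    return x - theta * H.T @ (H @ x - y)

def iterative_solver(x0, H, y, theta=1e-2, n_iter=5000):
    # Classical use: run A with fixed hyperparameters until convergence.
    x = x0
    for _ in range(n_iter):
        x = A(x, theta, H, y)
    return x

def unfolded_network(x0, H, y, thetas):
    # Unfolded use: K = len(thetas) layers, one (learnable) stepsize per layer.
    # Training would adjust `thetas` by backpropagating through this loop.
    x = x0
    for theta_k in thetas:
        x = A(x, theta_k, H, y)
    return x

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 10)); y = rng.standard_normal(20)
x_inf = iterative_solver(np.zeros(10), H, y)
x_K = unfolded_network(np.zeros(10), H, y, thetas=[1e-2] * 10)  # K = 10 layers
```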

SLIDE 7

Notation and Assumptions

Proximity operator. Let Γ₀(R^n) be the set of proper, lower-semicontinuous, convex functions from R^n to R ∪ {+∞}. The proximity operator [http://proximity-operator.net/] of g ∈ Γ₀(R^n) at x ∈ R^n is uniquely defined as

$$\mathrm{prox}_g(x) = \underset{z \in \mathbb{R}^n}{\mathrm{argmin}}\; g(z) + \frac{1}{2}\|z - x\|^2.$$

Assumptions.

$$P_0:\; \underset{x \in C}{\text{minimize}}\; f(Hx, y) + \lambda R(x)$$

We assume that f(·, y) and R are twice differentiable, and that f(H·, y) + λR ∈ Γ₀(R^n) is either coercive or C is bounded. The feasible set is defined as

$$C = \{x \in \mathbb{R}^n \mid (\forall i \in \{1, \ldots, p\})\; c_i(x) \geq 0\},$$

where, for every i ∈ {1, ..., p}, −c_i ∈ Γ₀(R^n). The strict interior of the feasible set is nonempty.

→ Existence of a solution to P₀
→ Twice differentiability: training using gradient descent

B is the logarithmic barrier:

$$(\forall x \in \mathbb{R}^n)\quad B(x) = \begin{cases} -\sum_{i=1}^{p} \ln(c_i(x)) & \text{if } x \in \mathrm{int}\, C \\ +\infty & \text{otherwise.} \end{cases}$$

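As a quick numerical sanity check of this definition (our example, not from the talk), consider the one-dimensional barrier B(z) = −ln(z) for the constraint z ≥ 0; solving the prox optimality condition z − x − α/z = 0 gives the closed form z* = (x + √(x² + 4α))/2, which a bounded scalar minimization recovers:

```python
# A minimal numerical check (assumption: 1D barrier B(z) = -ln(z) for z >= 0).
# prox_{aB}(x) minimizes -a*ln(z) + (z - x)^2 / 2 over z > 0.
import numpy as np
from scipy.optimize import minimize_scalar

def prox_log_barrier(x, alpha):
    return 0.5 * (x + np.sqrt(x**2 + 4.0 * alpha))

x, alpha = -0.3, 0.5
numeric = minimize_scalar(
    lambda z: -alpha * np.log(z) + 0.5 * (z - x) ** 2,
    bounds=(1e-12, 10.0), method="bounded",
).x
print(prox_log_barrier(x, alpha), numeric)  # both approx. 0.573
```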

SLIDES 8-9

Logarithmic barrier method

Constrained problem:

$$P_0:\; \underset{x \in C}{\text{minimize}}\; f(Hx, y) + \lambda R(x)$$

⇓

Unconstrained subproblem:

$$P_\mu:\; \underset{x \in \mathbb{R}^n}{\text{minimize}}\; f(Hx, y) + \lambda R(x) + \mu B(x)$$

where µ > 0 is the barrier parameter. P₀ is replaced by a sequence of subproblems (P_{µ_j})_{j∈N}, solved approximately for a sequence µ_j → 0.

Main advantages: feasible iterates, superlinear convergence for NLP
✗ Inversion of an n × n matrix at each step

SLIDES 10-11

Proximal interior point strategy

→ Combine the interior point method with the proximity operator.

Exact version of the proximal IPM in [Kaplan and Tichatschke, 1998]. Let x₀ ∈ int C, γ > 0, (∀k ∈ N) γ ≤ γ_k, and µ_k → 0.

for k = 0, 1, ... do
    x_{k+1} = prox_{γ_k (f(H·,y) + λR + µ_k B)}(x_k)
end for

✗ No closed-form solution for prox_{γ_k (f(H·,y) + λR + µ_k B)}

Proposed forward–backward proximal IPM. Let x₀ ∈ int C, γ > 0, (∀k ∈ N) γ ≤ γ_k, and µ_k → 0.

for k = 0, 1, ... do
    x_{k+1} = prox_{γ_k µ_k B}( x_k − γ_k ( H^⊤ ∇₁f(Hx_k, y) + λ ∇R(x_k) ) )
end for

✓ Only requires prox_{γ_k µ_k B}
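The following self-contained sketch (our illustration, not the authors' experimental setup) runs the forward–backward proximal IPM on a toy nonnegative least-squares problem; the choices f = ½‖Hx − y‖², R = ½‖x‖², the nonnegativity constraint, and the parameter schedules are assumptions made for the example.

```python
# Forward-backward proximal IPM sketch on min ||Hx - y||^2/2 + lam*||x||^2/2
# subject to x >= 0, whose separable log-barrier prox has a closed form.
import numpy as np

def prox_barrier_nonneg(v, alpha):
    # prox of alpha*B with B(x) = -sum_i ln(x_i): componentwise closed form.
    return 0.5 * (v + np.sqrt(v**2 + 4.0 * alpha))

def fb_proximal_ipm(H, y, lam=0.1, n_iter=200):
    x = np.ones(H.shape[1])                          # x0 in the strict interior
    gamma = 1.0 / (np.linalg.norm(H, 2) ** 2 + lam)  # constant stepsize
    for k in range(n_iter):
        mu_k = 1.0 / (k + 1) ** 1.1                  # barrier parameter -> 0
        grad = H.T @ (H @ x - y) + lam * x           # forward (gradient) step
        x = prox_barrier_nonneg(x - gamma * grad, gamma * mu_k)  # backward step
    return x

rng = np.random.default_rng(1)
H = rng.standard_normal((30, 10))
y = H @ np.abs(rng.standard_normal(10)) + 0.01 * rng.standard_normal(30)
x_hat = fb_proximal_ipm(H, y)
assert np.all(x_hat > 0)  # iterates remain strictly feasible
```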

SLIDES 12-14

Proximity operator of the barrier

Let ϕ: (x, α) ↦ prox_{αB}(x).

1. A neural network obtained by unfolding an iterative solver A requires computing A(x, θ).
   → expression for the proximity operator ϕ(x, α)?

2. The network is trained with the loss function ℓ(x_K, x) by gradient descent, where

   θ_k = L_k^{(θ)}(x_k),   x_{k+1} = A(x_k, θ_k).

   → first derivatives of ℓ wrt the learnable parameters of the hidden layers (L_k^{(θ)})_{0≤k≤K−1}?
   → the chain rule requires the derivatives of A wrt x and θ
   → expressions for the Jacobian J_ϕ^{(x)}(x, α) and the gradient ∇_ϕ^{(α)}(x, α)?

These quantities depend on B and on the feasible set.
⇒ We obtain their expressions for three types of constraints.

SLIDE 15

Proximity operator of the barrier

Affine constraints: C = {x ∈ R^n | a^⊤x ≤ b}

Proposition 1. Let ϕ: (x, α) ↦ prox_{αB}(x). Then, for every (x, α) ∈ R^n × R₊^*,

$$\varphi(x, \alpha) = x + \frac{b - a^\top x - \sqrt{(b - a^\top x)^2 + 4\alpha \|a\|^2}}{2\|a\|^2}\, a.$$

In addition, the Jacobian matrix of ϕ wrt x and the gradient of ϕ wrt α are given by

$$J_\varphi^{(x)}(x, \alpha) = \mathrm{I}_n - \frac{1}{2\|a\|^2}\left(1 + \frac{a^\top x - b}{\sqrt{(b - a^\top x)^2 + 4\alpha \|a\|^2}}\right) a a^\top$$

and

$$\nabla_\varphi^{(\alpha)}(x, \alpha) = \frac{-1}{\sqrt{(b - a^\top x)^2 + 4\alpha \|a\|^2}}\, a.$$

Proof: [Chaux et al., 2007] and [Bauschke and Combettes, 2017].

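A short numerical check of Proposition 1 (our example; variable names are illustrative): the closed form should satisfy the prox optimality condition p − x + α a/(b − a^⊤p) = 0 for the barrier B(x) = −ln(b − a^⊤x).

```python
# Numerical check of the affine-constraint barrier prox of Proposition 1.
import numpy as np

def prox_affine_barrier(x, alpha, a, b):
    d = b - a @ x
    root = np.sqrt(d**2 + 4.0 * alpha * (a @ a))
    return x + (d - root) / (2.0 * (a @ a)) * a

rng = np.random.default_rng(0)
a = rng.standard_normal(5); b = 1.0; x = rng.standard_normal(5)
p = prox_affine_barrier(x, 0.3, a, b)
residual = p - x + 0.3 * a / (b - a @ p)   # optimality condition of the prox
print(np.linalg.norm(residual))            # approx. 0 up to round-off
assert a @ p < b                           # p lies in the strict interior
```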

SLIDE 16

Proximity operator of the barrier

Hyperslab constraints: C = {x ∈ R^n | b_m ≤ a^⊤x ≤ b_M}

Proposition 2. Let ϕ: (x, α) ↦ prox_{αB}(x). Then, for every (x, α) ∈ R^n × R₊^*,

$$\varphi(x, \alpha) = x + \frac{\kappa(x, \alpha) - a^\top x}{\|a\|^2}\, a,$$

where κ(x, α) is the unique solution in ]b_m, b_M[ of the following cubic equation:

$$0 = z^3 - (b_m + b_M + a^\top x)\, z^2 + \left(b_m b_M + a^\top x\,(b_m + b_M) - 2\alpha \|a\|^2\right) z - b_m b_M\, a^\top x + \alpha\,(b_m + b_M)\,\|a\|^2.$$

In addition, the Jacobian matrix of ϕ wrt x and the gradient of ϕ wrt α are given by

$$J_\varphi^{(x)}(x, \alpha) = \mathrm{I}_n - \frac{1}{\|a\|^2}\left(1 - \frac{(b_M - \kappa(x, \alpha))(b_m - \kappa(x, \alpha))}{\eta(x, \alpha)}\right) a a^\top$$

and

$$\nabla_\varphi^{(\alpha)}(x, \alpha) = \frac{2\kappa(x, \alpha) - b_m - b_M}{\eta(x, \alpha)}\, a,$$

where η(x, α) = (b_M − κ(x, α))(b_m − κ(x, α)) − (b_m + b_M − 2κ(x, α))(κ(x, α) − a^⊤x) − 2α‖a‖².

Proof: [Chaux et al., 2007], [Bauschke and Combettes, 2017] and the implicit function theorem.

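In practice κ(x, α) can be obtained with any cubic solver; a sketch (ours) using numpy.roots, keeping the unique real root inside ]b_m, b_M[:

```python
# Hyperslab barrier prox via the cubic of Proposition 2.
import numpy as np

def prox_hyperslab_barrier(x, alpha, a, bm, bM):
    t, na2 = a @ x, a @ a
    coeffs = [1.0,
              -(bm + bM + t),
              bm * bM + t * (bm + bM) - 2.0 * alpha * na2,
              -bm * bM * t + alpha * (bm + bM) * na2]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-10].real
    kappa = real[(real > bm) & (real < bM)][0]  # unique root in ]bm, bM[
    return x + (kappa - t) / na2 * a

rng = np.random.default_rng(2)
a = rng.standard_normal(4); x = rng.standard_normal(4)
p = prox_hyperslab_barrier(x, alpha=0.1, a=a, bm=-1.0, bM=1.0)
print(-1.0 < a @ p < 1.0)  # True: output stays strictly inside the slab
```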

SLIDE 17

Proximity operator of the barrier

Bound constraints: C = {x ∈ R | 0 ≤ x ≤ 1} (the scalar special case of the hyperslab constraints, with a = 1, b_m = 0, b_M = 1)


SLIDE 18

Proximity operator of the barrier

Bounded ℓ2-norm C = x ∈ Rn | x − c2 ≤ ρ Proposition 3 Let ϕ : (x, α) → proxαB(x). Then, for every (x, α) ∈ Rn × R∗

+,

ϕ(x, α) = c + ρ − κ(x, α)2 ρ − κ(x, α)2 + 2α (x − c),

where κ(x, α) is the unique solution in ]0, √ρ[, of the following cubic equation,

0 = z3 − x − cz2 − (ρ + 2α)z + ρx − c.

In addition, the Jacobian matrix of ϕ wrt x and the gradient of ϕ wrt α are given by

J(x)

ϕ (x, α) =

ρ − ϕ(x, α) − c2 ρ − ϕ(x, α) − c2 + 2α M(x, α)

and

∇(α)

ϕ (x, α) =

−2 ρ − ϕ(x, α) − c2 + 2α M(x, α)(ϕ(x, α) − c),

where

M(x, α) = In − 2(x − ϕ(x, α))(ϕ(x, α) − c)⊤ ρ − 3ϕ(x, α) − c2 + 2α + 2(ϕ(x, α) − c)⊤(x − c) . Proof : [Bauschke and Combettes,2017], Sherma-Morrison lemma and implicit function theorem

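An analogous sketch (ours) for Proposition 3: κ(x, α) is the real root of the stated cubic lying in ]0, √ρ[.

```python
# Ball-constraint barrier prox via the cubic of Proposition 3.
import numpy as np

def prox_ball_barrier(x, alpha, c, rho):
    r = np.linalg.norm(x - c)
    roots = np.roots([1.0, -r, -(rho + 2.0 * alpha), rho * r])
    real = roots[np.abs(roots.imag) < 1e-10].real
    kappa = real[(real > 0.0) & (real < np.sqrt(rho))][0]
    return c + (rho - kappa**2) / (rho - kappa**2 + 2.0 * alpha) * (x - c)

c = np.zeros(2)
p = prox_ball_barrier(np.array([2.0, 1.0]), alpha=0.05, c=c, rho=0.7)
print(np.sum(p**2) < 0.7)  # True: output strictly inside the ball
```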

SLIDE 19

Proximity operator of the barrier

Bounded ℓ₂-norm: C = {x ∈ R² | ‖x‖² ≤ 0.7}


SLIDE 20

Proposed strategy

Forward–backward proximal IPM. Let x₀ ∈ int C, γ > 0, (∀k ∈ N) γ ≤ γ_k, and µ_k → 0.

for k = 0, 1, ... do
    x_{k+1} = prox_{γ_k µ_k B}( x_k − γ_k ( H^⊤ ∇₁f(Hx_k, y) + λ ∇R(x_k) ) )
end for

✓ Efficient algorithm for constrained optimization
✗ How to set the parameters (µ_k, γ_k)_{k∈N}?
✗ How to find the regularization parameter λ so as to optimize the visual quality of the solution?

→ Unfold the proximal IP algorithm over K iterations and untie γ, µ and λ across the network:

A(x_k, µ_k, γ_k, λ_k) = prox_{γ_k µ_k B}( x_k − γ_k ( H^⊤ ∇₁f(Hx_k, y) + λ_k ∇R(x_k) ) )


SLIDES 21-27

iRestNet architecture

[Figure: iRestNet – the input RGB image x₀ = y goes through K unfolded layers L₀, ..., L_{K−1} and a post-processing layer L_pp to produce the output RGB image; each layer L_k receives the hyperparameters produced by the hidden structures L_k^{(γ)}, L_k^{(µ)}, L_k^{(λ)}.]

Input: x₀ = y, the blurred image.

Hidden structures:
• (L_k^{(γ)})_{0≤k≤K−1}: estimate the stepsize, which must be positive → Softplus (a smooth approximation of ReLU), γ_k = L_k^{(γ)} = Softplus(a_k)
• (L_k^{(µ)})_{0≤k≤K−1}: estimate the barrier parameter
  [Figure: the barrier-parameter structure – two AvgPool 4×4 + Softplus stages (256×256×3 → 64×64 → 16×16), a fully connected layer, and a final Softplus producing a scalar.]
• (L_k^{(λ)})_{0≤k≤K−1}: estimate the regularization parameter → image statistics, noise level

A(x_k, µ_k, γ_k, λ_k) = prox_{γ_k µ_k B}( x_k − γ_k ( H^⊤ ∇₁f(Hx_k, y) + λ_k ∇R(x_k) ) )

L_pp: post-processing layer → e.g. removes small artifacts.

Training: gradient descent and backpropagation (∇A obtained with Propositions 1-3). A layer sketch is given below.
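Below is a hedged PyTorch sketch (ours, not the authors' released code) of a single unfolded layer, assuming the quadratic data-fitting term f = ½‖Hx − y‖², the simple choice R = ½‖x‖² with a fixed λ, and an abstract `prox_barrier` standing in for the closed forms of Propositions 1-3.

```python
# One iRestNet-style unfolded layer: learnable scalars feed Softplus to yield
# a positive stepsize gamma_k and barrier parameter mu_k.
import torch
import torch.nn.functional as F

class UnfoldedLayer(torch.nn.Module):
    def __init__(self, H, y, prox_barrier, lam=0.05):
        super().__init__()
        self.a_k = torch.nn.Parameter(torch.zeros(1))  # -> gamma_k
        self.m_k = torch.nn.Parameter(torch.zeros(1))  # -> mu_k
        self.H, self.y, self.prox, self.lam = H, y, prox_barrier, lam

    def forward(self, x):
        gamma_k = F.softplus(self.a_k)   # positive stepsize
        mu_k = F.softplus(self.m_k)      # positive barrier parameter
        grad = self.H.T @ (self.H @ x - self.y) + self.lam * x   # forward step
        return self.prox(x - gamma_k * grad, gamma_k * mu_k)     # backward step

# Usage sketch: chain K layers; `prox_identity` is only a placeholder here.
H = torch.eye(8); y = torch.rand(8)
prox_identity = lambda v, a: v.clamp_min(1e-6)  # stand-in for Props 1-3
x = torch.rand(8)
for layer in [UnfoldedLayer(H, y, prox_identity) for _ in range(3)]:
    x = layer(x)
```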

SLIDES 28-29

Network stability

What about the network performance when the input is perturbed?

Applications with high risk and legal responsibility (medical image processing, defense, etc.) → need guarantees.

Deep learning: lack of theoretical guarantees, e.g. AlexNet [Szegedy et al., 2013].

[Figure: adversarial examples for AlexNet [Szegedy et al., 2013] – original image (correctly classified) + perturbation = new input (categorized as 'Ostrich').]

SLIDES 30-31

Network stability

Formulation

• Neural network: T(·): R^n → R^n
• Input image: x ∈ R^n
• Perturbation: δx ∈ R^n
• Output perturbation: ΔT(x) = T(x + δx) − T(x)
• Questions: ΔT(x)? ‖ΔT(x)‖?

Tools

1. Framework of averaged operators
2. iRestNet is rewritten as a generic feedforward neural network
3. Results from the following recent work:
   P. L. Combettes and J.-C. Pesquet. Deep neural network structures solving variational inequalities. https://arxiv.org/abs/1808.07526

SLIDE 32

Nonexpansive operators

Definition (nonexpansiveness). Let T: R^n → R^n. Then T is nonexpansive if it is 1-Lipschitz continuous, i.e.,

$$(\forall x \in \mathbb{R}^n)(\forall y \in \mathbb{R}^n)\quad \|T(x) - T(y)\| \leq \|x - y\|.$$

⇒ Bound on the norm of the output variation when the input is perturbed: ‖ΔT(x)‖ ≤ ‖δx‖.


SLIDES 33-34

Averaged operators

Definition (α-averaged operator). Let T: R^n → R^n be nonexpansive, and let α ∈ [0, 1]. Then T is α-averaged if there exists a nonexpansive operator R: R^n → R^n such that T = (1 − α) I_n + α R.

If T is averaged, then it is nonexpansive. Let α ∈ ]0, 1]. T is α-averaged if and only if, for every x ∈ R^n and y ∈ R^n,

$$\|T(x) - T(y)\|^2 \leq \|x - y\|^2 - \frac{1 - \alpha}{\alpha}\, \|(\mathrm{I}_n - T)(x) - (\mathrm{I}_n - T)(y)\|^2.$$

⇒ Bound on the output variation when the input is perturbed:

$$\|\Delta T(x)\|^2 \leq \|\delta x\|^2 - \frac{1 - \alpha}{\alpha}\, \|\Delta T(x) - \delta x\|^2.$$

• In particular, as δx → 0, ΔT(x) → 0.
• As α → 1: nonexpansive.
• The smaller α is, the more stable T is.
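A quick numerical illustration (our example): T = (1 − α)I + αR with R a random orthogonal (hence nonexpansive) matrix is α-averaged by construction, and the perturbation bound above holds.

```python
# Check the alpha-averaged perturbation bound on a linear example.
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.25
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthogonal => nonexpansive
T = lambda v: (1 - alpha) * v + alpha * (Q @ v)

x = rng.standard_normal(5)
dx = 1e-1 * rng.standard_normal(5)
dT = T(x + dx) - T(x)
lhs = np.sum(dT**2)
rhs = np.sum(dx**2) - (1 - alpha) / alpha * np.sum((dT - dx) ** 2)
print(lhs <= rhs + 1e-12)  # True
```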

SLIDE 35

Relation to generic deep neural networks

Feedforward architecture T = RK−1 ◦ (WK−1 · +bK−1) ◦ · · · ◦ R0 ◦ (W0 · +b0) (Rk)0≤k≤K−1 nonlinear activation functions (Wk)0≤k≤K−1 linear operators (weight) (bk)0≤k≤K−1 vectors (bias parameters)


SLIDE 36

Relation to generic deep neural networks

Standard activation functions can be expressed as proximity operators Rectified linear unit (ReLU) ̺: R → R: ξ →

  • ξ,

if ξ > 0; 0, if ξ ≤ 0. Then, ̺ = proj[0,+∞[. Parametric rectified linear unit (LeakyReLU) ̺: R → R: ξ →

  • ξ,

if ξ > 0; αξ, if ξ ≤ 0 , α ∈]0, 1]. Then ̺ = proxφ where φ: R → R: ξ →

  • 0,

if ξ > 0; (1/α − 1)ξ2/2, if ξ ≤ 0.

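A small check (our own) that LeakyReLU is the proximity operator of the penalty φ above, using a dense grid minimization:

```python
# Grid-based verification that LeakyReLU = prox_phi.
import numpy as np

alpha = 0.3
phi = lambda z: np.where(z > 0, 0.0, (1.0 / alpha - 1.0) * z**2 / 2.0)
leaky = lambda x: np.where(x > 0, x, alpha * x)

z = np.linspace(-5, 5, 200001)
for x in (-1.7, -0.2, 0.0, 0.9):
    numeric = z[np.argmin(phi(z) + 0.5 * (z - x) ** 2)]
    print(abs(numeric - leaky(x)) < 1e-3)  # True for each test point
```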

SLIDE 37

Relation to generic deep neural networks

Standard activation functions can be expressed as proximity operators Unimodal sigmoid ̺: R → R: ξ → 1 1 + e−ξ − 1 2 Then ̺ = proxφ where φ: ξ →

      

(ξ + 1/2) ln(ξ + 1/2)+ (1/2 − ξ) ln(1/2 − ξ) − 1 2 (ξ2 + 1/4) if |ξ| < 1/2; −1/4, if |ξ| = 1/2; +∞, if |ξ| > 1/2. Elliot activation function (SoftSign) : ̺: R → R: ξ → ξ 1 + |ξ| . We have ̺ = proxφ, where φ: R →] − ∞, +∞]: ξ →

  • −|ξ| − ln(1 − |ξ|) − ξ2

2 ,

if |ξ| < 1; +∞, if |ξ| ≥ 1.


SLIDE 38

Relation to generic deep neural networks

Standard activation functions can be expressed as proximity operators.

Softmax:

$$R : \mathbb{R}^N \to \mathbb{R}^N : (\xi_k)_{1 \leq k \leq N} \mapsto \left(\frac{\exp(\xi_k)}{\sum_{j=1}^{N} \exp(\xi_j)}\right)_{1 \leq k \leq N} - u, \quad \text{where } u = (1, \ldots, 1)/N \in \mathbb{R}^N.$$

Then R = prox_ϕ, where ϕ = ψ(· + u) + ⟨· | u⟩ and

$$\psi : \mathbb{R}^N \to \,]-\infty, +\infty] : (\xi_k)_{1 \leq k \leq N} \mapsto \begin{cases} \displaystyle\sum_{k=1}^{N} \left(\xi_k \ln \xi_k - \frac{\xi_k^2}{2}\right) & \text{if } (\xi_k)_{1 \leq k \leq N} \in [0, 1]^N \text{ and } \displaystyle\sum_{k=1}^{N} \xi_k = 1 \\ +\infty & \text{otherwise.} \end{cases}$$


SLIDE 39

Relation to generic deep neural networks

Quadratic problem:

$$\underset{x \in C}{\text{minimize}}\; \frac{1}{2}\|Hx - y\|^2 + \frac{\lambda}{2}\|Dx\|^2$$

Feedforward architecture: R_{K−1} ∘ (W_{K−1}· + b_{K−1}) ∘ ··· ∘ R_0 ∘ (W_0· + b_0).

iRestNet:

$$x_{k+1} = \mathrm{prox}_{\gamma_k \mu_k B}\big(x_k - \gamma_k (H^\top (H x_k - y) + \lambda_k D^\top D x_k)\big) = \mathrm{prox}_{\gamma_k \mu_k B}\big([\mathrm{I}_n - \gamma_k (H^\top H + \lambda_k D^\top D)]\, x_k + \gamma_k H^\top y\big) = R_k(W_k x_k + b_k)$$

• W_k = I_n − γ_k(H^⊤H + λ_k D^⊤D): weight operator
• b_k = γ_k H^⊤y: bias parameter
• R_k = prox_{γ_k µ_k B}: specific activation function

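A sketch (ours) checking that one unfolded layer of the quadratic problem is exactly an affine map W_k x + b_k, to be followed by the barrier prox (left abstract here):

```python
# Verify W_k x + b_k equals the forward-backward gradient step.
import numpy as np

n = 6
rng = np.random.default_rng(4)
H = rng.standard_normal((n, n)); D = rng.standard_normal((n, n))
y = rng.standard_normal(n); x = rng.standard_normal(n)
gamma_k, lam_k = 0.01, 0.1

Wk = np.eye(n) - gamma_k * (H.T @ H + lam_k * D.T @ D)  # weight operator
bk = gamma_k * H.T @ y                                  # bias parameter
fb_step = x - gamma_k * (H.T @ (H @ x - y) + lam_k * D.T @ D @ x)
print(np.allclose(Wk @ x + bk, fb_step))  # True
```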

SLIDE 40

Averagedness result

Theorem 1 [Combettes and Pesquet, 2018]. Let α ∈ [1/2, 1] and let K ≥ 1 be an integer. Let W = W_{K−1} ∘ ··· ∘ W_0, let µ = inf_{x ∈ R^n, ‖x‖ = 1} ⟨Wx | x⟩, and let

$$\theta_{K-1} = \|W\| + \sum_{\ell=0}^{K-2}\; \sum_{0 \leq j_0 < \cdots < j_\ell \leq K-2} \|W_{K-1} \circ \cdots \circ W_{j_\ell + 1}\|\, \|W_{j_\ell} \circ \cdots \circ W_{j_{\ell-1}+1}\| \cdots \|W_{j_0} \circ \cdots \circ W_0\|.$$

If one of the following conditions is satisfied:

(i) there exists k ∈ {0, ..., K − 1} such that W_k = 0;

(ii) ‖W − 2^K(1 − α) I_n‖ − ‖W‖ + 2θ_{K−1} ≤ 2^K α;

(iii) α = 1, W_k ≠ 0 for every k ∈ {1, ..., K − 1}, and there exists η ∈ [0, α/((1 − α)θ_{K−1})] such that θ_{K−1} ≤ 2^{K−1} α and

$$\alpha\, \theta_{K-1} + (1 - \alpha)\, \|\mathrm{I}_n - \eta W^* - \eta W\|\, \big(\theta_{K-1} - \|W\|\big) \leq 2^{K-2}(2\alpha - 1) + (1 - \alpha)\,\mu,$$

then T = R_{K−1} ∘ (W_{K−1}· + b_{K−1}) ∘ ··· ∘ R_0 ∘ (W_0· + b_0) is α-averaged.


SLIDES 41-43

Network stability result

Assumption

Consider the quadratic problem, and assume that H^⊤H and D^⊤D are diagonalizable in the same basis P.

Notation

For every p ∈ {1, ..., n}, let β_H^{(p)} and β_D^{(p)} denote the p-th eigenvalues of H^⊤H and D^⊤D in P, respectively. Let β₋ and β₊ be defined by

$$\beta_- = \min_{1 \leq p \leq n}\; \prod_{k=0}^{K-1} \Big(1 - \gamma_k\big(\beta_H^{(p)} + \lambda_k \beta_D^{(p)}\big)\Big) \quad\text{and}\quad \beta_+ = \max_{1 \leq p \leq n}\; \prod_{k=0}^{K-1} \Big(1 - \gamma_k\big(\beta_H^{(p)} + \lambda_k \beta_D^{(p)}\big)\Big).$$

Let θ₋₁ = 1 and, for every k ∈ {0, ..., K − 1},

$$\theta_k = \sum_{l=0}^{k} \theta_{l-1}\, \max_{1 \leq q_l \leq n} \left| \Big(1 - \gamma_k\big(\beta_H^{(q_l)} + \lambda_k \beta_D^{(q_l)}\big)\Big) \cdots \Big(1 - \gamma_l\big(\beta_H^{(q_l)} + \lambda_l \beta_D^{(q_l)}\big)\Big) \right|.$$

(A small computation sketch for these quantities follows below.)

Theorem 2. Let α ∈ [1/2, 1]. If one of the following conditions is satisfied:
(i) β₊ + β₋ ≤ 0 and θ_{K−1} ≤ 2^{K−1}(2α − 1);
(ii) 0 ≤ β₊ + β₋ ≤ 2^{K+1}(1 − α) and 2θ_{K−1} ≤ β₊ + β₋ + 2^K(2α − 1);
(iii) 2^{K+1}(1 − α) ≤ β₊ + β₋ and θ_{K−1} ≤ 2^{K−1},
then the operator R_{K−1} ∘ (W_{K−1}· + b_{K−1}) ∘ ··· ∘ R_0 ∘ (W_0· + b_0) is α-averaged.
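A sketch (ours) of how β₋, β₊ and the θ_k could be evaluated, given per-layer stepsizes γ_k, weights λ_k and the shared eigenvalues β_H, β_D (all values below are illustrative):

```python
# Evaluate the stability quantities of the slide for given parameters.
import numpy as np

def stability_quantities(gammas, lams, beta_H, beta_D):
    K = len(gammas)
    # factors[k, p] = 1 - gamma_k * (beta_H^(p) + lambda_k * beta_D^(p))
    factors = 1.0 - gammas[:, None] * (beta_H[None, :] + lams[:, None] * beta_D[None, :])
    prods = np.prod(factors, axis=0)
    beta_minus, beta_plus = prods.min(), prods.max()
    theta = np.ones(K + 1)   # theta[0] stands for theta_{-1} = 1
    for k in range(K):
        # partial products of the factors from layer l to layer k, per eigenvalue
        theta[k + 1] = sum(
            theta[l] * np.abs(np.prod(factors[l:k + 1, :], axis=0)).max()
            for l in range(k + 1)
        )
    return beta_minus, beta_plus, theta[1:]

rng = np.random.default_rng(5)
K, n = 4, 8
print(stability_quantities(np.full(K, 0.05), np.full(K, 0.1),
                           rng.uniform(0, 4, n), rng.uniform(0, 2, n)))
```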

SLIDE 44

Numerical experiments

Image deblurring: y = Hx + ω

• H ∈ R^{n×n}: circular convolution with a known blur
• ω ∈ R^n: additive white Gaussian noise with standard deviation σ
• y ∈ R^n, x ∈ R^n: RGB images

Variational formulation:

$$\underset{x \in C}{\text{minimize}}\; \frac{1}{2}\|Hx - y\|^2 + \lambda \sum_{i=1}^{n} \sqrt{\frac{(D_h x)_i^2 + (D_v x)_i^2}{\delta^2} + 1}$$

$$C = \{x \in \mathbb{R}^n \mid (\forall i \in \{1, \ldots, n\})\; x_{\min} \leq x_i \leq x_{\max}\}$$

• δ: smoothing parameter, δ = 0.01 for iRestNet
• D_h ∈ R^{n×n}, D_v ∈ R^{n×n}: horizontal and vertical spatial gradient operators


SLIDES 45-47

Network characteristics

Number of layers: K = 40.

Estimation of the regularization parameter:

$$\lambda_k = L_k^{(\lambda)}(x_k) = \frac{\sigma(y) \times \mathrm{Softplus}(b_k)}{\eta(x_k) + \mathrm{Softplus}(c_k)}$$

• η(x_k): standard deviation of [(D_h x_k)^⊤ (D_v x_k)^⊤]^⊤
• Estimation of the noise level [Ramadhan et al., 2017]:

$$\sigma(y) = \mathrm{median}(|W_H y|)/0.6745$$

• |W_H y|: vector gathering the absolute values of the diagonal coefficients of the first-level Haar wavelet decomposition of the blurred image (a code sketch follows below)

→ iRestNet does not require knowledge of the noise level.

Post-processing L_pp [Zhang et al., 2017]:

[Figure: architecture of the post-processing network – stacked convolutional layers (kernel sizes 3×3 up to 11×11, 64 channels) with ReLU and batch normalization, a residual connection, and a final sigmoid.]
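A sketch of the slide's noise estimate (assuming the PyWavelets package): the median absolute value of the first-level diagonal Haar coefficients, scaled by 0.6745.

```python
# Median-of-diagonal-Haar-coefficients noise estimator.
import numpy as np
import pywt

def estimate_noise_std(img):
    # pywt.dwt2 returns (approximation, (horizontal, vertical, diagonal)).
    _, (_, _, cD) = pywt.dwt2(img, "haar")
    return np.median(np.abs(cD)) / 0.6745

rng = np.random.default_rng(6)
noisy = np.zeros((64, 64)) + 0.05 * rng.standard_normal((64, 64))
print(estimate_noise_std(noisy))  # approx. 0.05
```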

SLIDES 48-50

Numerical experiments

Dataset
• Training set: 200 RGB images from BSD500 + 1000 images from COCO
• Validation set: 100 validation images from BSD500
• Test sets: 200 test images from BSD500, Flickr30 test set (30 images)

Test configurations
• GaussianA: Gaussian kernel with std = 1.6, σ = 0.008
• GaussianB: Gaussian kernel with std = 1.6, σ ∈ [0.01, 0.05]
• GaussianC: Gaussian kernel with std = 3, σ = 0.04
• Motion: motion kernel from [Levin et al., 2009], σ = 0.01
• Square: 7 × 7 square kernel, σ = 0.01

Training
• Loss: Structural Similarity Measure (SSIM) [Wang et al., 2004], ADAM optimizer
• L₀, ..., L₂₉ trained individually, L_pp ∘ L₃₉ ∘ ··· ∘ L₃₀ trained end-to-end → low memory
• Implemented with PyTorch using a GPU, ∼3-4 days per training

SLIDE 51

Numerical experiments

Competitors

Variational approach
• VAR: solution to P₀ with a projected gradient algorithm, (λ, δ) leading to the best SSIM.

Machine learning approaches
• EPLL [Zoran and Weiss, 2011]: Bayesian approach, Gaussian mixture model with learned parameters, deblurred image = MAP estimate.
• MLP [Schuler et al., 2013]: multi-layer perceptron fed with a pre-deconvolved image produced by a Wiener deconvolution filter.

Machine learning approaches based on deep unfolding
• IRCNN [Zhang et al., 2017] (requires the noise level): empirical algorithm derived from an augmented Lagrangian formulation and unfolded over 30 iterations; a CNN is used as a denoiser to update the splitting variable.
• PDHG [Meinhardt et al., 2017]: maximum of 30 iterations of a primal-dual hybrid gradient algorithm, with the proximity operator of the second regularization function replaced by a NN.
• FCNN [J. Zhang et al., 2017]: unfolded algorithm, regularization function learned by a NN.

→ MLP, EPLL and IRCNN require knowledge of the noise level.
→ For GaussianB, the noise standard deviation estimate of [Mallat, 1999, Section 11.3.1] is used.


SLIDE 52

Results

✓ Higher average SSIM than competitors
✓ Higher SSIM on almost all images

Method                          GaussianA  GaussianB  GaussianC  Motion  Square
Blurred                         0.676      0.526      0.326      0.549   0.544
VAR                             0.804      0.723      0.587      0.829   0.756
EPLL [Zoran and Weiss, 2011]    0.800      0.708      0.565      0.839   0.755
MLP [Schuler et al., 2016]      0.821      0.734      0.608      n/a     n/a
PDHG [Meinhardt et al., 2017]   0.796      0.716      0.563      n/a     n/a
IRCNN [K. Zhang et al., 2017]   0.841      0.768      0.619      0.907   0.834
FCNN [J. Zhang et al., 2017]    n/a        n/a        n/a        0.847   n/a
iRestNet                        0.853      0.787      0.641      0.910   0.840

Table – SSIM results on the BSD500 test set.

Figure – From left to right: GaussianA, GaussianC, Square.


SLIDE 53

✓ Short execution time: ∼1.4 s per image
✓ Similar performance on a different test set

Method                          GaussianA  GaussianB  GaussianC  Motion  Square
Blurred                         0.723      0.545      0.355      0.590   0.579
VAR                             0.857      0.776      0.639      0.869   0.818
EPLL [Zoran and Weiss, 2011]    0.860      0.770      0.616      0.887   0.827
MLP [Schuler et al., 2016]      0.874      0.798      0.668      n/a     n/a
PDHG [Meinhardt et al., 2017]   0.853      0.781      0.623      n/a     n/a
IRCNN [K. Zhang et al., 2017]   0.885      0.819      0.676      0.930   0.886
FCNN [J. Zhang et al., 2017]    n/a        n/a        n/a        0.890   n/a
iRestNet                        0.892      0.833      0.696      0.930   0.886

Table – SSIM results on the Flickr30 test set.


SLIDE 54

Visual results

✓ Better contrast and more details

Ground truth | Blurred: 0.509 | VAR: 0.833 | EPLL: 0.839 | MLP: 0.860 | PDHG: 0.772 | IRCNN: 0.840 | iRestNet: 0.883

Figure – Visual results and SSIM obtained on one image from the BSD500 test set degraded with GaussianB.


SLIDE 55

Visual results

Ground truth | Blurred: 0.344 | VAR: 0.622 | EPLL: 0.553 | IRCNN: 0.685 | iRestNet: 0.713

Figure – Visual results and SSIM obtained on one image from the BSD500 test set degraded with Square.

Ground truth | Blurred: 0.576 | VAR: 0.844 | EPLL: 0.849 | IRCNN: 0.906 | FCNN: 0.856 | iRestNet: 0.909

Figure – Visual results and SSIM obtained on one image from the Flickr30 test set degraded with Motion.


SLIDE 56

Conclusion

• Novel architecture based on an unfolded proximal interior point algorithm
• Makes it possible to enforce hard constraints on the image
• Expression and gradient of the proximity operator of the barrier

→ Different applications (classification, ...)
→ When the degradation is unknown: blind or semi-blind deconvolution


SLIDE 57

Codes

https://mccorbineau.github.io
https://github.com/mccorbineau/iRestNet


SLIDE 58

Related publications

iRestNet
• C. Bertocchi, E. Chouzenoux, M.-C. Corbineau, J.-C. Pesquet, M. Prato. Deep unfolding of a proximal interior point method for image restoration. To appear in Inverse Problems, 2019.

Network stability
• P. L. Combettes and J.-C. Pesquet. Deep neural network structures solving variational inequalities. https://arxiv.org/abs/1808.07526

Proximal interior point methods
• M.-C. Corbineau, E. Chouzenoux and J.-C. Pesquet. PIPA: a new proximal interior point algorithm for large-scale convex optimization. Proceedings of the 20th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
• M.-C. Corbineau, E. Chouzenoux and J.-C. Pesquet. Geometry-texture decomposition/reconstruction using a proximal interior point algorithm. Proceedings of the 10th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), 2018.
• E. Chouzenoux, M.-C. Corbineau and J.-C. Pesquet. A proximal interior point algorithm with applications to image processing. To appear in Journal of Mathematical Imaging and Vision, 2019.

SLIDE 59

Thank you!
