SLIDE 1

ℓp − ℓq minimization methods for image restoration

CMIPI 2018 16th July 2018

  • A. Buccini1
  • L. Reichel1

1Department of Mathematical Sciences, Kent State University, Kent OH, USA

SLIDE 2


Outline

  • Introduction
      • Discrete ill-posed inverse problems
      • ℓp − ℓq regularization
  • Majorization-Minimization in Generalized Krylov Subspaces
      • General idea
      • Algorithm
      • Theoretical Results
  • Selection of the regularization parameter
      • Discrepancy Principle
      • Cross Validation
      • Modified Cross Validation
  • Numerical Results
  • Conclusions & Future work

SLIDE 3

Introduction

Discrete ill-posed inverse problems

Consider the linear system of equations

Ax = b,   (1)

where A may be rank-deficient and is severely ill-conditioned, i.e., its singular values decay to 0 rapidly and without any significant gap.

Figure: Singular values of the shaw matrix (decaying from about 10⁰ to below 10⁻¹⁵).
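A quick numerical illustration (not from the slides): the shaw matrix itself comes from Hansen's Regularization Tools, so as a stand-in we discretize a Gaussian blurring kernel, whose singular values show the same rapid, gap-free decay.

```python
import numpy as np

# Illustration of the singular-value decay typical of discrete ill-posed
# problems. Stand-in for the shaw matrix: a discretized Gaussian blur.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2)) / n

s = np.linalg.svd(A, compute_uv=False)   # sorted in decreasing order
print(s[0], s[n // 2], s[-1])            # rapid decay toward 0
print(s[-1] / s[0] < 1e-10)              # severely ill-conditioned
```

Any direct solve of Ax = b with such a matrix amplifies data errors by roughly the inverse of the smallest singular values, which is why regularization is needed below.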

SLIDE 6

Introduction

Discrete ill-posed inverse problems (continued)

We only have a noise-contaminated right-hand side bδ. We assume that the error in bδ is made up of impulse noise and/or Gaussian noise. Impulse noise affects only a certain percentage of the entries of b and leaves the other entries unchanged:

bδ_i = { d_i   with probability σ,
         b_i   with probability 1 − σ,

where the d_i are identically and uniformly distributed random numbers in the dynamic range [d_min, d_max] of b. If d_i ∈ {d_min, d_max}, the impulse noise is commonly referred to as salt-and-pepper noise. Noise and ill-conditioning make it impossible to solve system (1) directly; we need regularization methods.
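The impulse-noise model above can be sketched in a few lines of NumPy; the helper name `add_impulse_noise` and the test signal are ours, not from the talk.

```python
import numpy as np

def add_impulse_noise(b, sigma, salt_and_pepper=True, rng=None):
    """Corrupt a fraction sigma of the entries of b with impulse noise.

    With salt_and_pepper=True each corrupted entry is set to d_min or
    d_max; otherwise it is drawn uniformly from the dynamic range
    [d_min, d_max] of b.
    """
    rng = np.random.default_rng(rng)
    b_delta = b.copy()
    d_min, d_max = b.min(), b.max()
    hit = rng.random(b.shape) < sigma          # entries to corrupt
    if salt_and_pepper:
        d = rng.choice([d_min, d_max], size=b.shape)
    else:
        d = rng.uniform(d_min, d_max, size=b.shape)
    b_delta[hit] = d[hit]
    return b_delta

b = np.linspace(0.0, 1.0, 1000)
b_delta = add_impulse_noise(b, sigma=0.3, rng=0)
print(np.mean(b_delta != b))   # roughly 0.3 of the entries are changed
```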


SLIDE 10

Introduction

ℓp − ℓq regularization

A regularization technique consists in solving an ℓp-ℓq minimization problem of the form

x* = arg min_x (1/p)‖Ax − bδ‖_p^p + (μ/q)‖Lx‖_q^q,   0 < p, q ≤ 2,

where N(A) ∩ N(L) = {0}.

  • p = 2 and q = 2 yields classical Tikhonov regularization (Bai, Chan, Donatelli, Fenu, Gazzola, Hayami, Hanke, Hansen, Nagy, Ramlau, Reichel, Rodriguez, ...);
  • 1 ≤ p, q < 2 yields a convex minimization problem (Chan, Chung, Donatelli, Estatico, Gazzola, Hansen, Huang, Nagy, Reichel, ...);
  • 0 < p < 1 or 0 < q < 1 yields a non-convex minimization problem (Chan, Huang, Lanza, Morigi, Reichel, Sgallari, ...).
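As a sanity check, the objective can be evaluated directly; the helper name `lp_lq_objective` is ours, and for p = q = 2 it reduces to the Tikhonov functional.

```python
import numpy as np

# Evaluate (1/p)||Ax - b||_p^p + (mu/q)||Lx||_q^q (helper name is ours).
def lp_lq_objective(A, b_delta, L, x, p, q, mu):
    fid = np.sum(np.abs(A @ x - b_delta) ** p) / p
    reg = mu * np.sum(np.abs(L @ x) ** q) / q
    return fid + reg

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
L = np.eye(20)
x = rng.standard_normal(20)
b_delta = A @ x + 0.01 * rng.standard_normal(30)

# p = q = 2 reduces to the classical Tikhonov functional
tik = 0.5 * np.linalg.norm(A @ x - b_delta) ** 2 + 0.05 * np.linalg.norm(x) ** 2
print(np.isclose(lp_lq_objective(A, b_delta, L, x, 2, 2, 0.1), tik))
```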


SLIDE 18

Introduction

ℓp − ℓq regularization (continued)

In many situations it is useful to impose sparsity to improve the quality of the computed reconstructions. To enhance sparsity, we may consider using the so-called ℓ0-norm. It is common to approximate the ℓ0-norm by the ℓ1-norm. However, ℓz-norms with z < 1 are better approximations to the ℓ0-norm.

Figure: Comparison of different ℓz-norms for z = 2, 1.5, 1, 0.5.

  • 0 < q < 1: sparsity of the computed solution;
  • 0 < p < 1: sparsity of the computed residual.
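A tiny numerical check of this claim: for a small nonzero entry t, the penalty |t|^z moves toward the ℓ0 indicator value 1 as z decreases, so small entries are penalized more strongly and get pushed to exact zero.

```python
import numpy as np

# As z decreases toward 0, |t|^z approaches the l0 "norm" indicator
# (0 at t = 0, and 1 for every t != 0), which is why l_z quasi-norms
# with z < 1 promote sparsity more strongly than the l1 norm.
t = 0.05                                  # a small nonzero entry
for z in (2.0, 1.5, 1.0, 0.5):
    print(z, abs(t) ** z)

vals = [abs(t) ** z for z in (2.0, 1.5, 1.0, 0.5)]
print(vals == sorted(vals))               # penalty grows as z shrinks
```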


SLIDE 21

Majorization-Minimization in GKS

General idea

We briefly describe the method proposed in [G. Huang, A. Lanza, S. Morigi, L. Reichel, and F. Sgallari, BIT 2017]. Introduce a smoothed version of ‖x‖_z^z as

‖x‖_z^z ≈ Σ_{i=1}^n Φ_{z,ε}(x_i)   with   Φ_{z,ε}(t) = (t² + ε²)^{z/2},   ε > 0.

We consider the functional

J_ε(x) := (1/p) Σ_{i=1}^m Φ_{p,ε}((Ax − bδ)_i) + (μ/q) Σ_{i=1}^n Φ_{q,ε}((Lx)_i).

Thus, the smoothed minimization problem becomes x* = arg min_x J_ε(x).
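A minimal sketch of Φ_{z,ε} and J_ε in NumPy (function names are ours); as ε → 0, J_ε approaches the nonsmooth ℓp-ℓq objective.

```python
import numpy as np

# Smoothed l_p-l_q functional J_eps from the slide (names are ours).
def phi(t, z, eps):
    return (t ** 2 + eps ** 2) ** (z / 2)

def J_eps(A, b_delta, L, x, p, q, mu, eps):
    return (np.sum(phi(A @ x - b_delta, p, eps)) / p
            + mu * np.sum(phi(L @ x, q, eps)) / q)

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 10)); L = np.eye(10)
x = rng.standard_normal(10)
b_delta = A @ x + 0.01 * rng.standard_normal(15)

# nonsmooth objective with p = 0.8, q = 0.5, mu = 0.1
exact = (np.sum(np.abs(A @ x - b_delta) ** 0.8) / 0.8
         + 0.1 * np.sum(np.abs(x) ** 0.5) / 0.5)
# As eps -> 0, J_eps approaches the nonsmooth objective:
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, abs(J_eps(A, b_delta, L, x, 0.8, 0.5, 0.1, eps) - exact))
```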


SLIDE 25

Majorization-Minimization in GKS

General idea (continued)

To compute a stationary point of J_ε we use a majorization-minimization method. We construct a sequence of iterates x^(k) that converges to a stationary point of J_ε. At each step the functional J_ε is majorized by a quadratic function Q(x, x^(k)) that is tangent to J_ε at x^(k). The next iterate x^(k+1) is the unique minimizer of Q(x, x^(k)).
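The majorization step can be illustrated on a single scalar term of J_ε. The quadratic below uses the fixed curvature ε^(p−2) and the weight formula that reappears on the Algorithm slides; variable names are ours.

```python
import numpy as np

# Scalar illustration of the quadratic tangent majorant used by MM:
# for f(t) = (1/p) * (t^2 + eps^2)^(p/2), the quadratic
#   Q(t) = (eps^(p-2)/2) * (t - w_k)^2 + const,
# with w_k = t_k * (1 - ((t_k^2 + eps^2)/eps^2)^(p/2 - 1)),
# satisfies Q >= f everywhere, with Q = f and Q' = f' at t = t_k.
p, eps, tk = 0.8, 0.1, 0.7

f = lambda t: (t ** 2 + eps ** 2) ** (p / 2) / p
wk = tk * (1 - ((tk ** 2 + eps ** 2) / eps ** 2) ** (p / 2 - 1))
c = f(tk) - eps ** (p - 2) / 2 * (tk - wk) ** 2
Q = lambda t: eps ** (p - 2) / 2 * (t - wk) ** 2 + c

t = np.linspace(-2, 2, 4001)
print(np.all(Q(t) >= f(t) - 1e-12))   # majorant: Q >= f
print(abs(Q(tk) - f(tk)) < 1e-12)     # tangency at t_k
print(f(wk) < f(tk))                  # the MM step (t -> w_k) decreases f
```

Minimizing Q and re-majorizing at the new point is exactly the monotone-descent mechanism the slide describes.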


SLIDE 40

Majorization-Minimization in GKS

Algorithm

Let v^(k) := Ax^(k) − bδ, u^(k) := Lx^(k), and define the vectors

ω_d^(k) := v^(k) (1 − (((v^(k))² + ε²) / ε²)^(p/2 − 1)),
ω_reg^(k) := u^(k) (1 − (((u^(k))² + ε²) / ε²)^(q/2 − 1)),

where all the operations are element-wise.
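In NumPy the weight vectors are one line each (the helper name `weights` is ours):

```python
import numpy as np

# Element-wise weight vectors from the slide (helper name is ours):
def weights(t, z, eps):
    return t * (1 - ((t ** 2 + eps ** 2) / eps ** 2) ** (z / 2 - 1))

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 10)); L = np.eye(10)
x_k = rng.standard_normal(10)
b_delta = rng.standard_normal(15)
eps, p, q = 1e-2, 0.8, 0.5

v_k = A @ x_k - b_delta      # current residual
u_k = L @ x_k                # current regularization term
omega_d = weights(v_k, p, eps)
omega_reg = weights(u_k, q, eps)
print(omega_d.shape, omega_reg.shape)   # (15,) (10,)
```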

slide-41
SLIDE 41


Majorization-Minimization in GKS

Algorithm

Let v(k) := Ax(k) − bδ and u(k) := Lx(k), and define the vectors

ω_d^(k) := v(k) ∘ ( 1 − ( ((v(k))² + ε²) / ε² )^(p/2−1) ),
ω_reg^(k) := u(k) ∘ ( 1 − ( ((u(k))² + ε²) / ε² )^(q/2−1) ),

where all the operations are element-wise. The function

Q(x, x(k)) = (ε^(p−2)/2) ( ‖Ax − bδ‖₂² − 2⟨ω_d^(k), Ax⟩ ) + (µε^(q−2)/2) ( ‖Lx‖₂² − 2⟨ω_reg^(k), Lx⟩ ) + c,

with c a constant independent of x, is a quadratic tangent majorant for Jε at x(k).
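As a concrete illustration, the element-wise weight computation can be sketched in NumPy. This is our own toy sketch, not part of the slides; the function name `mm_weights` is ours.

```python
import numpy as np

def mm_weights(v, eps, s):
    """Element-wise MM weight  v * (1 - ((v**2 + eps**2) / eps**2) ** (s/2 - 1)).

    v   -- vector, e.g. v(k) = A x(k) - b_delta (use s = p) or u(k) = L x(k) (use s = q)
    eps -- smoothing parameter epsilon > 0
    s   -- exponent p or q in (0, 2]
    """
    return v * (1.0 - ((v**2 + eps**2) / eps**2) ** (s / 2.0 - 1.0))
```

Note that for s = 2 the exponent is zero and the weight vanishes identically, so the majorization step reduces to plain least squares.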

slide-45
SLIDE 45


Majorization-Minimization in GKS

Algorithm (continued)

The minimizer of Q can be computed by solving

(AᵗA + ηLᵗL) x(k+1) = Aᵗ(bδ + ω_d^(k)) + η Lᵗ ω_reg^(k),   η = µε^(q−2)/ε^(p−2).   (2)

An approximate solution of (2) can be determined by seeking a solution in a low-dimensional subspace. Let the columns of Vk ∈ R^(n×k̂) form an orthonormal basis of the solution subspace. Then x(k+1) = Vk y(k+1), where

y(k+1) = arg min_y ‖ [ AVk ; η^(1/2) LVk ] y − [ bδ + ω_d^(k) ; η^(1/2) ω_reg^(k) ] ‖₂²,

with the two blocks stacked vertically.
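The stacked least-squares problem above can be sketched as follows. The toy dimensions, the zero weights (the p = q = 2 case), and all variable names are our own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, kdim = 12, 8, 3
A = rng.standard_normal((m, n))
L = np.eye(n)                                        # toy regularization operator
V, _ = np.linalg.qr(rng.standard_normal((n, kdim)))  # orthonormal basis of the solution subspace
b_delta = rng.standard_normal(m)
omega_d = np.zeros(m)                                # zero weights: the p = q = 2 case
omega_reg = np.zeros(n)
eta = 0.5

# y(k+1) minimizes the stacked residual; x(k+1) = V y(k+1)
M = np.vstack([A @ V, np.sqrt(eta) * (L @ V)])
rhs = np.concatenate([b_delta + omega_d, np.sqrt(eta) * omega_reg])
y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
x_next = V @ y
```

The computed y satisfies the projected normal equations Vᵗ(AᵗA + ηLᵗL)V y = Vᵗ(Aᵗ(bδ + ω_d) + ηLᵗω_reg), which is exactly the restriction of (2) to the subspace.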

slide-50
SLIDE 50


Majorization-Minimization in GKS

Algorithm (continued)

Introduce the QR factorizations

AVk = QA RA  with QA ∈ R^(m×k̂), RA ∈ R^(k̂×k̂),
LVk = QL RL  with QL ∈ R^(ℓ×k̂), RL ∈ R^(k̂×k̂).

Then

y(k+1) = arg min_y ‖ [ RA ; η^(1/2) RL ] y − [ QAᵗ(bδ + ω_d^(k)) ; η^(1/2) QLᵗ ω_reg^(k) ] ‖₂².

The residual can be computed by

r(k+1) = Aᵗ(AVk y(k+1) − bδ − ω_d^(k)) + η Lᵗ(LVk y(k+1) − ω_reg^(k)).

We expand the solution subspace by including the vector v_new = r(k+1)/‖r(k+1)‖₂ and define the new matrix Vk+1 = [Vk, v_new] ∈ R^(n×(k̂+1)).
slide-51
SLIDE 51


Majorization-Minimization in GKS

Algorithm (continued)

The QR factorizations of AVk+1 and LVk+1 are updated as

AVk+1 = [AVk, Av_new] = [QA, q̃A] [ RA  rA ; 0ᵗ  τA ],
LVk+1 = [LVk, Lv_new] = [QL, q̃L] [ RL  rL ; 0ᵗ  τL ],

where

rA = QAᵗ(Av_new),  qA = Av_new − QA rA,  τA = ‖qA‖₂,  q̃A = qA/τA,
rL = QLᵗ(Lv_new),  qL = Lv_new − QL rL,  τL = ‖qL‖₂,  q̃L = qL/τL.
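The update above is a single Gram–Schmidt step appending one column to a thin QR factorization. A minimal NumPy sketch (the helper name `qr_append_column` is ours):

```python
import numpy as np

def qr_append_column(Q, R, w):
    """Append one column w to a thin QR factorization M = Q R.

    Returns (Q_new, R_new) with [M, w] = Q_new @ R_new, using one
    Gram-Schmidt step exactly as in the slide's update formulas.
    """
    r = Q.T @ w                     # r = Q^t w            (e.g. r_A = Q_A^t (A v_new))
    q = w - Q @ r                   # q = w - Q r          (component orthogonal to range(Q))
    tau = np.linalg.norm(q)         # tau = ||q||_2
    q_tilde = q / tau
    Q_new = np.column_stack([Q, q_tilde])
    bottom = np.append(np.zeros(R.shape[1]), tau)
    R_new = np.vstack([np.hstack([R, r[:, None]]), bottom])
    return Q_new, R_new
```

The cost is one matrix-vector product with Q and Qᵗ, which is what makes updating cheaper than recomputing the factorization from scratch.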

slide-52
SLIDE 52


Majorization-Minimization in GKS

Algorithm (continued)

Algorithm (MM-GKS)

Let 0 < p, q ≤ 2 and µ > 0. Consider A ∈ R^(m×n) and L ∈ R^(ℓ×n). Fix ε > 0 and k₀ > 0.
Generate the initial subspace basis V₀ ∈ R^(n×k₀) such that V₀ᵗV₀ = I;
Compute and store AV₀ and LV₀; AV₀ = QA RA, LV₀ = QL RL; y(0) = V₀ᵗ x(0);
for k = 0, 1, . . . do
    Compute the quadratic tangent majorant Q(x, x(k)) at x(k);
    Minimize Q(x, x(k)) in the span of Vk;
    Enlarge the solution subspace, obtaining Vk+1;
    Update the QR factorizations: AVk+1 = QA RA and LVk+1 = QL RL;
end
x∗ = Vk y(k+1)
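The whole MM-GKS loop can be sketched compactly. This is an illustrative toy version under our own simplifications: it re-solves the projected least-squares problem with `lstsq` instead of updating the QR factorizations, and starts from the one-dimensional subspace spanned by Aᵗbδ.

```python
import numpy as np

def mm_gks(A, b_delta, L, p, q, mu, eps=1e-2, iters=30):
    """Illustrative MM-GKS sketch: majorization-minimization in a growing
    (generalized Krylov) subspace."""
    m, n = A.shape
    eta = mu * eps ** (q - 2) / eps ** (p - 2)

    def weight(t, s):
        # element-wise MM weight: t * (1 - ((t**2 + eps**2)/eps**2) ** (s/2 - 1))
        return t * (1 - ((t**2 + eps**2) / eps**2) ** (s / 2 - 1))

    v0 = A.T @ b_delta                       # initial one-dimensional subspace
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    x = np.zeros(n)
    for _ in range(iters):
        omega_d = weight(A @ x - b_delta, p)
        omega_reg = weight(L @ x, q)
        # minimize the quadratic tangent majorant over span(V)
        M = np.vstack([A @ V, np.sqrt(eta) * (L @ V)])
        rhs = np.concatenate([b_delta + omega_d, np.sqrt(eta) * omega_reg])
        y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        x = V @ y
        # residual of the full-space normal equations gives the new direction
        r = A.T @ (A @ V @ y - b_delta - omega_d) + eta * L.T @ (L @ V @ y - omega_reg)
        r = r - V @ (V.T @ r)                # reorthogonalize against V (numerical safety)
        if np.linalg.norm(r) < 1e-12 or V.shape[1] == n:
            break
        V = np.column_stack([V, r / np.linalg.norm(r)])
    return x
```

For p = q = 2 the weights vanish and η = µ, so the sketch reduces to Tikhonov regularization solved in a growing Krylov-type subspace; once the subspace fills R^n it matches the direct solution of (AᵗA + µLᵗL)x = Aᵗbδ.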

slide-53
SLIDE 53


Majorization-Minimization in GKS

Theoretical Results

Theorem ([G. Huang et al., BIT2017])

Let N(A) ∩ N(L) = {0} hold. Then, for any initial guess x(0) ∈ Rⁿ, the sequence {x(k)}k converges to a stationary point of Jε(x). Thus,

(i) lim_{k→∞} ‖x(k+1) − x(k)‖₂ = 0,

(ii) lim_{k→∞} ∇Jε(x(k)) = 0.

slide-55
SLIDE 55


Selection of the regularization parameter

Discrepancy Principle

We describe a method, derived from MM-GKS and based on the discrepancy principle, for determining a suitable value of the regularization parameter. The method updates the regularization parameter µ so that the discrepancy principle is satisfied at each step.

slide-56
SLIDE 56


Selection of the regularization parameter

Discrepancy Principle (continued)

Algorithm (MM-GKS-DP)

Let 0 < q ≤ 2 and µ > 0. Consider A ∈ R^(m×n) and L ∈ R^(ℓ×n). Fix ε > 0 and k₀ > 0.
Generate the initial subspace basis V₀ ∈ R^(n×k₀) such that V₀ᵗV₀ = I;
Compute and store AV₀ and LV₀; AV₀ = QA RA, LV₀ = QL RL; y(0) = V₀ᵗ x(0);
for k = 0, 1, . . . do
    Compute the quadratic tangent majorant Q(k)(x, x(k)) at x(k) such that ‖Ax(k+1) − bδ‖₂ = τδ;
    Minimize Q(k)(x, x(k)) in the span of Vk;
    Enlarge the solution subspace, obtaining Vk+1;
    Update the QR factorizations;
end
x∗ = Vk y(k+1)
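Enforcing ‖Ax(k+1) − bδ‖₂ = τδ amounts to a one-dimensional root-finding problem in µ, since the discrepancy is monotone in the parameter. A minimal sketch under our own simplifying assumptions (p = q = 2, L = I, full-space Tikhonov solve, bisection on a log scale; the function name is ours):

```python
import numpy as np

def discrepancy_mu(A, b_delta, tau_delta, lo=1e-8, hi=1e8, tol=1e-10):
    """Find mu such that ||A x_mu - b_delta||_2 = tau_delta, where x_mu solves
    (A^t A + mu I) x_mu = A^t b_delta.  Assumes tau_delta lies between the
    least-squares residual norm (mu -> 0) and ||b_delta||_2 (mu -> inf)."""
    n = A.shape[1]

    def disc(mu):
        x = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b_delta)
        return np.linalg.norm(A @ x - b_delta) - tau_delta

    for _ in range(200):
        mid = np.sqrt(lo * hi)          # bisect on a logarithmic scale
        if disc(mid) > 0:               # discrepancy increases with mu
            hi = mid
        else:
            lo = mid
        if hi / lo < 1 + tol:
            break
    return np.sqrt(lo * hi)
```

In MM-GKS-DP this search is done on the small projected problem, so each evaluation of the discrepancy is cheap.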

slide-57
SLIDE 57


Selection of the regularization parameter

Discrepancy Principle (continued)

Proposition

Assume that A is of full rank and let x(k), k = 1, 2, . . ., denote the iterates generated by MM-GKS-DP. Then there is a subsequence x(kj), j = 1, 2, . . ., with a limit x∗, such that ‖Ax∗ − bδ‖₂ = τδ.
slide-58
SLIDE 58


Selection of the regularization parameter

Discrepancy Principle (continued)

Theorem

Let x(kj), j = 1, 2, . . ., denote the subsequence defined in the proposition above, and let xδ = x(kj) denote the approximate solution determined by MM-GKS-DP with noise level δ > 0, where we assume that A is of full column rank. Then

lim sup_{δ↘0} ‖xδ − x‖₂ = 0,

where x denotes the solution of the error-free system.

slide-61
SLIDE 61


Selection of the regularization parameter

Cross Validation

The CV method partitions the right-hand side bδ into two complementary subsets: the training set and the testing set.
◮ The training set is used for solving the problem (with the rows of the testing set removed) for different regularization parameters;
◮ The testing set is used to validate the computed solution and select a suitable regularization parameter.

slide-66
SLIDE 66


Selection of the regularization parameter

Cross Validation (continued)

Let I denote the index set of the testing set, with |I| = d. Let b̄δ ∈ R^(m−d) and Ā ∈ R^((m−d)×n) denote the restrictions of bδ and A to the training set, respectively. We solve the ℓp-ℓq regularization problems

x_µj = arg min_x (1/p) ‖Āx − b̄δ‖_p^p + (µj/q) ‖Lx‖_q^q,   j = 1, . . . , l,

compute the residual norms

rj = Σ_{i∈I} ( (Ax_µj)_i − (bδ)_i )²,   j = 1, 2, . . . , l,

and select µ = µ_j∗, where j∗ = arg min_j rj.

slide-70
SLIDE 70


Selection of the regularization parameter

Cross Validation (continued)

To reduce variability, we apply CV for several different partitionings and average the regularization parameter values determined. At each step we consider a randomly selected set of d components of the vector bδ as testing data. Each step provides a regularization parameter µ(k), k = 1, 2, . . . , K; these parameters may differ. The determined regularization parameter is

µ = (1/K) Σ_{k=1}^{K} µ(k).

slide-71
SLIDE 71


Selection of the regularization parameter

Cross Validation (continued)

Algorithm (Cross Validation)

Let A ∈ R^(m×n). Let d, K ∈ N with d < m. Let {µj}_{j=1}^{l} be a set of positive regularization parameters.
for k = 1, 2, . . . , K do
    Let Ā and b̄δ denote versions of A and bδ in which d randomly selected rows (with indices in I(k)) have been removed;
    for j = 1, 2, . . . , l do
        Compute x(k)_µj = arg min_x (1/p) ‖Āx − b̄δ‖_p^p + (µj/q) ‖Lx‖_q^q;
        Compute r(k)_j = Σ_{i∈I(k)} ( (Ax(k)_µj)_i − (bδ)_i )²;
    end
    Let µ(k) = µ_j∗, where j∗ = arg min_j r(k)_j;
end
Compute µ = (1/K) Σ_{k=1}^{K} µ(k);
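The CV loop can be sketched as follows. This is a toy illustration under our own simplifications (p = q = 2, L = I, direct Tikhonov solves); the function name and setup are ours.

```python
import numpy as np

def cv_select_mu(A, b_delta, mus, d, K, rng):
    """Cross-validation sketch: hold out d random rows, solve the Tikhonov
    problem on the remaining rows for each mu_j, score mu_j by the squared
    residual on the held-out rows, and average the K winning parameters."""
    m, n = A.shape
    picked = []
    for _ in range(K):
        test = rng.choice(m, size=d, replace=False)       # testing-set indices I(k)
        train = np.setdiff1d(np.arange(m), test)
        At, bt = A[train], b_delta[train]
        scores = []
        for mu in mus:
            x = np.linalg.solve(At.T @ At + mu * np.eye(n), At.T @ bt)
            scores.append(np.sum((A[test] @ x - b_delta[test]) ** 2))
        picked.append(mus[int(np.argmin(scores))])
    return float(np.mean(picked))
```

The returned value is the average of the K per-partition winners, mirroring µ = (1/K) Σ µ(k).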

slide-75
SLIDE 75


Selection of the regularization parameter

Modified Cross Validation

The standard CV technique compares predictions of the right-hand side. We would like to exploit a similar idea, but instead compare the computed solutions. Let I₁ and I₂ denote two distinct sets of d distinct random integers between 1 and m. Let Ai and bδi denote versions of A and bδ, respectively, in which the rows with index in Ii have been removed, for i = 1, 2.

slide-79
SLIDE 79


Selection of the regularization parameter

Modified Cross Validation (continued)

We solve the ℓp-ℓq regularization problems

x(i)_µj = arg min_x (1/p) ‖Ai x − bδi‖_p^p + (µj/q) ‖Lx‖_q^q,   j = 1, . . . , l,  i = 1, 2,

compute the quantities

∆xj = ‖x(1)_µj − x(2)_µj‖₂,   j = 1, 2, . . . , l,

and select µ = µ_j∗, where j∗ = arg min_j ∆xj. To reduce variability, we apply MCV for several sets I₁ and I₂ and average the obtained parameters.

slide-80
SLIDE 80


Selection of the regularization parameter

Modified Cross Validation (continued)

Algorithm (Modified Cross Validation)

Let A ∈ R^(m×n) and let d, K ∈ N with d < m. Let {µj}_{j=1}^{l} be a set of positive regularization parameters.
for k = 1, 2, . . . , K do
    Let I(k)₁ and I(k)₂ be distinct sets of d random integers;
    Let Ai and bδi denote the restricted versions of A and bδ, respectively, for i = 1, 2;
    for j = 1, 2, . . . , l do
        x(i)_µj = arg min_x (1/p) ‖Ai x − bδi‖_p^p + (µj/q) ‖Lx‖_q^q, i = 1, 2;
        ∆x(k)_j = ‖x(1)_µj − x(2)_µj‖₂;
    end
    Let µ(k) = µ_j∗, where j∗ = arg min_{1≤j≤l} ∆x(k)_j;
end
Compute µ = (1/K) Σ_{k=1}^{K} µ(k);
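The MCV loop differs from CV only in the scoring step: instead of a hold-out residual, it compares the two solutions computed from the two reduced systems. A toy sketch under the same simplifying assumptions as before (p = q = 2, L = I; names ours):

```python
import numpy as np

def mcv_select_mu(A, b_delta, mus, d, K, rng):
    """Modified cross-validation sketch: for two disjoint random hold-out sets
    I_1, I_2, solve the reduced Tikhonov problems, pick the mu_j that makes
    the two solutions closest in the 2-norm, and average over K repetitions."""
    m, n = A.shape
    picked = []
    for _ in range(K):
        idx = rng.choice(m, size=2 * d, replace=False)
        holdouts = (idx[:d], idx[d:])                 # disjoint sets I_1 and I_2
        deltas = []
        for mu in mus:
            xs = []
            for I in holdouts:
                keep = np.setdiff1d(np.arange(m), I)
                Ai, bi = A[keep], b_delta[keep]
                xs.append(np.linalg.solve(Ai.T @ Ai + mu * np.eye(n), Ai.T @ bi))
            deltas.append(np.linalg.norm(xs[0] - xs[1]))   # Delta x_j
        picked.append(mus[int(np.argmin(deltas))])
    return float(np.mean(picked))
```

Since no held-out right-hand side entries are used for scoring, this criterion measures the stability of the reconstruction with respect to the partitioning rather than its predictive fit.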

slide-82
SLIDE 82


Numerical Results

Implementation details

◮ For the DP-based method we impose a sparse representation in the framelet domain; thus, we use tight frames determined by linear B-splines;
◮ For computational efficiency, in the CV- and MCV-based methods we impose sparsity of the gradient; thus, we set L = L2 ⊗ I + I ⊗ L2, where I denotes the identity matrix and L2 is the tridiagonal matrix

    L2 = [  2  −1                ]
         [ −1   2  −1            ]
         [      ⋱   ⋱   ⋱        ]
         [          −1   2  −1   ]
         [              −1   2   ].
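The operator L = L2 ⊗ I + I ⊗ L2 can be assembled directly with Kronecker products; a dense NumPy sketch for small n (in practice L would of course be stored as a sparse matrix):

```python
import numpy as np

def second_difference(n):
    """1-D tridiagonal matrix L2 = tridiag(-1, 2, -1)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def regularization_operator(n):
    """L = L2 (x) I + I (x) L2 for an n-by-n image: penalizing ||L x||_q^q
    promotes sparsity of the image gradient."""
    L2 = second_difference(n)
    I = np.eye(n)
    return np.kron(L2, I) + np.kron(I, L2)
```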

slide-83
SLIDE 83


Numerical results

Gaussian Noise

(a) True image (238 × 238 pixels) (b) PSF (17 × 17 pixels) (c) Blurred image (δ = 0.02‖b‖₂)

slide-84
SLIDE 84


Numerical Results

Gaussian Noise (continued)

(a) MM-GKS (RRE: 0.067217) (b) MM-GKS-DP (RRE: 0.067347)
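RRE denotes the relative reconstruction error; assuming the standard definition ‖x_rec − x_true‖₂ / ‖x_true‖₂ (the slides do not spell it out), it can be computed as:

```python
import numpy as np

def rre(x_rec, x_true):
    """Relative reconstruction error ||x_rec - x_true||_2 / ||x_true||_2."""
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```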

slide-85
SLIDE 85


Numerical Results

Example (continued)

(a) True (b) MM-GKS (c) MM-GKS-DP

slide-86
SLIDE 86


Numerical Results

Gaussian Noise (continued)

(a) True (b) MM-GKS (c) MM-GKS-DP

slide-87
SLIDE 87


Numerical results

Impulse and Gaussian Noise

(a) True image (234 × 182 pixels) (b) PSF (35 × 27 pixels) (c) Blurred and noisy image

We corrupt the image with 20% impulse noise and 1% white Gaussian noise.
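A minimal sketch of such a corruption, assuming salt-and-pepper impulse noise on a random 20% of the pixels and white Gaussian noise scaled to 1% of the image norm (the slides do not specify the exact noise model, so these conventions are assumptions):

```python
import numpy as np

def corrupt(img, impulse_frac=0.20, gauss_level=0.01, seed=0):
    """Add white Gaussian noise at `gauss_level` relative to the image norm,
    then overwrite a random `impulse_frac` fraction of the pixels with the
    min/max intensity (salt-and-pepper impulse noise)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(img.shape)
    out = img + gauss_level * np.linalg.norm(img) / np.linalg.norm(noise) * noise
    mask = rng.random(img.shape) < impulse_frac
    out[mask] = rng.choice([img.min(), img.max()], size=int(mask.sum()))
    return out
```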

slide-88
SLIDE 88


Numerical Results

Impulse and Gaussian Noise (continued)

(a) Optimal µ (RRE: 0.066903) (b) CV (RRE: 0.073975) (c) MCV (RRE: 0.068938)

We set p = 0.8 and q = 0.1.

slide-89
SLIDE 89


Numerical Results

Impulse and Gaussian Noise (continued)

(a) True (b) Optimal µ (c) CV (d) MCV

slide-90
SLIDE 90


Numerical Results

Impulse and Gaussian Noise (continued)

(a) True (b) Optimal µ (c) CV (d) MCV








slide-98
SLIDE 98


Conclusions & Future work

We draw some conclusions:
◮ We employed several strategies for determining the regularization parameter in ℓp − ℓq regularization;
◮ We developed a version of Cross Validation that considers the reconstructed solutions rather than the data;
◮ The considered criteria do not require setting any parameters.

Future work includes:
◮ Applying these methods to machine learning and, more generally, to the analysis of big data;
◮ Avoiding unnecessary enlargement of the GKS;
◮ Theoretical analysis of the CV and MCV methods.

slide-99
SLIDE 99

Thank you for your attention!