
SLIDE 1

Recent advances in optimization algorithms for image deblurring and denoising

  • G. Landi
  • E. Loli Piccolomini
  • F. Zama

Department of Mathematics, Bologna University (http://www.unibo.it)

Conference on Applied Inverse Problems 2009, 23 July 2009

  • G. Landi (Bologna University)

Recent advances in optimization algorithms AIP 2009 1 / 35

SLIDE 2

Outline

  • Newton-like projection methods for a nonnegatively constrained minimization problem arising in astronomical image restoration
  • Convergence results
  • Numerical results

SLIDE 3

Image formation model

The mathematical model for image formation is

    g = A f + bg + w

where
  • g ∈ R^n is the detected image
  • A ∈ R^{n×n} is a block-Toeplitz matrix describing the blurring
  • bg ∈ R^n is the expected value (usually constant) of the background
  • w ∈ R^n is the noise
  • f ∈ R^n is the unknown image to be recovered
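The formation model above can be simulated directly. The sketch below is an illustrative stand-in, not the talk's actual operator: it assumes a Gaussian PSF applied as a periodic blur via FFTs, and draws Poisson-distributed counts so that g = A f + bg + w holds with w the realized noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n_side = 32
f_true = np.zeros((n_side, n_side))
f_true[12:20, 12:20] = 50.0          # a bright square "object" (made up)

# Periodic blur implemented with FFTs: A f = ifft2(fft2(psf) * fft2(f))
x = np.arange(n_side) - n_side // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))   # assumed Gaussian PSF
psf /= psf.sum()                      # normalize so the blur preserves total flux
psf = np.fft.ifftshift(psf)           # put the PSF center at pixel (0, 0)

Af = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(f_true)))

bg = 5.0                              # constant expected background
exact = Af + bg                       # noise-free data A f + bg
g = rng.poisson(exact).astype(float)  # Poisson-distributed counts
w = g - exact                         # realized noise, so g = A f + bg + w
```

With a normalized PSF the blur preserves the total flux of f, which is a quick sanity check on the periodic convolution.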

SLIDE 4

Image restoration problem

The problem of image restoration is to determine an approximation of f, given g, bg, A, and statistics for w.

The noise w
  ◮ is not given
  ◮ is the realization of an independent Poisson process

Moreover:
  • the pixel values of the image f are nonnegative
  • A is an ill-conditioned matrix

SLIDE 5

Image restoration problem

By using a maximum-likelihood approach¹, the image restoration problem can be reformulated as the nonnegatively constrained optimization problem

    min J(f) = J0(f) + λ JR(f)   s.t. f ≥ 0,

where
  • J0(f) is the Csiszár divergence:

        J0(f) = Σ_{j=1}^{n} [ g_j ln( g_j / ((A f)_j + bg) ) + (A f)_j + bg − g_j ]

  • JR(f) is the Tikhonov regularization functional:

        JR(f) = (1/2) ‖f‖²₂

  • λ is the regularization parameter

  • ¹ M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, 1998
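The objective can be written out in a few lines. In this sketch the blur is taken as the identity (A = I) purely to keep the demo self-contained; the function names are mine, not the talk's.

```python
import numpy as np

def csiszar_divergence(f, g, bg, A=lambda x: x):
    """J0(f) = sum_j [ g_j ln(g_j / ((Af)_j + bg)) + (Af)_j + bg - g_j ]."""
    m = A(f) + bg
    return np.sum(g * np.log(g / m) + m - g)

def tikhonov(f):
    """JR(f) = 0.5 * ||f||_2^2."""
    return 0.5 * np.sum(f**2)

def objective(f, g, bg, lam, A=lambda x: x):
    return csiszar_divergence(f, g, bg, A) + lam * tikhonov(f)

g = np.array([4.0, 7.0, 2.0])
bg = 1.0
f_exact = g - bg            # with A = I this makes A f + bg = g exactly
print(csiszar_divergence(f_exact, g, bg))   # -> 0.0 at a perfect fit
```

The divergence is nonnegative and vanishes only when A f + bg matches the data g, which is why it serves as the data-fidelity term.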

SLIDE 9

Proposed Newton-like projection methods

The proposed Newton-like projection methods have the general form

    f^{k+1} = [f^k − α_k p^k]_+

where
  • [·]_+ denotes the projection onto the nonnegative orthant
  • α_k is the step-length
  • p^k is the search direction
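In NumPy the projected update is a one-liner, since the projection onto the nonnegative orthant is an elementwise maximum with zero (the vectors below are made-up illustrations):

```python
import numpy as np

def projected_step(f, p, alpha):
    # f_{k+1} = [f_k - alpha_k p_k]_+
    return np.maximum(f - alpha * p, 0.0)

f = np.array([3.0, 0.5, 2.0])
p = np.array([1.0, 2.0, -1.0])
print(projected_step(f, p, 1.0))   # -> [2. 0. 3.]
```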

SLIDE 10

Search direction computation

Define the set of indices

    I_k = { j = 1, …, n | f_j^k = 0 and ∂J(f^k)/∂f_j > 0 }

The computation of the search direction p^k takes the following steps:

  1. Computation of the reduced gradient g_I^k:

         (g_I^k)_j = ∂J(f^k)/∂f_j, if j ∉ I_k;   0, otherwise;   j = 1, …, n.

  2. Computation of the solution z of the linear system ∇²J^k z = g_I^k.

  3. Computation of p^k such that:

         p_j^k = z_j, if j ∉ I_k;   ∂J(f^k)/∂f_j, otherwise;   j = 1, …, n.
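The three steps can be sketched for a small dense problem. Here grad and hess stand in for ∇J(f^k) and ∇²J^k, and the values are made up for illustration; in the talk the solve is done by CG or FFTs rather than a direct factorization.

```python
import numpy as np

def search_direction(f, grad, hess):
    # Active set I_k: zero components whose partial derivative is positive.
    active = (f == 0) & (grad > 0)

    # Step 1: reduced gradient (zeroed on the active set).
    g_I = np.where(active, 0.0, grad)

    # Step 2: solve the system with the Hessian (direct solve in this sketch).
    z = np.linalg.solve(hess, g_I)

    # Step 3: take z on the free set, the raw partial derivative on I_k.
    return np.where(active, grad, z)

f = np.array([2.0, 0.0, 1.0])
grad = np.array([1.0, 3.0, -2.0])
hess = np.diag([2.0, 2.0, 2.0])
p = search_direction(f, grad, hess)
print(p)   # components: 0.5, 3.0, -1.0
```

Note that the active components of p^k are simply the partial derivatives, so the subsequent projection can only push those variables further onto the bound.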

SLIDE 15

Step-length computation

The step-length α_k is computed with the modified Armijo rule²: α_k is the first number of the sequence {2^{−m}}_{m∈N} such that

    J(f^k) − J(f^k(2^{−m})) ≥ η [ 2^{−m} Σ_{j∉I_k} ∇J_j^k p_j^k + Σ_{j∈I_k} ∇J_j^k ( f_j^k − f_j^k(2^{−m}) ) ]

where
  ◮ f^k(2^{−m}) = [f^k − 2^{−m} p^k]_+
  ◮ η ∈ (0, 1/2)

  • ² D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 2003
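A backtracking sketch of this rule follows. The one-variable quadratic toy objective and the steepest-descent test direction are illustrative stand-ins, not the talk's restoration objective:

```python
import numpy as np

def modified_armijo(J, grad, f, p, active, eta=0.25, max_m=30):
    # Returns the first 2^-m satisfying the modified Armijo condition.
    Jf = J(f)
    for m in range(max_m):
        t = 2.0 ** (-m)
        f_t = np.maximum(f - t * p, 0.0)          # f^k(2^-m) = [f^k - 2^-m p^k]_+
        rhs = eta * (t * np.sum(grad[~active] * p[~active])
                     + np.sum(grad[active] * (f[active] - f_t[active])))
        if Jf - J(f_t) >= rhs:
            return t
    return 2.0 ** (-max_m)

# Toy problem: J(f) = 0.5 (f - 1)^2, minimized at f = 1, eta in (0, 1/2).
J = lambda f: 0.5 * np.sum((f - 1.0) ** 2)
f = np.array([3.0])
grad = f - 1.0                  # gradient of the toy J at f
p = grad.copy()                 # steepest-descent direction as the test direction
active = np.zeros(1, dtype=bool)
alpha = modified_armijo(J, grad, f, p, active)
```

On this toy problem the full step t = 1 already satisfies the condition, so no backtracking occurs.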

SLIDE 16

Step-length computation

In the modified Armijo condition, the two sums distinguish two classical rules:

  ◮ if j ∉ I_k ⇒ Armijo rule
  ◮ if j ∈ I_k ⇒ Armijo rule along the projection arc

SLIDE 17

Algorithm: Newton-like projection methods

Given f^0 ≥ 0 and η ∈ (0, 1/2), the basic algorithm is as follows:

Repeat until convergence

  1. Computation of the search direction p^k
     1.1 Compute the reduced gradient g_I^k;
     1.2 Solve ∇²J^k z = g_I^k;
     1.3 Compute p_j^k = z_j, if j ∉ I_k; ∇J_j^k, otherwise; j = 1, …, n;

  2. Computation of the step-length α_k
     Find the smallest number m ∈ N satisfying

         J(f^k) − J(f^k(2^{−m})) ≥ η [ 2^{−m} Σ_{j∉I_k} ∇J_j^k p_j^k + Σ_{j∈I_k} ∇J_j^k ( f_j^k − f_j^k(2^{−m}) ) ]

     and set α_k = 2^{−m};

  3. Update
     Set f^{k+1} = [f^k − α_k p^k]_+

end
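The full loop can be sketched end to end. The quadratic surrogate J(f) = ½ fᵀAf − bᵀf below is an assumed stand-in (chosen so the exact Hessian A is available in closed form), not the Csiszár objective of the talk; b is picked so the constraint f ≥ 0 becomes active.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -4.0])               # pushes the second component negative

J = lambda f: 0.5 * f @ A @ f - b @ f
grad = lambda f: A @ f - b

f = np.ones(2)
eta = 0.25
for k in range(50):
    g = grad(f)
    active = (f == 0) & (g > 0)                       # I_k
    g_I = np.where(active, 0.0, g)                    # step 1.1
    z = np.linalg.solve(A, g_I)                       # step 1.2 (Hessian is A)
    p = np.where(active, g, z)                        # step 1.3
    alpha = 1.0                                       # step 2: modified Armijo
    while True:
        f_new = np.maximum(f - alpha * p, 0.0)
        rhs = eta * (alpha * np.sum(g[~active] * p[~active])
                     + np.sum(g[active] * (f[active] - f_new[active])))
        if J(f) - J(f_new) >= rhs or alpha < 1e-12:
            break
        alpha *= 0.5
    f = f_new                                         # step 3
    if np.linalg.norm(np.maximum(f - grad(f), 0.0) - f) < 1e-10:
        break

print(f)   # the second component is pinned at the bound f >= 0
```

For this surrogate the constrained minimizer is (1/3, 0): the unconstrained solution of A f = b has a negative second component, and the iteration identifies that coordinate as active and converges on the free one.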

SLIDE 18

The computational kernel of the proposed Newton-like projection methods is the solution of the linear system

    ∇²J^k z = g_I^k

Different solution strategies give different Newton-like projection methods. In order to develop efficient methods, we look for a Block Circulant with Circulant Blocks (BCCB) approximation to ∇²J(f).

SLIDE 21

A BCCB approximation to ∇²J

The Hessian matrix of J is given by

    ∇²J(f) = Aᵀ C^k A + λ I

where
  • A is a Block Toeplitz with Toeplitz Blocks (BTTB) matrix
  • C^k = diag(c_1^k, …, c_n^k) is the diagonal matrix with entries

        c_i^k = (∇J^k)_i / (A f^k + bg)_i

  • I is the identity matrix
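For CG on the INP side only Hessian-vector products are needed, never the matrix itself. The sketch below applies v → (Aᵀ C_k A + λI) v with A as a periodic (circulant) blur via FFTs; the Gaussian PSF and the positive diagonal c_diag are made-up stand-ins for the talk's A and c_i^k.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()
psf_hat = np.fft.fft2(np.fft.ifftshift(psf))    # spectrum of the blur A
c_diag = 1.0 + rng.random((n, n))               # stand-in for the c_i^k (positive)
lam = 0.1

def hessian_matvec(v):
    Av = np.real(np.fft.ifft2(psf_hat * np.fft.fft2(v)))                  # A v
    AtCAv = np.real(np.fft.ifft2(np.conj(psf_hat)                         # A^T C_k (A v)
                                 * np.fft.fft2(c_diag * Av)))
    return AtCAv + lam * v

# The operator is symmetric and positive definite, as CG requires:
u, v = rng.standard_normal((2, n, n))
sym_gap = abs(np.sum(u * hessian_matvec(v)) - np.sum(v * hessian_matvec(u)))
```

The conjugated spectrum implements Aᵀ for a real periodic blur, and the λI shift keeps the operator positive definite even where c_i^k is small.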

SLIDE 24

A BCCB approximation to ∇²J

Consider Q, a BCCB approximation to A, and

    C̄^k = c̄^k I,   c̄^k = mean(c_1^k, …, c_n^k)

Then

    H_k = Qᵀ C̄^k Q + λ I

is a BCCB approximation to ∇²J:  H_k ∼ ∇²J(f). The BCCB matrix H_k can be easily inverted by FFTs.
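"Inverted by FFTs" in practice: since C̄_k = c̄_k I, the matrix H_k = c̄_k QᵀQ + λI is diagonalized by the 2-D DFT, so solving H_k z = g_I costs one forward FFT, an elementwise division by the eigenvalues, and one inverse FFT. The PSF defining Q and the right-hand side below are illustrative stand-ins.

```python
import numpy as np

n = 16
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / 8.0)    # assumed PSF defining the BCCB Q
psf /= psf.sum()
q_hat = np.fft.fft2(np.fft.ifftshift(psf))   # spectrum (eigenvalues) of Q

c_bar = 2.0                                  # stand-in for mean of the c_i^k
lam = 0.1

def solve_bccb(g_I):
    eigvals = c_bar * np.abs(q_hat) ** 2 + lam   # eigenvalues of H_k in Fourier space
    return np.real(np.fft.ifft2(np.fft.fft2(g_I) / eigvals))

rng = np.random.default_rng(2)
g_I = rng.standard_normal((n, n))
z = solve_bccb(g_I)

# Residual check: apply H_k = c_bar Q^T Q + lam I to z and compare with g_I.
Qz = np.real(np.fft.ifft2(q_hat * np.fft.fft2(z)))
Hz = c_bar * np.real(np.fft.ifft2(np.conj(q_hat) * np.fft.fft2(Qz))) + lam * z
```

The λ shift keeps every eigenvalue bounded away from zero, so the division is always well posed even where the PSF spectrum nearly vanishes.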

SLIDE 26

The inexact Newton projection method

In the Inexact Newton Projection (INP) method the linear system

    ∇²J^k z = g_I^k

is inexactly solved by
  ◮ the Conjugate Gradient (CG) method, or
  ◮ the Preconditioned Conjugate Gradient (PCG) method, where the PCG preconditioner is the matrix H_k

SLIDE 27

The quasi-Newton projection method

In the Quasi-Newton Projection (QNP) method the linear system ∇²J^k z = g_I^k is approximated by the linear system

    H_k z = g_I^k

and z is computed by inverting H_k by means of FFTs.

SLIDE 28

Bertsekas' Newton-like projection methods

Bertsekas³ proposed a class of Newton-like projection methods of the general form

    f^{k+1} = [f^k − α_k p^k]_+

where
  • α_k is the step-length computed with the modified Armijo rule
  • p^k = D^k ∇J^k, where D^k is diagonal with respect to I_k

Definition
The matrix D^k is diagonal with respect to I_k if:

    D_ij^k = δ_ij, if either i ∈ I_k or j ∈ I_k;   d_ij^k, otherwise.

  • ³ D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), 221-245

SLIDE 29

Convergence

For both the INP method and the QNP method it can be proved⁴⁵ that there exists a matrix D^k, diagonal with respect to I_k, such that the search direction p^k of the INP and QNP methods can be expressed as p^k = D^k ∇J^k.

⇒ The INP and QNP methods belong to the class of Bertsekas' Newton-like projection methods.

  • ⁴ G. Landi and E. Loli Piccolomini, Num. Alg., 48:279-300, 2008
  • ⁵ G. Landi and E. Loli Piccolomini, Tech. Rep.

SLIDE 31

Convergence

Theorem (D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), 221-245)
If there exist two positive scalars μ₁ and μ₂ such that

    μ₁ ‖y‖² ≤ yᵀ D^k y ≤ μ₂ ‖y‖²,   ∀ y ∈ R^n,  k = 0, 1, …

then every limit point of a sequence {f^k} generated by the iteration f^{k+1} = [f^k − α_k D^k ∇J^k]_+ is a critical point of the problem

    min J(f) = J0(f) + λ JR(f)   s.t. f ≥ 0

SLIDE 32

Convergence

For both the INP method and the QNP method it can be proved⁶⁷ that there exist two positive scalars μ₁ and μ₂ such that

    μ₁ ‖y‖² ≤ yᵀ D^k y ≤ μ₂ ‖y‖²,   ∀ y ∈ R^n,  k = 0, 1, …

⇒ Following the convergence analysis of Bertsekas' Newton-like projection methods, it can be proved that every limit point of a sequence {f^k} generated by the INP method or by the QNP method is a critical point of the problem

    min J(f) = J0(f) + λ JR(f)   s.t. f ≥ 0

  • ⁶ G. Landi and E. Loli Piccolomini, Num. Alg., 48:279-300, 2008
  • ⁷ G. Landi and E. Loli Piccolomini, Tech. Rep.

SLIDE 34

Numerical experiments

We compare:

  • the Inexact Newton Projection (INP) method, where the linear system ∇²J^k z = g_I^k is solved by
      ◮ the CG method (INP-CG)
      ◮ the PCG method with H_k as preconditioner (INP-PCG)
  • the Quasi-Newton Projection (QNP) method
  • the original Bertsekas' Newton Projection (BNP) method, where the linear system (D^k)^{−1} p^k = ∇J^k is solved by the CG method
  • the Gradient Projection (GP) method

SLIDE 35

Stopping criteria

Define the projected gradient of J at f^k:

    (g_P^k)_i = g_i^k, if f_i^k > 0;
                g_i^k, if f_i^k = 0 and g_i^k < 0;
                0,     otherwise.

The iterations of the previous algorithms are terminated according to the following stopping criteria:

  1. ‖g_P^k‖₂ / ‖g_P^0‖₂ ≤ 10⁻⁶
  2. ‖f^{k+1} − f^k‖₂ / ‖f^{k+1}‖₂ ≤ 10⁻⁵
  3. k > 100
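The projected gradient and the three tests translate directly into code; the tolerances follow the slide, while the small vectors are made-up illustrations.

```python
import numpy as np

def projected_gradient(f, g):
    gP = np.where(f > 0, g, 0.0)                  # free components keep g_i
    gP = np.where((f == 0) & (g < 0), g, gP)      # descent components at the bound
    return gP

def stop(f_new, f_old, gP, gP0, k, tol_g=1e-6, tol_f=1e-5, k_max=100):
    crit1 = np.linalg.norm(gP) / np.linalg.norm(gP0) <= tol_g
    crit2 = np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new) <= tol_f
    return crit1 or crit2 or k > k_max

f = np.array([2.0, 0.0, 0.0])
g = np.array([0.5, -1.0, 3.0])
print(projected_gradient(f, g))   # components: 0.5, -1.0, 0.0
```

The projected gradient vanishes exactly at points satisfying the first-order optimality conditions of the constrained problem, which is why its norm is a natural convergence measure.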

SLIDE 37

CG stopping criteria

At each iteration of both the INP and BNP methods, it is necessary to solve a linear system with the CG method. The CG method is stopped with a relative precision of τ_CG = 0.9 and a maximum of K_CG = 5 iterations. These values have been chosen in order to optimize the performance of the INP and BNP methods.

SLIDE 41

Initialization

The regularization parameter λ is chosen heuristically. The initial iterate f^0 is the constant image

    f^0 = ( Σ_{i=1}^{N_x} Σ_{j=1}^{N_y} (g_ij − bg) ) / (N_x N_y)

where N_x × N_y is the image size.
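The starting guess is just the mean of the background-subtracted data, broadcast to every pixel (the 2 × 2 data below are made up):

```python
import numpy as np

def initial_guess(g, bg):
    # Constant image: every pixel of f^0 equals mean(g - bg).
    return np.full_like(g, np.mean(g - bg))

g = np.array([[6.0, 8.0], [10.0, 12.0]])
f0 = initial_guess(g, bg=2.0)
print(f0)   # every entry equals (4 + 6 + 8 + 10) / 4 = 7
```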

SLIDE 42

Astronomical image restoration experiment

The VLT image of the Crab Nebula. (Figure: observed image = blurred true image + noise.)

  • g: observed image
  • f: true image
  • A: Adaptive Optics PSF
  • w: Poisson noise, SNR = 44.23

SLIDE 43

Relative reconstruction error

    Method   | Rel. Err. | k  | FFTs
    QNP      | 0.161266  | 3  | 12
    INP-PCG  | 0.163328  | 16 | 194
    INP-CG   | 0.163513  | 61 | 774

SLIDE 44

Relative reconstruction error

    Method   | Rel. Err. | k   | FFTs
    QNP      | 0.161266  | 3   | 12
    INP-PCG  | 0.163328  | 16  | 194
    INP-CG   | 0.163513  | 61  | 774
    BNP      | 0.163770  | 60  | 754
    GP       | 0.166881  | 291 | 580

SLIDE 45

Projected gradient behavior

    Method   | ‖g_P‖    | k   | FFTs
    QNP      | 0.001949 | 54  | 470
    INP-PCG  | 0.000883 | 51  | 698
    INP-CG   | 0.003690 | 61  | 774
    BNP      | 0.004071 | 60  | 754
    GP       | 0.041237 | 291 | 580

SLIDES 46-52

QNP iterations

(Figures: the QNP reconstructed image and its relative error at iterations k = 0, 1, 2, 3, 4, 5, 6.)

SLIDE 53

Astronomical image restoration experiment

(Figure: blurry and noisy observed image, SNR = 38.20.) True image: an HST image of the galaxy NGC 1288.

SLIDE 54

Astronomical image restoration experiment

(Figure: image obtained by the QNP method, rel. err. = 0.266625.) True image: an HST image of the galaxy NGC 1288.

SLIDE 55

Relative reconstruction error

    Method   | Rel. Err. | k   | FFTs
    QNP      | 0.266625  | 12  | 136
    INP-PCG  | 0.273944  | 99  | 1954
    INP-CG   | 0.274099  | 100 | 2478

SLIDE 56

Relative reconstruction error

    Method   | Rel. Err. | k   | FFTs
    QNP      | 0.266625  | 12  | 136
    INP-PCG  | 0.273944  | 99  | 1954
    INP-CG   | 0.274099  | 100 | 2478
    BNP      | 0.274109  | 100 | 1354
    GP       | 0.276440  | 254 | 506

SLIDE 57

Projected gradient behavior

    Method   | ‖g_P‖    | k   | FFTs
    QNP      | 0.856989 | 100 | 1362
    INP-PCG  | 0.632781 | 100 | 1974
    INP-CG   | 0.016679 | 100 | 2478
    BNP      | 0.092587 | 100 | 1354
    GP       | 0.074514 | 255 | 509

SLIDE 58

Conclusions

Newton-like projection methods have been used for the solution of a nonnegatively constrained optimization problem arising in astronomical image restoration applications. A BCCB approximation to the true Hessian of the objective function has been introduced. The numerical results show that this approximation can significantly improve the efficiency and effectiveness of the algorithms, since:

  ◮ the best results, both in terms of reconstruction error reduction and computational efficiency, are obtained with the QNP method;
  ◮ in the INP method, the BCCB approximation can be used as an efficient preconditioner for the CG method.

SLIDE 59

For Further Reading

  • M. Bertero and P. Boccacci. Introduction to Inverse Problems in Imaging. IoP, 1998.
  • D. P. Bertsekas. Projected Newton Methods for Optimization Problems with Simple Constraints. SIAM J. Control and Optimization, 20:221-246, 1982.
  • G. Landi and E. Loli Piccolomini. A projected Newton-CG method for nonnegative astronomical image deblurring. Numerical Algorithms, 48:279-300, 2008.
  • G. Landi and E. Loli Piccolomini. A quasi-Newton projection method for nonnegatively constrained image deblurring. Technical Report, Bologna University.

SLIDE 60

Bertsekas' Newton projection method

    f^{k+1} = [f^k − α_k p^k]_+,   p^k = D^k ∇J^k

D^k is diagonal with respect to I_k. By rearranging indices, suppose that I_k = {r+1, …, n} and partition the Hessian as

    ∇²J^k = [ E_1^k      E_3^k
              (E_3^k)ᵀ   E_2^k ],   E_1^k ∈ R^{r×r},  E_2^k ∈ R^{(n−r)×(n−r)}

The matrix D^k is chosen as

    D^k = [ (E_1^k)^{−1}  0
            0             I_{n−r} ]

and p^k is computed by solving the linear system (D^k)^{−1} p^k = ∇J^k.

SLIDE 61

Gradient Projection method

The gradient projection method has the form

    f^{k+1} = [f^k − α_k ∇J^k]_+

where α_k is computed with the traditional Armijo rule along the projection arc:

    J(f^k) − J(f^k(2^{−m})) ≥ η (∇J^k)ᵀ ( f^k − f^k(2^{−m}) )
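Gradient projection with this line search can be sketched on a separable quadratic. The objective J(f) = ½ ‖f − c‖² is an assumed stand-in whose constrained minimizer is [c]_+, which makes the result easy to verify:

```python
import numpy as np

c = np.array([2.0, -1.0, 0.5])
J = lambda f: 0.5 * np.sum((f - c) ** 2)
grad = lambda f: f - c

eta = 0.25
f = np.ones(3)
for k in range(100):
    g = grad(f)
    m = 0
    while True:
        t = 2.0 ** (-m)
        f_t = np.maximum(f - t * g, 0.0)           # f^k(2^-m) on the projection arc
        if J(f) - J(f_t) >= eta * g @ (f - f_t) or m > 30:
            break
        m += 1
    f = f_t
    if np.linalg.norm(np.maximum(f - grad(f), 0.0) - f) < 1e-12:
        break

print(f)   # converges to [c]_+ = (2, 0, 0.5)
```

Note the right-hand side uses the actual displacement f^k − f^k(2^{−m}) rather than 2^{−m} ∇J^k, which is what makes the test valid along the projection arc.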