Scaled gradient projection methods in image deblurring and denoising

Mario Bertero¹, Patrizia Boccacci¹, Silvia Bonettini², Riccardo Zanella³, Luca Zanni³

¹ Dipartimento di Matematica, Università di Genova
² Dipartimento di Matematica, Università di Ferrara
³ Dipartimento di Matematica, Università di Modena e Reggio Emilia

Conference on Applied Inverse Problems, Vienna July 20–24 2009

Zanella (UniMoRe) Gradient projection methods in imaging AIP 2009 1 / 26


Outline

1. Examples of imaging problems
2. Optimization problem
3. Gradient methods and step-length selections
4. Scaled Gradient Projection (SGP) method
5. Test results
6. Conclusions and future works


Image Deblurring example

Image acquisition model: y = Hx + b + n, where:
- y ∈ R^n observed image,
- H ∈ R^(n×n) blurring operator,
- b ∈ R^n background radiation,
- n ∈ R^n unknown noise.

Goal: find an "approximation" of the true image x ∈ R^n.

Maximum Likelihood Approach (and early stopping)

min L_y(x)  sub. to x ∈ Ω

Image Denoising example

Image acquisition model: y = x + n, where:
- y ∈ R^n observed image,
- n ∈ R^n unknown noise.

Goal: remove noise from y ∈ R^n, while preserving some features.

Regularized Approach

min J_y^(0)(x) + µ J_R(x)  sub. to x ∈ Ω

where J_R(x) is (for example): ‖x‖_2² Tikhonov, ‖x‖_1 sparsity inducing, ∫_Ω |∇x| Total Variation.


Problem setting

Both examples lead to a constrained optimization problem:

min f(x)  sub. to x ∈ Ω

where Ω is a convex and closed set and f(x) is continuously differentiable on Ω.


Why gradient type methods?

Gradient methods are first order optimization methods.

Pros:
- simplicity of implementation (first order iterative method)
- low memory requirements (suitable for high dimensional problems)
- ability to provide medium-accuracy solutions
- semiconvergence observed in numerical practice

Cons:
- low convergence rate (hundreds or thousands of iterations)


The Barzilai-Borwein (BB) step-length selection rules

Consider the gradient method: x^(k+1) = x^(k) − α_k g^(k), k = 0, 1, ..., with g(x) = ∇f(x).


Problem:

How can the step-length α_k > 0 be chosen to improve the convergence rate?


Solution:

Regard the matrix B(α_k) = (α_k I)^(−1) as an approximation of the Hessian ∇²f(x^(k)). Determine α_k by forcing a quasi-Newton property on B(α_k):

α_k^BB1 = argmin_{α ∈ R} ‖B(α) s^(k−1) − z^(k−1)‖   or   α_k^BB2 = argmin_{α ∈ R} ‖s^(k−1) − B(α)^(−1) z^(k−1)‖,

where s^(k−1) = x^(k) − x^(k−1) and z^(k−1) = g^(k) − g^(k−1).


The BB step-length selection rules (cont.)

It follows that:

α_k^BB1 = (s^(k−1)^T s^(k−1)) / (s^(k−1)^T z^(k−1))   or   α_k^BB2 = (s^(k−1)^T z^(k−1)) / (z^(k−1)^T z^(k−1)),

where s^(k−1) = x^(k) − x^(k−1) and z^(k−1) = g^(k) − g^(k−1).

Remarkable improvements over the steepest descent method are observed:

[Barzilai-Borwein, IMA J. Num. Anal. 1988] [Raydan, IMA J. Num. Anal. 1993] [Friedlander et al., SIAM J. Num. Anal. 1999] [Raydan, SIAM J. Optim. 1997] [Fletcher, Tech. Rep. 207, 2001] [Dai-Liao, IMA J. Num. Anal. 2002]
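The closed-form rules translate directly into code; a minimal NumPy sketch (the function name and the curvature safeguard s^T z > 0 are ours, not from the slides):

```python
import numpy as np

def bb_steplengths(x_prev, x_curr, g_prev, g_curr):
    """Barzilai-Borwein step lengths computed from s = x(k) - x(k-1)
    and z = g(k) - g(k-1); both are well defined when s'z > 0."""
    s = x_curr - x_prev
    z = g_curr - g_prev
    sz = s @ z
    if sz <= 0:                   # curvature test fails: no finite BB step
        return np.inf, np.inf
    alpha_bb1 = (s @ s) / sz      # BB1 = s's / s'z
    alpha_bb2 = sz / (z @ z)      # BB2 = s'z / z'z
    return alpha_bb1, alpha_bb2
```

By the Cauchy-Schwarz inequality, α^BB2 ≤ α^BB1 always holds, which is what the alternation strategies below exploit.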


Effective use of the BB rules

Further improvements are obtained by using adaptive alternations of the two BB rules; for example:

α_k = α_k^BB2 if α_k^BB2 / α_k^BB1 < τ,   α_k = α_k^BB1 otherwise.

Many suggestions for the alternation are available:

[Dai, Optim., 2003] [Dai-Fletcher, Math. Prog. 2005] [Serafini et al., Opt. Meth. Soft. 2005] [Dai et al., IMA J. Num. Anal. 2006] [Zhou et al., Comput. Opt. Appl., 2006] [Frassoldati et al., J. Ind. Manag. Opt. 2008]


The BB step-lengths and Scaled Gradient Methods

Consider the scaled gradient method: x^(k+1) = x^(k) − α_k D_k g^(k), k = 0, 1, ..., where D_k is a symmetric positive definite scaling matrix.

By forcing the quasi-Newton properties on B(α_k) = (α_k D_k)^(−1) we have

α_k^BB1 = (s^(k−1)^T D_k^(−1) D_k^(−1) s^(k−1)) / (s^(k−1)^T D_k^(−1) z^(k−1))   and   α_k^BB2 = (s^(k−1)^T D_k z^(k−1)) / (z^(k−1)^T D_k D_k z^(k−1)),

where s^(k−1) = x^(k) − x^(k−1) and z^(k−1) = g^(k) − g^(k−1).


Scaled Gradient Projection (SGP) method: basic notations

[Bonettini et al., Inv. Prob. 2009]

Scaling matrix: D_k ∈ D_L = {D s.p.d. ∈ R^(n×n) | ‖D‖ ≤ L, ‖D^(−1)‖ ≤ L}, L > 1; if D_k is diagonal, the requirement reduces to L^(−1) ≤ (D_k)_ii ≤ L.

Projection operator: P_{Ω,D}(x) ≡ argmin_{y ∈ Ω} ‖x − y‖_D, where ‖x‖_D = √(x^T D x).
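As a concrete illustration (ours, not from the slides): for box-like constraints such as the nonnegativity sets used in the experiments, and a diagonal scaling matrix, the ‖·‖_D projection is separable and reduces to componentwise clipping:

```python
import numpy as np

def project_nonneg(x):
    """P_{Omega,D}(x) for Omega = {x : x_i >= 0} and diagonal s.p.d. D:
    argmin_{y >= 0} sum_i d_i (x_i - y_i)^2 decouples per component,
    so the minimizer is max(x_i, 0) regardless of the weights d_i."""
    return np.maximum(x, 0.0)
```

For non-diagonal D or more general Ω the projection is a genuine quadratic program, which is why SGP restricts D_k to the compact set D_L above.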


Scaled Gradient Projection (SGP) method

Given 0 < α_min < α_max, line-search parameters β, γ ∈ (0, 1), and a positive integer M.

1. Initialization. Set x^(0) ∈ Ω, D_0 ∈ D_L, α_0 ∈ [α_min, α_max].

For k = 0, 1, 2, ...

2. Projection. y^(k) = P_{Ω,D_k^(−1)}(x^(k) − α_k D_k ∇f(x^(k))); if y^(k) = x^(k) then stop.

3. Descent direction. d^(k) = y^(k) − x^(k).

4. Line-search. Set λ_k = 1 and f̄ = max_{0 ≤ j ≤ min{k,M−1}} f(x^(k−j)).
   While f(x^(k) + λ_k d^(k)) > f̄ + γ λ_k ∇f(x^(k))^T d^(k): set λ_k = β λ_k.
   Set x^(k+1) = x^(k) + λ_k d^(k).

5. Update. Define D_(k+1) and α_(k+1) ∈ [α_min, α_max].

End for.
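A compact, illustrative Python sketch of the SGP loop (with placeholder choices on our part: identity diagonal scaling, a plain scaled-BB1 update, and nonnegativity constraints; the actual scaling and step-length rules of the method are given below):

```python
import numpy as np

def sgp(f, grad, x0, alpha0=1.0, alpha_min=1e-3, alpha_max=1e5,
        beta=0.4, gamma=1e-4, M=10, max_iter=100):
    """Scaled Gradient Projection sketch: projection, descent direction,
    nonmonotone line search, then step-length update."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    d_scale = np.ones_like(x)                     # diagonal of D_k (placeholder)
    alpha = alpha0
    f_hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        y = np.maximum(x - alpha * d_scale * g, 0.0)  # scaled projected point
        d = y - x                                     # descent direction
        if not d.any():
            break                                     # stationary: y(k) = x(k)
        # Nonmonotone Armijo line search over the last M function values
        f_ref = max(f_hist[-M:])
        lam = 1.0
        while f(x + lam * d) > f_ref + gamma * lam * (g @ d):
            lam *= beta
        x_new = x + lam * d
        # Safeguarded scaled BB1 step-length update (placeholder rule)
        s, z = x_new - x, grad(x_new) - g
        sz = (s / d_scale) @ z
        if sz > 0:
            alpha = min(alpha_max,
                        max(alpha_min, ((s / d_scale) @ (s / d_scale)) / sz))
        else:
            alpha = alpha_max
        x = x_new
        f_hist.append(f(x))
    return x
```

For a simple constrained quadratic, e.g. f(x) = ½‖x − c‖² with x ≥ 0, the loop converges to the clipped target max(c, 0).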


SGP acceleration techniques

The acceleration technique involves:
- selection of the step-length α_k: general algorithm;
- definition of the scaling matrix D_k: problem dependent (see the experiment section).


SGP step-length selection

Let α_min = 10^(−3), α_max = 10^5, M_α = 3, τ = 0.5.

if s^(k−1)^T D_k^(−1) z^(k−1) ≤ 0
    α_k^BB1 = α_max
else
    α = (s^(k−1)^T D_k^(−1) D_k^(−1) s^(k−1)) / (s^(k−1)^T D_k^(−1) z^(k−1))
    α_k^BB1 = min{α_max, max{α_min, α}}
end

if s^(k−1)^T D_k z^(k−1) ≤ 0
    α_k^BB2 = α_max
else
    α = (s^(k−1)^T D_k z^(k−1)) / (z^(k−1)^T D_k D_k z^(k−1))
    α_k^BB2 = min{α_max, max{α_min, α}}
end

if α_k^BB2 / α_k^BB1 < τ
    α_k = min{α_(k−j)^BB2, j = 0, ..., M_α − 1};  τ = 0.9 τ
else
    α_k = α_k^BB1;  τ = 1.1 τ
end
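The selection can be sketched in Python for a diagonal D_k (function name and the history-list handling are illustrative assumptions):

```python
import numpy as np

def sgp_steplength(s, z, d, bb2_history, tau,
                   alpha_min=1e-3, alpha_max=1e5, m_alpha=3):
    """Adaptive BB alternation: safeguarded scaled BB1/BB2, then pick the
    minimum of the last m_alpha BB2 values or BB1 by the ratio test.
    d is the diagonal of D_k; returns (alpha_k, updated tau)."""
    sz1 = (s / d) @ z                      # s' D^{-1} z
    bb1 = alpha_max if sz1 <= 0 else np.clip(
        (s / d) @ (s / d) / sz1, alpha_min, alpha_max)
    sz2 = s @ (d * z)                      # s' D z
    bb2 = alpha_max if sz2 <= 0 else np.clip(
        sz2 / ((d * z) @ (d * z)), alpha_min, alpha_max)
    bb2_history.append(bb2)
    if bb2 / bb1 < tau:
        alpha = min(bb2_history[-m_alpha:])
        tau *= 0.9                         # favor BB1 less often next time
    else:
        alpha = bb1
        tau *= 1.1
    return alpha, tau
```

With D_k = I this reduces to the unscaled alternation of the earlier slide.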


Convergence of SGP

min f(x)  sub. to x ∈ Ω    (1)

where Ω is a convex and closed set and f(x) is continuously differentiable on Ω.

Theorem

Assume that the level set Ω_0 = {x ∈ Ω : f(x) ≤ f(x^(0))} is bounded. Then every accumulation point of the sequence {x^(k)} generated by the SGP algorithm is a stationary point of (1).


Image Deblurring: Poisson noise

[Figure: object, blurred noisy image]

f(x) = D_KL(Hx + b, y) = Σ_{i=1}^n [ Σ_{j=1}^n H_ij x_j + b_i − y_i − y_i log( (Σ_{j=1}^n H_ij x_j + b_i) / y_i ) ]

Ω = {x ∈ R^n | x_i ≥ 0, i = 1, ..., n}

A suitable reconstruction is obtained by early stopping of the SGP iterations.
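For illustration, the generalized KL data term and its gradient can be coded as follows (dense H for clarity; practical codes apply H via FFT-based convolution, an assumption on our part):

```python
import numpy as np

def kl_objective_and_grad(x, H, b, y):
    """Generalized Kullback-Leibler divergence D_KL(Hx + b, y) and its
    gradient, the maximum-likelihood data term for Poisson noise.
    Assumes Hx + b > 0 and y > 0 elementwise."""
    hx = H @ x + b
    f = np.sum(hx - y - y * np.log(hx / y))
    # d/dx_j = sum_i H_ij (1 - y_i / (Hx + b)_i)
    grad = H.T @ (1.0 - y / hx)
    return f, grad
```

At an exact fit Hx + b = y the divergence vanishes and the gradient is zero, consistent with D_KL being nonnegative and minimized at equality.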


Image Deblurring: Poisson noise (II)

Algorithms:
- SGP: adaptive selection of α_k, scaling matrix D_k = min(L, max(diag(x^(k)), L^(−1))).
- EM: Richardson-Lucy or Expectation Maximization algorithm.
- EM_MATLAB: deconvlucy function, Matlab Image Processing Toolbox.
- WMRNSD: Weighted Minimum Residual Norm Steepest Descent [Bardsley-Nagy].

Algorithm     it. number   ℓ2 rel. err.   time [s]
SGP           26           0.069          1.61
EM            500          0.069          21.69
EM_MATLAB     44           0.070          2.64
WMRNSD        80           0.069          4.26

Test environment: Matlab 7.5.0 on an AMD Opteron Dual Core 2.4 GHz processor.


Image Deblurring: SGP reconstruction

[Figure: object, blurred noisy image, SGP reconstruction]


Image Denoising: Poisson noise

[Figure: object and noisy image]

f(x) = D_KL(x, y) + β TV(x),   Ω = {x ∈ R^n | x_i ≥ η, i = 1, ..., n}
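As a sketch of the TV term (the smoothing parameter δ is a standard assumption of ours to keep f differentiable, as SGP requires; it is not specified on the slide):

```python
import numpy as np

def total_variation(x_img, delta=1e-3):
    """Smoothed discrete total variation sum_ij sqrt(dx^2 + dy^2 + delta^2)
    with forward differences and replicated boundary; delta > 0 keeps the
    term differentiable (assumed smoothing, not from the slides)."""
    dx = np.diff(x_img, axis=1, append=x_img[:, -1:])  # horizontal differences
    dy = np.diff(x_img, axis=0, append=x_img[-1:, :])  # vertical differences
    return np.sum(np.sqrt(dx**2 + dy**2 + delta**2))
```

On a constant image both difference fields vanish, so the smoothed TV equals δ times the number of pixels.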


Image Denoising: Poisson noise (II)

Algorithms:
- SGP: adaptive selection of α_k, scaling matrix D_k = x^(k) / (1 + βV), with ∇f(x^(k)) = V − U, V_i ≥ 0 and U_i ≥ 0 [Lanteri et al., Inv. Prob. 2002].
- GP: adaptive selection of α_k, scaling matrix D_k = I.
- GP-BB: only α_k^BB1, scaling matrix D_k = I.

Algorithm     it. number   ℓ2 rel. err.   time [s]
SGP           148          0.025          14.30
GP            280          0.025          23.23
GP-BB         735          0.025          70.62

Test environment: Matlab 7.5.0 on an AMD Opteron Dual Core 2.4 GHz processor.

[Zanella et al., Inv. Prob. 2009]


Image Denoising: SGP reconstruction

[Figure: object, noisy image, SGP reconstruction]


An application in medical imaging

[Figure: object, noisy image, SGP reconstruction]

Image size: 512 × 512; parameter: β = 0.3. Noisy image relative error: 17.9%. Reconstructed image relative error: 2.9%. Computational time: 20.95 seconds (Matlab 7.5.0 on an AMD Opteron Dual Core 2.4 GHz processor).


GPU Implementation: Deblurring

                  CPU†                          GPU‡
N = n × n    it.   ℓ2 rel. err.   time [s]   it.   ℓ2 rel. err.   time [s]   Speedup
256²         29    0.070          0.72       29    0.071          0.05       14.7
512²         29    0.065          2.69       29    0.065          0.16       16.8
1024²        29    0.064          10.66      29    0.064          0.58       18.4
2048²        29    0.064          49.81      29    0.063          2.69       18.5

† C implementation:

Microsoft Visual Studio 2005, AMD Athlon X2 Dual-Core at 3.11GHz.

‡ C-CUDA implementation:

CUDA 2.0, NVIDIA GTX 280, AMD Athlon X2 Dual-Core at 3.11GHz.

[Ruggiero et al., J. Global Optim. 2009]


GPU Implementation: Denoising

                  CPU†                          GPU‡
N = n × n    it.   ℓ2 rel. err.   time [s]   it.   ℓ2 rel. err.   time [s]   Speedup
256²         154   0.025          1.97       224   0.025          0.20       9.9
512²         161   0.018          8.23       235   0.018          0.42       19.6
1024²        166   0.014          33.51      201   0.014          1.09       30.7
2048²        146   0.011          120.89     121   0.011          2.56       47.2

† C implementation:

Microsoft Visual Studio 2005, AMD Athlon X2 Dual-Core at 3.11GHz.

‡ C-CUDA implementation:

CUDA 2.0, NVIDIA GTX 280, AMD Athlon X2 Dual-Core at 3.11GHz.

[Serafini et al., ParCo 2009]


Other examples of applications

Least-squares minimization:
- F. Benvenuto: Iterative methods for constrained and regularized least-square problems, M20, 23 July, 15:15-17:15, C2

Sparsity constraints:
- C. De Mol: Iterative Algorithms for Sparse Recovery, M19, 21 July, 15:15-17:15, C2


Conclusions and Future Works

Conclusions: by exploiting both the scaling matrix and the Barzilai-Borwein step-length rules, the SGP method achieves a satisfactory reconstruction in a reasonable time; it is easy to implement and gives remarkable results on massively parallel architectures (GPUs).

Work in progress: comparative analysis
- TV image reconstruction [S. Wright, M39, 23 July, 10:30-12:30, D]: duality-based algorithms [Zhu-Wright, COAP 2008]; primal-dual approaches [Zhu-Chan, CAM Rep. UCLA 2008], [Lee-Wright, 2008]
- Regularized deblurring [G. Landi, M39, 23 July, 10:30-12:30, D]: quasi-Newton approaches [Landi-Loli Piccolomini, Num. Alg. 2008]
