Recent advances in optimization algorithms for image deblurring and denoising
G. Landi, E. Loli Piccolomini, F. Zama
Department of Mathematics, Bologna University, http://www.unibo.it
Conference on Applied Inverse Problems (AIP 2009), 23 July 2009


  1. Recent advances in optimization algorithms for image deblurring and denoising. G. Landi, E. Loli Piccolomini, F. Zama. Department of Mathematics, Bologna University, http://www.unibo.it. Conference on Applied Inverse Problems 2009, 23 July 2009.

  2. Outline: Newton-like projection methods for a nonnegatively constrained minimization problem arising in astronomical image restoration; convergence results; numerical results.

  3. Image formation model. The mathematical model for image formation is
$$A f + bg + w = g,$$
where $g \in \mathbb{R}^n$ is the detected image, $A \in \mathbb{R}^{n \times n}$ is a block-Toeplitz matrix describing the blurring, $bg \in \mathbb{R}^n$ is the expected value (usually constant) of the background, $w \in \mathbb{R}^n$ is the noise, and $f \in \mathbb{R}^n$ is the unknown image to be recovered.
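As an illustration only, the forward model can be simulated in a few lines of NumPy/SciPy. A Gaussian blur stands in for the block-Toeplitz operator $A$, the image size and parameter values are assumptions not taken from the slides, and the noisy data are drawn as $g \sim \mathrm{Poisson}(Af + bg)$, the usual reading of a Poisson noise model:

```python
# Minimal sketch of the image formation model  A f + bg + w = g.
# Assumptions: a Gaussian PSF plays the role of the block-Toeplitz blur,
# and the noisy data are drawn as g ~ Poisson(A f + bg).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

f_true = np.zeros((64, 64))
f_true[20:44, 20:44] = 100.0        # hypothetical nonnegative test object

def A(f):
    """Blurring operator (Gaussian blur as a stand-in for A)."""
    return gaussian_filter(f, sigma=2.0)

bg = 10.0                            # constant expected background
g = rng.poisson(A(f_true) + bg).astype(float)   # detected image
```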

  4. Image restoration problem. The problem of image restoration is to determine an approximation of $f$ given $g$, $bg$, $A$, and statistics for $w$. Note that the noise $w$ is not given: it is the realization of an independent Poisson process. Moreover, the pixel values of the image $f$ are nonnegative, and $A$ is an ill-conditioned matrix.

  5. Image restoration problem. By using a maximum-likelihood approach [1], the image restoration problem can be reformulated as the nonnegatively constrained optimization problem
$$\min\ J(f) = J_0(f) + \lambda J_R(f) \quad \text{s.t. } f \ge 0,$$
where $J_0(f)$ is the Csiszár divergence
$$J_0(f) = \sum_{j=1}^{n} \left[ g_j \ln \frac{g_j}{(Af)_j + bg} + (Af)_j + bg - g_j \right],$$
$J_R(f)$ is the Tikhonov regularization functional $J_R(f) = \frac{1}{2}\|f\|_2^2$, and $\lambda$ is the regularization parameter. [1] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, 1998.

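A direct transcription of the objective is straightforward. The sketch below, which reuses the hypothetical `A`, `bg`, and `g` from the earlier simulation sketch, evaluates $J(f) = J_0(f) + \lambda J_R(f)$ and its gradient $\nabla J(f) = A^T\!\big(1 - g/(Af + bg)\big) + \lambda f$; for the symmetric Gaussian stand-in, $A^T = A$:

```python
# Sketch of J(f) = J0(f) + lam * JR(f) and its gradient, assuming the
# operator A, background bg, and data g defined in the previous sketch.
import numpy as np

def J0(f, g, A, bg):
    """Csiszar divergence; pixels with g_j = 0 contribute only (Af)_j + bg."""
    Af = A(f) + bg
    mask = g > 0
    return np.sum(Af - g) + np.sum(g[mask] * np.log(g[mask] / Af[mask]))

def J(f, g, A, bg, lam):
    return J0(f, g, A, bg) + 0.5 * lam * np.sum(f ** 2)

def grad_J(f, g, A, bg, lam):
    """Gradient A^T (1 - g / (A f + bg)) + lam * f; A is self-adjoint here."""
    return A(1.0 - g / (A(f) + bg)) + lam * f
```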

  6. Proposed Newton-like projection methods. The proposed Newton-like projection methods have the general form
$$f^{k+1} = [f^k - \alpha_k p^k]_+,$$
where $[\,\cdot\,]_+$ denotes the projection onto the nonnegative orthant, $\alpha_k$ is the step length, and $p^k$ is the search direction.
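The projection itself is just componentwise clipping; a one-line sketch:

```python
import numpy as np

def projected_step(f, p, alpha):
    """One update f_{k+1} = [f_k - alpha_k p_k]_+ ; np.maximum with 0
    implements the projection onto the nonnegative orthant."""
    return np.maximum(f - alpha * p, 0.0)
```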

  7. Search direction computation. Define the set of active indices
$$\mathcal{I}_k = \left\{ j = 1, \ldots, n \;\middle|\; f^k_j = 0 \text{ and } \frac{\partial J(f^k)}{\partial f_j} > 0 \right\}.$$
The computation of the search direction $p^k$ takes the following steps:
1. Computation of the reduced gradient $g^k_{\mathcal{I}}$:
$$\left( g^k_{\mathcal{I}} \right)_j = \begin{cases} \dfrac{\partial J(f^k)}{\partial f_j}, & \text{if } j \notin \mathcal{I}_k; \\ 0, & \text{otherwise}; \end{cases} \qquad j = 1, \ldots, n.$$
2. Computation of the solution $z$ of the linear system $\nabla^2 J_k\, z = g^k_{\mathcal{I}}$.
3. Computation of $p^k$ as
$$p^k_j = \begin{cases} z_j, & \text{if } j \notin \mathcal{I}_k; \\ \dfrac{\partial J(f^k)}{\partial f_j}, & \text{otherwise}; \end{cases} \qquad j = 1, \ldots, n.$$

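These three steps translate almost verbatim into code. In the sketch below the image is treated as a flat vector and a small dense Hessian is assumed, so the Newton system can be solved directly; the slides do not specify the linear solver, so `np.linalg.solve` is a placeholder for whatever (typically iterative) method is used in practice:

```python
# Sketch of the search-direction computation, following steps 1-3 above.
import numpy as np

def search_direction(f, grad, hess, tol=1e-12):
    # Active set I_k: components at the bound whose gradient points outward.
    active = (f <= tol) & (grad > 0)

    # Step 1: reduced gradient, zeroed on the active set.
    g_red = np.where(active, 0.0, grad)

    # Step 2: solve the Newton system (nabla^2 J_k) z = g_I^k.
    z = np.linalg.solve(hess, g_red)

    # Step 3: Newton step on the free variables, gradient on the active ones.
    return np.where(active, grad, z)
```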

  8. Step-length computation. The step length $\alpha_k$ is computed with the modified Armijo rule [2]: $\alpha_k$ is the first number of the sequence $\{2^{-m}\}_{m \in \mathbb{N}}$ such that
$$J(f^k) - J\!\left( f^k(2^{-m}) \right) \ge \eta \left[ 2^{-m} \sum_{j \notin \mathcal{I}_k} (\nabla J_k)_j\, p^k_j + \sum_{j \in \mathcal{I}_k} (\nabla J_k)_j \left( f^k_j - f^k_j(2^{-m}) \right) \right],$$
where $f^k(2^{-m}) = [f^k - 2^{-m} p^k]_+$ and $\eta \in (0, \frac{1}{2})$. [2] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 2003.

  9. Step-length computation (continued). In the Armijo condition above, for $j \notin \mathcal{I}_k$ the test reduces to the standard Armijo rule, while for $j \in \mathcal{I}_k$ it is the Armijo rule along the projection arc [2].
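A backtracking implementation of this rule might look as follows; `J_fun` and the `active` index set are assumed to be supplied by the caller, and the `max_m` safeguard is an assumption not present on the slides:

```python
# Sketch of the modified Armijo rule: alpha_k is the first power 2^{-m}
# satisfying the mixed sufficient-decrease condition above.
import numpy as np

def armijo_step(f, p, grad, J_fun, active, eta=0.25, max_m=30):
    Jf = J_fun(f)
    free = ~active
    for m in range(max_m):
        alpha = 2.0 ** (-m)
        f_trial = np.maximum(f - alpha * p, 0.0)            # f_k(2^{-m})
        rhs = eta * (alpha * np.dot(grad[free], p[free])    # Armijo term
                     + np.dot(grad[active],
                              f[active] - f_trial[active])) # projection-arc term
        if Jf - J_fun(f_trial) >= rhs:
            return alpha, f_trial
    return alpha, f_trial    # fall back to the smallest tested step
```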

  10. Algorithm: Newton-like projection methods. Given $f^0 \ge 0$ and $\eta \in (0, \frac{1}{2})$, the basic algorithm is as follows.
Repeat until convergence:
1. Computation of the search direction $p^k$:
   1.1 compute the reduced gradient $g^k_{\mathcal{I}}$;
   1.2 solve $\nabla^2 J_k\, z = g^k_{\mathcal{I}}$;
   1.3 compute $p^k_j = z_j$ if $j \notin \mathcal{I}_k$, and $p^k_j = (\nabla J_k)_j$ otherwise, for $j = 1, \ldots, n$.
2. Computation of the step length $\alpha_k$: find the smallest $m \in \mathbb{N}$ satisfying
$$J(f^k) - J\!\left( f^k(2^{-m}) \right) \ge \eta \left[ 2^{-m} \sum_{j \notin \mathcal{I}_k} (\nabla J_k)_j\, p^k_j + \sum_{j \in \mathcal{I}_k} (\nabla J_k)_j \left( f^k_j - f^k_j(2^{-m}) \right) \right]$$
and set $\alpha_k = 2^{-m}$.
3. Update: set $f^{k+1} = [f^k - \alpha_k p^k]_+$.
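Combining the sketches above gives a minimal driver for the whole iteration. The stopping test on the change between iterates is an assumption; the slides only say "repeat until convergence":

```python
# Minimal driver assembling the pieces sketched earlier
# (search_direction and armijo_step are assumed to be in scope).
import numpy as np

def newton_like_projection(f0, J_fun, grad_fun, hess_fun,
                           eta=0.25, tol=1e-6, max_iter=50):
    f = np.maximum(f0, 0.0)                       # start in the feasible set
    for _ in range(max_iter):
        grad = grad_fun(f)
        active = (f <= 1e-12) & (grad > 0)
        p = search_direction(f, grad, hess_fun(f))        # step 1
        alpha, f_next = armijo_step(f, p, grad, J_fun,    # step 2
                                    active, eta=eta)
        if np.linalg.norm(f_next - f) <= tol * max(1.0, np.linalg.norm(f)):
            return f_next                         # assumed stopping criterion
        f = f_next                                # step 3: f_{k+1} = [f_k - a_k p_k]_+
    return f
```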
