

  1. Iterative methods for Image Processing. Lothar Reichel, Como, May 2018.

  2. Lecture 3: Block iterative methods, preconditioning, iterated Tikhonov. Outline of Lecture 3:
     • Block Krylov subspace methods, application to color image restoration
     • Preconditioning
     • Iterated Tikhonov
     • The method by Donatelli and Hanke

  3. Color image restoration. Color images are represented by three channels: red, green, and blue. Hyperspectral images have more “colors” and require more channels; see Hansen, Nagy, and O’Leary. Consider $k$-channel images. Let $b^{(i)} \in \mathbb{R}^{n^2}$ represent the blur- and noise-contaminated image in channel $i$, and let $e^{(i)} \in \mathbb{R}^{n^2}$ describe the noise. The contaminated images of all channels can be represented by $b = [(b^{(1)})^T, \ldots, (b^{(k)})^T]^T$.

  4. The degradation model is of the form $b = Hx_{\mathrm{exact}} + e$, where $H = A_k \otimes A \in \mathbb{R}^{n^2 k \times n^2 k}$ with
     • $A \in \mathbb{R}^{n^2 \times n^2}$ modelling within-channel blurring,
     • $A_k \in \mathbb{R}^{k \times k}$ modelling cross-channel blurring.
     Determine an approximation of $x_{\mathrm{exact}}$ by computing an approximate solution of $Hx = b$.

  5. Alternatively, the contaminated images of all channels can be represented by $B = [b^{(1)}, \ldots, b^{(k)}]$. Define the linear operator $\mathcal{A} : \mathbb{R}^{n^2 \times k} \to \mathbb{R}^{n^2 \times k}$, $\mathcal{A}(X) := A X A_k^T$. The degradation model can then be written as $B = \mathcal{A}(X_{\mathrm{exact}}) + E$, where $X_{\mathrm{exact}} = [x^{(1)}_{\mathrm{exact}}, \ldots, x^{(k)}_{\mathrm{exact}}]$. Let $B_{\mathrm{exact}} = \mathcal{A}(X_{\mathrm{exact}})$. We write $\mathcal{A}X$ for $\mathcal{A}(X)$.
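
The two formulations are equivalent: with column-major vectorization, $\mathrm{vec}(A X A_k^T) = (A_k \otimes A)\,\mathrm{vec}(X)$. A minimal NumPy sketch verifying this identity (the sizes and variable names are illustrative, not from the lecture):

```python
import numpy as np

# Sketch: check that the Kronecker model H = A_k (x) A acting on the
# stacked vector b agrees with the operator form A(X) = A X A_k^T.
n2, k = 16, 3                        # n^2 pixels per channel, k channels
rng = np.random.default_rng(0)
A  = rng.standard_normal((n2, n2))   # within-channel blur
Ak = rng.standard_normal((k, k))     # cross-channel blur
X  = rng.standard_normal((n2, k))    # one column per channel

B = A @ X @ Ak.T                     # operator form A(X)

# Column-major vec stacks the channels: vec(X) = [x^(1); ...; x^(k)],
# and vec(A X A_k^T) = (A_k (x) A) vec(X).
H = np.kron(Ak, A)
b = H @ X.flatten(order="F")
print(np.allclose(b, B.flatten(order="F")))   # True
```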

  6. Tikhonov regularization. Solve the minimization problem
     $$\min_X \{ \|\mathcal{A}X - B\|_F^2 + \mu \|X\|_F^2 \},$$
     where $\|\cdot\|_F$ denotes the Frobenius norm and $\mu > 0$ is a regularization parameter. The normal equations, obtained by requiring the gradient with respect to $X$ to vanish, are given by
     $$(\mathcal{A}^T \mathcal{A} + \mu I) X = \mathcal{A}^T B.$$

  7. They have the unique solution
     $$X_\mu = (\mathcal{A}^T \mathcal{A} + \mu I)^{-1} \mathcal{A}^T B$$
     for any $\mu > 0$. The discrepancy principle requires that $\mu > 0$ be determined so that
     $$\|B - \mathcal{A}X_\mu\|_F = \eta\delta, \qquad \delta = \|B - B_{\mathrm{exact}}\|_F.$$
     This is possible for most reasonable $B$.
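
Since the residual norm $\|B - \mathcal{A}X_\mu\|_F$ increases monotonically with $\mu$, the discrepancy equation can be solved with a scalar root-finder. A minimal sketch with a dense matrix standing in for $\mathcal{A}$ (all names, sizes, and the choice of brentq are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: Tikhonov regularization with mu fixed by the discrepancy principle.
def solve_tikhonov(A, B, mu):
    """Solve the normal equations (A^T A + mu I) X = A^T B."""
    return np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ B)

def discrepancy(mu, A, B, eta, delta):
    X = solve_tikhonov(A, B, mu)
    return np.linalg.norm(B - A @ X, "fro") - eta * delta

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
E = 1e-2 * rng.standard_normal((50, 3))          # noise
B = A @ rng.standard_normal((50, 3)) + E
eta, delta = 1.1, np.linalg.norm(E, "fro")       # delta = ||B - B_exact||_F

# The residual norm grows monotonically with mu, so bracket and bisect.
mu = brentq(discrepancy, 1e-12, 1e6, args=(A, B, eta, delta))
X_mu = solve_tikhonov(A, B, mu)
```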

  8. Solution methods:
     • Trivial method: compute an approximate solution of each system of equations
       $$Ax = b^{(j)}, \qquad j = 1, 2, \ldots, k, \qquad (1)$$
       independently.
     • Apply partial block Golub–Kahan bidiagonalization.
     • Apply partial global Golub–Kahan bidiagonalization.
     • Compute a partial SVD of $A$ and apply it to each system (1) independently.

  9. Block Golub–Kahan bidiagonalization (BGKB). Define the QR factorization $B = P_1 R_1$, where $P_1 \in \mathbb{R}^{n^2 \times k}$ has orthonormal columns, i.e., $P_1^T P_1 = I$, and $R_1 \in \mathbb{R}^{k \times k}$ is upper triangular.

  10. $\ell$ steps of BGKB applied to $A$ with initial block vector $P_1$ give the decompositions
      $$A Q_\ell^{(k)} = P_{\ell+1}^{(k)} C_{\ell+1,\ell}^{(k)}, \qquad A^T P_\ell^{(k)} = Q_\ell^{(k)} (C_{\ell,\ell}^{(k)})^T,$$
      where
      $$P_{\ell+1}^{(k)} = [P_1, \ldots, P_{\ell+1}] \in \mathbb{R}^{n^2 \times (\ell+1)k}, \qquad Q_\ell^{(k)} = [Q_1, \ldots, Q_\ell] \in \mathbb{R}^{n^2 \times \ell k}$$
      have orthonormal columns, i.e., $(P_{\ell+1}^{(k)})^T P_{\ell+1}^{(k)} = I$ and $(Q_\ell^{(k)})^T Q_\ell^{(k)} = I$.

  11. The lower block bidiagonal matrix
      $$C_{\ell+1,\ell}^{(k)} := \begin{bmatrix} L_1 & & & \\ R_2 & L_2 & & \\ & \ddots & \ddots & \\ & & R_\ell & L_\ell \\ & & & R_{\ell+1} \end{bmatrix} \in \mathbb{R}^{k(\ell+1) \times k\ell}$$
      has lower triangular blocks $L_j \in \mathbb{R}^{k \times k}$ and upper triangular blocks $R_j \in \mathbb{R}^{k \times k}$; $C_{\ell,\ell}^{(k)} \in \mathbb{R}^{k\ell \times k\ell}$ is the leading submatrix of $C_{\ell+1,\ell}^{(k)}$.
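
A minimal sketch of the BGKB recursion implied by these decompositions: each step alternates a QR factorization of $A^T P_j - Q_{j-1} R_j^T$ (yielding $Q_j$ and $L_j^T$) with one of $A Q_j - P_j L_j$ (yielding $P_{j+1}$ and $R_{j+1}$). The function name and the absence of breakdown handling and reorthogonalization are simplifying assumptions:

```python
import numpy as np

# Sketch of the BGKB recursion; returns the block columns P_j, Q_j, the
# lower triangular L_j and upper triangular R_j. Rs[0] is the R_1 from
# B = P_1 R_1 (right-hand side of the reduced problem); Rs[1:] are the
# subdiagonal blocks of C. No breakdown handling, no reorthogonalization.
def bgkb(A, B, ell):
    n, k = B.shape
    P1, R1 = np.linalg.qr(B)                 # B = P_1 R_1
    Ps, Qs, Ls, Rs = [P1], [], [], [R1]
    Qprev, Rj = np.zeros((n, k)), np.zeros((k, k))
    for j in range(ell):
        W = A.T @ Ps[-1] - Qprev @ Rj.T      # A^T P_j - Q_{j-1} R_j^T
        Q, S = np.linalg.qr(W)               # W = Q_j L_j^T, so L_j = S^T
        Qs.append(Q); Ls.append(S.T)
        Z = A @ Q - Ps[-1] @ Ls[-1]          # A Q_j - P_j L_j
        Pnext, Rj = np.linalg.qr(Z)          # Z = P_{j+1} R_{j+1}
        Ps.append(Pnext); Rs.append(Rj)
        Qprev = Q
    return Ps, Qs, Ls, Rs
```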

  12. Further,
      $$\mathcal{R}(Q_\ell^{(k)}) = K_\ell(A^T A, A^T B) = \mathrm{span}\{ A^T B, (A^T A) A^T B, \ldots, (A^T A)^{\ell-1} A^T B \}.$$
      Let $X = Q_\ell^{(k)} Y$ with $Y \in \mathbb{R}^{k\ell \times k}$. Then
      $$\min_{X \in K_\ell(A^T A, A^T B)} \{ \|AX - B\|_F^2 + \mu \|X\|_F^2 \} = \min_{Y \in \mathbb{R}^{k\ell \times k}} \{ \|A Q_\ell^{(k)} Y - B\|_F^2 + \mu \|Y\|_F^2 \} = \min_{Y \in \mathbb{R}^{k\ell \times k}} \left\{ \left\| C_{\ell+1,\ell}^{(k)} Y - \begin{bmatrix} R_1 \\ 0 \end{bmatrix} \right\|_F^2 + \mu \|Y\|_F^2 \right\}.$$
      Solve by QR factorization of $C_{\ell+1,\ell}^{(k)}$.

  13. This gives $Y_\mu$ and $X_\mu = Q_\ell^{(k)} Y_\mu$. Determine $\mu > 0$ by the discrepancy principle, i.e., so that
      $$\|A X_\mu - B\|_F = \left\| C_{\ell+1,\ell}^{(k)} Y_\mu - \begin{bmatrix} R_1 \\ 0 \end{bmatrix} \right\|_F = \eta\delta.$$
      This requires that $\ell$ be sufficiently large and that the error in $B$ be reasonable ($< 100\%$). Then the desired $\mu > 0$ is the unique solution of a nonlinear equation determined by the reduced problem.
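
A sketch of the reduced block problem: stacking $C_{\ell+1,\ell}^{(k)}$ on top of $\sqrt{\mu}\,I$ turns the Tikhonov term into an ordinary least-squares problem. The lecture solves it via a QR factorization of $C_{\ell+1,\ell}^{(k)}$; this illustrative version simply calls a least-squares routine:

```python
import numpy as np

# Sketch: solve min_Y ||C Y - D||_F^2 + mu ||Y||_F^2 with D = [R_1; 0]
# by stacking [C; sqrt(mu) I]. Names are illustrative.
def reduced_block_tikhonov(C, R1, mu):
    m, n = C.shape                            # m = k(l+1), n = k*l
    k = R1.shape[0]
    D = np.vstack([R1, np.zeros((m - k, k))])
    M = np.vstack([C, np.sqrt(mu) * np.eye(n)])
    Y, *_ = np.linalg.lstsq(M, np.vstack([D, np.zeros((n, k))]), rcond=None)
    res = np.linalg.norm(C @ Y - D, "fro")    # compare with eta * delta
    return Y, res
```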

  14. Global Golub–Kahan bidiagonalization (GGKB). Define the matrix inner product
      $$\langle M, N \rangle = \mathrm{tr}(M^T N), \qquad M, N \in \mathbb{R}^{n^2 \times k}.$$
      Then $\|M\|_F = \langle M, M \rangle^{1/2}$. Application of $\ell$ steps of GGKB to $\mathcal{A}$ with initial block vector $B$ determines the lower bidiagonal matrix

  15.
      $$C_{\ell+1,\ell} = \begin{bmatrix} \rho_1 & & & \\ \sigma_2 & \rho_2 & & \\ & \ddots & \ddots & \\ & & \sigma_\ell & \rho_\ell \\ & & & \sigma_{\ell+1} \end{bmatrix} \in \mathbb{R}^{(\ell+1) \times \ell}$$
      and the matrices
      $$U_{\ell+1}^{(k)} = [U_1, U_2, \ldots, U_{\ell+1}] \in \mathbb{R}^{n^2 \times (\ell+1)k}, \qquad V_\ell^{(k)} = [V_1, V_2, \ldots, V_\ell] \in \mathbb{R}^{n^2 \times \ell k},$$

  16. where $U_i, V_j \in \mathbb{R}^{n^2 \times k}$, $U_1 = B / \|B\|_F$, and
      $$\langle U_i, U_j \rangle = \langle V_i, V_j \rangle = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases}$$
      Let $C_{\ell,\ell}$ be the leading $\ell \times \ell$ submatrix of $C_{\ell+1,\ell}$. If $\ell$ is small enough so that no breakdown occurs, then
      $$\mathcal{A}[V_1, V_2, \ldots, V_\ell] = U_{\ell+1}^{(k)} (C_{\ell+1,\ell} \otimes I_k), \qquad \mathcal{A}^T [U_1, U_2, \ldots, U_\ell] = V_\ell^{(k)} (C_{\ell,\ell}^T \otimes I_k).$$
      Recall that $\mathcal{A}[V_1, V_2, \ldots, V_\ell]$ stands for $[\mathcal{A}(V_1), \mathcal{A}(V_2), \ldots, \mathcal{A}(V_\ell)]$; similarly for $\mathcal{A}^T U_j$.
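
A minimal sketch of the GGKB recursion these relations imply: reading off block column $j$ gives $\mathcal{A}^T(U_j) = \sigma_j V_{j-1} + \rho_j V_j$ and $\mathcal{A}(V_j) = \rho_j U_j + \sigma_{j+1} U_{j+1}$, so each step is one application of $\mathcal{A}^T$ and one of $\mathcal{A}$, each followed by a Frobenius-norm normalization. Names are illustrative, and breakdown ($\rho_j = 0$ or $\sigma_{j+1} = 0$) is not handled:

```python
import numpy as np

# Sketch of the GGKB recursion for A(X) = A X Ak^T with the trace
# inner product.
def ggkb(A, Ak, B, ell):
    op  = lambda X: A @ X @ Ak.T              # A(X)
    opT = lambda X: A.T @ X @ Ak              # A^T(X)
    U = [B / np.linalg.norm(B, "fro")]        # U_1 = B / ||B||_F
    V, rho, sigma = [], [], []
    Vprev, sig = np.zeros_like(B), 0.0
    for j in range(ell):
        W = opT(U[-1]) - sig * Vprev          # A^T(U_j) - sigma_j V_{j-1}
        r = np.linalg.norm(W, "fro")
        rho.append(r); V.append(W / r)
        Z = op(V[-1]) - r * U[-1]             # A(V_j) - rho_j U_j
        sig = np.linalg.norm(Z, "fro")
        sigma.append(sig); U.append(Z / sig)
        Vprev = V[-1]
    C = np.zeros((ell + 1, ell))              # lower bidiagonal C_{l+1,l}
    for j in range(ell):
        C[j, j], C[j + 1, j] = rho[j], sigma[j]
    return U, V, C
```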

  17. Determine an approximate solution of the form $X = V_\ell^{(k)} (y \otimes I_k)$, $y \in \mathbb{R}^\ell$, of the Tikhonov minimization problem:
      $$\min_{X = V_\ell^{(k)}(y \otimes I_k)} \{ \|\mathcal{A}X - B\|_F^2 + \mu \|X\|_F^2 \} = \min_{y \in \mathbb{R}^\ell} \{ \|C_{\ell+1,\ell}\, y - e_1 \|B\|_F \|_2^2 + \mu \|y\|_2^2 \}.$$
      Denote the solution by $y_{\mu_\ell}$. Choose $\mu = \mu_\ell > 0$ so that $y_{\mu_\ell}$, and therefore $X_{\mu_\ell} = V_\ell^{(k)} (y_{\mu_\ell} \otimes I_k)$, satisfies the discrepancy principle
      $$\|\mathcal{A}X_{\mu_\ell} - B\|_F = \|C_{\ell+1,\ell}\, y_{\mu_\ell} - e_1 \|B\|_F \|_2 = \eta\delta.$$
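
The projected problem is only $(\ell+1) \times \ell$, so it is cheap to solve for many trial values of $\mu$ while searching for $\mu_\ell$. An illustrative sketch (again solving the stacked least-squares form rather than the lecture's method):

```python
import numpy as np

# Sketch: the GGKB-projected Tikhonov problem in y. normB = ||B||_F;
# the returned residual norm is what the discrepancy principle
# compares against eta * delta. Names are illustrative.
def ggkb_tikhonov(C, normB, mu):
    m, n = C.shape
    d = np.zeros(m); d[0] = normB             # e_1 ||B||_F
    M = np.vstack([C, np.sqrt(mu) * np.eye(n)])
    y, *_ = np.linalg.lstsq(M, np.concatenate([d, np.zeros(n)]), rcond=None)
    return y, np.linalg.norm(C @ y - d)

# The approximate solution X = V^(k) (y (x) I_k) expands to a linear
# combination of the block columns: X = sum_j y[j] * V_j.
```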

  18. Standard Golub–Kahan bidiagonalization for multiple right-hand sides. The largest singular triplets of $A$ can be approximated well by carrying out a few GKB steps. This suggests the following solution method:
      • Apply $\ell$ bidiagonalization steps to $A$ with initial vector $b^{(1)}$. This gives the decompositions
        $$A V_\ell = U_{\ell+1} C_{\ell+1,\ell}, \qquad A^T U_\ell = V_\ell C_{\ell,\ell}^T,$$
        with $V_\ell \in \mathbb{R}^{n^2 \times \ell}$ and $U_{\ell+1} \in \mathbb{R}^{n^2 \times (\ell+1)}$ such that $V_\ell^T V_\ell = I$, $U_{\ell+1}^T U_{\ell+1} = I$, and $U_{\ell+1} e_1 = b^{(1)} / \|b^{(1)}\|_2$. Moreover, $C_{\ell+1,\ell} \in \mathbb{R}^{(\ell+1) \times \ell}$ is lower bidiagonal.

  19. • Then
        $$\min_{x \in \mathcal{R}(V_\ell)} \{ \|Ax - b^{(1)}\|_2^2 + \mu \|x\|_2^2 \} = \min_{y \in \mathbb{R}^\ell} \{ \|C_{\ell+1,\ell}\, y - U_{\ell+1}^T b^{(1)}\|_2^2 + \mu \|y\|_2^2 \}.$$
        Determine $\mu > 0$ so that the solution $y_\mu$ satisfies the discrepancy principle
        $$\|C_{\ell+1,\ell}\, y_\mu - U_{\ell+1}^T b^{(1)}\|_2 = \eta\delta^{(1)},$$
        where $\delta^{(1)}$ is a bound for the error in $b^{(1)}$.

  20. • Solve
        $$\min_{x \in \mathcal{R}(V_\ell)} \{ \|Ax - b^{(2)}\|_2^2 + \mu \|x\|_2^2 \} = \min_{y \in \mathbb{R}^\ell} \{ \|C_{\ell+1,\ell}\, y - U_{\ell+1}^T b^{(2)}\|_2^2 + \mu \|y\|_2^2 \}.$$
        If the discrepancy principle cannot be satisfied, then increase $\ell$.
      • Compute approximate solutions of $Ax = b^{(j)}$, $j = 3, 4, \ldots, k$, similarly (a sketch of this reuse follows below).
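
As referenced above, a sketch of reusing one GKB factorization $A V_\ell = U_{\ell+1} C_{\ell+1,\ell}$ for all right-hand sides; the inputs U, V, C and the per-system regularization parameters mus are assumed to come from earlier steps, and all names are illustrative:

```python
import numpy as np

# Sketch: reuse one GKB factorization A V = U C (U: n^2 x (l+1),
# V: n^2 x l, C: (l+1) x l) for the right-hand sides bs = [b^(1), ...].
def solve_all_rhs(U, V, C, bs, mus):
    m, n = C.shape
    xs = []
    for b, mu in zip(bs, mus):
        g = U.T @ b                           # Fourier coefficients U^T b^(j)
        M = np.vstack([C, np.sqrt(mu) * np.eye(n)])
        y, *_ = np.linalg.lstsq(M, np.concatenate([g, np.zeros(n)]),
                                rcond=None)
        # If even the best attainable ||C y - g||_2 exceeds eta * delta^(j),
        # the Krylov subspace is too small: increase l and refactorize.
        xs.append(V @ y)
    return xs
```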

  21. The computations require the columns of $U_{\ell+1}$ to be numerically orthonormal in order to accurately compute the Fourier coefficients $U_{\ell+1}^T b^{(j)}$, $j = 2, 3, \ldots, k$.
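
One common remedy, shown here as an illustrative sketch, is to reorthogonalize each new column of $U_{\ell+1}$ against the previous ones (classical Gram–Schmidt, applied twice):

```python
import numpy as np

# Sketch: reorthogonalize a new column u against the previously computed
# orthonormal columns of U, applied twice ("twice is enough"). In finite
# precision the U_j drift from orthogonality, which corrupts the
# coefficients U^T b^(j); this restores them.
def reorthogonalize(U, u):
    u = u - U @ (U.T @ u)
    u = u - U @ (U.T @ u)
    return u / np.linalg.norm(u)
```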

  22. Example: Let the matrix $A \in \mathbb{R}^{70^2 \times 70^2}$ be determined by the function phillips in Regularization Tools by Hansen. The matrix is a discretization of a Fredholm integral equation of the first kind that describes a convolution on the interval $-6 \leq t \leq 6$. Generate 10 right-hand sides that model smooth functions and add noise of the same noise level to each right-hand side.
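
A sketch of the noise model used in such experiments, under the assumption that "noise level" means the relative error $\|e^{(j)}\|_2 / \|b^{(j)}_{\mathrm{exact}}\|_2$ (names illustrative):

```python
import numpy as np

# Sketch: add white Gaussian noise of prescribed relative level nu to
# each column (right-hand side) of B_exact, so that
# ||e^(j)||_2 / ||b_exact^(j)||_2 = nu for every j.
def add_noise(B_exact, nu, seed=0):
    rng = np.random.default_rng(seed)
    E = rng.standard_normal(B_exact.shape)
    E *= nu * np.linalg.norm(B_exact, axis=0) / np.linalg.norm(E, axis=0)
    return B_exact + E, E
```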

  23.
      Noise level   Method    MVP   Relative error   CPU time (sec)
      10⁻³          BGKB      100   1.46×10⁻²        0.30
                    GGKB      200   1.31×10⁻²        0.43
                    1 GKB      16   2.28×10⁻²        0.31
                    10 GKBs   162   1.43×10⁻²        2.08
      10⁻²          BGKB       80   2.54×10⁻²        0.24
                    GGKB      120   2.61×10⁻²        0.30
                    1 GKB      10   2.52×10⁻²        0.19
                    10 GKBs   140   2.60×10⁻²        1.32

  24. Example: Restoration of a 3-channel RGB color image that has been contaminated by blur and noise. The corrupted image is stored in a block vector B with three columns (one for each channel).

  25. Original image (left), blurred and noisy image (right).

  26. Restored image by BGKB (left), restored image by GGKB (right).

  27.
      Noise level   Method    MVP   Relative error   CPU time (sec)
      10⁻³          BGKB      492   6.93×10⁻²        3.86
                    GGKB      558   6.85×10⁻²        3.95
                    1 GKB     112   2.64×10⁻¹        1.66
                    3 GKBs    632   1.29×10⁻¹        6.55
      10⁻²          BGKB      144   9.50×10⁻²        1.13
                    GGKB      156   9.44×10⁻²        1.12
                    1 GKB      20   2.91×10⁻¹        0.32
                    3 GKBs    112   1.58×10⁻¹        1.10

  28. Example: We restore an image that has been contaminated by noise, within-channel blur, and cross-channel blur. The within-channel blur is the same as above. The cross-channel blur is defined by the matrix
      $$A_3 = \begin{bmatrix} 0.7 & 0.2 & 0.1 \\ 0.25 & 0.5 & 0.25 \\ 0.15 & 0.1 & 0.75 \end{bmatrix}.$$
      More details can be found in the book by Hansen, Nagy, and O’Leary.
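
An illustrative sketch of applying just the cross-channel part of the blur to a block vector $X$ with one column per channel (the within-channel blur by $A$ is omitted):

```python
import numpy as np

# Sketch: the cross-channel blur alone, applied to a block vector X with
# one column per channel. The full degradation is A(X) = A X A_3^T.
A3 = np.array([[0.70, 0.20, 0.10],
               [0.25, 0.50, 0.25],
               [0.15, 0.10, 0.75]])

def cross_channel_blur(X):
    # Each output channel is a fixed mixture of the three input channels.
    return X @ A3.T
```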
