

  1. Iterated Tikhonov with Generalized Singular Value Decomposition
  Mirjeta PASHA, Kent State University, mpasha1@kent.edu
  Candidacy exam, May 23, 2018

  2. What comes next in this presentation
  1. Motivation for working with inverse problems
  2. Introduction and insight into inverse problems
  3. Iterated Tikhonov with GSVD
  4. Comparison of some zero finders
  5. Numerical tests and results

  3. Motivation. Why inverse problems? Some examples where they arise
  Inverse problems arise in many fields of science and life:
  1. Medical imaging
  2. Geophysics
  3. Industrial process monitoring
  4. Remote sensing
  5. Pricing of financial instruments, etc.
  GOAL: design and analyze reliable and computationally effective mathematical solution methods for inverse problems.

  4. Regularization. Inverse problems ≡ trouble
  Naive solution of an equation with a blurring operator: what do we usually do? Compute x = A† b, where A† is the Moore-Penrose pseudoinverse.
  Figure: directly solving an ill-posed problem (example by James Nagy, Emory University, Atlanta).
  Question: What happened?

  5. Introduction
  GOAL: solve Ax = b, where A ∈ R^{100×100} and x, b ∈ R^{100} are generated by the MATLAB call [A, x, b] = shaw(100).
  Naive solution: x = A^{-1} b.
  Question: What will happen if I compute the solution a little more carefully, i.e. x = A\b?
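  The instability these questions point at can be reproduced in a few lines. The sketch below is mine, not from the slides: it uses NumPy instead of MATLAB, and a Hilbert matrix as a stand-in for shaw(100) (the shaw test problem comes from Hansen's Regularization Tools and is not reimplemented here); the size and noise level are illustrative.

```python
import numpy as np

# Hilbert matrix: a classic severely ill-conditioned matrix,
# standing in here for shaw(100).
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

x_true = np.ones(n)
b_true = A @ x_true

rng = np.random.default_rng(0)
e = 1e-8 * rng.standard_normal(n)        # tiny measurement noise
x_naive = np.linalg.solve(A, b_true + e) # "naive" solution with noisy data

print(np.linalg.cond(A))                    # enormous condition number
print(np.linalg.norm(x_naive - x_true))     # the noise is hugely amplified
```

  Even noise of size 1e-8 is amplified by the condition number, so the naive solution is useless; this is exactly why regularization is needed.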

  6. Introduction
  GOAL: solve Ax = b, where A ∈ R^{100×100} and x, b ∈ R^{100} are generated by the MATLAB call [A, x, b] = shaw(100).
  Question: What will happen if there is some noise in the right-hand side b?

  7. Motivation. Why inverse problems?
  Definition (well-posedness, Hadamard, 1865-1963). A problem is called well-posed if:
  - a solution exists;
  - the solution is unique;
  - the solution depends continuously on the given data (the solution is stable).
  If at least one of the above conditions is violated, the problem is called ill-posed.

  8. Problems with ill-conditioned matrices
  GOAL: solve Ax = b, where b = b_true + e, A ∈ R^{n×n}, and x, b_true ∈ R^n.
  - A is large, ill-conditioned, and possibly rank deficient;
  - b represents the measured data;
  - e is the noise from measurements or other sources.
  Problems with the properties above are called linear ill-posed problems. How do we solve them? Generally we regularize and then solve.

  9. Notation
  A ∈ R^{m×n}; b ∈ R^m, where b = b_true + e; x ∈ R^n is the solution we are looking for.
  ‖·‖ denotes the Euclidean norm, i.e. ‖·‖ = ‖·‖_2.
  µ = 1/β.

  10. Introduction
  Consider the minimization problem
  min_{x ∈ R^n} ‖Ax − b‖.
  DREAM: we would like to compute a solution of Ax = b_true.
  REALITY: we compute an approximate solution of Ax = b, since b_true is not known.

  11. Tikhonov regularization
  Possibly the most popular regularization method is Tikhonov regularization.
  Standard form: min_{x ∈ R^n} { ‖Ax − b‖² + µ ‖x − x_0‖² }   (3)
  General form:  min_{x ∈ R^n} { ‖Ax − b‖² + µ ‖L(x − x_0)‖² }   (4)
  If L = I (the identity), the Tikhonov minimization problem is said to be in standard form; if L ≠ I, it is said to be in general form. The matrix L is chosen such that
  N(A) ∩ N(L) = {0}.   (5)
  For µ > 0 the Tikhonov minimization problem has a unique solution.
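  A minimal NumPy sketch of how the general-form problem (4) can be solved for dense matrices; the function name and the stacked least-squares reformulation are my choices, not the slides'.

```python
import numpy as np

def tikhonov(A, b, L, mu, x0=None):
    """Minimize ||A x - b||^2 + mu * ||L (x - x0)||^2 (general form (4))."""
    n = A.shape[1]
    if x0 is None:
        x0 = np.zeros(n)
    # Equivalent stacked least-squares problem
    #   min || [A; sqrt(mu) L] x - [b; sqrt(mu) L x0] ||^2,
    # which avoids forming A^T A explicitly.
    M = np.vstack([A, np.sqrt(mu) * L])
    rhs = np.concatenate([b, np.sqrt(mu) * (L @ x0)])
    return np.linalg.lstsq(M, rhs, rcond=None)[0]
```

  With L = I and x_0 = 0 this reduces to the standard form (3).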

  12. The discrepancy principle
  Assume that a fairly accurate estimate of δ = ‖b − b_true‖ is known. The discrepancy principle prescribes that µ > 0 be chosen so that
  ‖A x_µ − b‖ = ηδ
  for some constant η > 1 independent of δ. Define the function φ(µ) = ‖A x_µ − b‖².

  13. From Tikhonov to Iterated Tikhonov
  Let x_k be our solution at some iteration k. The direct solution is x† = A† b; let e_k = x† − x_k. Then
  x† = x_k + e_k ≈ x_k + h_k = x_{k+1},
  A e_k = A (x† − x_k) = b_true − A x_k ≈ b − A x_k = r_k,
  h_k = argmin_{h ∈ R^n} ‖A h − r_k‖² + µ ‖L h‖²,
  h_k = (A^T A + µ L^T L)^{-1} A^T (b − A x_k),
  x_{k+1} = x_k + (A^T A + µ L^T L)^{-1} A^T (b − A x_k).
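  The last update formula translates into code directly. A NumPy sketch (the function name and the fixed-µ assumption are mine; later slides choose µ anew at each iteration):

```python
import numpy as np

def iterated_tikhonov(A, b, L, mu, n_iter=10):
    """Iterated Tikhonov with a fixed parameter mu:
    x_{k+1} = x_k + (A^T A + mu L^T L)^{-1} A^T (b - A x_k)."""
    n = A.shape[1]
    x = np.zeros(n)
    # With fixed mu the regularized normal-equations matrix never changes,
    # so in a serious implementation it would be factored once.
    K = A.T @ A + mu * (L.T @ L)
    for _ in range(n_iter):
        x = x + np.linalg.solve(K, A.T @ (b - A @ x))
    return x
```

  Each step solves a Tikhonov problem for the current residual and adds the correction, so the iterates approach the unregularized solution as k grows.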

  14. GSVD (Generalized Singular Value Decomposition)
  The GSVD is a generalization of the SVD of A: the generalized singular values of the pair (A, L) are essentially the square roots of the generalized eigenvalues of the matrix pair (A^T A, L^T L).
  Let A ∈ R^{m×n} and L ∈ R^{p×n} with m ≥ n ≥ p. Assume that N(A) ∩ N(L) = {0} and that L has full row rank.
  The columns of U and V are orthonormal, X is nonsingular with A^T A-orthonormal columns, and Σ and M are p×p diagonal matrices. The diagonal elements of Σ and M are nonnegative and ordered such that
  0 ≤ σ_1 ≤ σ_2 ≤ ... ≤ σ_p ≤ 1,  0 < µ_p ≤ µ_{p−1} ≤ ... ≤ µ_1 ≤ 1.

  15. GSVD (Generalized Singular Value Decomposition)
  The pairs are normalized such that σ_i² + µ_i² = 1, i = 1, 2, ..., p. The generalized singular values γ_i of (A, L) are then defined as the ratios
  γ_i = σ_i / µ_i,  i = 1, 2, ..., p.
  The pairs (σ_i, µ_i) are well conditioned with respect to perturbations in A and L.

  16. Tikhonov regularization in general form
  Assume that A and L are square matrices in R^{n×n} and consider the factorizations
  A = U Σ Y^T and L = V Λ Y^T,
  where U, V ∈ R^{n×n} are orthogonal matrices, Σ = diag[σ_1, σ_2, ..., σ_n] ∈ R^{n×n}, Λ = diag[λ_1, λ_2, ..., λ_n] ∈ R^{n×n}, and the matrix Y is nonsingular.
  Due to (5), the unique Tikhonov solution is given by
  x_µ = (A^T A + µ L^T L)^{-1} A^T b,   (12)
  which reduces to (A^T A + µ I)^{-1} A^T b in the standard case L = I.

  17. Formulas for Iterated Tikhonov with GSVD
  Consider the Iterated Tikhonov formula in the general case:
  x_{k+1} = x_k + (A^T A + µ L^T L)^{-1} A^T (b − A x_k),
  or, equivalently,
  (A^T A + µ L^T L) x_{k+1} = A^T b + µ L^T L x_k,
  x_{k+1} = (A^T A + µ L^T L)^{-1} (A^T b + µ L^T L x_k).

  18. Formulas for Iterated Tikhonov
  Let A = U Σ Y^T and L = V Λ Y^T. Substituting into the formula above gives the simplified form of the iterations:
  Y (Σ^T Σ + µ Λ^T Λ) Y^T x_{k+1} = Y (Σ^T U^T b + µ Λ^T Λ Y^T x_k).
  Letting Z_k = Y^T x_k and b̂ = U^T b, the formula becomes
  Z_{k+1} = (Σ^T Σ + µ Λ^T Λ)^{-1} (Σ^T b̂ + µ Λ^T Λ Z_k).   (⋆)
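  Because Σ and Λ are diagonal, iteration (⋆) decouples componentwise and costs only O(n) per step. A small NumPy sketch of one step (the function name is mine):

```python
import numpy as np

def itgsvd_step(z, bhat, sigma, lam, mu):
    """One Iterated Tikhonov step in GSVD coordinates, formula (*):
    z_{k+1} = (S^T S + mu L^T L)^{-1} (S^T bhat + mu L^T L z_k),
    evaluated componentwise since Sigma = diag(sigma), Lambda = diag(lam)."""
    return (sigma * bhat + mu * lam**2 * z) / (sigma**2 + mu * lam**2)
```

  Computing the GSVD once up front makes every subsequent iteration, and every trial value of µ, essentially free.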

  19. Formulas for the discrepancy principle, Iterated Tikhonov GSVD
  Use the iterated formula
  x_{k+1} = x_k + (A^T A + µ L^T L)^{-1} A^T (b − A x_k)
  together with the discrepancy principle
  ‖A x_{k+1} − b‖² = (ηδ)².
  Plugging the GSVD of A and L into the iterated formula gives
  ‖Σ (Σ^T Σ + µ Λ^T Λ)^{-1} Σ^T (b̂ − Σ Z_k) + Σ Z_k − b̂‖² = (ηδ)²,
  or, componentwise,
  sum_{j=1}^m ( σ_j²/(σ_j² + µ λ_j²) · (b̂_j − σ_j (Z_k)_j) + σ_j (Z_k)_j − b̂_j )² = (ηδ)².
  Using the fact that µ = 1/β, we get the final formula:
  sum_{j=1}^m ( λ_j² (b̂_j − σ_j (Z_k)_j) / (β σ_j² + λ_j²) )² = (ηδ)².   (⋆⋆)

  20. All final formulas together
  (with Z_k = Y^T x_k and b̂ = U^T b)
  1. Z_{k+1} = (Σ^T Σ + µ Λ^T Λ)^{-1} (Σ^T b̂ + µ Λ^T Λ Z_k)   (⋆)
  2. φ(β) = sum_{j=1}^m ( λ_j² (b̂_j − σ_j (Z_k)_j) / (β σ_j² + λ_j²) )² − (ηδ)²   (⋆⋆)
  3. φ'(β) = sum_{j=1}^m −2 σ_j² λ_j⁴ (b̂_j − σ_j (Z_k)_j)² / (β σ_j² + λ_j²)³   (⋆⋆⋆)
  4. φ''(β) = sum_{j=1}^m 6 σ_j⁴ λ_j⁴ (b̂_j − σ_j (Z_k)_j)² / (β σ_j² + λ_j²)⁴   (⋆⋆⋆⋆)
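  With φ(β) and φ'(β) available in closed form, Newton's method is a natural candidate among the zero finders the presentation compares. The sketch below is illustrative, not the slides' implementation; the function names, starting value, and stopping rule are my assumptions.

```python
import numpy as np

def phi_and_dphi(beta, sigma, lam, z, bhat, eta_delta):
    """phi(beta) from (**) and phi'(beta) from (***); here mu = 1/beta."""
    c = lam**2 * (bhat - sigma * z)      # numerators of the residual terms
    d = beta * sigma**2 + lam**2         # common denominators
    phi = np.sum((c / d) ** 2) - eta_delta**2
    dphi = np.sum(-2.0 * sigma**2 * c**2 / d**3)
    return phi, dphi

def newton_beta(sigma, lam, z, bhat, eta_delta, beta0=0.5, tol=1e-10, maxit=50):
    """Newton iteration for the root of phi(beta) = 0."""
    beta = beta0
    for _ in range(maxit):
        phi, dphi = phi_and_dphi(beta, sigma, lam, z, bhat, eta_delta)
        step = phi / dphi
        beta -= step
        if abs(step) < tol * max(1.0, abs(beta)):
            break
    return beta
```

  Since φ is decreasing and convex in β, Newton started to the left of the root converges monotonically, which is one reason such zero finders behave well for this problem.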

  21. Iterated Tikhonov with GSVD: the algorithm
  Algorithm 1 (Iterated Tikhonov with GSVD)
  Input: measurement matrix A, regularization matrix L, and data b.
  Output: approximate solution x_k.
  1. Compute the GSVD of the pair (A, L): A = U Σ Y^T, L = V Λ Y^T. Initialize µ = 1/β (with initial β = 0), x_0 = 0, Z_0 = Y^T x_0, and b̂ = U^T b.
  for k = 1, 2, ... until a stopping criterion is satisfied do:
    2. Compute µ_k = 1/β_k to satisfy the discrepancy principle, using the zero of the function φ(β).
    3. Update Z_{k+1} = (Σ^T Σ + (1/β) Λ^T Λ)^{-1} (Σ^T b̂ + (1/β) Λ^T Λ Z_k).
  end
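  An end-to-end sketch of Algorithm 1, under two loudly stated assumptions: the GSVD factors A = U diag(σ) Y^T, L = V diag(λ) Y^T are already available (NumPy has no built-in GSVD routine), and plain bisection stands in for the zero finders the slides compare. All names are mine.

```python
import numpy as np

def itgsvd_solve(U, sigma, lam, Y, b, eta_delta, n_iter=20):
    """Iterated Tikhonov in GSVD coordinates (sketch of Algorithm 1).
    Assumes A = U @ diag(sigma) @ Y.T, L = V @ diag(lam) @ Y.T with
    all lam[j] nonzero, and that eta_delta = eta * delta is given."""
    bhat = U.T @ b
    z = np.zeros_like(bhat)
    for _ in range(n_iter):
        c = lam**2 * (bhat - sigma * z)
        # phi(beta) from (**) is decreasing in beta, so bisect for its root.
        def phi(beta):
            return np.sum((c / (beta * sigma**2 + lam**2)) ** 2) - eta_delta**2
        if phi(0.0) <= 0:          # residual already at the noise level: stop
            break
        hi = 1.0
        while phi(hi) > 0:         # bracket the root by doubling
            hi *= 2.0
        lo = 0.0
        for _ in range(60):        # plain bisection (slides use Newton-type methods)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
        mu = 1.0 / (0.5 * (lo + hi))
        # update step (*) in diagonal coordinates
        z = (sigma * bhat + mu * lam**2 * z) / (sigma**2 + mu * lam**2)
    # back-transform: z = Y^T x, so x = Y^{-T} z
    return np.linalg.solve(Y.T, z)
```

  Each pass chooses β so the new residual sits exactly at the discrepancy level ηδ, then applies one step of (⋆); the loop stops once the residual can no longer be reduced below ηδ.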

  22. Comparing Tikhonov with Iterated Tikhonov
  Table: relative error (1% noise added).
  Problem          Tikhonov GSVD   ITGSVD
  shaw(100)        0.2503          0.1002
  baart(100)       0.0905          0.0356
  deriv2(100,2)    0.0202          0.0152
