

  1. Solving Regularized Total Least Squares Problems Based on Eigensolvers. Heinrich Voss (voss@tu-harburg.de), Hamburg University of Technology, Institute of Numerical Simulation. Joint work with Jörg Lampe. TUHH, RANMEP2008, January 6, 2008. 1 / 56

  2. Regularized total least squares problems — Total Least Squares Problem
The ordinary Least Squares (LS) method assumes that the system matrix A of a linear model is error free and that all errors are confined to the right-hand side b.
Given A ∈ ℝ^{m×n}, b ∈ ℝ^m, m ≥ n:
Find x ∈ ℝ^n and Δb ∈ ℝ^m such that ‖Δb‖ = min! subject to Ax = b + Δb.
This is obviously equivalent to: find x ∈ ℝ^n such that ‖Ax − b‖ = min!
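The ordinary LS setting of this slide can be sketched in NumPy as follows; the data are made up for illustration, with noise only in the right-hand side b:

```python
import numpy as np

# Ordinary least squares: A is assumed error free, all noise sits in b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))                  # error-free system matrix
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 0.01 * rng.standard_normal(20)   # noisy right-hand side

# Minimize ||Ax - b||, i.e. find the smallest correction Δb of b alone.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = np.linalg.norm(A @ x_ls - b)           # equals ||Δb|| at the minimizer
```

With this small noise level, `x_ls` recovers `x_true` closely, which is the baseline the TLS approach on the next slide is compared against.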

  3. Regularized total least squares problems — Total Least Squares
In practical applications it may happen that all data are contaminated by noise, for instance because the matrix A is also obtained from measurements. If the true values of the observed variables satisfy linear relations, and if the errors in the observations are independent random variables with zero mean and equal variance, then the total least squares (TLS) approach often gives better estimates than LS.
Given A ∈ ℝ^{m×n}, b ∈ ℝ^m, m ≥ n:
Find ΔA ∈ ℝ^{m×n}, Δb ∈ ℝ^m and x ∈ ℝ^n such that
‖[ΔA, Δb]‖²_F = min!  subject to  (A + ΔA)x = b + Δb,   (1)
where ‖·‖_F denotes the Frobenius norm. In statistics this approach is called the errors-in-variables problem or orthogonal regression; in image deblurring it is known as blind deconvolution.
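The classical (unregularized) TLS solution can be computed from the SVD of the augmented matrix [A, b], following Golub and Van Loan; a minimal sketch with synthetic errors-in-variables data:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0, 0.5])
A0 = rng.standard_normal((30, 3))
b0 = A0 @ x_true
# Errors-in-variables setting: perturb both A and b.
A = A0 + 1e-3 * rng.standard_normal(A0.shape)
b = b0 + 1e-3 * rng.standard_normal(30)

# TLS solution via the SVD of [A, b]: take v, the right singular vector
# for the smallest singular value, and set x_tls = -v[:n] / v[n].
n = A.shape[1]
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:n] / v[n]
```

This works when the smallest singular value of [A, b] is simple and v[n] ≠ 0; the ill-conditioned case, where this breaks down, is exactly what motivates the regularization on the next slide.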

  4. Regularized total least squares problems — Regularized Total Least Squares Problem
If A and [A, b] are ill-conditioned, regularization is necessary. Let L ∈ ℝ^{k×n}, k ≤ n, and δ > 0. Then the quadratically constrained formulation of the Regularized Total Least Squares (RTLS) problem reads:
Find ΔA ∈ ℝ^{m×n}, Δb ∈ ℝ^m and x ∈ ℝ^n such that
‖[ΔA, Δb]‖²_F = min!  subject to  (A + ΔA)x = b + Δb,  ‖Lx‖² ≤ δ².
Using the orthogonal distance, this problem can be rewritten as (cf. Golub, Van Loan 1980):
Find x ∈ ℝ^n such that
f(x) := ‖Ax − b‖² / (1 + ‖x‖²) = min!  subject to  ‖Lx‖² = δ².

  5. Regularized total least squares problems — Regularized Total Least Squares Problem ct.
Theorem 1: Let N(L) be the null space of L. If
f* := inf { f(x) : ‖Lx‖² = δ² } < min_{x ∈ N(L), x ≠ 0} ‖Ax‖² / ‖x‖²   (1)
then
f(x) := ‖Ax − b‖² / (1 + ‖x‖²) = min!  subject to  ‖Lx‖² = δ²   (2)
admits a global minimum. Conversely, if problem (2) admits a global minimum, then
f* ≤ min_{x ∈ N(L), x ≠ 0} ‖Ax‖² / ‖x‖².

  6. Regularized total least squares problems — Regularized Total Least Squares Problem ct.
Under condition (1), problem (2) is equivalent to the quadratic optimization problem
‖Ax − b‖² − f*(1 + ‖x‖²) = min!  subject to  ‖Lx‖² = δ²,   (3)
i.e. x* is a global minimizer of problem (2) if and only if it is a global minimizer of (3).
For fixed y ∈ ℝ^n find x ∈ ℝ^n such that
g(x; y) := ‖Ax − b‖² − f(y)(1 + ‖x‖²) = min!  subject to  ‖Lx‖² = δ².   (P_y)
Lemma 1: Problem (P_y) admits a global minimizer if and only if
f(y) ≤ min_{x ∈ N(L), x ≠ 0} ‖Ax‖² / ‖x‖².

  7. Regularized total least squares problems — RTLSQEP Method (Sima, Van Huffel, Golub 2004)
Lemma 2: Assume that y satisfies the conditions of Lemma 1 and ‖Ly‖ = δ, and let z be a global minimizer of problem (P_y). Then f(z) ≤ f(y).
Proof: (1 + ‖z‖²)(f(z) − f(y)) = g(z; y) ≤ g(y; y) = (1 + ‖y‖²)(f(y) − f(y)) = 0.
Require: x_0 satisfying the conditions of Lemma 1 and ‖Lx_0‖ = δ.
for m = 0, 1, 2, … until convergence do
  Determine the global minimizer x_{m+1} of g(x; x_m) = min! subject to ‖Lx‖² = δ².
end for
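The fixed-point iteration above can be sketched as follows. As a stand-in for the eigenvalue-based subproblem solver developed later in the talk, this toy version solves each constrained subproblem (P_{x_m}) with a generic SLSQP solver; the data are hypothetical, and the point is only to observe the monotone decrease of f guaranteed by Lemma 2:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 4))
b = rng.standard_normal(15)
L = np.eye(4)          # regularization matrix (identity, for simplicity)
delta = 1.0

# f(x) = ||Ax - b||^2 / (1 + ||x||^2), the orthogonal-distance objective.
f = lambda x: np.linalg.norm(A @ x - b) ** 2 / (1.0 + x @ x)
con = {"type": "eq", "fun": lambda x: np.linalg.norm(L @ x) ** 2 - delta**2}

x = np.ones(4) / 2.0   # starting point with ||L x_0|| = delta
history = [f(x)]
for _ in range(10):
    # Subproblem (P_y) with y = current iterate:
    # g(x; y) = ||Ax - b||^2 - f(y)(1 + ||x||^2), s.t. ||Lx||^2 = delta^2.
    g = lambda z, y=x: np.linalg.norm(A @ z - b) ** 2 - f(y) * (1.0 + z @ z)
    x = minimize(g, x, constraints=[con]).x
    history.append(f(x))
```

Since each iterate is a (local) minimizer of g(·; x_m) started from the feasible point x_m, g(x_{m+1}; x_m) ≤ g(x_m; x_m) = 0, so `history` is nonincreasing, mirroring the proof of Lemma 2.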

  8. Regularized total least squares problems — RTLS Method
Obviously, 0 ≤ f(x_{m+1}) ≤ f(x_m).
Problem (P_{x_m}) can be solved via the first order necessary optimality conditions
(AᵀA − f(x_m)I)x + λLᵀLx = Aᵀb,  ‖Lx‖² = δ².
Although g(·; x_m) in general is not convex, these conditions are even sufficient if the Lagrange parameter λ is chosen maximal.

  9. Regularized total least squares problems — RTLS Method
Theorem 2: Assume that (λ̂, x̂) solves the first order conditions
(AᵀA − f(y)I)x + λLᵀLx = Aᵀb,  ‖Lx‖² = δ².   (∗)
If ‖Ly‖ = δ and λ̂ is the maximal Lagrange multiplier, then x̂ is a global minimizer of problem (P_y).
Proof: The statement follows immediately from the following equation, which can be shown similarly as in W. Gander (1981): if (λ_j, z_j), j = 1, 2, are solutions of (∗), then
g(z_2; y) − g(z_1; y) = ½ (λ_1 − λ_2) ‖L(z_1 − z_2)‖².

  10. A quadratic eigenproblem
(AᵀA − f(y)I)x + λLᵀLx = Aᵀb,  ‖Lx‖² = δ².   (∗)
If L is square and nonsingular: let z := Lx. Then
Wz + λz := L⁻ᵀ(AᵀA − f(y)I)L⁻¹z + λz = L⁻ᵀAᵀb =: h,  zᵀz = δ².
Setting u := (W + λI)⁻²h gives hᵀu = zᵀz = δ², hence h = δ⁻²hhᵀu, and therefore
(W + λI)²u − δ⁻²hhᵀu = 0.
If λ̂ is the right-most real eigenvalue, and the corresponding eigenvector u is scaled such that hᵀu = δ², then the solution of problem (∗) is recovered as x = L⁻¹(W + λ̂I)u.
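For illustration, this quadratic eigenproblem can be solved by the standard companion linearization and a dense eigensolver (the talk later uses structure-exploiting eigensolvers instead); `qep_rightmost` is a hypothetical helper name, and the data below are synthetic:

```python
import numpy as np

def qep_rightmost(W, h, delta):
    """Solve (W + lam*I)^2 u = delta^-2 (h h^T) u by linearization; return
    (lam, u) for the rightmost real eigenvalue, scaled so h^T u = delta^2."""
    n = W.shape[0]
    # Expanded QEP: lam^2 u + 2 lam W u + (W^2 - delta^-2 h h^T) u = 0.
    K = W @ W - np.outer(h, h) / delta**2
    # Companion linearization acting on [u; lam*u].
    M = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -2.0 * W]])
    vals, vecs = np.linalg.eig(M)
    real_mask = np.abs(vals.imag) < 1e-8
    idx = np.argmax(np.where(real_mask, vals.real, -np.inf))
    lam = vals[idx].real
    u = vecs[:n, idx].real
    u *= delta**2 / (h @ u)        # scale eigenvector so h^T u = delta^2
    return lam, u

# Example with hypothetical data: W symmetric, arbitrary h, radius delta.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
W = (B + B.T) / 2.0
h = rng.standard_normal(4)
delta = 1.0
lam, u = qep_rightmost(W, h, delta)
z = (W + lam * np.eye(4)) @ u      # then z^T z = delta^2 by construction
```

The check ‖z‖ = δ follows from the derivation above: zᵀz = uᵀ(W + λ̂I)²u = δ⁻²(hᵀu)² = δ².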

  11. A quadratic eigenproblem ct.
If L ∈ ℝ^{k×n} has linearly independent rows and k < n, the first order conditions can again be reduced to a quadratic eigenproblem
(W_m + λI)²u − δ⁻²h_m h_mᵀ u = 0,
where
W_m = C − f(x_m)D − S(T − f(x_m)I_{n−k})⁻¹Sᵀ,
h_m = g − D(T − f(x_m)I_{n−k})⁻¹c,
with C, D ∈ ℝ^{k×k}, S ∈ ℝ^{k×(n−k)}, T ∈ ℝ^{(n−k)×(n−k)}, g ∈ ℝ^k, c ∈ ℝ^{n−k}, and C, D, T symmetric matrices.

  12. Nonlinear maxmin characterization
Let T(λ) ∈ ℂ^{n×n}, T(λ) = T(λ)ᴴ, for λ ∈ J ⊂ ℝ, an open interval (possibly unbounded). For every fixed x ∈ ℂ^n, x ≠ 0, assume that the real function f(·; x): J → ℝ, f(λ; x) := xᴴT(λ)x, is continuous, and that the real equation f(λ; x) = 0 has at most one solution λ =: p(x) in J. Then the equation f(λ; x) = 0 implicitly defines a functional p on some subset D of ℂ^n, which we call the Rayleigh functional. Assume further that
(λ − p(x)) f(λ; x) > 0 for every x ∈ D, λ ≠ p(x).

  13. Nonlinear maxmin characterization — maxmin characterization (V., Werner 1982)
Let sup_{v ∈ D} p(v) ∈ J and assume that there exists a subspace V ⊂ ℂ^n of dimension ℓ such that V ∩ D ≠ ∅ and inf_{v ∈ V ∩ D} p(v) ∈ J.
Then T(λ)x = 0 has at least ℓ eigenvalues in J, and for j = 1, …, ℓ the j-largest eigenvalue λ_j can be characterized by
λ_j = max_{V ⊂ ℂ^n, dim V = j, V ∩ D ≠ ∅} inf_{v ∈ V ∩ D} p(v).   (1)
For j = 1, …, ℓ, every j-dimensional subspace Ṽ with Ṽ ∩ D ≠ ∅ and λ_j = inf_{v ∈ Ṽ ∩ D} p(v) is contained in D ∪ {0}, and the maxmin characterization of λ_j can be replaced by
λ_j = max_{dim V = j, V∖{0} ⊂ D} min_{v ∈ V∖{0}} p(v).

  14. Nonlinear maxmin characterization — Back to (QEP)
T(λ)x := ((W + λI)² − δ⁻²hhᵀ)x = 0   (QEP)
For x ≠ 0,
f(λ; x) = xᴴT(λ)x = λ²‖x‖² + 2λxᴴWx + ‖Wx‖² − |xᴴh|²/δ²
is a parabola in λ which attains its minimum at λ = −xᴴWx / xᴴx. Let J = (−λ_min, ∞), where λ_min is the minimum eigenvalue of W. Then f(λ; x) = 0 has at most one solution p(x) ∈ J for every x ≠ 0. Hence the Rayleigh functional p of (QEP) corresponding to J is defined, and the general conditions are satisfied.
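Since the vertex −xᴴWx/xᴴx of this upward parabola never lies to the right of −λ_min, f(·; x) is increasing on J, so p(x), when it exists, is simply the larger root of the quadratic. A small sketch (function name and data are illustrative, not from the talk):

```python
import numpy as np

def rayleigh_functional(W, h, delta, x):
    """Root of lam -> lam^2 ||x||^2 + 2 lam x^T W x + ||Wx||^2 - (x^T h)^2/delta^2
    in J = (-lam_min(W), inf); returns None when x is not in the domain D of p."""
    a = x @ x                                  # coefficient of lam^2
    bq = 2.0 * (x @ W @ x)                     # coefficient of lam
    c = np.linalg.norm(W @ x) ** 2 - (x @ h) ** 2 / delta**2
    disc = bq**2 - 4.0 * a * c
    if disc < 0:
        return None                            # parabola has no real root
    lam = (-bq + np.sqrt(disc)) / (2.0 * a)    # larger root of the parabola
    lam_min = np.linalg.eigvalsh(W)[0]
    return lam if lam > -lam_min else None     # accept only roots inside J

# Example with hypothetical data; here (x^T h)^2/delta^2 > ||Wx||^2,
# so c < 0 and the parabola is guaranteed to have a root in J.
W = np.diag([1.0, 2.0, 3.0])
h = np.array([10.0, 0.0, 0.0])
x = np.ones(3)
lam = rayleigh_functional(W, h, 1.0, x)        # p(x) in J = (-1, inf)
```

Real symmetric data suffice for the sketch: for this (QEP), xᴴWx and |xᴴh|² reduce to xᵀWx and (xᵀh)².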
