

  1. Perturbation Theory for Eigenvalue Problems Nico van der Aa October 19th 2005

  2. Overview of talks
• Erwin Vondenhoff (21-09-2005): A Brief Tour of Eigenproblems
• Nico van der Aa (19-10-2005): Perturbation analysis
• Peter in ’t Panhuis (9-11-2005): Direct methods
• Luiza Bondar (23-11-2005): The power method
• Mark van Kraaij (7-12-2005): Krylov subspace methods
• Willem Dijkstra (...): Krylov subspace methods

  3. Outline of my talk
Goal: to illustrate ways to deal with the sensitivity of eigenvalues and eigenvectors.
Way: the theorems are illustrated by means of examples.
Assumptions: no special structure is assumed for the matrices under consideration; they are general complex-valued matrices.

  4. Recap on eigenvalue problems
Definition of eigenvalue problems:
$$AX - X\Lambda = 0, \qquad Y^*A - \Lambda Y^* = 0,$$
with $^*$ denoting the complex conjugate transpose and
$$X = \begin{pmatrix} | & | & & | \\ x_1 & x_2 & \cdots & x_n \\ | & | & & | \end{pmatrix} \;\text{(right eigenvectors)}, \qquad
\Lambda = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix} \;\text{(eigenvalues)}, \qquad
Y^* = \begin{pmatrix} -\; y_1^* \;- \\ -\; y_2^* \;- \\ \vdots \\ -\; y_n^* \;- \end{pmatrix} \;\text{(left eigenvectors)}.$$
The left eigenvectors are chosen such that $Y^*X = I$.
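As a quick numerical sanity check of these definitions, the following NumPy sketch computes right eigenvectors with numpy.linalg.eig and takes the rows of $X^{-1}$ as left eigenvectors, which enforces the normalization $Y^*X = I$ (the small test matrix is only an illustration, not taken from the talk):

```python
import numpy as np

# Minimal sketch: right/left eigenvectors with the normalization Y* X = I.
# Assumes A is diagonalizable with distinct eigenvalues (illustrative matrix).
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

lam, X = np.linalg.eig(A)           # columns of X are right eigenvectors
Ystar = np.linalg.inv(X)            # rows of Y* are left eigenvectors, so Y* X = I

print(np.allclose(A @ X, X @ np.diag(lam)))          # A X = X Lambda
print(np.allclose(Ystar @ A, np.diag(lam) @ Ystar))  # Y* A = Lambda Y*
print(np.allclose(Ystar @ X, np.eye(2)))             # normalization Y* X = I
```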

  5. Bauer-Fike Theorem
Theorem: Let $X$ be the matrix consisting of the eigenvectors of $A \in \mathbb{C}^{n\times n}$, and let $\mu$ be an eigenvalue of $A + E \in \mathbb{C}^{n\times n}$. Then
$$\min_{\lambda \in \sigma(A)} |\lambda - \mu| \;\le\; \underbrace{\|X\|_p\,\|X^{-1}\|_p}_{K_p(X)}\;\|E\|_p,$$
where $\|\cdot\|_p$ is any matrix $p$-norm and $K_p(X)$ is called the condition number of the eigenvalue problem for matrix $A$.
Proof: The proof can be found in many textbooks, e.g.
• Numerical Methods for Large Eigenvalue Problems, Yousef Saad
• Numerical Mathematics, A. Quarteroni, R. Sacco, F. Saleri

  6. Bauer-Fike Theorem (2)
Example:
$$A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}, \qquad
\Lambda = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}, \qquad
X = \begin{pmatrix} 1 & \tfrac{1}{2}\sqrt{2} \\ 0 & -\tfrac{1}{2}\sqrt{2} \end{pmatrix},$$
$$E = \begin{pmatrix} 0 & 0 \\ 10^{-4} & 0 \end{pmatrix}, \qquad \|E\|_2 = 10^{-4}, \qquad K_2(X) \approx 2.41.$$
The Bauer-Fike theorem states that the eigenvalues can change by at most $2.41 \times 10^{-4}$. In this example, they only deviate by about $10^{-4}$.
Remarks:
• The Bauer-Fike theorem gives an overestimate.
• The Bauer-Fike theorem does not give a direction.
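A small numerical check of this example (a NumPy sketch that simply evaluates both sides of the bound, assuming the matrices as given above):

```python
import numpy as np

# Sketch: evaluate both sides of the Bauer-Fike bound for the 2x2 example above.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
E = np.array([[0.0,  0.0],
              [1e-4, 0.0]])

lam, X = np.linalg.eig(A)
K2 = np.linalg.cond(X, 2)                  # K_2(X) = ||X||_2 ||X^{-1}||_2 ~ 2.41
bound = K2 * np.linalg.norm(E, 2)          # right-hand side ~ 2.41e-4

mu = np.linalg.eigvals(A + E)
# largest distance of a perturbed eigenvalue to the spectrum of A
deviation = max(min(abs(m - l) for l in lam) for m in mu)
print(f"bound = {bound:.2e}, actual deviation = {deviation:.2e}")
```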

  7. Eigenvalue derivatives - Theory
Suppose that $A$ depends on a parameter $p$ and that its eigenvalues are distinct. Differentiating the eigensystem $A(p)X(p) = X(p)\Lambda(p)$ gives
$$A'(p)X(p) - X(p)\Lambda'(p) = -A(p)X'(p) + X'(p)\Lambda(p).$$
Premultiplication with the left eigenvectors gives
$$Y^*A'X - \underbrace{Y^*X}_{=I}\Lambda' = -Y^*AX' + Y^*X'\Lambda.$$
Introduce $X' = XC$. This is allowed since for distinct eigenvalues the eigenvectors form a basis of $\mathbb{C}^n$. Then
$$Y^*A'X - \Lambda' = -\underbrace{Y^*AX}_{=\Lambda}C + \underbrace{Y^*X}_{=I}C\Lambda = -\Lambda C + C\Lambda.$$
Since the diagonal of $-\Lambda C + C\Lambda$ is zero, taking the diagonal and writing it out in components gives the eigenvalue derivatives
$$\lambda_k' = y_k^* A' x_k.$$

  8. Eigenvalue derivatives - Example
Example definition:
$$A = \begin{pmatrix} p & 1 \\ 1 & -p \end{pmatrix}, \qquad A' = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
In this case, the eigenvalues can be computed analytically:
$$\Lambda = \begin{pmatrix} -\sqrt{p^2+1} & 0 \\ 0 & \sqrt{p^2+1} \end{pmatrix}, \qquad
\Lambda' = \begin{pmatrix} -\frac{p}{\sqrt{p^2+1}} & 0 \\ 0 & \frac{p}{\sqrt{p^2+1}} \end{pmatrix}.$$
The method for p = 1: The following quantities can be computed from the given matrix $A(p)$:
$$A(1) = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad
\Lambda(1) = \begin{pmatrix} -\sqrt{2} & 0 \\ 0 & \sqrt{2} \end{pmatrix}, \quad
X(1) = \begin{pmatrix} 0.3827 & -0.9239 \\ -0.9239 & -0.3827 \end{pmatrix}, \quad
Y^*(1) = \begin{pmatrix} 0.3827 & -0.9239 \\ -0.9239 & -0.3827 \end{pmatrix}.$$
The eigenvalue derivatives can be computed by
$$\lambda_1'(1) = \begin{pmatrix} 0.3827 & -0.9239 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\begin{pmatrix} 0.3827 \\ -0.9239 \end{pmatrix} = -\tfrac{1}{2}\sqrt{2},$$
$$\lambda_2'(1) = \begin{pmatrix} -0.9239 & -0.3827 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\begin{pmatrix} -0.9239 \\ -0.3827 \end{pmatrix} = \tfrac{1}{2}\sqrt{2}.$$
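The same computation can be reproduced numerically; a minimal NumPy sketch (the sorting of the eigenvalues and the finite-difference check are my own additions):

```python
import numpy as np

# Sketch: lambda_k'(p) = y_k^* A'(p) x_k for A(p) = [[p, 1], [1, -p]] at p = 1,
# checked against a central finite difference.
def A(p):
    return np.array([[p, 1.0], [1.0, -p]])

def A_prime(p):
    return np.array([[1.0, 0.0], [0.0, -1.0]])

p = 1.0
lam, X = np.linalg.eig(A(p))
order = np.argsort(lam)                  # sort so that lambda_1 = -sqrt(2)
lam, X = lam[order], X[:, order]
Ystar = np.linalg.inv(X)                 # left eigenvectors with Y* X = I

dlam = np.array([Ystar[k] @ A_prime(p) @ X[:, k] for k in range(2)])
print(dlam)                              # ~ [-0.7071, 0.7071] = [-1/sqrt(2), 1/sqrt(2)]

h = 1e-6                                 # finite-difference check of Lambda'
fd = (np.sort(np.linalg.eigvals(A(p + h))) - np.sort(np.linalg.eigvals(A(p - h)))) / (2 * h)
print(fd)
```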

  9. Eigenvector derivatives
Theory: As long as the eigenvalues are distinct, the eigenvectors form a basis of $\mathbb{C}^n$ and therefore the following equation holds:
$$Y^*A'X - \Lambda' = -\Lambda C + C\Lambda.$$
Since
$$(-\Lambda C + C\Lambda)_{ij} = -\lambda_i c_{ij} + c_{ij}\lambda_j = c_{ij}(\lambda_j - \lambda_i),$$
the off-diagonal entries of $C$ can be determined as follows:
$$c_{ij} = \frac{y_i^* A' x_j}{\lambda_j - \lambda_i}, \qquad i \ne j.$$
What about the diagonal entries? ⇒ An additional assumption is needed.

  10. Eigenvector derivatives - Normalization
Problem description: In the case of distinct eigenvalues, an eigenvector is determined uniquely only up to a constant. If matrix $A$ has an eigenvector $x_k$ belonging to eigenvalue $\lambda_k$, then $\gamma x_k$, with $\gamma$ a nonzero constant, is also an eigenvector:
$$A(\gamma x_k) - \lambda_k(\gamma x_k) = \gamma(Ax_k - \lambda_k x_k) = 0.$$
Conclusion: there is one degree of freedom in the eigenvector itself, and therefore the derivative also contains a degree of freedom:
$$(c_k x_k)' = c_k' x_k + c_k x_k'.$$
Important: the eigenvector derivative that will be computed is the derivative of this normalized eigenvector!

  11. Eigenvector derivatives - Normalization 2
Solution: A mathematical choice is to set one element of the eigenvector equal to 1 for all $p$. How do you choose this element? For eigenvector $k$, two possible choices are the element $l$ with
• $\max_{l=1,\ldots,n} |x_{kl}|$; or
• $\max_{l=1,\ldots,n} |x_{kl}|\,|y_{kl}|$.
The derivative is computed from the normalized eigenvector.
Remark: the derivative of the element set to 1 for all $p$ is equal to 0 for all $p$.

  12. Eigenvector derivatives - Normalization 3
Result: Consider only one eigenvector. Its derivative can be expanded as follows:
$$x'_{kl} = \sum_{m=1}^{n} x_{km}\, c_{ml}.$$
By definition, the derivative of the element set to 1 for all $p$ is equal to zero. Denoting, for ease of notation, the fixed element of eigenvector $k$ again by index $k$, this gives
$$0 = x_{kk} c_{kk} + \sum_{\substack{m=1 \\ m \ne k}}^{n} x_{km} c_{mk}
\;\Rightarrow\;
c_{kk} = -\frac{1}{x_{kk}} \sum_{\substack{m=1 \\ m \ne k}}^{n} x_{km} c_{mk}.$$
Repeating the normalization procedure for all eigenvectors enables the computation of the diagonal entries of $C$. Finally, the eigenvector derivatives can be computed as
$$X' = XC,$$
with $X$ the normalized eigenvector matrix. A compact numerical sketch of the complete procedure is given below.
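The following NumPy sketch collects the steps of slides 7-12 into one routine (my own arrangement; the function name and the choice of holding the largest-modulus element of each eigenvector fixed are assumptions, the formulas are the ones above):

```python
import numpy as np

# Sketch: eigenvalue derivatives lambda_k' = y_k^* A' x_k and eigenvector
# derivatives X' = X C for distinct eigenvalues, with diag(C) fixed by
# holding one element of each eigenvector constant in p.
def eig_derivatives(A, Aprime):
    lam, X = np.linalg.eig(A)
    Ystar = np.linalg.inv(X)              # left eigenvectors, Y* X = I
    B = Ystar @ Aprime @ X                # B_ij = y_i^* A' x_j

    dlam = np.diag(B).copy()              # eigenvalue derivatives

    n = len(lam)
    C = np.zeros((n, n), dtype=complex)
    for i in range(n):                    # off-diagonal entries of C
        for j in range(n):
            if i != j:
                C[i, j] = B[i, j] / (lam[j] - lam[i])
    for k in range(n):                    # diagonal entries from the normalization
        l = np.argmax(np.abs(X[:, k]))    # element of eigenvector k held fixed
        # C[k, k] is still zero here, so the dot product runs over m != k only
        C[k, k] = -(X[l, :] @ C[:, k]) / X[l, k]
    return dlam, X @ C

# Example use with the matrix from slide 8 at p = 1:
dlam, dX = eig_derivatives(np.array([[1.0, 1.0], [1.0, -1.0]]),
                           np.array([[1.0, 0.0], [0.0, -1.0]]))
```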

  13. Eigenvector derivatives - Example
Example definition:
$$A = \begin{pmatrix} 0 & \frac{-ip(-1+p^2)}{1+p^2} \\ \frac{-ip(1+p^2)}{-1+p^2} & 0 \end{pmatrix}, \quad
\Lambda = \begin{pmatrix} -ip & 0 \\ 0 & ip \end{pmatrix}, \quad
A' = \begin{pmatrix} 0 & \frac{-i(-1+4p^2+p^4)}{(1+p^2)^2} \\ \frac{i(1+4p^2-p^4)}{(-1+p^2)^2} & 0 \end{pmatrix}, \quad
X = \begin{pmatrix} \frac{-1+p^2}{1+p^2} & \frac{1-p^2}{1+p^2} \\ 1 & 1 \end{pmatrix}.$$
Consider the case where p = 2. The matrices are given by
$$A = \begin{pmatrix} 0 & -\tfrac{6}{5}i \\ -\tfrac{10}{3}i & 0 \end{pmatrix}, \quad
A' = \begin{pmatrix} 0 & -\tfrac{31}{25}i \\ \tfrac{1}{9}i & 0 \end{pmatrix}, \quad
X = \begin{pmatrix} 0.5145 & -0.5145 \\ 0.8575 & 0.8575 \end{pmatrix}, \quad
Y^* = \begin{pmatrix} 0.9718 & 0.5831 \\ -0.9718 & 0.5831 \end{pmatrix}.$$
The off-diagonal entries of the coefficient matrix $C$ are
$$c_{12} = \frac{y_1^* A' x_2}{\lambda_2 - \lambda_1} = -\tfrac{4}{15}, \qquad
c_{21} = \frac{y_2^* A' x_1}{\lambda_1 - \lambda_2} = -\tfrac{4}{15}.$$
Normalization: for all $k$ and $l$, $|x_{kl}|\,|y_{kl}| = \tfrac{1}{2}$, so either element may be fixed. Therefore, choose the second element of each eigenvector equal to 1:
$$X = \begin{pmatrix} \tfrac{3}{5} & -\tfrac{3}{5} \\ 1 & 1 \end{pmatrix}.$$
Then the diagonal entries of matrix $C$ become
$$c_{11} = -\frac{x_{22}}{x_{21}}\,c_{21} = \tfrac{4}{15}, \qquad
c_{22} = -\frac{x_{21}}{x_{22}}\,c_{12} = \tfrac{4}{15}.$$
The eigenvector derivatives can now be computed:
$$X' = XC = \begin{pmatrix} \tfrac{8}{25} & -\tfrac{8}{25} \\ 0 & 0 \end{pmatrix}.$$

  14. Repeated eigenvalues
Problem statement: If repeated eigenvalues occur, that is, $\lambda_k = \lambda_l$ for some $k \ne l$, then any linear combination of the eigenvectors $x_k$ and $x_l$ is also an eigenvector. To apply the previous theory, we have to make the eigenvectors unique up to a constant multiplier.
Solution procedure: Assume the $n$ known eigenvectors are linearly independent and denote them by $\tilde X$. Define
$$\hat X = \tilde X\,\Gamma$$
for some coefficient matrix $\Gamma$. If the columns of $\Gamma$ can be defined uniquely up to a constant multiplier, then $\hat X$ is also uniquely defined up to a constant multiplier.

  15. Repeated eigenvalues - mathematical trick
Computing $\Gamma$: Differentiate the eigenvalue system $A\hat X = \hat X\Lambda$:
$$A'\hat X - \hat X\Lambda' = -\left(A\hat X' - \hat X'\Lambda\right).$$
Premultiply with the left eigenvectors $\tilde Y^*$ and use the fact that the eigenvalues are repeated, $\Lambda = \lambda I$, so that $A\hat X' - \hat X'\Lambda = (A - \lambda I)\hat X'$ and $\tilde Y^*\hat X = \tilde Y^*\tilde X\,\Gamma = \Gamma$:
$$\tilde Y^* A'\tilde X\,\Gamma - \Gamma\Lambda' = -\tilde Y^*(A - \lambda I)\hat X'.$$
Eliminate the right-hand side using $\tilde Y^*(A - \lambda I) = (\Lambda - \lambda I)\tilde Y^* = 0$:
$$\left(\tilde Y^* A'\tilde X\right)\Gamma = \Gamma\Lambda'.$$
Assume that $\lambda_k' \ne \lambda_l'$ for all $k \ne l$; then the columns of $\Gamma$ are the eigenvectors of the matrix $\tilde Y^* A'\tilde X$ and are determined up to a constant. A numerical sketch of this construction is given below.
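A NumPy sketch of this construction (the test problem is my own: it is built to have eigenvalues $ip$ and $-i(p-4)$ that coincide at $p = 2$, with an assumed eigenvector matrix $[[-3, 3], [5, 5]]$ chosen to be consistent with the numbers on the next slide):

```python
import numpy as np

# Sketch of the repeated-eigenvalue trick. A(p) is constructed so that its
# eigenvalues i*p and -i*(p-4) coincide at p = 2; the eigenvector matrix X0
# is an assumption for this illustration.
X0 = np.array([[-3.0, 3.0],
               [ 5.0, 5.0]])
X0inv = np.linalg.inv(X0)

def A(p):
    return X0 @ np.diag([1j * p, -1j * (p - 4)]) @ X0inv

def A_prime(p):
    return X0 @ np.diag([1j, -1j]) @ X0inv     # derivative of the construction

p = 2.0
lam, Xt = np.linalg.eig(A(p))        # repeated eigenvalue 2i; Xt is some basis
Yt = np.linalg.inv(Xt)               # corresponding left eigenvectors

B = Yt @ A_prime(p) @ Xt             # Y~* A' X~, in general NOT diagonal
dlam, Gamma = np.linalg.eig(B)       # eigenvalues = lambda_k', eigenvectors = Gamma
Xhat = Xt @ Gamma                    # rotated eigenvectors, unique up to scaling

print(np.round(dlam, 6))             # ~ [i, -i] (order may vary)
print(np.round(np.linalg.inv(Xhat) @ A_prime(p) @ Xhat, 6))  # now ~ diag(i, -i)
```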

  16. Repeated eigenvalues - Example
Computation of the eigenvalues for p = 2: Matrix $A$ is constructed from an eigenvector matrix and an eigenvalue matrix with values $\lambda_1 = ip$ and $\lambda_2 = -i(p-4)$. This results in
$$A = \begin{pmatrix} 2i & \frac{-i(-2+p)(-1+p^2)}{1+p^2} \\ \frac{-i(-2+p)(1+p^2)}{-1+p^2} & 2i \end{pmatrix}.$$
For $p = 2$, the eigenvalues become repeated and Matlab gives the following results:
$$A = \begin{pmatrix} 2i & 0 \\ 0 & 2i \end{pmatrix}, \qquad
\Lambda = \begin{pmatrix} 2i & 0 \\ 0 & 2i \end{pmatrix}, \qquad
\tilde X = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
From the construction of matrix $A$, we know that $\lambda_1' = i$ and $\lambda_2' = -i$, but when we follow the procedure from before, we see that
$$\tilde Y^* A'\tilde X = \begin{pmatrix} 0 & -0.6i \\ -1.67i & 0 \end{pmatrix} \ne \Lambda'.$$
Now, with the mathematical trick,
$$\Gamma = \begin{pmatrix} -0.5145 & 0.5145 \\ 0.8575 & 0.8575 \end{pmatrix}, \qquad
\hat X = \tilde X\,\Gamma = \begin{pmatrix} -0.5145 & 0.5145 \\ 0.8575 & 0.8575 \end{pmatrix}.$$
Repeat the procedure:
$$\hat Y^* A'\hat X = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} = \Lambda'.$$

