
Numerical Methods for Solving Large Scale Eigenvalue Problems



  1. Numerical Methods for Solving Large Scale Eigenvalue Problems. Lecture 2, February 28, 2018: Numerical linear algebra basics. http://people.inf.ethz.ch/arbenz/ewp/ Peter Arbenz, Computer Science Department, ETH Zürich. E-mail: arbenz@inf.ethz.ch

  2. Survey on lecture
1. Introduction
2. Numerical linear algebra basics
◮ Definitions
◮ Similarity transformations
◮ Schur decompositions
◮ SVD
3. Newton's method for linear and nonlinear eigenvalue problems
4. The QR algorithm for dense eigenvalue problems
5. Vector iteration (power method) and subspace iterations
6. Krylov subspace methods
◮ Arnoldi and Lanczos algorithms
◮ Krylov-Schur methods
7. Davidson/Jacobi-Davidson methods
8. Rayleigh quotient minimization for symmetric systems
9. Locally optimal block preconditioned conjugate gradient (LOBPCG) method

  3. Survey on lecture
◮ Basics
◮ Notation
◮ Statement of the problem
◮ Similarity transformations
◮ Schur decomposition
◮ The real Schur decomposition
◮ Hermitian matrices
◮ Jordan normal form
◮ Projections
◮ The singular value decomposition (SVD)

  4. Survey on lecture: Literature
G. H. Golub and C. F. van Loan, Matrix Computations, 4th edition, Johns Hopkins University Press, Baltimore, 2012.
R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
Y. Saad, Numerical Methods for Large Eigenvalue Problems, SIAM, Philadelphia, PA, 2011.
E. Anderson et al., LAPACK Users' Guide, 3rd edition, SIAM, Philadelphia, 1999. http://www.netlib.org/lapack/

  5. Basics: Notation
R: the field of real numbers. C: the field of complex numbers.
R^n: the space of vectors of n real components. C^n: the space of vectors of n complex components.
Scalars: lowercase letters, a, b, c, ..., and α, β, γ, ....
Vectors: boldface lowercase letters, a, b, c, ....
x ∈ R^n  ⇐⇒  x = [x_1, x_2, ..., x_n]^T,  x_i ∈ R.
We often make statements that hold for real or complex vectors: x ∈ F^n, where F stands for R or C.

  6. Basics: Notation
◮ The inner product of two n-vectors in C^n:
(x, y) = Σ_{i=1}^{n} x_i ȳ_i = y^* x.
◮ y^* = (ȳ_1, ȳ_2, ..., ȳ_n): conjugate transposition of complex vectors.
◮ x and y are orthogonal, x ⊥ y, if x^* y = 0.
◮ Norm in F^n (Euclidean norm or 2-norm):
‖x‖ = (x, x)^{1/2} = (Σ_{i=1}^{n} |x_i|^2)^{1/2}.
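The following is a minimal sketch, assuming NumPy and arbitrary illustrative vectors, of the inner product, norm, and orthogonality test just defined; note that np.vdot conjugates its first argument, which matches the convention (x, y) = y^* x:

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j, 0 + 1j])
y = np.array([2 - 1j, 1 + 1j, 4 + 0j])

# Inner product (x, y) = sum_i x_i * conj(y_i) = y^* x
inner = np.sum(x * np.conj(y))
# np.vdot(a, b) computes sum_i conj(a_i) * b_i, so vdot(y, x) matches y^* x
assert np.isclose(inner, np.vdot(y, x))

# Euclidean norm ||x|| = (x, x)^{1/2}
norm_x = np.sqrt(np.vdot(x, x).real)
assert np.isclose(norm_x, np.linalg.norm(x))

# Orthogonality: x ⊥ y  iff  x^* y = 0
print("x orthogonal to y:", np.isclose(np.vdot(x, y), 0.0))
```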

  7. Basics: Notation
◮ A ∈ F^{m×n} ⇐⇒ A = [a_{ij}], 1 ≤ i ≤ m, 1 ≤ j ≤ n, a_{ij} ∈ F.
◮ The Hermitian transpose of A is A^* ∈ F^{n×m} with (A^*)_{ij} = ā_{ji}.
For square matrices:
◮ A ∈ F^{n×n} is called Hermitian ⇐⇒ A^* = A.
◮ A real Hermitian matrix is called symmetric.
◮ U ∈ F^{n×n} is called unitary ⇐⇒ U^{-1} = U^*.
◮ Real unitary matrices are called orthogonal.
◮ A ∈ F^{n×n} is called normal ⇐⇒ A^*A = AA^*.
Both Hermitian and unitary matrices are normal.
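A small sketch, assuming NumPy, of predicate functions for the properties just defined (the helper names and the tolerance are illustrative choices, not part of the lecture):

```python
import numpy as np

def is_hermitian(A, tol=1e-12):
    """A^* = A (symmetric in the real case)."""
    return np.allclose(A.conj().T, A, atol=tol)

def is_unitary(A, tol=1e-12):
    """U^{-1} = U^*, i.e. U^* U = I."""
    return np.allclose(A.conj().T @ A, np.eye(A.shape[0]), atol=tol)

def is_normal(A, tol=1e-12):
    """A^* A = A A^*."""
    return np.allclose(A.conj().T @ A, A @ A.conj().T, atol=tol)

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])                                          # Hermitian example
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 4)))  # orthogonal example

print(is_hermitian(A), is_unitary(A), is_normal(A))   # True False True
print(is_hermitian(Q), is_unitary(Q), is_normal(Q))   # False True True
```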

  8. Basics: Notation
◮ Norm of a matrix (matrix norm induced by the vector norm):
‖A‖ := max_{x≠0} ‖Ax‖/‖x‖ = max_{‖x‖=1} ‖Ax‖.
◮ The condition number of a nonsingular matrix:
κ(A) = ‖A‖ ‖A^{-1}‖.
U unitary =⇒ ‖Ux‖ = ‖x‖ for all x =⇒ κ(U) = 1.
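A short numerical check of these statements, assuming NumPy and random illustrative matrices: the induced 2-norm and condition number, and the fact that κ(U) = 1 for a unitary (here orthogonal) U:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Induced 2-norm: max_x ||Ax|| / ||x||; kappa(A) = ||A|| * ||A^{-1}||
norm_A = np.linalg.norm(A, 2)
kappa_A = norm_A * np.linalg.norm(np.linalg.inv(A), 2)
print(np.isclose(kappa_A, np.linalg.cond(A, 2)))              # True

# A unitary (orthogonal) matrix preserves lengths, so kappa(U) = 1
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
x = rng.standard_normal(5)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))   # True
print(np.isclose(np.linalg.cond(U, 2), 1.0))                  # True
```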

  9. Basics: Statement of the problem
The (standard) eigenvalue problem: Given a square matrix A ∈ F^{n×n}, find scalars λ ∈ C and vectors x ∈ C^n, x ≠ 0, such that
A x = λ x,   (1)
i.e., such that
(A − λI) x = 0   (2)
has a nontrivial (nonzero) solution. We are looking for numbers λ such that A − λI is singular.
Let the pair (λ, x) be a solution of (1) or (2). Then
◮ λ is called an eigenvalue of A,
◮ x is called an eigenvector corresponding to λ.
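For a small dense matrix, the eigenpairs of (1) can be computed with a LAPACK-backed routine; this sketch, assuming NumPy and an arbitrary test matrix, verifies the residual Ax − λx for each returned pair:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# eigenvalues lam[k] and eigenvectors vecs[:, k] with A vecs[:, k] = lam[k] vecs[:, k]
lam, vecs = np.linalg.eig(A)

for k in range(A.shape[0]):
    x = vecs[:, k]
    # residual ||A x - lambda x|| should be at machine-precision level
    print(lam[k], np.linalg.norm(A @ x - lam[k] * x))
```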

  10. Basics: Statement of the problem
◮ (λ, x) is called an eigenpair of A.
◮ The set σ(A) of all eigenvalues of A is called the spectrum of A.
◮ The set of all eigenvectors corresponding to an eigenvalue λ, together with the vector 0, forms a linear subspace of C^n called the eigenspace of λ.
◮ The eigenspace of λ is the null space of λI − A: N(λI − A).
◮ The dimension of N(λI − A) is called the geometric multiplicity g(λ) of λ.
◮ An eigenvalue λ is a root of the characteristic polynomial
χ(λ) := det(λI − A) = λ^n + a_{n−1} λ^{n−1} + · · · + a_0.
The multiplicity of λ as a root of χ is called the algebraic multiplicity m(λ) of λ.
◮ 1 ≤ g(λ) ≤ m(λ) ≤ n for all λ ∈ σ(A), A ∈ F^{n×n}.
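A sketch, assuming NumPy and a hand-picked 3×3 example, contrasting geometric and algebraic multiplicity in a case where they differ (g(2) = 1 < m(2) = 3):

```python
import numpy as np

# lambda = 2 has algebraic multiplicity 3: the characteristic polynomial is (lambda - 2)^3 ...
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
n = A.shape[0]

# ... but geometric multiplicity 1: dim N(lambda I - A) = n - rank(lambda I - A)
g = n - np.linalg.matrix_rank(lam * np.eye(n) - A)
print("geometric multiplicity g(2) =", g)                     # 1

# algebraic multiplicity: count computed eigenvalues close to lambda
m = np.sum(np.isclose(np.linalg.eigvals(A), lam))
print("algebraic multiplicity m(2) =", m)                     # 3
```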

  11. Basics: Statement of the problem
◮ y is called a left eigenvector corresponding to λ if
y^* A = λ y^*.
◮ A left eigenvector of A is a right eigenvector of A^* corresponding to the eigenvalue λ̄:
A^* y = λ̄ y.
◮ If A is an upper triangular matrix, a_{ik} = 0 for i > k, then
det(λI − A) = ∏_{i=1}^{n} (λ − a_{ii}).
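Two quick illustrations, assuming SciPy/NumPy and arbitrary test matrices: left eigenvectors via scipy.linalg.eig(..., left=True), and the eigenvalues of an upper triangular matrix being its diagonal entries:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0, 0.0],
              [0.5, 1.0, 3.0],
              [0.0, 0.5, 1.0]])

# left=True also returns left eigenvectors y with y^* A = lambda y^*
lam, Y, X = eig(A, left=True, right=True)
y = Y[:, 0]
print(np.linalg.norm(y.conj() @ A - lam[0] * y.conj()))          # ~ 0

# A^* y = conj(lambda) y: a left eigenvector of A is a right eigenvector of A^*
print(np.linalg.norm(A.conj().T @ y - np.conj(lam[0]) * y))      # ~ 0

# Upper triangular matrix: eigenvalues are the diagonal entries
T = np.triu(np.arange(1.0, 10.0).reshape(3, 3))
print(np.sort(np.linalg.eigvals(T)))                             # [1. 5. 9.]
```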

  12. Basics: Statement of the problem
(Generalized) eigenvalue problem: Given two square matrices A, B ∈ F^{n×n}, find scalars λ ∈ C and vectors x ∈ C^n, x ≠ 0, such that
A x = λ B x,   (3)
or, equivalently, such that
(A − λB) x = 0   (4)
has a nontrivial solution. The pair (λ, x) is a solution of (3) or (4).
◮ λ is called an eigenvalue of A relative to B,
◮ x is called an eigenvector of A relative to B corresponding to λ,
◮ (λ, x) is called an eigenpair of A relative to B,
◮ The set σ(A; B) of all eigenvalues of (3) is called the spectrum of A relative to B.
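A minimal sketch of the generalized problem (3), assuming SciPy and illustrative matrices A and B; scipy.linalg.eig accepts a second matrix and solves Ax = λBx:

```python
import numpy as np
from scipy.linalg import eig

# A symmetric "stiffness-like" matrix and a symmetric positive definite "mass-like" B
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
B = np.diag([1.0, 2.0, 3.0])

lam, X = eig(A, B)          # solves A x = lambda B x

for k in range(3):
    x = X[:, k]
    # residual ||A x - lambda B x|| should be tiny for each eigenpair
    print(lam[k].real, np.linalg.norm(A @ x - lam[k] * (B @ x)))
```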

  13. Basics: Similarity transformations
A matrix A is similar to a matrix C, A ∼ C, ⇐⇒ there is a nonsingular matrix S such that
S^{-1} A S = C.   (5)
The mapping A → S^{-1} A S is called a similarity transformation.
Theorem. Similar matrices have equal eigenvalues with equal multiplicities. If (λ, x) is an eigenpair of A and C = S^{-1} A S, then (λ, S^{-1} x) is an eigenpair of C.
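A numerical check of the theorem, assuming NumPy and random matrices: the spectrum is invariant under a similarity transformation, and eigenvectors transform as S^{-1}x:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))          # nonsingular with probability 1

C = np.linalg.solve(S, A @ S)            # C = S^{-1} A S without forming S^{-1}

# same spectrum (compare sorted eigenvalues)
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_C = np.sort_complex(np.linalg.eigvals(C))
print(np.allclose(eig_A, eig_C))         # True

# (lambda, x) eigenpair of A  =>  (lambda, S^{-1} x) eigenpair of C
lam, X = np.linalg.eig(A)
x = X[:, 0]
y = np.linalg.solve(S, x)
print(np.linalg.norm(C @ y - lam[0] * y))   # ~ 0
```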

  14. Basics: Similarity transformations (cont.)
Proof: A x = λ x and C = S^{-1} A S  =⇒  C S^{-1} x = S^{-1} A S S^{-1} x = λ S^{-1} x.
Hence A and C have equal eigenvalues, and their geometric multiplicities are not changed by the similarity transformation. Since
det(λI − C) = det(λ S^{-1} S − S^{-1} A S) = det(S^{-1} (λI − A) S) = det(S^{-1}) det(λI − A) det(S) = det(λI − A),
the characteristic polynomials of A and C are equal, and hence also the algebraic eigenvalue multiplicities are equal.

  15. Basics: Similarity transformations
Unitary similarity transformations: Two matrices A and C are called unitarily similar (orthogonally similar) if the S in C = S^{-1} A S = S^* A S is unitary (orthogonal).
Reasons for the importance of unitary similarity transformations:
1. U is unitary =⇒ ‖U‖ = ‖U^{-1}‖ = 1 =⇒ κ(U) = 1. Hence, if C = U^{-1} A U = U^* A U, then ‖C‖ = ‖A‖.
2. If A is perturbed by δA (e.g., roundoff errors introduced when storing the entries of A in finite-precision arithmetic), then
U^* (A + δA) U = C + δC,   ‖δC‖ = ‖δA‖.
Hence, errors (perturbations) in A are not amplified by a unitary similarity transformation. This is in contrast to arbitrary similarity transformations.
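A sketch, assuming NumPy and illustrative data, contrasting a unitary (orthogonal) similarity with an ill-conditioned one: the perturbation norm is preserved in the unitary case but can be amplified by roughly κ(S) otherwise:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
dA = 1e-8 * rng.standard_normal((n, n))      # small perturbation (e.g. roundoff)

# Unitary (orthogonal) similarity: ||dC|| = ||dA||
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
dC_unitary = U.T @ dA @ U                    # U^* (A + dA) U - U^* A U
print(np.linalg.norm(dC_unitary, 2) / np.linalg.norm(dA, 2))   # ~ 1

# Ill-conditioned similarity: the perturbation can be amplified by up to kappa(S)
S = np.diag(10.0 ** np.arange(n))            # kappa(S) = 1e4
dC_general = np.linalg.solve(S, dA @ S)      # S^{-1} dA S
print(np.linalg.norm(dC_general, 2) / np.linalg.norm(dA, 2))   # >> 1
print(np.linalg.cond(S, 2))                  # 1e4
```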
