  1. Rational Krylov methods for linear and nonlinear eigenvalue problems
     Mele Giampaolo (mele@mail.dm.unipi.it), University of Pisa, 7 March 2014

  2. Outline
     - Arnoldi (and its variants) for linear eigenproblems
     - Rational Krylov algorithm for linear eigenproblems
     - Applications of the Rational Krylov algorithm to nonlinear eigenproblems:
       - Linearization by means of Hermite interpolation
       - Iterative projection methods

  4. Eigenvalue problem
     Definition of the problem: given a matrix-valued function A(·): C → C^{n×n}, find a pair (λ, x) ∈ C × C^n, with x ≠ 0, such that
         A(λ)x = 0.
     The number λ is called an eigenvalue and the vector x an eigenvector.
     In practical applications the task is to compute the eigenvalues lying in a subset Ω ⊂ C.

  5. Linear eigenvalue problem
     If A(λ) is linear in λ, the problem is called a linear eigenvalue problem. Let A, B ∈ C^{n×n} and
         A(λ) = A − λB.
     Then the linear eigenvalue problem
         Ax = λBx
     is the generalized eigenvalue problem; in case B = I (identity matrix) it reduces to the classic eigenvalue problem
         Ax = λx.
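
For a small dense problem the generalized eigenvalue problem can be solved directly. A minimal SciPy sketch (the matrices are arbitrary, for illustration only) computes all eigenpairs and checks the residual Ax − λBx:

    import numpy as np
    from scipy.linalg import eig

    # Arbitrary small example matrices, for illustration only.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    # Generalized eigenvalue problem A x = lambda B x.
    lam, X = eig(A, B)

    # Residual check: A x - lambda B x should be (numerically) zero.
    for l, x in zip(lam, X.T):
        print(l, np.linalg.norm(A @ x - l * (B @ x)))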

  6. Arnoldi's algorithm for the classic eigenvalue problem
     Classic eigenvalue problem: given A ∈ C^{n×n}, compute the pairs (λ, x) such that Ax = λx.
     Definition (Krylov subspace): given a vector x ∈ C^n and a natural number m,
         K_m(A, x) = span{ x, Ax, A^2 x, ..., A^{m−1} x }
     is the Krylov subspace.
     The idea is to project the matrix A onto the Krylov subspace and solve the projected problem.
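
As an aside (not on the slides): the Krylov subspace is the span of the columns of [x, Ax, ..., A^{m-1}x]. In practice this matrix is never formed, since its columns quickly become nearly linearly dependent; the Arnoldi process on the next slide builds an orthonormal basis instead. A tiny NumPy sketch with an arbitrary random matrix:

    import numpy as np

    def krylov_matrix(A, x, m):
        """Return the n-by-m matrix [x, Ax, A^2 x, ..., A^(m-1) x]."""
        cols = [x]
        for _ in range(m - 1):
            cols.append(A @ cols[-1])
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    x = rng.standard_normal(50)
    K = krylov_matrix(A, x, 6)          # its columns span K_6(A, x)
    print(np.linalg.matrix_rank(K))     # typically 6 for a generic A and x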

  7. Arnoldi's algorithm for the classic eigenvalue problem
     Gram–Schmidt orthogonalization: given x ∈ C^n, define
         v_1 := x / ‖x‖
         h_{i,j} := (A v_j) · v_i,   i = 1, ..., j
         w_{j+1} := A v_j − h_{1,j} v_1 − h_{2,j} v_2 − ... − h_{j,j} v_j
         h_{j+1,j} := ‖w_{j+1}‖
         v_{j+1} := w_{j+1} / h_{j+1,j}
     Then v_1, ..., v_m is an orthonormal basis of K_m(A, x).
     Arnoldi sequence: in matrix form,
         A V_m = V_{m+1} H_{m+1,m}
     where V_m = [v_1, ..., v_m] and H_{m+1,m} is upper Hessenberg.
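
A minimal NumPy sketch of this recurrence; the function name arnoldi and the choice of classical Gram-Schmidt with one reorthogonalization pass are illustrative, not prescribed by the slides. It returns V_{m+1} and H_{m+1,m} with A V_m = V_{m+1} H_{m+1,m}, and the later sketches reuse it.

    import numpy as np

    def arnoldi(A, x, m):
        """Arnoldi process: return V (n x (m+1)) with orthonormal columns and
        H ((m+1) x m) upper Hessenberg such that A @ V[:, :m] == V @ H."""
        n = x.shape[0]
        V = np.zeros((n, m + 1), dtype=complex)
        H = np.zeros((m + 1, m), dtype=complex)
        V[:, 0] = x / np.linalg.norm(x)
        for j in range(m):
            w = A @ V[:, j]
            H[:j + 1, j] = V[:, :j + 1].conj().T @ w   # h_{i,j} = (A v_j) . v_i
            w = w - V[:, :j + 1] @ H[:j + 1, j]
            # one reorthogonalization pass, to keep V orthonormal in floating point
            c = V[:, :j + 1].conj().T @ w
            H[:j + 1, j] += c
            w = w - V[:, :j + 1] @ c
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] == 0:
                # (lucky) breakdown: the Krylov subspace is invariant;
                # return the factorization built so far
                return V[:, :j + 1], H[:j + 1, :j]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H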

  8. Arnoldi's algorithm for the classic eigenvalue problem
     Observation: the matrix H_{m,m} is the projection of A onto K_m(A, x), that is,
         V_m^H A V_m = H_{m,m}.
     Definition: given an eigenpair (θ, s) of H_{m,m}, the value θ is called a Ritz value and the vector V_m s a Ritz vector.
     Proposition: if (θ, s) is an eigenpair of H_{m,m}, then
         A V_m s − θ V_m s = h_{m+1,m} s_m v_{m+1},
     where s_m is the last component of s. If |h_{m+1,m} s_m| is small, then (θ, V_m s) is a good approximation of an eigenpair of A.
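
Continuing the sketch above, the proposition can be checked numerically: the residual norm of a Ritz pair equals |h_{m+1,m} s_m|. The test matrix is a random example, for illustration only.

    import numpy as np
    # uses the arnoldi() sketch above

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200))
    m = 30
    V, H = arnoldi(A, rng.standard_normal(200), m)

    theta, S = np.linalg.eig(H[:m, :m])      # Ritz values and eigenvectors of H_{m,m}
    for k in range(3):
        s = S[:, k]
        lhs = np.linalg.norm(A @ (V[:, :m] @ s) - theta[k] * (V[:, :m] @ s))
        rhs = abs(H[m, m - 1] * s[-1])       # |h_{m+1,m} s_m|
        print(lhs, rhs)                      # the two numbers agree up to rounding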

  9. Arnoldi's algorithm for the classic eigenvalue problem
     Arnoldi's algorithm
     1: Choose a starting vector x
     2: for m = 1, ..., till convergence do
     3:    Compute the Arnoldi sequence A V_m = V_{m+1} H_{m+1,m}
     4:    Compute the eigenpairs (θ_i, y_i) of H_{m,m}
     5:    if |h_{m+1,m} (e_m^H y_i)| < tol then
     6:       Store (θ_i, V_m y_i) as an approximation of an eigenpair of A
     7:    end if
     8: end for
     Questions: How large must m be to get a good approximation of an eigenpair? How should the starting vector x be chosen? Which eigenpairs are approximated first?
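
A naive driver corresponding to this pseudocode, again reusing the arnoldi() sketch; the residual test of line 5 is applied to all Ritz pairs at once. The test matrix (an orthogonal similarity of a known diagonal matrix with three well-separated outer eigenvalues) is only an illustration.

    import numpy as np
    # uses the arnoldi() sketch above

    def arnoldi_eig(A, x, m, tol=1e-8):
        """Run m Arnoldi steps and return the Ritz pairs whose residual
        |h_{m+1,m} (e_m^H y)| is below tol (lines 4-7 of the pseudocode).
        Assumes no breakdown occurs."""
        V, H = arnoldi(A, x, m)
        theta, Y = np.linalg.eig(H[:m, :m])
        res = np.abs(H[m, m - 1] * Y[m - 1, :])
        ok = res < tol
        return theta[ok], V[:, :m] @ Y[:, ok]

    rng = np.random.default_rng(2)
    n = 400
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    D = np.diag(np.concatenate(([10.0, 8.0, 6.0], rng.uniform(-1.0, 1.0, n - 3))))
    A = Q @ D @ Q.T                          # known spectrum: 10, 8, 6 plus a cluster in [-1, 1]

    vals, vecs = arnoldi_eig(A, rng.standard_normal(n), m=60)
    print(np.sort(vals.real)[::-1][:3])      # the outermost eigenvalues 10, 8, 6 converge first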

  11. Convergence of Arnoldi's algorithm
     Theorem (Saad): let A be diagonalizable with eigenpairs (λ_i, u_i). If
         |λ_k − λ_1| > |λ_k − λ_j|   for all k ≠ 1, j ≠ 1,
     then λ_1 is the first eigenvalue to be approximated. Moreover, the closer the starting vector x is to the eigenvector u_1, the faster the convergence to u_1.
     In other words (under the hypothesis of the theorem), the outermost eigenvalues are approximated first. Heuristically, after a few steps the approximations to the eigenvalues start to converge linearly.

  13. Thick restart
     Problem: when the Arnoldi sequence grows too long, every step of the Arnoldi iteration gets slower; moreover, orthogonality is numerically lost.
     Thick restart: let A V_m = V_{m+1} H_{m+1,m} be an Arnoldi sequence and θ_1, ..., θ_k a subset of its Ritz values, at least one of which has not (numerically) converged yet. Then it is possible to build another Arnoldi sequence
         A W_k = W_{k+1} H̃_{k+1,k}
     whose Ritz values are θ_1, ..., θ_k. The generation of the new sequence is numerically stable since it is done using Householder transformations.
     The idea of thick restart is to select the Ritz values we want to refine and remove the others (lock the converged Ritz pairs, purge the unwanted ones).
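
The slides refer to a Householder-based construction; the sketch below instead restarts in the related Krylov-Schur style (a reordered Schur form of H_{m,m}), which also keeps a chosen set of Ritz values. It is only an illustration of the locking/purging idea, not the algorithm of the talk, and the helper name thick_restart is illustrative.

    import numpy as np
    from scipy.linalg import schur
    # uses the arnoldi() sketch above

    def thick_restart(V, H, k):
        """Keep the k largest-magnitude Ritz values (requires k < m).
        Input:  A V[:, :m] = V @ H with H of size (m+1) x m.
        Output: W (n x (k+1)) and S ((k+1) x k) with A W[:, :k] = W @ S,
        where the eigenvalues of S[:k, :k] are the kept Ritz values."""
        m = H.shape[1]
        Hm, h = H[:m, :m], H[m, m - 1]
        mags = np.sort(np.abs(np.linalg.eigvals(Hm)))
        cut = 0.5 * (mags[-k] + mags[-k - 1])             # threshold between kept and purged
        T, Z, sdim = schur(Hm, output='complex', sort=lambda x: abs(x) > cut)
        assert sdim == k
        W = np.column_stack([V[:, :m] @ Z[:, :k], V[:, m]])   # kept Schur vectors and v_{m+1}
        S = np.vstack([T[:k, :k], h * Z[m - 1, :k]])
        return W, S

    # Quick check on a random example: the restarted relation holds.
    rng = np.random.default_rng(5)
    A = rng.standard_normal((300, 300))
    V, H = arnoldi(A, rng.standard_normal(300), 30)
    W, S = thick_restart(V, H, k=10)
    print(np.linalg.norm(A @ W[:, :10] - W @ S))          # small, near machine precision

The compact factorization A W[:, :k] = W S can then be expanded again by Arnoldi steps, orthogonalizing each new vector against the columns of W, which is the restart.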

  15. Arnoldi's algorithm for the linear eigenvalue problem
     The linear eigenproblem Ax = λBx can be solved by applying Arnoldi's algorithm to the matrix B^{-1}A.
     The matrices are often sparse/structured, so B^{-1} is never formed explicitly: at each step of the algorithm a linear system with the matrix B must be solved, and the LU factorization of B can be computed once and for all.
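
A sparse sketch of this remark: B is factorized once with SuperLU and the operator B^{-1}A is applied through triangular solves, never by forming B^{-1}. The matrices and sizes are arbitrary; the arnoldi() sketch above is reused, and the factorization is taken complex so that it can be applied to its complex work vectors.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu
    # uses the arnoldi() sketch above

    n = 1000
    A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc')
    B = sp.diags(np.linspace(1.0, 2.0, n), 0, format='csc')

    lu_B = splu(B.astype(complex))          # LU factorization of B, computed once and for all

    class InvBA:
        """Applies B^{-1} A to a vector: one sparse solve per Arnoldi step."""
        def __matmul__(self, v):
            return lu_B.solve(A @ v)

    m = 40
    V, H = arnoldi(InvBA(), np.ones(n), m)
    theta = np.linalg.eigvals(H[:m, :m])    # Ritz values approximate eigenvalues of A x = lambda B x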

  16. Shift-and-invert Arnoldi's algorithm for the linear eigenvalue problem
     Proposition: if (θ, x) is an eigenpair of (A − σB)^{-1}B, then (σ + 1/θ, x) is an eigenpair of the linear problem Ax = λBx.
     Observation: if θ is one of the outermost eigenvalues of (A − σB)^{-1}B, then σ + 1/θ is one of the eigenvalues of the linear problem nearest to σ (Saad's theorem). This strategy can therefore be used to compute the eigenvalues of the linear problem near a point σ.
     If we want to compute the eigenvalues in Ω ⊂ C, we can select a few (equidistributed) points σ_0, ..., σ_t ∈ Ω and apply this approach to each of them.
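
Along the same lines, a sketch of shift-and-invert: factorize (A − σB) once, run Arnoldi on (A − σB)^{-1}B, and map each Ritz value θ back to σ + 1/θ. The helper name is illustrative; the commented call reuses the example pencil from the previous sketch.

    import numpy as np
    from scipy.sparse.linalg import splu
    # uses the arnoldi() sketch above

    def shift_invert_eigs(A, B, sigma, m=50, seed=3):
        """Approximate eigenvalues of A x = lambda B x near sigma by running
        Arnoldi on the operator (A - sigma B)^{-1} B."""
        lu = splu((A - sigma * B).astype(complex))     # one factorization per shift

        class Op:
            def __matmul__(self, v):
                return lu.solve(B @ v)

        n = A.shape[0]
        x = np.random.default_rng(seed).standard_normal(n)
        V, H = arnoldi(Op(), x, m)
        theta = np.linalg.eigvals(H[:m, :m])
        return sigma + 1.0 / theta                     # outermost theta -> eigenvalues nearest sigma

    # e.g. with the pencil (A, B) from the previous sketch:
    # lam = shift_invert_eigs(A, B, sigma=-1.5)

For reference, SciPy's ARPACK wrapper scipy.sparse.linalg.eigs(A, k, M=B, sigma=sigma) implements this shift-and-invert strategy, including the mapping back to the original eigenvalues.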

  18. Outline
     - Arnoldi (and its variants) for linear eigenproblems
     - Rational Krylov algorithm for linear eigenproblems
     - Applications of the Rational Krylov algorithm to nonlinear eigenproblems:
       - Linearization by means of Hermite interpolation
       - Iterative projection methods

  19. Rational Krylov algorithm for the linear eigenvalue problem
     Problem: to compute the eigenvalues near a set of points σ_0, ..., σ_t ∈ Ω, one needs to apply shift-and-invert Arnoldi to each σ_i.
     Theorem (Ruhe): in O(m³) operations it is possible to change the shift in the Arnoldi sequence; in particular,
         (A − σ_0 B)^{-1} B V_m = V_{m+1} H_{m+1,m}   ⟹   (A − σ_1 B)^{-1} B W_m = W_{m+1} H̃_{m+1,m},
     and moreover span(V_{m+1}) = span(W_{m+1}). These operations are numerically stable if σ_0 and σ_1 are far enough from the eigenvalues of the original problem.
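
A dense sketch of one way to realize the change of shift, based on the identity (A − σ_1 B)^{-1} B V_{m+1} (I_{m+1,m} − (σ_1 − σ_0) H_{m+1,m}) = V_{m+1} H_{m+1,m} and a full QR factorization. The helper name is illustrative, the test matrices are random, and, unlike Ruhe's full algorithm, the new reduced matrix is not brought back to Hessenberg form.

    import numpy as np
    # uses the arnoldi() sketch above

    def change_shift(V, H, s0, s1):
        """Given (A - s0 B)^{-1} B V[:, :m] = V @ H, return W, Ht such that
        (A - s1 B)^{-1} B W[:, :m] = W @ Ht and span(W) = span(V)."""
        m = H.shape[1]
        K = np.eye(m + 1, m) - (s1 - s0) * H           # I_{m+1,m} - (s1 - s0) H_{m+1,m}
        Q, R = np.linalg.qr(K, mode='complete')        # K = Q R, last row of R is zero
        W = V @ Q                                      # same column span as V
        Ht = Q.conj().T @ H @ np.linalg.inv(R[:m, :m])
        return W, Ht

    # Dense verification on random matrices (illustration only).
    rng = np.random.default_rng(4)
    n, m, s0, s1 = 120, 20, 0.3, 1.7
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    op0 = np.linalg.solve(A - s0 * B, B)               # (A - s0 B)^{-1} B, formed only for the test
    V, H = arnoldi(op0, rng.standard_normal(n), m)
    W, Ht = change_shift(V, H, s0, s1)
    op1 = np.linalg.solve(A - s1 * B, B)
    print(np.linalg.norm(op1 @ W[:, :m] - W @ Ht))     # small, near machine precision

In Ruhe's rational Krylov method the matrix H̃ is subsequently reduced back to Hessenberg form by unitary transformations (updating W accordingly), and the Arnoldi expansion then continues with the new shift σ_1.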
