  1. On a New Proof of the Faber-Manteuffel Theorem. Petr Tichý, joint work with Jörg Liesen and Vance Faber. Institute of Computer Science, Academy of Sciences of the Czech Republic. June 2, 2008, Zeuthen, Germany. Householder Symposium XVII.

  2. Outline: 1. Introduction, 2. Formulation of the problem, 3. The Faber-Manteuffel theorem, 4. The ideas of a new proof.

  4. Krylov subspace methods: Basis. Methods based on projection onto the Krylov subspaces $K_j(A, v) \equiv \operatorname{span}(v, Av, \ldots, A^{j-1}v)$, $j = 1, 2, \ldots$, where $A \in \mathbb{R}^{n \times n}$ and $v \in \mathbb{R}^n$.

  5. Krylov subspace methods: Basis. Methods based on projection onto the Krylov subspaces $K_j(A, v) \equiv \operatorname{span}(v, Av, \ldots, A^{j-1}v)$, $j = 1, 2, \ldots$, where $A \in \mathbb{R}^{n \times n}$ and $v \in \mathbb{R}^n$. Each method must generate a basis of $K_j(A, v)$. The trivial choice $v, Av, \ldots, A^{j-1}v$ is computationally infeasible (recall the Power Method). For numerical stability: a well-conditioned basis. For computational efficiency: a short recurrence.

  6. Krylov subspace methods: Basis. Methods based on projection onto the Krylov subspaces $K_j(A, v) \equiv \operatorname{span}(v, Av, \ldots, A^{j-1}v)$, $j = 1, 2, \ldots$, where $A \in \mathbb{R}^{n \times n}$ and $v \in \mathbb{R}^n$. Each method must generate a basis of $K_j(A, v)$. The trivial choice $v, Av, \ldots, A^{j-1}v$ is computationally infeasible (recall the Power Method). For numerical stability: a well-conditioned basis. For computational efficiency: a short recurrence. Best of both worlds: an orthogonal basis computed by a short recurrence.
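To make the conditioning remark concrete, here is a minimal numerical sketch (not from the slides; the matrix, vector, and dimensions are arbitrary illustrative choices). It compares the condition number of the trivial Krylov basis $[v, Av, \ldots, A^{j-1}v]$ with that of an orthonormal basis of the same subspace.

```python
import numpy as np

# Illustrative sketch (arbitrary test data): the "trivial" Krylov basis
# v, Av, ..., A^{j-1}v becomes ill conditioned very quickly, whereas an
# orthonormal basis of the same subspace is perfectly conditioned.
rng = np.random.default_rng(0)
n, j = 100, 15
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

K = np.empty((n, j))
K[:, 0] = v
for k in range(1, j):
    K[:, k] = A @ K[:, k - 1]             # monomial (power-method-like) basis

Q, _ = np.linalg.qr(K)                    # orthonormal basis of the same subspace
print("cond(monomial basis):    %.2e" % np.linalg.cond(K))
print("cond(orthonormal basis): %.2e" % np.linalg.cond(Q))
```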

  7. Optimal Krylov subspace methods with short recurrences. CG (1952), MINRES, SYMMLQ (1975): based on three-term recurrences $r_{j+1} = \gamma_j A r_j - \alpha_j r_j - \beta_j r_{j-1}$.

  8. Optimal Krylov subspace methods with short recurrences. CG (1952), MINRES, SYMMLQ (1975): based on three-term recurrences $r_{j+1} = \gamma_j A r_j - \alpha_j r_j - \beta_j r_{j-1}$; generate an orthogonal (or $A$-orthogonal) Krylov subspace basis.

  9. Optimal Krylov subspace methods with short recurrences. CG (1952), MINRES, SYMMLQ (1975): based on three-term recurrences $r_{j+1} = \gamma_j A r_j - \alpha_j r_j - \beta_j r_{j-1}$; generate an orthogonal (or $A$-orthogonal) Krylov subspace basis; optimal in the sense that they minimize some error norm: $\|x - x_j\|_A$ in CG, $\|x - x_j\|_{A^T A} = \|r_j\|$ in MINRES, $\|x - x_j\|$ in SYMMLQ (here $x_j \in x_0 + A K_j(A, r_0)$).

  10. Optimal Krylov subspace methods with short recurrences. CG (1952), MINRES, SYMMLQ (1975): based on three-term recurrences $r_{j+1} = \gamma_j A r_j - \alpha_j r_j - \beta_j r_{j-1}$; generate an orthogonal (or $A$-orthogonal) Krylov subspace basis; optimal in the sense that they minimize some error norm: $\|x - x_j\|_A$ in CG, $\|x - x_j\|_{A^T A} = \|r_j\|$ in MINRES, $\|x - x_j\|$ in SYMMLQ (here $x_j \in x_0 + A K_j(A, r_0)$). An important assumption on $A$: $A$ is symmetric (MINRES, SYMMLQ) and positive definite (CG).
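As a hedged illustration of the displayed recurrence (a sketch only: $\gamma_j$ is fixed to 1, which gives the unnormalized symmetric Lanczos process rather than the exact CG/MINRES/SYMMLQ implementations), the following snippet generates mutually orthogonal Krylov basis vectors for a symmetric $A$ from a three-term recurrence.

```python
import numpy as np

# Sketch: for a symmetric A, the three-term recurrence
#   w_{j+1} = A w_j - alpha_j w_j - beta_j w_{j-1}
# (the displayed recurrence with gamma_j = 1) produces mutually orthogonal
# vectors spanning the Krylov subspaces; this is the unnormalized Lanczos process.
rng = np.random.default_rng(1)
n, steps = 50, 10
M = rng.standard_normal((n, n))
A = M + M.T                               # symmetric test matrix
v = rng.standard_normal(n)

W = [v]
for j in range(steps):
    w = W[-1]
    Aw = A @ w
    alpha = (Aw @ w) / (w @ w)
    w_new = Aw - alpha * w
    if j > 0:
        w_prev = W[-2]
        beta = (Aw @ w_prev) / (w_prev @ w_prev)
        w_new = w_new - beta * w_prev
    W.append(w_new)

W = np.column_stack(W)
Wn = W / np.linalg.norm(W, axis=0)        # normalize columns for the check only
G = Wn.T @ Wn
print("max |<w_i, w_j>| / (||w_i|| ||w_j||), i != j:",
      np.abs(G - np.eye(G.shape[0])).max())
```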

  11. Gene Golub. By the end of the 1970s it was unknown whether such methods also existed for general unsymmetric $A$. Gatlinburg VIII (now Householder VIII) was held in Oxford from July 5 to 11, 1981: "A prize of $500 has been offered by Gene Golub for the construction of a 3-term conjugate gradient like descent method for non-symmetric real matrices or a proof that there can be no such method." [Photo: G. H. Golub, 1932–2007]

  12. What kind of method Golub had in mind. We want to solve $Ax = b$ using a CG-like descent method: the error is minimized in some given inner product norm, $\|\cdot\|_B = \langle \cdot, \cdot \rangle_B^{1/2}$.

  13. What kind of method Golub had in mind. We want to solve $Ax = b$ using a CG-like descent method: the error is minimized in some given inner product norm, $\|\cdot\|_B = \langle \cdot, \cdot \rangle_B^{1/2}$. Starting from $x_0$, compute $x_{j+1} = x_j + \alpha_j p_j$, $j = 0, 1, \ldots$, where $p_j$ is a direction vector, $\alpha_j$ is a scalar (to be determined), and $\operatorname{span}\{p_0, \ldots, p_j\} = K_{j+1}(A, r_0)$, $r_0 = b - Ax_0$.

  14. What kind of method Golub had in mind. We want to solve $Ax = b$ using a CG-like descent method: the error is minimized in some given inner product norm, $\|\cdot\|_B = \langle \cdot, \cdot \rangle_B^{1/2}$. Starting from $x_0$, compute $x_{j+1} = x_j + \alpha_j p_j$, $j = 0, 1, \ldots$, where $p_j$ is a direction vector, $\alpha_j$ is a scalar (to be determined), and $\operatorname{span}\{p_0, \ldots, p_j\} = K_{j+1}(A, r_0)$, $r_0 = b - Ax_0$. $\|x - x_{j+1}\|_B$ is minimal iff $\alpha_j = \frac{\langle x - x_j, p_j \rangle_B}{\langle p_j, p_j \rangle_B}$ and $\langle p_j, p_i \rangle_B = 0$ for $i < j$.

  15. What kind of method Golub had in mind. We want to solve $Ax = b$ using a CG-like descent method: the error is minimized in some given inner product norm, $\|\cdot\|_B = \langle \cdot, \cdot \rangle_B^{1/2}$. Starting from $x_0$, compute $x_{j+1} = x_j + \alpha_j p_j$, $j = 0, 1, \ldots$, where $p_j$ is a direction vector, $\alpha_j$ is a scalar (to be determined), and $\operatorname{span}\{p_0, \ldots, p_j\} = K_{j+1}(A, r_0)$, $r_0 = b - Ax_0$. $\|x - x_{j+1}\|_B$ is minimal iff $\alpha_j = \frac{\langle x - x_j, p_j \rangle_B}{\langle p_j, p_j \rangle_B}$ and $\langle p_j, p_i \rangle_B = 0$ for $i < j$. Hence $p_0, \ldots, p_j$ has to be a $B$-orthogonal basis of $K_{j+1}(A, r_0)$.
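The following sketch mimics this descent method on small test data (illustrative only: $B$ is an arbitrary SPD matrix, the $B$-orthogonal directions are built by explicit Gram-Schmidt rather than by any short recurrence, and the exact solution is used here only to form $\alpha_j$ and the error norm). It checks that the $B$-norm of the error decreases monotonically and is driven toward zero.

```python
import numpy as np

# Sketch of the CG-like descent method described above (illustrative test data).
rng = np.random.default_rng(2)
n = 20
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * rng.standard_normal((n, n))  # nonsingular
M = rng.standard_normal((n, n))
B = M.T @ M + np.eye(n)                    # SPD matrix defining <.,.>_B
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)                  # reference solution

ip = lambda u, w: u @ B @ w                # B-inner product

x_j = np.zeros(n)
r0 = b - A @ x_j
p_list = []
errors = [np.sqrt(ip(x - x_j, x - x_j))]
k_vec = r0.copy()                          # current Krylov direction
for j in range(n):
    p = k_vec.copy()
    for q in p_list:                       # B-orthogonalize against previous p_i
        p = p - (ip(p, q) / ip(q, q)) * q
    p_list.append(p)
    alpha = ip(x - x_j, p) / ip(p, p)      # optimal step length in the B-norm
    x_j = x_j + alpha * p
    errors.append(np.sqrt(ip(x - x_j, x - x_j)))
    k_vec = A @ k_vec
    k_vec = k_vec / np.linalg.norm(k_vec)  # rescale to avoid overflow

errors = np.array(errors)
print("monotone decrease:", bool(np.all(np.diff(errors) <= 1e-12 * errors[0])))
print("error reduction:  ", errors[-1] / errors[0])
```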

  16. Faber and Manteuffel, 1984. Faber and Manteuffel gave the answer in 1984: for a general matrix $A$ there exists no short recurrence for generating orthogonal Krylov subspace bases. What are the details of this statement?

  17. Outline: 1. Introduction, 2. Formulation of the problem, 3. The Faber-Manteuffel theorem, 4. The ideas of a new proof.

  18. Formulation of the problem: B-inner product, input, and notation. Without loss of generality, $B = I$. Otherwise change the basis: $\langle x, y \rangle_B = \langle B^{1/2} x, B^{1/2} y \rangle$, $\hat{A} \equiv B^{1/2} A B^{-1/2}$, $\hat{v} \equiv B^{1/2} v$.

  19. Formulation of the problem: B-inner product, input, and notation. Without loss of generality, $B = I$. Otherwise change the basis: $\langle x, y \rangle_B = \langle B^{1/2} x, B^{1/2} y \rangle$, $\hat{A} \equiv B^{1/2} A B^{-1/2}$, $\hat{v} \equiv B^{1/2} v$. Input data: $A \in \mathbb{C}^{n \times n}$, a nonsingular matrix; $v \in \mathbb{C}^n$, an initial vector.

  20. Formulation of the problem: B-inner product, input, and notation. Without loss of generality, $B = I$. Otherwise change the basis: $\langle x, y \rangle_B = \langle B^{1/2} x, B^{1/2} y \rangle$, $\hat{A} \equiv B^{1/2} A B^{-1/2}$, $\hat{v} \equiv B^{1/2} v$. Input data: $A \in \mathbb{C}^{n \times n}$, a nonsingular matrix; $v \in \mathbb{C}^n$, an initial vector. Notation: $d_{\min}(A)$ ... the degree of the minimal polynomial of $A$; $d = d(A, v)$ ... the grade of $v$ with respect to $A$, the smallest $d$ such that $K_d(A, v)$ is invariant under multiplication with $A$.
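As a small illustration of the notion of grade (a sketch with arbitrary test data, not from the slides), $d(A, v)$ can be computed as the first index $d$ at which the next Krylov direction already lies in $K_d(A, v)$.

```python
import numpy as np

# Sketch: compute the grade d(A, v) as the smallest d such that K_d(A, v)
# is invariant under multiplication with A, i.e. the first d for which the
# next Krylov direction already lies in span(v, Av, ..., A^{d-1} v).
def grade(A, v, tol=1e-10):
    n = A.shape[0]
    basis = []                       # orthonormal basis of the current K_d(A, v)
    w = v.astype(float)
    for d in range(1, n + 1):
        for q in basis:              # orthogonalize the new Krylov direction
            w = w - (q @ w) * q
        if np.linalg.norm(w) <= tol * np.linalg.norm(v):
            return d - 1             # the new direction was already in K_{d-1}(A, v)
        basis.append(w / np.linalg.norm(w))
        w = A @ basis[-1]            # next Krylov direction
    return n

# Example: if v is an eigenvector the grade is 1; generically it is n.
A = np.diag([1.0, 2.0, 3.0, 4.0])
print(grade(A, np.array([1.0, 0, 0, 0])))   # eigenvector -> grade 1
print(grade(A, np.array([1.0, 1, 1, 1])))   # generic vector -> grade 4
```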

  21. Formulation of the problem: Our goal. Generate a basis $v_1, \ldots, v_d$ of $K_d(A, v)$ such that: 1. $\operatorname{span}\{v_1, \ldots, v_j\} = K_j(A, v)$, for $j = 1, \ldots, d$; 2. $\langle v_i, v_j \rangle = 0$, for $i \neq j$, $i, j = 1, \ldots, d$.

  22. Formulation of the problem: Our goal. Generate a basis $v_1, \ldots, v_d$ of $K_d(A, v)$ such that: 1. $\operatorname{span}\{v_1, \ldots, v_j\} = K_j(A, v)$, for $j = 1, \ldots, d$; 2. $\langle v_i, v_j \rangle = 0$, for $i \neq j$, $i, j = 1, \ldots, d$. Arnoldi's method, the standard way of generating the orthogonal basis (no normalization, for convenience): $v_1 \equiv v$, $v_{j+1} = A v_j - \sum_{i=1}^{j} h_{i,j} v_i$, where $h_{i,j} = \frac{\langle A v_j, v_i \rangle}{\langle v_i, v_i \rangle}$, $j = 1, \ldots, d-1$.
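Here is a minimal sketch of this unnormalized Arnoldi variant on an arbitrary test matrix (the function name is ad hoc); it also checks the matrix relation and the diagonality of $V_d^* V_d$ stated on the next slide.

```python
import numpy as np

# Sketch of Arnoldi's method exactly as written above: no normalization,
# v_{j+1} = A v_j - sum_i h_{i,j} v_i with h_{i,j} = <A v_j, v_i> / <v_i, v_i>.
def arnoldi_unnormalized(A, v, d):
    n = A.shape[0]
    V = np.zeros((n, d), dtype=complex)
    H = np.zeros((d, d - 1), dtype=complex)
    V[:, 0] = v
    for j in range(d - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w) / np.vdot(V[:, i], V[:, i])
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = 1.0                   # unnormalized: unit subdiagonal
        V[:, j + 1] = w
    return V, H

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)
V, H = arnoldi_unnormalized(A, v, n)

G = V.conj().T @ V
off = np.abs(G - np.diag(np.diag(G))).max()
print("relative off-diagonal of V* V:", off / np.abs(np.diag(G)).max())
print("A V_{d-1} = V_d H holds?", np.allclose(A @ V[:, :n-1], V @ H))
```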

  23. Formulation of the problem: Arnoldi's method, matrix formulation. In matrix notation, $v_1 = v$,
$$A \underbrace{[v_1, \ldots, v_{d-1}]}_{\equiv V_{d-1}} = \underbrace{[v_1, \ldots, v_d]}_{\equiv V_d} \underbrace{\begin{bmatrix} h_{1,1} & \cdots & h_{1,d-1} \\ 1 & \ddots & \vdots \\ & \ddots & h_{d-1,d-1} \\ & & 1 \end{bmatrix}}_{\equiv H_{d,d-1}},$$
$V_d^* V_d$ is diagonal, $d = \dim K_n(A, v)$.
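For concreteness (a small worked instance, not on the original slide), with $d = 3$ the relation reads
$$A[v_1, v_2] = [v_1, v_2, v_3]\begin{bmatrix} h_{1,1} & h_{1,2} \\ 1 & h_{2,2} \\ 0 & 1 \end{bmatrix},$$
i.e. $A v_1 = h_{1,1} v_1 + v_2$ and $A v_2 = h_{1,2} v_1 + h_{2,2} v_2 + v_3$.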

  24. Formulation of the problem: Optimal short recurrences (definition; Liesen and Strakoš, 2008). $A$ admits an optimal $(s+2)$-term recurrence if, for any $v$, $H_{d,d-1}$ is at most $(s+2)$-band Hessenberg, and for at least one $v$, $H_{d,d-1}$ is $(s+2)$-band Hessenberg. In the relation $A V_{d-1} = V_d H_{d,d-1}$ this means that each column of $H_{d,d-1}$ has nonzero entries only in the $s+1$ positions on and directly above the diagonal and in the unit subdiagonal position, i.e. $h_{i,j} = 0$ for $i < j - s$.
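For example (an illustrative reading of the definition, not on the slide), $s = 1$ means $h_{i,j} = 0$ for $i < j - 1$, so $H_{d,d-1}$ is tridiagonal and each column of $A V_{d-1} = V_d H_{d,d-1}$ reads $A v_j = h_{j-1,j} v_{j-1} + h_{j,j} v_j + v_{j+1}$, an optimal 3-term recurrence.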

  25. Formulation of the problem: Basic question. What are sufficient and necessary conditions for $A$ to admit an optimal $(s+2)$-term recurrence?

  26. Formulation of the problem: Basic question. What are sufficient and necessary conditions for $A$ to admit an optimal $(s+2)$-term recurrence? In other words, how can we characterize the matrices $A$ such that, for any $v$, Arnoldi's method applied to $A$ and $v$ generates an orthogonal basis via a short recurrence of length $s+2$?

  27. Formulation of the problem: Basic question. What are sufficient and necessary conditions for $A$ to admit an optimal $(s+2)$-term recurrence? In other words, how can we characterize the matrices $A$ such that, for any $v$, Arnoldi's method applied to $A$ and $v$ generates an orthogonal basis via a short recurrence of length $s+2$? Example of sufficiency: if $A^* = A$, then $s = 1$ and $A$ admits an optimal 3-term recurrence.
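To see the sufficiency example numerically (a sketch reusing the unnormalized Arnoldi recurrence from slide 22, with small random test matrices and ad hoc helper names), the snippet below reports the smallest $s$ for which the computed $H_{d,d-1}$ is numerically $(s+2)$-band Hessenberg: a symmetric matrix gives $s = 1$, while a generic nonsymmetric matrix needs the full recurrence ($s = d - 2$).

```python
import numpy as np

# Sketch: run the unnormalized Arnoldi recurrence and report the smallest s
# such that h_{i,j} = 0 (numerically) for all i < j - s.
def arnoldi_H(A, v):
    n = A.shape[0]
    V = np.zeros((n, n), dtype=complex)
    H = np.zeros((n, n - 1), dtype=complex)
    V[:, 0] = v
    for j in range(n - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w) / np.vdot(V[:, i], V[:, i])
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = 1.0
        V[:, j + 1] = w
    return H

def observed_s(H, tol=1e-8):
    scale = np.abs(H).max()
    s = 0
    for j in range(H.shape[1]):
        for i in range(j + 1):       # entries on and above the diagonal
            if abs(H[i, j]) > tol * scale:
                s = max(s, j - i)
    return s

rng = np.random.default_rng(4)
n = 6
M = rng.standard_normal((n, n))
v = rng.standard_normal(n)
print("symmetric A:    s =", observed_s(arnoldi_H(M + M.T, v)))   # expect 1
print("nonsymmetric A: s =", observed_s(arnoldi_H(M, v)))         # expect n - 2
```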
