  1. Moments, Krylov subspace methods and model reduction. Zdeněk Strakoš, Academy of Sciences and Charles University, Prague, http://www.cs.cas.cz/~strakos. 8th GAMM ALA Workshop, TU Hamburg-Harburg, September 2008.

  2. Thanks to the collaborators: Gene H. Golub, Anne Greenbaum, Gérard Meurant, Dianne P. O’Leary, Chris Paige, Petr Tichý, Jörg Liesen. Z. Strakoš

  3. Historical remark on iterative methods.
1950 - Iterative methods for elliptic PDE, Ph.D. Thesis by D. Young at Harvard (published in 1954)
1951, 1952 - Lanczos algorithm and the conjugate gradient method by C. Lanczos, M. Hestenes and E. Stiefel
1962 - Book Matrix Iterative Analysis by R. Varga
1971 - Book Iterative Solution of Large Linear Systems by D. Young
1971 - Lecture of J. Reid in Dundee (published in 1971)
1971 - Ph.D. Thesis of C. C. Paige at the University of London (published in 1972, 1976 and 1980)

  4. Projections onto Krylov subspaces. Let A ∈ C^{N×N}, Ax = b, r_0 = b − Ax_0, and consider the reduced problem

A_n x_n = b_n .

Here x_n approximates the solution x using the projection onto the low-dimensional subspaces

K_n(A, r_0) ≡ span{r_0, A r_0, ..., A^{n−1} r_0}.
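The projection idea above can be sketched numerically: the following builds an orthonormal basis of K_n(A, r_0) by Gram-Schmidt orthogonalization of the Krylov sequence. The random symmetric test matrix and the dimension n = 4 are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def krylov_basis(A, r0, n):
    """Orthonormal basis of K_n(A, r0) = span{r0, A r0, ..., A^{n-1} r0}."""
    V = np.zeros((len(r0), n))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, n):
        w = A @ V[:, j - 1]             # next Krylov direction
        for i in range(j):              # orthogonalize against earlier basis vectors
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A + A.T                             # symmetric example
r0 = rng.standard_normal(8)
V = krylov_basis(A, r0, 4)              # columns span K_4(A, r0), V.T @ V ≈ I
```

This is plain Gram-Schmidt for clarity; the Lanczos process of the later slides does the same job for Hermitian A with a three-term recurrence.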

  5. Nonlinearity and moments. The projection process using Krylov subspaces is highly nonlinear in A and it depends on r_0,

x_n ∈ K_n(A, r_0) ≡ span{r_0, A r_0, ..., A^{n−1} r_0}.

The subspaces K_n(A, r_0) accumulate the dominant information of A with respect to r_0. Unlike in the power method for computing the single dominant eigenspace, here all the information accumulated along the way is used; see Parlett (1980), Example 12.1.1. The idea of projections using Krylov subspaces is in a fundamental way linked with the problem of moments. The story goes back to Gauss (1814).

  6. Outline
1. Krylov subspace methods as matching moments model reduction
2. Convergence of CG in the presence of close eigenvalues
3. Gauss-Christoffel quadrature can be sensitive to small perturbations of the distribution function
4. CG in finite precision arithmetic

  7. 1 : Matching moments. Consider a non-decreasing distribution function ω(λ), λ ≥ 0, with the moments given by the Riemann-Stieltjes integral

ξ_k = ∫_0^∞ λ^k dω(λ), k = 0, 1, ....

Find the distribution function ω^(n)(λ) with n points of increase λ_i^(n) which matches the first 2n moments of the distribution function ω(λ),

∫_0^∞ λ^k dω^(n)(λ) ≡ Σ_{i=1}^n ω_i^(n) (λ_i^(n))^k = ξ_k, k = 0, 1, ..., 2n − 1.

  8. 1 : Gauss-Christoffel quadrature. Clearly,

∫_0^∞ λ^k dω(λ) = Σ_{i=1}^n ω_i^(n) (λ_i^(n))^k, k = 0, 1, ..., 2n − 1,

represents the n-point Gauss-Christoffel quadrature; see C. F. Gauss, Methodus nova integralium valores per approximationem inveniendi (1814); C. G. J. Jacobi, Über Gauss’ neue Methode, die Werthe der Integrale näherungsweise zu finden (1826); and the description given in H. H. Goldstine, A History of Numerical Analysis from the 16th through the 19th Century (1977). With no loss of generality we assume ξ_0 = 1.

  9. 1 : Model reduction via matching moments I. Gauss-Christoffel quadrature formulation:

∫_0^∞ f(λ) dω(λ) ≈ Σ_{i=1}^n ω_i^(n) f(λ_i^(n)),

where the reduced model given by the distribution function ω^(n) with n points of increase matches the first 2n moments,

∫_0^∞ λ^k dω(λ) = Σ_{i=1}^n ω_i^(n) (λ_i^(n))^k, k = 0, 1, ..., 2n − 1.

  10. 1 : Stieltjes recurrence. Let p_1(λ) ≡ 1, p_2(λ), ..., p_{n+1}(λ) be the first n + 1 orthonormal polynomials corresponding to the distribution function ω(λ). Then, writing P_n(λ) = (p_1(λ), ..., p_n(λ))^T,

λ P_n(λ) = T_n P_n(λ) + δ_{n+1} p_{n+1}(λ) e_n

represents the Stieltjes recurrence (1883-4), with the Jacobi matrix

        [ γ_1  δ_2                ]
        [ δ_2  γ_2   .            ]
T_n ≡   [       .    .    δ_n     ] ,   δ_l > 0.
        [           δ_n   γ_n     ]

  11. 1 : Matrix computation: Lanczos ≡ Stieltjes. In matrix computations, T_n results from the Lanczos process (1951) applied to T_n starting with e_1. Therefore p_1(λ) ≡ 1, p_2(λ), ..., p_n(λ) are orthonormal with respect to the inner product

(p_s, p_t) ≡ Σ_{i=1}^n |(z_i^(n), e_1)|^2 p_s(θ_i^(n)) p_t(θ_i^(n)),

where z_i^(n) is the orthonormal eigenvector of T_n corresponding to the eigenvalue θ_i^(n), and p_{n+1}(λ) has the roots θ_i^(n), i = 1, ..., n. Consequently,

λ_i^(n) = θ_i^(n),   ω_i^(n) = |(z_i^(n), e_1)|^2;

Golub and Welsch (1969), ..., Meurant and S., Acta Numerica (2006).
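The statement that the quadrature nodes and weights come from the eigenpairs of the Jacobi matrix is the Golub-Welsch observation, and it is easy to check numerically. As an illustrative assumption (not from the talk), the Jacobi matrix below is the one for the normalized Legendre polynomials on [−1, 1] with dω(λ) = dλ/2, so that ξ_0 = 1; its eigenvalues must reproduce the classical Gauss-Legendre nodes.

```python
import numpy as np

def gauss_from_jacobi(gammas, deltas):
    """Nodes and weights of the Gauss quadrature encoded by a Jacobi matrix.

    Nodes are the eigenvalues theta_i of T_n; weights are |(z_i, e_1)|^2,
    the squared first components of the normalized eigenvectors.
    """
    T = np.diag(gammas) + np.diag(deltas, 1) + np.diag(deltas, -1)
    theta, Z = np.linalg.eigh(T)
    return theta, Z[0, :] ** 2

# Jacobi matrix of the orthonormal Legendre polynomials, dω(λ) = dλ/2 on [-1, 1]
n = 4
gammas = np.zeros(n)
deltas = np.array([k / np.sqrt(4.0 * k * k - 1.0) for k in range(1, n)])
nodes, weights = gauss_from_jacobi(gammas, deltas)
# weights sum to ξ_0 = 1, and the rule integrates polynomials of degree
# up to 2n - 1 exactly against dω(λ)
```

The same function applied to the T_n produced by the Lanczos process yields the Gauss quadrature of the distribution function ω(λ) of the next slide.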

  12. 1 : Linear algebraic equations. Given Ax = b with an HPD A ∈ C^{N×N}, r_0 = b − Ax_0, w_1 = r_0 / ‖r_0‖. Assume, for simplicity of notation, dim(K_n(A, r_0)) = n. Consider the spectral decomposition A = S diag(λ_i) S*, S = [s_1, ..., s_N], where for clarity of exposition we assume that the eigenvalues are distinct, 0 < λ_1 < ... < λ_N. Then A and w_1 (b, x_0) determine the distribution function ω(λ) with N points of increase λ_i and weights ω_i = |(s_i, w_1)|^2, i = 1, ..., N.
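This construction can be checked directly: the weights ω_i = |(s_i, w_1)|^2 form a probability distribution (ξ_0 = 1), and every moment of ω(λ) equals a quadratic form in A. The random symmetric positive definite test matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6.0 * np.eye(6)          # HPD test matrix, distinct eigenvalues
lam, S = np.linalg.eigh(A)             # spectral decomposition A = S diag(λ_i) S*
w1 = rng.standard_normal(6)
w1 /= np.linalg.norm(w1)

omega = (S.T @ w1) ** 2                # weights ω_i = |(s_i, w_1)|^2
# moments of ω(λ): Σ_i ω_i λ_i^k = w_1* A^k w_1 for every k
moments_omega = [omega @ lam**k for k in range(5)]
moments_A = [w1 @ np.linalg.matrix_power(A, k) @ w1 for k in range(5)]
```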

  13. 1 : Distribution function ω(λ). [Figure: the piecewise-constant distribution function ω(λ) on the interval [ζ, ξ], with steps of heights ω_1, ω_2, ..., ω_N at the points of increase λ_1 < λ_2 < ... < λ_N, rising from 0 to 1.]

  14. 1 : Model reduction via matching moments II. Matrix formulation:

∫_0^∞ λ^k dω(λ) = Σ_{j=1}^N ω_j (λ_j)^k = w_1* A^k w_1,

Σ_{i=1}^n ω_i^(n) (λ_i^(n))^k = Σ_{i=1}^n ω_i^(n) (θ_i^(n))^k = e_1^T T_n^k e_1.

Matching the first 2n moments therefore means

w_1* A^k w_1 ≡ e_1^T T_n^k e_1, k = 0, 1, ..., 2n − 1.
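The moment-matching identity w_1* A^k w_1 = e_1^T T_n^k e_1, k = 0, ..., 2n − 1, can be verified with a bare-bones Lanczos loop. The three-term recurrence below and the random HPD test matrix are illustrative; only the identity itself is from the slide.

```python
import numpy as np

def lanczos(A, w1, n):
    """n steps of the Lanczos process; returns the Jacobi matrix T_n and W_n."""
    N = len(w1)
    W = np.zeros((N, n + 1))
    T = np.zeros((n, n))
    W[:, 0] = w1 / np.linalg.norm(w1)
    beta = 0.0
    for j in range(n):
        v = A @ W[:, j] - (beta * W[:, j - 1] if j > 0 else 0.0)
        alpha = W[:, j] @ v
        v -= alpha * W[:, j]
        beta = np.linalg.norm(v)
        T[j, j] = alpha
        if j + 1 < n:
            T[j, j + 1] = T[j + 1, j] = beta
        W[:, j + 1] = v / beta
    return T, W[:, :n]

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20.0 * np.eye(20)          # HPD test matrix
w1 = rng.standard_normal(20)
w1 /= np.linalg.norm(w1)

n = 4
T, W = lanczos(A, w1, n)
e1 = np.eye(n)[:, 0]
# the 4x4 matrix T reproduces the first 2n = 8 moments of the 20x20 matrix A
```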

  15. 1 : Conjugate gradients (CG) for Ax = b. The A-norm of the error is minimal,

‖x − x_n‖_A = min_{u ∈ x_0 + K_n(A, r_0)} ‖x − u‖_A;

see Elman, Silvester and Wathen (2005), p. 71. With the formulation via the Lanczos process, w_1 = r_0 / ‖r_0‖,

A W_n = W_n T_n + δ_{n+1} w_{n+1} e_n^T,   T_n = W_n* A W_n,

the CG approximation is given by

T_n y_n = ‖r_0‖ e_1,   x_n = x_0 + W_n y_n.
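A minimal sketch of this CG formulation, assuming a random HPD test problem (an illustrative choice): run n Lanczos steps on A with w_1 = r_0/‖r_0‖, solve the small tridiagonal system T_n y_n = ‖r_0‖ e_1, and set x_n = x_0 + W_n y_n. The A-norm of the error should decrease monotonically with n, and at n = N the exact solution is recovered (in exact arithmetic; finite precision behavior is the subject of a later part of the talk).

```python
import numpy as np

def cg_via_lanczos(A, b, x0, n):
    """CG approximation x_n from n Lanczos steps: T_n y_n = ||r0|| e_1."""
    r0 = b - A @ x0
    beta0 = np.linalg.norm(r0)
    N = len(b)
    W = np.zeros((N, n))
    T = np.zeros((n, n))
    W[:, 0] = r0 / beta0
    beta = 0.0
    for j in range(n):
        v = A @ W[:, j] - (beta * W[:, j - 1] if j > 0 else 0.0)
        alpha = W[:, j] @ v
        v -= alpha * W[:, j]
        T[j, j] = alpha
        if j + 1 < n:
            beta = np.linalg.norm(v)
            T[j, j + 1] = T[j + 1, j] = beta
            W[:, j + 1] = v / beta
    y = np.linalg.solve(T, beta0 * np.eye(n)[:, 0])   # T_n y_n = ||r0|| e_1
    return x0 + W @ y                                  # x_n = x_0 + W_n y_n

rng = np.random.default_rng(3)
M = rng.standard_normal((10, 10))
A = M @ M.T + 10.0 * np.eye(10)
b = rng.standard_normal(10)
x0 = np.zeros(10)
x = np.linalg.solve(A, b)

# A-norm of the error ||x - x_n||_A for increasing n
errs = []
for n in (1, 3, 5):
    e = x - cg_via_lanczos(A, b, x0, n)
    errs.append(np.sqrt(e @ A @ e))
x10 = cg_via_lanczos(A, b, x0, 10)         # n = N recovers x
```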

  16. 1 : Alternative descriptions.
● Stay with A, b, r_0, w_1 and work with the matrix formulation using the Lanczos process (CG) applied to A with w_1.
● Using the basis of eigenvectors S, the matrix formulation reduces to the mathematically equivalent polynomial formulation: Lanczos (CG) reduces to the Stieltjes process applied to the distribution function ω(λ).
In both descriptions the n-th step gives the Jacobi matrix T_n and the distribution function ω^(n)(λ). The relationship was pointed out by Hestenes and Stiefel (1952), ...; nice Ph.D. Thesis by Kent (1989, Stanford), book by B. Fischer (1996), paper by Fischer and Freund (1992).

  17. 1 : CG ≡ matrix formulation of the Gauss quadrature.

Ax = b, x_0                ←→   ∫_ζ^ξ λ^{−1} dω(λ)
     ↑                                 ↑
T_n y_n = ‖r_0‖ e_1,       ←→   Σ_{i=1}^n ω_i^(n) (θ_i^(n))^{−1}
x_n = x_0 + W_n y_n

with ω^(n)(λ) → ω(λ).

  18. 1 : Matching moments model reduction. For A HPD, CG (Lanczos) reduces at the step n the original model

Ax = b, r_0 = b − Ax_0

to

T_n y_n = ‖r_0‖ e_1,

such that the first 2n moments are matched,

w_1* A^k w_1 = e_1^T T_n^k e_1, k = 0, 1, ..., 2n − 1.

  19. 1 : Comments on literature. Proofs of results related to moments or model reduction are in the literature typically based on factorizations of the matrix of moments; Golub and Welsch (1969), Dahlquist, Golub and Nash (1978), ..., Kent (1989), ..., which is also true for Antoulas (2005). Moment matching techniques have been used for decades in computational physics and in computational chemistry, see Gordon (1968). The Gauss quadrature formulation related to the nonsymmetric Lanczos process and to the Arnoldi process was given by Freund and Hochbruck (1993), motivated by Fischer and Freund (1992). Gauss quadrature was formally extended to the complex plane by Saylor and Smolarski (2001), with motivation from inverse scattering problems in electromagnetics by Warnick (1997), ..., Golub, Stoll and Wathen (2008). Here we avoid using the matrix of moments, and do not need any formal generalization of the Gauss quadrature formulas to the complex plane.

  20. 1 : Vorobyev moment problem - 1958, 1965. Find a linear HPD operator A_n on K_n(A, r_0) such that

A_n w_1 = A w_1,
A_n (A w_1) = A_n^2 w_1 ≡ A^2 w_1,
...
A_n (A^{n−2} w_1) = A_n^{n−1} w_1 ≡ A^{n−1} w_1,
A_n (A^{n−1} w_1) = A_n^n w_1 ≡ Q_n (A^n w_1),

where Q_n projects onto K_n orthogonally to K_n.

  21. 1 : Matching moments model reduction. By construction,

w_1* A^k w_1 = w_1* A_n^k w_1, k = 0, ..., n − 1,

since K_n(A, w_1) = span{w_1, ..., A^{n−1} w_1}. The projection identity

Q_n (A^n w_1) − A_n^n w_1 = Q_n (A^n w_1 − A_n^n w_1) = 0

then gives (note that A is Hermitian)

w_1* A^k w_1 = w_1* A_n^k w_1, k = 0, 1, ..., 2n − 1.
