
SLEPc: Scalable Library for Eigenvalue Problem Computations, by Jose E. Roman (PowerPoint presentation)



1. SLEPc: Scalable Library for Eigenvalue Problem Computations
Jose E. Roman
High Performance Networking and Computing Group (GRyCAP)
Universidad Politécnica de Valencia, Spain
August, 2005

2. Tutorial Outline
1. Introduction: Motivating Examples, Background on Eigenproblems
2. Basic Description: Overview of SLEPc, Basic Usage
3. Further Details: EPS Options, Spectral Transformation
4. Advanced Features
5. Concluding Remarks

3. Eigenproblems: Motivation
Large sparse eigenvalue problems are among the most demanding calculations in scientific computing. Example application areas:
◮ Dynamic structural analysis (e.g. civil engineering)
◮ Stability analysis (e.g. control engineering)
◮ Eigenfunction determination (e.g. quantum chemistry)
◮ Bifurcation analysis (e.g. fluid dynamics)
◮ Statistics / information retrieval (e.g. Google's PageRank)

4. Motivating Example 1: Nuclear Engineering
Modal analysis of nuclear reactor cores. Objectives:
◮ Improve safety
◮ Reduce operation costs
Lambda Modes Equation: $L\phi = \frac{1}{\lambda} M\phi$
Target: modes associated to the largest λ
◮ Criticality (eigenvalues)
◮ Prediction of instabilities and transient analysis (eigenvectors)
[Figure: computed reactor core modes]

5. Motivating Example 1: Nuclear Engineering (cont'd)
Discretized eigenproblem:
$$\begin{bmatrix} L_{11} & 0 \\ -L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} \psi_1 \\ \psi_2 \end{bmatrix} = \frac{1}{\lambda} \begin{bmatrix} M_{11} & M_{12} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \psi_1 \\ \psi_2 \end{bmatrix}$$
Can be restated as $N\psi_1 = \lambda L_{11}\psi_1$, with $N = M_{11} + M_{12} L_{22}^{-1} L_{21}$
◮ Generalized eigenvalue problem
◮ The matrix $N$ should not be computed explicitly
◮ In some applications, many successive problems are solved
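The requirement that N not be formed explicitly is the textbook use case for a matrix-free (shell) operator: an iterative eigensolver only needs the action of N on a vector. A small NumPy/SciPy sketch of the idea; the random blocks M11, M12, L21, L22 are placeholders rather than reactor matrices, and for simplicity it solves the standard problem Nv = θv instead of the generalized one:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, eigs

rng = np.random.default_rng(0)
n1, n2 = 40, 30
M11 = rng.standard_normal((n1, n1))
M12 = rng.standard_normal((n1, n2))
L21 = rng.standard_normal((n2, n1))
L22 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)  # keep L22 well conditioned

lu = lu_factor(L22)  # factor L22 once; reuse it in every matrix-vector product

def apply_N(v):
    # N v = M11 v + M12 (L22^{-1} (L21 v)) -- L22 is never inverted explicitly
    return M11 @ v + M12 @ lu_solve(lu, L21 @ v)

N = LinearOperator((n1, n1), matvec=apply_N)
vals, vecs = eigs(N, k=3, which="LM")  # 3 largest-magnitude eigenvalues of N
```

This is the same design SLEPc supports through PETSc shell matrices: the factorization of L22 is computed once and each operator application costs one solve plus two multiplies.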

6. Motivating Example 2: Computational Electromagnetics
Objective: analysis of resonant cavities. Source-free wave equations:
$$\nabla \times (\mu_r^{-1} \nabla \times \hat{E}) - \kappa_0^2 \, \varepsilon_r \hat{E} = 0$$
$$\nabla \times (\varepsilon_r^{-1} \nabla \times \hat{H}) - \kappa_0^2 \, \mu_r \hat{H} = 0$$
Target: a few smallest nonzero eigenfrequencies
Discretization: 1st order edge finite elements (tetrahedral)
Generalized Eigenvalue Problem: $Ax = \kappa_0^2 Bx$
◮ A and B are large and sparse, possibly complex
◮ A is (complex) symmetric and positive semi-definite
◮ B is (complex) symmetric and positive definite

7. Motivating Example 2: Comp. Electromagnetics (cont'd)
Matrix A has a high-dimensional null space, N(A)
◮ The problem $Ax = \kappa_0^2 Bx$ has many zero eigenvalues
◮ These eigenvalues should be avoided during the computation
$$\underbrace{\lambda_1, \lambda_2, \ldots, \lambda_k}_{=0}, \; \underbrace{\lambda_{k+1}, \lambda_{k+2}, \ldots, \lambda_n}_{\text{Target}}$$
Eigenfunctions associated to 0 are irrotational electric fields, $\hat{E} = -\nabla\Phi$. This allows the computation of a basis of N(A).
Constrained Eigenvalue Problem: $Ax = \kappa_0^2 Bx$ subject to $C^T Bx = 0$, where the columns of C span N(A)
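Why the constraint removes the unwanted eigenvalues: for a symmetric pencil, the eigenvectors with λ ≠ 0 are automatically B-orthogonal to N(A), so restricting the problem to the constraint space leaves exactly the nonzero part of the spectrum. A dense NumPy/SciPy toy check (the matrices are random stand-ins, not FEM matrices):

```python
import numpy as np
from scipy.linalg import eigh, null_space

rng = np.random.default_rng(1)
n, k = 12, 3
G = rng.standard_normal((n - k, n))
A = G.T @ G                    # symmetric PSD with a k-dimensional null space
H = rng.standard_normal((n, n))
B = H @ H.T + n * np.eye(n)    # symmetric positive definite

C = null_space(A)              # columns of C span N(A)
Z = null_space(C.T @ B)        # basis of the constraint space {x : C^T B x = 0}

# project the pencil onto the constraint space: the zero eigenvalues disappear
vals = eigh(Z.T @ A @ Z, Z.T @ B @ Z, eigvals_only=True)
```

The reduced pencil has only n − k eigenvalues, all strictly positive, matching the nonzero eigenvalues of the original pencil.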

8. Facts Observed from the Examples
◮ Many formulations: not all eigenproblems are formulated simply as $Ax = \lambda x$ or $Ax = \lambda Bx$; we have to account for spectral transformations, block-structured problems, constrained problems, etc.
◮ Wanted solutions: many ways of specifying which solutions must be sought; we have to account for different extreme eigenvalues as well as interior ones
◮ Various problem characteristics: problems can be real/complex, Hermitian/non-Hermitian
Goal: provide a uniform, coherent way of addressing these problems

9. Background on Eigenvalue Problems
Consider the following eigenvalue problems:
Standard Eigenproblem: $Ax = \lambda x$
Generalized Eigenproblem: $Ax = \lambda Bx$
where
◮ λ is a (complex) scalar: eigenvalue
◮ x is a (complex) vector: eigenvector
◮ Matrices A and B can be real or complex
◮ Matrices A and B can be symmetric (Hermitian) or not
◮ Typically, B is symmetric positive (semi-)definite
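On a small dense matrix both formulations can be checked directly with LAPACK-based routines; this is only a toy illustration, since SLEPc targets the large sparse case where dense factorizations are not an option:

```python
import numpy as np
from scipy.linalg import eig, eigh

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
lam, X = eig(A)                 # standard problem: A x = lambda x

S = A + A.T                     # symmetric
H = rng.standard_normal((n, n))
B = H @ H.T + n * np.eye(n)     # symmetric positive definite
mu, Y = eigh(S, B)              # generalized Hermitian problem: S y = mu B y
```

For the non-symmetric A, `eig` may return complex eigenpairs; for the symmetric pencil (S, B) with B positive definite, `eigh` guarantees real eigenvalues, exactly the "typical" case noted on the slide.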

10. Solution of the Eigenvalue Problem
There are n eigenvalues (counted with their multiplicities).
Partial eigensolution: nev solutions (nev = number of requested eigenvalues/eigenvectors, i.e. eigenpairs):
$$\lambda_0, \lambda_1, \ldots, \lambda_{nev-1} \in \mathbb{C}, \qquad x_0, x_1, \ldots, x_{nev-1} \in \mathbb{C}^n$$
Different requirements:
◮ Compute a few of the dominant eigenvalues (largest magnitude)
◮ Compute a few $\lambda_i$'s with smallest or largest real parts
◮ Compute all $\lambda_i$'s in a certain region of the complex plane
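A sparse eigensolver never computes all n eigenvalues; it returns only the nev requested ones, selected by a criterion such as largest magnitude. A SciPy/ARPACK sketch on a diagonal matrix whose spectrum is known to be 1, 2, ..., n:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

n, nev = 200, 4
A = diags(np.arange(1.0, n + 1.0), 0, format="csr")  # eigenvalues are exactly 1..n
vals, vecs = eigs(A, k=nev, which="LM")              # nev dominant (largest magnitude)
```

Changing `which` to `"SR"` or `"LR"` selects smallest or largest real parts, mirroring the requirements listed above.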

11. Single-Vector Methods
The following algorithm converges to the dominant eigenpair $(\lambda_1, x_1)$, where $|\lambda_1| > |\lambda_2| \geq \cdots \geq |\lambda_n|$
Power Method:
  Set $y = v_0$
  For $k = 1, 2, \ldots$
    $v = y / \|y\|_2$
    $y = Av$
    $\theta = v^* y$
    Check convergence
  end
Notes:
◮ Only needs two vectors
◮ Deflation schemes to find subsequent eigenpairs
◮ Slow convergence (rate proportional to $|\lambda_2/\lambda_1|$)
◮ Fails if $|\lambda_1| = |\lambda_2|$
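The loop above translates almost line for line into NumPy. A minimal sketch; testing convergence via the residual norm ‖Av − θv‖ is one common choice, not the only one:

```python
import numpy as np

def power_method(A, v0, maxit=1000, tol=1e-10):
    """Power Method from the slide: converges to the dominant eigenpair."""
    y = v0.astype(float)
    theta = 0.0
    for _ in range(maxit):
        v = y / np.linalg.norm(y)   # v = y / ||y||_2
        y = A @ v                   # y = A v
        theta = v @ y               # theta = v* y (Rayleigh quotient)
        if np.linalg.norm(y - theta * v) <= tol * abs(theta):  # check convergence
            break
    return theta, v
```

For A = diag(5, 2, 1) and a starting vector of ones, this converges to θ ≈ 5, and the convergence factor per iteration is |λ2/λ1| = 2/5.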

12. Variants of the Power Method
Shifted Power Method
◮ Example: a Markov chain problem has two dominant eigenvalues $\lambda_1 = 1$, $\lambda_2 = -1$ ⇒ the Power Method fails!
◮ Solution: apply the Power Method to the matrix $A + \sigma I$
Inverse Iteration
◮ Observation: the eigenvectors of A and $A^{-1}$ are identical
◮ The Power Method on $(A - \sigma I)^{-1}$ will compute the eigenvalues closest to σ
Rayleigh Quotient Iteration (RQI)
◮ Similar to Inverse Iteration, but updating σ in each iteration
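Inverse Iteration is just the Power Method applied to (A − σI)⁻¹; in practice one factors A − σI once and performs a solve per iteration, rather than forming the inverse. A dense sketch:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, sigma, v0, maxit=200, tol=1e-10):
    """Power Method on (A - sigma I)^{-1}: finds the eigenvalue of A closest to sigma."""
    n = A.shape[0]
    lu = lu_factor(A - sigma * np.eye(n))  # factor once; each iteration is one solve
    v = v0 / np.linalg.norm(v0)
    theta = sigma
    for _ in range(maxit):
        y = lu_solve(lu, v)                # y = (A - sigma I)^{-1} v
        v = y / np.linalg.norm(y)
        theta = v @ (A @ v)                # Rayleigh quotient with the original A
        if np.linalg.norm(A @ v - theta * v) <= tol * abs(theta):
            break
    return theta, v
```

With A = diag(1, 3, 10) and σ = 2.9, the iteration converges to the eigenvalue closest to the shift, θ ≈ 3. RQI would additionally refactor with σ = θ at each step, trading factorization cost for (locally) much faster convergence.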

13. Spectral Transformation
A general technique that can be used in many methods:
$$Ax = \lambda x \;\Longrightarrow\; Tx = \theta x$$
In the transformed problem:
◮ The eigenvectors are not altered
◮ The eigenvalues are modified by a simple relation
◮ Convergence is usually improved (better separation)
Example transformations:
◮ Shift of origin: $T_S = A + \sigma I$
◮ Shift-and-invert: $T_{SI} = (A - \sigma I)^{-1}$
Drawback: T is not computed explicitly; linear solves are used instead
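The "simple relation" is θ = λ + σ for the shift of origin and θ = 1/(λ − σ) for shift-and-invert, with the eigenvectors shared. A small dense check; the inverse is formed explicitly only because the example is tiny, whereas real solvers perform linear solves:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
lam_true = np.array([-2.0, -1.0, 1.0, 2.0, 3.0, 4.0])
A = Q @ np.diag(lam_true) @ Q.T  # symmetric matrix with a known spectrum
sigma = 0.5

lam = np.linalg.eigvalsh(A)
theta_S = np.linalg.eigvalsh(A + sigma * np.eye(6))                  # T_S = A + sigma I
theta_SI = np.linalg.eigvalsh(np.linalg.inv(A - sigma * np.eye(6)))  # T_SI = (A - sigma I)^{-1}
```

Note how shift-and-invert maps the eigenvalue nearest σ (here λ = 1) to the largest-magnitude θ, which is exactly why it accelerates convergence to interior eigenvalues.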

14. Invariant Subspace
A subspace S is called an invariant subspace of A if $AS \subset S$
◮ If $A \in \mathbb{C}^{n \times n}$, $V \in \mathbb{C}^{n \times k}$, and $H \in \mathbb{C}^{k \times k}$ satisfy $AV = VH$, then $S \equiv \mathcal{C}(V)$ (the column space of V) is an invariant subspace of A
Objective: build an invariant subspace to extract the eigensolutions
Partial Schur Decomposition: $AQ = QR$
◮ Q has nev columns, which are orthonormal
◮ R is a nev × nev upper (quasi-)triangular matrix
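The relation AQ = QR can be checked on a dense example by truncating a full complex Schur decomposition; iterative eigensolvers build such a partial factorization directly, without ever computing the full one:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
n, nev = 8, 3
A = rng.standard_normal((n, n))

# full complex Schur form: A Qfull = Qfull T, with T upper triangular
T, Qfull = schur(A, output="complex")
# the leading nev columns give a partial Schur decomposition A Q = Q R
Q, R = Qfull[:, :nev], T[:nev, :nev]
```

Truncation works because T is upper triangular, so A applied to the first nev Schur vectors never leaks into the remaining columns.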

15. Projection Methods
The general scheme of a projection method:
1. Build an orthonormal basis of a certain subspace
2. Project the original problem onto this subspace
3. Use the solution of the projected problem to compute an approximate invariant subspace
◮ Different methods use different subspaces
◮ Subspace Iteration: $A^k X$
◮ Arnoldi, Lanczos: $\mathcal{K}_m(A, v_1) = \mathrm{span}\{v_1, Av_1, \ldots, A^{m-1}v_1\}$
◮ Dimension of the subspace: ncv (number of column vectors)
◮ Restart and deflation are necessary until nev solutions have converged
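Steps 1–2 are exactly what the Arnoldi process does: it builds an orthonormal basis V of the Krylov subspace and the small projected matrix H = VᵀAV, whose eigenvalues (Ritz values) approximate the extreme eigenvalues of A. A plain NumPy sketch without the restarting and deflation a production solver needs:

```python
import numpy as np

def arnoldi(A, v1, m):
    """Orthonormal basis V of K_m(A, v1) and projected Hessenberg matrix H = V^T A V."""
    n = v1.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt against the basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(5)
n, m = 100, 50
A = np.diag(np.arange(1.0, n + 1.0))  # known spectrum 1..100
V, H = arnoldi(A, rng.standard_normal(n), m)
ritz = np.linalg.eigvals(H)           # Ritz values of the projected problem
```

Even though m is half of n, the largest Ritz value is already an excellent approximation of the largest eigenvalue; this rapid convergence at the ends of the spectrum is what makes projection methods preferable to single-vector iterations.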

16. Summary
Observations to be added to the previous ones:
◮ The solver computes only nev eigenpairs
◮ Internally, it works with ncv vectors
◮ Single-vector methods are very limited; projection methods are preferred
◮ Internally, solvers can be quite complex (deflation, restart, ...)
◮ Spectral transformations can be used irrespective of the solver
◮ Repeated linear solves may be required
Goal: hide eigensolver complexity and separate the spectral transform

17. Executive Summary
SLEPc: Scalable Library for Eigenvalue Problem Computations
A general library for solving large-scale sparse eigenproblems on parallel computers
◮ For standard and generalized eigenproblems
◮ For real and complex arithmetic
◮ For Hermitian or non-Hermitian problems
Current version: 2.3.0 (released July 2005)
http://www.grycap.upv.es/slepc

18. SLEPc and PETSc
SLEPc extends PETSc for solving eigenvalue problems.
PETSc: Portable, Extensible Toolkit for Scientific Computation
◮ Software for the solution of PDEs on parallel computers
◮ A freely available and supported research code
◮ Usable from C, C++, Fortran 77/90
◮ Focus on abstraction, portability, interoperability, ...
◮ Object-oriented design (encapsulation, inheritance and polymorphism)
◮ Current version: 2.3.0
http://www.mcs.anl.gov/petsc
SLEPc inherits all the good properties of PETSc.
