Krylov subspace methods for eigenvalue problems
David S. Watkins (PowerPoint PPT Presentation)


SLIDE 1

Krylov subspace methods for eigenvalue problems

David S. Watkins

watkins@math.wsu.edu

Department of Mathematics, Washington State University

October 16, 2008

SLIDES 2–7

Problem: Linear Elasticity

Elastic deformation (3D, anisotropic, composite materials)
Singularities at cracks, interfaces
Lamé equations (spherical coordinates)
Separate the radial variable.
Get a quadratic eigenvalue problem:

    (λ²M + λG + K)v = 0,   M* = M > 0,   G* = −G,   K* = K < 0

SLIDES 8–12

Linear Elasticity, Continued

Discretize the θ and ϕ variables (finite element method):

    (λ²M + λG + K)v = 0,   Mᵀ = M > 0,   Gᵀ = −G,   Kᵀ = K < 0

A matrix quadratic eigenvalue problem (large, sparse).
Find a few smallest eigenvalues (and corresponding eigenvectors).
Respect the structure (symmetric/skew-symmetric).

SLIDES 13–14

Hamiltonian Structure

SLIDES 15–21

Reduction to First Order

λ²Mv + λGv + Kv = 0

Let w = λv, so that Mw = λMv. Then

    [ −K   0 ] [ v ]       [  G   M ] [ v ]
    [  0  −M ] [ w ]  − λ  [ −M   0 ] [ w ]  =  0

Ax − λBx = 0

symmetric/skew-symmetric

SLIDES 22–28

Reduction to Hamiltonian Matrix

A − λB   (symmetric/skew-symmetric)

Factor B = RᵀJR, where

    J = [  0   I ]
        [ −I   0 ]

(sometimes easy, always possible). Then the pencil transforms as

    A − λRᵀJR
    R⁻ᵀAR⁻¹ − λJ
    JᵀR⁻ᵀAR⁻¹ − λI

H = JᵀR⁻ᵀAR⁻¹ is Hamiltonian.

SLIDES 29–34

in our case . . .

    B = [  G   M ]
        [ −M   0 ]

    B = RᵀJR = [ I  −½G ] [  0   I ] [  I   0 ]
               [ 0    M ] [ −I   0 ] [ ½G   M ]

    H = [   I   0 ] [  0   M⁻¹ ] [   I   0 ]
        [ −½G   I ] [ −K    0  ] [ −½G   I ]

Do not compute H explicitly (nor M⁻¹).

SLIDES 35–41

Working with H

Krylov subspace methods just need to apply the operator x → Hx.

    H = [   I   0 ] [  0   M⁻¹ ] [   I   0 ]
        [ −½G   I ] [ −K    0  ] [ −½G   I ]

    H⁻¹ = [  I   0 ] [ 0   (−K)⁻¹ ] [  I   0 ]
          [ ½G   I ] [ M      0   ] [ ½G   I ]
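As an illustration of applying x → Hx through the three block factors, here is a sketch (not from the slides, assuming NumPy/SciPy). The sparse matrices are hypothetical stand-ins with the required structure; M is factored once with a sparse LU and never inverted, and H is never formed.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Stand-ins: M = M^T > 0, K = K^T < 0, G = -G^T (all sparse).
n = 200
rng = np.random.default_rng(1)
M = sp.diags(rng.uniform(1.0, 2.0, n)).tocsc()
K = -3.0 * sp.eye(n, format="csc")
G = sp.random(n, n, density=0.01, random_state=1).tocsc()
G = G - G.T

M_lu = splu(M)  # factor M once; reused for every operator application

def apply_H(x):
    """Apply H = [I 0; -G/2 I] [0 M^{-1}; -K 0] [I 0; -G/2 I] to x."""
    x1, x2 = x[:n], x[n:]
    y2 = x2 - 0.5 * (G @ x1)          # right factor
    z1 = M_lu.solve(y2)               # middle factor: solve M z1 = y2
    z2 = -(K @ x1)
    return np.concatenate([z1, z2 - 0.5 * (G @ z1)])  # left factor

x = rng.standard_normal(2 * n)
Hx = apply_H(x)
```

Each application costs two sparse matrix-vector products and one triangular solve pair from the stored LU factors of M.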
slide-42
SLIDE 42

Working with H−1

October 16, 2008 – p.

slide-43
SLIDE 43

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • October 16, 2008 – p.
slide-44
SLIDE 44

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

October 16, 2008 – p.

slide-45
SLIDE 45

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

Do not compute (−K)−1

October 16, 2008 – p.

slide-46
SLIDE 46

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

Do not compute (−K)−1 Cholesky decomposition:

October 16, 2008 – p.

slide-47
SLIDE 47

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

Do not compute (−K)−1 Cholesky decomposition: (−K) = RTR To compute w = −K−1v,

October 16, 2008 – p.

slide-48
SLIDE 48

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

Do not compute (−K)−1 Cholesky decomposition: (−K) = RTR To compute w = −K−1v, Solve (−K)w = v.

October 16, 2008 – p.

slide-49
SLIDE 49

Working with H−1

H−1 =

  • I

1 2G

I (−K)−1 M I

1 2G

I

  • x → H−1x

Do not compute (−K)−1 Cholesky decomposition: (−K) = RTR To compute w = −K−1v, Solve (−K)w = v.

RT Rw = v

Backsolve!

October 16, 2008 – p.
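In code, the two-backsolve step might look like this (a sketch assuming SciPy; `negK` is a random SPD stand-in for −K):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(2)
n = 50
C = rng.standard_normal((n, n))
negK = C @ C.T + n * np.eye(n)   # stand-in for -K (symmetric positive definite)

R = cholesky(negK)               # upper triangular, negK = R^T R
v = rng.standard_normal(n)

# w = (-K)^{-1} v via two triangular solves -- no explicit inverse:
y = solve_triangular(R, v, trans="T")  # forward solve  R^T y = v
w = solve_triangular(R, y)             # back solve     R w   = y
```

Each solve is O(nnz(R)), which is why keeping R sparse matters so much more than the sparsity of K⁻¹ itself.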

SLIDES 50–51

K = [spy plot of the sparsity pattern; nz = 670]

SLIDES 52–53

R = [spy plot of the sparsity pattern; nz = 896]

SLIDES 54–55

K⁻¹ = [spy plot of the sparsity pattern; nz = 21316]

SLIDES 56–62

Second Application

Nonlinear optics: a Schrödinger eigenvalue problem

    −(ℏ²/2m)∇²ψ + Vψ = λψ

Solve numerically (finite elements):

    Kv = λMv,   K = Kᵀ > 0,   M = Mᵀ > 0

The matrices are large and sparse.

SLIDES 63–72

Kv = λMv

Want a few smallest eigenvalues and associated eigenvectors. Invert the problem using the Cholesky decomposition K = RᵀR:

    RᵀRv = λMv
    R⁻ᵀMR⁻¹(Rv) = λ⁻¹(Rv)

With A = R⁻ᵀMR⁻¹ we have Aᵀ = A > 0. Apply x → Ax by backsolves; do not form A explicitly.
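The operator application above can be sketched as follows (not the slides' code; K and M are random stand-ins with the stated symmetry, assuming SciPy). A = R⁻ᵀMR⁻¹ is applied with one backsolve, one multiply by M, and one forward solve:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(3)
n = 60
C = rng.standard_normal((n, n))
K = C @ C.T + n * np.eye(n)            # stand-in K = K^T > 0
M = np.diag(rng.uniform(1.0, 2.0, n))  # stand-in M = M^T > 0

R = cholesky(K)                        # K = R^T R, R upper triangular

def apply_A(x):
    """Apply A = R^{-T} M R^{-1} without ever forming A."""
    y = solve_triangular(R, x)                 # y = R^{-1} x   (backsolve)
    z = M @ y                                  # z = M y
    return solve_triangular(R, z, trans="T")   # R^{-T} z       (forward solve)

x = rng.standard_normal(n)
Ax = apply_A(x)
```

The largest eigenvalues of A are the reciprocals λ⁻¹ of the smallest eigenvalues of the original pencil, which is exactly what a Krylov method finds fastest.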

SLIDES 73–77

What our applications have in common

large, sparse matrices
use of matrix factorization (Cholesky decomposition)
some kind of structure . . . not enough time to discuss this

SLIDES 78–81

Classification of Eigenvalue Problems

small
medium
large

SLIDES 82–87

Small Matrices

store conventionally
similarity transformations (QR algorithm)
get all eigenvalues/vectors
n ≈ 10³

SLIDES 88–92

Medium Matrices

store as a sparse matrix
no similarity transformations
matrix factorization okay
shift and invert
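Shift-and-invert is what ARPACK (and SciPy's wrapper around it) does when you pass `sigma`: it factors K − σM once and iterates with the inverted operator, so the eigenvalues nearest σ converge fastest. An illustrative sketch on a 1-D Laplacian stand-in (not a matrix from the talk):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stand-in pencil: K = discrete 1-D Laplacian (SPD), M = identity mass matrix.
n = 500
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1) ** 2
M = sp.eye(n, format="csc")

# Few smallest eigenvalues of K v = lambda M v via shift-and-invert at sigma = 0:
# the solver factors (K - sigma*M) once and runs Lanczos on the inverted operator.
vals, vecs = eigsh(K, k=4, M=M, sigma=0.0, which="LM")
```

For this discretization the computed values approximate the continuous eigenvalues (jπ)² of −u″ = λu on (0, 1).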

SLIDES 93–94

[spy plots of two sparsity patterns; nz = 670 and nz = 1815]

SLIDES 95–100

Medium Matrices, Continued

store the matrix factor as a sparse matrix
n ≈ 10⁵
get selected eigenvalues/vectors
Krylov subspace methods
Jacobi-Davidson methods

SLIDES 101–108

Large Matrices

store as a sparse matrix
no similarity transformations
no shift-and-invert
n ≈ 10⁷
get selected eigenvalues/vectors
Krylov subspace methods
Jacobi-Davidson methods

SLIDES 109–116

Krylov Subspace Methods

x → Ax   (example: A = R⁻ᵀMR⁻¹)

Pick a vector v:

    v, Av, A²v, A³v, . . .

Krylov subspace:

    Kⱼ(A, v) = span{v, Av, A²v, . . . , Aʲ⁻¹v}

Look in here for approximate eigenvectors . . . but we need a better basis.
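Why a better basis is needed: the raw vectors v, Av, A²v, . . . all turn toward the dominant eigendirections, so the matrix of Krylov vectors becomes numerically ill-conditioned very quickly. A tiny demonstration (hypothetical diagonal A, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
A = np.diag(np.linspace(1.0, 2.0, n))  # modest spectrum, nothing pathological
v = rng.standard_normal(n)

# Power basis v, Av, ..., A^{14} v as columns of one matrix.
Kbasis = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(15)])
cond = np.linalg.cond(Kbasis)  # grows rapidly with the number of columns
```

Even for this benign matrix the condition number is enormous after a handful of steps, which is why the Arnoldi process orthonormalizes as it goes.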

SLIDES 117–124

Arnoldi Process

Build an orthonormal basis v₁, v₂, v₃, . . .

jth step (Gram-Schmidt):

    v̂_{j+1} = Av_j − Σ_{i=1}^{j} v_i h_{ij},   h_{ij} = ⟨Av_j, v_i⟩

Normalization:

    h_{j+1,j} = ‖v̂_{j+1}‖,   v_{j+1} = v̂_{j+1}/h_{j+1,j}

Collect the coefficients h_{ij} into an upper Hessenberg matrix, e.g.

    H₄ = [ h₁₁  h₁₂  h₁₃  h₁₄ ]
         [ h₂₁  h₂₂  h₂₃  h₂₄ ]
         [      h₃₂  h₃₃  h₃₄ ]
         [           h₄₃  h₄₄ ]

Its eigenvalues are Ritz values.
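A minimal Arnoldi sketch matching the recurrence above (assuming NumPy; the test matrix is a hypothetical stand-in):

```python
import numpy as np

def arnoldi(A, v, k):
    """k steps of Arnoldi: orthonormal basis V and Hessenberg coefficients H."""
    n = v.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]         # h_ij = <A v_j, v_i>
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)   # normalization coefficient
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(4)
n, k = 300, 40
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
V, H = arnoldi(A, rng.standard_normal(n), k)
ritz = np.linalg.eigvals(H[:k, :k])  # Ritz values
```

After 40 steps the extreme Ritz values already approximate the extreme eigenvalues of A well, while interior ones converge more slowly.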

SLIDES 125–128

Example: 479 × 479 matrix

[sequence of plots: Ritz values in the complex plane, axes roughly ±150 by ±2000]

SLIDES 129–134

For better accuracy . . . take more steps.

But the vectors take up space.

Alternate plan: start over with a better vector.

SLIDES 135–142

Implicit Restarts

Take, say, 30 steps. Get 30 Ritz values.
Keep the best ones (e.g. 10) . . . and the associated invariant subspace.
Restart at step 11. (neat details omitted)
Build back up to 30 and then restart again.
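This restart scheme is implicitly restarted Arnoldi as implemented in ARPACK; in SciPy's wrapper, `ncv` is the subspace dimension built before each restart and `k` the number of pairs kept, e.g. build to 30, keep 10. A usage sketch on a hypothetical sparse matrix (SciPy's routine is the unstructured version, not the Hamiltonian variant used in the example below):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in: large sparse matrix with known-ish dominant eigenvalues.
n = 500
A = (sp.diags(np.arange(1.0, n + 1))
     + 0.01 * sp.random(n, n, density=1e-3, random_state=5)).tocsr()

# 10 largest-magnitude eigenpairs; Arnoldi builds to dimension 30,
# then implicitly restarts keeping the 10 best Ritz pairs.
vals, vecs = eigs(A, k=10, ncv=30, which="LM")
```

The memory cost is fixed at `ncv` basis vectors no matter how many restarts are needed.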

SLIDES 143–148

Example

460 × 460 Hamiltonian matrix (toy problem)
asking for 24 eigenpairs
building to dimension 84
restarting with 28
Hamiltonian Lanczos process

SLIDES 149–157

[sequence of plots over successive restarts: Ritz values in the complex plane, axes ±3 by ±3]

SLIDES 158–161

Concluding Remarks

I used these methods to solve my “medium” sized eigenvalue problems.
Simple ideas lead to powerful methods.
Thank you for your attention.