A Multigrid Tutorial part two: William L. Briggs, Van Emden Henson, Steve McCormick (PowerPoint PPT Presentation)



slide-1
SLIDE 1

A Multigrid Tutorial part two

William L. Briggs

Department of Mathematics University of Colorado at Denver

Van Emden Henson

Center for Applied Scientific Computing Lawrence Livermore National Laboratory

Steve McCormick

Department of Applied Mathematics University of Colorado at Boulder

This work was performed, in part, under the auspices of the United States Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48. UCRL-VG-136819 Pt 2

slide-2
SLIDE 2

2 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid
slide-3
SLIDE 3

3 of 104

Nonlinear Problems

  • How should we approach the nonlinear system A(u) = f, and can we use multigrid to solve such a system?
  • A fundamental relation we've relied on, the residual equation Ae = f − Av = r, does not hold, since, if A(u) is a nonlinear operator,

A(u) − A(v) ≠ A(e)

slide-4
SLIDE 4

4 of 104

The Nonlinear Residual Equation

  • We still base our development around the residual equation, now the nonlinear residual equation: with A(u) = f,

A(u) − A(v) = f − A(v) = r

  • How can we use this equation as the basis for a solution method?

slide-5
SLIDE 5

5 of 104

Let’s consider Newton’s Method

  • The best known and most important method for solving nonlinear equations!
  • We wish to solve F(x) = 0.
  • Expand F in a Taylor series about x:

F(x + s) = F(x) + s F′(x) + (s²/2) F″(ξ)

  • Dropping the higher-order term, if x + s is a solution, then 0 = F(x) + s F′(x), so s = −F(x)/F′(x).
  • Hence, we develop the iteration

x ← x − F(x)/F′(x)
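As a concrete sketch, the scalar iteration can be written in a few lines of Python; the function names and the test equation x² − 2 = 0 are illustrative, not from the slides:

```python
def newton_scalar(F, Fprime, x, tol=1e-12, max_iter=50):
    """Newton's method for a scalar equation F(x) = 0:
    repeatedly apply x <- x - F(x)/F'(x)."""
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / Fprime(x)
    return x

# Solve x^2 - 2 = 0 starting from x = 1
root = newton_scalar(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The quadratic convergence of the iteration makes the tolerance easy to reach in a handful of steps.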

slide-6
SLIDE 6

6 of 104

Newton’s method for systems

  • We wish to solve the system A(u) = 0. In vector form this is

A(u) = ( f_1(u_1, u_2, …, u_N), f_2(u_1, u_2, …, u_N), …, f_N(u_1, u_2, …, u_N) )ᵀ = 0

  • Expanding A(v + e) in a Taylor series about v:

A(v + e) = A(v) + J(v) e + higher-order terms
slide-7
SLIDE 7

7 of 104

Newton for systems (cont.)

  • Where J(v) is the Jacobian

J(v) = [ ∂f_1/∂u_1  ∂f_1/∂u_2  …  ∂f_1/∂u_N
         ∂f_2/∂u_1  ∂f_2/∂u_2  …  ∂f_2/∂u_N
             ⋮           ⋮              ⋮
         ∂f_N/∂u_1  ∂f_N/∂u_2  …  ∂f_N/∂u_N ]

  • If u = v + e is a solution, 0 = A(v) + J(v) e, and

e = −J(v)⁻¹ A(v)

  • Leading to the iteration

v ← v − J(v)⁻¹ A(v)
slide-8
SLIDE 8

8 of 104

Newton’s method in terms of the residual equation

  • The nonlinear residual equation is

A(v + e) − A(v) = r,  where r = f − A(v)

  • Expanding A(v + e) in a two-term Taylor series about v:

A(v) + J(v) e − A(v) = r,  that is,  J(v) e = r

  • Newton's method is thus:

v ← v + J(v)⁻¹ r

slide-9
SLIDE 9

9 of 104

How does multigrid fit in?

  • One obvious method is to use multigrid to solve

J(v)e = r at each iteration step. This method is called Newton-multigrid and can be very effective.

  • However, we would like to use multigrid ideas to treat the nonlinearity directly.

  • Hence, we need to specialize the multigrid

components (relaxation, grid transfers, coarsening) for the nonlinear case.

slide-10
SLIDE 10

10 of 104

What is nonlinear relaxation?

  • Several of the common relaxation schemes have

nonlinear counterparts. For A(u)=f, we describe the nonlinear Gauss-Seidel iteration:

– For each j=1, 2, …, N

  • Set the jth component of the residual to zero and solve for

vj . That is, solve (A(v))j = fj .

  • Equivalently,

– For each j=1, 2, …, N

  • Find s ∈ ℝ such that

( A(v + s ε_j) )_j = f_j

where ε_j is the canonical jth unit basis vector.

slide-11
SLIDE 11

11 of 104

How is nonlinear Gauss-Seidel done?

  • Each ( A(v) )_j = f_j is a nonlinear scalar equation for v_j. We use the scalar Newton's method to solve!
  • Example: −u″(x) + u(x) e^{u(x)} = f(x) may be discretized so that ( A(v) )_j = f_j is given by

( −v_{j−1} + 2v_j − v_{j+1} ) / h² + v_j e^{v_j} = f_j,  1 ≤ j ≤ N − 1

  • Newton iteration for v_j is given by

v_j ← v_j − ( ( −v_{j−1} + 2v_j − v_{j+1} ) / h² + v_j e^{v_j} − f_j ) / ( 2/h² + (1 + v_j) e^{v_j} )
slide-12
SLIDE 12

12 of 104

How do we do coarsening for nonlinear multigrid?

  • Recall the nonlinear residual equation

A(v + e) − A(v) = r

  • In multigrid, we obtain an approximate solution v^h on the fine grid, then solve the residual equation on the coarse grid.
  • The residual equation on Ω^{2h} appears as

A^{2h}( v^{2h} + e^{2h} ) − A^{2h}( v^{2h} ) = r^{2h}

slide-13
SLIDE 13

13 of 104

Look at the coarse residual equation

  • We must evaluate the quantities on Ω^{2h} in

A^{2h}( v^{2h} + e^{2h} ) − A^{2h}( v^{2h} ) = r^{2h}

  • Given v^h, a fine-grid approximation, we restrict the residual to the coarse grid:

r^{2h} = I_h^{2h} ( f^h − A^h(v^h) )

  • For v^{2h} we restrict v^h by v^{2h} = I_h^{2h} v^h.
  • Thus,

A^{2h}( I_h^{2h} v^h + e^{2h} ) = A^{2h}( I_h^{2h} v^h ) + I_h^{2h} ( f^h − A^h(v^h) )

slide-14
SLIDE 14

14 of 104

We've obtained a coarse-grid equation of the form A^{2h}(u^{2h}) = f^{2h}

  • Consider the coarse-grid residual equation:

A^{2h}( I_h^{2h} v^h + e^{2h} ) = A^{2h}( I_h^{2h} v^h ) + I_h^{2h} ( f^h − A^h(v^h) )

Writing u^{2h} = v^{2h} + e^{2h} = I_h^{2h} v^h + e^{2h} for the coarse-grid unknown, the right-hand side f^{2h} = A^{2h}( I_h^{2h} v^h ) + I_h^{2h}( f^h − A^h(v^h) ) contains only known quantities.

  • We solve A^{2h}( u^{2h} ) = f^{2h} for u^{2h} and obtain

e^{2h} = u^{2h} − I_h^{2h} v^h

  • We then apply the correction:

v^h ← v^h + I_{2h}^h e^{2h}

slide-15
SLIDE 15

15 of 104

FAS, the Full Approximation Scheme, two grid form

  • Perform nonlinear relaxation on A^h(u^h) = f^h to obtain an approximation v^h.
  • Restrict the approximation and its residual:

v^{2h} = I_h^{2h} v^h,   r^{2h} = I_h^{2h} ( f^h − A^h(v^h) )

  • Solve the coarse-grid residual problem

A^{2h}( u^{2h} ) = A^{2h}( v^{2h} ) + r^{2h}

  • Extract the coarse-grid error

e^{2h} = u^{2h} − v^{2h}

  • Interpolate and apply the correction

v^h ← v^h + I_{2h}^h e^{2h}
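These steps can be sketched as a two-grid FAS cycle for the 1-d example −u″ + u eᵘ = f. The helper names, the Gauss-Seidel "coarse solve", and the transfer operators (full weighting, linear interpolation) are illustrative assumptions, not prescribed by the slides:

```python
import math

def A_op(v, h):
    """(A(v))_j = (-v_{j-1} + 2 v_j - v_{j+1})/h^2 + v_j e^{v_j} at interior points."""
    N = len(v) - 1
    out = [0.0] * (N + 1)
    for j in range(1, N):
        out[j] = (-v[j-1] + 2.0 * v[j] - v[j+1]) / h**2 + v[j] * math.exp(v[j])
    return out

def relax(v, f, h, sweeps):
    """Nonlinear Gauss-Seidel: one scalar Newton step per point per sweep."""
    N = len(v) - 1
    for _ in range(sweeps):
        for j in range(1, N):
            r = (-v[j-1] + 2.0 * v[j] - v[j+1]) / h**2 + v[j] * math.exp(v[j]) - f[j]
            v[j] -= r / (2.0 / h**2 + (1.0 + v[j]) * math.exp(v[j]))
    return v

def restrict(u):
    """Full weighting to the coarse grid (endpoint values copied)."""
    N = len(u) - 1
    return [u[0]] + [0.25 * u[2*j-1] + 0.5 * u[2*j] + 0.25 * u[2*j+1]
                     for j in range(1, N // 2)] + [u[N]]

def interpolate(u2, N):
    """Linear interpolation of a coarse vector to a fine grid with N intervals."""
    u = [0.0] * (N + 1)
    for j in range(len(u2)):
        u[2*j] = u2[j]
    for j in range(1, N, 2):
        u[j] = 0.5 * (u[j-1] + u[j+1])
    return u

def fas_two_grid(v, f, h, nu1=2, nu2=1, coarse_sweeps=100):
    """One FAS cycle: relax, restrict v and r, solve A2h(u2h) = A2h(v2h) + r2h,
    interpolate the coarse-grid error e2h = u2h - v2h, correct, relax again."""
    N = len(v) - 1
    relax(v, f, h, nu1)
    Av = A_op(v, h)
    r2 = restrict([f[j] - Av[j] for j in range(N + 1)])
    v2 = restrict(v)
    f2 = [a + r for a, r in zip(A_op(v2, 2.0 * h), r2)]
    u2 = relax(list(v2), f2, 2.0 * h, coarse_sweeps)   # approximate coarse solve
    e = interpolate([u2[j] - v2[j] for j in range(len(v2))], N)
    for j in range(N + 1):
        v[j] += e[j]
    return relax(v, f, h, nu2)
```

A fixed point of this cycle satisfies the fine-grid equations exactly, matching the observation on the next slide.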

slide-16
SLIDE 16

16 of 104

A few observations about FAS

  • If A is a linear operator then FAS reduces directly to

the linear two-grid correction scheme.

  • A fixed point of FAS is an exact solution to the fine-

grid problem and an exact solution to the fine-grid problem is a fixed point of the FAS iteration.

slide-17
SLIDE 17

17 of 104

A few observations about FAS, continued

  • The FAS coarse-grid equation can be written as

A^{2h}( u^{2h} ) = f^{2h} + τ_h^{2h}

where τ_h^{2h} is the so-called tau correction.

  • In general, since τ_h^{2h} ≠ 0, the solution u^{2h} to the FAS coarse-grid equation is not the same as the solution to the original coarse-grid problem A^{2h}( u^{2h} ) = f^{2h}.
  • The tau correction may be viewed as a way to alter the coarse-grid equations to enhance their approximation properties.

slide-18
SLIDE 18

18 of 104

Still more observations about FAS

  • FAS may be viewed as an inner and outer iteration:

the outer iteration is the coarse-grid correction, the inner iteration the relaxation method.

  • A true multilevel FAS process is recursive, using FAS to solve the nonlinear problem on Ω^{2h} using Ω^{4h}. Hence, FAS is generally employed in a V- or W-cycling scheme.

slide-19
SLIDE 19

19 of 104

And yet more observations about FAS!

  • For linear problems we use FMG to obtain a good

initial guess on the fine grid. Convergence of nonlinear iterations depends critically on having a good initial guess.

  • When FMG is used for nonlinear problems the interpolant I_{2h}^h u^{2h} is generally accurate enough to be in the basin of attraction of the fine-grid solver.
  • Thus, one FMG cycle, whether FAS, Newton, or Newton-multigrid is used on each level, should provide a solution accurate to the level of discretization, unless the nonlinearity is extremely strong.

slide-20
SLIDE 20

20 of 104

Intergrid transfers for FAS

  • Generally speaking, the standard operators (linear

interpolation, full weighting) work effectively in FAS schemes.

  • In the case of strongly nonlinear problems, the

use of higher-order interpolation (e.g., cubic interpolation) may be beneficial.

  • For an FMG scheme, where I_{2h}^h u^{2h} is the interpolation of a coarse-grid solution to become a fine-grid initial guess, higher-order interpolation is always advised.

slide-21
SLIDE 21

21 of 104

What is A^{2h} in FAS?

  • As in the linear case, there are two basic possibilities:
  • A^{2h}( u^{2h} ) is determined by discretizing the nonlinear operator A(u) in the same fashion as was employed to obtain A^h( u^h ), except that the coarser mesh spacing is used.
  • A^{2h}( u^{2h} ) is determined from the Galerkin condition

A^{2h}( u^{2h} ) = I_h^{2h} A^h( I_{2h}^h u^{2h} )

where the action of the Galerkin product can be captured in an implementable formula.

  • The first method is usually easier, and more common.

slide-22
SLIDE 22

22 of 104

Nonlinear problems: an example

  • Consider

−Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y)

on the unit square [0,1] × [0,1] with homogeneous Dirichlet boundary conditions and a regular Cartesian grid.

  • Suppose the exact solution is

u(x, y) = ( x² − x³ ) sin(3πy)

slide-23
SLIDE 23

23 of 104

Discretization of nonlinear example

  • The operator can be written (sloppily) as

A^h(u^h) = (1/h²) [      −1
                     −1   4  −1
                         −1     ] u_{i,j}^h + γ u_{i,j}^h e^{u_{i,j}^h} = f_{i,j}

  • The relaxation is given by

u_{i,j} ← u_{i,j} − ( ( A^h(u^h) )_{i,j} − f_{i,j} ) / ( 4/h² + γ ( 1 + u_{i,j} ) e^{u_{i,j}} )
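A sketch of this relaxation sweep; the array layout and names are illustrative:

```python
import math

def relax_2d(u, f, h, gamma):
    """One Newton-Gauss-Seidel sweep for -Delta u + gamma u e^u = f on a
    uniform (n+1) x (n+1) grid with homogeneous Dirichlet boundaries."""
    n = len(u) - 1
    for i in range(1, n):
        for j in range(1, n):
            Au = (4.0 * u[i][j] - u[i-1][j] - u[i+1][j]
                  - u[i][j-1] - u[i][j+1]) / h**2 \
                 + gamma * u[i][j] * math.exp(u[i][j])
            dAu = 4.0 / h**2 + gamma * (1.0 + u[i][j]) * math.exp(u[i][j])
            u[i][j] -= (Au - f[i][j]) / dAu
    return u
```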

slide-24
SLIDE 24

24 of 104

FAS and Newton's method on −Δu(x,y) + γ u(x,y) e^{u(x,y)} = f(x,y)

  • FAS

    γ                     1         10        100       1000
    convergence factor    0.135     0.124     0.098     0.072
    number of FAS cycles  12        11        11        10

  • Newton's Method

    γ                            1         10        100       1000
    convergence factor           4.00E-05  7.00E-05  3.00E-04  2.00E-04
    number of Newton iterations  3         3         3         4

slide-25
SLIDE 25

25 of 104

Newton, Newton-MG, and FAS on

−Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y)

  • Newton uses an exact solve; Newton-MG is inexact Newton with a fixed number of inner V(2,1)-cycles on the Jacobian problem.
  • Overall stopping criterion ||r|| < 10⁻¹²

    Method     Outer iterations  Inner iterations  Megaflops
    Newton     3                                   1660.6
    Newton-MG  3                 20                56.4
    Newton-MG  4                 10                38.5
    Newton-MG  5                 5                 25.1
    Newton-MG  10                2                 22.3
    Newton-MG  19                1                 24.6
    FAS        11                                  27.1

slide-26
SLIDE 26

26 of 104

Comparing FMG-FAS and FMG-Newton

−Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y)

  • We will do one FMG cycle using a single FAS V(2,1)-cycle as the "solver" at each new level. We then follow that with as many FAS V(2,1)-cycles as necessary to obtain ||r|| < 10⁻¹⁰.
  • Next, we will do one FMG cycle using a Newton-multigrid step at each new level (with a single linear V(2,1)-cycle as the Jacobian "solver"). We then follow that with as many Newton-multigrid steps as necessary to obtain ||r|| < 10⁻¹⁰.

slide-27
SLIDE 27

27 of 104

Comparing FMG-FAS and FMG-Newton

−Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y)

    FMG-FAS cycle  ||r^h||   ||e^h||   Mflops      FMG-Newton cycle  ||r^h||   ||e^h||   Mflops
    FMG-FAS        1.10E-02  2.00E-05  3.1         FMG-Newton        1.06E-02  2.50E-05  2.4
    FAS V          6.80E-04  2.40E-05  5.4         Newton-MG         6.70E-04  2.49E-05  4.1
    FAS V          5.00E-05  2.49E-05  7.6         Newton-MG         5.10E-05  2.49E-05  5.8
    FAS V          3.90E-06  2.49E-05  9.9         Newton-MG         6.30E-06  2.49E-05  7.5
    FAS V          3.20E-07  2.49E-05  12.2        Newton-MG         1.70E-06  2.49E-05  9.2
    FAS V          3.00E-08  2.49E-05  14.4        Newton-MG         5.30E-07  2.49E-05  10.9
    FAS V          2.90E-09  2.49E-05  16.7        Newton-MG         1.70E-07  2.49E-05  12.6
    FAS V          3.00E-10  2.49E-05  18.9        Newton-MG         5.40E-08  2.49E-05  14.3
    FAS V          3.20E-11  2.49E-05  21.2        Newton-MG         1.70E-08  2.49E-05  16.0
                                                   Newton-MG         5.50E-09  2.49E-05  17.7
                                                   Newton-MG         1.80E-09  2.49E-05  19.4
                                                   Newton-MG         5.60E-10  2.49E-05  21.1
                                                   Newton-MG         1.80E-10  2.49E-05  22.8
                                                   Newton-MG         5.70E-11  2.49E-05  24.5

slide-28
SLIDE 28

28 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

slide-29
SLIDE 29

29 of 104

Neumann Boundary Conditions

  • Consider the (1-d) problem

−u″(x) = f(x),  0 < x < 1,  u′(0) = u′(1) = 0

  • We discretize this on the interval [0,1] with grid spacing h = 1/(N+1), x_j = jh for j = 0, 1, 2, …, N+1.
  • We extend the interval with two ghost points x_{−1} and x_{N+2}:

x_{−1}  x_0  x_1  x_2  x_3  …  x_j  …  x_{N−1}  x_N  x_{N+1}  x_{N+2}
        x=0                                          x=1

slide-30
SLIDE 30

30 of 104

We use central differences

  • We approximate the derivatives with differences, using the ghost points:

u″(x_j) ≈ ( u_{j−1} − 2u_j + u_{j+1} ) / h²

u′(0) ≈ ( u_1 − u_{−1} ) / 2h,   u′(1) ≈ ( u_{N+2} − u_N ) / 2h

  • Giving the system

( −u_{j−1} + 2u_j − u_{j+1} ) / h² = f_j,  0 ≤ j ≤ N+1

( u_1 − u_{−1} ) / 2h = 0,   ( u_{N+2} − u_N ) / 2h = 0

slide-31
SLIDE 31

31 of 104

Eliminating the ghost points

  • Use the boundary conditions to eliminate u_{−1} and u_{N+2}:

( u_1 − u_{−1} ) / 2h = 0  ⇒  u_{−1} = u_1

( u_{N+2} − u_N ) / 2h = 0  ⇒  u_{N+2} = u_N

  • Eliminating the ghost points in the j = 0 and j = N+1 equations gives the (N+2) × (N+2) system of equations:

( −u_{j−1} + 2u_j − u_{j+1} ) / h² = f_j,  1 ≤ j ≤ N

( 2u_0 − 2u_1 ) / h² = f_0,   ( −2u_N + 2u_{N+1} ) / h² = f_{N+1}

slide-32
SLIDE 32

32 of 104

We write the system in matrix form

  • We can write A^h u^h = f^h, where

A^h = (1/h²) [  2  −2
               −1   2  −1
                   −1   2  −1
                       ⋅⋅⋅
                       −1   2  −1
                           −2   2 ]

  • Note that A^h is (N+2) × (N+2), nonsymmetric, and the system involves the unknowns u_0^h and u_{N+1}^h at the boundaries.

slide-33
SLIDE 33

33 of 104

We must consider a compatibility condition

  • The problem −u″(x) = f(x), 0 < x < 1, u′(0) = u′(1) = 0 is not well-posed!
  • If u(x) is a solution, so is u(x) + c for any constant c.
  • We cannot be certain a solution exists. If one does, it must satisfy

∫₀¹ −u″(x) dx = ∫₀¹ f(x) dx  ⇒  −[ u′(1) − u′(0) ] = ∫₀¹ f(x) dx  ⇒  0 = ∫₀¹ f(x) dx

  • This integral compatibility condition is necessary! If f(x) doesn't satisfy it, there is no solution!

slide-34
SLIDE 34

34 of 104

The well-posed system

  • The compatibility condition is necessary for a solution to exist. In general, it is also sufficient: it can be proven that −∂²/∂x² is a well-behaved operator in the space of functions u(x) that have zero mean.
  • Thus we may conclude that if f(x) satisfies the compatibility condition, this problem is well-posed:

−u″(x) = f(x),  0 < x < 1,  u′(0) = u′(1) = 0,  ∫₀¹ u(x) dx = 0

  • The last condition says that of all solutions u(x) + c we choose the one with zero mean.

slide-35
SLIDE 35

35 of 104

The discrete problem is not well posed

  • Since all row sums of A^h are zero, 1^h ∈ NS(A^h).
  • Putting A^h into row-echelon form shows that dim( NS(A^h) ) = 1, hence NS(A^h) = span{ 1^h }.
  • By the Fundamental Theorem of Linear Algebra, A^h u^h = f^h has a solution if and only if f^h ⊥ NS( (A^h)ᵀ ).
  • It is easy to show that NS( (A^h)ᵀ ) = span{ ( 1/2, 1, 1, …, 1, 1, 1/2 )ᵀ }.
  • Thus, A^h u^h = f^h has a solution if and only if f^h ⊥ ( 1/2, 1, 1, …, 1, 1, 1/2 )ᵀ. That is,

(1/2) f_0^h + Σ_{j=1}^{N} f_j^h + (1/2) f_{N+1}^h = 0

slide-36
SLIDE 36

36 of 104

We have two issues to consider

  • Solvability. A solution exists iff f^h ⊥ NS( (A^h)ᵀ ).
  • Uniqueness. If u^h solves A^h u^h = f^h, so does u^h + c 1^h.
  • Note that if A^h = (A^h)ᵀ then NS(A^h) = NS( (A^h)ᵀ ), and solvability and uniqueness can be handled together.
  • This is easily done. Multiply the first and last equations by 1/2, giving

A^h = (1/h²) [  1  −1
               −1   2  −1
                   −1   2  −1
                       ⋅⋅⋅
                       −1   2  −1
                           −1   1 ]

slide-37
SLIDE 37

37 of 104

The new system is symmetric

  • We have the symmetric system A^h u^h = f^h:

(1/h²) [  1  −1
         −1   2  −1
             −1   2  −1
                 ⋅⋅⋅
                 −1   2  −1
                     −1   1 ]  ( u_0^h, u_1^h, u_2^h, …, u_N^h, u_{N+1}^h )ᵀ = ( f_0^h/2, f_1^h, f_2^h, …, f_N^h, f_{N+1}^h/2 )ᵀ

  • Solvability is guaranteed by ensuring that f^h is orthogonal to the constant vector 1^h:

⟨ f^h, 1^h ⟩ = (1/2) f_0^h + Σ_{j=1}^{N} f_j^h + (1/2) f_{N+1}^h = 0

slide-38
SLIDE 38

38 of 104

The well-posed discrete system

  • The (N+3) × (N+2) system is:

( −u_{j−1} + 2u_j − u_{j+1} ) / h² = f_j,  1 ≤ j ≤ N

( u_0 − u_1 ) / h² = f_0 / 2,   ( −u_N + u_{N+1} ) / h² = f_{N+1} / 2

Σ_{i=0}^{N+1} u_i^h = 0   (choose the zero-mean solution)

  • Or, more simply, A^h u^h = f^h with ⟨ u^h, 1^h ⟩ = 0.

slide-39
SLIDE 39

39 of 104

Multigrid for the Neumann Problem

  • We must have the interval endpoints on all grids:

Ω^h:    x_0^h   x_1^h   …   x_{N/2}^h   …   x_N^h
Ω^{2h}: x_0^{2h}        …   x_{N/2}^{2h}

  • Relaxation is performed at all points, including the endpoints:

v_0^h ← v_1^h + (h²/2) f_0^h

v_j^h ← ( v_{j−1}^h + v_{j+1}^h + h² f_j^h ) / 2,  1 ≤ j ≤ N

v_{N+1}^h ← v_N^h + (h²/2) f_{N+1}^h

  • We add a global Gram-Schmidt step after relaxation on each level to enforce the zero-mean condition:

v^h ← v^h − ( ⟨ v^h, 1^h ⟩ / ⟨ 1^h, 1^h ⟩ ) 1^h
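A sketch of one such relaxation sweep with the Gram-Schmidt step; names and the test problem are illustrative:

```python
def relax_neumann(v, f, h):
    """One Gauss-Seidel sweep for the 1-d Neumann problem, relaxing the
    endpoint equations too, followed by the global Gram-Schmidt
    (zero-mean) projection v <- v - (<v,1>/<1,1>) 1."""
    N = len(v) - 2                     # v holds v_0 .. v_{N+1}
    v[0] = v[1] + 0.5 * h * h * f[0]
    for j in range(1, N + 1):
        v[j] = 0.5 * (v[j-1] + v[j+1] + h * h * f[j])
    v[N+1] = v[N] + 0.5 * h * h * f[N+1]
    mean = sum(v) / len(v)             # <v,1>/<1,1>
    return [vj - mean for vj in v]
```

The projection costs only one pass over the vector, yet it keeps the iterates in the zero-mean subspace where the problem is well-posed.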

slide-40
SLIDE 40

40 of 104

Interpolation must include the endpoints

  • We use linear interpolation:

I_{2h}^h = [  1
             1/2  1/2
                   1
                  1/2  1/2
                         1
                        ⋅⋅⋅
                              1 ]

slide-41
SLIDE 41

41 of 104

Restriction also treats the endpoints

  • For restriction, we use I_h^{2h} = (1/2) ( I_{2h}^h )ᵀ, yielding the values

f_0^{2h} = (1/2) f_0^h + (1/4) f_1^h

f_j^{2h} = (1/4) f_{2j−1}^h + (1/2) f_{2j}^h + (1/4) f_{2j+1}^h

f_{(N+1)/2}^{2h} = (1/4) f_N^h + (1/2) f_{N+1}^h

slide-42
SLIDE 42

42 of 104

The coarse-grid operator

  • We compute the coarse-grid operator using the Galerkin condition A^{2h} = I_h^{2h} A^h I_{2h}^h, whose entries can be found by applying it to the coarse-grid unit basis vectors ε_j^{2h}:

A^{2h} = (1/(2h)²) [  1  −1
                     −1   2  −1
                         −1   2  −1
                             ⋅⋅⋅
                             −1   2  −1
                                 −1   1 ]

slide-43
SLIDE 43

43 of 104

Coarse-grid solvability

  • Assuming f^h satisfies ⟨ f^h, 1^h ⟩ = 0, the solvability condition, we can show that theoretically the coarse-grid problem A^{2h} u^{2h} = I_h^{2h} ( f^h − A^h v^h ) is also solvable.
  • To be certain numerical round-off does not perturb solvability, we incorporate a Gram-Schmidt-like step each time a new right-hand side f^{2h} is generated for the coarse grid:

f^{2h} ← f^{2h} − ( ⟨ f^{2h}, 1^{2h} ⟩ / ⟨ 1^{2h}, 1^{2h} ⟩ ) 1^{2h}

slide-44
SLIDE 44

44 of 104

Neumann Problem: an example

  • Consider the problem

−u″(x) = 2x − 1,  0 < x < 1,  u′(0) = u′(1) = 0

which has

u(x) = x²/2 − x³/3 + c

as a solution for any c (c = −1/12 gives the zero-mean solution).

    grid size N  ||r^h||   average conv. factor  ||e^h||   number of cycles
    31           6.30E-11  0.079                 9.70E-05  9
    63           1.90E-11  0.089                 2.40E-05  10
    127          2.60E-11  0.093                 5.90E-06  10
    255          3.70E-11  0.096                 1.50E-06  10
    511          5.70E-11  0.100                 3.70E-07  10
    1023         8.60E-11  0.104                 9.20E-08  10
    2047         2.10E-11  0.112                 2.30E-08  10
    4095         5.20E-11  0.122                 5.70E-09  10

slide-45
SLIDE 45

45 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

✔ ✔

slide-46
SLIDE 46

46 of 104

Anisotropic Problems

  • All problems considered thus far have had −1/h² as the off-diagonal entries.
  • We consider two situations in which the matrix has two different constants on the off-diagonals. These situations arise when

– the (2-d) differential equation has constant, but different, coefficients for the derivatives in the coordinate directions
– the discretization has constant, but different, mesh spacing in the different coordinate directions
slide-47
SLIDE 47

47 of 104

We consider two types of anisotropy

  • Different coefficients on the derivatives

discretized on a uniform grid with spacing h .

  • Constant, but different, mesh spacings:

= α − − f u u

y y x x

N h hx = = 1

h h

x y

α =

hy

hx

slide-48
SLIDE 48

48 of 104

Both problems lead to the same stencil

( −u_{j−1,k} + 2u_{j,k} − u_{j+1,k} ) / h² + α ( −u_{j,k−1} + 2u_{j,k} − u_{j,k+1} ) / h²

A^h = (1/h²) [       −α
                −1  2+2α  −1
                     −α      ]

slide-49
SLIDE 49

49 of 104

Why standard multigrid fails

  • Note that A^h has weak connections in the y-direction. MG convergence factors degrade as α gets small, with poor performance already at α = 0.1.
  • Consider α → 0:

A^h → (1/h²) [ −1  2  −1 ]

  • This is a collection of disconnected 1-d problems!
  • Point relaxation will smooth oscillatory errors in the x-direction (strong connections), but with no connections in the y-direction, the errors in that direction will generally be random, and no point relaxation will have the smoothing property in the y-direction.

slide-50
SLIDE 50

50 of 104

We analyze weighted Jacobi

  • The eigenvalues of the weighted Jacobi iteration matrix for this problem are

λ_{i,l} = 1 − ( 2ω / (1 + α) ) ( sin²( iπ/2N ) + α sin²( lπ/2N ) )

[Plots: λ_{i,l} as a function of the mode numbers i and l]
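A quick numerical check of this eigenvalue formula; the choices of ω, N, and mode numbers are illustrative. A mode that is smooth in x but oscillatory in y is barely damped when α is small:

```python
import math

def jacobi_eig(i, l, N, alpha, omega):
    """Eigenvalue of the weighted point Jacobi iteration matrix for
    -u_xx - alpha u_yy, per the formula above."""
    s_i = math.sin(i * math.pi / (2.0 * N)) ** 2
    s_l = math.sin(l * math.pi / (2.0 * N)) ** 2
    return 1.0 - 2.0 * omega / (1.0 + alpha) * (s_i + alpha * s_l)

# smooth in x (i = 1), oscillatory in y (l = 15), alpha small:
lam = jacobi_eig(1, 15, 16, 0.001, 2.0 / 3.0)
```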

slide-51
SLIDE 51

51 of 104

Two strategies for anisotropy

  • Semicoarsening: Because we expect MG-like convergence for the 1-d problems along lines of constant y, we should coarsen the grid in the x-direction, but not in the y-direction.
  • Line relaxation: Because the equations are strongly coupled in the x-direction, it may be advantageous to solve simultaneously for entire lines of unknowns in the x-direction (along lines of constant y).

slide-52
SLIDE 52

52 of 104

Semicoarsening with point relaxation

  • Point relaxation on

A^h = (1/h²) [       −α
                −1  2+2α  −1
                     −α      ]

smooths in the x-direction. Coarsen Ω^h by removing every other y-line; we do not coarsen along the remaining y-lines.

  • Semicoarsening is not as "fast" as full coarsening. The number of points on Ω^{2h} is about half the number of points on Ω^h, instead of the usual one-fourth.

slide-53
SLIDE 53

53 of 104

Interpolation with semicoarsening

  • We interpolate in the 1-dimensional way along each line of constant y.
  • The coarse-grid correction equations are

v_{2j,k}^h ← v_{2j,k}^h + v_{j,k}^{2h}

v_{2j+1,k}^h ← v_{2j+1,k}^h + ( v_{j,k}^{2h} + v_{j+1,k}^{2h} ) / 2

slide-54
SLIDE 54

54 of 104

Line relaxation with full coarsening

  • The other approach to this problem is to do full coarsening, but to relax entire lines (constant y) of variables simultaneously.
  • Write A^h in block form as

A^h = [  D   −cI
        −cI   D   −cI
             −cI   D   −cI
                  ⋅⋅⋅
                  −cI   D  ]

where c = α/h² and

D = (1/h²) [ 2+2α   −1
              −1   2+2α  −1
                    ⋅⋅⋅
                    −1   2+2α ]

slide-55
SLIDE 55

55 of 104

Line relaxation

  • One sweep of line relaxation consists of solving a tridiagonal system for each line of constant y.
  • The kth such system has the form D v_k^h = g_k^h, where v_k^h is the kth subvector of v^h with entries ( v_k^h )_j = v_{j,k}^h, and the kth right-hand side subvector is

( g_k^h )_j = h² f_{j,k}^h + α ( v_{j,k−1}^h + v_{j,k+1}^h )

  • Because D is tridiagonal, the kth system can be solved very efficiently.
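A sketch of one line-relaxation sweep, using the Thomas algorithm for the tridiagonal solves; the names and the Gauss-Seidel ordering over the lines are illustrative assumptions:

```python
def solve_tridiag(a, b, c, g):
    """Thomas algorithm: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, gp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    gp[0] = g[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        gp[i] = (g[i] - a[i] * gp[i-1]) / m
    x = [0.0] * n
    x[-1] = gp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = gp[i] - cp[i] * x[i+1]
    return x

def line_relax(v, f, h, alpha):
    """One sweep of x-line relaxation for -u_xx - alpha u_yy = f: for each
    line of constant y (index k), solve the h^2-scaled tridiagonal system
    D v_k = g_k with g_{k,j} = h^2 f_{j,k} + alpha (v_{j,k-1} + v_{j,k+1}).
    Boundary values (all zero here) are held fixed."""
    n = len(v) - 1
    m = n - 1                              # interior unknowns per line
    for k in range(1, n):
        a = [-1.0] * m
        b = [2.0 + 2.0 * alpha] * m
        c = [-1.0] * m
        g = [h * h * f[j][k] + alpha * (v[j][k-1] + v[j][k+1])
             for j in range(1, n)]
        x = solve_tridiag(a, b, c, g)
        for j in range(1, n):
            v[j][k] = x[j-1]
    return v
```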

slide-56
SLIDE 56

56 of 104

Why line relaxation works

  • The eigenvalues of the weighted block Jacobi iteration matrix are

λ_{i,l} = 1 − 2ω ( sin²( iπ/2N ) + α sin²( lπ/2N ) ) / ( 2 sin²( iπ/2N ) + α )

[Plots: λ_{i,l} as a function of the mode numbers i and l]

slide-57
SLIDE 57

57 of 104

Semicoarsening with line relaxation

  • We might not know the direction of weak coupling, or it might vary.
  • Suppose we want a method that can handle either

A_1^h = (1/h²) [       −α
                  −1  2+2α  −1
                       −α      ]

or

A_2^h = (1/h²) [       −1
                  −α  2+2α  −α
                       −1      ]

  • We could use semicoarsening in the x-direction to handle A_1^h and line relaxation in the y-direction to take care of A_2^h.

slide-58
SLIDE 58

58 of 104

Semicoarsening with line relaxation

  • The original grid may be viewed as a stack of "pencils"; line relaxation is used to solve the problem along each pencil.
  • Coarsening is done by deleting every other pencil.
slide-59
SLIDE 59

59 of 104

An anisotropic example

  • Consider −u_xx − α u_yy = f with u = 0 on the boundaries of the unit square, and stencil given by

A^h = (1/h²) [       −α
                −1  2+2α  −1
                     −α      ]

  • Suppose f(x, y) = 2( y − y² ) + 2α( x − x² ), so the exact solution is u(x, y) = ( y − y² )( x − x² ).
  • Observe that if α is small, the x-direction dominates, while if α is large, the y-direction dominates.

slide-60
SLIDE 60

60 of 104

What is smooth error?

  • Consider α = 0.001 and suppose point Gauss-Seidel is applied to a random initial guess. The error after 50 sweeps appears as:

[Plots: the error along a line of constant x, and the error along a line of constant y]

slide-61
SLIDE 61

61 of 104

We experiment with 3 methods

  • Standard V(2,1)-cycling, with point Gauss-Seidel relaxation, full coarsening, and linear interpolation.
  • Semicoarsening in the x-direction. Coarse and fine grids have the same number of points in the y-direction. 1-d full weighting and linear interpolation are used in the x-direction; there is no y-coupling in the intergrid transfers.
  • Semicoarsening in the x-direction combined with line relaxation in the y-direction, with 1-d full weighting and interpolation.

slide-62
SLIDE 62

62 of 104

With semicoarsening, the operator must change

  • To account for unequal mesh spacing, the residual and relaxation operators must use a modified stencil:

A = [              −α/h_y²
      −1/h_x²  2/h_x² + 2α/h_y²  −1/h_x²
                   −α/h_y²                ]

  • Note that as grids become coarser, h_x grows while h_y remains constant.

slide-63
SLIDE 63

63 of 104

How do the 3 methods work for various values of α?

  • Asymptotic convergence factors:

    scheme               α = 1000  100   10    1     0.1   0.01  0.001  1E-04
    V(2,1)-cycles        0.95      0.94  0.58  0.13  0.58  0.90  0.95   0.95
    semicoarsening in x  0.94      0.99  0.98  0.93  0.71  0.28  0.07   0.07
    semiC / line relax   0.04      0.08  0.08  0.08  0.07  0.07  0.08   0.08

    (large α: y-direction strong; small α: x-direction strong)

  • Note: semicoarsening in x works well for α < 0.001 but degrades noticeably even at α = 0.1.

slide-64
SLIDE 64

64 of 104

A semicoarsening subtlety

  • Suppose α is small, so that semicoarsening in x is used. As we progress to coarser grids, h_x⁻² gets small but α h_y⁻² remains constant.
  • If, on some coarse grid, h_x⁻² becomes comparable to α h_y⁻², the problem effectively becomes recoupled in the y-direction. Continued semicoarsening can produce artificial anisotropy, strong in the y-direction.
  • When this occurs, it is best to stop semicoarsening and continue with full coarsening on any further coarse grids.

slide-65
SLIDE 65

65 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

✔ ✔ ✔

slide-66
SLIDE 66

66 of 104

Variable Mesh Problems

  • Non-uniform grids are commonly used to

accommodate irregularities in problem domains

  • Consider how we might approach the 1-d problem

−u″(x) = f(x),  0 < x < 1,  u(0) = u(1) = 0

posed on this grid:

x_0   …   x_{j−1}   x_j   x_{j+1}   …   x_N
x=0                                     x=1

slide-67
SLIDE 67

67 of 104

We need some notation for the mesh spacing

  • Let N be a positive integer. We define the spacing interval between x_j and x_{j+1}:

h_{j+1/2} ≡ x_{j+1} − x_j,  j = 0, 1, …, N−1

slide-68
SLIDE 68

68 of 104

We define the discrete differential operator

  • Using second order finite differences (and

plugging through a mess of algebra!) we obtain this discrete representation for the problem:

−α_j^h u_{j−1}^h + ( α_j^h + β_j^h ) u_j^h − β_j^h u_{j+1}^h = f_j^h,  1 ≤ j ≤ N−1,  u_0^h = u_N^h = 0

where

α_j^h = 2 / ( h_{j−1/2} ( h_{j−1/2} + h_{j+1/2} ) ),   β_j^h = 2 / ( h_{j+1/2} ( h_{j−1/2} + h_{j+1/2} ) )
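The discretization can be sketched as follows; the names are illustrative. Note that the 3-point formula with these α and β weights is exact for quadratic u, which gives a simple check:

```python
def variable_mesh_operator(x, u):
    """Apply the variable-mesh discretization of -u'' at interior points.

    x[0..N] are the grid points, u[0..N] the values (u[0] = u[N] = 0);
    returns (A u)_j for j = 1..N-1 using the alpha_j, beta_j weights above."""
    N = len(x) - 1
    Au = [0.0] * (N + 1)
    for j in range(1, N):
        hl = x[j] - x[j-1]          # h_{j-1/2}
        hr = x[j+1] - x[j]          # h_{j+1/2}
        alpha = 2.0 / (hl * (hl + hr))
        beta = 2.0 / (hr * (hl + hr))
        Au[j] = -alpha * u[j-1] + (alpha + beta) * u[j] - beta * u[j+1]
    return Au
```

For u(x) = x(1 − x) we have −u″ = 2, and the discrete operator reproduces that value exactly even on a nonuniform mesh.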

slide-69
SLIDE 69

69 of 104

We modify standard multigrid to accommodate variable spacing

  • We choose every second fine-grid point as a coarse-grid point: x_{2j}^h = x_j^{2h}.
  • We use linear interpolation, modified for the spacing. If v^h = I_{2h}^h v^{2h}, then for 1 ≤ j ≤ N/2 − 1:

v_{2j}^h = v_j^{2h}

v_{2j+1}^h = ( h_{2j+3/2} v_j^{2h} + h_{2j+1/2} v_{j+1}^{2h} ) / ( h_{2j+1/2} + h_{2j+3/2} )

slide-70
SLIDE 70

70 of 104

We use the variational properties to derive restriction and A^{2h}:

I_h^{2h} = (1/2) ( I_{2h}^h )ᵀ,   A^{2h} = I_h^{2h} A^h I_{2h}^h

  • This produces a stencil on Ω^{2h} that is similar, but not identical, to the fine-grid stencil. If the resulting system is scaled by ( h_{j−1/2} + h_{j+1/2} ), then the Galerkin product is the same as the fine-grid stencil.
  • For 2-d problems this approach can be generalized readily to logically rectangular grids. However, for irregular grids that are not logically rectangular, AMG is a better choice.

slide-71
SLIDE 71

71 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

✔ ✔ ✔ ✔

slide-72
SLIDE 72

72 of 104

Variable coefficient problems

  • A common difficulty is the variable coefficient problem, given in 1-d by

−( a(x) u′(x) )′ = f(x),  0 < x < 1,  u(0) = u(1) = 0

where a(x) is a positive function on [0,1].

  • We seek to develop a conservative, or self-adjoint, method for discretizing this problem.
  • Assume we have available to us the values of a(x) at the midpoints of the grid: a_{j+1/2} ≡ a( x_{j+1/2} ).

slide-73
SLIDE 73

73 of 104

We discretize using central differences

  • We can use second-order differences to approximate the derivatives. To use a grid spacing of h we evaluate a(x)u′(x) at points midway between the gridpoints:

( a(x) u′(x) )′ |_{x_j} ≈ ( ( a u′ )|_{x_{j+1/2}} − ( a u′ )|_{x_{j−1/2}} ) / h + O(h²)

[Figure: points used to evaluate (a u′)′ at x_j]

slide-74
SLIDE 74

74 of 104

We discretize using central differences

  • To evaluate ( a u′ )|_{x_{j+1/2}} we must sample a(x) at the point x_{j+1/2} and use second-order differences:

( a u′ )|_{x_{j+1/2}} ≈ a_{j+1/2} ( u_{j+1} − u_j ) / h,   ( a u′ )|_{x_{j−1/2}} ≈ a_{j−1/2} ( u_j − u_{j−1} ) / h

where a_{j+1/2} ≡ a( x_{j+1/2} ).

[Figure: points used to evaluate (a u′)′ at x_j and u′ at x_{j+1/2}]

slide-75
SLIDE 75

75 of 104

The basic stencil is given

  • We combine the differences for u′ and for ( a u′ )′ to obtain the operator

−( a(x_j) u′(x_j) )′ ≈ −( a_{j+1/2} ( u_{j+1} − u_j ) / h − a_{j−1/2} ( u_j − u_{j−1} ) / h ) / h

and the problem becomes, for 1 ≤ j ≤ N−1:

(1/h²) ( −a_{j−1/2} u_{j−1} + ( a_{j−1/2} + a_{j+1/2} ) u_j − a_{j+1/2} u_{j+1} ) = f_j,   u_0 = u_N = 0
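A sketch of applying this conservative stencil; names are illustrative:

```python
def variable_coeff_operator(u, a_mid, h):
    """Apply the conservative discretization of -(a u')' on a uniform grid.

    a_mid[j] holds a_{j+1/2} = a(x_{j+1/2}) for j = 0..N-1; u[0] = u[N] = 0."""
    N = len(u) - 1
    Au = [0.0] * (N + 1)
    for j in range(1, N):
        Au[j] = (-a_mid[j-1] * u[j-1]
                 + (a_mid[j-1] + a_mid[j]) * u[j]
                 - a_mid[j] * u[j+1]) / h**2
    return Au
```

With a(x) constant the stencil reduces to the usual (1/h²)(−1, 2, −1) operator times that constant.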

slide-76
SLIDE 76

76 of 104

Coarsening the variable coefficient problem

  • A reasonable approach is to use a standard multigrid algorithm with linear interpolation, full weighting, and the stencil

A^{2h} = (1/(2h)²) ( −a_{j−1/2}^{2h}    a_{j−1/2}^{2h} + a_{j+1/2}^{2h}    −a_{j+1/2}^{2h} )

where

a_{j+1/2}^{2h} = ( a_{2j+1/2}^h + a_{2j+3/2}^h ) / 2

  • The same stencil can be obtained via the Galerkin relation.

slide-77
SLIDE 77

77 of 104

Standard multigrid degrades if a(x) is highly variable

  • It can be shown that the variable-coefficient discretization is equivalent to using standard multigrid, with simple averaging, on the Poisson problem posed on a certain variable-spaced mesh.
  • But simple averaging won’t accurately represent smooth components if x^h_{2j+1} is close to x^h_{2j} but far from x^h_{2j+2}.

(a(x)u′)′ = f   ↔   u″(x) = f on a variable mesh

[figure: fine gridpoints x^h_{2j}, x^h_{2j+1}, x^h_{2j+2} and coarse gridpoints x^{2h}_j, x^{2h}_{j+1} on the equivalent variable mesh]

slide-78
SLIDE 78

78 of 104

One remedy is to apply operator interpolation

  • Assume that relaxation does not change smooth error, so the residual is approximately zero. Applying A^h at x^h_{2j+1} yields

[ −a^h_{2j+1/2} e^h_{2j} + ( a^h_{2j+1/2} + a^h_{2j+3/2} ) e^h_{2j+1} − a^h_{2j+3/2} e^h_{2j+2} ] / h² = 0

  • Solving for e^h_{2j+1} (using e^h_{2j} = e^{2h}_j and e^h_{2j+2} = e^{2h}_{j+1}):

e^h_{2j+1} = ( a^h_{2j+1/2} e^{2h}_j + a^h_{2j+3/2} e^{2h}_{j+1} ) / ( a^h_{2j+1/2} + a^h_{2j+3/2} )

slide-79
SLIDE 79

79 of 104

Thus, the operator induced interpolation is

v^h_{2j+1} = ( a^h_{2j+1/2} v^{2h}_j + a^h_{2j+3/2} v^{2h}_{j+1} ) / ( a^h_{2j+1/2} + a^h_{2j+3/2} )

v^h_{2j} = v^{2h}_j

  • And, as usual, the restriction and coarse-grid operators are defined by the Galerkin relations

I^{2h}_h = c ( I^h_{2h} )^T,   A^{2h} = I^{2h}_h A^h I^h_{2h}
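The coefficient-weighted interpolation above can be sketched in a few lines. The function name and the array layout (a_h[k] = a^h_{k+1/2}; coarse vector v2h includes its boundary values) are illustrative assumptions:

```python
import numpy as np

def operator_interpolation(v2h, a_h):
    """Operator-induced interpolation on a 1-D grid:
        v^h_{2j}   = v^{2h}_j
        v^h_{2j+1} = (a^h_{2j+1/2} v^{2h}_j + a^h_{2j+3/2} v^{2h}_{j+1})
                     / (a^h_{2j+1/2} + a^h_{2j+3/2})
    v2h has length N/2 + 1 (boundaries included); a_h has length N."""
    v2h = np.asarray(v2h, dtype=float)
    a_h = np.asarray(a_h, dtype=float)
    N = len(a_h)                      # number of fine-grid intervals (even)
    vh = np.zeros(N + 1)
    vh[0::2] = v2h                    # coarse points are injected
    aL = a_h[0::2]                    # a^h_{2j+1/2}
    aR = a_h[1::2]                    # a^h_{2j+3/2}
    vh[1::2] = (aL * v2h[:-1] + aR * v2h[1:]) / (aL + aR)
    return vh
```

When a(x) is constant the weights reduce to 1/2 and 1/2, i.e., ordinary linear interpolation, which is a useful sanity check.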

slide-80
SLIDE 80

80 of 104

A Variable coefficient example

  • We use a V(2,1) cycle with full weighting and linear interpolation.
  • We use a(x) = ρ sin(kπx) and a(x) = ρ rand(kπx).

Convergence factors:

ρ      k=3     k=25    k=50    k=100   k=200   k=400   rand
0      0.085   0.085   0.085   0.085   0.085   0.085   0.085
0.25   0.084   0.098   0.098   0.094   0.093   0.083   0.083
0.5    0.093   0.185   0.194   0.196   0.195   0.187   0.173
0.75   0.119   0.374   0.387   0.391   0.390   0.388   0.394
0.85   0.142   0.497   0.511   0.514   0.514   0.526   0.472
0.95   0.191   0.681   0.690   0.694   0.699   0.745   0.672

slide-81
SLIDE 81

81 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

✔ ✔ ✔ ✔ ✔

slide-82
SLIDE 82

Algebraic multigrid: for unstructured-grids

  • Automatically defines coarse “grid”
  • AMG has two distinct phases:

— setup phase: define MG components
— solution phase: perform MG cycles

  • AMG approach is opposite of geometric MG

— fix relaxation (point Gauss-Seidel)
— choose coarse “grids” and prolongation, P, so that error not reduced by relaxation is in range(P)
— define other MG components so that coarse-grid correction eliminates error in range(P) (i.e., use the Galerkin principle)

(In contrast, geometric MG fixes the coarse grids, then defines suitable operators and smoothers.)

82 of 112

slide-83
SLIDE 83
AMG has two phases:

  • Setup Phase
    – Select coarse “grids” Ω^{m+1}, m = 1, 2, …
    – Define interpolation I^m_{m+1}, m = 1, 2, …
    – Define restriction and coarse-grid operators:

I^{m+1}_m = ( I^m_{m+1} )^T,   A^{m+1} = I^{m+1}_m A^m I^m_{m+1}

  • Solve Phase
    – Standard multigrid operations, e.g., V-cycle, W-cycle, FMG, FAS, etc.

  • Note: Only the selection of coarse grids does not parallelize well using existing techniques!

83 of 112
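The Galerkin relations of the setup phase can be sketched directly with dense matrices. A minimal illustration (the helper name is ours; the 1-D Laplacian demo with linear interpolation and c = 1/2, i.e., full weighting, is an assumed test case):

```python
import numpy as np

def galerkin_coarse_operator(A, P, c=1.0):
    """Galerkin relations: restriction R = c * P^T, coarse operator A_c = R A P."""
    R = c * P.T
    return R @ A @ P, R

# Demo: 1-D Laplacian, h = 1/8, 7 interior fine points, 3 coarse points.
n_f, n_c, h = 7, 3, 1.0 / 8.0
A = (1.0 / h**2) * (2 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1))
P = np.zeros((n_f, n_c))
for j in range(n_c):                  # coarse point j sits at fine index 2j+1
    P[2 * j, j] = 0.5                 # linear interpolation stencil [1/2, 1, 1/2]
    P[2 * j + 1, j] = 1.0
    P[2 * j + 2, j] = 0.5
Ac, R = galerkin_coarse_operator(A, P, c=0.5)   # c = 1/2 gives full weighting
```

For this demo, Ac reproduces the Laplacian on the grid of spacing 2h, which is the familiar "variational" property of Galerkin coarsening.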

slide-84
SLIDE 84

AMG fundamental concept: Smooth error = “small” residuals

  • Consider the iterative method error recurrence

e^{k+1} = ( I − Q^{−1} A ) e^k

  • Error that is slow to converge satisfies

( I − Q^{−1} A ) e ≈ e,   i.e.,   Q^{−1} A e ≈ 0,   so the residual r = A e ≈ 0

  • More precisely, it can be shown that smooth error satisfies

‖ A e ‖_{D^{−1}} ≪ ‖ e ‖_D        (1)
84 of 112
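The claim "relaxation leaves error with small residuals" is easy to see numerically. A sketch, assuming weighted Jacobi (ω = 2/3) on the 1-D Laplacian with a random initial error; all names and parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 63
# 1-D Laplacian (the 1/h^2 scaling cancels out of the ratio below)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.diag(A))
Dinv = np.linalg.inv(D)

def dnorm(M, v):
    """Weighted norm ||v||_M = sqrt(v^T M v)."""
    return float(np.sqrt(v @ M @ v))

e = rng.standard_normal(n)                 # random (rough) initial error
ratio0 = dnorm(Dinv, A @ e) / dnorm(D, e)  # ||Ae||_{D^{-1}} / ||e||_D before

for _ in range(50):                        # weighted Jacobi error propagation:
    e = e - (2.0 / 3.0) * Dinv @ (A @ e)   # e <- (I - w D^{-1} A) e

ratio1 = dnorm(Dinv, A @ e) / dnorm(D, e)  # ratio after relaxation
```

After relaxation the surviving error is smooth and the ratio drops by orders of magnitude, which is exactly relation (1).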

slide-85
SLIDE 85

AMG uses strong connection to determine MG components

  • It is easy to show from (1) that smooth error satisfies

⟨ A e, e ⟩ ≪ ⟨ D e, e ⟩        (2)

  • Define: i is strongly connected to j by

− a_{ij} ≥ θ max_{k≠i} { − a_{ik} },   0 < θ ≤ 1

  • For M-matrices, we have from (2)

(1/2) Σ_{j≠i} ( − a_{ij} / a_{ii} ) ( e_i − e_j )² / e_i² ≪ 1

  • implying that smooth error varies slowly in the direction of strong connections

85 of 112
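The strong-connection test is a simple row-wise threshold. A sketch (dense matrix, illustrative function name; real AMG codes work on sparse rows):

```python
import numpy as np

def strong_connections(A, theta=0.25):
    """Classical strength of connection:
    S_i = { j != i : -a_ij >= theta * max_{k != i} (-a_ik) }.
    Returns one set of strong neighbors per row."""
    n = A.shape[0]
    S = []
    for i in range(n):
        off = [-A[i, k] for k in range(n) if k != i]
        m = max(off) if off else 0.0
        S.append({j for j in range(n)
                  if j != i and m > 0 and -A[i, j] >= theta * m})
    return S

# Example: a row with one strong (-1) and one weak (-0.01) coupling;
# thresholding keeps only the strong neighbor.
B = np.array([[2.02, -1.00, -0.01],
              [-1.00, 2.02, -1.00],
              [-0.01, -1.00, 2.02]])
```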

slide-86
SLIDE 86

Some useful definitions

  • The set of strong connections of a variable u_i, that is, the variables upon whose values the value of u_i depends, is defined as

S_i = { j ≠ i : − a_{ij} ≥ θ max_{k≠i} ( − a_{ik} ) }

  • The set of points strongly connected to a variable u_i is denoted S_i^T = { j : i ∈ S_j }.
  • The set of coarse-grid variables is denoted C.
  • The set of fine-grid variables is denoted F.
  • The set of coarse-grid variables used to interpolate the value of the fine-grid variable u_i is denoted C_i.

86 of 112

slide-87
SLIDE 87

Choosing the Coarse Grid

  • Two Criteria
    – (C1) For each i ∈ F, each point j ∈ S_i should either be in C itself or should be strongly connected to at least one point in C_i.
    – (C2) C should be a maximal subset with the property that no two C-points are strongly connected to each other.
  • Satisfying both (C1) and (C2) is sometimes impossible. We use (C2) as a guide while enforcing (C1).

87 of 112

slide-88
SLIDE 88

Selecting the coarse-grid points

  • C-point selected (point with largest “value”)
  • Neighbors of C-point become F-points
  • Next C-point selected (after updating “values”)
  • F-points selected, etc.

88 of 112
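The selection loop above can be sketched as a greedy pass. This is a simplification of the classical first pass (a full Ruge–Stüben implementation also boosts the measures of neighbors of new F-points); the function name and data layout are ours:

```python
def select_coarse_points(S, ST):
    """Greedy C/F splitting sketch: repeatedly pick the unassigned point
    with the largest 'value' (number of unassigned points strongly
    connected to it), make it a C-point, and make the points strongly
    connected to it F-points.
    S[i]  = set of points i strongly depends on
    ST[i] = set of points strongly connected to i (transpose of S)."""
    n = len(S)
    unassigned = set(range(n))
    C, F = set(), set()
    while unassigned:
        # value of p = how many still-unassigned points depend on p
        i = max(unassigned, key=lambda p: len(ST[p] & unassigned))
        C.add(i)
        unassigned.discard(i)
        for j in ST[i] & unassigned:      # strong neighbors become F-points
            F.add(j)
            unassigned.discard(j)
    return C, F
```

By construction every F-point is strongly connected to at least one C-point, which is what criterion (C1) asks for.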

slide-89
SLIDE 89

Examples: Laplacian Operator

5-pt FD, 9-pt FE (quads), and 9-pt FE (stretched quads)

5-pt FD:
      −1
  −1   4  −1
      −1

9-pt FE (quads):
  −1  −1  −1
  −1   8  −1
  −1  −1  −1

9-pt FE (stretched quads):
  −1   2  −1
  −4   8  −4
  −1   2  −1

89 of 112

slide-90
SLIDE 90

Prolongation is based on smooth error, strong connections (from M-matrices)

Prolongation: smooth error is given by

r_i = a_{ii} e_i + Σ_{j∈N_i} a_{ij} e_j ≈ 0

and the prolongation has the form

( P e )_i = e_i                     if i ∈ C
( P e )_i = Σ_{k∈C_i} ω_{ik} e_k    if i ∈ F

[figure: local C/F point pattern around point i]
90 of 112

slide-91
SLIDE 91

Prolongation is based on smooth error, strong connections (from M-matrices)

The definition of smooth error,

a_{ii} e_i ≈ − Σ_{j≠i} a_{ij} e_j

Sets:
  C_i   — strongly connected C-pts.
  D_i^s — strongly connected F-pts.
  D_i^w — weakly connected points.

Gives:

a_{ii} e_i ≈ − Σ_{j∈C_i} a_{ij} e_j − Σ_{j∈D_i^s} a_{ij} e_j − Σ_{j∈D_i^w} a_{ij} e_j
              (strong C)             (strong F)              (weak pts.)

[figure: local C/F point pattern around point i]
91 of 112

slide-92
SLIDE 92

Finally, the prolongation weights are defined

  • In the smooth-error relation, use e_j = e_i for the weak connections. For the strong F-points use

e_j ≈ ( Σ_{k∈C_i} a_{jk} e_k ) / ( Σ_{k∈C_i} a_{jk} )

yielding the prolongation weights:

ω_{ij} = − ( a_{ij} + Σ_{m∈D_i^s} ( a_{im} a_{mj} / Σ_{k∈C_i} a_{mk} ) ) / ( a_{ii} + Σ_{n∈D_i^w} a_{in} )

92 of 112
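The weight formula translates almost verbatim into code. A sketch for a single F-point (dense matrix, illustrative function name; the caller supplies the sets C_i, D_i^s, D_i^w from the strength information):

```python
import numpy as np

def rs_weights(A, i, Ci, Ds, Dw):
    """Classical interpolation weights for F-point i:
      w_ij = -( a_ij + sum_{m in Ds} a_im * a_mj / sum_{k in Ci} a_mk )
             / ( a_ii + sum_{n in Dw} a_in )
    Ci: interpolatory C-points, Ds: strong F-neighbors, Dw: weak neighbors."""
    denom = A[i, i] + sum(A[i, n] for n in Dw)   # weak couplings lumped to diagonal
    w = {}
    for j in Ci:
        # each strong F-neighbor m distributes its coupling over C_i
        corr = sum(A[i, m] * A[m, j] / sum(A[m, k] for k in Ci) for m in Ds)
        w[j] = -(A[i, j] + corr) / denom
    return w
```

For the 1-D Laplacian with C-neighbors on both sides and no strong F- or weak couplings, the weights come out to 1/2 and 1/2, i.e., linear interpolation.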

slide-93
SLIDE 93

AMG setup costs: a bad rap

  • Many geometric MG methods need to compute

prolongation and coarse-grid operators

  • The only additional expense in the AMG setup phase is

the coarse grid selection algorithm

  • AMG setup phase is only 10-25% more

expensive than in geometric MG and may be

considerably less than that!

93 of 112

slide-94
SLIDE 94

AMG Performance: Sometimes a Success Story

  • AMG performs extremely well on the model problem (Poisson’s equation, regular grid): optimal convergence factor (e.g., 0.14) and scalability with increasing problem size.

  • AMG appears to be both scalable and efficient on

diffusion problems on unstructured grids (e.g., 0.1- 0.3).

  • AMG handles anisotropic diffusion coefficients on

irregular grids reasonably well.

  • AMG handles anisotropic operators on structured

and unstructured grids relatively well (e.g., 0.35).

94 of 112

slide-95
SLIDE 95

So, what could go wrong?

Strong F-F connections: weights are dependent on each other

  • For point i, the value e_i is interpolated from k1 ∈ C, k2 ∈ C, and e_j; e_j is needed to make the interpolation weights for approximating e_i.
  • For point j, the value e_j is interpolated from k3 ∈ C, k4 ∈ C, and e_i; e_i is needed to make the interpolation weights for approximating e_j.
  • It’s an implicit system!

[figure: two strongly connected F-points i and j, each interpolating from nearby C-points k1, k2, k3, k4 and from each other]
95 of 112

slide-96
SLIDE 96

Is there a fix?

  • A Gauss-Seidel-like iterative approach to weight definition is implemented. Usually two passes suffice. But does it work? Frequently, it does:

Convergence factors for the Laplacian, stretched quadrilaterals (two mesh aspect ratios Δx/Δy):

θ      Standard   Iterative
0.25   0.47       0.14
0.5    0.24       0.14
0.25   0.83       0.82
0.5    0.53       0.23

96 of 112

slide-97
SLIDE 97

AMG for systems

  • How can we do AMG on systems?
  • Naïve approach: “block” approach (block Gauss-Seidel, using scalar AMG to “solve” at each cycle):

[ A_{11}  A_{12} ] [ u ]   [ f ]
[ A_{21}  A_{22} ] [ v ] = [ g ]

u ← ( A_{11} )^{−1} ( f − A_{12} v )
v ← ( A_{22} )^{−1} ( g − A_{21} u )

  • Great idea! Except that it doesn’t work! (relaxation does not evenly smooth errors in both unknowns)

97 of 112

slide-98
SLIDE 98

AMG for systems: a solution

  • To solve the system problem, allow interaction between the unknowns at all levels:

A^k = [ A_{11}^k  A_{12}^k ]
      [ A_{21}^k  A_{22}^k ]

and

I^k_{k+1} = [ (I_u)^k_{k+1}        0        ]
            [        0        (I_v)^k_{k+1} ]

  • This is called the “unknown” approach.
  • Results: 2-D elasticity, uniform quadrilateral mesh:

Mesh spacing         0.125   0.0625   0.03125   0.015625
Convergence factor   0.22    0.35     0.42      0.44

98 of 112

slide-99
SLIDE 99

How’s it perform (vol I)?

Regular grids, plain, old, vanilla problems

  • The Laplace Operator:

Stencil           Convergence per cycle   Complexity   Time per cycle   Setup time
5-pt              0.054                   2.21         0.29             1.63
5-pt skew         0.067                   2.12         0.27             1.52
9-pt (−1,8)       0.078                   1.30         0.26             1.83
9-pt (−1,−4,20)   0.109                   1.30         0.26             1.83

  • Anisotropic Laplacian: − U_xx − ε U_yy

ε                   0.001   0.01    0.1     0.5     1       2       10      100     1000
Convergence/cycle   0.084   0.093   0.058   0.069   0.056   0.079   0.087   0.093   0.083

99 of 112

slide-100
SLIDE 100

How’s it perform (vol II)?

Structured Meshes, Rectangular Domains

  • 5-point Laplacian on regular rectangular grids

Convergence factor (y-axis) plotted against number of nodes (x-axis)

[bar chart: convergence factors 0.055, 0.082, 0.146, 0.142, 0.135, 0.133 on grids of up to 1,000,000 nodes]

100 of 112

slide-101
SLIDE 101

How’s it perform (vol III)?

Unstructured Meshes, Rectangular Domains

  • Laplacian on random unstructured grids (regular

triangulations, 15-20% nodes randomly collapsed into neighboring nodes) Convergence factor (y-axis) plotted against number of nodes (x-axis)

[bar chart: convergence factors 0.097, 0.111, 0.253, 0.175, 0.223 on grids of 20,000–60,000 nodes]

101 of 112

slide-102
SLIDE 102

How’s it perform (vol IV)?

Isotropic diffusion, Structured/Unstructured Grids

  • −∇·( d(x,y) ∇u ) on structured and unstructured grids. Problems used: “a” means parameter c = 10, “b” means c = 1000:

6: d(x,y) := 1 + c·x·y
7: d(x,y) := 1 if x ≤ .5,  c·x if x > .5
8: d(x,y) := c if .125 ≤ max{ x − .5, y − .5 } ≤ .25,  1 otherwise
9: d(x,y) := c if .125 ≤ (x − .5)² + (y − .5)² ≤ .25,  1 otherwise

[bar chart: convergence factors ≈ 0.05–0.4 for problems 6a–9b on structured grids (N = 16642, N = 66049) and unstructured grids (N = 13755, N = 54518)]

102 of 112

slide-103
SLIDE 103

How’s it perform (vol V)?

Laplacian operator, unstructured Grids

[chart: convergence factors 0.1002, 0.1715, 0.2198, 0.1888, 0.2237, 0.2754 on grids of 50,000–350,000 gridpoints]

103 of 112

slide-104
SLIDE 104

104 of 104

Outline

  • Nonlinear Problems
  • Neumann Boundary Conditions
  • Anisotropic Problems
  • Variable Mesh Problems
  • Variable Coefficient Problems
  • Algebraic Multigrid

✔ ✔ ✔ ✔ ✔ ✔


slide-105
SLIDE 105

Multigrid Rules!

  • We conclude with a few observations:
    – We have barely scratched the surface of the myriad ways that multigrid has been, and can be, employed.
    – With diligence and care, multigrid can be made to handle many types of complications in a robust, efficient manner.
    – Further extensions to multigrid methodology are being sought by many people working on many different problems.