

SLIDE 1

Convergent iterations Complexity analysis An adaptive Galerkin method Summary

Optimal adaptive wavelet methods

for linear operator equations

  • T. Gantumur
  • R. Stevenson

Numerical Colloquium 17 June

Gantumur, Stevenson

SLIDE 2

Overview

  • Linear operator equation Au = g with A : H → H′
  • Riesz basis Ψ = {ψ_λ} of H, e.g. u = Σ_λ u_λ ψ_λ
  • Infinite-dimensional matrix-vector system Au = g, with u = (u_λ)_λ and A : ℓ2 → ℓ2
  • Convergent iterations such as u^(i+1) = u^(i) + α[g − Au^(i)]
  • We can approximate Au^(i) by a finitely supported vector
  • How cheaply can we compute this approximation?
  • The answer will depend on A and Ψ

SLIDE 3

Outline

1. Continuous problem, discretization, and convergent iterations
   • Linear operator equations
   • Discretization
   • Convergent iterations in discrete space
2. Complexity analysis
   • Uniform methods: convergence, complexity
   • Nonlinear approximation
   • Optimal complexity
   • Computability
3. An adaptive Galerkin method
   • Optimal complexity with coarsening
   • Optimal complexity without coarsening

SLIDE 4

Linear Operator Equations

  • Let H be a separable Hilbert space and H′ its dual
  • A : H → H′ is boundedly invertible
  • g ∈ H′ is a linear functional
  • Problem: find u ∈ H such that Au = g
  • For v ∈ H and h ∈ H′, ⟨h, v⟩ = h(v) denotes the duality pairing

SLIDE 5

Sobolev Spaces

  • Let Ω be an n-dimensional domain or smooth manifold
  • H = H^t, a closed subspace of H^t(Ω)
  • H′ = H^{−t}, the dual space

SLIDE 6

Linear Differential Operators

Partial differential operators of order 2t:

⟨Au, v⟩ = Σ_{|α|,|β|≤t} ⟨a_{αβ} ∂^β u, ∂^α v⟩

Example: the reaction-diffusion equation (t = 1)

⟨Au, v⟩ = ∫_Ω ∇u · ∇v + κ² uv

SLIDE 7

Boundary Integral Operators

Boundary integral operators:

⟨Au, v⟩ = ∫∫ v(x) K(x, y) u(y) dΩ_y dΩ_x

with the kernel K(x, y) singular at x = y.

Example: the single layer operator for the Laplace BVP in a 3-d domain (t = −1/2):

K(x, y) = 1 / (4π|x − y|)

SLIDE 8

Convergent Iterations in Continuous Space

Gradient iterations: u^(i+1) = u^(i) + B_i(g − Au^(i)), with B_i : H′ → H

u − u^(i+1) = u − u^(i) − B_iA(u − u^(i)) = (I − B_iA)(u − u^(i))

‖u − u^(i+1)‖_H ≤ ‖I − B_iA‖_{H→H} ‖u − u^(i)‖_H

Convergence: ρ_i := ‖I − B_iA‖_{H→H} < 1

SLIDE 9

Normal Equations

Observation: let R : H′ → H be self-adjoint, ⟨Rf, h⟩ = ⟨f, Rh⟩ for f, h ∈ H′, and H′-elliptic, ⟨Rf, f⟩ ≥ α‖f‖²_{H′} for f ∈ H′ with some α > 0. Then A′RA : H → H′ is self-adjoint and H-elliptic.

Normal equation: Au = g ⇒ A′RAu = A′Rg

Assumption: A is self-adjoint and H-elliptic.

SLIDE 10

Riesz bases

Ψ = {ψ_λ : λ ∈ ∇} is a Riesz basis for H, i.e. each v ∈ H has a unique expansion

v = Σ_{λ∈∇} d_λ(v) ψ_λ  s.t.  c‖v‖²_H ≤ Σ_{λ∈∇} |d_λ(v)|² ≤ C‖v‖²_H

  • d_λ ∈ H′ and d_λ(ψ_µ) = δ_{λµ}
  • {d_λ : λ ∈ ∇} is a Riesz basis for H′
  • Ψ̃ = {ψ̃_λ} := {d_λ} is the dual basis: ⟨ψ̃_λ, ψ_µ⟩ = δ_{λµ}
  • For v ∈ H, we have v = {v_λ} := {d_λ(v)} ∈ ℓ2(∇)

SLIDE 11

Wavelet bases

  • Ψ is a Riesz basis for H = H^t
  • Nested index sets ∇_0 ⊂ ∇_1 ⊂ … ⊂ ∇_j ⊂ … ⊂ ∇, with S_j = span{ψ_λ : λ ∈ ∇_j} ⊂ H and S̃_j = span{ψ̃_λ : λ ∈ ∇_j} ⊂ H′
  • Locality: diam(supp ψ_λ) = O(2^{−j}) if λ ∈ ∇_j \ ∇_{j−1}
  • Polynomial exactness: all polynomials of degree d − 1 are reproduced, P_{d−1} ⊂ S_0, and P_{d̃−1} ⊂ S̃_0, more precisely ⟨P_{d̃−1}, ·⟩_{L2} ⊂ S̃_0, so {S_j} has a good approximation property
  • Vanishing moments: if λ ∈ ∇ \ ∇_0, then ⟨P_{d̃−1}, ψ_λ⟩_{L2} = 0 (cancellation property)

SLIDE 12

Equivalent Discrete Problem

[Cohen, Dahmen, DeVore ’02]

  • Wavelet basis Ψ = {ψ_λ : λ ∈ ∇}
  • Stiffness A = ⟨Aψ_λ, ψ_µ⟩_{λ,µ} and load g = ⟨g, ψ_λ⟩_λ
  • Linear equation in ℓ2(∇): Au = g, with A : ℓ2(∇) → ℓ2(∇) SPD and g ∈ ℓ2(∇)
  • u = Σ_λ u_λ ψ_λ is the solution of Au = g
  • ‖u − v‖_{ℓ2(∇)} ≂ ‖u − v‖_H with v = Σ_λ v_λ ψ_λ
  • A good approximation of the coefficient vector u = (u_λ)_λ induces a good approximation of u ∈ H
  • Ψ defines a topological isomorphism between H and ℓ2(∇)

SLIDE 13

Convergent Iterations in Discrete Space

Richardson's iteration: u^(0) = 0, u^(i+1) = u^(i) + α[g − Au^(i)], i = 0, 1, …

u − u^(i+1) = u − u^(i) − αA(u − u^(i)) = (I − αA)(u − u^(i))

‖u − u^(i+1)‖_ℓ2 ≤ ‖I − αA‖_{ℓ2→ℓ2} ‖u − u^(i)‖_ℓ2

Convergence: ρ := ‖I − αA‖_{ℓ2→ℓ2} < 1

g and Au^(i) are infinitely supported; approximate them by finitely supported sequences.
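The contraction argument above is easy to check numerically on a finite section of the system; a minimal numpy sketch (the small SPD matrix below is purely illustrative, not an actual wavelet stiffness matrix):

```python
import numpy as np

# Illustrative SPD matrix and right-hand side (not a wavelet stiffness matrix)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
g = np.array([1.0, 2.0, 3.0])

# For SPD A, alpha = 2/(lmin + lmax) minimizes rho = ||I - alpha A||_2 < 1
lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]
alpha = 2.0 / (lmin + lmax)
rho = max(abs(1 - alpha * lmin), abs(1 - alpha * lmax))
assert rho < 1

u = np.zeros_like(g)
for i in range(200):
    u = u + alpha * (g - A @ u)   # u^(i+1) = u^(i) + alpha [g - A u^(i)]

print(np.allclose(u, np.linalg.solve(A, g)))  # True
```

For SPD matrices the choice α = 2/(λ_min + λ_max) minimizes ρ = max|1 − αλ| over the spectrum, giving ρ = (λ_max − λ_min)/(λ_max + λ_min).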

SLIDE 14

Approximate Iterations

Approximate right-hand side: RHS[g, ε] → g_ε satisfies ‖g − g_ε‖_ℓ2 ≤ ε

Approximate application of the matrix: APPLY[A, v, ε] → w_ε satisfies ‖Av − w_ε‖_ℓ2 ≤ ε

Approximate Richardson iteration:

ũ^(0) = 0, ũ^(i+1) = ũ^(i) + α (RHS[g, ε_i] − APPLY[A, ũ^(i), ε_i]), i = 0, 1, …

Choose ε_i such that ‖u^(i) − ũ^(i)‖ ≲ ‖u − u^(i)‖

SLIDE 15

Convergence

RICHARDSON[ũ^(0), ε_fin] → ũ:
  for i = 0, 1, …
    ε_i := Cρ^i
    r̃^(i) := RHS[g, ε_i] − APPLY[A, ũ^(i), ε_i]
    if ‖r̃^(i)‖_ℓ2 + 2ε_i ≤ ε_fin then terminate
    ũ^(i+1) := ũ^(i) + α r̃^(i)
  endfor

Lemma: RICHARDSON[ũ^(0), ε] → ũ terminates with ‖g − Aũ‖_ℓ2 ≤ ε.

What is the computational cost of RICHARDSON[ũ^(0), ε], depending on ε?
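A schematic Python version of this loop, with perturbed exact operations standing in for the real RHS and APPLY routines (the perturbation model, step size α, and the constants C, ρ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0], [1.0, 3.0]])        # illustrative SPD matrix
g = np.array([1.0, 2.0])

def rhs(eps):                                  # surrogate RHS[g, eps]: error <= eps
    d = rng.standard_normal(g.shape)
    return g + eps * d / (2 * np.linalg.norm(d))

def apply_(v, eps):                            # surrogate APPLY[A, v, eps]: error <= eps
    d = rng.standard_normal(v.shape)
    return A @ v + eps * d / (2 * np.linalg.norm(d))

def richardson(eps_fin, alpha=0.25, C=2.0, rho=0.7):
    u = np.zeros_like(g)
    for i in range(1000):
        eps_i = C * rho**i
        r = rhs(eps_i) - apply_(u, eps_i)      # inexact residual
        if np.linalg.norm(r) + 2 * eps_i <= eps_fin:
            return u                           # then ||g - A u|| <= eps_fin
        u = u + alpha * r
    return u

u = richardson(1e-6)
print(np.linalg.norm(g - A @ u) <= 1e-6)       # True
```

The stopping test works because the true residual differs from the computed one by at most the two tolerances: ‖g − Au‖ ≤ ‖r̃‖ + 2ε_i.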

SLIDE 16

Uniform Refinement Galerkin Methods

  • Wavelet basis Ψ_j := {ψ_λ : λ ∈ ∇_j} of S_j
  • Stiffness A_j = ⟨Aψ_λ, ψ_µ⟩_{λ,µ∈∇_j} and load g_j = ⟨g, ψ_λ⟩_{λ∈∇_j}
  • Linear equation in ℓ2(∇_j): A_j u_j = g_j, with A_j : ℓ2(∇_j) → ℓ2(∇_j) SPD and g_j ∈ ℓ2(∇_j)
  • u_j = Σ_λ [u_j]_λ ψ_λ ∈ S_j approximates the solution of Au = g
  • With the orthogonal projector P_j : ℓ2(∇) → ℓ2(∇_j), the above equation is equivalent to P_j A u_j = P_j g

SLIDE 17

Convergence and Complexity

If u ∈ H^{t+ns} for some s ∈ (0, (d−t)/n], then

ε_j := ‖u − u_j‖_{H^t} ≤ C inf_{v∈S_j} ‖u − v‖_{H^t} = O(2^{−jns})

With N_j = dim S_j = O(2^{jn}), this gives ε_j = O(N_j^{−s}).

Solve A_j u_j = g_j with cascadic CG: complexity O(N_j).

Similar estimates hold for FEM.

SLIDE 18

Best N-term Approximation

Given u = (u_λ)_λ ∈ ℓ2, approximate u using N nonzero coefficients:

ℵ_N := ∪_{Λ⊂∇ : #Λ=N} ℓ2(Λ)

  • ℵ_N is a nonlinear manifold
  • Let u_N be such that ‖u − u_N‖_ℓ2 ≤ ‖u − v‖_ℓ2 for all v ∈ ℵ_N
  • u_N is a best approximation of u with #supp u_N ≤ N
  • u_N can be constructed by picking the N largest-in-modulus coefficients from u
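Since the ℓ2-best approximation from ℵ_N simply keeps the largest coefficients, the construction is a few lines of numpy (the sample vector is illustrative):

```python
import numpy as np

def best_n_term(u, N):
    """Best N-term approximation: keep the N largest-in-modulus coefficients."""
    keep = np.argsort(np.abs(u))[::-1][:N]    # indices of the N largest |u_lambda|
    uN = np.zeros_like(u)
    uN[keep] = u[keep]
    return uN

u = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
u2 = best_n_term(u, 2)
print(u2)                                     # keeps -3.0 and 2.0, zeros elsewhere
print(np.linalg.norm(u - u2))                 # l2 norm of the discarded tail
```

The error ‖u − u_N‖_ℓ2 is exactly the ℓ2 norm of the discarded tail, which is minimal over all supports of size N.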

SLIDE 19

Nonlinear vs. linear approximation

Nonlinear approximation: if u ∈ B^{t+ns}_τ(L_τ) with 1/τ = 1/2 + s for some s ∈ (0, (d−t)/n), then

ε_N = ‖u_N − u‖ = O(N^{−s})

Linear approximation: if u ∈ H^{t+ns} for some s ∈ (0, (d−t)/n], uniform refinement gives

ε_j = ‖u_j − u‖ = O(N_j^{−s})

H^{t+ns} is a proper subset of B^{t+ns}_τ(L_τ). [Dahlke, DeVore]: u ∈ B^{t+ns}_τ(L_τ) is a much milder condition than u ∈ H^{t+ns}.

SLIDE 20

Approximation spaces

Approximation space: A^s := {v ∈ ℓ2 : ‖v − v_N‖_ℓ2 ≤ O(N^{−s})}

Quasi-semi-norm: |v|_{A^s} := sup_{N∈ℕ} N^s ‖v − v_N‖_ℓ2

u ∈ B^{t+ns}_τ(L_τ) with 1/τ = 1/2 + s for some s ∈ (0, (d−t)/n) ⇒ u ∈ A^s

Assumption: u ∈ A^s for some s ∈ (0, (d−t)/n)

A best approximation v with ‖u − v‖ ≤ ε satisfies #supp v ≲ ε^{−1/s} |u|^{1/s}_{A^s}
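These definitions can be checked numerically on a synthetic coefficient sequence: with |u_n| = n^{−(s+1/2)}, the tail satisfies ‖u − u_N‖ ≈ N^{−s}, so N^s ‖u − u_N‖ should be roughly constant in N (the sequence and the range of N are illustrative assumptions):

```python
import numpy as np

# Synthetic decreasing coefficient sequence with u_n = n^{-(s+1/2)}:
# ||u - u_N||^2 ~ sum_{n>N} n^{-2s-1} ~ N^{-2s}, i.e. u belongs to A^s.
s = 0.5
n = np.arange(1, 100001)
u = n ** -(s + 0.5)                            # already sorted by decreasing modulus

def tail_norm(u, N):
    return np.linalg.norm(u[N:])               # ||u - u_N||_l2 for decreasing u

# quasi-semi-norm |u|_{A^s} = sup_N N^s ||u - u_N||, sampled over a range of N
Ns = np.array([10, 100, 1000, 10000])
vals = np.array([N**s * tail_norm(u, N) for N in Ns])
print(vals)                                    # roughly constant across N
```

The near-constancy of `vals` is exactly the statement that the quasi-semi-norm |u|_{A^s} is finite.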

SLIDE 21

Requirements on the Subroutines

Complexity of RHS: RHS[g, ε] → g_ε terminates with ‖g − g_ε‖_ℓ2 ≤ ε,

#supp g_ε ≲ ε^{−1/s} |u|^{1/s}_{A^s},  flops and memory ≲ ε^{−1/s} |u|^{1/s}_{A^s} + 1

Complexity of APPLY: for #supp v < ∞, APPLY[A, v, ε] → w_ε terminates with ‖Av − w_ε‖_ℓ2 ≤ ε,

#supp w_ε ≲ ε^{−1/s} |v|^{1/s}_{A^s},  flops and memory ≲ ε^{−1/s} |v|^{1/s}_{A^s} + #supp v + 1

SLIDE 22

Complexity of RICHARDSON

RICHARDSON[ũ^(0), ε_fin] → ũ:
  for i = 0, 1, …
    ε_i := Cρ^i
    r̃^(i) := RHS[g, ε_i] − APPLY[A, ũ^(i), ε_i]
    if ‖r̃^(i)‖_ℓ2 + 2ε_i ≤ ε_fin then terminate
    ũ^(i+1) := ũ^(i) + α r̃^(i)
  endfor

Lemma: RICHARDSON[ũ^(0), ε] → ũ terminates with ‖g − Aũ‖_ℓ2 ≤ ε. With ε_0 := ‖u − ũ^(0)‖_ℓ2,

#supp ũ ≲ ε^{−1/s} |u|^{1/s}_{A^s} + ε^{−1/s} (ε_0/ε)^C |u|^{1/s}_{A^s} + ε^{−1/s} (ε_0/ε)^C |ũ^(0)|^{1/s}_{A^s}

flops and memory: the same expression

SLIDE 23

Coarsening

COARSE[v, ε] → w: ‖v − w‖ ≤ ε and #supp w is minimal

Lemma: let θ < 1/2 and ‖u − v‖ ≤ θε. Then w = COARSE[v, (1 − θ)ε] satisfies

#supp w ≲ ε^{−1/s} |u|^{1/s}_{A^s}  and  ‖u − w‖ ≤ ε
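An exact (sort-based) COARSE is straightforward; a numpy sketch (routines in the literature use approximate binning instead of a full sort to keep the cost linear):

```python
import numpy as np

def coarse(v, eps):
    """COARSE[v, eps]: smallest-support w with ||v - w|| <= eps.
    Discard the smallest coefficients while their tail norm fits in eps."""
    order = np.argsort(np.abs(v))              # ascending by modulus
    tail2 = np.cumsum(v[order] ** 2)           # squared norm of a discarded tail
    drop = order[tail2 <= eps ** 2]            # largest index set we may discard
    w = v.copy()
    w[drop] = 0.0
    return w

v = np.array([2.0, 0.3, -0.4, 0.05])
w = coarse(v, 0.5)
print(w)                                       # keeps only entries too big to drop
print(np.linalg.norm(v - w) <= 0.5)            # True
```

Greedily dropping the smallest entries first yields the minimal support among all w with ‖v − w‖ ≤ ε.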

SLIDE 24

Complexity with coarsening

SOLVE[ε_fin] → ũ^(i):
  ũ^(0) := 0; ε_0 := ‖g‖
  for i = 0, 1, …
    ε_{i+1} := ε_i/2
    v := RICHARDSON[ũ^(i), θε_{i+1}]
    ũ^(i+1) := COARSE[v, (1 − θ)ε_{i+1}]
  until ε_{i+1} ≤ ε_fin

Theorem [Cohen, Dahmen, DeVore ’02]: SOLVE[ε] → ũ terminates with ‖g − Aũ‖_ℓ2 ≤ ε,

#supp ũ ≲ ε^{−1/s} |u|^{1/s}_{A^s},  flops and memory the same expression

SLIDE 25

Computing the Right Hand Side

Complexity of RHS: RHS[g, ε] → g_ε terminates with ‖g − g_ε‖_ℓ2 ≤ ε,

#supp g_ε ≲ ε^{−1/s} |u|^{1/s}_{A^s},  flops and memory ≲ ε^{−1/s} |u|^{1/s}_{A^s} + 1

A naive approach:

  • Compute g̃ = ⟨g, ψ_λ⟩_{λ∈Λ} for some Λ ⊂ ∇ s.t. ‖g − g̃‖ ≤ δ
  • Arrange the coefficients of g̃ by modulus beforehand
  • RHS[g, ε] := COARSE[g̃, ε − δ]

SLIDE 26

The Subroutine APPLY

Computability: the matrix A is called q*-computable when for each N one can construct an infinite matrix A_N s.t. for any q < q*, ‖A_N − A‖ = O(N^{−q}), having in each column O(N) nonzero entries whose computation takes O(N) operations.

Theorem [Cohen, Dahmen, DeVore ’01]: recall s ∈ (0, (d−t)/n). Let A be q*-computable with q* > s. Then we can construct APPLY satisfying the requirements.

A needs to be well approximated by computable sparse matrices.
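A much-simplified sketch of the mechanism behind APPLY, on a synthetic compressible matrix whose bands decay like 2^{−|i−j|} (so the banded truncation A_N satisfies ‖A − A_N‖ ≤ 2^{1−N}); the actual routine distributes the error budget adaptively, but the principle of applying sharper truncations to blocks of larger coefficients is the same:

```python
import numpy as np

n = 200
i, j = np.indices((n, n))
A = 2.0 ** (-np.abs(i - j))       # synthetic compressible matrix, bands decay like 2^{-|i-j|}

def truncate(A, N):
    """A_N: keep the 2N+1 central bands, so O(N) nonzeros per column,
    and ||A - A_N|| <= sum_{|d|>N} 2^{-|d|} = 2^{1-N} for this A."""
    return np.where(np.abs(i - j) <= N, A, 0.0)

def apply_sketch(A, v, eps, nblocks=8):
    """Crude APPLY: group coefficients into blocks of decreasing modulus and
    apply to each block a truncation just sharp enough for error <= eps/nblocks."""
    order = np.argsort(np.abs(v))[::-1]
    w = np.zeros_like(v)
    for k in range(nblocks):
        blk = order[2**k - 1 : 2**(k + 1) - 1]         # block k holds 2^k entries
        vb = np.zeros_like(v)
        vb[blk] = v[blk]
        nb = np.linalg.norm(vb)
        if nb == 0.0:
            continue
        N = int(np.ceil(1 + np.log2(nblocks * nb / eps)))  # 2^{1-N} nb <= eps/nblocks
        w += truncate(A, min(max(N, 1), n)) @ vb
    return w

v = np.random.default_rng(1).standard_normal(n)
v[np.abs(v) < 1.2] = 0.0                               # finitely supported input vector
w = apply_sketch(A, v, eps=1e-3)
print(np.linalg.norm(A @ v - w) <= 1e-3)               # True
```

The point is that small coefficients tolerate a cruder (cheaper) truncation of A, which is what makes the total cost proportional to the output accuracy rather than to the full matrix size.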

SLIDE 27

Compressibility

Assume A, A′ : H^{t+σ} → H^{−t+σ}

Level: |λ| := j such that λ ∈ ∇_j \ ∇_{j−1}

‖ψ_λ‖_{H^r} ≂ 2^{|λ|(r−t)} for r ∈ [−d̃, γ), where γ := sup{q : ψ_λ ∈ H^q}

For r ≤ min{t + d̃, σ} and r < γ − t, and |µ| ≥ |λ|:

|⟨Aψ_λ, ψ_µ⟩| ≤ ‖Aψ_λ‖_{H^{−t+r}} ‖ψ_µ‖_{H^{t−r}} ≲ ‖ψ_λ‖_{H^{t+r}} ‖ψ_µ‖_{H^{t−r}} ≲ 2^{−r(|µ|−|λ|)}

Theorem [Stevenson ’04]: if {ψ_λ} are piecewise polynomial wavelets that are sufficiently smooth and have sufficiently many vanishing moments, A is either a differential or a singular integral operator, and any entry of A can be computed spending O(1) operations, then A is q*-computable for some q* ≥ (d−t)/n (> s).

SLIDE 28

Computability

Unit cost assumption: any entry of A can be computed spending O(1) operations. This is only satisfied in very special cases: differential operators with constant coefficients, the single layer potential operator on ℝ. In general, numerical quadrature is needed.

Theorem [Gantumur, Stevenson ’04, ’05]: if {ψ_λ} are piecewise polynomial wavelets that are sufficiently smooth and have sufficiently many vanishing moments, and A is either a differential or a singular integral operator, then A is q*-computable for some q* ≥ (d−t)/n (> s).

SLIDE 29

Galerkin solutions

⟨⟨·, ·⟩⟩ := ⟨A·, ·⟩ is an inner product on ℓ2, and |||·||| := ⟨⟨·, ·⟩⟩^{1/2} is a norm.

  • Let ũ ∈ ℓ2(Λ) be an approximation to u inside SOLVE
  • A_Λ := P_Λ A|_{ℓ2(Λ)} : ℓ2(Λ) → ℓ2(Λ), and g_Λ := P_Λ g ∈ ℓ2(Λ)
  • u_Λ ∈ ℓ2(Λ) is the solution to A_Λ u_Λ = g_Λ
  • |||u − u_Λ||| = inf_{v∈ℓ2(Λ)} |||u − v|||, so in this sense u_Λ is the best approximation from ℓ2(Λ)
  • The next set Λ̃ generated by SOLVE can be too big, i.e. not optimal
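The best-approximation property rests on Galerkin orthogonality, which is easy to verify on a finite SPD example (the 3×3 matrix and the index set Λ below are illustrative):

```python
import numpy as np

# Illustrative SPD system; lam plays the role of the index set Lambda
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
g = np.array([1.0, 2.0, 3.0])
u = np.linalg.solve(A, g)

lam = [0, 2]
A_lam = A[np.ix_(lam, lam)]                    # A_Lambda = P_Lambda A restricted to Lambda
u_lam = np.zeros(3)
u_lam[lam] = np.linalg.solve(A_lam, g[lam])    # Galerkin solution on Lambda

# Galerkin orthogonality: u - u_Lambda is A-orthogonal to l2(Lambda)
err = u - u_lam
print(np.allclose((A @ err)[lam], 0.0))        # True
```

Orthogonality of the error in the A-inner product is exactly what makes u_Λ the |||·|||-best approximation from ℓ2(Λ).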

SLIDE 30

Saturation

Galerkin orthogonality: u − u_Λ ⊥_A ℓ2(Λ)

Lemma: let µ ∈ (0, 1), w ∈ ℓ2, and Λ ⊃ supp w be such that ‖P_Λ(g − Aw)‖ ≥ µ‖g − Aw‖. Then

|||u − u_Λ||| ≤ [1 − κ(A)^{−1}µ²]^{1/2} |||u − w|||

SLIDE 31

Adaptive Galerkin Method

GROW[w] → [Λ, ν]:
  r̃ := RHS[g, ζ] − APPLY[A, w, ζ]
  ν := ‖r̃‖ + 2ζ
  determine a set Λ ⊃ supp w, with minimal cardinality, such that ‖P_Λ r̃‖ ≥ µ‖r̃‖

GALSOLVE[ε] → w_k:
  k := 0; w_k := 0
  while, with [Λ_{k+1}, ν_k] := GROW[w_k], ν_k > ε do
    solve A_{Λ_{k+1}} w_{k+1} = g_{Λ_{k+1}}
    k := k + 1
    if k ≡ 0 (mod K) then w_{k+1} := COARSE[w_{k+1}, ξ]
  enddo
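The set selection inside GROW (often called bulk chasing) admits a simple sketch: starting from supp w, greedily add the largest residual entries until a fraction µ of the residual norm is captured; greedy by modulus yields a minimal-cardinality set. The residual vector and parameters below are illustrative:

```python
import numpy as np

def grow(r, supp_w, mu):
    """Pick a minimal-cardinality Lambda containing supp_w with
    ||P_Lambda r|| >= mu ||r||, by greedily adding the largest
    residual entries outside supp_w (bulk chasing)."""
    target2 = (mu * np.linalg.norm(r)) ** 2
    lam = set(supp_w)
    captured2 = float(np.sum(r[list(lam)] ** 2))
    outside = [i for i in np.argsort(np.abs(r))[::-1] if i not in lam]
    for i in outside:
        if captured2 >= target2:
            break
        lam.add(i)
        captured2 += r[i] ** 2
    return sorted(int(x) for x in lam)

r = np.array([0.1, -2.0, 0.05, 1.0, -0.3])
print(grow(r, supp_w=[0], mu=0.8))             # [0, 1]
```

Adding entries in order of decreasing modulus maximizes the captured norm per added index, which is why the greedy set has minimal cardinality among all Λ ⊃ supp w satisfying the bulk criterion.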

SLIDE 32

Complexity

Theorem [Cohen, Dahmen, DeVore ’01]: let K < ∞ be suitably chosen. GALSOLVE[ε] → w terminates with ‖g − Aw‖_ℓ2 ≤ ε,

#supp w ≲ ε^{−1/s} |u|^{1/s}_{A^s},  flops and memory the same expression

SLIDE 33

Optimal expansion

Lemma [Gantumur, Harbrecht, Stevenson ’05]: let µ ∈ (0, κ(A)^{−1/2}) and w ∈ ℓ2. Then the smallest set Λ ⊃ supp w with ‖P_Λ(g − Aw)‖ ≥ µ‖g − Aw‖ satisfies

#(Λ \ supp w) ≲ ‖g − Aw‖^{−1/s} |u|^{1/s}_{A^s}

SLIDE 34

Optimal Complexity without Coarsening

Theorem [Gantumur, Harbrecht, Stevenson ’05]: let K = ∞ (no coarsening). GALSOLVE[ε] → w terminates with ‖g − Aw‖_ℓ2 ≤ ε,

#supp w ≲ ε^{−1/s} |u|^{1/s}_{A^s},  flops and memory the same expression

SLIDE 35

Summary

There exist asymptotically optimal fully discrete adaptive wavelet algorithms for solving linear operator equations. There exist adaptive Galerkin methods without coarsening of the intermediate iterands.

SLIDE 36

References

  • A. Cohen, W. Dahmen, and R. DeVore. Adaptive wavelet methods II - Beyond the elliptic case. Found. Comput. Math., 2(3):203–245, 2002.
  • R.P. Stevenson. On the compressibility of operators in wavelet coordinates. SIAM J. Math. Anal., 35(5):1110–1132, 2004.
  • T. Gantumur and R.P. Stevenson. Computation of differential operators in wavelet coordinates. Technical Report 1306, Utrecht University, August 2004. To appear in Math. Comp.
  • T. Gantumur and R.P. Stevenson. Computation of singular integral operators in wavelet coordinates. Technical Report 1321, Utrecht University, January 2005. Submitted.
  • T. Gantumur, H. Harbrecht, and R.P. Stevenson. An optimal adaptive wavelet method without coarsening. Technical Report 1325, Utrecht University, March 2005. Submitted.
