Adaptive wavelet methods: Quantitative improvements and extensions

Rob Stevenson
Korteweg-de Vries Institute for Mathematics, University of Amsterdam
Collaborators: Christoph Schwab (ETH Zürich), Nabi Chegini (Univ. of Tafresh, Iran)

Contents

  • Adaptive wavelet methods for solving well-posed operator equations with symmetric, coercive Fréchet derivatives
  • An efficient approximate residual evaluation for 1st order systems
  • Adaptive wavelet methods for solving general well-posed operator equations: nonlinear least squares
  • Time evolution problems: simultaneous space-time variational formulations of parabolic problems and the (N)SE

Well-posed op. eqs.

For $X$ a (real) separable Hilbert space, let
  • $F : X \supset \mathrm{dom}(F) \to X'$,
  • $F$ continuously Fréchet differentiable in a neighbourhood of a solution $u$ of $F(u) = 0$,
  • $DF(u) \in \mathrm{Lis}(X, X')$, $DF(u) = DF(u)' > 0$,
(so the linearized equation is SPD).

Ex.
  • $\Omega \subset \mathbb{R}^d$, $d \le 3$, $X = H^1_0(\Omega)$, $F(u)(v) = \int_\Omega \nabla u \cdot \nabla v + u^3 v - fv \,dx$;
  • $F(u)(v) = \int_{\partial\Omega} \Big( \frac{1}{4\pi} \int_{\partial\Omega} \frac{(u(y)-u(x))(v(y)-v(x))}{|x-y|^3} \,dy - v(x)f(x) \Big)\,dx$, with $\Omega \subset \mathbb{R}^3$, $X = H^{1/2}(\partial\Omega)/\mathbb{R}$ (hypersingular boundary integral equation).

Reformulation as a countable set of coupled scalar eqs

Let $\Psi = \{\psi_\lambda : \lambda \in \nabla\}$ be a Riesz basis for $X$, i.e., the synthesis operator
$\mathcal{F} : \mathbf{c} \mapsto \mathbf{c}^\top \Psi := \sum_{\lambda\in\nabla} c_\lambda \psi_\lambda \in \mathrm{Lis}(\ell_2(\nabla), X)$,
and so its adjoint, the analysis operator,
$\mathcal{F}' : g \mapsto g(\Psi) := [g(\psi_\lambda)]_{\lambda\in\nabla} \in \mathrm{Lis}(X', \ell_2(\nabla))$.
Then with $\mathbf{F} := \mathcal{F}' \circ F \circ \mathcal{F} : \ell_2(\nabla) \supset \mathrm{dom}(\mathbf{F}) \to \ell_2(\nabla)$, we obtain the equivalent formulation $\mathbf{F}(\mathbf{u}) = 0$, where $\mathbf{u} := \mathcal{F}^{-1}u$.
The norm on $\ell_2(\nabla)$ is denoted by $\|\cdot\|$; note that $\|\mathbf{u} - \mathbf{w}\| \eqsim \|u - \mathcal{F}\mathbf{w}\|_X$.

Adaptive wavelet Galerkin method

In its original form introduced by [Cohen, Dahmen, DeVore '01, '02].

Alg (awgm). % Let $U \subset \ell_2(\nabla)$ be a neighbourhood of $\mathbf{u}$, $\mu \in (0,1]$, $\Lambda_0 \subset \nabla$ finite.
for $i = 0, 1, \ldots$ do
  solve $\mathbf{u}_i \in U$ with $\mathrm{supp}\,\mathbf{u}_i \subseteq \Lambda_i$ s.t. $\mathbf{F}(\mathbf{u}_i)|_{\Lambda_i} = 0$
  determine a smallest $\Lambda_{i+1} \supset \Lambda_i$ s.t. $\|\mathbf{F}(\mathbf{u}_i)|_{\Lambda_{i+1}}\| \ge \mu \|\mathbf{F}(\mathbf{u}_i)\|$
endfor

Thm (convergence). $\exists \alpha < 1$ s.t. when $U$ and $\inf_{\mathbf{v}\in\ell_2(\Lambda_0)} \|\mathbf{u} - \mathbf{v}\|$ are sufficiently small, $\|\mathbf{u} - \mathbf{u}_i\| \lesssim \alpha^i \|\mathbf{u} - \mathbf{u}_0\|$.

For affine $\mathbf{F}$, use $|||\mathbf{u} - \mathbf{u}_{i+1}|||^2 = |||\mathbf{u} - \mathbf{u}_i|||^2 - |||\mathbf{u}_{i+1} - \mathbf{u}_i|||^2$, and the saturation $|||\mathbf{u}_{i+1} - \mathbf{u}_i||| \gtrsim |||\mathbf{u} - \mathbf{u}_i|||$ obtained by 'bulk chasing'. A perturbation argument handles the non-affine case.

Def (approx. class). For $s > 0$, $\mathcal{A}^s := \big\{\mathbf{u} \in \ell_2(\nabla) : \|\mathbf{u}\|_{\mathcal{A}^s} := \sup_{N\in\mathbb{N}} N^s \inf_{\{\mathbf{w} : \#\mathrm{supp}\,\mathbf{w} \le N\}} \|\mathbf{u} - \mathbf{w}\| < \infty\big\}$.

Thm (optimal rate). If $\mu$ is sufficiently small, then whenever $\mathbf{u} \in \mathcal{A}^s$, $(\#\mathrm{supp}\,\mathbf{u}_i)^s \|\mathbf{u} - \mathbf{u}_i\| \lesssim 1$.
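The loop above can be sketched concretely for a linear SPD system $\mathbf F(\mathbf u) = \mathbf A\mathbf u - \mathbf f$ in wavelet coordinates. This is a minimal stand-in, assuming a small dense matrix and exact residual evaluation; a genuine implementation works with approximate residuals:

```python
import numpy as np

def awgm(A, f, mu=0.5, tol=1e-8, max_iter=100):
    """Adaptive Galerkin sketch for the SPD system A u = f."""
    n = len(f)
    Lam = {0}                               # initial index set Lambda_0
    u = np.zeros(n)
    for _ in range(max_iter):
        idx = sorted(Lam)
        # Galerkin solve restricted to the current index set
        u = np.zeros(n)
        u[idx] = np.linalg.solve(A[np.ix_(idx, idx)], f[idx])
        r = f - A @ u                       # residual F(u_i)
        if np.linalg.norm(r) < tol:
            break
        # bulk chasing: grow Lambda by the largest residual entries
        # until a mu-fraction of the residual norm is captured
        acc, target = 0.0, mu**2 * (r @ r)
        for j in np.argsort(-np.abs(r)):
            Lam.add(int(j))
            acc += r[j]**2
            if acc >= target:
                break
    return u, Lam
```

Since the residual vanishes on the current index set, summing over newly added indices suffices to check the bulk criterion.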

Practical awgm

Thm. With approximate evaluation of $\mathbf{F}(\mathbf{u}_i)$ within a relative tolerance $\delta > 0$ (sufficiently small but fixed), awgm still converges with the optimal rate.

Thm (optimal comput. compl.). If the cost of computing such approximate residuals is $\mathcal{O}(\|\mathbf{u} - \mathbf{u}_i\|^{-1/s} + \#\mathrm{supp}\,\mathbf{u}_i)$, then $(\text{cost to compute } \mathbf{u}_i)^s\,\|\mathbf{u} - \mathbf{u}_i\| \lesssim 1$.

This cost condition has been verified for a large class of linear PDEs and singular integral equations, using compactly supported wavelets that are sufficiently smooth and have sufficiently many vanishing moments. Such bases for the common Sobolev spaces are available on general polygonal domains and consist of piecewise polynomial wavelets. A wavelet $\psi_\lambda$ on 'level' $|\lambda| \in \mathbb{N}$ has $\mathrm{diam}\,\mathrm{supp}\,\psi_\lambda \eqsim 2^{-|\lambda|}$.
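As a small numerical illustration of the class $\mathcal A^s$ (an aside, not from the talk): for a coefficient sequence decaying like $|c_k| = k^{-(s+1/2)}$, the best $N$-term error decays like $N^{-s}$. A sketch with illustrative data for $s = 1$:

```python
import numpy as np

def best_n_term_error(c, N):
    """ell_2 norm of all but the N largest (in modulus) coefficients."""
    mags = np.sort(np.abs(c))[::-1]
    return np.sqrt(np.sum(mags[N:] ** 2))

# |c_k| = k^{-3/2} puts c in A^s for s = 1: the error ratio over one
# decade in N should then be close to 10^s = 10.
c = np.arange(1, 10**5 + 1, dtype=float) ** -1.5
e10, e100 = best_n_term_error(c, 10), best_n_term_error(c, 100)
```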

Usual residual evaluation ([CDD01])

For $F(u) = Au - f$, approximate both $\mathcal F' A \mathcal F \mathbf u_i$ and $\mathcal F' f$ separately within absolute tolerance $\tfrac12 \delta \|\mathbf u - \mathbf u_i\|$.

Ex (Poisson). The terms read as $\big[\int_\Omega \nabla\Psi \cdot \nabla\Psi^\top\big]\mathbf u_i$ and $\int_\Omega f\Psi$. Assuming $\tilde d$ vanishing moments, the rhs approximation is based on
$\big|\int f\psi_\lambda\big| \le \|\psi_\lambda\|_{L_2(\Omega)} \inf_{p \in \mathcal P_{\tilde d - 1}} \|f - p\|_{L_2(\mathrm{supp}\,\psi_\lambda)}.$
A similar argument shows that the stiffness matrix is 'near-sparse'. Restricting it to a fixed 'band' gives the right complexity, but not sufficient accuracy. $\mathbf u \in \mathcal A^s$ means that this vector is 'near-sparse', and one has $\|\mathbf u_i\|_{\mathcal A^s} \lesssim \|\mathbf u\|_{\mathcal A^s}$. Approximating the $j$th column of the stiffness matrix with accuracy proportional to $|(\mathbf u_i)_j|$ realizes the cost condition, but is quantitatively expensive.

An alternative residual evaluation

Ex. $-u'' + u^3 = f$ on $(0,1)$, $u(0) = u(1) = 0$; piecewise polynomial wavelet basis $\Psi$ for $H^1_0(0,1)$.

$\mathbf F(\mathbf u_i) = \Big[\int_0^1 u_i'\psi_\lambda' + (u_i^3 - f)\psi_\lambda \,dx\Big]_{\lambda\in\nabla} = \Big[\int_0^1 (-u_i'' + u_i^3 - f)\,\psi_\lambda \,dx\Big]_{\lambda\in\nabla}$,

(where $u_i := \mathcal F\mathbf u_i$), assuming $\Psi \subset H^2(0,1)$. For simplicity, let $f$ be a polynomial (ignore data oscillation).

Tree constraint on the sets $\Lambda_i \subset \nabla$: if $\lambda \in \Lambda_i$ and $|\lambda| > 0$, then $\mathrm{supp}\,\psi_\lambda \subseteq \cup\{\mathrm{supp}\,\psi_\mu : |\mu| = |\lambda| - 1,\ \mu \in \Lambda_i\}$. This affects $\mathcal A^s$, but only slightly.

Then $u_i$, and thus $-u_i'' + u_i^3 - f$, is piecewise polynomial w.r.t. a partition $\mathcal T_i$, and its representation $\mathbf c$ w.r.t. a single-scale basis $\Phi_i$ can be found in linear complexity (wavelet-to-single-scale transform).

Dropping from $\mathbf F(\mathbf u_i)$ all $\lambda$ whose levels exceed the level of $\mathcal T_i|_{\mathrm{supp}\,\psi_\lambda}$ by a fixed constant $k = k(\delta)$ gives a relative error $\delta$. With $\Lambda_{i,\delta}$ the remaining set of indices, let $\Phi_{i,\delta}$ be a single-scale basis for $\mathrm{span}\,\Psi|_{\Lambda_{i,\delta}}$. Compute $\mathbf F(\mathbf u_i)|_{\Lambda_{i,\delta}}$ by computing $\big[\int_0^1 \Phi_i \Phi_{i,\delta}\,dx\big]\mathbf c$, followed by a single-scale-to-wavelet transform. Total cost $\lesssim \#\Lambda_{i,\delta} \lesssim \#\Lambda_i$.

The cost condition is satisfied, and this evaluation is quantitatively more efficient. It generalizes to nonlinear problems, but imposes an inconvenient smoothness condition on the wavelets (in $d > 1$ dimensions).
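The tree constraint can be made concrete for dyadic wavelets on $(0,1)$: writing an index as (level $\ell$, position $k$), a natural parent is $(\ell-1, \lfloor k/2\rfloor)$, and a set satisfies the constraint when it is closed under taking parents. The indexing convention here is an illustrative assumption, not the talk's:

```python
def complete_to_tree(indices):
    """Smallest superset of `indices` closed under taking parents,
    where the parent of (level l, position k), l > 0, is (l-1, k // 2)."""
    tree = set(indices)
    stack = list(indices)
    while stack:
        l, k = stack.pop()
        if l > 0:
            parent = (l - 1, k // 2)
            if parent not in tree:
                tree.add(parent)
                stack.append(parent)
    return tree
```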

Nonlinear least squares

To relax $\Psi \subset H^2$ ($C^1$ wavelets) to at most continuity, write the 2nd order PDE as a 1st order system.

Recall the awgm setting:
  • $F : X \supset \mathrm{dom}(F) \to X'$,
  • $F$ continuously Fréchet differentiable in a neighbourhood of a solution $u$ of $F(u) = 0$,
  • $DF(u) \in \mathrm{Lis}(X, X')$, $DF(u) = DF(u)' > 0$.

Now let
  • $G : X \supset \mathrm{dom}(G) \to Y'$,
  • $G$ twice continuously Fréchet differentiable in a neighbourhood of a solution $u$ of $G(u) = 0$,
  • $DG(u) \in \mathcal L(X, Y')$ an isomorphism onto its range, i.e., $\|DG(u)(v)\|_{Y'} \eqsim \|v\|_X$.

Necessarily $u = \mathrm{argmin}_{v\in\mathrm{dom}(G)} \tfrac12\|G(v)\|^2_{Y'}$, and so (Euler–Lagrange) $F(u) = 0$, where $F : X \supset \mathrm{dom}(F) \to X'$, $F(u)(v) := \langle DG(u)(v), G(u)\rangle_{Y'}$. Having a Riesz basis $\Psi_X$ for $X$, awgm applies.

Nonlinear least squares

A problem arises when $\langle\cdot,\cdot\rangle_{Y'}$ is not evaluable: equip $Y$ with a Riesz basis $\Psi_Y$, and $Y'$ with the equivalent norm $f \mapsto \|\mathcal F_Y' f\|$ $(= \|f(\Psi_Y)\|)$. Then $F(u)(v) := \langle DG(u)(v)(\Psi_Y)^\top, G(u)(\Psi_Y)\rangle$, and so $\mathbf F(\cdot)$ $(= \mathcal F_X' F \mathcal F_X)$ $= D\mathbf G(\cdot)^\top \mathbf G(\cdot)$, where $\mathbf G = \mathcal F_Y' G \mathcal F_X$.

Rem. If, however, $Y = Y_1 \times \cdots \times Y_N$, then only those $Y_i$ with a non-evaluable inner product have to be equipped with Riesz bases. This setting is not covered by the approach of first writing the system in wavelet coordinates, and then forming the (nonlinear) normal equations.

First order system least squares

Ex. $-\Delta u = f$ on $\Omega$, $u = 0$ on $\partial\Omega$.
$G : (u, \vec p) \mapsto (\mathrm{div}\,\vec p + f,\ \vec p - \mathrm{grad}\,u) : \underbrace{H^1_0(\Omega) \times H(\mathrm{div};\Omega)}_{X} \to \underbrace{L_2(\Omega) \times L_2(\Omega)^d}_{Y'}$.

$DG(u, \vec p) = DG \in \mathrm{Lis}(X, Y')$. Least-squares minimization and Euler–Lagrange give $F(u, \vec p) = 0$, where $F : X \to X'$ reads $F(u, \vec p)(v, \vec q) = \langle DG(u, \vec p)(v, \vec q), G(u, \vec p)\rangle_{Y'}$.

awgm: equip $H^1_0(\Omega)$ and $H(\mathrm{div};\Omega)$ (thus $X$) with Riesz bases $\Psi_{H^1_0}$ and $\Psi_{\mathrm{div}}$. Then
$\mathbf F(\mathbf u_i, \mathbf p_i) = \begin{bmatrix} \langle \mathrm{grad}\,\Psi_{H^1_0},\ \mathrm{grad}\,u_i - \vec p_i\rangle_{L_2(\Omega)^d} \\ \langle \mathrm{div}\,\Psi_{\mathrm{div}},\ \mathrm{div}\,\vec p_i + f\rangle_{L_2(\Omega)} + \langle \Psi_{\mathrm{div}},\ \vec p_i - \mathrm{grad}\,u_i\rangle_{L_2(\Omega)^d} \end{bmatrix}.$

Pros:
  • the efficient 'alternative residual evaluation' applies without additional smoothness requirements on the wavelets;
  • lower order (nonlinear) terms can be added (as with any least squares formulation);
  • the least squares minimization takes place in $Y' = L_2(\Omega)^{d+1}$ (convenient).
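For intuition, the least-squares solve of a first order system can be mimicked in one dimension: $u' - p = 0$, $-p' = f$ on $(0,1)$, $u(0) = u(1) = 0$, discretized with simple finite differences. This is an illustrative stand-in for the wavelet discretization; the grid, quadrature, and helper name are assumptions:

```python
import numpy as np

def fosls_1d(f, n=100):
    """Least-squares solve of u' = p, -p' = f on (0,1), u(0)=u(1)=0."""
    h = 1.0 / n
    xm = (np.arange(n) + 0.5) * h            # cell midpoints
    nu = n + 1                               # unknowns: u_0..u_n, p_0..p_n
    A = np.zeros((2 * n + 2, 2 * nu))
    b = np.zeros(2 * n + 2)
    for j in range(n):
        # first residual, (u_{j+1}-u_j)/h - (p_j+p_{j+1})/2, at midpoint j
        A[j, j], A[j, j + 1] = -1 / h, 1 / h
        A[j, nu + j] = A[j, nu + j + 1] = -0.5
        # second residual, -(p_{j+1}-p_j)/h - f(x_{j+1/2})
        A[n + j, nu + j], A[n + j, nu + j + 1] = 1 / h, -1 / h
        b[n + j] = f(xm[j])
    A[2 * n, 0] = 1.0                        # boundary condition u(0) = 0
    A[2 * n + 1, n] = 1.0                    # boundary condition u(1) = 0
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return sol[:nu], sol[nu:]                # (u at nodes, p at nodes)
```

With $f(x) = \pi^2\sin(\pi x)$ the exact pair is $u = \sin(\pi x)$, $p = \pi\cos(\pi x)$, which the sketch reproduces to second order in $h$.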

First order system least squares: Numerics

L-shaped domain $\Omega \subset \mathbb{R}^2$. Bases for $H^1_0(\Omega)$ and $H(\mathrm{div};\Omega)$ consisting of continuous piecewise linears and lowest order Raviart–Thomas functions, respectively. $-\Delta u + N(u) = f$ on $\Omega$, $u = 0$ on $\partial\Omega$, where $N(u) = u^3$ or $N(u) = \sin u$; $f = 1$.

Figure 1: Norm of the residual vs. number of wavelets in log-log scale, for $N(u) = u^3$ (black, upper curve) and $N(u) = \sin u$ (blue, lower curve). The hypotenuse of the triangle has slope $-\tfrac12$.

Numerics

Figure 2: Centers of the supports of the wavelets in $H^1_0(\Omega)$ for the approximation of $u$ (left, 930 wavelets), and of the wavelets in $H(\mathrm{div};\Omega)$ for the approximation of $\vec p$ (right, 631 wavelets), produced by awgm after 39 iterations for $N(u) = u^3$.

Numerics

Figure 3: Approximate solutions for $N(u) = u^3$ (left) and $N(u) = \sin u$ (right), each a linear combination of approximately 200 wavelets. Note the difference in vertical scale between the two pictures.

Cons of the current first order system formulation:
  • it requires a wavelet basis for $H(\mathrm{div};\Omega)$, realized in two dimensions only;
  • for $-\Delta u + N(u) = f$, it needs $N : H^1_0(\Omega) \to L_2(\Omega)$ and $f \in L_2(\Omega)$;
  • it is not a 'canonical' approach.

First order system least squares (revisited)

Ex. $-\Delta u = f$ on $\Omega$, $u = 0$ on $\partial\Omega$.
$G : (u, \vec p) \mapsto (f - \mathrm{grad}'\,\vec p,\ \vec p - \mathrm{grad}\,u) : \underbrace{H^1_0(\Omega) \times L_2(\Omega)^d}_{X} \to \underbrace{H^{-1}(\Omega) \times L_2(\Omega)^d}_{Y'}$.
$DG(u, \vec p) = DG \in \mathrm{Lis}(X, Y')$. Least-squares minimization and Euler–Lagrange give $F(u, \vec p) = 0$, where $F : X \to X'$ reads $F(u, \vec p)(v, \vec q) = \langle DG(u, \vec p)(v, \vec q), G(u, \vec p)\rangle_{Y'}$.
Equip $H^1_0(\Omega)$ with a Riesz basis $\Psi_{H^1_0}$, and replace $\|\cdot\|_{H^{-1}(\Omega)}$ by $\|\mathcal F'_{H^1_0}\cdot\|$. Then

$F(\mathbf u_i, \mathbf p_i)(v, \vec q) = \langle \vec q, \mathrm{grad}\,\Psi_{H^1_0}\rangle_{L_2(\Omega)^d} \big[\langle \Psi_{H^1_0}, f\rangle_{L_2(\Omega)} - \langle \mathrm{grad}\,\Psi_{H^1_0}, \vec p_i\rangle_{L_2(\Omega)^d}\big] + \langle \vec q - \mathrm{grad}\,v,\ \vec p_i - \mathrm{grad}\,u_i\rangle_{L_2(\Omega)^d}$
$= \langle \vec q, \mathrm{grad}\,\Psi_{H^1_0}\rangle_{L_2(\Omega)^d} \langle \Psi_{H^1_0}, f + \mathrm{div}\,\vec p_i\rangle_{L_2(\Omega)} + \langle \vec q - \mathrm{grad}\,v,\ \vec p_i - \mathrm{grad}\,u_i\rangle_{L_2(\Omega)^d}$ if $\vec p_i \in H(\mathrm{div};\Omega)$.

First order system least squares (revisited)

awgm: equip $H^1_0(\Omega)$ and $L_2(\Omega)^d$ with Riesz bases $\Psi_{H^1_0}$ and $\Psi_{L_2^d}$, where $\Psi_{L_2^d} \subset H(\mathrm{div};\Omega)$. Then
$\mathbf F(\mathbf u_i, \mathbf p_i) = \begin{bmatrix} \langle \mathrm{grad}\,\Psi_{H^1_0},\ \mathrm{grad}\,u_i - \vec p_i\rangle_{L_2(\Omega)^d} \\ \langle \Psi_{L_2^d}, \mathrm{grad}\,\Psi_{H^1_0}\rangle_{L_2(\Omega)^d} \langle \Psi_{H^1_0}, \mathrm{div}\,\vec p_i + f\rangle + \langle \Psi_{L_2^d},\ \vec p_i - \mathrm{grad}\,u_i\rangle_{L_2(\Omega)^d} \end{bmatrix}.$

Pros:
  • the efficient 'alternative residual evaluation' applies under a mild condition;
  • wavelet bases are available in general settings (no basis for $H(\mathrm{div};\Omega)$ required);
  • lower order (nonlinear) terms $N : H^1_0(\Omega) \to H^{-1}(\Omega)$ can be added, and $f \in H^{-1}(\Omega)$ is allowed;
  • 'canonical' approach: well-posedness of the first order formulation follows from that of the second order formulation; it applies equally well to time evolution problems.

Parabolic problems

$\Omega \subset \mathbb{R}^d$, $I = (0, T)$.
  $\partial u/\partial t - \Delta u = g$ on $I \times \Omega$,
  $u = 0$ on $I \times \partial\Omega$,
  $u(0, \cdot) = 0$ on $\Omega$.
  • $-\Delta$ can be read as a semi-linear elliptic operator;
  • other (inhomogeneous) initial or boundary conditions are allowed.

Standard approach: approximate $\partial u/\partial t(t, \cdot)$ by, say, $(u(t,\cdot) - u(t-h,\cdot))/h$, and solve a sequence of elliptic problems for $0 < t_1 < t_2 < \cdots < t_M = T$:
  $-\Delta u(t_i, \cdot) + (t_i - t_{i-1})^{-1} u(t_i, \cdot) = (t_i - t_{i-1})^{-1} u(t_{i-1}, \cdot) + g(t_i, \cdot)$ on $\Omega$,
  $u(t_i, \cdot) = 0$ on $\partial\Omega$.

  • How should 'grid points' be distributed optimally over space and time?
  • Even if one can, the approximation is not effective for singularities that are local in space and time.
  • Inherently sequential.
  • When the whole time evolution is needed, as with problems of optimal control or in visualizations, a huge amount of storage is required.
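The standard sequential approach above can be sketched for the 1-D heat equation with an implicit Euler step; each time step is exactly such an elliptic solve. Grid sizes and the function name are illustrative choices:

```python
import numpy as np

def heat_implicit_euler(g, T=1.0, M=200, n=50):
    """Implicit Euler time stepping for u_t - u_xx = g on (0,1),
    u(0,.) = 0, homogeneous Dirichlet boundary conditions."""
    h, dt = 1.0 / (n + 1), T / M
    x = np.linspace(h, 1.0 - h, n)          # interior grid points
    # 1-D Dirichlet Laplacian via second differences
    L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    A = np.eye(n) + dt * L                  # one elliptic problem per step
    u = np.zeros(n)                         # initial condition u(0,.) = 0
    for i in range(1, M + 1):
        u = np.linalg.solve(A, u + dt * g(i * dt, x))
    return x, u
```

For $g = 1$ the solution approaches the steady state $x(1-x)/2$, with maximum $1/8$, illustrating that the whole evolution must be stepped through (and stored, if needed later).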

Space-time variational formulation

$(Gu)(v) := \int_I \int_\Omega \frac{\partial u}{\partial t}\,v + \mathrm{grad}\,u \cdot \mathrm{grad}\,v \,dx\,dt - \int_I \int_\Omega g\,v \,dx\,dt = 0.$

$DG(u) = DG \in \mathrm{Lis}(X, Y')$ with $X := L_2(I; H^1_0(\Omega)) \cap H^1_{0,\{0\}}(I; H^{-1}(\Omega))$ and $Y := L_2(I; H^1_0(\Omega))$.

After selecting Riesz bases $\Psi_X$, $\Psi_Y$ for $X$, $Y$, apply awgm to $D\mathbf G^\top \mathbf G(\mathbf u) = 0$ (even better: first write it as a well-posed first order system).

Tensor product bases

Let $\Theta_X$, $\Theta_Y$ and $\Sigma_X$, $\Sigma_Y$ be collections of temporal and spatial functions such that, normalized in the corresponding norms,
  $\Theta_X$ is a Riesz basis for $L_2(I)$ and for $H^1_{0,\{0\}}(I)$,
  $\Theta_Y$ ″ $L_2(I)$,
  $\Sigma_X$ ″ $H^1_0(\Omega)$ and for $H^{-1}(\Omega)$,
  $\Sigma_Y$ ″ $H^1_0(\Omega)$.
Then, properly normalized, $\Theta_X \otimes \Sigma_X$ is a Riesz basis for $L_2(I; H^1_0(\Omega))$ and for $H^1_{0,\{0\}}(I; H^{-1}(\Omega))$, and so for $X$, while $\Theta_Y \otimes \Sigma_Y$ is one for $Y$.
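In coordinates, the tensor-product construction is a Kronecker product: if finite sections of the temporal and spatial collections have Gram matrices $G_t$ and $G_x$, the Gram matrix of the tensor products is $G_t \otimes G_x$, so Riesz bounds multiply. A toy check with illustrative matrices:

```python
import numpy as np

G_t = np.array([[2.0, 0.5], [0.5, 1.0]])   # Gram matrix, temporal collection
G_x = np.array([[1.0, 0.2], [0.2, 3.0]])   # Gram matrix, spatial collection
G = np.kron(G_t, G_x)                      # Gram matrix of tensor products

# Riesz bounds are the extreme eigenvalues of the Gram matrix;
# under the Kronecker product they multiply.
lo = np.linalg.eigvalsh(G)[0]
hi = np.linalg.eigvalsh(G)[-1]
```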

Best possible rates

If $\Theta_X$ and $\Sigma_X$ are of orders $p_t$ and $p_x$, then the best possible approximation rate in $X$ is $\min(p_t - 1, \frac{p_x - 1}{d})$.

This rate requires boundedness of certain mixed derivatives of the solution in $L_p$ for some $p < 2$ ($p = 2$ would be required for sparse grids). The approximation classes can be characterized as tensor products of Besov spaces.

For $p_t - 1 \ge \frac{p_x - 1}{d}$, the best rate equals the best possible approximation rate in $H^1_0(\Omega)$ using $\Sigma_X$. So thanks to the tensor product basis, there is no penalty for the additional time dimension.

First test on an ODE

$\frac{du(t)}{dt} + \nu u(t) = g(t)$ $(t \in I)$, $u(0) = u_0$,
$(Gu)(v) := \int_I -u(t)\frac{dv(t)}{dt} + \nu u(t) v(t)\,dt - \int_I g(t) v(t)\,dt - u_0 v(0) = 0.$

Prop. With $X := L_2(I)$ and $Y(\nu) := H^1_{0,\{T\}}(I)$ equipped with $\|\cdot\|^2_{Y(\nu)} := \nu^2 \|\cdot\|^2_{L_2(I)} + |\cdot|^2_{H^1(I)}$, the operator $DG \in \mathrm{Lis}(X, Y(\nu)')$ with $\|DG\| \le \sqrt2$ and $\|DG^{-1}\| \le \sqrt2$.

Numerical results for $\nu = 1$, $g = 1$ on $(0, \tfrac13)$, $g = 2$ on $(\tfrac13, 1)$.
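For the test data above ($\nu = 1$, piecewise constant $g$), the exact solution is available in closed form, which is convenient as a reference when measuring convergence rates; a sketch:

```python
import numpy as np

def u_exact(t, u0=0.0):
    """Exact solution of u' + u = g, u(0) = u0, with g = 1 on (0, 1/3)
    and g = 2 on (1/3, 1)."""
    t = np.asarray(t, dtype=float)
    u = 1.0 + (u0 - 1.0) * np.exp(-t)                  # branch for t <= 1/3
    u13 = 1.0 + (u0 - 1.0) * np.exp(-1.0 / 3.0)        # value at the kink
    return np.where(t > 1.0 / 3.0,
                    2.0 + (u13 - 2.0) * np.exp(-(t - 1.0 / 3.0)), u)
```

The solution is continuous but has a kink in its derivative at $t = \tfrac13$, which is exactly the kind of local feature the adaptive method resolves.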

ODE

Uniform, non-adaptive refinements, i.e., collecting all wavelets up to some level: rate = 1.5.

ODE

Adaptive refinements, i.e., awgm: rate = 3.

Some approximations

Successive awgm approximations with 1, 2, 3, 4, and 5 wavelets (at 5 wavelets all scaling functions are in), and with 15 wavelets.

ODE

Figure 4: For $u_0 = 1$ and $\#\mathbf u_i = 202$, the non-zero coefficients of $\mathbf u_i$.

Numerical results heat eqn

Figure 5: Heat equation in $d = 1$ spatial dimension, right-hand side $g = 1$ and initial condition $u_0 = 0$. $\|\mathbf G(\mathbf u_i)\|/\|\mathbf G(0)\|$ vs. $N = \#\mathrm{supp}\,\mathbf u_i$ for the awgm (solid), the full-grid method (dashed) and the sparse-grid method (dash-dotted). The dotted line is a multiple of $N^{-5}(\log N)^{5\frac12}$.

Numerical results heat eqn

Figure 6: Heat equation in $d = 1$ spatial dimension, right-hand side $g = 1$ and initial condition $u_0 = 1$. $\|\mathbf G(\mathbf u_i)\|/\|\mathbf G(0)\|$ vs. $N = \#\mathrm{supp}\,\mathbf u_i$ for the awgm (solid). The dotted line is a multiple of $N^{-5}(\log N)^{5\frac12}$.

Numerical results heat eqn

Figure 7: Heat equation in $d = 1$ spatial dimension and right-hand side $g = 1$. Centers of the supports of the wavelets selected by the awgm. Left: $u_0 = 0$ and $\#\mathbf u_i = 13420$. Right: $u_0 = 1$ and $\#\mathbf u_i = 13917$. A zoom near $t = 0$ is given in the bottom row.

(N)SE

  $\partial u/\partial t - \nu\Delta_x u + \nabla_x p = f$ on $I \times \Omega$,
  $\mathrm{div}_x\,u = g$ on $I \times \Omega$,
  $u = 0$ on $I \times \partial\Omega$,
  $u(0, \cdot) = 0$ on $\Omega$,
  $\int_\Omega p\,dx = 0$.   (1)

This can be reduced to a parabolic problem for the velocities, but then the arising spaces are spaces of divergence-free functions. We enforce the incompressibility constraint via a Lagrange multiplier, which gives a saddle point form.

Space-time variational form: with
  $c(u, v) := \int_I \int_\Omega \frac{\partial u}{\partial t} \cdot v + \nu \nabla_x u : \nabla_x v \,dx\,dt$,
  $d(p, v) := -\int_I \int_\Omega p\ \mathrm{div}\,v \,dx\,dt$,
  $f(v) := \int_I \int_\Omega f \cdot v \,dx\,dt$,  $g(q) := \int_I \int_\Omega g\,q \,dx\,dt$,   (2)
find $(u, p)$ in some suitable space such that

(N)SE

$G(u, p)(v, q) := c(u, v) + d(p, v) + d(q, u) - f(v) + g(q) = 0$ for all $(v, q)$ from another suitable space.

For $\delta \in \{0, T\}$,
  $\breve H^s_{0,\{\delta\}}(I) := [L_2(I), H^1_{0,\{\delta\}}(I)]_s$,
  $\hat H^s(\Omega) := [L_2(\Omega), H^2(\Omega) \cap H^1_0(\Omega)]_{s/2}$,
  $\bar H^s(\Omega) := [(H^1(\Omega)/\mathbb{R})', H^1(\Omega)/\mathbb{R}]_{(s+1)/2}$,
  $U^s_\delta := L_2(I; \hat H^{2s}(\Omega)^n) \cap \breve H^s_{0,\{\delta\}}(I; L_2(\Omega)^n)$,
  $P^s_\delta := \big(L_2(I; \bar H^{2s-1}(\Omega)') \cap \breve H^{1-s}_{0,\{\delta\}}(I; \bar H^1(\Omega)')\big)'$.

Thm. For $\Omega \subset \mathbb{R}^d$ a bounded Lipschitz domain and $s \in (\tfrac14, \tfrac34)$, it holds that $DG \in \mathrm{Lis}(U^s_0 \times P^s_T,\ (U^{1-s}_T \times P^{1-s})')$.

All arising spaces can be 'conveniently' equipped with wavelet Riesz bases, and awgm applies (preferably to a reformulation as a first order system). This generalizes to the NSE for $d = 2$; for $d = 3$ one needs '$s$' $> \tfrac34$, which requires smoother or convex domains, and $C^1$ wavelets.

Proof of Thm.

Recall the saddle-point structure $DG(u, p)(v, q) := c(u, v) + d(p, v) + d(q, u)$. Boundedness is easy. The right-inverse $\mathrm{div}^+$ of $\mathrm{div}$ constructed in [Bog79] satisfies both $\mathrm{div}^+ \in \mathcal L(\bar H^{-1}(\Omega), L_2(\Omega)^n)$ and, for $s \in [0, \tfrac34)$, $\mathrm{div}^+ \in \mathcal L(\bar H^{2s-1}(\Omega), \hat H^{2s}(\Omega)^n)$, and so $I \otimes \mathrm{div}^+ \in \mathcal L((P^{1-s})', U^s_0)$.

This implies that for $s \in [0, \tfrac34)$, $I \otimes \mathrm{div} \in \mathcal L(U^s_0, (P^{1-s})')$ is surjective, i.e.,

$\inf_{0 \ne q \in P^{1-s}} \sup_{0 \ne u \in U^s_0} \frac{d(u, q)}{\|u\|_{U^s_0}\,\|q\|_{P^{1-s}}} > 0,$

and analogously, for $s \in (\tfrac14, 1]$,

$\inf_{0 \ne p \in P^s_T} \sup_{0 \ne v \in U^{1-s}_T} \frac{d(v, p)}{\|v\|_{U^{1-s}_T}\,\|p\|_{P^s_T}} > 0.$

It remains to show that $(Cu)(v) := c(u, v)$ is boundedly invertible between $\{u \in U^s_0 : d(P^{1-s}, u) = 0\}$ and $\big(\{v \in U^{1-s}_T : d(P^s_T, v) = 0\}\big)'$.

Proof of Thm. (continued)

Again the existence of $\mathrm{div}^+$ as constructed in [Bog79] shows that for $(\varsigma, \delta) \in \{(s, 0), (1-s, T)\}$,

$\{w \in U^\varsigma_\delta : d(P^{1-\varsigma}_\delta, w) = 0\} \simeq L_2(I; \hat H^{2\varsigma}(\mathrm{div}\,0; \Omega)) \cap \breve H^\varsigma_{0,\{\delta\}}(I; \hat H^0(\mathrm{div}\,0; \Omega)) =: U^\varsigma_\delta(\mathrm{div}\,0),$

i.e., the order of interpolation and of taking divergence-free parts can be reversed.

With $(Au)(v) := \nu \int_\Omega \nabla u : \nabla v \,dx$ on $\hat H^1(\mathrm{div}\,0; \Omega) \times \hat H^1(\mathrm{div}\,0; \Omega)$, elliptic regularity shows that for $\varsigma \in [0, \tfrac34)$, $\hat H^{2\varsigma}(\mathrm{div}\,0; \Omega) \simeq [\hat H^0(\mathrm{div}\,0; \Omega), D(A)]_\varsigma$, and so

$U^\varsigma_\delta(\mathrm{div}\,0) \simeq L_2(I; [\hat H^0(\mathrm{div}\,0; \Omega), D(A)]_\varsigma) \cap \breve H^\varsigma_{0,\{\delta\}}(I; \hat H^0(\mathrm{div}\,0; \Omega)) =: \tilde U^\varsigma_\delta(\mathrm{div}\,0).$

Finally, $C \in \mathrm{Lis}(\tilde U^\varsigma_0(\mathrm{div}\,0), (\tilde U^{1-\varsigma}_T(\mathrm{div}\,0))')$ for $\varsigma \in [0, 1]$ follows by interpolation from this result for $\varsigma \in \{0, 1\}$, which results are known as maximal regularity of evolution equations.

Summary

  • Adaptive wavelet methods solve well-posed operator equations at optimal rates, in linear computational complexity.
  • Quantitative improvements are obtained by writing the problem as a first order system.
  • Promising applications for solving time evolution problems.

Thanks for your attention/patience!

References

[Bog79] M. E. Bogovskiĭ. Solution of the first boundary value problem for an equation of continuity of an incompressible medium. Dokl. Akad. Nauk SSSR, 248(5):1037–1040, 1979.
[CDD01] A. Cohen, W. Dahmen, and R. DeVore. Adaptive wavelet methods for elliptic operator equations – Convergence rates. Math. Comp., 70:27–75, 2001.
[CDD03] A. Cohen, W. Dahmen, and R. DeVore. Sparse evaluation of compositions of functions using multiscale expansions. SIAM J. Math. Anal., 35(2):279–303, 2003.
[CS15] N.G. Chegini and R.P. Stevenson. An adaptive wavelet method for semi-linear first order system least squares. Comput. Math. Appl., August 2015. DOI: 10.1515/cmam-2015-0023.
[SS09] Ch. Schwab and R.P. Stevenson. A space-time adaptive wavelet method for parabolic evolution problems. Math. Comp., 78:1293–1318, 2009.
[SS15] Ch. Schwab and R.P. Stevenson. Fractional space-time variational formulations of (Navier–)Stokes equations. Technical report, December 2015.
[Ste14] R.P. Stevenson. Adaptive wavelet methods for linear and nonlinear least-squares problems. Found. Comput. Math., 14(2):237–283, 2014.