
Multilevel discrete least squares polynomial approximation

Raúl Tempone

Alexander von Humboldt Professor, RWTH-Aachen, KAUST Joint work with: A.-L. Haji-Ali (Heriot Watt) F. Nobile (EPFL), S. Wolfers (ex KAUST, now G-Research)

DCSE Fall School on ROM and UQ, November 4–8, 2019


Contents

  • 1. Problem framework
  • 2. Weighted discrete least squares approximation
  • 3. Multilevel least squares approximation
  • 4. Application to random elliptic PDEs
  • 5. Conclusions

PDEs with random parameters

Consider a differential problem L(u; y) = G (*) depending on a set of random parameters y = (y₁, . . . , y_N) ∈ Γ ⊂ R^N with joint probability measure µ on Γ. We assume that (*) has a unique solution u(y) in a suitable function space V, and we focus on a quantity of interest Q : V → R.

Goal: approximate the whole response function y ↦ f(y) := Q(u(y)) : Γ → R by multivariate polynomials, and possibly derive approximate statistics such as E[f], Var[f], etc.



Polynomial approximation on downward closed sets

Assume f ∈ L²_µ(Γ). We seek an approximation of f in a finite-dimensional polynomial subspace
\[
V_\Lambda = \operatorname{span}\Big\{\prod_{n=1}^{N} y_n^{p_n} \,:\, p = (p_1,\dots,p_N)\in\Lambda\Big\},
\]
with Λ ⊂ N^N a downward closed index set.

[Figure: example of a downward closed index set in the (p₁, p₂) plane.]

  • Definition. An index set Λ is downward closed if p ∈ Λ and q ≤ p ⟹ q ∈ Λ.


Contents

  • 1. Problem framework
  • 2. Weighted discrete least squares approximation
  • 3. Multilevel least squares approximation
  • 4. Application to random elliptic PDEs
  • 5. Conclusions

Weighted discrete least squares approximation

  • 1. Sample independently M points y⁽¹⁾, . . . , y⁽ᴹ⁾ ∈ Γ^M from a distribution ν ≪ µ, with density ρ = dν/dµ;
  • 2. define the weight function w(y) = 1/ρ(y);
  • 3. find the weighted discrete least squares approximation on V_Λ:
\[
\hat\Pi_M f = \operatorname*{argmin}_{v\in V_\Lambda}\|f - v\|_M,
\qquad
\|g\|_M^2 = \frac{1}{M}\sum_{j=1}^{M} w\big(y^{(j)}\big)\, g\big(y^{(j)}\big)^2 .
\]
Here
\[
\mathbb{E}\big[\|g\|_M^2\big]
= \int_\Gamma w(y)\,g(y)^2\,\nu(dy)
= \int_\Gamma g(y)^2\,\mu(dy)
= \|g\|_{L^2_\mu}^2 .
\]

Algebraic system: let {φ_j}_{j=1}^{|Λ|} be a basis of V_Λ, orthonormal w.r.t. µ, and \(\hat\Pi_M f(y) = \sum_{j=1}^{|\Lambda|} c_j \phi_j(y)\). Then c = (c₁, . . . , c_{|Λ|})ᵀ satisfies
\[
Gc = \hat f, \qquad G_{i,j} = (\phi_i,\phi_j)_M, \qquad \hat f_i = (f,\phi_i)_M .
\]
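The three steps above reduce to assembling the design matrix, the empirical Gram matrix G, and the right-hand side f̂, then solving Gc = f̂. A minimal NumPy sketch (illustrative, not the talk's code; the function and variable names are my choices, and the example uses ν = µ uniform on [−1, 1], so w = 1, with an orthonormal Legendre basis):

```python
import numpy as np

def weighted_lsq(f_vals, points, weights, basis):
    """Weighted discrete least squares: assemble G_ij = (phi_i, phi_j)_M and
    f_hat_i = (f, phi_i)_M, then solve G c = f_hat for the coefficients c."""
    M = len(points)
    Phi = np.column_stack([phi(points) for phi in basis])  # M x |Lambda| design matrix
    Wm = weights / M                                       # (1/M) w(y^(j)) factors
    G = Phi.T @ (Wm[:, None] * Phi)                        # empirical Gram matrix
    f_hat = Phi.T @ (Wm * f_vals)
    return np.linalg.solve(G, f_hat), G

# Example: nu = mu = uniform on [-1, 1] (so w = 1), orthonormal Legendre basis.
rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, 2000)
basis = [lambda t: np.ones_like(t),
         lambda t: np.sqrt(3.0) * t,
         lambda t: np.sqrt(5.0) * (3.0 * t**2 - 1.0) / 2.0]
c, G = weighted_lsq(y**2, y, np.ones_like(y), basis)
# Since f(y) = y^2 lies in V_Lambda, c recovers the exact expansion
# coefficients (1/3, 0, 2/(3*sqrt(5))), and G is close to the identity.
```

Because f lies in the span of the basis, the least squares residual vanishes and the coefficients are recovered exactly (up to floating-point error), regardless of the weights.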




Optimality of discrete least squares approximation

Theorem ([Cohen, Migliorati 2017], [Cohen, Davenport, Leviatan 2013])

For arbitrary r > 0 define
\[
\kappa_r := \frac{1-\log 2}{2(1+r)},
\qquad
K_{\Lambda,w} := \sup_{y\in\Gamma}\; w(y)\sum_{j=1}^{|\Lambda|}\phi_j(y)^2 .
\]
If \( M/\log M \ge K_{\Lambda,w}/\kappa_r \), then
\[
\mathbb{P}\big(\|G - I\| \le \tfrac12\big) > 1 - 2M^{-r},
\]
\[
\|f - \hat\Pi_M f\|_{L^2_\mu} \le (1+\sqrt 2)\inf_{v\in V_\Lambda}\|f - v\|_{L^\infty_{\sqrt w}}
\quad\text{with prob.} > 1 - 2M^{-r},
\]
\[
\mathbb{E}\big[\|f - \hat\Pi^c_M f\|^2_{L^2_\mu}\big]
\le C_M \inf_{v\in V_\Lambda}\|f - v\|^2_{L^2_\mu} + 2\|f\|^2_{L^2_\mu}\, M^{-r},
\]
where \(\hat\Pi^c_M f = \hat\Pi_M f \cdot \mathbf{1}_{\{\|G-I\|\le 1/2\}}\) and \(C_M = 1 + \frac{4\kappa_r}{\log M} \to 1\) as M → ∞.
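The stability event ‖G − I‖ ≤ 1/2 is easy to probe numerically. A sketch (my illustration, not from the talk) with the orthonormal Legendre basis, ν = µ uniform on [−1, 1], and w = 1:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def gram_deviation(n_basis, M, rng):
    """Spectral norm ||G - I|| for the empirical Gram matrix of the first
    n_basis orthonormal Legendre polynomials, sampled uniformly on [-1, 1]."""
    y = rng.uniform(-1.0, 1.0, M)
    Phi = np.column_stack(
        [np.sqrt(2 * j + 1) * legval(y, np.eye(n_basis)[j]) for j in range(n_basis)]
    )
    G = Phi.T @ Phi / M            # Monte Carlo estimate of the exact Gram matrix I
    return np.linalg.norm(G - np.eye(n_basis), 2)

rng = np.random.default_rng(1)
# As M grows past K_{Lambda,w}/kappa_r, the deviation shrinks and the
# event ||G - I|| <= 1/2 holds with overwhelming probability.
devs = {M: gram_deviation(8, M, rng) for M in (50, 500, 5000)}
```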



Sufficient number of points - uniform measure

Uniform measure: µ = U(⊗_{i=1}^N Γ_i).

  • [Chkifa, Cohen, Migliorati, Nobile, Tempone 2015] When sampling from the same distribution (ν = µ and w = 1), one has |Λ| ≤ K_{Λ,1} ≤ |Λ|². Hence, (unweighted) discrete least squares is stable and optimally convergent under the condition
\[
M/\log M \ge |\Lambda|^2/\kappa_r \quad\text{(quadratic proportionality)}.
\]


Sufficient number of points - optimal measure

[Cohen, Migliorati 2017] For arbitrary µ, when sampling from the optimal measure
\[
\frac{d\nu^*}{d\mu}(y) = \rho^*(y) = \frac{1}{|\Lambda|}\sum_{j=1}^{|\Lambda|}\phi_j(y)^2
\;\Longrightarrow\; K_{\Lambda,w^*} = |\Lambda|.
\]
Hence, weighted discrete least squares is stable and optimal with M/log M ≥ |Λ|/κ_r (linear proportionality). Sampling algorithms for the optimal distribution are available (marginalization [Cohen, Migliorati 2017], acceptance-rejection [Haji-Ali, Nobile, Tempone, Wolfers 2017]).

However, the optimal distribution depends on Λ, which is not good for adaptive algorithms.
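For intuition, the mixture structure of ρ* makes sampling straightforward: pick one term of the mixture uniformly (the marginalization idea), then draw from φ_j² dµ by acceptance-rejection. A 1D sketch for the uniform base measure and orthonormal Legendre polynomials (setup and names are my illustration, not the talk's code):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def phi_sq(j, y):
    """Squared Legendre polynomial of degree j, orthonormal w.r.t. U(-1, 1)."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return (2 * j + 1) * legval(y, c) ** 2

def sample_optimal(degrees, M, rng):
    """Draw M samples from rho*(y) = (1/|Lambda|) sum_j phi_j(y)^2 (1D case)."""
    out = np.empty(M)
    for i in range(M):
        j = rng.choice(degrees)           # pick a mixture component uniformly
        while True:                       # rejection: sup phi_j^2 = 2j + 1, at y = +-1
            y = rng.uniform(-1.0, 1.0)
            if rng.uniform() * (2 * j + 1) <= phi_sq(j, y):
                out[i] = y
                break
    return out

ys = sample_optimal([0, 1, 2, 3], 500, np.random.default_rng(2))
```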



Sufficient number of points - Chebyshev measure

Alternatively, for the uniform measure µ (or more generally a product measure µ = ⊗_{j=1}^N µ_j with each µ_j a doubling measure, i.e. µ_j(2I) ≤ L µ_j(I) for every interval I), one can sample from the arcsine (Chebyshev) distribution. Then K_{Λ,w} ≤ Cᴺ|Λ|, so M/log M ≥ Cᴺ|Λ|/κ_r: still linear scaling, but with a constant depending exponentially on N.

Advantage: the sampling measure does not depend on Λ. Good for adaptivity.


Contents

  • 1. Problem framework
  • 2. Weighted discrete least squares approximation
  • 3. Multilevel least squares approximation
  • 4. Application to random elliptic PDEs
  • 5. Conclusions

Multilevel least squares approximation

In practice f(y) = Q(u(y)) cannot be evaluated exactly, as it requires the solution of a differential equation. We introduce a sequence of approximations f_{n_ℓ}, n_ℓ ∈ N, with increasing cost, such that
\[
\lim_{\ell\to\infty}\|f - f_{n_\ell}\|_{L^2_\mu} = 0
\]
(or possibly in a stronger norm). Similarly, we introduce a sequence of nested downward closed sets Λ_{m₀} ⊂ Λ_{m₁} ⊂ . . . ⊂ Λ_{m_k} ⊂ . . . such that
\[
\lim_{k\to\infty}\,\inf_{v\in V_{\Lambda_{m_k}}}\|f - v\|_{L^2_\mu} = 0 .
\]
Correspondingly, for each Λ_{m_k} we introduce a weighted discrete least squares projector \(\hat\Pi_{M_k}\) using
\[
M_k/\log M_k = O(|\Lambda_{m_k}|)
\]
random points.



Multilevel least squares approximation

Multilevel formula: given a maximum level L ∈ N,
\[
S_L f = \sum_{k+\ell\le L}\big(\hat\Pi_{M_k} - \hat\Pi_{M_{k-1}}\big)\big(f_{n_\ell} - f_{n_{\ell-1}}\big)
      = \sum_{\ell=0}^{L}\hat\Pi_{M_{L-\ell}}\big(f_{n_\ell} - f_{n_{\ell-1}}\big),
\]
with the conventions \(\hat\Pi_{M_{-1}} := 0\) and \(f_{n_{-1}} := 0\). In the multilevel formula one might consider more general index sets (k, ℓ) ∈ I ⊂ N². However, one can always recast to k + ℓ ≤ L by properly choosing {n_ℓ} and {m_k}.

Question: how to properly choose {n_ℓ} and {m_k}?

Issue: since the least squares projection is random, we have to ensure that it is stable and optimally convergent on all levels (this requires a union bound on the failure probabilities).
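The telescoping structure of S_L f can be sketched directly. Here `project(k, g)` stands for a weighted discrete least squares projector onto the level-k polynomial space, applied to a callable g and returning a callable surrogate; all names and signatures are my illustration, not the talk's code.

```python
def multilevel_surrogate(project, f_levels, L):
    """S_L f = sum_{l=0}^{L} Pi_{M_{L-l}} (f_{n_l} - f_{n_{l-1}}), f_{n_{-1}} := 0.

    project(k, g): projector onto the level-k polynomial space, applied to the
                   callable g; returns a callable surrogate.
    f_levels[l]:   callable evaluating the discretized model f_{n_l}.
    """
    parts = []
    for l in range(L + 1):
        fine = f_levels[l]
        coarse = f_levels[l - 1] if l > 0 else (lambda y: 0.0)
        # Expensive model differences are paired with cheap (coarse) projectors.
        diff = lambda y, fi=fine, co=coarse: fi(y) - co(y)
        parts.append(project(L - l, diff))
    return lambda y: sum(p(y) for p in parts)
```

With an exact projector the sum telescopes back to f_{n_L}; the gain of the multilevel formula is that the finest model difference only needs the coarsest polynomial space.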




Assumptions for ML

For the multilevel algorithm to be effective, we have to rely on a certain "mixed regularity". Let (F, ‖·‖_F) ↪ (L²_µ, ‖·‖_{L²_µ}) be a normed vector space of "smooth" functions (e.g. Hölder / Sobolev / analytic regularity).


Assumptions for ML

Assumption 1 (regularity): f, f_{n_ℓ} ∈ F for all ℓ ∈ N.

Assumption 2 (PDE discretization): the sequence {f_{n_ℓ}} satisfies
\[
\|f - f_{n_\ell}\|_{L^2_\mu} \lesssim n_\ell^{-\beta_w},
\qquad
\|f - f_{n_\ell}\|_{F} \lesssim n_\ell^{-\beta_s},
\]
and, for a single y ∈ Γ, the cost of computing f_{n_ℓ}(y) is Work(f_{n_ℓ}) ≲ n_ℓ^γ.

Assumption 3 (polynomial approximability): the sequence {Λ_{m_k}} satisfies
\[
\dim\big(V_{\Lambda_{m_k}}\big) = |\Lambda_{m_k}| \lesssim m_k^{\sigma},
\qquad
\inf_{v\in V_{\Lambda_{m_k}}}\|f - v\|_{L^\infty_{\sqrt w}} \lesssim m_k^{-\alpha_p}\|f\|_F
\quad \forall f\in F
\]
(alternatively, \(\inf_{v\in V_{\Lambda_{m_k}}}\|f - v\|_{L^2_\mu} \lesssim m_k^{-\alpha_e}\|f\|_F\) for all f ∈ F).



Tuning the ML least squares algorithm

We now choose
\[
n_\ell = C\exp\Big(\frac{\ell}{\gamma+\beta_s}\Big),
\quad \ell = 0,\dots,L \quad\text{(space discretization)},
\]
\[
m_k = C\exp\Big(\frac{k}{\sigma+\alpha_p}\Big),
\quad k = 0,\dots,L \quad\text{(polynomial approximation)},
\]
\[
\frac{m_k^{\sigma}}{\kappa_L} \le \frac{M_k}{\log M_k} \le \frac{2\,m_k^{\sigma}}{\kappa_L},
\quad k = 0,\dots,L \quad\text{(sample size, with } r = L\text{)}.
\]
By taking r = L we guarantee that
\[
\mathbb{P}\Big(\exists\, k : \|G_k - I\| > \tfrac12\Big)
\le \sum_{k=0}^{L}\mathbb{P}\big(\|G_k - I\| > \tfrac12\big)
\lesssim L^{-L}.
\]
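The tuning rules above translate into a few lines of code; the helper `choose_samples` inverts the M/log M condition numerically. A sketch under the stated assumptions (constants and rates are placeholders, not values from the talk):

```python
import math

def choose_samples(target):
    """Smallest integer M >= 2 with M / log(M) >= target."""
    M = 2
    while M / math.log(M) < target:
        M += 1
    return M

def tune(L, gamma, beta_s, sigma, alpha_p, C=1.0):
    """Level choices n_l, m_k and sample sizes M_k, with kappa_r taken at r = L."""
    kappa_L = (1.0 - math.log(2.0)) / (2.0 * (1.0 + L))
    n = [math.ceil(C * math.exp(l / (gamma + beta_s))) for l in range(L + 1)]
    m = [math.ceil(C * math.exp(k / (sigma + alpha_p))) for k in range(L + 1)]
    M = [choose_samples(m_k**sigma / kappa_L) for m_k in m]
    return n, m, M
```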




Complexity result

Theorem ([Haji-Ali, Nobile, Tempone, Wolfers 2017])

Given ε > 0 and β_s = β_w, we can choose L ∈ N such that
\[
\|f - S_L f\|_{L^2_\mu} \le \epsilon
\quad\text{with prob.}\ \ge 1 - C\,\epsilon^{\log|\log\epsilon|},
\qquad
\mathrm{Work}(S_L f) \lesssim \epsilon^{-\lambda}\,|\log\epsilon|^{t}\,\log|\log\epsilon|,
\]
with λ = max(σ/α_p, γ/β_s) and
\[
t = \begin{cases}
2 & \text{if } \gamma/\beta_s < \sigma/\alpha_p,\\
3 + \sigma/\alpha_p & \text{if } \gamma/\beta_s = \sigma/\alpha_p,\\
1 & \text{if } \gamma/\beta_s > \sigma/\alpha_p.
\end{cases}
\]
An analogous result holds in expectation with α_p replaced by α_e. (Proof: see the sketch in the backup slides.)



Improved complexity in the case γ/β_s > σ/α_p

In the case γ/β_s > σ/α_p and β_w > β_s, the complexity can be improved by taking
\[
m_k = C\exp\Big(\frac{k}{\sigma+\alpha_p} + \frac{L(\beta_w-\beta_s)}{\alpha_p(\gamma+\beta_s)}\Big).
\]
In this case the complexity result becomes
\[
\|f - S_L f\|_{L^2_\mu} \le \epsilon
\quad\text{with prob.}\ \ge 1 - C\,\epsilon^{\log|\log\epsilon|},
\qquad
\mathrm{Work}(S_L f) \lesssim \epsilon^{-\lambda}\,|\log\epsilon|^{t}\,\log|\log\epsilon|,
\]
with t = 1 and
\[
\lambda = \frac{\gamma}{\beta_w} + \Big(1 - \frac{\beta_s}{\beta_w}\Big)\frac{\sigma}{\alpha_p},
\]
which always improves on the single-level rate λ_SL = γ/β_w + σ/α_p.



Contents

  • 1. Problem framework
  • 2. Weighted discrete least squares approximation
  • 3. Multilevel least squares approximation
  • 4. Application to random elliptic PDEs
  • 5. Conclusions

Application to random elliptic PDEs

Consider
\[
-\operatorname{div}\big(a(x,y)\nabla u(x,y)\big) = g, \quad x\in D\subset\mathbb{R}^d,
\qquad
u(x,y) = 0, \quad x\in\partial D,
\]
with y ∈ Γ = [−1, 1]^N and Q a bounded linear functional on L²(D) (e.g. Q(u) = ∫_D u).

Goal: approximate f(y) = Q(u(y)).

Assumptions: 0 < a_min ≤ a(x, y) ≤ a_max for all (x, y) ∈ D × Γ; g and D sufficiently smooth.
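As a concrete (and deliberately simplified) instance, here is a 1D finite-difference analogue of the model problem; the discretization and the quadrature for Q are my choices for illustration, not the talk's solver.

```python
import numpy as np

def qoi(a_func, y, n, g=1.0):
    """Solve -(a(x, y) u')' = g on D = (0, 1) with u(0) = u(1) = 0 by finite
    differences on n interior nodes; return Q(u) = int_D u dx (trapezoid)."""
    h = 1.0 / (n + 1)
    x_half = (np.arange(n + 1) + 0.5) * h      # flux midpoints x_{i+1/2}
    a = a_func(x_half, y)
    main = (a[:-1] + a[1:]) / h**2             # diagonal: (a_{i-1/2} + a_{i+1/2})/h^2
    off = -a[1:-1] / h**2                      # coupling through a_{i+1/2}
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, g * np.ones(n))
    return h * u.sum()                         # boundary values vanish

# With a == 1 the exact solution is u(x) = x(1 - x)/2, so Q(u) = 1/12.
q = qoi(lambda x, y: np.ones_like(x), y=None, n=200)
```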




Application to random elliptic PDEs

Proposition

Let u_n be a finite element approximation of order r ≥ 1 with maximal element diameter h = n⁻¹, and set f_n(y) = Q(u_n(y)). If
\[
a \in C^{r,s}(D\times\Gamma) = \big\{v : D\times\Gamma\to\mathbb{R} \;:\;
\|\partial_x^{\mathbf r}\partial_y^{\mathbf s} v\|_{C^0(D\times\Gamma)} < \infty
\ \ \forall\,|\mathbf r|_1\le r,\ |\mathbf s|_1\le s\big\},
\]
then ‖f − f_n‖_{C^p(Γ)} ≲ n^{−(r+1)} for all p = 0, . . . , s.


ML least squares complexity – mixed regularity

Consider the coefficient a(x, y) = 1 + ‖x‖₂^r + ‖y‖₂^s ∈ C^{r−1,1}(D) ⊗ C^{s−1,1}(Γ).

  • Smoother space: F = C^{s−1,1}(Γ).
  • Spatial approximation: continuous finite elements of degree r.
◮ error: ‖f − f_n‖_{L²_µ} = O(n^{−(r+1)}) = ‖f − f_n‖_{C^{s−1,1}} ⟹ β_w = β_s = r + 1;
◮ cost: Work(f_n) = n^d with an optimal solver ⟹ γ = d.
  • Polynomial approximation: V_{Λ_m} = P_m, the polynomial space of total degree m.
◮ error: ‖f − Π_{P_m} f‖_{L^∞} = O(m^{−s}) ⟹ α_p = s;
◮ cost: dim(V_{Λ_m}) = \(\binom{m+N}{N} \eqsim m^N\) ⟹ σ = N.


ML least squares complexity – mixed regularity

Complexity of the single-level method:
\[
\mathrm{Work}_{\mathrm{SL}} = O\big(\epsilon^{-\frac{d}{r+1}-\frac{N}{s}}\log\epsilon^{-1}\big).
\]
Complexity of the multilevel method:
\[
\mathrm{Work}_{\mathrm{ML}} = O\big(\epsilon^{-\max\{\frac{d}{r+1},\,\frac{N}{s}\}}(\log\epsilon^{-1})^{t}\big),
\qquad
t = \begin{cases}
1 & \text{if } \frac{d}{r+1} > \frac{N}{s},\\[2pt]
3 + \frac{d}{r+1} & \text{if } \frac{d}{r+1} = \frac{N}{s},\\[2pt]
2 & \text{if } \frac{d}{r+1} < \frac{N}{s}.
\end{cases}
\]
In our experiments: d = 2, r = 1, s = 3 and N = 2, 3, 4, 6.
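These exponents are easy to tabulate. The following sketch (my illustration) reproduces the reference rates for the experimental setting d = 2, r = 1, s = 3:

```python
from fractions import Fraction

def work_exponents(d, r, s, N):
    """Return (lambda_SL, lambda_ML, t): work ~ eps^(-lambda) (log eps^-1)^t."""
    a = Fraction(d, r + 1)     # spatial contribution d/(r+1)
    b = Fraction(N, s)         # parametric contribution N/s
    lam_sl = a + b
    lam_ml = max(a, b)
    t = 1 if a > b else (3 + a if a == b else 2)
    return lam_sl, lam_ml, t

for N in (2, 3, 4, 6):
    print(N, work_exponents(2, 1, 3, N))
# For N = 2 this gives lambda_SL = 5/3 and lambda_ML = 1,
# matching the reference slopes in the numerical results.
```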



[Figure: L² error vs. estimated work for N = 2, 3, 4, 6, comparing Single-level, Multilevel, Adaptive Multilevel, and Adaptive Multilevel with arcsine sampling. Reference slopes: (a) N = 2: ε^{−5/3} log(ε^{−1}) and ε^{−1} log(ε^{−1}); (b) N = 3: ε^{−2} log(ε^{−1}) and ε^{−1} log(ε^{−1})⁴; (c) N = 4: ε^{−7/3} log(ε^{−1}) and ε^{−4/3} log(ε^{−1})²; (d) N = 6: ε^{−3} log(ε^{−1}) and ε^{−2} log(ε^{−1})².]


Contents

  • 1. Problem framework
  • 2. Weighted discrete least squares approximation
  • 3. Multilevel least squares approximation
  • 4. Application to random elliptic PDEs
  • 5. Conclusions

Conclusions

  • We have derived a multilevel discrete least squares method for polynomial approximation of an output quantity of interest of a random PDE.
  • The method uses the classical "combination technique" and sparsifies sequences of polynomial approximations, obtained by weighted discrete least squares, and sequences of spatial discretizations of the underlying PDE.
  • In particular, we have proposed a way to select the number of sample points on each level that guarantees the overall stability and accuracy of the multilevel formula with high probability.
  • The complexity analysis carries over to infinite-dimensional problems (with a different choice of polynomial spaces).
  • We are currently working on adaptive algorithms for infinite-dimensional problems.



Thank you for your attention.


References

  • A.-L. Haji-Ali, F. Nobile, R. Tempone, S. Wolfers. Multilevel weighted least squares polynomial approximation. arXiv:1707.00026.
  • A.-L. Haji-Ali, F. Nobile, L. Tamellini, R. Tempone. Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity. FoCM 16 (2016), 1555–1605.
  • A.-L. Haji-Ali, F. Nobile, L. Tamellini, R. Tempone. Multi-index Stochastic Collocation for random PDEs. CMAME 306 (2016), 95–122.
  • A.-L. Haji-Ali, F. Nobile, R. Tempone. Multi-index Monte Carlo: when sparsity meets sampling. Numer. Math. 132 (2016), 767–806.
  • A. Chkifa, A. Cohen, G. Migliorati, F. Nobile, R. Tempone. Discrete least squares polynomial approximation with random evaluations: application to parametric and stochastic elliptic PDEs. ESAIM: M2AN 49(3) (2015), 815–837.


Sketch of the proof

Bound on M_k: use that \(\sqrt{M_k} \le M_k/\log M_k \le 2m_k^{\sigma}/\kappa_L\) and κ_L ≈ 1/(L + 1):
\[
M_k \le \frac{2}{\kappa_L}\, m_k^{\sigma}\log M_k
\lesssim (L+1)\, e^{\frac{k\sigma}{\sigma+\alpha_p}}\,
\log\big((L+1)\,e^{\frac{k\sigma}{\sigma+\alpha_p}}\big)
\lesssim (L+1)\log(L+1)\,(k+1)\, e^{\frac{k\sigma}{\sigma+\alpha_p}} .
\]
Bound on the total work:
\[
\mathrm{Work}(S_L f) \lesssim \sum_{\ell=0}^{L} M_{L-\ell}\, n_\ell^{\gamma}
\lesssim (L+1)\log(L+1)\, e^{\frac{L\sigma}{\sigma+\alpha_p}}
\sum_{\ell=0}^{L}\exp\Big(-\ell\Big(\frac{\sigma}{\sigma+\alpha_p} - \frac{\gamma}{\gamma+\beta_s}\Big)\Big)\,(L-\ell+1);
\]
hence, distinguish the three cases γ/β_s <, =, > σ/α_p.



Sketch of the proof

Bound on the error in probability:
\[
\|f - S_L f\|_{L^2_\mu}
= \Big\|f - f_L + \sum_{\ell=0}^{L}\big(\mathrm{Id} - \hat\Pi_{M_{L-\ell}}\big)(f_\ell - f_{\ell-1})\Big\|_{L^2_\mu}
\le \|f - f_L\|_{L^2_\mu}
+ \sum_{\ell=0}^{L}\big\|\mathrm{Id} - \hat\Pi_{M_{L-\ell}}\big\|_{F\to L^2_\mu}\,\|f_\ell - f_{\ell-1}\|_F
\]
\[
\lesssim e^{-\frac{L\beta_w}{\gamma+\beta_s}}
+ e^{-\frac{L\alpha_p}{\sigma+\alpha_p}}
\sum_{\ell=0}^{L}\exp\Big(\ell\Big(\frac{\alpha_p}{\sigma+\alpha_p} - \frac{\beta_s}{\gamma+\beta_s}\Big)\Big).
\]
Again split the three cases γ/β_s <, =, > σ/α_p, and notice that the first term \(e^{-L\beta_w/(\gamma+\beta_s)}\) is always negligible since β_w > β_s.


[Figure: L² error vs. runtime [s] for N = 2, 3, 4, 6, comparing Single-level, Multilevel, Adaptive Multilevel, and Adaptive Multilevel with arcsine sampling; same reference slopes as the work-estimate figure: (a) N = 2: ε^{−5/3} log(ε^{−1}) and ε^{−1} log(ε^{−1}); (b) N = 3: ε^{−2} log(ε^{−1}) and ε^{−1} log(ε^{−1})⁴; (c) N = 4: ε^{−7/3} log(ε^{−1}) and ε^{−4/3} log(ε^{−1})²; (d) N = 6: ε^{−3} log(ε^{−1}) and ε^{−2} log(ε^{−1})².]