Discrete least squares polynomial approximation with random evaluations – application to PDEs with random parameters
F. Nobile¹, G. Migliorati¹, R. Tempone²
¹CSQI-MATHICSE, EPFL, Switzerland; ²AMCS and SRI-UQ Center, KAUST, Saudi Arabia


SLIDE 1

Discrete least squares polynomial approximation with random evaluations – application to PDEs with random parameters

F. Nobile¹, G. Migliorati¹, R. Tempone²

¹CSQI-MATHICSE, EPFL, Switzerland
²AMCS and SRI-UQ Center, KAUST, Saudi Arabia

Acknowledgments: A. Cohen, A. Chkifa (UPMC – Paris VI), E. von Schwerin (KTH)

Advances in UQ Methods, Algorithms and Applications
KAUST, January 6-9, 2015

F. Nobile (EPFL) – Discrete least squares for random PDEs – UQAW 2015, KAUST

SLIDE 2

Outline

1. Introduction – PDEs with random parameters
2. Stochastic polynomial approximation
3. Discrete projection using random evaluations
  • Stability
  • Convergence results in expectation and probability
  • The case of noisy observations
4. Conclusions


SLIDE 4

Introduction – PDEs with random parameters

UQ for deterministic PDE models

Consider a deterministic PDE model: find $u$ such that

$$\mathcal{L}(y)(u) = F \quad \text{in } D \subset \mathbb{R}^d \qquad (1)$$

with suitable boundary/initial conditions. The operator $\mathcal{L}(y)$ depends on a vector of $N$ parameters, $y = (y_1, \ldots, y_N) \in \mathbb{R}^N$ ($N = \infty$ when dealing with distributed fields). In applications the parameters $y$ are often not perfectly known, or are intrinsically variable. Examples:

  • subsurface modeling: porous media flows; seismic waves; basin evolution
  • modeling of living tissues: mechanical response; growth models
  • material science: properties of composite materials

Probabilistic approach: $y$ is a random vector with probability density function $\rho : \Gamma \to \mathbb{R}_+$.


SLIDE 8

Introduction – PDEs with random parameters

UQ for deterministic PDE models

Assumption: for every $y \in \Gamma$ the problem admits a unique solution $u \in V$ in a Hilbert space $V$, with

$$\|u(y)\|_V \le C(y)\, \|F\|_{V'}.$$

Then the PDE (1) induces a map $u = u(y) : \Gamma \to V$; if $C(y) \in L^p_\rho(\Gamma)$, then $u \in L^p_\rho(\Gamma, V)$.

Goals: compute statistics of the solution,

  • pointwise expected value: $\bar{u}(x) = \mathbb{E}[u(x, \cdot)]$, $x \in D$
  • pointwise variance: $\mathrm{Var}[u](x) = \mathbb{E}[(u(x, \cdot) - \bar{u}(x))^2]$
  • two-point correlation: $\mathrm{Cov}_u(x_1, x_2) = \mathbb{E}[(u(x_1, \cdot) - \bar{u}(x_1))(u(x_2, \cdot) - \bar{u}(x_2))]$

or of specific Quantities of Interest $Q : V \to \mathbb{R}$. Then $\varphi(y) = Q(u(y))$ is a real-valued function of the random vector $y$, and we would like to approximate its law.
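Once evaluations of the solution are available, the statistics above can be estimated by plain Monte Carlo. A minimal sketch, where a hypothetical closed-form map u(x, y) (illustrative, not from the slides) stands in for an actual PDE solve and y is a single uniform parameter:

```python
import numpy as np

# Hypothetical parametric "solution" standing in for a PDE solve:
# u(x, y) = sin(pi x) / (2 + y), with a scalar parameter y ~ Uniform(-1, 1).
def u(x, y):
    return np.sin(np.pi * x) / (2.0 + y)

rng = np.random.default_rng(0)
M = 200_000
x = 0.5                                # evaluation point in D = (0, 1)
y = rng.uniform(-1.0, 1.0, M)          # i.i.d. samples of the random parameter
samples = u(x, y)

u_bar = samples.mean()                 # estimate of the pointwise mean E[u(x, .)]
u_var = samples.var()                  # estimate of the pointwise variance Var[u](x)
```

For this toy map the exact pointwise mean at x = 1/2 is (1/2) log 3 ≈ 0.549, which the sample average approaches at the usual M^{-1/2} Monte Carlo rate; the polynomial methods discussed next aim to exploit smoothness in y to do much better.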


SLIDE 11

Introduction – PDEs with random parameters

Example: Elliptic PDE with random coefficients

$$-\operatorname{div}(a(y, x) \nabla u(y, x)) = f(x), \quad x \in D,\ y \in \Gamma,$$
$$u(y, x) = 0, \quad x \in \partial D,\ y \in \Gamma,$$

with $a_{\min}(y) = \inf_{x \in D} a(y, x) > 0$ for all $y \in \Gamma$ and $f \in L^2(D)$. Then for all $y \in \Gamma$, $u(y) \in V \equiv H^1_0(D)$, and

$$\|u(y)\|_V \le \frac{C_P}{a_{\min}(y)}\, \|f\|_{L^2(D)}.$$

Inclusions problem: $y$ describes the conductivity in each inclusion,

$$a(y, x) = a_0 + \sum_{n=1}^{N} y_n\, \mathbb{1}_{D_n}(x).$$

Random fields problem: $a(y, x)$ is a random field, e.g. lognormal, $a(y, x) = e^{\gamma(y, x)}$, with $\gamma$ expanded e.g. in a Karhunen-Loève series

$$\gamma(y, x) = \sum_{n=1}^{\infty} \sqrt{\lambda_n}\, y_n\, b_n(x), \qquad y_n \sim N(0, 1) \text{ i.i.d.}$$

[Figure: a realization of a random field with correlation length Lc = 1/4.]
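A realization of such a lognormal field can be drawn by truncating the Karhunen-Loève series. In the sketch below the eigenpairs (λn, bn) are illustrative assumptions (a Fourier sine basis with an algebraic decay tied to the correlation length), since the slide does not specify a covariance kernel:

```python
import numpy as np

def sample_lognormal_field(x, N, rng, Lc=0.25):
    """One realization of a(y, x) = exp(gamma(y, x)), with gamma given by a
    truncated KL-type series sum_n sqrt(lambda_n) y_n b_n(x).
    The eigenpairs used here (sine basis, algebraic eigenvalue decay) are
    illustrative stand-ins, not those of a specific covariance kernel."""
    y = rng.standard_normal(N)                            # y_n ~ N(0, 1) i.i.d.
    lam = 1.0 / (1.0 + (np.arange(1, N + 1) * Lc) ** 2)   # assumed decay
    b = np.sqrt(2.0) * np.sin(np.outer(x, np.pi * np.arange(1, N + 1)))
    gamma = b @ (np.sqrt(lam) * y)
    return np.exp(gamma)                                  # positive by construction

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
a = sample_lognormal_field(x, N=50, rng=rng)
```

The exponential guarantees a(y, x) > 0 pointwise, as required for well-posedness, but note that amin(y) is not bounded away from zero uniformly in y, which is what makes the lognormal case delicate.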


SLIDE 13

Stochastic polynomial approximation

Stochastic multivariate polynomial approximation

The parameter-to-solution map $u(y) : \Gamma \to V$ is often smooth (even analytic for the elliptic diffusion model). It is therefore sound to approximate it by global multivariate polynomials. Let $\Lambda \subset \mathbb{N}^N$ be an index set of finite cardinality $\#\Lambda$, and consider the multivariate polynomial space

$$\mathbb{P}_\Lambda(\Gamma) = \operatorname{span}\left\{ \prod_{n=1}^{N} y_n^{p_n} : \ p = (p_1, \ldots, p_N) \in \Lambda \right\}.$$

  • We seek an approximation $P_\Lambda u \in \mathbb{P}_\Lambda(\Gamma) \otimes V$.

The optimal choice of $\Lambda$ depends heavily on the problem at hand and the "structure" of the map $u(y)$.

  • Definition. An index set $\Lambda$ is downward closed if

$$p \in \Lambda \text{ and } q \le p \implies q \in \Lambda.$$


SLIDE 17

Stochastic polynomial approximation

Common choices of polynomial spaces

  • tensor product (TP): $\Lambda(w) = \{p : \max_n p_n \le w\}$
  • total degree (TD): $\Lambda(w) = \{p : \sum_{n=1}^{N} p_n \le w\}$
  • hyperbolic cross (HC): $\Lambda(w) = \{p : \prod_{n=1}^{N} (p_n + 1) \le w + 1\}$

[Figure: the TP, TD and HC index sets in two dimensions, plotted in the $(p_1, p_2)$ plane.]

Anisotropic versions are also possible. All these index sets are downward closed.
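These index sets are straightforward to enumerate for moderate N and w, e.g. by brute force over the box {0, …, w}^N, which contains all three sets. A small sketch, including a check of the downward-closedness property defined earlier:

```python
from itertools import product

def index_set(kind, N, w):
    """Enumerate the TP / TD / HC multi-index sets of level w in N variables."""
    def hc(p):
        r = 1
        for pn in p:
            r *= pn + 1
        return r
    cond = {"TP": lambda p: max(p) <= w,
            "TD": lambda p: sum(p) <= w,
            "HC": lambda p: hc(p) <= w + 1}[kind]
    return [p for p in product(range(w + 1), repeat=N) if cond(p)]

def is_downward_closed(Lam):
    """Check that p in Lambda and q <= p (componentwise) imply q in Lambda;
    it suffices to test the immediate predecessors of each index."""
    S = set(Lam)
    return all(p[:n] + (p[n] - 1,) + p[n + 1:] in S
               for p in S for n in range(len(p)) if p[n] > 0)

TP, TD, HC = (index_set(k, N=2, w=4) for k in ("TP", "TD", "HC"))
```

For N = 2 and w = 4 this gives 25, 15 and 10 indices respectively, illustrating how TD and, even more so, HC thin out the tensor grid as the dimension grows.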



SLIDE 20

Discrete projection using random evaluations

Discrete L2 projection using random evaluations

(see e.g. [Hosder-Walters et al. 2010, Blatman-Sudret 2008, Burkardt-Eldred 2009, Eldred 2011, Yan-Guo-Xiu 2012, Cohen-Davenport-Leviatan 2013, Migliorati et al 2011-2014])

1. Generate $M$ i.i.d. random samples $y(\omega_k) \sim \rho(y)\,dy$, $k = 1, \ldots, M$.
2. Compute the corresponding solutions $u^{(k)} = u(y(\omega_k))$.
3. Find the discrete least squares approximation $\Pi^M_\Lambda u \in \mathbb{P}_\Lambda(\Gamma) \otimes V$:

$$\Pi^M_\Lambda u = \operatorname*{argmin}_{v \in \mathbb{P}_\Lambda(\Gamma) \otimes V} \ \frac{1}{M} \sum_{k=1}^{M} \big\| u^{(k)} - v(y(\omega_k)) \big\|_V^2$$

For a quantity of interest $\varphi(y) = Q(u(y))$ this reads simply

$$\Pi^M_\Lambda \varphi = \operatorname*{argmin}_{v \in \mathbb{P}_\Lambda(\Gamma)} \ \frac{1}{M} \sum_{k=1}^{M} \big| \varphi^{(k)} - v(y(\omega_k)) \big|^2$$
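For a scalar QoI in one variable with y ~ Uniform(-1, 1), the three steps can be sketched with a Legendre basis rescaled to be orthonormal in L²ρ; the QoI exp(y) and the sizes below are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
phi = lambda y: np.exp(y)          # illustrative smooth QoI

w, M = 5, 200                      # polynomial degree and number of samples
y = rng.uniform(-1.0, 1.0, M)      # step 1: i.i.d. samples of y
vals = phi(y)                      # step 2: evaluate the QoI

# Step 3: least squares fit in the Legendre basis, rescaled so that the
# basis is orthonormal w.r.t. the uniform density rho = 1/2 on (-1, 1).
scale = np.sqrt(2.0 * np.arange(w + 1) + 1.0)
D = np.polynomial.legendre.legvander(y, w) * scale
c, *_ = np.linalg.lstsq(D, vals, rcond=None)

# Evaluate the projection on a fine grid to inspect the error.
yt = np.linspace(-1.0, 1.0, 1001)
approx = (np.polynomial.legendre.legvander(yt, w) * scale) @ c
err = np.max(np.abs(approx - phi(yt)))
```

With a smooth QoI such as this one, the error decays spectrally in the degree w once M is large enough for stability, which is exactly the regime the following slides quantify.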


SLIDE 22

Discrete projection using random evaluations

Notation

  • continuous norm: $\|v\|^2_{L^2_\rho(\Gamma, V)} = \int_\Gamma \|v(y)\|_V^2\, \rho(y)\, dy$
  • discrete norm: $\|v\|^2_{M, V} = \frac{1}{M} \sum_{i=1}^{M} \|v(y(\omega_i))\|_V^2$
  • $\{\psi_p\}_{p \in \Lambda}$: orthonormal basis of $\mathbb{P}_\Lambda(\Gamma)$ in $L^2_\rho(\Gamma)$

SLIDE 23

Discrete projection using random evaluations

Algebraic formulation (for Q.o.I.)

Design matrix: $D \in \mathbb{R}^{M \times \#\Lambda}$, $D_{ip} = \psi_p(y(\omega_i))$, $p \in \Lambda$, $1 \le i \le M$. Then, expanding $\Pi^M_\Lambda \varphi$ onto the basis, $\Pi^M_\Lambda \varphi(y) = \sum_{p \in \Lambda} c_p \psi_p(y)$, and setting $(\boldsymbol{\varphi})_i = \varphi(y(\omega_i))$, the vector $c = \{c_p\}_p$ of Fourier coefficients satisfies the normal equations

$$(D^T D)\, c = D^T \boldsymbol{\varphi}.$$

Equivalent reformulation: $G c = J \boldsymbol{\varphi}$, with $G = \frac{1}{M} D^T D$, $J = \frac{1}{M} D^T$.

$G$ is symmetric and positive semi-definite. The stability of the discrete least squares problem is governed by $\|G^{-1}\|$. It holds

$$\|G\| = \sup_{v \in \mathbb{P}_\Lambda(\Gamma)} \frac{\|v\|^2_{M, \mathbb{R}}}{\|v\|^2_{L^2_\rho(\Gamma, \mathbb{R})}}, \qquad \|G^{-1}\| = \sup_{v \in \mathbb{P}_\Lambda(\Gamma)} \frac{\|v\|^2_{L^2_\rho(\Gamma, \mathbb{R})}}{\|v\|^2_{M, \mathbb{R}}}.$$
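The normal equations and the matrix G can be formed explicitly. A 1D sketch with the Legendre basis orthonormal under y ~ Uniform(-1, 1) (the QoI cos(y) and the sizes are illustrative), checking that solving G c = J phi reproduces the least squares solution:

```python
import numpy as np

rng = np.random.default_rng(3)
w, M = 4, 500
y = rng.uniform(-1.0, 1.0, M)
scale = np.sqrt(2.0 * np.arange(w + 1) + 1.0)
D = np.polynomial.legendre.legvander(y, w) * scale   # M x (w+1) design matrix

G = D.T @ D / M        # sample Gram matrix; E[G] = I for an orthonormal basis
J = D.T / M
phi = np.cos(y)
c = np.linalg.solve(G, J @ phi)                      # normal equations G c = J phi

# The distance of G from the identity controls stability and conditioning.
dev = np.linalg.norm(G - np.eye(w + 1), 2)
```

As M grows, G concentrates around the identity and `dev` shrinks, which is the quantitative content of the stability analysis on the next slides.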


SLIDE 29

Discrete projection using random evaluations – Stability

More on the matrix G

Recall $G = \frac{1}{M} D^T D$. Hence

$$G = \frac{1}{M} \sum_{i=1}^{M} G^{(i)}, \qquad \text{with } G^{(i)}_{pq} = \psi_q(y(\omega_i))\, \psi_p(y(\omega_i)).$$

Remarks:

  • The matrices $G^{(i)}$, $i = 1, \ldots, M$, are i.i.d.
  • $\mathbb{E}[G^{(i)}] = I$. Indeed $\mathbb{E}[G^{(i)}_{pq}] = \mathbb{E}[\psi_p \psi_q] = \delta_{pq}$.

Define

$$K(\Lambda) = \sup_{y \in \Gamma} \sum_{p \in \Lambda} |\psi_p(y)|^2 = \sup_{v \in \mathbb{P}_\Lambda} \frac{\|v\|^2_{L^\infty(\Gamma)}}{\|v\|^2_{L^2_\rho(\Gamma)}}.$$

Note that $K(\Lambda)$ does not depend on the orthonormal basis chosen. Then $G^{(i)}$ satisfies the uniform bound

$$\|G^{(i)}\| = \sup_{v \in \mathbb{P}_\Lambda(\Gamma) \otimes V} \frac{\|v(y(\omega_i))\|^2_V}{\|v\|^2_{L^2_\rho(\Gamma, V)}} \le K(\Lambda).$$
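For tensorized Legendre polynomials with the uniform density, each basis function attains its maximum modulus at the corner y = (1, …, 1), where the 1D orthonormal polynomial of degree n takes the value sqrt(2n + 1); hence K(Λ) = Σ_{p∈Λ} Π_n (2 p_n + 1), and in 1D with Λ = {0, …, w} this gives (w + 1)². A small sketch of this computation:

```python
import numpy as np
from itertools import product

def K_legendre(Lam):
    """K(Lambda) for tensorized Legendre polynomials (uniform density):
    the sup of sum_p |psi_p(y)|^2 is attained at y = (1, ..., 1), giving
    the closed form sum over p in Lambda of prod_n (2 p_n + 1)."""
    return sum(int(np.prod([2 * pn + 1 for pn in p])) for p in Lam)

# 1D: Lambda = {0, ..., w} gives K = (w + 1)^2.
w = 6
Lam1 = [(p,) for p in range(w + 1)]

# Total degree set in N = 2 variables.
Lam2 = [p for p in product(range(w + 1), repeat=2) if sum(p) <= w]
K1, K2 = K_legendre(Lam1), K_legendre(Lam2)
```

The rapid growth of K(Λ) with the degree is what drives the sample-size requirements in the stability condition below.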


SLIDE 33

Discrete projection using random evaluations – Stability

More on the matrix G

$G$ is the sample average of i.i.d. positive semi-definite and uniformly bounded random matrices, with $\mathbb{E}[G] = I$. Results on $\|G - I\| = \|G - \mathbb{E}[G]\|$ can be obtained from concentration-of-measure inequalities for sums of independent random matrices.

Goal: obtain conditions under which $\|G - I\| \le \delta$ for some $\delta \in (0, 1)$. Observe that this implies a norm equivalence on $\mathbb{P}_\Lambda(\Gamma) \otimes V$:

$$(1 - \delta)\, \|v\|^2_{L^2_\rho(\Gamma; V)} \le \|v\|^2_{M, V} \le (1 + \delta)\, \|v\|^2_{L^2_\rho(\Gamma; V)}, \qquad \forall v \in \mathbb{P}_\Lambda \otimes V$$

(analogous to the Restricted Isometry Property (RIP) in compressed sensing, see [Candès-Tao '06, Rauhut-Ward '12, ...]).


SLIDE 36

Discrete projection using random evaluations – Stability

Concentration of measure for sums of random matrices

Matrix Chernoff bound (for i.i.d. random matrices) [J. Tropp, FoCM 2011]. Let $X_1, \ldots, X_M \in \mathbb{R}^{d \times d}$ be i.i.d. symmetric positive semi-definite random matrices such that $\lambda_{\max}(X_i) \le R$ almost surely. Let $\mu_{\min} = \lambda_{\min}(\mathbb{E}[X_i])$, $\mu_{\max} = \lambda_{\max}(\mathbb{E}[X_i])$ and $\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i$. Then

$$\mathbb{P}\big(\lambda_{\max}(\bar{X}) \ge (1 + \delta)\, \mu_{\max}\big) \le d\, \exp\left( -\frac{M\, \mu_{\max}\, \tilde{\beta}_\delta}{R} \right), \qquad \delta \ge 0,$$

$$\mathbb{P}\big(\lambda_{\min}(\bar{X}) \le (1 - \delta)\, \mu_{\min}\big) \le d\, \exp\left( -\frac{M\, \mu_{\min}\, \beta_\delta}{R} \right), \qquad \delta \in [0, 1],$$

with $\tilde{\beta}_\delta = (1 + \delta) \log(1 + \delta) - \delta$ and $\beta_\delta = \delta + (1 - \delta) \log(1 - \delta)$.

SLIDE 37

Discrete projection using random evaluations – Stability

Concentration of measure result

Theorem [Cohen-Davenport-Leviatan '13]. Introduce the event

$$\Omega^M_+(\delta) := \{ \|G - I\| \le \delta \} = \big\{ (1 - \delta)\, \|v\|^2_{L^2_\rho(\Gamma, V)} \le \|v\|^2_{M, V} \le (1 + \delta)\, \|v\|^2_{L^2_\rho(\Gamma, V)}, \ \forall v \in \mathbb{P}_\Lambda \big\}.$$

For any $\delta, \gamma > 0$ and $M$ satisfying

$$K(\Lambda) \le \frac{\beta_\delta}{1 + \gamma}\, \frac{M}{\log M}, \qquad \beta_\delta = \delta + (1 - \delta) \log(1 - \delta), \qquad (2)$$

we have that $\mathbb{P}\big( \Omega^M_+(\delta) \big) \ge 1 - 2 M^{-\gamma}$.

Hence on $\Omega^M_+(\delta)$ the random discrete $L^2$ projection is stable and

$$\operatorname{cond}(D^T D) = \operatorname{cond}(G) \le \frac{1 + \delta}{1 - \delta}.$$
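Condition (2) can be inverted numerically to read off how many samples a given index set requires. A sketch, where the values δ = 0.5, γ = 1 and the 1D Legendre formula K(Λ) = (w + 1)² are illustrative choices:

```python
import math

def beta(delta):
    # beta_delta = delta + (1 - delta) log(1 - delta), as in condition (2)
    return delta + (1.0 - delta) * math.log(1.0 - delta)

def min_samples(K, delta=0.5, gamma=1.0):
    """Smallest M with K <= beta_delta / (1 + gamma) * M / log M, so that
    stability holds with probability >= 1 - 2 M^{-gamma}.
    M / log M is increasing for M >= 3, so a linear scan finds the first M."""
    M = 3
    while K > beta(delta) / (1.0 + gamma) * M / math.log(M):
        M += 1
    return M

# 1D Legendre of degree w = 5: K(Lambda) = (5 + 1)^2 = 36.
M_needed = min_samples(K=36.0)
```

With these choices M_needed runs into the thousands: the log M factor and the small constant beta_delta make the stability requirement noticeably stronger than a naive M proportional to K(Λ).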


SLIDE 39

Discrete projection using random evaluations – Convergence results in expectation and probability

Convergence in Probability

From the stability of the random projection one can derive optimality results, either in expectation or in probability.

Theorem [Chkifa-Cohen-Migliorati-N.-Tempone '14], [Migliorati-N.-Tempone '15]. For any $\alpha, \delta \in (0, 1)$, under the condition

$$\frac{M}{\log M + \log(2/\alpha)} \ge \frac{K(\Lambda)}{\beta_\delta},$$

it holds with probability greater than $1 - \alpha$:

$$\|u - \Pi^M_\Lambda u\|_{L^2_\rho(\Gamma, V)} \le \left(1 + \frac{1}{\sqrt{1 - \delta}}\right) \inf_{v \in \mathbb{P}_\Lambda \otimes V} \|u - v\|_{L^\infty(\Gamma, V)}$$

Proof: Under the above condition, $\mathbb{P}(\Omega^M_+(\delta)) \ge 1 - \alpha$. Given any draw in $\Omega^M_+(\delta)$, we have for any $v \in \mathbb{P}_\Lambda \otimes V$:

$$\|u - \Pi^M_\Lambda u\|_{L^2_\rho(\Gamma, V)} \le \|u - v\|_{L^2_\rho(\Gamma, V)} + \|v - \Pi^M_\Lambda u\|_{L^2_\rho(\Gamma, V)}$$
$$\le \|u - v\|_{L^2_\rho(\Gamma, V)} + \sqrt{(1 - \delta)^{-1}}\, \|v - \Pi^M_\Lambda u\|_{M, V}$$
$$\le \|u - v\|_{L^2_\rho(\Gamma, V)} + \sqrt{(1 - \delta)^{-1}}\, \|u - v\|_{M, V}$$
$$\le \left(1 + \sqrt{(1 - \delta)^{-1}}\right) \|u - v\|_{L^\infty(\Gamma, V)}.$$


SLIDE 44

Discrete projection using random evaluations – Convergence results in expectation and probability

Convergence in expectation

Assume $\|u\|_{L^\infty(\Gamma, V)} \le \tau$ and define the truncation operator $T_\tau : V \to V$,

$$T_\tau(v) = \begin{cases} v & \text{if } \|v\|_V \le \tau, \\[4pt] \dfrac{\tau}{\|v\|_V}\, v & \text{if } \|v\|_V > \tau. \end{cases}$$

Theorem [Cohen-Davenport-Leviatan '13], [Chkifa-Cohen-Migliorati-N.-Tempone '14]. For any $\delta \in (0, 1)$ and any $\gamma > 0$, under the condition

$$\frac{M}{\log M} \ge (1 + \gamma)\, \frac{K(\Lambda)}{\beta_\delta},$$

it holds

$$\mathbb{E}\big( \|u - T_\tau \circ \Pi^M_\Lambda u\|^2_{L^2_\rho(\Gamma, V)} \big) \le C \inf_{v \in \mathbb{P}_\Lambda \otimes V} \|u - v\|^2_{L^2_\rho(\Gamma, V)} + 8 \tau^2 M^{-\gamma},$$

with $C = 1 + \dfrac{4}{\beta_\delta (1 + \gamma) \log M} \longrightarrow 1$ as $M \to \infty$.
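The truncation operator is simply a radial projection onto the ball of radius τ. A minimal sketch, with the Euclidean norm standing in for the V-norm:

```python
import numpy as np

def T(v, tau):
    """Truncation T_tau: identity inside the ball ||v|| <= tau,
    radial rescaling onto the sphere of radius tau outside it."""
    nv = np.linalg.norm(v)
    return v if nv <= tau else (tau / nv) * v

v = np.array([3.0, 4.0])      # ||v|| = 5
inside = T(v, 10.0)           # below tau: returned unchanged
outside = T(v, 2.0)           # above tau: rescaled to norm exactly 2
```

Composing Π^M_Λ with T_τ guards the expectation bound against rare unstable draws of the sample set, at no cost on draws where the projection already stays within the a priori bound τ.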

SLIDE 45

Discrete projection using random evaluations – The case of noisy observations

Case of noisy observations

Let us consider the case of a QoI $\varphi(y) = Q(u(y))$ and noisy observations

$$\varphi^{(k)} = \varphi(y_k) + \eta_k,$$

with $\eta_k$ i.i.d. and

  • $\mathbb{E}[\eta_k \,|\, y_k] = \bar{\eta}(y_k) \in L^2_\rho(\Gamma)$ (offset)
  • $\sup_{y_k \in \Gamma} \mathrm{Var}(\eta_k \,|\, y_k) = \sigma^2 < \infty$ (variance)

The offset can model any deterministic source of error, e.g. due to numerical discretization. The fluctuations $\tilde{\eta}_k = \eta_k - \bar{\eta}(y_k)$ model random measurement errors. We will also consider the case of bounded noise: $|\tilde{\eta}_k| \le \tilde{\eta}_{\max}$, $\|\bar{\eta}\|_{L^\infty(\Gamma)} < \infty$.


slide-48
SLIDE 48

Convergence in expectation

Theorem [Chkifa-Cohen-Migliorati-N.-Tempone '14], [Migliorati-N.-Tempone '15]
Assume ‖ϕ‖_{L∞(Γ)} ≤ τ. For any δ ∈ (0, 1) and any γ > 0, under the condition

  M / log M ≥ (1 + γ) K(Λ) / βδ,

it holds

  E(‖ϕ − Tτ ∘ Π^M_Λ ϕ‖²_{L²ρ(Γ)}) ≤ C₁ inf_{v ∈ PΛ} ‖ϕ − v‖²_{L²ρ(Γ)}   [best approx. error in L²]
    + 2 σ² / ((1 − δ)² M)   [noise variance]
    + C₂ ‖η̄‖²_{L²ρ(Γ)}   [noise offset]
    + 8 τ² M^{−γ}   [prob. of bad events]

with C₁, C₂ → 1 as M → ∞.


slide-49
SLIDE 49

Convergence in Probability – bounded noise

Theorem [Migliorati-N.-Tempone '15]
In the bounded-noise case, for any α, δ ∈ (0, 1), under the condition

  M / (log M + log(3/α)) ≥ K(Λ) / βδ,

it holds with probability greater than 1 − α

  ‖ϕ − Π^M_Λ ϕ‖²_{L²ρ(Γ)} ≤ (1 + 2/(1 − δ)) inf_{v ∈ PΛ} ‖ϕ − v‖²_{L∞(Γ)}   [best approx. error in L∞]
    + (4(1 + δ)/(1 − δ)²) (2 #Λ log(3Mα^{−1}) / M) η̃²_max   [bounded noise]
    + ‖η̄‖²_{L∞(Γ)}   [noise offset]
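The sampling conditions in these theorems are implicit in M (M appears on both sides through log M), but the smallest admissible M is easy to find numerically. A sketch for the condition of the probability result; the choice βδ = δ + (1 − δ) log(1 − δ) is an assumption here, since the slides only point to a definition (2) that is not reproduced in this excerpt:

```python
import math

def beta_delta(delta):
    # Assumed form of the constant beta_delta (the slides' definition (2) is
    # not reproduced in this excerpt).
    return delta + (1.0 - delta) * math.log(1.0 - delta)

def min_samples(K, alpha, delta):
    """Smallest M with M / (log M + log(3/alpha)) >= K / beta_delta."""
    target = K / beta_delta(delta)
    M = 2
    while M / (math.log(M) + math.log(3.0 / alpha)) < target:
        M += 1
    return M
```

For K(Λ) = 10, α = 0.1, δ = 0.5 this gives a sample size of several hundred; the log(3/α) term makes high-confidence guarantees only mildly more expensive than the expectation result.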

slide-50
SLIDE 50

Case of uniform random variables in [−1, 1]

The discrete L² projection is stable and optimally convergent under the condition

  K(Λ) := sup_{y ∈ Γ} Σ_{p ∈ Λ} |ψ_p(y)|² ≤ (βδ / (1 + γ)) M / log M,

where βδ is defined in (2). Recall that for Legendre polynomials we have

  |ψ_p(y)| ≤ Π_{n=1}^{N} √(2p_n + 1), for all y ∈ [−1, 1]^N.

Theorem [Chkifa-Cohen-Migliorati-Nobile-Tempone '14]
For any monotone set Λ ⊂ N^N it holds #Λ ≤ K(Λ) ≤ (#Λ)².

Hence, the discrete L² projection over PΛ is stable and optimally convergent in expectation under the (sufficient) condition

  ((1 + γ)/βδ) (#Λ)² ≤ M / log M.


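For tensorized Legendre polynomials the bound on |ψ_p| is attained at y = (1, …, 1), so for a downward closed Λ the quantity K(Λ) can be computed exactly as Σ_{p∈Λ} Π_n (2p_n + 1). A sketch that evaluates K(Λ) for a Total Degree set and checks the theorem's bounds #Λ ≤ K(Λ) ≤ (#Λ)²:

```python
from itertools import product

def total_degree_set(N, w):
    # Downward closed (monotone) Total Degree multi-index set in N variables.
    return [p for p in product(range(w + 1), repeat=N) if sum(p) <= w]

def K_legendre(Lam):
    # For orthonormal tensor Legendre polynomials, sup_y sum_p |psi_p(y)|^2 is
    # attained at y = (1, ..., 1), where |psi_p(1)|^2 = prod_n (2 p_n + 1).
    K = 0
    for p in Lam:
        term = 1
        for pn in p:
            term *= 2 * pn + 1
        K += term
    return K

Lam = total_degree_set(3, 4)
card, K = len(Lam), K_legendre(Lam)
```

Here card = 35 and K = 553, far below the worst-case bound (#Λ)² = 1225, in line with the Total Degree plots on the next slide.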

slide-52
SLIDE 52

Case of uniform random variables in [−1, 1]

For specific index sets Λ the condition can be improved. For instance, for the Total Degree polynomial space of degree w, the bound K(Λ) ≤ (#Λ)² is very conservative.

[Figure: log-log plots of (#Λ)², K(Λ) and #Λ versus the polynomial degree, for dimensions N = 2, N = 4 and N = 8.]

The bound on K(Λ) depends heavily on the underlying distribution. For instance:

  Chebyshev distribution ⟹ K(Λ) ≤ min{(#Λ)^{log 3 / log 2}, 2^N #Λ}
  Beta distribution with θ₁, θ₂ ∈ N ⟹ K(Λ) ≤ (#Λ)^{2 max{θ₁, θ₂} + 2}



slide-54
SLIDE 54

Some numerical examples – 1D function

Condition number of DᵀD

[Figure: cond(D_M(ω)ᵀ D_M(ω)) versus the polynomial degree w, N = 1. Left: M = c·#Λ with c = 2, 3, 5, 10, 20. Right: M = c·(#Λ)² with c = 0.5, 1, 1.5, 2, 3.]

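The experiment behind these plots is easy to reproduce. A sketch (with an illustrative seed) comparing the condition number of the normalized Gram matrix (1/M) DᵀD under the linear and the quadratic sample-size rules; the degree and the constant c are arbitrary choices, not necessarily those used in the slides:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

def gram_cond(w, M):
    """Condition number of (1/M) D^T D for the orthonormal Legendre basis up to
    degree w, evaluated at M i.i.d. uniform points in [-1, 1]."""
    y = rng.uniform(-1.0, 1.0, M)
    D = legvander(y, w) * np.sqrt(2.0 * np.arange(w + 1) + 1.0)  # orthonormal scaling
    return np.linalg.cond(D.T @ D / M)

w = 10                                    # #Lambda = w + 1
c_lin = gram_cond(w, 3 * (w + 1))         # M = c * #Lambda
c_quad = gram_cond(w, 3 * (w + 1) ** 2)   # M = c * (#Lambda)^2
```

With the quadratic rule the Gram matrix stays close to the identity, while the linear rule typically gives a noticeably larger condition number, mirroring the two panels above.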

slide-55
SLIDE 55

Some numerical examples – 1D function

Approximation of the meromorphic function φ(y) = 1 / (1 + 0.5y).

[Figure: error ‖φ − Π^w_{M(ω)} φ‖_cv versus the polynomial degree w, N = 1. Left: M = c·#Λ with c = 2, 3, 5, 10, 20. Right: M = c·(#Λ)² with c = 0.5, 1, 1.5, 2, 3.]

Error with respect to the polynomial degree.


slide-56
SLIDE 56

Some numerical examples – 1D function

Approximation of the meromorphic function φ(y) = 1 / (1 + 0.5y).

[Figure: error ‖φ − Π^w_{M(ω)} φ‖_cv versus the total number of sampling points M, N = 1, for M = c·#Λ^α with (c, α) = (2, 1), (20, 1), (1, 2), (3, 2).]

Error with respect to the total number of sampling points.

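A sketch reproducing the flavour of this experiment: fit φ(y) = 1/(1 + 0.5y) by discrete least squares at uniform random points and measure the error on a fine grid, standing in for the cross-validation error ‖·‖_cv; the degree, the sample-size constant and the seed are illustrative choices:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(1)
phi = lambda y: 1.0 / (1.0 + 0.5 * y)

w, c = 15, 20
M = c * (w + 1)                          # linear rule M = c * #Lambda
y = rng.uniform(-1.0, 1.0, M)
coef, *_ = np.linalg.lstsq(legvander(y, w), phi(y), rcond=None)

grid = np.linspace(-1.0, 1.0, 1001)      # fine grid standing in for the cv error
err = np.max(np.abs(phi(grid) - legvander(grid, w) @ coef))
```

φ is analytic in a Bernstein ellipse around [−1, 1] (its pole sits at y = −2), so the error decays geometrically in w, which is the straight-line behaviour visible in the semilog plots.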

slide-57
SLIDE 57

Some numerical examples

Condition number of DᵀD – multiD – Total Degree polynomial space

[Figure: cond(D_M(ω)ᵀ D_M(ω)) versus w for the Total Degree space, N = 2 (top) and N = 4 (bottom). Left: M = c·#Λ with c = 1.1, 1.25, 2, 5, 20. Right: M = c·(#Λ)² with c = 0.5, 1, 2.]


slide-58
SLIDE 58

Elliptic PDE with random inclusions

We derived the theoretical bound

  E(‖u − Tτ ∘ Π^M_Λ u‖²_{L²ρ(Γ,H¹₀(D))}) ≤ c₁ e^{−c₂ M^{1/(1+2N)}},

with constants c₁, c₂ depending on N.

[Figure: E(‖Q − P^m_n Q‖_cv) versus n, for d = 2, 4 and 8, comparing n = m², n = 10m and n = 3m, with reference curves ∼ m^{−1/2} and n = 2gd(eζ)^{−1} m^{2+d^{−1}}.]


slide-59
SLIDE 59

Cantilever beam

Linear elasticity equations. The Young modulus is uncertain in each brick: E_i = e^{7+Y_i} in Ω_i, with Y_i ∼ U([−1, 1]) i.i.d. Uncertainty analysis on the maximum vertical displacement.

[Figure: left, condition number cond(D_M(ω)ᵀ D_M(ω)) versus w, Total Degree, N = 7, M = c·#Λ with c = 1.1, 3, 10; right, error ‖QOI₆(u) − Π^w_{M(ω)} QOI₆(u)‖_cv for the same cases.]


slide-60
SLIDE 60

Improvements on the quadratic relation

Improvements can be obtained by sampling from a different distribution ρ̂. Let us consider the weighted least-squares approximation

  ûΛ,M = argmin_{v ∈ PΛ(Γ)⊗V} (1/M) Σ_{k=1}^{M} (ρ(y^{(k)}) / ρ̂(y^{(k)})) ‖u^{(k)} − v(y^{(k)})‖²_V,

where the sample {y_k}_k is drawn from the distribution ρ̂(y)dy.

ρ(y) = ρ̂(y) = Chebyshev distribution in [−1, 1]^N: the relation M ∝ min{2^N #Λ, (#Λ)^{log 3 / log 2}} is enough to guarantee optimal convergence [Chkifa-Cohen-Migliorati-N.-Tempone '14].

ρ(y) = uniform and ρ̂(y) = Chebyshev distribution in [−1, 1]^N: the relation M ∝ 2^N #Λ guarantees optimal convergence [Rauhut-Ward '12]. However, the constant depends on N [Yan-Guo-Xiu '12].

ρ(y) = Gaussian: still unclear. Numerically, the situation seems to be worse. Improvements suggested in [Tang-Zhou '14].

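The weighted least-squares estimator above can be sketched directly: sample from the Chebyshev density ρ̂(y) = 1/(π√(1 − y²)), weight each residual by ρ/ρ̂, and solve the resulting weighted problem with an ordinary lstsq on rescaled rows. The target u(y) = e^y is a hypothetical stand-in for a PDE quantity of interest:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)
u = lambda y: np.exp(y)                  # hypothetical smooth QoI

w, M = 8, 200
y = np.cos(np.pi * rng.uniform(0.0, 1.0, M))     # Chebyshev-distributed sample
wts = 0.5 * np.pi * np.sqrt(1.0 - y ** 2)        # rho / rho_hat, with rho uniform = 1/2
sw = np.sqrt(wts)
coef, *_ = np.linalg.lstsq(sw[:, None] * legvander(y, w), sw * u(y), rcond=None)

grid = np.linspace(-1.0, 1.0, 501)
err = np.max(np.abs(u(grid) - legvander(grid, w) @ coef))
```

Multiplying both the design matrix rows and the data by √(ρ/ρ̂) is exactly the weighted minimization written on the slide, so the recovered coefficients still target the Legendre (uniform-measure) expansion even though the points are Chebyshev-distributed.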


slide-64
SLIDE 64

Numerical example with Chebyshev preconditioning

Expansion in Legendre polynomials (ρ(y) = uniform) and samples from the Chebyshev distribution (ρ̂(y) = Chebyshev).

[Figure: left, condition number cond(D_crossᵀ D_cross) versus w, Total Degree, M = 3·#Λ, for d = 1, 2, 3, 4 (µ and µ+σ over repetitions); right, L∞ error for u(y) = (1 + (0.7/(2N)) Σ_{n=1}^{N} y_n)^{−1} in the same cases.]


slide-65
SLIDE 65

Adaptive construction of polynomial spaces

{Λ_k}_{k≥0} is a sequence of downward closed multi-index sets, with Λ₀ = {0}. The sequence is adaptively computed by means of greedy algorithms based on the random discrete L² projection.

Definitions:

  Margin M(Λ) associated to a multi-index set Λ: M(Λ) = {p : p ∉ Λ and ∃ j > 0 : p − e_j ∈ Λ}
  Reduced margin R(Λ) associated to a multi-index set Λ: R(Λ) = {p : p ∉ Λ and ∀ j = 1, …, d : p_j ≠ 0 ⇒ p − e_j ∈ Λ}

[Figures: a set Λ with its margin; a set Λ with its reduced margin.]

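The margin and reduced margin are purely combinatorial and can be sketched in a few lines (d = 2 for concreteness; the index set in the usage line is a hypothetical example):

```python
def margin(Lam):
    """M(Lam) in d = 2: indices p outside Lam with some backward neighbour in Lam."""
    S = set(Lam)
    succ = {(p[0] + 1, p[1]) for p in S} | {(p[0], p[1] + 1) for p in S}
    return succ - S

def reduced_margin(Lam):
    """R(Lam): p outside Lam whose EVERY backward neighbour p - e_j (p_j > 0) lies in Lam."""
    S = set(Lam)
    return {p for p in margin(Lam)
            if (p[0] == 0 or (p[0] - 1, p[1]) in S)
            and (p[1] == 0 or (p[0], p[1] - 1) in S)}

Lam = {(0, 0), (1, 0), (0, 1), (2, 0)}   # a hypothetical downward closed set
```

Adding any subset of R(Λ) keeps Λ downward closed, which is why the greedy algorithms on the following slides enrich Λ only through the reduced margin.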


slide-68
SLIDE 68

The Dörfler marking

Idea proposed by W. Dörfler in 1996 for Adaptive Finite Elements.

Given a multi-index set Λ, a subset R ⊆ R(Λ), a (continuous) function e : R → ℝ and a parameter θ ∈ (0, 1], we define a procedure F = Dörfler(R, e, θ) that computes a set F ⊆ R ⊆ R(Λ) of minimal cardinality such that

  Σ_{p ∈ F} e(p)² ≥ θ Σ_{p ∈ R} e(p)².

In practice, for any p ∈ R, the error indicator e(p) will be either an estimator of the coefficient c_p of the function u expanded over the Legendre basis, or the projected residual on the p-th Legendre basis function. This corresponds to choosing a fraction θ of the energy associated with the (estimates of the) coefficients in the set R.

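Since the indicators e(p)² are nonnegative, a set F of minimal cardinality reaching the fraction θ of the total energy is obtained greedily, by accumulating indicators in decreasing order. A minimal sketch of the Dörfler(R, e, θ) procedure:

```python
def doerfler(R, e, theta):
    """Return a minimal-cardinality F within R with
    sum_{p in F} e(p)^2 >= theta * sum_{p in R} e(p)^2."""
    total = sum(e(p) ** 2 for p in R)
    F, acc = set(), 0.0
    # Greedy on decreasing indicators: the shortest prefix reaching the
    # threshold has minimal cardinality among all admissible subsets.
    for p in sorted(R, key=lambda q: e(q) ** 2, reverse=True):
        if acc >= theta * total:
            break
        F.add(p)
        acc += e(p) ** 2
    return F

ind = {(1, 0): 3.0, (0, 1): 1.0, (2, 0): 2.0}   # hypothetical indicators
```

With θ close to 1 almost every index is kept (conservative enrichment); a small θ keeps only the dominant indices.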


slide-70
SLIDE 70

Orthogonal Matching Pursuit with Dörfler marking

Algorithm 1: Orthogonal Matching Pursuit with Dörfler marking

  Set r₀ = u(y), u₀ ≡ 0 and Λ₀ = {0}
  for k = 1, …, k_max do
    F₁ = Dörfler(R(Λ_{k−1}), {|(r_{k−1}, ψ_p)_{M,V}|}_p, θ₁)
    Λ̃_k = Λ_{k−1} ∪ F₁
    u_k = argmin_{v ∈ P_{Λ̃_k}} ‖u − v‖_{M,V},  u_k = Σ_{p ∈ Λ̃_k} c_p^{(k)} ψ_p
    F₂ = Dörfler(F₁, {c_p^{(k)}}_p, θ₂)
    Λ_k = Λ_{k−1} ∪ F₂
    r_k = u − u_k|_{Λ_k}
  end for

θ₁ ∈ (0, 1) and θ₂ = 1: Dörfler marking only with the correlations.
θ₁ = 1 and θ₂ ∈ (0, 1): Dörfler marking only with the random discrete L² projection on Λ_{k−1} ∪ R(Λ_{k−1}).

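A compact sketch of the θ₁ = 1 flavour of this algorithm (least-squares fit on Λ ∪ R(Λ), then Dörfler marking on the estimated coefficients), in dimension d = 2. The target function, the θ value and the number of iterations are illustrative assumptions, and M_k ∝ (#Λ_k)² follows the stability theory of the first part:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(3)
u = lambda Y: np.exp(Y[:, 0] + 0.5 * Y[:, 1])     # hypothetical target, d = 2

def design(Y, Lam):
    # Orthonormal tensor Legendre design matrix restricted to the indices in Lam.
    deg = max(max(p[0] for p in Lam), max(p[1] for p in Lam))
    scale = np.sqrt(2.0 * np.arange(deg + 1) + 1.0)
    V0 = legvander(Y[:, 0], deg) * scale
    V1 = legvander(Y[:, 1], deg) * scale
    return np.column_stack([V0[:, p[0]] * V1[:, p[1]] for p in Lam])

def reduced_margin(S):
    cand = {(p[0] + 1, p[1]) for p in S} | {(p[0], p[1] + 1) for p in S}
    return {p for p in cand - S
            if (p[0] == 0 or (p[0] - 1, p[1]) in S)
            and (p[1] == 0 or (p[0], p[1] - 1) in S)}

def doerfler(e, theta):
    # Minimal-cardinality set carrying a fraction theta of the indicator energy.
    total = sum(v ** 2 for v in e.values())
    F, acc = set(), 0.0
    for p in sorted(e, key=lambda q: e[q] ** 2, reverse=True):
        if acc >= theta * total:
            break
        F.add(p)
        acc += e[p] ** 2
    return F

Lam = {(0, 0)}
for k in range(5):
    cand = sorted(Lam | reduced_margin(Lam))
    M = 3 * len(cand) ** 2                 # M_k proportional to (#Lambda_k)^2
    Y = rng.uniform(-1.0, 1.0, (M, 2))
    coef, *_ = np.linalg.lstsq(design(Y, cand), u(Y), rcond=None)
    est = {p: c for p, c in zip(cand, coef) if p not in Lam}
    Lam = Lam | doerfler(est, 0.9)         # theta_2 = 0.9: keep the bulk of the energy
```

Because only subsets of the reduced margin are ever added, every iterate Λ_k stays downward closed, as the full Algorithm 1 requires.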

slide-71
SLIDE 71

Some remarks and open issues

The first Dörfler marking performs a screening of the reduced margin, to avoid a discrete L² minimization over a too-large polynomial space. At each step the correlations {|(r_{k−1}, ψ_p)_{M,V}| : p ∈ R(Λ_{k−1})} are mutually uncoupled and cheap to compute, but might provide only a rough estimate of the coefficients (depending on the choice of M_k).

The second Dörfler marking performs a selection based on the more accurate estimates of the coefficients coming from the L² projection.

At each step the adaptive algorithm remains stable and accurate by choosing M_k ∝ (#Λ_k)² (a consequence of the theory in the first part).

The adaptive algorithm generates a sequence {Λ_k}_{k≥0} of only quasi-best N-term sets. Rate of convergence? Choice of θ₁, θ₂? What if M_k ∝ #Λ_k?


slide-72
SLIDE 72

A numerical test

Approximation of a meromorphic function of 16 variables:

  φ(y) = 1 / (1 − γ · y),  y ∼ U([−1, 1]^16),  γ = 0.3 · (1, 5·10⁻¹, 10⁻¹, 5·10⁻², …, 5·10⁻⁸)

[Figure: error ‖φ − Π^m_{Λ_k} φ‖_cv versus #Λ_k, for m = (#Λ)² and m = 3·#Λ, with (θ₁, θ₂) = (0.5, 0.2) and (0.9, 0.7).]


slide-73
SLIDE 73

Conclusions

Outline

1 Introduction – PDEs with random parameters
2 Stochastic polynomial approximation
3 Discrete projection using random evaluations: Stability; Convergence results in expectation and probability; The case of noisy observations
4 Conclusions


slide-74
SLIDE 74

Conclusions

We have derived conditions under which the random discrete least-squares projection is stable and optimally convergent. The condition M ≥ C(#Λ)² for uniform points and Legendre polynomials holds in any dimension and for any "shape" of the polynomial space, opening the possibility of adaptive algorithms.

The condition M ∼ (#Λ)² seems to be too stringent in high dimension, and a linear scaling is often enough, making this technique more attractive for high-dimensional problems. Questions remain open on preconditioned least squares and on unbounded random variables.

We have proposed an adaptive algorithm based on a double Dörfler marking that performs very well. The analysis is still ongoing. Very high/infinite dimensional approximations are possible with this algorithm.

Other sampling schemes can be used to build the discrete least-squares (DLS) projection. In [Migliorati-N. '15] we have shown that DLS with low-discrepancy sequences has stability conditions similar to those of the random DLS, at least for tensor-product polynomial spaces.



slide-80
SLIDE 80


Thank you for your attention!


slide-81
SLIDE 81

References

G. Migliorati, F. Nobile and R. Tempone, Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points, in preparation.

G. Migliorati and F. Nobile, Analysis of discrete least squares on multivariate polynomial spaces with evaluations in low-discrepancy point sets, MATHICSE Report 25.2014, submitted.

A. Chkifa, A. Cohen, G. Migliorati, F. Nobile and R. Tempone, Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs, to appear in M2AN, 2015.

G. Migliorati, Adaptive polynomial approximation by means of random discrete least squares, ENUMATH 2013 Proceedings, LNCSE, Springer.

G. Migliorati, F. Nobile, E. von Schwerin and R. Tempone, Analysis of the discrete L² projection on polynomial spaces with random evaluations, Found. Comp. Math., 14(3), 2014.

G. Migliorati, F. Nobile, E. von Schwerin and R. Tempone, Approximation of quantities of interest in stochastic PDEs by the random discrete L² projection on polynomial spaces, SIAM J. Sci. Comput., 35(3), 2013.
