slide-1
SLIDE 1

A Reduced-Basis Approach to the Reduction of Computations in Multiscale Models & Uncertainty Quantification

Sébastien Boyaval 1,2,3

  • 1 Univ. Paris Est, Laboratoire d’hydraulique Saint-Venant (EDF R&D – Ecole des Ponts Paristech – CETMEF), Chatou, France

  • 2 INRIA, MICMAC team-project, Rocquencourt, France

RICAM workshop on Numerical Analysis of Multiscale Problems & Stochastic Modelling, Linz, December 2011

slide-2
SLIDE 2

Outline

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?

  • S. Boyaval

Reduced-Basis and stochastics 2 / 43

slide-3
SLIDE 3

Taking perturbations into account is demanding

Micro-Macro models (multiscale) couple algorithms:

(S/P/D)Equation[α](u) = 0 in Ω (+ BC on ∂Ω)   (1)
α(x) = F(w(x)) ∀x ∈ Ω   (2)
(s/p/d)equation[x](w) = 0 in Y[x] ∀x ∈ Ω   (3)

(using mathematical frameworks: homogenization, non-equilibrium molecular dynamics, ...)

Uncertainty quantification (UQ) cares about fine statistics of f(u) under the law generated by α.

A Reduced-Basis approach to computational reductions?

[2] B., SIAM MMS 7(1), 2008. [3] B., Le Bris, Maday, Nguyen, Patera, CMAME, 2009. [4] B., Lelièvre, CMS 8, 2010.


slide-4
SLIDE 4

Outline

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-5
SLIDE 5

Ideal composites with 2-scale heterogeneity

Figure: left, a Scanning Electron Microscope image (Google entry “microstructure”); right, a synthetic material (numerical modelling).


slide-6
SLIDE 6

Problem: fast-oscillating diffusion

Numerical problem: compute uǫ(x), the solution in D ⊂ R^d to

−div(Aǫ(x)∇uǫ(x)) = f(x)   (4)

+ Boundary Conditions (BC) for uǫ on ∂D; but uǫ is expected to vary “fast”, like Aǫ, where “fast” = at scale ǫ ≪ |D|.

Finite-Element (FE) discretization using simplices of size h ≤ ǫ for D ⇒ O(|D|/h^d) ≥ O(|D|/ǫ^d) ≫ 1 d.o.f. required!

Discretization of a transformed problem → homogenization.


slide-7
SLIDE 7

Sketch of homogenization procedure

  • I. The problem transformed to a new problem:

For some (Aǫ)ǫ>0, find u⋆ ← uǫ solution to −div(A⋆∇u⋆) = f + BC, where A⋆ is the H-limit of Aǫ [Murat-Tartar] as ǫ → 0.

Remark 1: (Aǫ)ǫ>0 is not given in practice; weak hypotheses suffice, but computing oscillating functions (zǫj) ∈ H1(D) such that Aǫ(x)∇zǫj(x) · ei → A⋆(x)ej · ei (L2(D)-weak, as ǫ → 0) may be hard!

  • II. How to make computational use of the new problem?

Can we compute u⋆ faster than with FEM, and evaluate the error?

Remark 2: if Aǫ(x) = A(x, x/ǫ − [x/ǫ]) is 2-scale periodic, then zǫj = xj + ǫ wj(x, x/ǫ) is computable through

−divy(A(x, y) · [ej + ∇y wj(x, y)]) = 0,

and ‖uǫ − u⋆‖L2(D) = O(ǫ) holds, plus a “corrected” H1 approximation.


slide-8
SLIDE 8

Efficient multiscale computations

The transformed problem is still a numerical challenge: A⋆(x) at many quadrature points requires solving, at many x ∈ D,

−divy(A(x, y) · [ej + ∇y wj(x, y)]) = 0.

This is a generic difficulty in all numerical homogenization procedures (MsFEM, HMM, ...: many micro computations) and a paradigm for many micro-macro models. A posteriori error control can moderate the problem by choosing good quadratures in D [Ohlberger; Abdulle ...], yet some gain is still possible using a Reduced-Basis approximation (invoking accurate solutions only at a few well-chosen points xn, n = 1, ..., N):

wi(x, ·) ≃ wi,N(x, ·) = Σ_{n=1}^{N} αn(x) wi(xn, ·)


slide-9
SLIDE 9

Outline of the talk

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-10
SLIDE 10

Standard (elliptic) RB framework

Assume one can parametrize {wi(x), x ∈ D} with µ ∈ Λ ⊂ R^P.

Problem: compute wi(µ) ∈ X = H1♯(Y), for many µ ∈ Λ ⊂ R^P, solution to a(wi, v; µ) = fi(v; µ) ∀v ∈ X, where

a(u, v; µ(x)) = ∫_Y A(x, y)∇u(y) · ∇v(y) dy   ∀(u, v) ∈ X × X,
fi(v; µ(x)) = −∫_Y A(x, y)ei · ∇v(y) dy   ∀v ∈ X, 1 ≤ i ≤ n,

+ outputs A⋆ij(x) at quadrature points x:

sij(µ) = −fj(wi; µ(x)) = ∫_Y A(x, y)∇wi(x, y) · ej dy.

RB method: w(µ) ≃ wN(µ) ∈ XN spanned by “snapshots”:

1. select XN = Span{w(µn), n = 1, ..., N} ⊂ X (hopefully {w(µ), µ ∈ Λ} is close to small-dimensional),
2. compute fast many wN(µ) ∈ arginf{‖w(µ) − v‖µ,X, v ∈ XN} (reduction hopefully efficient at a given fixed accuracy ε).
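Steps 1-2 can be sketched in a few lines of Python. The full-order model below is a toy stand-in (a 1D two-material diffusion problem; the coefficient, grid, and parameter values are all assumptions, not the cell problems of the talk): snapshots are orthonormalized by QR, and the online stage is a Galerkin projection onto their span.

```python
import numpy as np

def fem_solve(mu, n=100):
    """P1 finite elements for -(kappa u')' = 1 on (0,1), u(0)=u(1)=0,
    with kappa = 1 on [0, 1/2) and kappa = mu on [1/2, 1] (toy problem)."""
    h = 1.0 / n
    x_mid = (np.arange(n) + 0.5) * h
    kappa = np.where(x_mid < 0.5, 1.0, mu)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):
        A[e:e + 2, e:e + 2] += kappa[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        b[e:e + 2] += h / 2.0                # load f = 1
    A, b = A[1:-1, 1:-1], b[1:-1]            # homogeneous Dirichlet BC
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return u, A, b

# offline: snapshots at a few parameter values, orthonormalized by QR
snapshot_mus = [0.5, 1.0, 2.0, 5.0]
S = np.column_stack([fem_solve(m)[0][1:-1] for m in snapshot_mus])
B, _ = np.linalg.qr(S)                       # reduced basis X_N, here N = 4

# online: Galerkin projection onto X_N for a new parameter value
mu = 3.0
u_full, A, b = fem_solve(mu)
u_rb = B @ np.linalg.solve(B.T @ A @ B, B.T @ b)
err = np.linalg.norm(u_rb - u_full[1:-1]) / np.linalg.norm(u_full[1:-1])
```

Since the solution manifold of this toy problem is smoothly parametrized, a handful of snapshots already reproduces the full solve to small relative error.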


slide-11
SLIDE 11

RB principles to be put in practice

A µ-parametrized elliptic problem: a(w(µ), v; µ) = l(v) ∀v ∈ X. Given µ, find w(µ) ∈ X solution to −div(A(µ)∇w(µ)) = f + BC.

  • wN = best approximation in the energy norm (Galerkin): ‖·‖µ,X = √a(·, ·; µ),
  • XN = N-dimensional linear space minimizing the worst case sup_µ ‖w(µ) − wN(µ)‖µ,X,
  • goal-oriented cases (like homogenization) → RB also for the adjoint equation.

[Porsching ... Maday, Patera, Turinici, Prud’homme, Rozza, Haasdonk, Ohlberger ...]

Amounts to a numerical approximation of the “best N-dimensional linear space”: XN is a minimizer of

inf_{µ1,...,µN} sup_µ ‖w(µ) − wN(µ)‖µ,X ≤ ǫ

[Maday, Patera, Turinici, Prud’homme, Buffa, Binev, Cohen, Dahmen, DeVore ...]


slide-12
SLIDE 12

1) Constructive approximation of XN

µ discretization + a posteriori estimator + greedy algorithm.

Provided a good discretization X𝒩 of X (d.o.f. 𝒩 ≫ 1), build XN = Span{w𝒩(µn) ≈ w(µn); n = 1, ..., N} using

1. a posteriori estimators ∆𝒩,N(µ) ≥ ‖w𝒩(µ) − wN(µ)‖µ,X,
2. a greedy algorithm selecting the µn iteratively in a training sample while sup{∆𝒩,N(µ), µ ∈ sample} ≥ ε:

µ1 = rand();   µn+1 ∈ argsup{∆𝒩,n(µ), µ ∈ sample},   n = 1, ..., N − 1,

and use XN and precomputations, for all queried µ, to compute fast approximations wN(µ) that can be certified: ∆𝒩,N(µ) ≤ ǫ!

Technical issue (application): cheap estimator, precomputation.
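The greedy loop can be sketched as follows. Everything here is an idealized assumption for illustration: the “solution map” is the analytic family u(µ)(x) = 1/(1 + µx) on a grid, and the true orthogonal-projection error stands in for the cheap a posteriori estimator ∆ of the talk.

```python
import numpy as np

# toy "solution map" (an assumption): smoothly parametrized family
# with fast-decaying Kolmogorov N-width
x = np.linspace(0.0, 1.0, 200)

def solve(mu):
    return 1.0 / (1.0 + mu * x)

def proj_error(mu, basis):
    """Norm of the orthogonal-projection error of u(mu) onto Span(basis);
    it stands in for the a posteriori estimator Delta(mu)."""
    u = solve(mu)
    if basis:
        Q = np.column_stack(basis)
        u = u - Q @ (Q.T @ u)
    return np.linalg.norm(u)

train = np.linspace(0.1, 10.0, 100)      # training sample in Lambda
tol, basis, picked = 1e-6, [], []

while True:                              # greedy selection loop
    errs = [proj_error(mu, basis) for mu in train]
    worst = int(np.argmax(errs))
    if errs[worst] < tol:                # certified on the sample: stop
        break
    u = solve(train[worst])              # new snapshot at the worst mu
    if basis:
        Q = np.column_stack(basis)
        u = u - Q @ (Q.T @ u)            # Gram-Schmidt against the basis
    basis.append(u / np.linalg.norm(u))
    picked.append(train[worst])

N = len(basis)
```

Because the family is analytic in µ, the sampled projection error decays fast and the loop stops with a small basis, illustrating the fast-decaying N-width the method relies on.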


slide-13
SLIDE 13

A case-dependent step in RB methodology!

Residual error estimation (with αA > 0 the coercivity constant):

‖wi(x, ·) − wi,N(x, ·)‖X ≤ ∆N(wi(x, ·)) = ‖a(wi(x, ·) − wi,N(x, ·), · ; x)‖X′ / αA(x),

|sij(x) − sij,N(x)| ≤ ∆s_ij,N(x) = αA(x) ∆N(wi(x, ·)) ∆N(wj(x, ·)),

which controls the RB error in the outputs (w.r.t. the 𝒩-dimensional FE space X𝒩):

sij,N(x) = ∫_Y A(x, y)∇wi,N(x, y) · ej dy,   A⋆N(x)ij = ∫_Y A(x, y)[ei + ∇y wi,N(x, y)] · ej dy.


slide-14
SLIDE 14

2) Assembling fast the problem at any µ

Typical parametrization for homogenization: a moving inclusion with varying contrast.

Figure: unit cell (y1, y2) ∈ Y with an inclusion A(x, ·) located by the parameters b1, c1, b2, c2.

One needs to rebuild the FE rigidity matrix at each x ∈ D!

a(u, v; x) = ∫_Y A(x, y)∇u(y) · ∇v(y) dy   ∀(u, v) ∈ X𝒩 × X𝒩


slide-15
SLIDE 15

Preprocessing trick: affine parametrization

∀x ∈ D, map Y = ⋃_{k=1}^{d} Yk(x), partitioned into d nonoverlapping subsets Yk(x), to Y = ⋃_{k=1}^{d} Y0k, using ...

  • affine homeomorphisms Φk(x, ·) : Y0k → Yk(x), 1 ≤ k ≤ d,
  • an affine parametrization for (A(x, Φ(x, ·)))x∈D on each Y0k:

A(x, Φk(x, y)) = A0(y) + Σ_{q=1}^{Z} Θq(x) Aq(y)   ∀y ∈ Y0k   (5)

⇒ the matrix is easily computed after the variable change y′ = Φ−1(x, y) (but the new affine-equivalent mesh has spline-distorted shape functions φi; beware the aspect ratio!):

Σ_q Θq(x) ∫_Y Aq(y′)∇φi(y′) · ∇φj(y′) |det(∇yΦk(x, y′))| dy′
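The offline/online split enabled by an affine decomposition like (5) can be sketched as follows. The matrices here are synthetic stand-ins (random symmetric blocks and a random reduced basis, all assumptions): the point is only that the parameter-independent reduced blocks Bᵀ Aq B are precomputed once, and for each new µ only an N × N matrix is assembled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, Q = 500, 8, 3                       # full dim, RB dim, affine terms

# synthetic affine components A_q and reduced basis B (stand-ins for the
# FE matrices on the reference subdomains and for the RB snapshots)
A_q = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(Q)]
A_q = [(M + M.T) / 2.0 for M in A_q]      # symmetrize
B, _ = np.linalg.qr(rng.standard_normal((n, N)))

A_q_red = [B.T @ M @ B for M in A_q]      # offline: N x N blocks, done once

def theta(mu):                            # parameter functions Theta_q(mu)
    return np.array([1.0, mu, mu ** 2])

# online: assemble the reduced matrix without touching any size-n object
mu = 0.7
A_red_online = sum(t * M for t, M in zip(theta(mu), A_q_red))

# reference: project the fully assembled matrix (what we avoid doing online)
A_full = sum(t * M for t, M in zip(theta(mu), A_q))
A_red_reference = B.T @ A_full @ B
```

Since projection is linear, the cheap online assembly agrees exactly with projecting the fully assembled matrix; the online cost is O(Q N²) instead of O(n²).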


slide-16
SLIDE 16


Figure: For each parameter value x, the cell with inclusion Q(x) (on the right) is mapped through the piecewise affine homeomorphism Φ(x, ·) from a reference cell with inclusion Q0 (on the left).


slide-17
SLIDE 17

Numerical results

Given δ = .1 (mesh distortion margin), the parameter is 5-dimensional: (b1, c1, b2, c2, θ) ∈ [.25−δ; .25+δ]² × [.75−δ; .75+δ]² × (0.1, 1).

Figure: maximal H1 relative error for w1,N and w2,N (left) and maximal relative error for the aij (right), w.r.t. the size N of the reduced basis (log scale), while “practicing” in Λ ⊂ D (online).


slide-18
SLIDE 18

N/𝒩  | hY   | Offline | ǫ    | RB for A⋆N (online)    | FE for A⋆N (direct)
1/5  | 1E−1 | 17 s    | 2E−2 | 4 + 3 = 7 s            | 27 s
1/5  | 1E−1 | 15 s    | 2E−3 | 410 + 330 = 740 s      | 3100 s
1/20 | 5E−2 | 42 s    | 2E−2 | 16 + 10 = 26 s         | 520 s
1/20 | 5E−2 | 53 s    | 2E−3 | 1600 + 1000 = 2600 s   | 37000 s

CPU time (in seconds) needed by a Matlab code with an Intel Pentium IV processor (3.0 GHz / 1 GB) to approximate the FE matrix for the homogenized problem, either with a direct FE approach or with an RB method. In the RB approach, one has to take into account the RB construction (offline algorithm with a sample of p parameter values) and the online RB computation of one homogenized solution plus the online a posteriori error estimation (hence the two terms, RB solution + estimation, in the RB online column).


slide-19
SLIDE 19

(hhom = 3ǫ/2)

hY   | ǫ (‖uǫ − u⋆‖0 ≤ C1ǫ) | ‖u⋆N − u⋆𝒩‖1 | √ǫ (‖∇rǫ‖0 ≤ C2√ǫ) | ∆N
1E−1 | 2.0E−2               | 1.2E−4        | 1.4E−1              | 2.9E−2
1E−1 | 2.0E−3               | 4.7E−3        | 4.5E−2              | 1.0E−2
5E−2 | 2.0E−2               | 3.1E−3        | 1.4E−1              | 8.6E−5
5E−2 | 2.0E−3               | 1.1E−3        | 4.5E−2              | 3.0E−2

Theoretical correction error ∇rǫ = ∇uǫ − (I + W)∇u⋆ for the homogenized solution, and RB numerical approximation error for the homogenized solution, when δ = .2, θ0 = .99, p = 50 and N = 20.


slide-20
SLIDE 20

Some reasons for the success

Cornerstones of a full mathematical understanding:

  • The set of solutions has fast-decaying Kolmogorov N-widths in H1 (when the problem is smoothly parametrized; see e.g. [Maday, Patera, Turinici]).
  • The approximation by the greedy algorithm does not degrade that rate too much, at least asymptotically (see [Buffa, Maday, Patera, Turinici] or [Binev, Cohen, Dahmen, DeVore, Petrova, Wojtaszczyk]).
  • The implementation relies on precomputed data as much as possible (see [Huynh, Rozza, Patera]).


slide-21
SLIDE 21

Outline of the talk

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-22
SLIDE 22

A simple uncertainty quantification problem

−div(k(x)∇u(x, ω)) = 0 in D,
n(x)ᵀ k(x)∇u(x, ω) + Bi(x, ω) u(x, ω) = g(x) on ∂D,

  • random “high-dimensional” parameter: Bi(x, ω)/E(Bi(x, ω)),
  • random output: S(ω) = ∫_{ΓR} u(·, ω),
  • additional control parameters: E(Bi(x, ω)), k(x).

Figure: domain with subdomains D1, D2 and boundary parts ΓB, ΓN, ΓR.


slide-23
SLIDE 23

Karhunen–Loève expansion of random input

Bi(x, ω) = E(Bi)(x) [ G(x) + Υ Σ_{k=1}^{K} √λk Zk(ω) Φk(x) ]

  • K = rank (possibly ∞) of the covariance operator of Bi(x, ω),
  • eigenpairs ((Υ²λk), Φk(x)), 1 ≤ k ≤ K,
  • (Zk(ω))1≤k≤K = mutually uncorrelated L2_P(Ω) random variables,
  • Υ = positive amplitude parameter.

We require Zk(ω) ∈ L∞_P(Ω), so that Bi(x, ω) > 0 a.s.
We parametrize the set of realizations with Yk = Υ√λk Zk.
Remark: the extension to log-KL requires e.g. “empirical” interpolation.


slide-24
SLIDE 24

slide-25
SLIDE 25

Monte-Carlo convergence

Figure: Expected value EM[sN,K] and variance VM[sN,K] w.r.t. M (with k(x) = 1D1(x) + 2 × 1D2(x) and E(Bi)(x) ≡ 0.5). Correlation(x, y) = e^(−|x−y|²/0.05²), approximated with 20 modes.


slide-26
SLIDE 26

Error in MC estimation due to RB+KL


Figure: Error bounds for (a) EM(S) and (b) VarM(S) w.r.t. N and K.

[3] B., Le Bris, Maday, Nguyen and Patera, A Reduced Basis Approach for Variational Problems with Stochastic Parameters: Application to Heat Conduction with Variable Robin Coefficient, CMAME 198, 2009.


slide-27
SLIDE 27


Figure: Error bounds for EM(S) due to (a) approximation in H1(D) and (b) KL truncation w.r.t. N and K.


slide-28
SLIDE 28


Figure: Error bounds for VarM(S) due to (a) approximation in H1(D) and (b) KL truncation w.r.t. N and K.


slide-29
SLIDE 29

Just to get an idea of numbers. . .

K  | ♯µ    | N  | ∆N(µN+1) | ‖u − uN‖X | CPU (s)  | speed-up
3  | 2048  | 20 | 1.39e-3  | 1.32e-4   | 1.11e+2  | 33.8
3  | 2048  | 25 | 8.18e-6  | 4.05e-12  | 4.14e+2  | 19.4
8  | 65536 | 25 | 2.27e-4  | 1.57e-6   | 1.95e+4  | 19.4
13 | 2e+6  | 25 | 7.38e-4  | 4.57e-5   | 1.19e+6  | 19.9

Trial sample size = (24 × 24 = 256) × 2^Kmax. CPU time on a 2.40 GHz Intel Core (1 proc). Marginal speed-up per FE computation.


slide-30
SLIDE 30

Construction of a response surface

Figure: Map of VarM(s(k2, E(Bi))) computed with a naive MC method (10^4 realizations) and RB approximations at each parameter value.


slide-31
SLIDE 31

Outline of the talk

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-32
SLIDE 32

Parametrized r.v. Zµ ∈ L2(Ω) in Monte-Carlo

Goal: compute the expectation E(Zµ) for many µ.

Monte-Carlo with confidence intervals (CLT + Slutsky): ∀a > 0,

EM(Zµ) := (1/M) Σ_{m=1}^{M} Zµm → E(Zµ) P-a.s. as M → ∞,   VarM(Zµ) = ...,

P( |EM(Zµ) − E(Zµ)| ≤ a √(VarM(Zµ)/M) ) → ∫_{−a}^{a} e^(−x²/2)/√(2π) dx as M → ∞.

For faster MC with reduced variance, use control variates Yµ: e.g. compute E(Zµ) = E(Zµ − Yµ) + E(Yµ), where E(Yµ) is known (here E(Yµ) = 0) and Var(Zµ) ≥ Var(Zµ − Yµ).
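The control-variate identity can be checked on a toy example. The choice Z = e^U with control Y = U − 1/2, both driven by the same U ~ U(0,1), is an assumption for illustration; the variance-minimizing coefficient is the usual regression coefficient Cov(Z, Y)/Var(Y).

```python
import numpy as np

rng = np.random.default_rng(2)
M = 100_000
U = rng.uniform(size=M)

Z = np.exp(U)                  # target: E(Z) = e - 1
Y = U - 0.5                    # control variate with known mean E(Y) = 0

# variance-minimizing coefficient beta = Cov(Z, Y) / Var(Y)
beta = np.cov(Z, Y)[0, 1] / np.var(Y)

plain = Z.mean()               # plain Monte-Carlo estimator
cv = (Z - beta * Y).mean()     # control-variate estimator (+ beta*E(Y) = 0)

var_plain = np.var(Z)
var_cv = np.var(Z - beta * Y)  # same mean, much smaller variance
```

Both estimators target e − 1; since U and e^U are highly correlated, the control-variate sample variance is a small fraction of the plain one, so the confidence interval shrinks accordingly at fixed M.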


slide-33
SLIDE 33

Practically useful control variates

Ideally Yµ = Zµ − E(Zµ), so that Var(Zµ − Yµ) = 0; but in practice Ỹµ ≈ Yµ is chosen in a practically computable way to minimize

Var(Zµ − Ỹµ) = E( ((Zµ − E(Zµ)) − Ỹµ)² ) = E( (Yµ − Ỹµ)² ).

Faster MC with the RB control-variate approach: we choose

Ỹµ := Σ_{n=1}^{N} αn(µ) Yµn = Σ_{n=1}^{N} αn(µ) (Zµn − E(Zµn)),

where the αn(µ) minimize Var(Zµ − Ỹµ) for the given µ; then

EMsmall(Zµ − Ỹµ) = (1/Msmall) Σ_{m=1}^{Msmall} (Zµm − Ỹµm) → E(Zµ) P-a.s. as Msmall → ∞.


slide-34
SLIDE 34

Practically useful control variates

Ideally Yµ = Zµ − E(Zµ), so that Var(Zµ − Yµ) = 0; but in practice Ỹµ ≈ Yµ is chosen in a practically computable way to minimize

Var(Zµ − Ỹµ) = E( ((Zµ − E(Zµ)) − Ỹµ)² ) = E( (Yµ − Ỹµ)² ).

Faster MC with the RB control-variate approach: we choose

Ỹµ := Σ_{n=1}^{N} αn(µ) Yµn ≈ Σ_{n=1}^{N} αn(µ) (Zµn − EMlarge(Zµn)),

where the αn(µ) minimize the empirical variance VarMsmall(Zµ − Ỹµ) for the given µ; then

EMsmall(Zµ − Ỹµ) = (1/Msmall) Σ_{m=1}^{Msmall} (Zµm − Ỹµm) → E(Zµ) P-a.s. as Msmall → ∞.


slide-35
SLIDE 35

Effective variance reduction

Online: the variance reduction is numerically effective if, for any µ, one solves efficiently the least-squares problem

inf_{α1(µ),...,αN(µ)} VarMsmall( Zµ − Σ_{n=1}^{N} αn(µ) (Zµn − E(Zµn)) ).

Online marginal cost for one µ = O(N² Msmall).

Offline: N expensive computations EMlarge(Zµn) + greedy, so effective computational gains arise only in many-query limits (♯µ ≫ 1). OK for micro-macro models in materials science, finance, ...
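The online least-squares step can be sketched on a hypothetical parametrized random variable, Zµ = exp(µξ) with ξ ~ N(0,1); the snapshot parameters µn and the sample sizes are assumptions for illustration. The large-sample means play the role of the offline computations EMlarge(Zµn), and the αn(µ) are the empirical variance minimizers.

```python
import numpy as np

rng = np.random.default_rng(3)

def Z(mu, xi):
    # hypothetical parametrized r.v.: Z^mu = exp(mu * xi), xi ~ N(0,1),
    # with E(Z^mu) = exp(mu^2 / 2) known exactly, for checking
    return np.exp(mu * xi)

mus_rb = [0.2, 0.5, 0.8]                 # snapshot parameters mu_n
M_large, M_small = 200_000, 500

# offline: accurate means E_{M_large}(Z^{mu_n}) from a large sample
xi_large = rng.standard_normal(M_large)
means_rb = [Z(m, xi_large).mean() for m in mus_rb]

# online at a new mu: small sample, same driving noise for Z^mu, Y^{mu_n}
mu = 0.6
xi = rng.standard_normal(M_small)
z = Z(mu, xi)
Yc = np.column_stack([Z(m, xi) - mb for m, mb in zip(mus_rb, means_rb)])

# variance-minimizing alpha_n(mu): least squares of centered z on centered Y
alpha, *_ = np.linalg.lstsq(Yc - Yc.mean(0), z - z.mean(), rcond=None)

est = (z - Yc @ alpha).mean()            # control-variate estimate of E(Z^mu)
var_plain = np.var(z)
var_cv = np.var(z - Yc @ alpha)          # cost of the solve: O(N^2 M_small)
```

Because the least-squares fit is an orthogonal projection of the centered samples, the empirical variance of the residual never exceeds the plain sample variance, and the estimate stays consistent for E(Zµ).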


slide-36
SLIDE 36

Figure: Sample of VM(s(k2, E(Bi))) w.r.t. the number of control variates (offline variance reduction in the trial sample, variance in log scale). Construction times by inspection of 24 × 24 = 256 parameters using N = 25: 800 s (left, K = 8), 1300 s (right, K = 13).


slide-37
SLIDE 37

Outline of the talk

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-38
SLIDE 38

Viscoelastic fluid (dilute polymer solutions)

Direct computation with micro-macro models of particulate suspensions.

Figure: Gareth McKinley’s Non-Newtonian Fluid Dynamics Group at MIT (web.mit.edu/nnf/).

Weissenberg effect: extra-stress tensor τ(t, x) in Navier-Stokes:

ρ (∂tu + (u · ∇)u) = div(−pI + νD(u) + τ)

We want to compute τ = τᵀ from stochastic molecular models.


slide-39
SLIDE 39

Stochastic models in microphysics

Before closure assumptions, molecular models.

Navier-Stokes coupled through the extra-stress τ with:

1. a constitutive differential equation, e.g. Oldroyd-B:

λ (∂tτ + (u · ∇)τ − (∇u)τ − τ(∇u)ᵀ) = νp D(u) − τ

2. the statistical mean of configurations Xt(x) for “bead-spring” polymer models (Brownian particles, Langevin SDEs).


slide-40
SLIDE 40

Many dumbbells diluted in a flow

1. Fix the velocity gradient µn at x ∈ D on the time slab [Tn, Tn+1].
2. Compute the foot ψn(tn; x) of the characteristic through x at Tn+1.
3. Sample the initial condition Xµn_Tn(x) = Xµn−1_Tn(ψn(tn; x)) and solve

dXµn_t = ( µn Xµn_t − F(Xµn_t) ) dt + dBt,   t ∈ [Tn, Tn+1].

4. Couple with Navier-Stokes through the output τn+1(x) ∝ E( E( Xµn_Tn+1 ⊗ F(Xµn_Tn+1) | Xµn_Tn ) ).


slide-41
SLIDE 41

Numerical Results for FENE dumbbells

dXt = (µXt − F(Xt)) dt + dBt (the transport term (u · ∇x)Xt dt, struck out on the slide, is handled by the characteristics), where F(X) = X / (1 − |X|²/b) in d = 2, with b = 16.

SDE: Euler-Maruyama scheme (N = 100 steps × ∆t = 10−2), with X0 = (1, 1) and reflecting conditions at the boundary |X| = √b.

Parameter: µ ∈ [−1, 1]⁴ (tr(µ) = 0). Output: τ11.

Next figure: Var(Zµ − YµN) with respect to N when µ ∈ Λtest.
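The scheme above can be sketched as follows. The velocity gradient µ, the ensemble size, and the crude rejection step used below as a stand-in for the reflecting boundary condition are all assumptions (the talk's scheme reflects at |X| = √b rather than rejecting); the output is the Kramers-type average, up to physical constants.

```python
import numpy as np

rng = np.random.default_rng(4)
b, d = 16.0, 2
dt, n_steps, M = 1e-2, 100, 5000          # N = 100 steps of size 1e-2

def F(X):
    # FENE force F(X) = X / (1 - |X|^2 / b)
    r2 = np.sum(X ** 2, axis=1, keepdims=True)
    return X / (1.0 - r2 / b)

# a trace-free velocity-gradient parameter mu (an arbitrary choice)
mu = np.array([[0.5, 0.3], [0.2, -0.5]])

X = np.ones((M, d))                       # X_0 = (1, 1), every realization
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal((M, d))
    X_new = X + (X @ mu.T - F(X)) * dt + dB
    # crude stand-in for the reflecting boundary at |X| = sqrt(b):
    # reject moves leaving the ball (NOT the scheme used in the talk)
    out = np.sum(X_new ** 2, axis=1) >= 0.999 * b
    X_new[out] = X[out]
    X = X_new

tau11 = np.mean(X[:, 0] * F(X)[:, 0])     # Kramers-type output ~ tau_11
```

The rejection step keeps every realization strictly inside the ball |X| < √b, so the FENE force stays finite and the 11-component of E(X ⊗ F(X)) is well defined.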


slide-42
SLIDE 42

Variance reduction up to Var(Zµ) / Var(Zµ − YµN) ≃ 10^4

Figure: Var(Zµ − YµN) w.r.t. the number N of control variates (log scale), for two test configurations.


slide-43
SLIDE 43

Outline of the talk

1. Introduction: why do we need to reduce computations?
2. A multiscale paradigm: 2-scale homogenization
3. A standard use of the Reduced-Basis (RB) method
4. A paradigm for uncertainty quantification: PDEs with random input
5. RB Variance Reduction in parametrized Monte-Carlo
6. Another multiscale paradigm: complex fluids
7. Conclusion: when shall one use RB?


slide-44
SLIDE 44

Tentative guidelines for RB in practice

RB ideas can efficiently reduce computations in a number of “many-query parametrized problems” with little effort when:

  • the parametric variations are smooth (fast-decaying N-width),
  • cheap projection/collocation on the RB is possible (incl. an adequate parameter representation), with an error indicator available (greedy learning with fast projection + error indicator).

Then one can hope for greedy success with a good trial sample... that can be enriched online! (certification)

slide-45
SLIDE 45

For Further Reading I

S.B., C. Le Bris, T. Lelièvre, Y. Maday, N.C. Nguyen and A.T. Patera, Reduced basis techniques for stochastic problems, ArCME special issue (E. Cueto, F. Chinesta, P. Ladeveze and A. Nouy eds.), 2010 (arXiv:1004.0357).

S.B., Reduced-Basis approach for homogenization beyond the periodic setting, SIAM MMS 7(1):466–494, 2008.


slide-46
SLIDE 46

For Further Reading II

  • S. B., C. Le Bris, Y. Maday, N.C. Nguyen and A.T. Patera

A Reduced Basis Approach for Variational Problems with Stochastic Parameters: Application to Heat Conduction with Variable Robin Coefficient, CMAME 198(41–44):3187–3206, 2009.

  • S. B., T. Lelièvre

A variance reduction method for parametrized stochastic differential equations using the reduced basis paradigm, CMS 8 spec. iss. (P. Zhang ed.), 2010 (arXiv:0906.3600).
