IGA-based Multi-Index Stochastic Collocation for Uncertainty Quantification


SLIDE 1

IGA-based Multi-Index Stochastic Collocation for Uncertainty Quantification

J. Beck 1, L. Tamellini 2, R. Tempone 1,3

1 King Abdullah University of Science and Technology, Saudi Arabia
2 CNR-IMATI, Italy
3 Alexander von Humboldt Professor in Mathematics of Uncertainty Quantification, RWTH Aachen University, Germany

DCSE Fall School on ROM and UQ, November 4–8, 2019

1 / 28

SLIDE 2

Outline

Introduction to Uncertainty Quantification
Multi-Index Stochastic Collocation (MISC) method
Numerical tests
Conclusions


SLIDE 4

The uncertainty quantification problem

PDE solution u(x) → PDE solution u(x, y)

◮ The parameter y = [y1, y2, y3, ...] is a set of model parameters, e.g., coefficients in the equation, external forces, geometry, initial and boundary conditions.
◮ The values of y are uncertain (experimental measurements, limited knowledge of system properties).
◮ y can be modeled as a random vector with N components ⇒ u is a random function, u(y).

Goal (forward UQ): compute statistical quantities of u, e.g., to assess how the uncertainty in y is reflected in u: E[u], Var[u], Pr(u > u0).
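The forward-UQ goal above can be sketched with plain Monte Carlo sampling. The scalar model u(y) and the threshold u0 = 1.1 below are hypothetical stand-ins for a PDE quantity of interest:

```python
import numpy as np

# Hypothetical scalar model u(y), standing in for a PDE quantity of interest.
def u(y):
    return np.exp(0.5 * y[..., 0]) * (1.0 + 0.1 * y[..., 1] ** 2)

rng = np.random.default_rng(0)
N = 2                                                 # number of uncertain parameters
samples = rng.uniform(-1.0, 1.0, size=(100_000, N))   # y ~ U(-1, 1)^N

vals = u(samples)
print("E[u]      ≈", vals.mean())
print("Var[u]    ≈", vals.var())
print("Pr(u>1.1) ≈", (vals > 1.1).mean())
```

Sampling estimators like this converge slowly (error ∼ M^−1/2 in the number of samples M), which is what motivates the collocation approach of the following slides.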

SLIDE 5

Working example: elliptic PDE with a random coefficient

Mathematical model:

−div[a(x)∇u(x)] = f(x),  x ∈ B
u(x) = 0,  x ∈ ∂B

Quantity of Interest: u at some x̂


SLIDE 7

Working example: elliptic PDE with a random coefficient

Parameters y: N independent uniform random variables on [−1, 1] describe the randomness in the diffusion coefficient (more later).

Mathematical model:

−div[a(x, y)∇u(x, y)] = f(x),  x ∈ B
u(x, y) = 0,  x ∈ ∂B

Quantity of Interest: E_y[u(x̂, y)], i.e., the expected value of u at x̂ ∈ B

Challenge: N can be VERY large
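A minimal illustration of the parametric problem is its 1D finite-difference analogue (not the IGA solver used in the talk; the coefficient a(x, y) below is a made-up uniformly positive example):

```python
import numpy as np

def solve_1d(y, n=64):
    """Finite-difference solve of -(a(x,y) u')' = 1 on (0,1), u(0)=u(1)=0.

    A 1D stand-in for the elliptic model problem; a(x,y) is a hypothetical
    coefficient that stays positive for |y_j| <= 1.
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    xm = 0.5 * (x[:-1] + x[1:])                  # cell midpoints, where a is sampled
    a = 5.0 + sum(yj * np.sin((j + 1) * np.pi * xm)
                  for j, yj in enumerate(y))
    # Tridiagonal stiffness matrix on the interior nodes
    A = (np.diag(a[:-1] + a[1:])
         - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
    u = np.linalg.solve(A, np.ones(n - 1))
    return x, np.concatenate(([0.0], u, [0.0]))

x, u = solve_1d(y=[0.3, -0.5])
print(u[len(u) // 2])   # u at x̂ = 0.5
```

Each value of y gives one deterministic solve; forward UQ then asks for statistics of the map y ↦ u(x̂, y).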

SLIDE 8

Solve the PDE using IGA

Say we want to compute the heat diffusion in a body shaped as shown. We need to define the mesh: solve the PDE using IGA on a grid with discretization level α = (α1, α2, ..., αd), i.e., having 2^α1 × 2^α2 × ··· × 2^αd elements. Here d = 3.

SLIDE 9

Random input data

Let us assume the body is not homogeneous due to imperfections that are not known precisely, so we describe the thermal conductivity as random fluctuations around some nominal value. (Three possible realizations of such a random conductivity were shown here.) The parameter y enters the description of the thermal conductivity as:

a([ρ, θ, z], y) = 5 + y1 sin(2θ) sin(π(ρ − 1)) sin(πz) + 0.8 y2 sin(4θ) sin(π(ρ − 1)) sin(πz) + 0.7 y3 sin(6θ) sin(π(ρ − 1)) sin(πz) + ... + 0.2 y8 sin(16θ) sin(π(ρ − 1)) sin(πz) + ...

The more parameters are included, the more complex the fluctuations that can be modeled.

SLIDE 10

Naive method to compute E[u(x̂, ·)]: full tensorization

E[·] is an integral, and if u varies smoothly with respect to y (u(y) is an analytic function), we can use Gauss quadrature with M1 × M2 × ... × MN points ("stochastic collocation"):

E[u(x̂, ·)] ≈ Σ_{j=1}^{M1 × M2 × ... × MN} ω_j u_α(x̂, y_j)

◮ Exploits an interpolant of u(y) on the collocation points {y_j}

[Figure: mesh for the PDE, and the tensor grid of collocation points y_j on [−1, 1]²]

Expensive for large N: cost ∼ O(Π_{j=1}^{N} M_j)
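The full-tensor collocation above can be sketched for a smooth toy integrand, making the M^N cost blow-up explicit (the integrand is hypothetical, standing in for y ↦ u(x̂, y)):

```python
import numpy as np
from itertools import product

def full_tensor_quadrature(f, M, N):
    """Approximate E[f(y)], y ~ U(-1,1)^N, on an M^N Gauss-Legendre grid."""
    nodes, weights = np.polynomial.legendre.leggauss(M)
    weights = weights / 2.0                    # U(-1,1) density is 1/2 per direction
    total = 0.0
    for idx in product(range(M), repeat=N):    # M**N collocation points
        y = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        total += w * f(y)
    return total

f = lambda y: np.exp(y.sum())                  # smooth test integrand
for N in (2, 4, 6):
    print("N =", N, "estimate:", full_tensor_quadrature(f, M=5, N=N),
          "cost:", 5 ** N, "points")
```

For this integrand the exact value is sinh(1)^N, which the grid matches to high accuracy; the point count 5^N, however, grows exponentially in N, which is exactly the curse of dimensionality the next slides attack.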

SLIDE 11

Outline

Introduction to Uncertainty Quantification
Multi-Index Stochastic Collocation (MISC) method
Numerical tests
Conclusions

SLIDE 12

Idea: use successive corrections

[Diagram: PDE solver accuracy vs. stochastic collocation accuracy, with four approximations C, P, Q, F]

◮ C: coarse discretization in both
◮ Q: refine the stochastic collocation only
◮ P: refine the PDE mesh only
◮ F: fine discretization in both ← our goal

SLIDE 13

Idea: use successive corrections

Write a telescopic equality: F = F + C − C + C − C + P − P + Q − Q

SLIDE 14

Idea: use successive corrections

Rearrange it: F = C + (P − C) + (Q − C) + (F − P − Q + C)

SLIDE 15

Idea: use successive corrections

Interpret it as a sum of corrections: F = C + ∆PDE + (Q − C) + (F − P − Q + C)

SLIDE 16

Idea: use successive corrections

Interpret it as a sum of corrections: F = C + ∆PDE + ∆stoc + (F − P − Q + C)

SLIDE 17

Idea: use successive corrections

Interpret it as a sum of corrections: F = C + ∆PDE + ∆stoc + ∆mixed

SLIDE 18

Idea: use successive corrections

i.e., a hierarchical decomposition of F: F = C + ∆PDE + ∆stoc + ∆mixed

SLIDE 19

Idea: use successive corrections

Strategy: drop ∆mixed:

F ≈ C + ∆PDE + ∆stoc

Rationale: the mixed correction is expensive to compute (it involves F) but small if u is regular!

Formula: F ≈ P + Q − C. Big computational saving, small loss in accuracy.
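The "drop the mixed correction" idea can be checked numerically on a toy problem with one spatial and one stochastic discretization knob. Everything below (the integrand, the two quadrature rules, the levels) is illustrative, not the talk's PDE setting:

```python
import numpy as np

def A(lx, ly, g=lambda x, y: np.exp(x * y)):
    """Toy two-knob approximation of I = ∫₀¹ E_y[g(x,y)] dx:
    midpoint rule with 2**lx cells in x ("space"),
    Gauss-Legendre with 2**ly + 1 points in y ("stochastic")."""
    nx = 2 ** lx
    x = (np.arange(nx) + 0.5) / nx
    ynod, w = np.polynomial.legendre.leggauss(2 ** ly + 1)
    vals = g(x[:, None], ynod[None, :])        # (nx, M) tensor of evaluations
    return (vals @ (w / 2.0)).mean()

C, P, Q, F = A(2, 2), A(6, 2), A(2, 6), A(6, 6)
print("fine approximation F :", F)
print("P + Q - C            :", P + Q - C)
print("mixed term F-P-Q+C   :", F - P - Q + C)   # small if g is smooth
```

For a smooth g the mixed term behaves like a product of the two one-directional errors, so P + Q − C reproduces F far better than C alone while avoiding the fine-fine evaluation.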

SLIDE 20

One step further: Multi-Index Stochastic Collocation

For a PDE in R^d, d > 1, with N > 1 parameters: specify the discretization for each spatial direction and each parameter independently! Hence the name "Multi-Index".

Instead of the fine discretization... [figure: fine tensor grid of collocation points on [−1, 1]²] ...we combine linearly many "small-ish" discretizations over space and parameters.

SLIDE 21

One step further: Multi-Index Stochastic Collocation

Fix, e.g., d = 3 (in space) and N = 2 (in probability). Full-tensor approximation of E[u]: G_{β1,β2,β3,M1,M2}.

G_{β,β,β,M,M} = fine approximate solution, with a mesh of 2^β × 2^β × 2^β elements and M × M points in probability space.
G_{α,α,α,m,m} = coarse approximate solution, with a mesh of 2^α × 2^α × 2^α elements and m × m points in probability space.


SLIDE 27

One step further: Multi-Index Stochastic Collocation

G_{β,β,β,M,M} = G_{α,α,α,m,m}
+ (G_{β,α,α,m,m} − G_{α,α,α,m,m})
+ (G_{α,β,α,m,m} − G_{α,α,α,m,m})
+ (G_{α,α,β,m,m} − G_{α,α,α,m,m})
+ (G_{α,α,α,M,m} − G_{α,α,α,m,m})
+ (G_{α,α,α,m,M} − G_{α,α,α,m,m})
+ all mixed corrections ...

SLIDE 28

One step further: Multi-Index Stochastic Collocation

G_{β,β,β,M,M} = G_{α,α,α,m,m} + ∆[1 0 0 0 0]G + ∆[0 1 0 0 0]G + ∆[0 0 1 0 0]G + ∆[0 0 0 1 0]G + ∆[0 0 0 0 1]G + all mixed corrections (here we drop those of "low profit") ...

Mixed corrections are, e.g., ∆[1 1 0 0 0]G or ∆[0 1 1 0 1]G.

SLIDE 29

One step further: Multi-Index Stochastic Collocation

◮ Evaluate either the form with corrections or the final linear combination ("combination technique"), e.g., G_{β,β,β,M,M} ≈ G_{β,α,α,m,m} + G_{α,β,α,m,m} + G_{β,α,α,M,m} + ···, i.e., a linear combination of several "small" approximations
◮ Anisotropic meshes in space are also used
◮ Multi-Index Monte Carlo, Multi-Index Stochastic Collocation, etc.
◮ When only isotropic meshes are considered (i.e., α1 = α2 = ...), the methods are called "Multilevel" (Multilevel Monte Carlo, Multilevel Stochastic Collocation, ...)


SLIDE 32

Sparse grids using the combination technique

Use a linear combination of coarser approximations (i.e., extract most of the information from these).

[Figure: sparse grid of collocation points on [−1, 1]², built by the combination technique]

A collocation approach (over several sparse grids) → independent solves, code reuse, parallel implementation!

SLIDE 33

MISC estimator

Define the MISC estimator A_MISC as

E[u] ≈ A_MISC = Σ_{(α,β) ∈ I} c_{α,β} G_{α,β},

for some index set I ⊂ N^{d+N}.

◮ The choice of the multi-index set I is critical
◮ Avoid refining all directions simultaneously
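The combination coefficients c_{α,β} are determined by the index set via inclusion-exclusion over binary offsets; a small sketch (the example set I is arbitrary, chosen only to be downward closed):

```python
from itertools import product

def combination_coefficients(I):
    """Combination-technique coefficients for a downward-closed multi-index
    set I: c_nu = sum over binary offsets j with nu + j in I of (-1)^|j|."""
    I = {tuple(nu) for nu in I}
    dim = len(next(iter(I)))
    coeff = {}
    for nu in I:
        c = sum((-1) ** sum(j)
                for j in product((0, 1), repeat=dim)
                if tuple(n + e for n, e in zip(nu, j)) in I)
        if c != 0:
            coeff[nu] = c     # indices with c = 0 never need to be computed
    return coeff

# L-shaped index set in 2D (sparse-grid-style example):
I = [(1, 1), (2, 1), (3, 1), (1, 2), (1, 3), (2, 2)]
print(combination_coefficients(I))
```

Indices whose coefficient vanishes drop out of the sum entirely, which is where the computational saving of the sparse construction comes from; the surviving coefficients always sum to 1 for a downward-closed set.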


SLIDE 35

A-priori algorithm to build I (PDE-informed)

For the elliptic PDE we introduced earlier, these estimates typically hold:

error(α, β) ∼ Π_{j=1}^{N} exp(−g_j 2^{β_j}) × Π_{i=1}^{d} exp(−r_i α_i)

(u is y-analytic, with 2^{β_j} points per direction; exp(−r_i α_i) ∼ h_i^{r_i}, with r_i the solver convergence rate)

work(α, β) ∼ Π_{j=1}^{N} 2^{β_j} × Π_{i=1}^{d} exp(γ_i α_i)

(Clenshaw–Curtis points are nested; exp(γ_i α_i) ∼ h_i^{−γ_i}: the solver scales algebraically in the number of DoFs)

Because of the double exponential in β, the profit decays faster with respect to y than with respect to x. The set

I = { [α β] ∈ N_+^{d+N} : error(α, β)/work(α, β) > ε }

then automatically balances the errors with respect to x and y (cf. the motivations!), trading corrections in probability against corrections in space.
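Under the error/work models above, the a-priori set can be assembled by thresholding the profit. A sketch with made-up rates g, r, γ and tolerance ε:

```python
import itertools
import math

# Hypothetical rates: g (stochastic decay), r (solver convergence), gamma (solver cost)
g, r, gamma = [1.0, 2.0], [2.0], [1.0]      # N = 2 parameters, d = 1 space dimension

def error(alpha, beta):
    return (math.prod(math.exp(-ri * ai) for ri, ai in zip(r, alpha)) *
            math.prod(math.exp(-gj * 2 ** bj) for gj, bj in zip(g, beta)))

def work(alpha, beta):
    return (math.prod(math.exp(gi * ai) for gi, ai in zip(gamma, alpha)) *
            math.prod(2 ** bj for bj in beta))

eps = 1e-8
I = [(a, b)
     for a in itertools.product(range(1, 9), repeat=len(r))
     for b in itertools.product(range(1, 9), repeat=len(g))
     if error(a, b) / work(a, b) > eps]
print(len(I), "multi-indices selected, e.g.", sorted(I)[:3])
```

The double exponential exp(−g_j 2^{β_j}) makes the set much shallower in the β directions than in α, which is exactly the anisotropy the slide describes.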

SLIDE 36

Convergence result for MISC

Given the estimates above (Haji-Ali, Nobile, Tamellini, Tempone, 2016): if N < ∞,

Err ≤ C work^{−s1} log(work)^{s2}

with s1, s2 > 0 depending on the worst spatial direction and independent of N (but C does depend on N); here s1 = min_{i=1,...,d} r_i/γ_i.



SLIDE 40

A posteriori algorithm to build the index set

Algorithm (Gerstner & Griebel, 2003): given [α* β*] = [1 1] and I = {[1 1]}, repeat:

1. Add to the MISC estimator the neighbors of [α* β*] (feasible with respect to I)
2. Assess (a posteriori) the work and error contribution of each multi-index just added
3. Set as the new [α* β*] the index with the highest profit, Profit = error/work

[Figure: the multi-index set I in the (α, β) plane, with its reduced margin and the current index]
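The greedy loop above can be sketched as follows, with a hypothetical profit function standing in for the a-posteriori error/work estimates (this is an illustrative sketch, not the authors' implementation):

```python
import heapq
from itertools import count

def adaptive_index_set(profit, dim, budget=20):
    """Gerstner-Griebel-style greedy construction (sketch): repeatedly add
    the highest-profit admissible neighbor of the current set.
    `profit` is a user-supplied estimate of error/work per multi-index."""
    root = (1,) * dim
    I, tiebreak = {root}, count()
    heap = []

    def push_neighbors(nu):
        for k in range(dim):
            nb = tuple(n + (i == k) for i, n in enumerate(nu))
            # admissible: all backward neighbors of nb already belong to I
            if all(tuple(m - (i == k2) for i, m in enumerate(nb)) in I
                   for k2 in range(dim) if nb[k2] > 1):
                heapq.heappush(heap, (-profit(nb), next(tiebreak), nb))

    push_neighbors(root)
    while heap and len(I) < budget:
        _, _, nu = heapq.heappop(heap)
        if nu in I:          # skip duplicates pushed from two parents
            continue
        I.add(nu)
        push_neighbors(nu)
    return I

# Anisotropic toy profit: the first direction is cheaper to refine
I = adaptive_index_set(lambda nu: 4.0 ** -nu[0] * 16.0 ** -nu[1], dim=2)
print(sorted(I))
```

The admissibility check keeps the set downward closed, and with the anisotropic toy profit the set automatically extends further in the more "profitable" direction, mirroring the automatic error balancing of the previous slides.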


SLIDE 50

Outline

Introduction to Uncertainty Quantification
Multi-Index Stochastic Collocation (MISC) method
Numerical tests
Conclusions

SLIDE 51

Test I: diffusion equation with a random diffusion coefficient

−div[a(x, y)∇u(x, y)] = 1,  x ∈ B
u(x, y) = 0,  x ∈ ∂B

◮ y_i ∼ uniform(−1, 1)
◮ Quantity of Interest (QoI): E[∫_Ω u(x, ·) dx]
◮ IGA with NURBS of degree p = 2 and maximal continuity C¹
◮ Diffusion coefficient:

a([ρ, θ, z], y) = exp( y1 sin(2θ) sin(π(ρ − 1)) sin(πz) + (4/10) y2 sin(8θ) sin(π(ρ − 1)) sin(πz) + (1/10) y3 sin(32θ) sin(π(ρ − 1)) sin(πz) )
SLIDE 52

Test I: diffusion equation with a random diffusion coefficient

[Figure, left: computational work vs. tolerance for MIMC, MLMC, and MISC (with realization counts for MIMC/MLMC), against the reference rates TOL^−2 (MIMC/MLMC), TOL^−2.75 (MC), and TOL^−1/4 (MISC). Right: error vs. TOL for the three methods, with the line Error = TOL.]

SLIDE 53

Test I: diffusion equation with a random diffusion coefficient

[Figure: collocation-point allocation with respect to Σ_{j=1}^{d} (α_j − 1), for tolerances TOL1–TOL5]


SLIDE 56

Test II: linear elasticity, random Young modulus / Poisson ratio

∫_D 2μ ∇ˢu : ∇ˢv + ∫_D λ div(u) div(v) = ∫_{Σ2} P · v,  ∀v ∈ [H¹_{Σ1}(D)]^d

◮ ∇ˢu = (∇u + ∇ᵀu)/2, P = [0, 0, 0.1]
◮ Boundary conditions: clamped bottom, free surface otherwise
◮ λ = Eν / ((1 + ν)(1 − 2ν)), μ = E / (2(1 + ν))
◮ E ∼ uniform(105e9, 120e9), ν ∼ uniform(0.265, 0.34) (titanium)
◮ Target: displacement of the point P (in red in the picture)
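The Lamé parameters follow directly from E and ν; a small helper with the sampling ranges from the slide (the random draw is only for illustration):

```python
import numpy as np

def lame_parameters(E, nu):
    """Lamé parameters from Young's modulus E and Poisson ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

rng = np.random.default_rng(0)
E = rng.uniform(105e9, 120e9)        # Pa, range used for titanium in the slides
nu = rng.uniform(0.265, 0.34)
lam, mu = lame_parameters(E, nu)
print(f"lambda = {lam:.3e} Pa, mu = {mu:.3e} Pa")
```

Note that λ blows up as ν → 1/2 (incompressible limit); the sampled range keeps ν well away from that value.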

SLIDE 57

Test II: linear elasticity, random Young modulus / Poisson ratio

[Figure, left: computational work vs. tolerance for MIMC and MISC (with realization counts for MIMC), against the reference rates TOL^−2 (MIMC/MLMC), TOL^−0.4 (MISC), and TOL^−3 (MC). Right: error vs. TOL, with the line Error = TOL.]

SLIDE 58

Outline

Introduction to Uncertainty Quantification
Multi-Index Stochastic Collocation (MISC) method
Numerical tests
Conclusions


SLIDE 64

Conclusions

◮ Uncertainty Quantification is a fast-growing topic at the intersection of Engineering, Applied Mathematics, Statistics, and Scientific Computing
◮ A UQ analysis typically requires repeatedly solving PDEs for different combinations of the random inputs, which can be very computationally expensive
◮ Multilevel/Multi-Index methods mitigate this cost by exploiting hierarchies of meshes, thereby extracting as much information as possible from the coarser ones
◮ The adaptive selection of the corrections in MISC guarantees a balance between physical and stochastic errors
◮ Multi-Index methods combine naturally with IGA solvers, due to their tensor structure
◮ In the numerical tests, the IGA-based MISC method shows the best performance and exhibits the theoretical rates, namely TOL^−1/4 and TOL^−0.4, respectively, compared with the TOL^−2 rate of MLMC/MIMC (i.e., the optimal sampling rate)

SLIDE 65

Thanks for your attention

J. Beck, L. Tamellini, R. Tempone. IGA-based multi-index stochastic collocation for random PDEs on arbitrary domains. Computer Methods in Applied Mechanics and Engineering, 351:330–350, 2019.

J. Beck, L. Tamellini, G. Sangalli. A sparse-grid isogeometric solver. Computer Methods in Applied Mechanics and Engineering, 335:128–151, 2018.

A.-L. Haji-Ali, F. Nobile, L. Tamellini, R. Tempone. Multi-index stochastic collocation for random PDEs. Computer Methods in Applied Mechanics and Engineering, 306:95–122, 2016.

28 / 28