A Multifidelity Cross-entropy Method for Rare Event Simulation - PowerPoint PPT Presentation
A Multifidelity Cross-entropy Method for Rare Event Simulation

Benjamin Peherstorfer, Courant Institute of Mathematical Sciences, New York University; Karen Willcox and Boris Kramer, MIT

Problem setup

High-fidelity model with costs $w_1 > 0$: $f^{(1)} : D \to Y$

Threshold $0 < t \in \mathbb{R}$ and random variable $Z \sim p$

Estimate the rare event probability $P_f = \mathbb{P}_p[f^{(1)} \leq t]$, reformulated as $P_f = \mathbb{E}_p[I_t^{(1)}]$ with

$$I_t^{(1)}(z) = \begin{cases} 1, & f^{(1)}(z) \leq t, \\ 0, & f^{(1)}(z) > t. \end{cases}$$

The coefficient of variation of the standard Monte Carlo estimator grows with $1/P_f$.

[Figure: density of the realizations and of the outputs $f^{(1)}(z)$]
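The growth of the estimator error with $1/P_f$ can be checked numerically. Below is a minimal sketch, assuming a toy model $f(z) = z$ with $Z \sim N(0,1)$ (not the slides' application), so the true value $P_f = \Phi(-3) \approx 1.35 \times 10^{-3}$ is known:

```python
import numpy as np

# Standard Monte Carlo estimate of P_f = P_p[f(Z) <= t] for a toy model
# f(z) = z with Z ~ N(0, 1); the true value is Phi(-3) ~ 1.35e-3.
rng = np.random.default_rng(0)

def f(z):  # hypothetical stand-in for the high-fidelity model f^(1)
    return z

t, m = -3.0, 200_000
z = rng.standard_normal(m)
p_hat = np.mean(f(z) <= t)            # Monte Carlo estimate of P_f

# Coefficient of variation of the estimator: sqrt((1 - P_f) / (m * P_f)),
# so the squared coefficient of variation grows like 1/P_f.
cov = np.sqrt((1.0 - p_hat) / (m * p_hat))
print(p_hat, cov)
```

For $P_f \approx 10^{-9}$ the same relative accuracy would require roughly a million times more samples, which is what motivates constructing biasing densities.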


Surrogate models in uncertainty quantification

Replace $f^{(1)}$ with a surrogate model

◮ Costs of uncertainty quantification reduced
◮ Often orders-of-magnitude speedups

The estimate depends on the surrogate accuracy

◮ Control with error bounds/estimators
◮ Rebuild if the accuracy is too low
◮ No guarantees without bounds/estimators

Issues

◮ Propagation of the surrogate error onto the estimate
◮ Surrogates without error control
◮ Costs of rebuilding a surrogate model

[Figure: surrogate model mapping input $z$ to output $y$, feeding uncertainty quantification]


Our approach: Multifidelity methods

Combine high-fidelity and surrogate models

◮ Leverage surrogate models for speedup
◮ Recourse to the high-fidelity model for accuracy

Multifidelity speeds up computations

◮ Balance the number of solves among the models
◮ Adapt, fuse, and filter with surrogate models

Multifidelity guarantees high-fidelity accuracy

◮ Occasional recourse to the high-fidelity model
◮ The high-fidelity model is kept in the loop
◮ Independent of error control for the surrogates

[P., Willcox, Gunzburger, Survey of multifidelity methods in uncertainty propagation, inference, and optimization; SIAM Review, 2017 (to appear)]

[Figure: high-fidelity and surrogate models mapping input $z$ to output $y$, feeding uncertainty quantification]



MFCE: Importance sampling

Consider a biasing density $q$ with $\mathrm{supp}(p) \subseteq \mathrm{supp}(q)$ and reformulate the estimation problem as

$$P_f = \mathbb{E}_p[I_t^{(1)}] = \mathbb{E}_q\left[I_t^{(1)} \frac{p}{q}\right]$$

1. First, construct a suitable $q$ with
$$\mathrm{Var}_q\left[I_t^{(1)} \frac{p}{q}\right] \ll \mathrm{Var}_p[I_t^{(1)}]$$
2. Estimate the probability with Monte Carlo:
$$\tilde{P}_f = \frac{1}{m} \sum_{i=1}^m I_t^{(1)}(\tilde{Z}_i) \frac{p(\tilde{Z}_i)}{q(\tilde{Z}_i)}, \qquad \tilde{Z}_i \sim q$$

⇒ Use surrogates for constructing $q$

[Figure: nominal and biasing densities over the realizations and outputs $f^{(1)}(z)$]
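A minimal sketch of the reweighted estimator $\tilde{P}_f$, assuming a toy model $f(z) = z$, nominal density $p = N(0,1)$, and a hand-picked biasing density $q = N(-4, 1)$ centered on the event (in the method above, $q$ is constructed rather than guessed):

```python
import numpy as np

# Importance-sampling estimate of P_f = P_p[f(Z) <= t]; the true value
# is Phi(-4) ~ 3.2e-5.  The biasing density q is a toy choice.
rng = np.random.default_rng(1)

def f(z):
    return z

def norm_pdf(z, mu):  # Gaussian density with standard deviation 1
    return np.exp(-0.5 * (z - mu) ** 2) / np.sqrt(2.0 * np.pi)

t, m = -4.0, 20_000
z = rng.normal(-4.0, 1.0, size=m)              # Z_i ~ q = N(-4, 1)
w = norm_pdf(z, 0.0) / norm_pdf(z, -4.0)       # weights p(Z_i) / q(Z_i)
p_is = np.mean((f(z) <= t) * w)                # reweighted Monte Carlo mean
print(p_is)
```

With $m = 20{,}000$ samples the estimate is accurate to a few percent, whereas plain Monte Carlo at this sample size would see roughly zero events.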


MFCE: Literature review

Two-fidelity approaches

◮ Switch between models [Li, Xiu et al., 2010, 2011, 2014]
◮ Reduced basis models with error estimators [Chen and Quarteroni, 2013]
◮ Kriging models and importance sampling [Dubourg et al., 2013]
◮ Subset method with machine-learning-based models [Bourinet et al., 2011], [Papadopoulos et al., 2012]
◮ Surrogates and importance sampling [P., Cui, Marzouk, Willcox, 2016]

Multilevel methods for rare event simulation

◮ Variance reduction via control variates [Giles et al., 2015], [Elfverson et al., 2014, 2016], [Fagerlund et al., 2016]
◮ Subset method with coarse-grid approximations [Ullmann and Papaioannou, 2015]

Combining multiple general types of surrogates

◮ Importance sampling + control variates [P., Kramer, Willcox, 2017]

[P., Willcox, Gunzburger, Survey of multifidelity methods in uncertainty propagation, inference, and optimization; SIAM Review, 2017 (to appear)]


MFCE: Direct sampling of surrogate models

Directly sampling surrogate models to construct the biasing density

◮ Reduces costs per sample
◮ Number of samples to construct the biasing density remains the same
◮ Works well for probabilities $> 10^{-5}$

⇒ Insufficient for very rare event probabilities in the range of $10^{-9}$

[Figure: mean density of realizations of $f^{(1)}(Z)$ with threshold $t$]

[P., Cui, Marzouk, Willcox, 2016], [P., Kramer, Willcox, 2017]


MFCE: Construct biasing density iteratively

Threshold $t$ controls the "rareness" of the event:

$$I_t^{(1)}(z) = \begin{cases} 1, & f^{(1)}(z) \leq t, \\ 0, & f^{(1)}(z) > t. \end{cases}$$

The cross-entropy method constructs densities iteratively for thresholds $t_1 > t_2 > \cdots > t$.

[Figure: mean density of realizations of $f^{(1)}(Z)$ with intermediate thresholds $t_1, \ldots, t_7$ approaching $t$]

[Rubinstein, 1999], [Rubinstein, 2001]



MFCE: Cross-entropy method – in each step

Need to find a biasing density in each step $i = 1, \ldots, T$

◮ Optimal biasing density that reduces the variance to 0:
$$q_i(z) \propto I_{t_i}^{(1)}(z)\, p(z)$$
⇒ Unknown normalizing constant (the quantity we want to estimate)

◮ Find $q_{v_i} \in \mathcal{Q} = \{q_v : v \in \mathcal{P}\}$ with minimal Kullback-Leibler distance to $q_i$:
$$\min_{v_i \in \mathcal{P}} D_{\mathrm{KL}}(q_i \,\|\, q_{v_i})$$

◮ Reformulate as (independent of the normalizing constant of $q_i$):
$$\max_{v_i \in \mathcal{P}} \mathbb{E}_p\left[I_{t_i}^{(1)} \log(q_{v_i})\right]$$

◮ Solve approximately by replacing $\mathbb{E}_p$ with a Monte Carlo estimator:
$$\max_{v_i \in \mathcal{P}} \frac{1}{m} \sum_{j=1}^m I_{t_i}^{(1)}(Z_j) \log(q_{v_i}(Z_j)), \qquad Z_1, \ldots, Z_m \sim p$$

⇒ Optimization problems affected by the rareness of the event $I_{t_i}^{(1)}(Z)$

[Rubinstein, 1999], [Rubinstein, 2001]
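For a concrete family $\mathcal{Q}$ the sample-average problem often has a closed-form solution. A minimal sketch for a one-dimensional Gaussian family (the toy model $f(z) = z$ and $\rho = 0.1$ are illustrative assumptions): with $q_v = N(\mu, \sigma^2)$, the maximizer over $v = (\mu, \sigma)$ is the mean and standard deviation of the samples that hit the event.

```python
import numpy as np

# One cross-entropy step for a 1-D Gaussian family q_v = N(mu, sigma^2):
# maximize (1/m) sum_j I_{t_1}(Z_j) log q_v(Z_j) over v, with Z_j ~ p.
rng = np.random.default_rng(2)

def f(z):  # toy stand-in for the model
    return z

m, rho = 5_000, 0.1
z = rng.standard_normal(m)        # Z_1, ..., Z_m ~ p = N(0, 1)

# Choose t_1 as the rho-quantile of the outputs, so that about rho * m
# samples lie in the event {f <= t_1}.
t1 = np.quantile(f(z), rho)

elite = f(z) <= t1                # indicator I_{t_1}(Z_j)
mu, sigma = z[elite].mean(), z[elite].std()   # closed-form maximizer
print(t1, mu, sigma)
```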


MFCE: Iterative cross-entropy procedure

Reuse $q_{v_{i-1}}$ as the biasing density for constructing $q_{v_i}$:

$$\max_{v_i \in \mathcal{P}} \mathbb{E}_{q_{v_{i-1}}}\left[I_{t_i}^{(1)} \frac{p}{q_{v_{i-1}}} \log(q_{v_i})\right]$$

◮ Choose $t_1 \gg t$, solve for $v_1 \in \mathcal{P}$ with
$$\max_{v_1 \in \mathcal{P}} \mathbb{E}_p\left[I_{t_1}^{(1)} \log(q_{v_1})\right]$$
◮ Select $t_2 < t_1$, solve for $v_2 \in \mathcal{P}$ with
$$\max_{v_2 \in \mathcal{P}} \mathbb{E}_{q_{v_1}}\left[I_{t_2}^{(1)} \frac{p}{q_{v_1}} \log(q_{v_2})\right]$$
◮ Repeat until the threshold $t$ is reached and the parameter $v^*$ is obtained
◮ The reweighted optimization problems have the same optimum as the original problems

Once $q_{v^*}$ has been found, the cross-entropy estimator of $P_f$ is

$$\hat{P}^{\mathrm{CE}}_t = \frac{1}{m} \sum_{i=1}^m I_t^{(1)}(Z_i^*) \frac{p(Z_i^*)}{q_{v^*}(Z_i^*)}, \qquad Z_1^*, \ldots, Z_m^* \sim q_{v^*}$$
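The full procedure can be sketched for the same one-dimensional Gaussian family (the toy model $f(z) = z$, the family, and the quantile-based threshold rule are illustrative assumptions; the slides keep $\mathcal{Q}$ abstract). Each iteration samples from the current biasing density, lowers the threshold to the $\rho$-quantile (never past $t$), and refits the parameters with the weights $p/q_{v_{i-1}}$:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(z):                     # toy stand-in for the high-fidelity model
    return z

def norm_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

t = -3.5                      # target threshold; P_f = Phi(-3.5) ~ 2.3e-4
rho, m = 0.1, 5_000
mu, sigma = 0.0, 1.0          # start from the nominal density p = N(0, 1)

for _ in range(50):
    z = rng.normal(mu, sigma, size=m)                   # Z ~ q_{v_{i-1}}
    y = f(z)
    ti = max(np.quantile(y, rho), t)                    # step t_i, never past t
    w = norm_pdf(z, 0.0, 1.0) / norm_pdf(z, mu, sigma)  # weights p / q_{v_{i-1}}
    wi, zi = w[y <= ti], z[y <= ti]                     # elite samples
    mu = np.sum(wi * zi) / np.sum(wi)                   # weighted refit of v_i
    sigma = np.sqrt(np.sum(wi * (zi - mu) ** 2) / np.sum(wi))
    if ti <= t:                                         # threshold t reached
        break

# Cross-entropy estimator with the final biasing density q_{v*}
z = rng.normal(mu, sigma, size=m)
w = norm_pdf(z, 0.0, 1.0) / norm_pdf(z, mu, sigma)
p_ce = np.mean((f(z) <= t) * w)
print(p_ce)
```

The loop typically reaches $t$ in a handful of iterations, instead of the $\approx 10^7$ plain Monte Carlo samples needed to even observe the event.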



MFCE: Costs of the cross-entropy method

Step $t_i$ defined by a quantile parameter $\rho$

◮ Quantile parameter typically $\rho \in [10^{-2}, 10^{-1}]$
◮ Step $t_i$ is the $\rho$-quantile corresponding to $q_{v_{i-1}}$

Number of steps $T$

◮ Introduce a minimal step size $\delta > 0$
◮ The number of steps $T$ is bounded as
$$T \leq \frac{t_1 - t}{\delta}$$

Lemma 1. The costs of the cross-entropy method are bounded as
$$c(\hat{P}^{\mathrm{CE}}_t) \leq \frac{(t_1 - t)\, m\, w_1}{\delta}$$

[Figure: mean density of realizations of $f^{(1)}(Z)$ with thresholds $t_1, \ldots, t_7$]

⇒ Critically depends on $t_1$ and thus on the density used in the first step


MFCE: Multifidelity-preconditioned cross-entropy method

Our approach: Use surrogates to reduce $t_1$ and thus the number of iterations with $f^{(1)}$

Surrogate models $f^{(2)}, \ldots, f^{(k)} : D \to Y$

Multifidelity-preconditioned cross-entropy (MFCE) method

◮ Find the biasing density $q_{v^*}^{(k)}$ with the surrogate $f^{(k)}$
◮ Start with $q_{v^*}^{(k)}$ to find the biasing density $q_{v^*}^{(k-1)}$ with $f^{(k-1)}$
◮ Repeat until $q_{v^*}^{(1)}$ is found with $f^{(1)}$

[Figure: CE thresholds $t_1, \ldots$ on the surrogate $f^{(2)}$, followed by CE thresholds on the high-fidelity model $f^{(1)}$, over the mean densities of the realizations]

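A two-model sketch of the preconditioning idea (the model pair below, with $f^{(2)}$ a slightly perturbed copy of $f^{(1)}$, and the one-dimensional Gaussian family are toy assumptions): run the CE iterations on the surrogate first, then warm-start the high-fidelity run with the surrogate's biasing density.

```python
import numpy as np

rng = np.random.default_rng(4)

def f1(z):                 # stand-in high-fidelity model
    return z

def f2(z):                 # stand-in surrogate with a small model error
    return z + 0.05 * np.sin(z)

def norm_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def ce_iterations(f, t, mu, sigma, rho=0.1, m=5_000, max_iter=50):
    """Cross-entropy loop for a 1-D Gaussian family, started at N(mu, sigma^2)."""
    for n_iter in range(1, max_iter + 1):
        z = rng.normal(mu, sigma, size=m)
        y = f(z)
        ti = max(np.quantile(y, rho), t)                    # step t_i
        w = norm_pdf(z, 0.0, 1.0) / norm_pdf(z, mu, sigma)  # p / q_{v_{i-1}}
        wi, zi = w[y <= ti], z[y <= ti]
        mu = np.sum(wi * zi) / np.sum(wi)                   # weighted refit
        sigma = np.sqrt(np.sum(wi * (zi - mu) ** 2) / np.sum(wi))
        if ti <= t:
            break
    return mu, sigma, n_iter

t = -3.5
mu, sigma, it_srg = ce_iterations(f2, t, 0.0, 1.0)    # precondition on surrogate
mu, sigma, it_hifi = ce_iterations(f1, t, mu, sigma)  # warm-started f^(1) run

z = rng.normal(mu, sigma, size=5_000)
w = norm_pdf(z, 0.0, 1.0) / norm_pdf(z, mu, sigma)
p_mfce = np.mean((f1(z) <= t) * w)
print(it_srg, it_hifi, p_mfce)
```

Most iterations run on the cheap surrogate; the warm-started high-fidelity run typically needs only one or two, mirroring the reacting-flow result later in the deck.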


MFCE: Analysis setup

Consider the high-fidelity model $f^{(1)}$ and the low-fidelity model $f^{(2)}$

◮ Let $q_{v^*}^{(2)}$ be the biasing density found with $f^{(2)}$
◮ Set $t_1^{(1)}$ to be the $\rho$-quantile corresponding to $q_{v^*}^{(2)}$
◮ Set $t_p$ to be the $\rho$-quantile corresponding to the nominal density $p$

Lemma 2. If
$$\mathbb{P}_{q_{v^*}^{(2)}}\left[f^{(1)} \leq t_1^{(1)}\right] - \mathbb{P}_p\left[f^{(1)} \leq t_1^{(1)}\right] \geq 0$$
then $t_1^{(1)} \leq t_p$, and the cost bound of $\hat{P}^{\mathrm{MFCE}}_f$ is lower than the cost bound of $\hat{P}^{\mathrm{CE}}_f$.

Proof. Monotonicity of the cumulative distribution function.


MFCE: Analysis

Two assumptions (see [Elfverson et al., 2014, 2016])

◮ A "local" bound on the model error: let $0 < \alpha < 1$. The models satisfy
$$|f^{(2)}(z) - f^{(1)}(z)| \leq \alpha \quad \text{or} \quad |f^{(2)}(z) - f^{(1)}(z)| \leq |f^{(2)}(z) - t|, \qquad z \in D$$
◮ Lipschitz continuous distribution functions $F_q^{(i)}(t) = \mathbb{P}_q[f^{(i)} \leq t]$

Proposition 1 (P., Kramer, Willcox, 2017).
$$\mathbb{P}_{q_{v^*}^{(2)}}\left[f^{(1)} \leq t_1^{(1)}\right] - \mathbb{P}_p\left[f^{(1)} \leq t_1^{(1)}\right] \geq -\alpha$$

Proof (uses arguments of [Elfverson et al., 2014, 2016]).

◮ Consider the set $B = \{z \in D : |f^{(2)}(z) - t| \leq \alpha\}$
◮ Show that the corresponding indicator functions are equal on $D \setminus B$
◮ Use Lipschitz continuity to bound the probability of $B$


MFCE: Numerical examples: Heat transfer

Consider
$$-\nabla \cdot (a(\omega, \xi) \nabla u(\omega, \xi)) = 1, \quad \xi \in (0, 1), \tag{1}$$
$$u(\omega, 0) = 0, \tag{2}$$
$$\partial_n u(\omega, 1) = 0, \tag{3}$$

◮ The coefficient $a$ is given as
$$a(\omega, \xi) = e^{z_1(\omega)} e^{-0.5\,|\xi - 0.5|^2 / 0.0225} + e^{z_2(\omega)} e^{-0.5\,|\xi - 0.8|^2 / 0.0225}$$
◮ Random vector $Z = [z_1, z_2]$ normally distributed
◮ System response $f(Z(\omega)) = u(\omega, 1)$
◮ Discretize with varying mesh width $h \in \{2^{-8}, 2^{-7}, \ldots, 2^{-3}\}$
◮ Obtain models $f^{(1)}, \ldots, f^{(6)}$

The goal is to estimate $P_f = \mathbb{P}_p[f^{(1)} \leq 0.75]$ with reference value $P_f \approx 6 \times 10^{-9}$


MFCE: Numerical examples: Speedup

[Figure: estimated squared coefficient of variation versus (a) runtime of the biasing construction and (b) total runtime, for single-model CE and MFCE]

◮ The single-fidelity approach uses $f^{(1)}$
◮ MFCE uses $f^{(1)}, \ldots, f^{(6)}$
◮ MFCE reduces the runtime by almost two orders of magnitude


MFCE: Numerical examples: Number of iterations

[Figure: average number of CE iterations on levels 3 to 8, for single-model CE and MFCE]

◮ Number of iterations averaged over 30 runs
◮ MFCE performs most iterations with the coarse-grid models


MFCE: Numerical examples: Reacting flow

[Figure: temperature field of the reacting-flow problem; estimated squared coefficient of variation versus (a) runtime of the biasing construction and (b) total runtime, for single-model CE and MFCE]

Reacting-flow problem with two inputs

◮ Three models: data-fit, projection-based, and high-fidelity model
◮ Estimate the probability that the temperature is below a threshold, reference $P_f \approx 10^{-6}$
◮ MFCE reduces the number of iterations with the high-fidelity model from ≈ 5 to ≈ 1


MFCE: Numerical examples: Reacting flow with 5 inputs

[Figure: temperature field of the reacting-flow problem; estimated squared coefficient of variation versus (a) runtime of the biasing construction and (b) total runtime, for single-model CE and MFCE]

Reacting-flow problem with five inputs

◮ Same setup as the previous reacting-flow example, except now with five inputs
◮ The first two inputs follow Gaussian distributions, the third and fourth Gamma distributions, and the fifth a log-normal distribution
◮ MFCE achieves a speedup compared to using the high-fidelity model alone


MFCE: Apply to OpenAeroStruct

[Jasa et al., 2018] https://github.com/johnjasa/OpenAeroStruct/

Consider the baseline UAV definition in OpenAeroStruct

◮ Design variables are the thickness and the position of the control points
◮ Uncertain flight conditions (angle of attack, air density, Mach number)
◮ Output is fuel burn

Estimate the $10^{-6}$-quantile $\gamma$ (value-at-risk): $\mathbb{P}_p[f^{(1)} \leq \gamma] = 10^{-6}$

Derive a data-fit surrogate

◮ Take a $3 \times 3 \times 3$ grid in the stochastic domain
◮ Evaluate the high-fidelity model at those 27 points
◮ Derive a linear interpolant of the output

Apply the multifidelity-preconditioned cross-entropy method
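The data-fit surrogate step can be sketched with SciPy's grid interpolator (the grid ranges and the linear toy stand-in for the fuel-burn output are assumptions; OpenAeroStruct's actual model is not called here):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def fuel_burn(alpha, rho, mach):      # hypothetical high-fidelity output
    return 10.0 + 2.0 * alpha - 5.0 * rho + 3.0 * mach

alpha_g = np.linspace(0.0, 6.0, 3)    # 3 points per uncertain input
rho_g = np.linspace(0.9, 1.3, 3)      # (illustrative ranges)
mach_g = np.linspace(0.6, 0.9, 3)

# 27 high-fidelity evaluations on the tensor grid
A, R, M = np.meshgrid(alpha_g, rho_g, mach_g, indexing="ij")
values = fuel_burn(A, R, M)

# Piecewise-linear interpolant: the cheap data-fit surrogate
surrogate = RegularGridInterpolator((alpha_g, rho_g, mach_g), values,
                                    method="linear")

print(surrogate([[3.1, 1.05, 0.72]])[0])   # cheap surrogate evaluation
```

Because the toy output is linear in each input, the interpolant reproduces it exactly; for a real model the 27 points give only a rough fit, which is why the surrogate serves as a preconditioner rather than a replacement for $f^{(1)}$.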


MFCE: OpenAeroStruct: Value-at-Risk computation

[Figure: estimated mean-squared error versus total runtime, for single-model CE and the multifidelity approach]

◮ Computing the $10^{-6}$-quantile for a fixed design point
◮ The multifidelity approach achieves up to one order of magnitude speedup


Conclusions

[Figure: estimated squared coefficient of variation versus total runtime, and average number of CE iterations per level, for single-model CE and MFCE]

Multifidelity rare event simulation

◮ Leverage surrogate models for runtime speedup
◮ Recourse to the high-fidelity model for accuracy guarantees

Our references

3. P., Kramer, Willcox: Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 2018.
2. P., Kramer, Willcox: Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models. Journal of Computational Physics, 341:61-75, 2017.
1. P., Cui, Marzouk, Willcox: Multifidelity Importance Sampling. Computer Methods in Applied Mechanics and Engineering, 300:490-509, 2016.