SLIDE 1

Reduced models and the bottlenecks for problems with many parameters

RBM in Gravity Workshop, CalTech, June 2013

Jan S Hesthaven, Brown University, Jan.Hesthaven@Brown.edu

w/ B. Stamm (UC Berkeley), S. Zhang (Brown), M. Tiglio (UMCP), S. Field (UMCP), C. Galley (CalTech), F. Herrmann (UMCP), E. Ochsner (UW)

Funded by AFOSR/OSD

SLIDE 2

Basic questions to consider

WHAT do we mean by 'reduced models'?
WHY should we care?
WHEN could it work?
WHEN do problems arise?
HOW do we know?
DOES it work?

SLIDE 3

Reduced models ?

We do not consider reduced physics - .. but reduced representations of the full problem.

High-frequency vs low-frequency EM:

$\Delta E + \omega^2 E = f \quad \text{vs} \quad \Delta E = f$

Viscous vs inviscid fluid flows:

$\frac{\partial u}{\partial t} + u\cdot\nabla u = -\nabla p + \nu\Delta u,\ \ \nabla\cdot u = 0 \quad \text{vs} \quad \frac{\partial u}{\partial t} + u\cdot\nabla u = -\nabla p,\ \ \nabla\cdot u = 0$

SLIDE 4

.. but WHY ?

Assume we are interested in

$\nabla^2 u(x,\mu) = f(x,\mu), \quad x \in \Omega, \quad \mu \in D$

and wish to solve it accurately for many values of 'some' parameter µ. We can use our favorite numerical method:

$A_h u_h(x,\mu) = f_h(x,\mu), \qquad \dim(u_h) = \mathcal{N} \gg 1$

For many parameter values, this is expensive - and slow!

SLIDE 5

.. but WHY (con’t)

Assume we (somehow) know

$u_h(x,\mu) \approx u_{RB}(x,\mu) = V a(\mu)$

Then we can recover a solution for a new parameter at little cost:

$(V^T A_h V)\, a(\mu) = V^T f_h(\mu), \qquad V^T V = I$

Here $\dim(a) = N$, the reduced operator is $N \times N$, and $\dim(V) = \mathcal{N}\times N$ with $N \ll \mathcal{N}$.

.. if this behaves !
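As a concrete illustration, here is a minimal numpy sketch of the projection above (not from the talk; the toy system in assemble() and all names are illustrative): truth snapshots are orthonormalized into V offline, and each new parameter then only requires the small N × N system.

```python
import numpy as np

# Minimal sketch of the reduced-basis projection, assuming a parameterized
# linear system A(mu) u = f(mu); assemble() and all names are illustrative.

def assemble(mu, n=200):
    """Toy parameterized system: a shifted 1D Laplacian."""
    A = (2.0 + mu) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A, np.ones(n)

# Offline: truth solves at a few samples, orthonormalized so V^T V = I
snapshots = np.column_stack([np.linalg.solve(*assemble(m)) for m in (0.1, 1.0, 5.0)])
V, _ = np.linalg.qr(snapshots)

# Online: for a new parameter, solve only the small N x N projected system
A, f = assemble(2.3)
a = np.linalg.solve(V.T @ A @ V, V.T @ f)            # reduced coefficients a(mu)
u_rb = V @ a                                          # u_RB = V a(mu)
print(np.linalg.norm(u_rb - np.linalg.solve(A, f)))  # error vs the truth solve
```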

SLIDE 6
.. but WHY (con't)

So IF

  • .. we know the orthonormal basis V -
  • .. and it allows an accurate representation u_RB(µ) -
  • .. and we can evaluate the RHS 'fast' -

we can evaluate new solutions at cost O(N).

So WHY ? - a promise to do more with less

SLIDE 7

Model reduction

We seek an accurate way to evaluate the solution at new parameter values at reduced complexity.

input: parameter value µ ∈ D
output: $s_h(\mu) = \ell(u_h(\mu);\mu)$
PDE solver: $L_h(u_h(\mu);\mu) = 0$

SLIDE 8

When is that relevant ?

Examples in many CSE application domains

  • Optimization/inversion/control problems
  • Simulation-based databases
  • Uncertainty quantification
  • Sub-scale models in multi-scale modeling
  • In-situ/deployed modeling

D. Knezevic et al, 2010

SLIDE 9

When does this work ?

[Figure: solution manifold M = {u(µ)} ⊂ X, sampled at snapshots u(µ_j)]

Assumption: the solution varies smoothly on a low-dimensional manifold under parameter variation. Choosing the samples well, we should be able to derive good approximations for all parameters. For this to be successful there must be some structure to the solution under parameter variation.

SLIDE 10

Check the assumption

Two-dimensional parameterization with polar angle and frequency: $(k,\theta) \in [1,25]\times[0,\pi]$, φ is fixed.

[Figure: scattering geometry (x, y, z, angles θ, φ); singular values of the snapshot set, decaying from about 0.1 to 1e-17 over the first ~3000 modes]

With 200 basis functions you can reach a precision of 1e-7!

SLIDE 11

Reduced basis method

Low-dimensional representations may exist for many - but not all - problems. Of course not a new observation -

  • KL-expansions/POD etc
    - Computes basis through SVD
    - Costly
    - Error ?
  • Krylov based methods
    - Computes basis through Krylov subspace
    - Error ?

Let's consider a different approach

SLIDE 12

A second look

We consider physical systems of the form

$L(x,\mu)u(x,\mu) = f(x,\mu), \quad x\in\Omega$
$u(x,\mu) = g(x,\mu), \quad x\in\partial\Omega$

where the solutions are implicitly parameterized by $\mu \in D \subset \mathbb{R}^M$.

  • How do we find the basis ?
  • How do we ensure accuracy under parameter variation ?
  • What about speed ?

SLIDE 13

The truth

Let us define:

The exact solution: find $u(\mu)\in X$ such that $a(u,\mu,v) = f(\mu,v)\ \ \forall v\in X$.

The truth solution: find $u_h(\mu)\in X_h$ such that $a_h(u_h,\mu,v_h) = f_h(\mu,v_h)\ \ \forall v_h\in X_h$, with $\dim(X_h) = \mathcal{N}$.

The RB solution: find $u_{RB}(\mu)\in X_N$ such that $a_h(u_{RB},\mu,v_N) = f_h(\mu,v_N)\ \ \forall v_N\in X_N$, with $\dim(X_N) = N$.

We always assume that $N \ll \mathcal{N}$.

SLIDE 14

The truth and errors

Solving for the truth is expensive - but we need to be able to trust the RB solution:

$\|u(\mu) - u_{RB}(\mu)\| \le \|u(\mu) - u_h(\mu)\| + \|u_h(\mu) - u_{RB}(\mu)\|$

We assume that $\|u(\mu) - u_h(\mu)\| \le \varepsilon$: this is your favorite solver, and it is assumed it can be as accurate as you desire - the truth.

By bounding $\|u_h(\mu) - u_{RB}(\mu)\|$ we achieve two things:

  • Ability to build a basis at minimal cost
  • Certify the quality of the model

SLIDE 15

The error estimate

Consider the discrete truth problem $A(\mu)u_h(\mu) = f_h(\mu)$, $u_h \in X_h$, and express the solution as $u_h = u_N + u_\perp$ with $u_N \in X_N$. This results in the truth problem in block form

$\begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix}\begin{pmatrix} u_N \\ u_\perp \end{pmatrix} = \begin{pmatrix} f_{RB} \\ f_\perp \end{pmatrix}$

as well as the reduced problem $A_{1,1} u_{RB} = f_{RB}$.

SLIDE 16

The error estimate

This yields the equation for the error

$\begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix}\begin{pmatrix} u_N - u_{RB} \\ u_\perp \end{pmatrix} = \begin{pmatrix} f_{RB} - A_{1,1}u_{RB} \\ f_\perp - A_{2,1}u_{RB} \end{pmatrix} = f_h - A u_{RB} = R(\mu)$

We can recognize the right-hand side as the residual, and we recover

$\|u_h(\mu) - u_{RB}(\mu)\| \le \|A^{-1}(\mu)\|\,\|R(\mu)\|$

So with the residual and an estimate of the norm of the inverse of A, we can bound the error.

SLIDE 17

RBM 101

We use the error estimator to construct the reduced basis in a greedy approach (a code sketch follows below):

1. Define a (fine) training set $\Pi_{train}$ in parameter space.
2. Choose a member µ₁ randomly and solve the truth: $u_{RB} = u_h(\mu_1)$.
3. Then repeat:
   a. Find $\mu_{i+1} = \arg\sup_{\mu\in\Pi_{train}} \varepsilon_N(\mu)$
   b. Compute $u_h(\mu_{i+1})$
   c. Orthonormalize it with respect to the current basis
   d. Add the new solution to the basis
4. Continue until $\sup_{\mu\in\Pi_{train}} \varepsilon_N \le \varepsilon$.

Resulting in

$u_{RM}(\mu) = \sum_{i=1}^{N} u_N^i(\mu)\,\xi_i$
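A hedged sketch of this greedy loop, reusing the toy system from the earlier sketch; the residual norm ||R(µ)|| stands in for the certified estimator ε_N(µ), and all names are illustrative:

```python
import numpy as np

# Greedy reduced-basis construction; the residual norm is used as a
# stand-in error estimator. assemble() is the same illustrative toy system.

def assemble(mu, n=200):
    return (2.0 + mu) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1), np.ones(n)

def estimator(V, mu):
    A, f = assemble(mu)
    if V.shape[1] == 0:
        return np.linalg.norm(f)
    a = np.linalg.solve(V.T @ A @ V, V.T @ f)
    return np.linalg.norm(f - A @ (V @ a))        # residual norm ||R(mu)||

train = np.linspace(0.1, 5.0, 100)                # fine training set Pi_train
V, tol = np.empty((200, 0)), 1e-8
while True:
    errs = [estimator(V, mu) for mu in train]     # sweep the training set
    worst = int(np.argmax(errs))                  # arg sup of eps_N(mu)
    if errs[worst] <= tol:
        break
    u = np.linalg.solve(*assemble(train[worst]))  # truth solve at mu_{i+1}
    u -= V @ (V.T @ u)                            # orthogonalize vs basis
    V = np.column_stack([V, u / np.linalg.norm(u)])
print(V.shape[1], "basis functions")
```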

SLIDE 18

2D Pacman problem

Scattering by a 2D PEC Pacman, TM polarization.

Backscatter depends very sensitively on cutout angle and frequency.

[Figure: backscatter vs frequency for a cylinder and for wedge angles of 18.5 and 21.5 degrees; the difference in scattering is clear in the fields]

SLIDE 19

2D Pacman problem

[Figure: output of interest - backscatter - vs gap angle (9.6 to 21.5 degrees): truth, RB with 9 bases, RB with 11 bases]

The greedy approach selects the critical angles early in the selection process. The parameter is the gap angle. The output converges with O(10) basis elements.

SLIDE 20

2D Pacman problem

[Figures: RB output with 13, 15, and 17 bases, each shown with output ± error estimate vs gap angle; worst-case RBM error and worst-case error estimate vs number of bases]

Convergence of the error bounds over the full parameter range: exponential convergence of the predicted error estimator and the real error over a large training set.

SLIDE 21

RBM for wave catalogs

The challenge is to use a solver to predict wave forms, and to repeat this for many parameter values to build a catalog. We shall consider the construction of a reduced basis based on a greedy approach.

Goal: seek a finite-dimensional basis $C_N = \{\Psi_i\}_{i=1}^N$, $\Psi_i \equiv h_{\mu_i}$, that minimizes

$d_N(\mathcal{H}) = \min_{C_N} \max_{\mu} \min_{u\in W_N} \|u - h_\mu\|$

SLIDE 22

RBM for wave catalogs

The algorithm for this is (a sketch of the projection error follows below):

Algorithm 1: Greedy algorithm for building a reduced basis space

1: Input: training space Ξ and waveforms sampled at the training space, H_Ξ
2: Randomly select some µ_1 ∈ Ξ
3: C_1 = {h_{µ_1}}
4: N = 1
5: ε = 1
6: while ε ≥ Tolerance do
7:   for µ ∈ Ξ do
8:     Compute Err(µ) = ||h_µ − P_N(h_µ)||
9:   end for
10:  Choose µ_{N+1} = arg max_{µ∈Ξ} Err(µ)
11:  C_{N+1} = {h_{µ_1}, ..., h_{µ_N}, h_{µ_{N+1}}}
12:  ε = Err(µ_{N+1})
13:  N = N + 1
14: end while
15: ε_N = ε
16: Output: greedy error ε_N, C_N, representations P_N(h_µ) ∈ W_N = span(C_N)

We use normalized waveforms.
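Line 8 is just an orthogonal projection onto the current catalog. A minimal sketch, assuming the basis C_N has been orthonormalized into the columns of Q (names illustrative):

```python
import numpy as np

# Err(mu) = ||h_mu - P_N(h_mu)|| with P_N the orthogonal projection onto
# W_N = span(C_N); Q holds an orthonormalized (possibly complex) basis.

def projection_error(Q, h):
    h = h / np.linalg.norm(h)               # we use normalized waveforms
    return np.linalg.norm(h - Q @ (Q.conj().T @ h))
```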

SLIDE 23

RBM for wave catalogs

Test case is a BNS inspiral for LIGO: an analytic waveform with 2 parameters.

SLIDE 24

RBM for wave catalogs

[Figure: RBM sample points for the two-parameter problem]

SLIDE 25

RBM for wave catalogs

Detector   Overlap error   BBH: RB   BBH: TM        BNS: RB   BNS: TM
InitLIGO   10^-2           165       2,450          898       10,028
InitLIGO   10^-5           170       1.2 × 10^6     904       4.3 × 10^6
InitLIGO   2.5 × 10^-13    182       5.9 × 10^12    917       1.4 × 10^13
AdvLIGO    10^-2           1,058     19,336         5,395     72,790
AdvLIGO    10^-5           1,687     1.5 × 10^7     8,958     4.9 × 10^7
AdvLIGO    2.5 × 10^-13    1,700     2.3 × 10^14    8,976     5.6 × 10^14
AdvVirgo   10^-2           1,395     42,496         7,482     156,127
AdvVirgo   10^-5           1,690     3.1 × 10^7     8,960     8.3 × 10^7
AdvVirgo   2.5 × 10^-13    1,703     4.8 × 10^14    8,977     6.0 × 10^14

The potential for savings is staggering !

SLIDE 26

Bottlenecks

So where are the bottlenecks ?

In the online stage -

  • Large number of terms $Q_a$ in the affine expansion (see the sketch below)

    $a(u,\mu,v) = \sum_{k=1}^{Q_a} \Theta_k(\mu)\, a_k(u,v)$

  • Large number of terms in the basis, $N_d \propto N_1^{\alpha d}$, $0 < \alpha \le 1$

In the offline stage -

  • The cost of the greedy approach, which evaluates the full training set

    $\mu_{i+1} = \arg\sup_{\mu\in\Pi_{train}} \varepsilon_N(\mu)$

    and likewise for SCM and EIM
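The affine expansion is what makes the online stage independent of the truth dimension: each piece $V^T A_k V$ can be projected once offline, leaving $O(Q_a N^2)$ online assembly. A hedged numpy sketch; all matrices below are random stand-ins, not a real discretization:

```python
import numpy as np

# Offline/online split under the affine assumption
# a(u, mu, v) = sum_k Theta_k(mu) a_k(u, v); everything here is a stand-in.

thetas = [lambda mu: 1.0, lambda mu: mu, lambda mu: mu ** 2]   # Theta_k(mu)
A_k = [np.random.rand(500, 500) for _ in thetas]               # truth pieces a_k
V = np.linalg.qr(np.random.rand(500, 8))[0]                    # reduced basis

A_k_red = [V.T @ A @ V for A in A_k]     # offline: project each piece once

def reduced_operator(mu):
    # online: O(Q_a N^2) work, no dependence on the truth dimension
    return sum(th(mu) * Ar for th, Ar in zip(thetas, A_k_red))
```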

SLIDE 27

Non-affine problems

Let us consider the extension of these techniques to problems described by integral equations.

Electric field integral equation (EFIE): find the current j such that

$ik \int_{\Gamma\times\Gamma} G_k(x,y)\Big[\, j(x)\cdot j^t(y) - \frac{1}{k^2}\,\mathrm{div}_\Gamma\, j(x)\,\mathrm{div}_\Gamma\, j^t(y) \Big]\, dx\, dy = F(j^t), \qquad G_k(x,y) := \frac{e^{ik|x-y|}}{|x-y|}$

discretized using the Raviart-Thomas elements. The truth approximation is a standard MoM solver (CERFACS).

SLIDE 28

Why integral equations ?

Examples across CSE: acoustics, Stokes flow, materials, electromagnetics, heat.

[Images: Y. Liu, U Cincinnati; FISC Solver, UIUC; C. Xenophontos, U Cyprus]

SLIDE 29

Integral equations

One problem - the affine assumption fails. Caution: it is not feasible in the framework of the EFIE!

$a(u_h,v_h;\mu) = ik\int_\Gamma\int_\Gamma \frac{e^{ik|x-y|}}{|x-y|}\Big[\, u_h(x)\cdot v_h(y) - \frac{1}{k^2}\,\mathrm{div}_{\Gamma,x} u_h(x)\cdot \mathrm{div}_{\Gamma,y} v_h(y)\Big]\, dx\, dy$

$f(v_h;\mu) = n\times(p\times n)\int_\Gamma e^{ik\, x\cdot \hat{s}(\theta,\phi)}\cdot v_h(x)\, dx$

Solution - the empirical interpolation method (EIM): seek parameter samples $\{\mu_m\}_{m=1}^M$ and an interpolant

$I_M(f)(x;\mu) = \sum_{m=1}^M \alpha_m(\mu)\, f(x;\mu_m)$

(see the sketch below). Barrault et al, 2004
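A hedged sketch of the EIM greedy itself (not the talk's implementation; the snapshot matrix F and the magic-point bookkeeping are illustrative): given F[i, j] = f(x_i; µ_j), it selects 'magic points' and basis functions so that the interpolant matches f exactly at those points.

```python
import numpy as np

def eim(F, tol=1e-8):
    """Greedy EIM on snapshots F[i, j] = f(x_i; mu_j)."""
    basis, pts = [], []                 # basis columns and magic-point rows
    R = F.copy()
    while True:
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) < tol:
            break
        basis.append(R[:, j] / R[i, j])               # normalize at magic point
        pts.append(i)
        Q = np.column_stack(basis)
        coef = np.linalg.solve(Q[pts, :], F[pts, :])  # interpolate at pts
        R = F - Q @ coef                              # interpolation residual
    return np.column_stack(basis), pts

# Online: given f(x_pts; mu), solve the small system Q[pts, :] alpha = f(x_pts; mu);
# then I_M(f)(x; mu) = Q @ alpha.
```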

SLIDE 30

Scattering example

1 parameter, µ = k with D = [1, 25.5]; (θ, φ) = (π/6, 0) fixed.

[Figures: repartition of the 23 first picked parameters along k ∈ [5, 25]; relative error vs N = 5 ... 25, decaying from about 10 to 1e-8]

SLIDE 31

Scattering example

[Figures: RCS vs k ∈ [10, 20] for N = 21, 22, 23, each showing rcs(u_N), rcs(u_h), and upper/lower error bars; the stability parameter vs k; efficiency of the error estimator over the parameter space for N = 10, 16, 23 (Fig. 4.16)]

SLIDE 32

Bottlenecks

So where are the bottlenecks ?

In the online stage -

  • Large number of terms $Q_a$ in the affine expansion

    $a(u,\mu,v) = \sum_{k=1}^{Q_a} \Theta_k(\mu)\, a_k(u,v)$

  • Large number of terms in the basis, $N_d \propto N_1^{\alpha d}$, $0 < \alpha \le 1$

In the offline stage -

  • The cost of the greedy approach, which evaluates the full training set

    $\mu_{i+1} = \arg\sup_{\mu\in\Pi_{train}} \varepsilon_N(\mu)$

    and likewise for SCM and EIM

SLIDE 33

Strategy for high-d sampling

A typical greedy approach:

Algorithm 1: A typical greedy algorithm

Input: a training set Ξ_train ⊂ D, a tolerance tol > 0
Output: S_N and W_N
1: Initialization: choose an initial parameter value µ_1 ∈ Ξ_train, set S_1 = {µ_1}, compute v(µ_1), set W_1 = {v(µ_1)}, and N = 1;
2: while max_{µ∈Ξ_train} η(µ; W_N) > tol do
3:   For all µ ∈ Ξ_train, compute η(µ; W_N);
4:   Choose µ_{N+1} = arg max_{µ∈Ξ_train} η(µ; W_N);
5:   Set S_{N+1} = S_N ∪ {µ_{N+1}};
6:   Compute v(µ_{N+1}), and set W_{N+1} = W_N ∪ {v(µ_{N+1})};
7:   N ← N + 1;
8: end while

For a high-d problem with a large training set, this is expensive.

SLIDE 34

Strategy for high-d sampling

We introduce the saturation assumption

$\eta(\mu; W_M) \le C_{sa}\, \eta(\mu; W_N)$ for some $C_{sa} > 0$, for all $0 < N < M$

Different interpretations:

  • For $C_{sa} < 1$ - the error is strictly decreasing
  • For $C_{sa} \ge 1$ - the error is allowed to increase (intermittently)

We shall use this assumption to propose two different samplings:

  • An approach using just this saturation assumption
  • An adaptive sampling with additional benefits

SLIDE 35

Strategy for high-d sampling

Strategy I - When looking for the max error over the training set, recompute the estimator only for those points where

$C_{sa}\,\eta(\mu, W_N) \ge \mathrm{error}^{tmp}_{max}$

with the saved estimates η_saved(µ) initialized to ∞ for all µ ∈ Ξ_train. The core of the loop (a code sketch follows below):

3: while max_{µ∈Ξ_train} η_saved(µ) ≥ tol do
4:   error_max_tmp = 0;
5:   for all µ ∈ Ξ_train do
6:     if C_sa η_saved(µ) > error_max_tmp then
7:       Compute η(µ; W_N), and let η_saved(µ) = η(µ, W_N);
8:       if η_saved(µ) > error_max_tmp then
9:         error_max_tmp = η_saved(µ), and let µ_max = µ;
10:      end if
11:    end if
12:  end for
13:  Choose µ_{N+1} = µ_max, set S_{N+1} = S_N ∪ {µ_{N+1}};
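A minimal sketch of that inner scan (illustrative names; eta is whatever estimator the problem provides): the saved values let most points be skipped once a large running max is found.

```python
import numpy as np

# Strategy I scan: recompute eta(mu; W_N) only where the saturation bound
# Csa * eta_saved could still beat the running max. eta() is assumed given.

def sa_scan(train, eta, eta_saved, Csa=1.0):
    err_max, mu_max, n_eval = 0.0, None, 0
    for k, mu in enumerate(train):
        if Csa * eta_saved[k] > err_max:     # otherwise mu cannot win
            eta_saved[k] = eta(mu)           # recompute and save
            n_eval += 1
            if eta_saved[k] > err_max:
                err_max, mu_max = eta_saved[k], mu
    return mu_max, err_max, n_eval           # n_eval is often << len(train)
```

With eta_saved initialized to np.full(len(train), np.inf), the first sweep evaluates every point; later sweeps skip most of them.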

SLIDE 36

Strategy for high-d sampling

Consider an EIM example:

$F_2(x;\mu) = e^{ik\hat{k}\cdot x}, \qquad \hat{k} = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta)^T$

Measuring the saturation constant

$C(N) = \frac{\eta(\mu, W_N)}{\eta(\mu, W_{N-1})}$

[Figures: mean and max of the C(N) constant vs N = 50 ... 250; percentage of the full cost vs N]

SLIDE 37

Strategy for high-d sampling

Consider an RBM example: a diffusion problem

$-\nabla\cdot(\alpha\nabla u) = f \ \ \text{in } \Omega$

with piecewise-constant coefficient

$\alpha = \alpha_i = 100^{2\mu_i - 1} \ \text{in } R_i,\ i = 1,2, \qquad \alpha_3 = 1 \ \text{in } R_3$

with appropriate boundary conditions on Γ, where $\mu = (\mu_1, \mu_2) \in [0,1]^2$.

[Figures: percentage of the full cost vs N for C_sa = 1 and C_sa = 1.1, with their mean percentages; the sample sets S_N obtained by the standard greedy algorithm and by the SA greedy algorithm with C_sa = 1 over [0,1]²]

SLIDE 38

Strategy for high-d sampling

Strategy II -

  • Choose the training set size as large as can be afforded and sample randomly
  • Resample the points for which η(µ, W_N) < tol
  • Perform a safety check

Excerpt of the algorithm (a code sketch of the resampling step follows below):

14: if η_saved(µ) < tol then
15:   flag µ; // all flagged parameters will be removed
16: end if
...
22: Discard all flagged parameters from Ξ_train and their corresponding saved error estimates in η_saved;
23: Generate M − sizeof(Ξ_train) new samples and add them into Ξ_train such that sizeof(Ξ_train) = M; set η_saved of all new points to ∞;
...
28: Discard Ξ_train, generate M new parameters to form Ξ_train ...;
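A hedged sketch of that resampling step (illustrative names): converged points are dropped and the training set is refilled to its fixed size M with fresh random samples whose saved estimates are reset to ∞.

```python
import numpy as np

# Strategy II refresh: drop points that meet the tolerance and refill the
# training set to size M; sample(k) draws k fresh random parameters.

def refresh(train, eta_saved, tol, M, sample):
    keep = eta_saved >= tol                      # flagged points are removed
    train, eta_saved = train[keep], eta_saved[keep]
    new = sample(M - len(train))
    train = np.concatenate([train, new])
    eta_saved = np.concatenate([eta_saved, np.full(len(new), np.inf)])
    return train, eta_saved
```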

SLIDE 39

Strategy for high-d sampling

Consider the EIM example

$F_2(x;\mu) = e^{ik\hat{k}\cdot x}, \qquad \hat{k} = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta)^T$

[Figures, for M = 1,000, 5,000, and 25,000: error vs N for the standard greedy algorithm, the error provided by the AEGA, and the error of the AEGA over a control set of points; total number of points visited vs N]

SLIDE 40

Strategy for high-d sampling

[Figures, for M = 1000, 5000, and 25000: percentage vs N = 50 ... 250 of the points where the accuracy is checked (and the saturation assumption cannot be used), and of the points remaining in the training set]

The majority of the work is done on the initial set.

SLIDE 41

Strategy for high-d sampling

A high-d EIM example (16 parameters):

$F_4(x;\mu) = \Big(1 + e^{-\frac{(x_1-\mu_1)^2}{\mu_9} - \frac{(x_2-\mu_2)^2}{\mu_{10}}}\Big)\Big(1 + e^{-\frac{(x_1-\mu_3)^2}{\mu_{11}} - \frac{(x_2-\mu_4)^2}{\mu_{12}}}\Big)\Big(1 + e^{-\frac{(x_1-\mu_5)^2}{\mu_{13}} - \frac{(x_2-\mu_6)^2}{\mu_{14}}}\Big)\Big(1 + e^{-\frac{(x_1-\mu_7)^2}{\mu_{15}} - \frac{(x_2-\mu_8)^2}{\mu_{16}}}\Big)$

with $x = (x_1,x_2) \in \Omega = [0,1]^2$ and $\mu_1,\dots,\mu_8 \in [0.3, 0.7]$, $\mu_9,\dots,\mu_{16} \in [0.01, 0.05]$. The domain Ω = [0,1]² is divided into a grid of 100 × 100 equidistant points to build the discrete domain.

[Figures: error vs N = 100 ... 400 for M = 10'000, 100'000, 1'000'000, with a zoom for N = 320 ... 440]

SLIDE 42

Fundamental problem remains

While these tricks remain valuable, both the offline and online costs typically scale with

$M \propto (QN)^{\alpha d}, \quad 0 < \alpha \le 1$

For d ≫ 1 this quickly becomes very expensive.

Goal: reduce the dimensionality of the problem without impacting the predictive accuracy.

The tool will be ANOVA expansions and sensitivity analysis.

SLIDE 43

ANOVA Expansions

In many cases we need to evaluate $f(X(x))$ or $\int f(X(x))\, dx$ with $X = (X_1,\dots,X_d)$, $d \gg 1$, which quickly becomes an expensive exercise.

DEF: The ANOVA expansion (exact):

$f(X) = f_0 + \sum_{t\subseteq D} f_t(X_t)$

$f_0 = \int_{A^d} f(X)\, dX, \qquad f_t(X_t) = \int_{A^{d-|t|}} f(X)\, dX_{D\setminus t} - \sum_{w\subset t} f_w(X_w) - f_0, \qquad \int_{A^0} f(X)\, dX_\emptyset = f(X)$

Here $D = \{1,\dots,d\}$, $\Omega = [0,1]^d$, $X_t$ is the t-indexed sub-vector, and $A^{|t|}$ is the |t|-dimensional hypercube.

SLIDE 44

ANOVA Expansions

A few characteristics -

  • The ANOVA expansion is unique and exact
  • It is a finite expansion with $2^d$ terms
  • All terms are mutually orthogonal

Example (a worked case follows below):

$f(\alpha_1,\alpha_2,\alpha_3) = f_0 + \sum_{i=1}^{3} \hat{f}_i(\alpha_i) + \sum_{1\le i<j\le 3} \hat{f}_{ij}(\alpha_i,\alpha_j) + \hat{f}_{123}(\alpha_1,\alpha_2,\alpha_3)$

We have not achieved much yet. Now consider the truncated expansion

$f(X,s) = f_0 + \sum_{t\subseteq D;\ |t|\le s} f_t(X_t)$

where s is the truncation dimension.
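As a worked illustration (not from the talk), take $f(X_1,X_2) = X_1 + X_1 X_2$ on $[0,1]^2$; the definitions above give

```latex
f_0 = \int_0^1\!\!\int_0^1 f \, dX_1\, dX_2 = \tfrac{3}{4}, \qquad
f_1(X_1) = \int_0^1 f\, dX_2 - f_0 = \tfrac{3}{2}X_1 - \tfrac{3}{4},
\\
f_2(X_2) = \int_0^1 f\, dX_1 - f_0 = \tfrac{1}{2}X_2 - \tfrac{1}{4}, \qquad
f_{12}(X_1,X_2) = f - f_0 - f_1 - f_2 = \big(X_1 - \tfrac{1}{2}\big)\big(X_2 - \tfrac{1}{2}\big)
```

Each non-constant term integrates to zero in each of its own variables, which is exactly the mutual orthogonality claimed above.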

SLIDE 45

ANOVA Expansions

Let us first introduce the dimension-specific variances

$V_t(f) = \int_{A^d} (f_t(X_t))^2\, dX, \qquad V(f) = \sum_{|t|>0} V_t(f)$

Define the effective dimension $p_s$ through

$\sum_{0<|t|\le p_s} V_t(f) \ge q\, V(f), \qquad q \le 1$

Then one can prove (Sobol '90)

$\mathrm{Err}(X, p_s) = \frac{1}{V(f)} \int_{A^d} \big[f(X) - f(X, p_s)\big]^2\, dX \le 1 - q$

NOTE: If $p_s \ll d$ there is hope!

SLIDE 46

Parametric compression

Recall the subset-specific variances

$V_t(f) = \int_{A^d} (f_t(X_t))^2\, dX, \qquad V(f) = \sum_{|t|>0} V_t(f)$

and introduce the sensitivities

$S(t) = \frac{V_t}{V}$

We can now estimate the sensitivity of the output on a specific set of parameters through $\sum_{i\in t} S(i)$. The strategy (a sensitivity sketch follows the list):

  • Compute an approximate ANOVA expansion - learn
  • Identify important parameters and compress
  • Compute the lower-dimensional model
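One inexpensive way to realize the 'learn' step is a Monte Carlo estimate of the first-order sensitivities S(i); this hedged sketch uses the standard pick-and-freeze estimator, not the talk's quadrature-based ANOVA:

```python
import numpy as np

# First-order Sobol sensitivities S(i) = V_i / V via pick-and-freeze
# Monte Carlo; f maps an (n, d) array of samples to n outputs.

def first_order_sensitivities(f, d, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))         # total variance V(f)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # vary only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / V   # V_i estimate / V
    return S

# Toy check: only the first two of five inputs should register
f = lambda X: np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2
print(first_order_sensitivities(f, d=5).round(3))
```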

SLIDE 47

ANOVA Expansions

Example: 25 planets of uncertain mass pull on a unit-mass spaceship:

$\ddot{x}(t) = \sum_{i=1}^{p} m_i\, \hat{r}_i / r_i^2, \qquad x(t_0) = x_0, \qquad m_i = \frac{1}{p+1}\big[1 + 0.1\, U(-1,1)\big]$

[Figures: L² and L∞ error vs the order of the ANOVA expansion (full ANOVA based on Stroud-3), decaying from about 1e-2 to 1e-16; sensitivity index per planet (2% to 26%); spaceship trajectory with 'important' and 'unimportant' planets and the initial position marked]

The active number of parameters is 7 (sensitivity > 3%).

SLIDE 48

Parameter compression for RBM

When extending this to PDEs and RBMs, the key issue is: how do we evaluate the sensitivities at small cost? With the ability to build RB models, the following approach appears interesting:

  • Build a very coarse RBM over all parameters
  • Use the RBM to build a crude response surface
  • Explore this very coarse model to estimate sensitivities
  • Compress, and develop an RBM for the important parameters

SLIDE 49

Acoustic horn test

We consider a similar approach for the acoustic horn: 8 parameters, describing the wall impedance in the horn.

[Figure: horn geometry with boundary segments labeled 1-10, including Γ_out]

SLIDE 50

Combining RB and ANOVA

The model is a Helmholtz problem:

$\Delta u + 4u = 0$ in Ω,
$(2i + \tfrac{1}{25})\,u + \frac{\partial u}{\partial n} = 0$ on Γ_out,
$2iu + \frac{\partial u}{\partial n} = 4i$ on Γ_in,
$i\mu_j u + \frac{\partial u}{\partial n} = 0$ on Γ_j, j = 1, ..., 8,
$\frac{\partial u}{\partial n} = 0$ on the other boundaries.

A total of 8 parameters - the boundary impedances $\mu = (\mu_1,\mu_2,\dots,\mu_8) \in [0,1]^8$.

Functional output -

$s(\mu) = \ell(u) = \mathrm{real}\Big(\int_{\Gamma_{in}} u\, ds\Big)$

SLIDE 51

Combining RB and ANOVA

The approach is as follows:

  • Build a coarse RB with a high tolerance: a tolerance of 10^-3 leads to 31 RB for the 8-parameter problem
  • Use this coarse RB to compute the ANOVA expansion of the output and compute the sensitivities (the formula is based on a 63-point Gauss-Patterson rule)
  • The results are $S_3 = 0.4321$, $S_5 = 0.4314$, $S_{35} = 0.1256$, so that $S_3 + S_5 + S_{35} = 0.9891$
  • Similar results with a tolerance of 10^-2 - 22 RB

SLIDE 52

Combining RB and ANOVA

Two boundaries are responsible for >99% of all the variation in the parameters.

[Figure: horn geometry with boundary segments labeled 1-10]

tol      Number of RB   e_max            e_ave
10^-2    6              1.172 × 10^-2    2.404 × 10^-3
10^-3    11             1.214 × 10^-2    1.516 × 10^-3
10^-4    15             1.1213 × 10^-2   1.516 × 10^-3
10^-5    17             1.1213 × 10^-2   1.516 × 10^-3

SLIDE 53

Remarks

A few ideas on how to deal with the high-d problem -

  • Multi-element EIM for improved online performance
  • Sampling techniques to reduce offline cost
  • Compression through ANOVA expansions shows promise
  • ANOVA can also be used to drive hp-type RBM in the greedy

Combining these techniques allows for practical use of RBM for high-dimensional problems.

Thank you
