Hessian-based sampling in high dimensions for goal-oriented model order reduction (PowerPoint presentation transcript)

slide-1
SLIDE 1

Hessian-based sampling in high dimensions for goal-oriented model order reduction

Peng Chen, Omar Ghattas
Center for Computational Geosciences and Optimization
The Institute for Computational Engineering and Sciences
The University of Texas at Austin

QUIET 2017 - Quantification of Uncertainty: Improving Efficiency and Technology
SISSA, International School for Advanced Studies, Trieste, 18-21 July 2017

P. Chen (ICES – UT Austin), Hessian-based sampling for model order reduction, 18 July, 2017

slide-2
SLIDES 2-12 (animation build: the same slide shown at increasing resolution)

Parametrization Dimension Information

# pixels K = 1²,    # modes K = 1
# pixels K = 2²,    # modes K = 2
# pixels K = 4²,    # modes K = 4
# pixels K = 8²,    # modes K = 8
# pixels K = 16²,   # modes K = 16
# pixels K = 32²,   # modes K = 32
# pixels K = 64²,   # modes K = 64
# pixels K = 128²,  # modes K = 128
# pixels K = 256²,  # modes K = 256
# pixels K = 512²,  # modes K = 512
# pixels K = 1024², # modes K = 1024

slide-13
SLIDE 13

Outline

1. Model order reduction for parametric PDEs
2. Hessian-based sampling
3. Numerical experiments

slide-14
SLIDES 14-17 (repeated build)

Parameters

Let P ⊂ R^K denote a K-dimensional parameter space, where K ∈ N ∪ {∞}, and p = (p_1, ..., p_K) ∈ P.

If the parameter p lives in a box, w.l.o.g. P = [−√3, √3]^K, with uniform distribution p ∼ μ = U([−√3, √3]^K), then the mean is p̄ = 0 and the covariance is C = I.

If the parameter p lives in the whole space, i.e., P = R^K, with Gaussian distribution p ∼ μ = N(p̄, C), then the mean is p̄ and the covariance C is symmetric positive definite. E.g., C is discretized from a covariance operator C = (−δΔ + γI)^(−α), which is self-adjoint, positive, and of trace class.
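As an illustrative sketch (not the talk's implementation), a Gaussian parameter p ∼ N(0, C) with C discretized from an operator of the form (−δΔ + γI)^(−α) can be sampled as follows; the 1-D grid, the values of δ, γ, α, and the function names are assumptions chosen only for demonstration.

```python
import numpy as np

def covariance_matrix(K=64, delta=0.1, gamma=1.0, alpha=2.0):
    """Dense 1-D discretization of C = (-delta*Laplacian + gamma*I)^(-alpha)."""
    h = 1.0 / (K + 1)
    # Standard 3-point Laplacian with homogeneous Dirichlet conditions.
    lap = (np.diag(-2.0 * np.ones(K)) + np.diag(np.ones(K - 1), 1)
           + np.diag(np.ones(K - 1), -1)) / h**2
    A = -delta * lap + gamma * np.eye(K)   # s.p.d., so A^(-alpha) is well defined
    w, V = np.linalg.eigh(A)
    return (V * w**(-alpha)) @ V.T         # spectral definition of A^(-alpha)

def sample_gaussian(C, n_samples=5, seed=0):
    """Draw rows p_m ~ N(0, C) via a Cholesky factor of C."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(C)))  # small jitter for stability
    return (L @ rng.standard_normal((len(C), n_samples))).T

C = covariance_matrix()
samples = sample_gaussian(C)
print(samples.shape)
```

The fast decay of the eigenvalues of such a trace-class covariance is what makes the samples effectively low-dimensional.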

slide-18
SLIDE 18

Parametric PDEs

Let V denote a Hilbert space with dual V′. Given p ∈ P, μ-a.e., find u ∈ V such that a(u, v; p) = f(v) for all v ∈ V.

a(·, ·; p) : V × V → R is a bilinear form, e.g., a(u, v; p) = ∫_D κ(p) ∇u · ∇v dx.
f(·) ∈ V′ is a linear functional.
s(p) = s(u(p)) ∈ R is a quantity of interest (QoI).

Ex 1. heat conduction in thermal blocks:
κ(p) = Σ_{k=1}^K k^(−β) χ_{D_k}(x) p_k,  p ∼ U([−√3, √3]^K),  K = 16² = 256

slide-19
SLIDE 19

Parametric PDEs

Let V denote a Hilbert space with dual V′. Given p ∈ P, μ-a.e., find u ∈ V such that a(u, v; p) = f(v) for all v ∈ V.

a(·, ·; p) : V × V → R is a bilinear form, e.g., a(u, v; p) = ∫_D κ(p) ∇u · ∇v dx.
f(·) ∈ V′ is a linear functional.
s(p) = s(u(p)) ∈ R is a quantity of interest (QoI).

Ex 2. subsurface flow in a porous medium:
κ(p) = e^p, log-normal diffusion with p ∼ N(p̄, C),  K = 129² = 16,641

slide-20
SLIDE 20

Model order reduction – formulation (Maday, Patera, Rozza, et al.)

Finite element approximation:
Finite element space V_h, dim(V_h) = N_h.
Given p ∈ P, find u_h ∈ V_h s.t. a(u_h, v_h; p) = f(v_h) for all v_h ∈ V_h.
The algebraic system is A_h(p) u_h = f_h.

Reduced basis approximation:
Reduced basis space V_N ⊂ V_h, dim(V_N) = N, with basis matrix V = [ψ_1, ..., ψ_N].
Given p ∈ P, find u_N ∈ V_N s.t. a(u_N, v_N; p) = f(v_N) for all v_N ∈ V_N.
The algebraic system is A_N(p) u_N = f_N, with A_N(p) = V^T A_h(p) V, f_N = V^T f_h, and reduced coefficients u_N = V^T u_h.
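The reduced-basis solve on this slide amounts to a Galerkin projection. A minimal sketch with a synthetic A_h, f_h and a random orthonormal basis V (all stand-ins, not the talk's finite element matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
Nh, N = 200, 10
M = rng.standard_normal((Nh, Nh))
A_h = M @ M.T + Nh * np.eye(Nh)        # s.p.d. stand-in for the FE matrix
f_h = rng.standard_normal(Nh)
V, _ = np.linalg.qr(rng.standard_normal((Nh, N)))  # orthonormal reduced basis

A_N = V.T @ A_h @ V                    # reduced operator, N x N
f_N = V.T @ f_h                        # reduced right-hand side
u_N = np.linalg.solve(A_N, f_N)        # cheap reduced solve
u_rb = V @ u_N                         # lift back to the full space
u_h = np.linalg.solve(A_h, f_h)        # full solve, for comparison only
print(np.linalg.norm(u_h - u_rb) / np.linalg.norm(u_h))
```

The point of the offline-online split (next slide) is that A_N(p) and f_N can be assembled without touching N_h-dimensional objects online.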

slide-21
SLIDE 21

Model order reduction – algorithms (Maday, Patera, Rozza, et al.)

POD/SVD:
- Samples Ξ_t = {p_n, n = 1, ..., N_t}
- Compute snapshots U = [u_h(p_1), ..., u_h(p_{N_t})]
- Perform the SVD U = VΣW^T
- Extract the first N columns of V as bases, with N = argmin_n {E_n(Σ) ≥ 1 − ε}

Greedy algorithm:
- Samples Ξ_t = {p_n, n = 1, ..., N_t}
- Initialize V_N for N = 1 as V_N = span{u_h(p_1)}
- Pick the next sample such that p_{N+1} = argmax_{p ∈ Ξ_t} Δ_N(p)
- Update the bases: V_{N+1} = V_N ⊕ span{u_h(p_{N+1})}

Offline-Online:
- Affine assumption/approximation: a = Σ_{q=1}^Q θ_q(p) a_q
- Offline computation, once: A_N^q = V^T A_h^q V, f_N = V^T f_h
- Online assemble: A_N(p) = Σ_{q=1}^Q θ_q(p) A_N^q
- Online solve and evaluate: A_N(p) u_N = f_N, s(p) = s_N^T u_N

Goal-oriented a-posteriori error estimate Δ_N(p) – dual weighted residual:
Δ_N(p) = f(ϕ_N) − a(u_N, ϕ_N; p), where the dual problem reads: a(w_N, ϕ_N; p) = s(w_N) for all w_N ∈ W_N.
Δ_N(p) = f̄_N^T ϕ_N − Σ_{q=1}^Q θ_q(p) ϕ_N^T Ā_N^q u_N, where f̄_N = W^T f_h and Ā_N^q = W^T A_h^q V.
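The POD branch above can be sketched as follows, on a synthetic snapshot matrix with rapidly decaying singular values (the snapshot data and the tolerance ε are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
Nh, Nt, r = 500, 100, 20
# Synthetic snapshots U = [u_h(p_1), ..., u_h(p_Nt)] with low-rank structure.
U = (rng.standard_normal((Nh, r)) * 2.0**(-np.arange(r))) @ \
    rng.standard_normal((r, Nt))

V_full, S, WT = np.linalg.svd(U, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)          # retained energy E_n(Sigma)
eps = 1e-6
N = int(np.searchsorted(energy, 1.0 - eps) + 1)  # smallest N with E_N >= 1 - eps
V = V_full[:, :N]                                # POD basis: first N left vectors
print(N)
```

The truncation criterion guarantees that the relative snapshot-projection error (in the Frobenius sense) is at most √ε.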

slide-22
SLIDE 22

Model order reduction – samples

random, quasi-random, centroidal Voronoi tessellation (Du, Gunzburger, et al.)

slide-23
SLIDE 23

Model order reduction – samples

tensor grid, sparse grid, anisotropic sparse grid (Liao, Elman, et al.; C., Schwab, et al.)

slide-24
SLIDE 24

Model order reduction – samples

hp-adaptive RB (Eftang, Patera, et al.), adaptive add-remove (Hesthaven, Stamm, et al.), hybrid goal-oriented adaptive (C., Quarteroni, et al.)

slide-25
SLIDE 25

Outline

1. Model order reduction for parametric PDEs
2. Hessian-based sampling
3. Numerical experiments

slide-26
SLIDES 26-29 (repeated build)

Hessian-based sampling – Hessian

The Hessian H ∈ R^(K×K) collects the second-order partial derivatives of s with respect to p, i.e., H_kl = ∂²s/∂p_k∂p_l, k, l ∈ {1, ..., K}. The eigendirections corresponding to the leading eigenvalues of the Hessian are the directions along which s changes the most in the parameter space.

H = [ 2  −2
     −2   2 ],  λ₁ = 4, λ₂ = 0

Thus, sampling in the subspace of leading eigendirections presumably provides the most representative samples that capture the variation of s, at least locally. Hessian information has been widely used in large-scale computation for solving nonlinear problems, control/optimization, parameter estimation, and data assimilation.
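The 2×2 example can be checked numerically; this sketch only illustrates the rank-one structure claimed above:

```python
import numpy as np

# The slide's 2x2 Hessian: rank one, so s varies along only one direction.
H = np.array([[2.0, -2.0],
              [-2.0, 2.0]])
lam, Phi = np.linalg.eigh(H)       # eigenvalues in ascending order
print(lam)                         # approximately [0, 4]
leading = Phi[:, -1]               # dominant eigendirection, proportional to (1, -1)
null = Phi[:, 0]                   # direction along which the quadratic is flat
print(null @ H @ null)             # zero up to round-off
```

Any sample moved along `null` leaves the quadratic form unchanged, which is exactly why only the leading eigendirections are worth sampling.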

slide-30
SLIDES 30-33 (repeated build)

Hessian-based sampling – C-preconditioned Hessian

Let s_quad denote the quadratic/Taylor approximation of s given by

s_quad(p) = s(p̄) + g_p̄^T (p − p̄) + (1/2) (p − p̄)^T H_p̄ (p − p̄),   (1)

where g_p̄ and H_p̄ represent the gradient and the Hessian of s at p̄. The expectation of s_quad can be computed as

E[s_quad] = s(p̄) + (1/2) tr(H̃_p̄),   (2)

where tr(H̃_p̄) is the trace of the covariance-preconditioned Hessian H̃_p̄ = C H_p̄ at the mean p̄. It equals the sum of all the eigenvalues, i.e.,

tr(H̃_p̄) = Σ_{k=1}^K λ_k(H̃_p̄).   (3)

If the λ_k decay fast, we can sample in a low-dimensional subspace spanned by the eigenvectors:

p_L = Σ_{l=1}^L (p, ϕ_l)₂ ϕ_l.   (4)
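Identity (2) can be verified by Monte Carlo on a small synthetic example (the matrices, the value s(p̄), and the sample size below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 5
pbar = rng.standard_normal(K)
g = rng.standard_normal(K)                      # gradient at pbar (stand-in)
M = rng.standard_normal((K, K))
H = M + M.T                                     # symmetric Hessian (stand-in)
B = rng.standard_normal((K, K))
C = B @ B.T + np.eye(K)                         # s.p.d. covariance (stand-in)

# Draw p ~ N(pbar, C) and average the quadratic model s_quad.
n = 200_000
L = np.linalg.cholesky(C)
D = (L @ rng.standard_normal((K, n))).T         # rows are p - pbar
s0 = 1.7                                        # s(pbar), arbitrary
vals = s0 + D @ g + 0.5 * np.einsum('ij,jk,ik->i', D, H, D)
mc = vals.mean()
exact = s0 + 0.5 * np.trace(C @ H)              # identity (2)
print(mc, exact)
```

The linear term averages to zero under the Gaussian, and the quadratic term contributes exactly half the trace of the C-preconditioned Hessian.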

slide-34
SLIDES 34-36 (repeated build)

Hessian-based sampling – from local to global Hessian

Hessian at the mean: let (λ_k, ϕ_k)_{k=1}^K denote the eigenpairs of H̃_p̄ = C H_p̄, or equivalently the generalized eigenpairs of (H_p̄, C^(−1)) for computational efficiency:

H_p̄ ϕ_k = λ_k C^(−1) ϕ_k.   (5)

Averaged Hessian: we can replace the Hessian at the mean by

H̄ = ∫_P H_p dμ(p) ≈ (1/M) Σ_{m=1}^M H_{p_m},   (6)

with p_m sampled according to the probability distribution μ.

Combined Hessian: we compute the eigenvectors of the Hessian at different samples,

H_{p_m} ϕ_k^m = λ_k^m C^(−1) ϕ_k^m,  m = 1, ..., M.   (7)

Then we combine them with weights (e.g., w_k^m = √(λ_k^m)) and compress them by SVD:

Φ = (w_1^1 ϕ_1^1, ..., w_{L_1}^1 ϕ_{L_1}^1, ..., w_1^M ϕ_1^M, ..., w_{L_M}^M ϕ_{L_M}^M).   (8)
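A dense sketch of the generalized eigenproblem (5)/(7): with a Cholesky factor C = LLᵀ, the problem H ϕ = λ C⁻¹ ϕ reduces to a symmetric eigenproblem for LᵀHL (both matrices below are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
K = 40
M = rng.standard_normal((K, K))
H = M + M.T                              # symmetric "Hessian"
B = rng.standard_normal((K, K))
C = B @ B.T + np.eye(K)                  # s.p.d. "covariance"

# H phi = lam C^{-1} phi  <=>  (L^T H L) psi = lam psi, with phi = L psi, C = L L^T.
Lc = np.linalg.cholesky(C)
lam, Psi = np.linalg.eigh(Lc.T @ H @ Lc)
Phi = Lc @ Psi                           # generalized eigenvectors

# The eigenvalues coincide with those of the preconditioned Hessian C H.
print(np.allclose(np.sort(np.linalg.eigvals(C @ H).real), np.sort(lam)))  # True
```

The eigenvectors come out C⁻¹-orthonormal (Φᵀ C⁻¹ Φ = I), which is the normalization used throughout the slides.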

slide-37
SLIDES 37-39 (repeated build)

Hessian-based sampling – Hessian action

We employ a Lagrange multiplier method to compute the action of the Hessian:

L(u, v, p) = s(u) + f(v) − a(u, v; p),   (9)

where v is the adjoint variable or the Lagrange multiplier. From the first-order variation, we obtain the adjoint problem: find v ∈ V such that

a(w, v; p) = s(w) for all w ∈ V.   (10)

Given (u, v, p), we compute the Hessian action in direction p̂ from the second-order variation

[ ∂_uu L  ∂_uv L  ∂_up L ] [ û ]   [   0    ]
[ ∂_vu L  ∂_vv L  ∂_vp L ] [ v̂ ] = [   0    ],   (11)
[ ∂_pu L  ∂_pv L  ∂_pp L ] [ p̂ ]   [ H_p p̂ ]

which yields the incremental adjoint problem: find v̂ ∈ V such that

a(ũ, v̂; p) = −∂_p a(ũ, v; p) p̂ for all ũ ∈ V,   (12)

the incremental state problem: find û ∈ V such that

a(û, ṽ; p) = −∂_p a(u, ṽ; p) p̂ for all ṽ ∈ V,   (13)

and the Hessian action in direction p̂ as

H_p p̂ = −∂_p a(û, v; p) − ∂_p a(u, v̂; p) − ∂_pp a(u, v; p) p̂.   (14)
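For an affine-in-p linear problem A(p)u = f with A(p) = A₀ + Σ_k p_k A_k and s(u) = cᵀu, equations (10)-(14) reduce to a few linear solves. This sketch (all matrices synthetic stand-ins) verifies the Hessian-vector product against finite differences of the adjoint gradient:

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 30, 4
A0 = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned base operator
Ak = [0.1 * rng.standard_normal((n, n)) for _ in range(K)]
f = rng.standard_normal(n)
c = rng.standard_normal(n)

def A(p):
    return A0 + sum(pk * Aki for pk, Aki in zip(p, Ak))

def grad(p):
    """Adjoint gradient: g_k = -v^T A_k u, with A u = f and A^T v = c."""
    u = np.linalg.solve(A(p), f)
    v = np.linalg.solve(A(p).T, c)
    return np.array([-v @ (Aki @ u) for Aki in Ak])

def hess_action(p, phat):
    """Hessian action (14) via the incremental problems (12)-(13)."""
    Ap = A(p)
    u = np.linalg.solve(Ap, f)
    v = np.linalg.solve(Ap.T, c)
    dA = sum(ph * Aki for ph, Aki in zip(phat, Ak))   # (d_p A) phat
    uhat = np.linalg.solve(Ap, -dA @ u)               # incremental state (13)
    vhat = np.linalg.solve(Ap.T, -dA.T @ v)           # incremental adjoint (12)
    return np.array([-v @ (Aki @ uhat) - vhat @ (Aki @ u) for Aki in Ak])

p, phat = rng.standard_normal(K), rng.standard_normal(K)
eps = 1e-5
fd = (grad(p + eps * phat) - grad(p - eps * phat)) / (2 * eps)
print(np.linalg.norm(hess_action(p, phat) - fd))      # small
```

Note that ∂_pp a vanishes here because A(p) is affine; for a nonlinear parametrization the last term of (14) contributes as well.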

slide-40
SLIDES 40-42 (repeated build)

Hessian-based sampling – Randomized SVD

Algorithm 1 Randomized SVD for the generalized eigenvalue problem (H_p, C^(−1))
Input: H_p, C^(−1), the number of eigenpairs L, an oversampling factor l = 5-10.
Output: eigenpairs (Λ_L, Ψ_L) with Λ_L = diag(λ_1, ..., λ_L) and Ψ_L = (ψ_1, ..., ψ_L).

  1. Draw a Gaussian random matrix Ω ∈ R^(K×(L+l)).
  2. Compute Y = C(H_p Ω).
  3. Compute the QR-factorization Y = QR such that Q^T C^(−1) Q = I_(L+l).
  4. Form T = Q^T H_p Q and compute the eigendecomposition T = SΛS^T.
  5. Extract Λ_L = Λ(1:L, 1:L) and Ψ_L = QS_L with S_L = S(:, 1:L).

The computational cost is dominated by the two Hessian applications H_p Ω and H_p Q (hence "double-pass algorithm"), which need 4(L + l) linearized PDE solves, i.e., incremental problems. The approximation error is bounded by the remaining eigenvalues: ≤ C(L, l) (Σ_{k>L} λ_k²)^(1/2).

Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review 53.2 (2011): 217-288.
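Algorithm 1 can be sketched densely in a few lines; the Hessian, the covariance, and the C⁻¹-orthonormal QR (done here through a Cholesky factor of C) are illustrative stand-ins for the matrix-free PDE setting:

```python
import numpy as np

rng = np.random.default_rng(6)
K, L_eigs, l_over = 200, 10, 8
# Synthetic Hessian with fast eigenvalue decay, and an s.p.d. covariance.
Q0, _ = np.linalg.qr(rng.standard_normal((K, K)))
H = Q0 @ np.diag(2.0**(-np.arange(K, dtype=float))) @ Q0.T
B = rng.standard_normal((K, K))
C = B @ B.T / K + np.eye(K)
Lc = np.linalg.cholesky(C)

# Steps 1-2: probe with a Gaussian matrix, Y = C (H Omega).
Omega = rng.standard_normal((K, L_eigs + l_over))
Y = C @ (H @ Omega)
# Step 3: C^{-1}-orthonormal QR via the Cholesky factor of C.
Qt, _ = np.linalg.qr(np.linalg.solve(Lc, Y))
Q = Lc @ Qt                        # satisfies Q^T C^{-1} Q = I
# Steps 4-5: small dense eigenproblem, then extract the leading pairs.
T = Q.T @ H @ Q                    # second Hessian apply (double pass)
lam, S = np.linalg.eigh(T)
idx = np.argsort(lam)[::-1][:L_eigs]
Lam, Psi = lam[idx], Q @ S[:, idx]

# Reference: eigenvalues of the preconditioned Hessian C H.
exact = np.sort(np.linalg.eigvalsh(Lc.T @ H @ Lc))[::-1][:L_eigs]
print(np.max(np.abs(Lam - exact)))
```

With geometric eigenvalue decay and modest oversampling, the leading eigenvalue estimates are accurate to roughly the size of the discarded tail, as the bound on the slide suggests.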

slide-43
SLIDE 43

Outline

1. Model order reduction for parametric PDEs
2. Hessian-based sampling
3. Numerical experiments

slide-44
SLIDE 44

Heat conduction in thermal blocks

Let V denote a Hilbert space with dual V′. Given p ∈ P, μ-a.e., find u ∈ V such that a(u, v; p) = f(v) for all v ∈ V, with a(u, v; p) = ∫_D κ(p) ∇u · ∇v dx.

f = 0; u = 1 on top, u = 0 on bottom. The QoI is s(u(p)) = 100 ∫_{[0,0.1]²} u(p) dx.

N_t = 1000 training samples, uniform mesh of 65 × 65.

Ex 1. heat conduction in thermal blocks:
κ(p) = Σ_{k=1}^K k^(−β) χ_{D_k}(x) p_k, β = 1,  p ∼ U([−√3, √3]^K),  K = 16² = 256
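The thermal-block coefficient field on this slide can be assembled directly; the pixel resolution and the particular random draw below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
nb, beta = 16, 1.0                     # 16 x 16 blocks -> K = 256 parameters
npix = 64                              # pixel resolution for plotting (assumed)
K = nb * nb
p = rng.uniform(-np.sqrt(3), np.sqrt(3), size=K)   # p ~ U([-sqrt(3), sqrt(3)]^K)
coeff = (np.arange(1, K + 1) ** (-beta)) * p       # k^{-beta} p_k on block D_k
kappa = np.kron(coeff.reshape(nb, nb),             # piecewise-constant field
                np.ones((npix // nb, npix // nb)))
print(kappa.shape)
```

The k^(−β) scaling makes later blocks contribute less, which is what gives the problem its effectively low intrinsic dimension.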

slide-45
SLIDE 45

RB errors, POD vs Greedy, for the solution
[Plot: relative error (10^−4 to 10^−1) vs number of RB basis functions N (50-200); curves: E[||u_fe − u_rb||] and max[||u_fe − u_rb||] for POD and Greedy with random samples.]

slide-46
SLIDE 46

RB errors, POD vs Greedy, for the QoI
[Plot: relative error (10^−6 to 10^−1) vs N (50-200); curves: E[|s_fe − s_rb|] and max[|s_fe − s_rb|] for POD and Greedy with random samples.]

slide-47
SLIDE 47

RB errors, random vs Hessian, for the solution
[Plot: relative error (10^−4 to 10^−1) vs N (50-200); the POD and Greedy random-sampling curves compared with E[||u_fe − u_rb||] and max[||u_fe − u_rb||] for POD with Hessian sampling.]

slide-48
SLIDE 48

RB errors, random vs Hessian, for the QoI
[Plot: relative error (10^−8 to 10^−1) vs N (50-200); the POD and Greedy random-sampling curves compared with E[|s_fe − s_rb|] and max[|s_fe − s_rb|] for POD with Hessian sampling.]

slide-49
SLIDE 49

RB errors with different numbers of Hessian modes, for the solution
[Plot: relative error (10^−4 to 10^−1) vs N (50-200); E[||u_fe − u_rb||] for POD with random sampling and with Hessian sampling using 5, 10, and 20 modes.]

slide-50
SLIDE 50

RB errors with different numbers of Hessian modes, for the QoI
[Plot: relative error (10^−8 to 10^−2) vs N (50-200); E[|s_fe − s_rb|] for POD with random sampling and with Hessian sampling using 5, 10, and 20 modes.]

slide-51
SLIDE 51

Decay of the eigenvalues of the Hessian at the mean
[Plot: Hessian eigenvalue magnitude (10^−13 to 10^−2) vs eigenvalue index k (50-300), positive and negative eigenvalues shown separately.]

slide-52
SLIDE 52

Comparison of different Hessians, for the solution
[Plot: relative error (10^−3 to 10^−1) vs N (50-200); E[||u_fe − u_rb||] for POD with Hessian, AveragedHessian, and CombinedHessian sampling, 20 modes each.]

slide-53
SLIDE 53

Comparison of different Hessians, for the QoI
[Plot: relative error (10^−8 to 10^−2) vs N (50-200); E[|s_fe − s_rb|] for POD with Hessian, AveragedHessian, and CombinedHessian sampling, 20 modes each.]

slide-54
SLIDE 54

Subsurface flow in a porous medium

Let V denote a Hilbert space with dual V′. Given p ∈ P, μ-a.e., find u ∈ V such that a(u, v; p) = f(v) for all v ∈ V, with a(u, v; p) = ∫_D κ(p) ∇u · ∇v dx.

f = 0; u = 1 on top, u = 0 on bottom. The QoI is s(u(p)) = 100 ∫_{[0,0.1]²} u(p) dx.

N_t = 1000 training samples, uniform mesh of 129 × 129.

Ex 2. subsurface flow in a porous medium:
κ(p) = e^p, log-normal diffusion with p ∼ N(p̄, C), C = (−Δ + 0.5 I)^(−2),  K = 129² = 16,641

slide-55
SLIDE 55

RB errors, random vs Hessian, for the solution
[Plot: relative error (10^−3 to 10^0) vs N (50-200); E[||u_fe − u_rb||] and max[||u_fe − u_rb||] for POD with random and with Hessian sampling.]

slide-56
SLIDE 56

RB errors, random vs Hessian, for the QoI
[Plot: relative error (10^−4 to 10^0) vs N (20-100); POD with random sampling and with Hessian sampling using 1, 3, 7, and 15 modes.]

slide-57
SLIDE 57

Decay of the eigenvalues of the Hessian H_p̄ and the covariance C
[Plot: eigenvalue magnitude (10^−10 to 10^0) vs index k (20-100), for the covariance and the Hessian.]

slide-58
SLIDE 58

Comparison of different Hessians, for the QoI
[Plot: relative error (10^−4 to 10^0) vs N (20-100); POD with Hessian, AveragedHessian, and CombinedHessian sampling, 15 modes each.]

slide-59
SLIDE 59

Summary

Even if the solution manifold is high-dimensional, the manifold of the QoI may be low-dimensional, which can be detected from the eigenvalues of the Hessian.
A scalable Hessian-based sampling algorithm is developed, whose cost is independent of the nominal dimension and depends only on the intrinsic dimension of the QoI.
Further investigation: adaptive Hessian sampling, local-global sampling, empirical interpolation, nonlinear problems, properties of different QoIs.
Rigorous analysis of goal-oriented error estimates for Hessian-based sampling.

P. Chen and O. Ghattas. Hessian-based sampling in high-dimensional parameter space for goal-oriented model order reduction, preprint, 2017.
P. Chen, U. Villa, and O. Ghattas. Hessian-based adaptive sparse quadrature for infinite-dimensional Bayesian inverse problems, preprint, 2017.
P. Chen, U. Villa, and O. Ghattas. Taylor approximation and variance reduction for PDE-constrained optimal control problems under uncertainty, preprint, 2017.
