SLIDE 1

Tensor-based algorithms for the model reduction of high dimensional problems: application to stochastic fluid problems

  • M. Billaud Friess (marie.billaud-friess@ec-nantes.fr)

Joint work with A. Nouy and O. Zahm

CEMRACS Luminy, 2013

SLIDE 2

Introduction Ideal algorithm (IA) Perturbed ideal algorithm (PA) Ad-Re-Di problem Oseen problem Conclusions

General context

High dimensional problem in tensor spaces:

Given b ∈ Y′, seek u ∈ X solution of Lu = b.

  • L : X → Y′ a linear and continuous isomorphism
  • X = ⊗_{µ=1}^{s} X_µ (closure with respect to || · ||X), a tensor Hilbert space with dual X′ (resp. Y with dual Y′)
  • X (resp. Y) is equipped with the norm || · ||X (resp. || · ||Y)

Typical problems:

  • Stochastic partial differential equations (SPDE)
  • Parametric partial differential equations
  • High dimensional algebraic systems in tensor format arising from discretization

CEMRACS 2013, M. Billaud Friess (ECN)

SLIDE 3


Application: Stochastic PDEs arising from fluids

Problem: Find u : (x, ξ) ∈ Ω × Ξ → u(x, ξ) in X = L2(Ξ, dPξ; V) solution of L(u(·, ξ); ξ) = b(ξ), a.s., with uncertainties represented by m ∈ N random variables on (Ξ, B, Pξ): ξ ∈ Rm, and V a Hilbert space of functions on Ω ⊂ Rd. Considered examples:

  • Reaction-advection-diffusion problem: non-symmetric problem
    L = −ν△ + c(ξ) · ∇ + a(ξ), with X = L2(Ξ, dPξ; H1_0(Ω))
  • Oseen problem: non-symmetric saddle point problem

L = [ −ν(ξ)△ + a(ξ) · ∇    ∇ ]
    [ ∇·                   0 ]

    with X = L2(Ξ, dPξ; H1_0(Ω)) × L2(Ξ, dPξ; L2(Ω))

Difficulty: Curse of dimensionality ❀ Model reduction

  • Reduced basis approaches [Rozza]
  • Low rank tensor approximation (Proper Generalized Decomposition) [Nouy]

SLIDE 4


Low rank approximation

❶ Approximation in a tensor subset: u ∈ X ≈ ũ ∈ SX ⊂ X.
Rank-r canonical tensors: Rr(X) = { Σ_{i=1}^{r} ⊗_{µ=1}^{s} φ_i^µ ; φ_i^µ ∈ X_µ }, with X = ⊗_{µ=1}^{s} X_µ (closure with respect to || · ||X).
Other formats: Tucker tensors, tensor train tensors, hierarchical Tucker tensors [Khoromskij]

❷ Best approximation in SX:

  • ũ ∈ ΠSX(u) = arg min_{v∈SX} ||v − u||
  • ũ ∈ arg min_{v∈SX} ||Lv − b||∗

❸ Progressive construction of approximations with a greedy approach [Temlyakov]

Limitations of the classical approach:

× Bad convergence rate for usual norms || · ||∗ (e.g. || · ||2 for a non-symmetric operator L)
× Weakly coercive problems

SLIDE 5


Low rank approximation (case s = 2)

❶ Approximation in a tensor subset: u ∈ X ≈ ũ ∈ SX ⊂ X.
Rank-r canonical tensors: Rr(X) = { Σ_{i=1}^{r} φ_i ⊗ ψ_i ; φ_i ∈ V, ψ_i ∈ S }, with X = L2(Ξ, dPξ) ⊗ V = S ⊗ V
❀ deterministic/stochastic separation, s = 2.
Other formats: Tucker tensors, tensor train tensors, hierarchical Tucker tensors [Khoromskij]

❷ Best approximation in SX:

  • ũ ∈ ΠSX(u) = arg min_{v∈SX} ||v − u||
  • ũ ∈ arg min_{v∈SX} ||Lv − b||∗

❸ Progressive construction of approximations with a greedy approach [Temlyakov]

Limitations of the classical approach:

× Bad convergence rate for usual norms || · ||∗ (e.g. || · ||2 for a non-symmetric operator L)
× Weakly coercive problems

SLIDE 6


  • Fig. Reaction-advection-diffusion problem: convergence of the relative error ||u − ũ||2/||u||2 with the rank r of an R20 approximation, for || · ||∗ = || · ||2 (reference vs. approximation). [plot residue omitted]

SLIDE 7


Main goal of the talk

Goal: Present an approximation strategy to solve high dimensional PDEs (e.g. stochastic) in tensor subsets, relying on a best approximation problem formulated using ideal norms.

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 8


Outline

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 9


Ideal norm

Problem: Given b ∈ Y′, find the solution u of Lu = b.

  • L : X → Y′ linear operator with adjoint L∗ : Y → X′, b ∈ Y′
  • Riesz operators RX : X → X′ (resp. RY : Y → Y′):

∀u, w ∈ X: ⟨u, w⟩X = ⟨u, RXw⟩X,X′ = ⟨RXu, w⟩X′,X = ⟨RXu, RXw⟩X′

  • L is continuous:

sup_{v∈X} sup_{w∈Y} ⟨Lv, w⟩Y′,Y / (||v||X ||w||Y) = β > 0.

  • L is weakly coercive:

inf_{v∈X} sup_{w∈Y} ⟨Lv, w⟩Y′,Y / (||v||X ||w||Y) = α > 0.

  • We have the stability condition for L:

||Lu||Y′ / β ≤ ||u||X ≤ ||Lu||Y′ / α

➥ Under these assumptions L is an isomorphism [Ern].

SLIDE 10


How to choose the norms || · ||X and || · ||Y? [Cohen,Dahmen]

|| · ||X = ||L · ||Y′ ⇔ || · ||Y = ||L∗ · ||X′

  • This choice leads to an ideally conditioned problem, i.e. α = β = 1.
  • || · ||X can be chosen a priori and arbitrarily ❀ "goal oriented approximations"

Interpretation: Such a choice implies, ∀v, w ∈ X,

⟨v, w⟩X = ⟨Lv, Lw⟩Y′ = ⟨Lv, R−1_Y Lw⟩Y′,Y = ⟨v, R−1_X L∗ R−1_Y Lw⟩X

⇒ IX = R−1_X L∗ R−1_Y L ⇔ RY = L R−1_X L∗ ⇔ RX = L∗ R−1_Y L

Example: algebraic system

  • L ∈ Rn×n, u, b ∈ Rn
  • X = Y = Rn
  • RX = I, RY = LL∗
  • ||u||Y = ||L∗u||2 = ||L∗u||X
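The algebraic example can be checked numerically: with RX = I and RY = LL∗, the residual of any candidate v, measured in the dual norm || · ||Y′ induced by RY, coincides with the error ||v − u||X, i.e. the problem is ideally conditioned (α = β = 1). A hedged numpy sketch; sizes and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
L = rng.standard_normal((n, n)) + 8 * np.eye(n)  # diagonal shift keeps L invertible
u = rng.standard_normal(n)
b = L @ u
RY = L @ L.T                                     # ideal choice: R_X = I, R_Y = L L^*

v = rng.standard_normal(n)                       # an arbitrary candidate approximation
res = L @ v - b
dual_norm = np.sqrt(res @ np.linalg.solve(RY, res))  # ||Lv - b||_{Y'}
err_norm = np.linalg.norm(v - u)                     # ||v - u||_X
```

Indeed (Lv − b)ᵀ(LLᵀ)⁻¹(Lv − b) = (v − u)ᵀLᵀ(LLᵀ)⁻¹L(v − u) = ||v − u||², so minimizing the computable-in-principle dual residual is minimizing the true error.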


SLIDE 12


Best approximation problem

❶ Best approximation problem u ≈ ũ in SX ⊂ X:

  • ũ ∈ ΠSX(u) = arg min_{v∈SX} ||v − u||X ⇔ ||Lũ − b||Y′ = min_{v∈SX} ||Lv − b||Y′

❷ Equivalent problem: min_{v∈SX} ||R−1_Y(Lv − b)||Y

  • The norm || · ||Y′ is not computable directly.

❸ Exact gradient type algorithm

We seek {uk, yk}k≥0 ⊂ SX × Y, given u0 = 0, s.t.

  • yk = R−1_Y(Luk − b),
  • uk+1 ∈ ΠSX(uk − R−1_X L∗yk).

  • This ideal gradient type algorithm converges in one iteration.
  • R−1_Y(Lv − b) is not affordable in practice!
  • How to compute ΠSX in practice?
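The one-iteration property can be verified on a small s = 2 algebraic instance: with RX = I and RY = LL∗, the quantity uk − R−1_X L∗yk equals u exactly, so the first iterate is already ΠSX(u). A hedged numpy sketch, with ΠSX realized as rank-r SVD truncation of the reshaped iterate; all sizes are illustrative assumptions:

```python
import numpy as np

def svd_trunc(M, r):
    """Pi_SX realized as the best rank-r approximation (truncated SVD)."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    return W[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(2)
N, P, r = 12, 7, 3
L = rng.standard_normal((N * P, N * P)) + 20 * np.eye(N * P)  # generically invertible
u = rng.standard_normal(N * P)
b = L @ u

u0 = np.zeros(N * P)
y0 = np.linalg.solve(L @ L.T, L @ u0 - b)          # y^0 = R_Y^{-1}(L u^0 - b)
u1 = svd_trunc((u0 - L.T @ y0).reshape(N, P), r)   # u^1 = Pi_SX(u^0 - R_X^{-1} L* y^0)
ref = svd_trunc(u.reshape(N, P), r)                # Pi_SX(u) computed directly
```

In exact arithmetic u0 − Lᵀ(LLᵀ)⁻¹(Lu0 − b) = u, so u1 and the direct best rank-r approximation of u coincide; the practical question is precisely that the dense solve with RY is not affordable.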

SLIDE 13


Outline

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 15


First step

To compute: Find yk ∈ Y s.t. yk = Λδ(R−1_Y(Luk − b))

  • Λδ : Y → Y is a nonlinear mapping s.t. ∀y ∈ {R−1_Y(Lv − b); v ∈ SX} we have

||Λδ(y) − y||Y ≤ δ||y||Y, δ ∈ (0, 1)

  • yk is an approximation of R−1_Y(Luk − b) "with" a precision δ

How?

  • Preconditioned iterative solver [Powell & al.]
  • Greedy construction in a fixed small low-rank subset (e.g. R1) [Temlyakov]

SLIDE 16


Second step

To compute: Find uk+1 ∈ Sk_X s.t. uk+1 ∈ Cε(uk − R−1_X(L∗yk)).

  • Cε : X → X is a nonlinear mapping giving an approximation of uk − R−1_X(L∗yk)

− either with a fixed rank r: Cε = ΠSX. ➥ In that case Sk_X = SX is fixed at each iteration.
− or with a fixed precision ε ∈ (0, 1): Cε = ΠSk_X with ||Cε(v) − v||X ≤ ε||v||X, ∀v ∈ X. ➥ In that case Sk_X = Rrk(X) may change at each iteration.

How?

  • 1. s = 2: rank-r SVD; s > 2: HOSVD, alternating minimization algorithm
  • 2. Greedy construction in a fixed small low-rank subset: adaptivity, large rank rk, fixed precision
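For s = 2 both variants of Cε can be sketched with an SVD: fixed rank keeps r terms, fixed precision keeps the smallest rank whose discarded singular values stay below a relative tolerance. A hedged numpy illustration; function names, sizes, and data are assumptions, not the talk's implementation:

```python
import numpy as np

def trunc_fixed_rank(M, r):
    """C_eps with fixed rank r: hard rank-r SVD truncation."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    return W[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def trunc_fixed_precision(M, eps):
    """C_eps with fixed precision: smallest rank with ||C(v) - v|| <= eps ||v||."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    # tails[r] = Frobenius norm of the part discarded when keeping r terms
    tails = np.sqrt(np.append(np.cumsum((s ** 2)[::-1])[::-1], 0.0))
    r = int(np.argmax(tails <= eps * tails[0]))  # first (smallest) admissible rank
    return W[:, :r] @ np.diag(s[:r]) @ Vt[:r, :], r

rng = np.random.default_rng(5)
M = sum(3.0 ** -i * np.outer(rng.standard_normal(40), rng.standard_normal(15))
        for i in range(12))
approx, rank_used = trunc_fixed_precision(M, 1e-2)
rel_err = np.linalg.norm(M - approx) / np.linalg.norm(M)
```

The fixed-precision variant is the one whose rank rk may change from one iteration to the next.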

SLIDE 17


Summary of the algorithm

Set u0 = 0, Z0_0 = 0 and SY a low rank tensor subset.

Gradient loop: for k = 0 to K do

  • 1. Projection: yk_0 = arg min_{y∈Zk_0} ||y − rk||Y, with rk = R−1_Y(Luk − b)
  • 2. Set m = 0;

Loop for Λδ: while err(yk_m, rk) > δ do

a) m = m + 1;
b) Correction: wk_m = arg min_{w∈SY} ||yk_{m−1} + w − rk||Y;
c) Set Zk_m = Zk_{m−1} + span{wk_m};
d) Projection: yk_m = arg min_{y∈Zk_m} ||y − rk||Y;

  • 3. Compute uk+1 ∈ Cε(uk − R−1_X L∗yk_m)

Remark: The stopping criterion based on err(yk_m, rk) = ||rk − yk_m||Y/||rk||Y ≤ δ is replaced in practice by ||yk_m − yk_{m+p}||Y/||yk_{m+p}||Y ≤ δ.
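The overall loop can be sketched for s = 2 with the Λδ step realized by the slide's first option, a preconditioned-iterative-solver-style approximation (here plain conjugate gradients on RY stopped at relative tolerance δ) instead of the greedy construction, and Cε realized as rank-r SVD truncation. A hedged numpy sketch; the CG choice, all sizes, and the data are illustrative assumptions:

```python
import numpy as np

def svd_trunc(M, r):
    """C_eps realized as rank-r SVD truncation (s = 2)."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    return W[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def cg(A, rhs, tol):
    """Plain conjugate gradients on the SPD matrix A, stopped at relative residual tol."""
    y = np.zeros_like(rhs)
    res = rhs.copy()
    p = res.copy()
    while np.linalg.norm(res) > tol * np.linalg.norm(rhs):
        Ap = A @ p
        alpha = (res @ res) / (p @ Ap)
        y += alpha * p
        res_new = res - alpha * Ap
        p = res_new + (res_new @ res_new) / (res @ res) * p
        res = res_new
    return y

rng = np.random.default_rng(3)
N, P, r, delta = 10, 6, 4, 0.01
L = rng.standard_normal((N * P, N * P)) + 20 * np.eye(N * P)  # shift keeps L well conditioned
u_exact = rng.standard_normal(N * P)
b = L @ u_exact
RY = L @ L.T                     # ideal choice: R_X = I, R_Y = L L^T

u = np.zeros(N * P)
for k in range(20):
    y = cg(RY, L @ u - b, delta)                          # y^k ~ R_Y^{-1}(L u^k - b)
    u = svd_trunc((u - L.T @ y).reshape(N, P), r).ravel()  # u^{k+1} = C_eps(u^k - L* y^k)
```

With a small δ the iterates stagnate near the best rank-r approximation of u, the two-phase (decrease then stagnation) behavior analyzed on the next slide.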

SLIDE 18


Greedy approximations for Cε:

Weak greedy algorithm: [Temlyakov]

Let SX be a low rank tensor subset; find {um}m≥1, with um ∈ SX + · · · + SX = Sm_X ⊂ X, starting from u0 = 0:

1) Correction: wm ∈ arg min_{v∈SX} ||Λδ(R−1_Y(Lv − b + Lum−1))||Y
2) Update: um = um−1 + wm ∈ Sm_X
3) Stopping criterion: if m = r (fixed rank) or the fixed precision ε is reached, set uk = um

Convergence result: [Billaud,Nouy,Zahm] The weak greedy algorithm for a fixed rank subset SX converges under some criteria depending on δ.

SLIDE 19


Convergence of the overall algorithm ...

... with fixed rank:

Proposition: [Billaud,Nouy,Zahm]
Assuming 0 < 2δ < 1, the sequence {uk}k≥0 satisfies
||Ek||X ≤ (2δ)^k ||E0||X + (1/(1 − 2δ)) ||EΠ||X, ∀k > 0,
with Ek = uk − u and EΠ = u − ΠSX(u).

... with fixed precision:

Proposition:
Assuming 0 < δ(1 + ε) < 1, the sequence {uk}k≥0 satisfies
||Ek||X ≤ ((1 + ε)δ)^k ||E0||X + (ε/(1 − (1 + ε)δ)) ||u||X, ∀k > 0.

Remarks:

  • Convergence in two phases for both kinds of algorithm: decrease, then stagnation.
  • Practical interest of the second algorithm

SLIDE 20


Outline

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 21


Test case

Stochastic reaction-advection-diffusion problem: We seek u(x, ξ) satisfying a.e. in Ξ

−△u + c · ∇u + au = f in Ω, u = 0 on ∂Ω

  • f(x) = 1_Ω1(x) − 1_Ω2(x)
  • c(x, ξ) = ξ1 (1/2 − x2, x1 − 1/2), ξ1 ∼ U(−350, 350)
  • a(x, ξ) = ξ2, ξ2 ∼ lnU(0.1, 100)

[domain sketch: unit square Ω = (0, 1)² with subdomains Ω1, Ω2]

Discretization:

  • Approximation in x = (x1, x2): Q1 with N = 1521 nodes ❀ VN ⊂ V
  • Approximation in ξ: piecewise polynomial chaos of degree 5, of dimension P = 72 ❀ SP ⊂ S

Algebraic system: dimension s = 2

Lu = b with u ∈ X = RN ⊗ RP, L ∈ RN×N ⊗ RP×P and b ∈ RN ⊗ RP.

SLIDE 22


Tested norms:

  • 1. Canonical norm: ||v||²_2 = Σ_{i=1}^{N} Σ_{j=1}^{P} (vij)² : RX = IN ⊗ IP.
  • 2. Weighted norm: ||v||²_w = Σ_{i=1}^{N} Σ_{j=1}^{P} (w(xi) vij)² : RX = diag(w(xi))² ⊗ IP, with w(x) = 1000 · 1_D(x) + 1_{Ω\D}(x).

❀ "Goal oriented": measure a quantity of interest localized in D: Q(v) = (1/|D|) ∫_D v dx.

[domain sketch: unit square Ω with subdomain D]

Confronted approaches: approximation in Rr(X), with r fixed.

  • Singular Value Decomposition (SVD): ideal rank-r reference approximation
  • Classical algorithm (CA): min_{v∈SX} ||Lv − b||2
  • Perturbed algorithm (PA): δ ∈ {0.01, 0.05, 0.2, 0.5, 0.9}, with fixed rank r

SLIDE 23


First results

Fig.1 Convergence error for || · ||2 and || · ||w with SX = R20(X), for δ ∈ {0.9, 0.5, 0.2, 0.05, 0.01}, the SVD and the CA. [plots of ||u − ũ||2/||u||2 and ||u − ũ||w/||u||w vs r; residue omitted]

  • Better results with PA than with CA for both norms
  • Convergence closer to the SVD for decreasing δ → 0
  • Convergence slightly deteriorated for the w-norm

SLIDE 24


Interest of a weighted norm

Fig.2 Convergence error with the rank for both || · ||2 and || · ||w, comparing ũw and ũ2. [plot residue omitted]

  • Similar approximations when compared to the reference with the 2-norm
  • ũw is a better approximation when computed and compared to the reference with the w-norm.

SLIDE 25


Fig.3 Comparison of the first spatial modes of ũ2 and ũw

  • Different spatial modes for ũw and ũ2
  • Features captured differently

SLIDE 26


Fig.4 Comparison of the mean value and variance of Q(ũ) for ũ2 and ũw. [plots of the error vs r; residue omitted]

  • Better mean and variance for ũw than for ũ2

SLIDE 27


Gradient algorithm

Fig.5 Convergence study of the gradient type algorithm for different δ, for the 2-norm and SX = R10(X). [plot of ||u − uk||2/||u − ΠSX(u)||2 vs k for δ ∈ {0.9, 0.5, 0.2, 0.05, 0.01}; residue omitted]

  • Convergence behavior consistent with the theoretical result (decrease + stagnation)
  • Good results, even for "large" δ (near 0.5)
  • Similar results for both the 2-norm and the w-norm

SLIDE 28


Tab.1 Measured convergence rates of the linear phase for the gradient type algorithm.

r \ δ        0.90    0.50    0.20    0.05    0.01
4            0.78    0.36    ≈ 0     ≈ 0     ≈ 0
10           0.82    0.42    0.183   ≈ 0     ≈ 0
20           0.86    0.48    0.197   0.051   0.011

Tab.2 Measured stagnation values for the gradient type algorithm.

δ            0.90    0.50    0.20    0.05    0.01
2δ/(1−2δ)    n/a     n/a     6.6e-1  1.1e-1  2.1e-2
4            3.3e-1  5.6e-2  4.9e-3  3.5e-4  3.0e-5
10           5.2e-1  1.3e-1  1.7e-2  1.8e-3  3.3e-5
20           6.4e-1  1.5e-1  1.9e-2  1.2e-3  7.3e-5

  • Measured convergence rates are closer to δ than to 2δ
  • Stagnation values are overestimated by the theory: measured values are smaller than the bound 2δ/(1 − 2δ)

SLIDE 29


Outline

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 30


General framework

Problem: Given b = (f, g) ∈ Y′ = X′1 × M′2, find (u, p) ∈ X = X2 × M1 solution of

Au + B∗1 p = f,
B2 u = g,
⇔ Lu = b.

  • Continuous and linear operators A : X2 → X′1, Bi : Xi → M′i, with A∗ : X1 → X′2, B∗i : Mi → X′i
  • A is weakly coercive:

inf_{u∈K2} sup_{v∈K1} ⟨Au, v⟩X′1,X1 / (||u||X2 ||v||X1) ≥ α > 0, with Ki = { v ∈ Xi ; ⟨Biv, q⟩M′i,Mi = 0 ∀q ∈ Mi }

  • Bi satisfies the inf-sup condition:

inf_{q∈Mi} sup_{v∈Xi} ⟨Biv, q⟩M′i,Mi / (||v||Xi ||q||Mi) = βi > 0.

  • The product space X is equipped with ⟨·, ·⟩X = ⟨·, ·⟩X2 + ⟨·, ·⟩M1.

➥ Under these assumptions L is an isomorphism [Bernardi,Ern].

SLIDE 31


Best approximation problem

❶ Best approximation problem in SX2 ⊂ X2 and in SM1 ⊂ M1:

ΠSX2(u) × ΠSM1(p) = arg min_{(v,q)∈SX2×SM1} ( ||v − u||²X2 + ||q − p||²M1 )

❷ Equivalent approximation problem:

arg min_{(v,q)∈SX2×SM1} ||R−1_Y(L(v, q) − (f, g))||Y

❸ Perturbed gradient type algorithm

Seek {uk, pk, yk}k≥0 ⊂ SX2 × SM1 × Y, given u0, p0, s.t.

yk = (vk, qk) = Λδ(R−1_Y(Luk − b)),
uk+1 ∈ ΠSX2(uk − R−1_X2(A∗vk + B∗2 qk)),
pk+1 ∈ ΠSM1(pk − R−1_M1(B1 vk)).

  • Different approximations are constructed for p and u.
  • The problem is fully coupled when computing yk.
  • Perturbed algorithm:
    1. approximation of yk with "δ-precision"
    2. approximation with a fixed rank (ru, rp) or a fixed precision (η, µ) for u and p separately.

SLIDE 32


Convergence results

Proposition 1: fixed rank
Assuming 0 < δ < 1/√8, the sequence {uk, pk}k≥0 satisfies
||Ek||X ≤ (√8 δ)^k ||E0||X + (1/(1 − √8 δ)) ||EΠ||X, ∀k > 0,
with Ek = (uk − u, pk − p) and EΠ = (u − ΠSX2(u), p − ΠSM1(p)).

Proposition 2: fixed precision η, µ
Assuming 0 < δ(1 + ε) < 1/√2, the sequence {uk, pk}k≥0 satisfies
||Ek||X ≤ (√2 δ(1 + ε))^k ||E0||X + (ε/(1 − √2 δ(1 + ε))) ||u||X, ∀k > 0,
with ε = max(η, µ) and u = (u, p).

  • Two phases of convergence: decrease, then stagnation.
  • The a priori estimate, when iterates are computed with fixed precision, depends on the coarser approximation.
  • Global error estimation due to R−1_Y

SLIDE 33


Test case

Oseen equations with uncertain parameters: Given (Ξ, B(Ξ), Pξ) a probability space, find u ∈ V = L2(Ξ; [H1(Ω)]d), p ∈ P = L2(Ξ; L2(Ω)) satisfying a.e. in Ξ

−ν△u + (a · ∇)u + ∇p = 0 in Ω,
∇ · u = 0 in Ω,
u = ū on ∂Ω.

[domain sketch: unit square, ū = (1, 0) on the top boundary, ū = (0, 0) elsewhere]

Uncertainties on a and ν are represented by random variables:

  • ν(ξ) = ν0 + ξ1, ξ1 ∼ U(−1, 1)
  • a(ξ) = a0(1 + ξ2), ξ2 ∼ N(0, 1)

Discretization: stochastic Galerkin approach

  • Space: P2 − P1 ❀ n = 2189, m = 568 nodes
  • Uncertainties: polynomial chaos of degree 3 ❀ dimension P = 10

➥ Vn ⊗ SP ⊂ V and Vm ⊗ SP ⊂ P

SLIDE 35


Algebraic system: dimension s = 2

Find u ∈ X2 = Rn ⊗ RP and p ∈ M1 = Rm ⊗ RP s.t.

[ A  B∗ ] [ u ]   [ f ]
[ B  0  ] [ p ] = [ g ]

with A ∈ Rn×n ⊗ RP×P, B ∈ Rm×n ⊗ RP×P and f ∈ Rn ⊗ RP, g ∈ Rm ⊗ RP.

Canonical norm: ||v||²_2 = Σij (vij)² ⇒ RX2 = In ⊗ IP, RM1 = Im ⊗ IP

⇒ RY = [ A R−1_X2 A∗ + B∗ R−1_M1 B    A R−1_X2 B∗ ]
       [ B R−1_X2 A∗                  B R−1_X2 B∗ ]

Confronted approaches: approximations with fixed precisions η, µ

  • Singular Value Decomposition (SVD): ideal rank-r reference approximation, since s = 2
  • Perturbed Algorithm (PA): δ ∈ {0.05, 0.1, 0.2, 0.5, 0.7} with η = 1e−6 and µ = 1e−4
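The displayed block form of RY can be checked numerically against RY = L R−1_X L∗ on a small dense instance with RX2 = In and RM1 = Im, so that RY = LL∗. A hedged numpy sketch; sizes and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 9, 4
A = rng.standard_normal((n, n)) + 4 * np.eye(n)
B = rng.standard_normal((m, n))

# Saddle point operator L = [[A, B^T], [B, 0]]
L = np.block([[A, B.T], [B, np.zeros((m, m))]])

# R_Y = L R_X^{-1} L^* with R_X2 = I_n, R_M1 = I_m, expanded blockwise as on the slide
RY_blocks = np.block([
    [A @ A.T + B.T @ B, A @ B.T],
    [B @ A.T,           B @ B.T],
])
```

Multiplying out L Lᵀ reproduces the four blocks term by term, which is what the assembled RY used by the perturbed algorithm looks like in the canonical norm.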

SLIDE 36


First results

Fig.6 Relative convergence to the reference in || · ||2 for both pressure and velocity: ||ũ − uref||X2/||uref||X2 and ||p̃ − pref||M1/||pref||M1 vs the number of modes m, for δ ∈ {0.7, 0.5, 0.2, 0.1, 0.05} and the SVD. [plot residue omitted]

  • Good agreement with the reference
  • Influence of the pressure approximation on the velocity

SLIDE 37


Fig.7 Comparison of the mean fields of velocity and pressure for the SVD reference (left) and the computed PA approximation (right)

  • Mean profiles in good agreement with the deterministic profiles
  • Good agreement between the two approaches

SLIDE 38


Gradient algorithm

Fig.8 Convergence study of the gradient type algorithm for different δ, for the 2-norm and the respective precisions η = 10−6, µ = 10−4. [plot of ||u − uk||X/||u||X vs k for δ ∈ {0.7, 0.5, 0.2, 0.1, 0.05}; residue omitted]

Tab.3 Measured convergence rates of the linear phase for the gradient type algorithm.

δ            0.70   0.50   0.20   0.1    0.05
Th.          0.99   0.71   0.28   0.14   0.07
Me.          0.37   0.27   0.06   0.09   0.03

Tab.4 Measured stagnation values for the gradient type algorithm.

δ            0.70   0.50   0.20   0.1    0.05
Th. (10−4)   100    3.41   1.39   1.16   1.07
Me. (10−5)   5.53   5.28   5.08   5.04   5.03

➥ Both the rate of convergence and the stagnation value are overestimated by the theory.

SLIDE 39


Outline

1 Ideal algorithm (IA)
2 Perturbed ideal algorithm (PA)
3 Ad-Re-Di problem
4 Oseen problem
5 Conclusions

SLIDE 41


Overview

Conclusions:

  • Robust model reduction approach
  • Ideal minimal residual formulation with ideal norms ❀ convergence improvement
  • Validation on SPDEs with applications arising from fluid mechanics
  • Arbitrary choice of the norm, associated to a specific problem ❀ goal oriented

Needed improvements: algorithms for R−1_Y

  • Preconditioned solvers [Powell]
  • Approximation of R−1_Y in low-rank tensor subsets [Giraldi,Nouy,Legrain]
  • Stopping criterion for the dual problem ❀ still an open problem

Further exploration:

  • "Real" goal oriented explorations ❀ PhD of O. Zahm, in collaboration with A. Nouy
  • Uzawa algorithm for HD saddle point problems, in collaboration with V. Erlacher

Thank you for your attention

SLIDE 42

Bibliography

[Bernardi] C. Bernardi, C. Canuto, Y. Maday: Generalized inf-sup conditions for Chebyshev spectral approximation of the Stokes problem, SIAM J. Numer. Anal., 25(6), pp. 1237-1271, 1988.
[Billaud] M. Billaud Friess, A. Nouy, O. Zahm: A tensor approximation method based on ideal minimal residual formulations for the solution of high dimensional problems, in revision, 2013.
[Cohen] A. Cohen, W. Dahmen, G. Welper: Adaptivity and variational stabilization for convection-diffusion equations, preprint, 2011.
[Dahmen] W. Dahmen, C. Huang, C. Schwab: Adaptive Petrov-Galerkin methods for first order transport equations, IGPM Report 321, RWTH Aachen, 2011.
[Ern] A. Ern, J.-L. Guermond: Theory and Practice of Finite Elements, Applied Mathematical Sciences, vol. 159, 2004.
[Giraldi] L. Giraldi, A. Nouy, G. Legrain: Low-rank approximate inverse for preconditioning tensor-structured linear systems, arXiv:1304.6004, 2013.
[Nouy] A. Nouy: Proper Generalized Decompositions and separated representations for the numerical solution of high dimensional stochastic problems, Archives of Computational Methods in Engineering, 17(4), 2010.
[Powell] C.E. Powell, D.J. Silvester: Preconditioning steady-state Navier-Stokes equations with random data, MIMS EPrint 2012.35, 2012.
[Temlyakov] V. Temlyakov: Greedy Approximation, 2011.