SLIDE 1

On the solution of Bingham fluids and a Preconditioned Douglas-Rachford splitting method for Viscoplastic fluids

Sofía López, joint work with Sergio González

LAWOC 2018, EPN, Quito, September 6th, 2018

SLIDE 2

Motivation: Bingham fluids

SLIDE 3

Motivation: Bingham fluids

◮ Bingham fluids exhibit no deformation up to a certain level of stress (the yield stress).
◮ The material behaves like a rigid solid when the shear stress is below the yield stress.
◮ Above the yield stress, the material flows like a fluid.

SLIDE 4

Stationary Bingham flow in geometries of Rᵈ

Flow equations for a Bingham fluid in Ω ⊂ Rᵈ:

  div u = 0,
  q = 2µ E(u) + √2 g E(u)/|E(u)|,   if E(u) ≠ 0,
  |q| ≤ g,                          if E(u) = 0,
  Div σ − ∇p + f = 0 in Ω,
  + boundary conditions,

where
◮ u: velocity field of the fluid,
◮ q: stress tensor,
◮ p: pressure,
◮ E: strain velocity tensor.


SLIDE 6

Stationary Bingham flow in a pipe

Simplified model: the fluid flows under a pressure drop, and the strain velocity tensor reduces to ∇u.

  −∆u − div q = f,   in Ω,
  q = g ∇u/|∇u|,     if ∇u ≠ 0,
  |q| ≤ g,           if ∇u = 0,
  u = 0,             on Γ.

This is the optimality system of the energy minimization problem (Mosolov-Miasnikov, 1965).


SLIDE 8

Energy minimization problem

Energy minimization problem: a nondifferentiable convex functional

  min_{u ∈ H¹₀(Ω)} J(u) := ½ ∫_Ω |∇u|² dx + g ∫_Ω |∇u| dx − ∫_Ω f u dx.

Necessary and sufficient optimality condition (variational inequality): find u ∈ H¹₀(Ω) such that

  ∫_Ω ∇u · ∇(v − u) dx + g ∫_Ω |∇v| dx − g ∫_Ω |∇u| dx ≥ ∫_Ω f(v − u) dx,   for all v ∈ H¹₀(Ω).
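
The energy above can be evaluated directly once discretized. A minimal 1D finite-difference sketch; the grid and the values of f and g are illustrative assumptions, not the talk's setup:

```python
import numpy as np

def bingham_energy(u, f, g, h):
    """Discrete 1D Bingham energy (illustrative sketch):
    J(u) = 1/2 ∫|∇u|² + g ∫|∇u| − ∫ f u, approximated with forward
    differences and a Riemann sum; u carries the boundary values
    u[0] = u[-1] = 0."""
    du = np.diff(u) / h                     # forward-difference gradient
    return h * (0.5 * np.sum(du**2) + g * np.sum(np.abs(du))
                - np.sum(f * u[:-1]))

n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n)                              # f = 1, as in the experiments later
u = 0.05 * np.sin(np.pi * x)                # admissible test field, zero on the boundary
print(bingham_energy(u, f, 0.2, h))         # negative: better than u = 0
```

The zero field has energy zero, so any field with negative energy is closer to the minimizer.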

SLIDE 9

Duality

Primal problem:

  min_{u ∈ H¹₀(Ω)} J(u) := ½ ∫_Ω |∇u|² dx + g ∫_Ω |∇u| dx − ∫_Ω f u dx.

Dual problem:

  sup_{|q(x)| ≤ g} − ½ ∫_Ω |∇u|² dx
  subject to: (∇u, ∇v) − (f, v) + (q, ∇v) = 0, for all v ∈ H¹₀(Ω).

SLIDE 13

Duality

Primal approach:

◮ The location of the interface between the yielded and unyielded zones is crucial in solving flow problems.
◮ Previous contributions: replace the fluid properties with a bi-viscosity model.
◮ Regularized methods: replace the non-differentiable term g ∫_Ω |∇u| dx by a regularization.

Difficulty: important discrepancies arise with respect to the original problem.

Primal approach, direct global regularization: Khatib, Wilson (2001); Glowinski, Lions, Tremolieres (1976); Glowinski (1984); Frigaard, Nouar (2005); Dean, Glowinski, Guidoboni (2007), ...

SLIDE 15

Duality

Dual approach

  min_{|q(x)| ≤ g} ½ ∫_Ω |∇u|² dx
  subject to: (∇u, ∇v) + (q, ∇v) = (f, v), for all v ∈ H¹₀(Ω).

No unique solution!

SLIDE 17

Duality

Penalized dual approach

  min_{|q(x)| ≤ g} ½ ∫_Ω |∇u|² dx + 1/(2γ) ‖q‖²_{L²}
  subject to: (∇u, ∇v) + (q, ∇v) = (f, v), for all v ∈ H¹₀(Ω).

Theorem
There exists a unique solution (q_γ, y_γ) ∈ L²(Ω) × H¹₀(Ω) to the penalized dual problem.

Multiplier approach, use of dual information: Glowinski (1984); Glowinski, Le Tallec (1989); Sánchez (1998); Roquet, Saramito (2003, 2008); Huilgol, You (2005); Dean et al. (2007); Muravleva, Muravleva (2009); Olshanskii (2009).

SLIDE 20

Regularized optimality system

[Figure: graph of the regularized function ψγ for γ = 5, 10, 50.]

SLIDE 21

Regularized optimality system

◮ This penalization corresponds to a regularization of the primal problem.
◮ It changes the functional structure locally.

Let us introduce, for γ > 0, the function ψγ such that:

  ψγ(z) = g|z| − g²/(2γ)   if |z| > g/γ,
  ψγ(z) = (γ/2)|z|²        if |z| ≤ g/γ.

  min_{u ∈ H¹₀(Ω)} J(u) := ½ ∫_Ω |∇u|² dx + ∫_Ω ψγ(∇u) dx − ∫_Ω f u dx.

◮ The primal problem turns into the minimization of a continuously differentiable function.
◮ The dual problem is a constrained minimization of a quadratic functional.
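
A direct transcription of ψγ as a sketch; the C¹ matching of the two branches at |z| = g/γ can be checked numerically (the specific g and γ values are illustrative):

```python
import numpy as np

def psi_gamma(z, g, gamma):
    """Local regularization ψγ of t ↦ g·t, following the slides:
    quadratic below the threshold g/γ, affine above, C¹ at the kink."""
    z = np.abs(z)
    return np.where(z > g / gamma,
                    g * z - g**2 / (2.0 * gamma),   # affine branch
                    0.5 * gamma * z**2)             # quadratic branch

# Both branches take the value g²/(2γ) and slope g at |z| = g/γ,
# so ψγ is continuously differentiable.
g, gamma = 0.2, 10.0
t = g / gamma
print(psi_gamma(t - 1e-9, g, gamma), psi_gamma(t + 1e-9, g, gamma))
```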

SLIDE 23

Optimality system

Recall the extremality conditions:

  −∆u − div q = f,   in Ω,
  q = g ∇u/|∇u|,     if ∇u ≠ 0,
  |q| ≤ g,           if ∇u = 0,
  u = 0,             on Γ.

Regularized optimality system

  −∆u_γ − div q_γ = f,   in Ω,
  max(g, γ|∇u_γ|) q_γ = gγ∇u_γ,   for γ > 0.
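
The second regularized equation can be solved pointwise for q_γ, giving q_γ = gγ∇u_γ / max(g, γ|∇u_γ|). A small numpy check, with illustrative 1D stand-in values for ∇u, confirms that this dual variable always satisfies |q_γ| ≤ g:

```python
import numpy as np

def q_gamma(grad_u, g, gamma):
    """Pointwise dual variable from max(g, γ|∇u|) q = gγ∇u (1D sketch)."""
    return g * gamma * grad_u / np.maximum(g, gamma * np.abs(grad_u))

grad_u = np.linspace(-2.0, 2.0, 101)     # illustrative gradient values
q = q_gamma(grad_u, g=0.2, gamma=50.0)
# |q| ≤ g everywhere, and q ≈ g·sign(∇u) wherever γ|∇u| ≫ g.
print(np.max(np.abs(q)))
```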

SLIDE 26

Optimality system

Discretization: finite differences

  A_h u + Q_h q − f = 0,
  max(g e, γ ξ(∇_h u)) ⋆ q − gγ∇_h u = 0,   for γ > 0.

Difficulty for Newton-type algorithms: the max function is not differentiable!

SLIDE 28

Semismooth Newton method

Definition (Newton differentiability)
If there exists a neighborhood N(x∗) ⊂ S and a family of mappings G : N(x∗) → L(X, Y) such that

  lim_{‖h‖_X → 0} ‖F(x∗ + h) − F(x∗) − G(x∗ + h)(h)‖_Y / ‖h‖_X = 0,

then F is called Newton differentiable at x∗.

Differentiability of the max function
The mapping y ↦ max(0, y) from Rⁿ to Rⁿ, with generalized derivative

  g(y) = 1 if y ≥ 0,   g(y) = 0 if y < 0,

is Newton differentiable.

SLIDE 29

Semismooth Newton method

Semismooth Newton step

  x_{k+1} = x_k − G(x_k)⁻¹ F(x_k).

De los Reyes, González (2009, 2012)
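
A toy componentwise equation, a hypothetical example rather than the talk's system, illustrates the SSN step with the generalized derivative of max:

```python
import numpy as np

def ssn_solve(b, iters=20):
    """Semismooth Newton for F(x) = x + max(0, x) - b = 0 (toy example).

    The generalized derivative of max(0, ·) is 1 for x ≥ 0 and 0
    otherwise, so G(x) = 1 + 1_{x ≥ 0} is diagonal and invertible."""
    x = np.zeros_like(b)
    for _ in range(iters):
        F = x + np.maximum(0.0, x) - b
        if np.max(np.abs(F)) < 1e-12:
            break
        G = 1.0 + (x >= 0.0).astype(float)   # Newton derivative (diagonal)
        x = x - F / G
    return x

b = np.array([1.0, -0.5, 3.0])
print(ssn_solve(b))   # componentwise: x = b/2 if b ≥ 0, x = b if b < 0
```

On this piecewise-linear equation the method terminates in a couple of steps, the typical local superlinear behavior of SSN.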

SLIDE 30

Regularization + Parallelization

How can systems of this type be solved for a large number of discretization points?

SLIDE 31

Regularization + Parallelization: Domain Decomposition

Divide the domain Ω into overlapping subdomains Ωl such that Ω = ⋃_{l=1}^{K} Ωl.

◮ K: number of subdomains,
◮ Γ1 = ∂Ω1 ∩ Ω2,
◮ Γ2 = ∂Ω2 ∩ Ω1.
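
One way such an overlapping covering might be set up on a 1D index set; an illustrative sketch, not the talk's implementation:

```python
import numpy as np

def overlapping_subdomains(n, K, overlap):
    """Split grid indices {0, ..., n-1} into K overlapping blocks (sketch).

    Each interior subdomain extends `overlap` points into its neighbors,
    mimicking Ω = ∪ Ωl with Γl1, Γl2 the artificial interfaces."""
    edges = np.linspace(0, n, K + 1, dtype=int)
    doms = []
    for l in range(K):
        lo = max(edges[l] - overlap, 0)
        hi = min(edges[l + 1] + overlap, n)
        doms.append(np.arange(lo, hi))
    return doms

doms = overlapping_subdomains(n=100, K=4, overlap=5)
# Neighboring subdomains share 2*overlap points and the covering is complete.
print([(d[0], d[-1]) for d in doms])
```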

SLIDE 32

Regularization + Parallelization: Domain Decomposition for Bingham

For 1 < l < K:

  A_h u_l + Q_h q_l − f = 0,   in Ω_l,
  max(g, γ|∇u_l|) q_l − gγ∇u_l = 0,   a.e. in Ω_l,
  u_l^{k+1} = u_{l−1}^{k}   on Γ_{l1},
  u_l^{k+1} = u_{l+1}^{k}   on Γ_{l2}.

SLIDE 33

Regularization + Parallelization: Numerics

[Figure: surface plots of the computed subdomain solutions.]

◮ Reformulate the smaller problems with new boundary conditions.
◮ Time reduction.

SLIDE 36

Regularization

Summary:

◮ Primal approach
◮ Multiplier approach: uses dual information
◮ Regularized optimality system
◮ SSN method
◮ SSN method + parallelization

Regularization may mask the true effect of the yield stress in the fluid.

  min_{u ∈ H¹₀(Ω)} J(u) := F(u) + G∗(Ku),
  with F(u) = ½ ∫_Ω |∇u|² dx − ∫_Ω f u dx   and   G∗(Ku) = g ∫_Ω |∇u| dx.

SLIDE 40

Regularization: Primal-Dual approach

  min_{u ∈ H¹₀(Ω)} J(u) := F(u) + G∗(Ku),
  with F(u) = ½ ∫_Ω |∇u|² dx − ∫_Ω f u dx   and   G∗(Ku) = g ∫_Ω |∇u| dx.

Primal problem:   min_{u ∈ H¹₀(Ω)} F(u) + G∗(Ku).

Dual problem:   max_{q ∈ L²(Ω)} −F∗(−K∗q) − G(q).

Saddle-point problem:   min_{u ∈ H¹₀(Ω)} max_{q ∈ L²(Ω)} ⟨Ku, q⟩ + F(u) − G(q).

The functionals F : H¹₀(Ω) → R∞ and G : L²(Ω) → R∞ are proper, convex and l.s.c., and K = ∇ : H¹₀(Ω) → L²(Ω) is a continuous linear mapping.

SLIDE 42

Methodology: Douglas-Rachford Method

Primal-dual optimality system for the saddle-point problem:

  0 ∈ −Ku + ∂G(q),
  0 ∈ K∗q + ∂F(u).

If we consider

  A = [ ∂F  0 ; 0  ∂G ]   and   B = [ 0  K∗ ; −K  0 ],

the optimality condition becomes: find z = (u, q) such that 0 ∈ Az + Bz, with A, B maximally monotone operators.

L. M. Briceño-Arias, P. L. Combettes (2011).

SLIDE 43

Primal-Dual Methods and Augmented Lagrangians

◮ Augmented Lagrangian methods: Fortin, Glowinski and Le Tallec (1983, 1989).
◮ Alternating Direction Methods (ADMM), solving the Augmented Lagrangian method inexactly: Glowinski, Fortin (1989).
◮ Primal-Dual Chambolle-Pock algorithm: A. Chambolle, T. Pock (2011).
◮ Modifications of ADMM, ...

SLIDE 46

Methodology: Douglas-Rachford Method

◮ Douglas-Rachford methods consider general maximally monotone choices of A and B.
◮ The Douglas-Rachford iteration reads:

  z_{k+1} = (I + σB)⁻¹( (I + σA)⁻¹(I − σB) + σB ) z_k,

  where (I + σA)⁻¹(I − σB) is the forward-backward part.

◮ The forward step is corrected, or cancelled out in some sense, by σB z_k.
◮ The backward step is taken on B.

However, if we introduce v_k = z_k + σB z_k, we can rewrite the iteration as follows:

  z_{k+1} = J_{σB}(v_k),
  v_{k+1} = v_k + J_{σA}(2 z_{k+1} − v_k) − z_{k+1},

where σ > 0 and J_{σB} = (I + σB)⁻¹, J_{σA} = (I + σA)⁻¹ are the resolvent operators.
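
The resolvent form of the iteration can be exercised on a toy splitting, here with A = ∂F for a quadratic F and B = ∂G for G = g‖·‖₁, a stand-in for the talk's saddle-point operators; z_k converges to the minimizer of F + G:

```python
import numpy as np

# Douglas-Rachford in resolvent form on 0 ∈ Az + Bz with
#   A = ∂F, F(z) = ½‖z − a‖²,   B = ∂G, G(z) = g‖z‖₁.
# All values below are illustrative.
a = np.array([3.0, 0.1, -2.0])
g, sigma = 0.2, 1.0

def J_sigmaA(v):            # resolvent of A: prox of σF
    return (v + sigma * a) / (1.0 + sigma)

def J_sigmaB(v):            # resolvent of B: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - sigma * g, 0.0)

v = np.zeros_like(a)
for _ in range(200):
    z = J_sigmaB(v)
    v = v + J_sigmaA(2.0 * z - v) - z

# z tends to the minimizer of F + G, i.e. soft-thresholding of a by g.
print(z)
```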

SLIDE 48

Methodology: Douglas-Rachford Method

◮ The method requires, in each iteration, the solution of an implicit equation for the evaluation of the resolvents.
◮ Incorporate a linear subproblem to evaluate the resolvents.
◮ The linear subproblem may be solved approximately.
◮ The inexact solution is performed by preconditioners.

K. Bredies, H. Sun (2015)

SLIDE 50

1-2 Linear subproblem solved approximately

Introduce w with w ∈ σAz. Then the problem reads:

  0 ∈ [ σBz + w ; −z + (σA)⁻¹w ] =: Ax.   (1)

Letting x = (z, w), the problem becomes 0 ∈ Ax.

SLIDE 52

3 Inexact solution is performed by preconditioners

Preconditioner: introduce a linear and continuous preconditioner M such that:

  0 ∈ M(x_{k+1} − x_k) + A x_{k+1}.

Applying Douglas-Rachford to solve this in terms of x_k = (z_k, w_k) and the resolvents leads to the iteration:

  z_{k+1} = J_{σB}(z_k − w_k),
  w_{k+1} = w_k + J_{(σA)⁻¹}(2 z_{k+1} − z_k + w_k).

SLIDE 54

3 Inexact solution is performed by preconditioners

Since we introduced w = (ũ, q̃) ∈ σAz, the optimality condition corresponds to the

Optimality conditions for the auxiliary variables

  ũ ∈ σ∂F(u),   q̃ ∈ σ∂G(q).

In view of the primal-dual variables, the preconditioner acts on each variable separately, introducing the operators N1, N2: linear, continuous and self-adjoint.

SLIDE 55

3 Inexact solution is performed by preconditioners

Therefore, to solve 0 ∈ M(x_{k+1} − x_k) + A x_{k+1}, M is defined by

  M = [ N1   0  −I   0
         0  N2   0  −I
        −I   0   I   0
         0  −I   0   I ]

SLIDE 57

3 Inexact solution is performed by preconditioners

Let ū = u − ũ and q̄ = q − q̃. The iteration reads:

  u_{k+1} = N1⁻¹[(N1 − I) u_k + ū_k − σK∗ q_{k+1}],
  q_{k+1} = N2⁻¹[(N2 − I) q_k + q̄_k + σK u_{k+1}],
  ū_{k+1} = ū_k + (I + σ∂F)⁻¹[2 u_{k+1} − ū_k] − u_{k+1},
  q̄_{k+1} = q̄_k + (I + σ∂G)⁻¹[2 q_{k+1} − q̄_k] − q_{k+1}.

Taking N2 = I, and since u_{k+1} and q_{k+1} are implicit, we can plug one variable into the other and obtain:

  (N1 + σ²K∗K) u_{k+1} = (N1 − I) u_k + ū_k − σK∗ q̄_k

SLIDE 60

3 Inexact solution is performed by preconditioners

Introduce an operator M such that M = N1 + σ²K∗K. Then we have

  x_{k+1} = x_k + M⁻¹[x̄_k − σK∗ȳ_k − (I + σ²K∗K) x_k].

Introducing T = I + σ²K∗K and b_k = x̄_k − σK∗ȳ_k, this can be rewritten as

  x_{k+1} = x_k + M⁻¹(b_k − T x_k).

M is a preconditioner for the operator equation T x_{k+1} = b_k.

SLIDE 61

Algorithm PDR

Summarizing, the algorithm reads:

Algorithm
Initialization: u0, q0, ū0, q̄0 initial guess; σ > 0 stepsize; T = I + σ²K∗K.
Iteration:

  b_k = ū_k − σK∗ q̄_k,
  u_{k+1} = u_k + M⁻¹(b_k − T u_k),
  q_{k+1} = q̄_k + σK u_{k+1},
  ū_{k+1} = ū_k + (I + σ∂F)⁻¹[2 u_{k+1} − ū_k] − u_{k+1},
  q̄_{k+1} = q̄_k + (I + σ∂G)⁻¹[2 q_{k+1} − q̄_k] − q_{k+1}.
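
The algorithm can be sketched on a small 1D model problem. This is an illustrative Euclidean stand-in, not the talk's configuration: the slides' resolvents act in H¹₀(Ω), here plain ℓ² proxes are used; the preconditioned step u ← u + M⁻¹(b − Tu) is run with the exact choice M = T; and all sizes and parameters are assumptions for the demo:

```python
import numpy as np

# Sketch of Algorithm PDR on   min_u ½‖Ku‖² − (f, u) + g Σ_i |(Ku)_i|,
# with K a one-sided finite-difference matrix.
rng = np.random.default_rng(0)
n, sigma, g = 16, 1.0, 0.01
K = -np.eye(n) + np.eye(n, k=1)           # invertible difference matrix
A = K.T @ K                               # so F(u) = ½ uᵀAu − fᵀu
f = 0.1 * np.ones(n)

Tinv = np.linalg.inv(np.eye(n) + sigma**2 * K.T @ K)   # M = T, solved exactly
Pinv = np.linalg.inv(np.eye(n) + sigma * A)            # for the prox of σF

def prox_F(v):                            # (I + σ∂F)⁻¹ in the ℓ² sense
    return Pinv @ (v + sigma * f)

def proj_G(q):                            # (I + σ∂G)⁻¹: projection onto {|q| ≤ g}
    return q / np.maximum(1.0, np.abs(q) / g)

u, q = np.zeros(n), np.zeros(n)
ub, qb = np.zeros(n), np.zeros(n)         # the "bar" variables ū, q̄
for _ in range(20000):
    b = ub - sigma * (K.T @ qb)
    u = Tinv @ b                          # u_{k+1} = u_k + M⁻¹(b_k − T u_k)
    q = qb + sigma * (K @ u)
    ub = ub + prox_F(2.0 * u - ub) - u
    qb = qb + proj_G(2.0 * q - qb) - q

def J(v):                                 # primal objective
    return 0.5 * v @ A @ v - f @ v + g * np.sum(np.abs(K @ v))

# u should (approximately) minimize J: no small perturbation improves it.
print(J(u))
```

No regularization of the nonsmooth term is needed: the yield-stress term enters only through the pointwise projection.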

SLIDE 62

Application to Bingham fluids

  min_{u ∈ H¹₀(Ω)} J(u) := F(u) + G∗(Ku),
  with F(u) = ½ ∫_Ω |∇u|² dx − ∫_Ω f u dx   and   G∗(Ku) = g ∫_Ω |∇u| dx.

Things to do:

◮ Find a preconditioner for T = I + σ²K∗K.
◮ Calculate the resolvents (I + σ∂F)⁻¹ and (I + σ∂G)⁻¹.

SLIDE 63

Application to Bingham fluids

Using the red-black ordering of the five-point stencil, T admits a symmetric block representation:

SLIDE 64

Application to Bingham fluids

Then we have

  T = [ D1  A ; A∗  D1 ],   (2)

and the symmetric Gauss-Seidel preconditioner M can be employed:

  M = (D − E) D⁻¹ (D − F),

where (D − E) and (D − F) are the lower and upper triangular parts of T, including the diagonal, respectively.

T. Goldstein, S. Osher (2009).
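
A sketch of the SGS preconditioner inside the Richardson-type update x ← x + M⁻¹(b − Tx); here T is a 1D stand-in for I + σ²K∗K and all sizes are illustrative:

```python
import numpy as np

n, sigma = 20, 1.0
Kd = -np.eye(n) + np.eye(n, k=1)            # forward-difference stand-in for K
T = np.eye(n) + sigma**2 * Kd.T @ Kd        # T = I + σ²K∗K (SPD)

D = np.diag(np.diag(T))                     # splitting T = D − E − F
E = -np.tril(T, -1)                         # strict lower part
F = -np.triu(T, 1)                          # strict upper part
M = (D - E) @ np.linalg.inv(D) @ (D - F)    # symmetric Gauss-Seidel preconditioner

b = np.ones(n)
x = np.zeros(n)
for _ in range(300):
    x = x + np.linalg.solve(M, b - T @ x)   # preconditioned Richardson step

print(np.linalg.norm(T @ x - b))            # residual after 300 sweeps
```

In practice M⁻¹ would be applied by one forward and one backward triangular sweep rather than a dense solve; the dense solve keeps the sketch short.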

SLIDE 66

Application to Bingham fluids

Finally, for the resolvents of the Bingham minimization problem:

  (I + σ∂F)⁻¹(u) = argmin_{u′ ∈ H¹₀} σF(u′) + ½ ‖u′ − u‖²,

with the norm ‖u‖²_{H¹₀} = ∫_Ω |∇u|² dx. The argmin u′ is then the solution of

  −(σ + 1) ∫_Ω ∆u′ v dx − σ ∫_Ω f v dx + ∫_Ω ∆u v dx = 0,

which corresponds to the weak formulation of the problem:

  −(σ + 1)∆u′ = σf − ∆u,   u′ = 0 on ∂Ω.   (3)

SLIDE 67

Application to Bingham fluids

And since G = I_{{|q| ≤ g}}, we have that

  (I + σ∂G)⁻¹(q) = P_g(q) = q / max(1, |q|/g).
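
Pointwise, this resolvent is just the projection onto the ball of radius g, independent of σ; a minimal numpy version with illustrative values:

```python
import numpy as np

def resolvent_G(q, g):
    """(I + σ∂G)⁻¹ for G the indicator of {|q| ≤ g}: the pointwise
    projection P_g(q) = q / max(1, |q|/g)."""
    return q / np.maximum(1.0, np.abs(q) / g)

q = np.array([0.05, -0.3, 1.7])
print(resolvent_G(q, g=0.2))   # values inside the ball are kept, the rest are clipped to ±0.2
```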

SLIDE 68

Preliminary results

f = 1, g = 0.2; stopping criterion: ‖u_{k+1} − u_k‖ ≤ 10⁻⁴; h = 0.04 and σ = h.

SLIDE 69

Preliminary results

f = 1, g = 0.2; stopping criterion: ‖u_{k+1} − u_k‖ ≤ 10⁻⁴; h = 0.04 and σ = h.

SLIDE 70

Preliminary results

f = 1, g = 0.2; stopping criterion: ‖u_{k+1} − u_k‖ ≤ 10⁻⁴; h = 0.04 and σ = 0.07.

SLIDE 71

Numerical Experiments for Bingham flow in a pipe

Since convergence is achieved with no restrictions on the parameter σ, the results can be improved: f = 1, g = 0.2; stopping criterion: ‖u_{k+1} − u_k‖ ≤ 10⁻⁴; h = 0.04 and σ = 0.07.

SLIDE 72

Outlook and perspectives

Advantages:

◮ No regularization or change of the functional.
◮ The red-black ordering suggests a strategy for parallelization.

Ongoing work and perspectives:

◮ Design the best preconditioner, tailored for Bingham.
◮ Use the solution as a warm start for another method, e.g., a multigrid technique.
◮ Choice of the parameter σ.

SLIDE 73

THANK YOU