Null Space Gradient Flows for Shape Optimization of Multiphysics Systems
Florian Feppon, Grégoire Allaire, Charles Dapogny, Julien Cortial, Felipe Bordeu
New trends in PDE constrained optimization, RICAM (Linz) – October 15th, 2019
We are interested in multiphysics systems featuring
◮ fluids: velocity-pressure $(v, p)$;
◮ thermal exchanges: temperature field $T$, convected by $v$;
◮ mechanical structures: displacement $u$, subjected to fluid-structure interaction with $v$ and thermoelasticity with $T$.
Proposed system

Figure: fluid domain $\Omega_f$ and solid domain $\Omega_s$, separated by the interface $\Gamma$, with inlet velocity $v_0$, Dirichlet boundaries $\partial\Omega_D^f$ and $\partial\Omega_D^s$, prescribed displacement $u_0$, and normal vector $n$.

◮ Incompressible Navier-Stokes equations for $(v, p)$ in $\Omega_f$:
$$-\mathrm{div}(\sigma_f(v,p)) + \rho\nabla v\, v = f_f \quad \text{in } \Omega_f$$
◮ Steady-state convection-diffusion for $T_f$ and $T_s$ in $\Omega_f$ and $\Omega_s$:
$$-\mathrm{div}(k_f\nabla T_f) + \rho\, v\cdot\nabla T_f = Q_f \quad \text{in } \Omega_f, \qquad -\mathrm{div}(k_s\nabla T_s) = Q_s \quad \text{in } \Omega_s$$
◮ Linearized thermoelasticity with fluid-structure interaction for $u$ in $\Omega_s$:
$$-\mathrm{div}(\sigma_s(u,T_s)) = f_s \quad \text{in } \Omega_s, \qquad \sigma_s(u,T_s)\cdot n = \sigma_f(v,p)\cdot n \quad \text{on } \Gamma$$
The method of Hadamard

$$\min_\Gamma\ J(\Gamma)$$

Figure: fluid and solid domains $\Omega_f$, $\Omega_s$; the interface $\Gamma$ is deformed into $\Gamma_\theta$ by a vector field $\theta$.

$$\Gamma_\theta = (I + \theta)\Gamma, \quad \text{where } \theta \in W^{1,\infty}(D,\mathbb{R}^d),\ \|\theta\|_{W^{1,\infty}(D,\mathbb{R}^d)} < 1.$$

$$J(\Gamma_\theta) = J(\Gamma) + \frac{dJ}{d\theta}(\theta) + o(\theta), \quad \text{where } \frac{|o(\theta)|}{\|\theta\|_{W^{1,\infty}(D,\mathbb{R}^d)}} \xrightarrow[\theta\to 0]{} 0.$$
Shape derivative of arbitrary functionals

Proposition
Let $J(\Gamma, u, T, v, p)$ be an arbitrary functional with continuous partial derivatives, and let $u(\Gamma)$, $T(\Gamma)$, $v(\Gamma)$, $p(\Gamma)$ be the state variables above. Then, if these are smooth enough, $\Gamma \mapsto J(\Gamma, u(\Gamma), T(\Gamma), v(\Gamma), p(\Gamma))$ is shape differentiable and the derivative reads:

$$\frac{dJ}{d\theta}(\theta) = \frac{\partial J}{\partial\theta}(\theta) + \int_\Gamma \big( f_f\cdot w - \sigma_f(v,p) : \nabla w + n\cdot\sigma_f(w,q)\nabla v\cdot n + n\cdot\sigma_f(v,p)\nabla w\cdot n \big)(\theta\cdot n)\,ds$$
$$+ \int_\Gamma \Big( 2k_s\,\frac{\partial T_s}{\partial n}\frac{\partial S_s}{\partial n} + 2k_f\,\frac{\partial T_f}{\partial n}\frac{\partial S_f}{\partial n} \Big)(\theta\cdot n)\,ds$$
$$+ \int_\Gamma \big( \sigma_s(u,T_s) : \nabla r - f_s\cdot r - n\cdot Ae(r)\nabla u\cdot n - n\cdot\sigma_s(u,T_s)\nabla r\cdot n \big)(\theta\cdot n)\,ds$$

◮ $J$ is a "transported" functional: $J(\theta, \hat v, \hat p, \hat T, \hat u) := J(\Gamma_\theta,\ \hat v\circ(I+\theta)^{-1},\ \hat p\circ(I+\theta)^{-1},\ \hat T\circ(I+\theta)^{-1},\ \hat u\circ(I+\theta)^{-1})$.
◮ $\frac{\partial J}{\partial\theta}(\theta)$ is the partial derivative of $J$ with respect to the shape.
◮ The three boundary integrals are "adjoint" terms, one for each of the three physics.
◮ The adjoint variables $w$, $q$, $S_f$, $S_s$, $r$ are solved in a reversed cascade.
◮ Our goal: solve constrained shape optimization problems

$$\min_\Gamma\ J(\Gamma, v(\Gamma), p(\Gamma), T(\Gamma), u(\Gamma)) \quad \text{s.t.} \quad \begin{cases} g_i(\Gamma, v(\Gamma), p(\Gamma), T(\Gamma), u(\Gamma)) = 0, & 1 \le i \le p, \\ h_i(\Gamma, v(\Gamma), p(\Gamma), T(\Gamma), u(\Gamma)) \le 0, & 1 \le i \le q, \end{cases}$$

with arbitrary functionals $J$, $g_i$, $h_i$;
◮ if possible, no fine tuning of optimization algorithm parameters;
◮ must deal with infeasible initializations.
For the exposition of our method, let us consider

$$\min_{x\in V}\ J(x) \quad \text{s.t.} \quad g(x) = 0,\ \ h(x) \le 0,$$

with
◮ $J : V \to \mathbb{R}$, $g : V \to \mathbb{R}^p$ and $h : V \to \mathbb{R}^q$ Fréchet differentiable;
◮ $V$ a Hilbert space equipped with a scalar product $(\cdot,\cdot)_V$.
$$\min_{(x_1,x_2)\in\mathbb{R}^2}\ J(x_1,x_2) = x_1^2 + (x_2+3)^2 \quad \text{s.t.} \quad \begin{cases} h_1(x_1,x_2) = -x_1^2 + x_2 \le 0, \\ h_2(x_1,x_2) = -x_1 - x_2 - 2 \le 0. \end{cases}$$
We extend classical dynamical systems approaches to constrained optimization.

◮ For unconstrained optimization, the celebrated gradient flow: $\dot{x} = -\nabla J(x)$.
◮ For equality constrained optimization, the projected gradient flow (Tanabe (1980)):
$$\dot{x} = -(I - Dg^T(Dg\,Dg^T)^{-1}Dg)\nabla J(x)$$
(gradient flow on the constraint manifold $\{x \in V \mid g(x) = 0\}$).
◮ Yamashita (1980) then added a Gauss-Newton direction:
$$\dot{x} = -\alpha_J(I - Dg^T(Dg\,Dg^T)^{-1}Dg)\nabla J(x) - \alpha_C\,Dg^T(Dg\,Dg^T)^{-1}g(x),$$
for which $g(x(t)) = g(x(0))e^{-\alpha_C t}$, and $J(x(t))$ decreases if $g(x(t)) = 0$.
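Yamashita's flow is easy to experiment with numerically. Below is a minimal sketch (not the authors' implementation; the objective, the constraint $g(x) = x_1^2 + x_2^2 - 1$, the step size, and all names are illustrative) that integrates the flow with a forward Euler scheme from an infeasible starting point:

```python
import numpy as np

# Minimize J(x) = ||x - target||^2 subject to g(x) = x1^2 + x2^2 - 1 = 0,
# following Yamashita's flow
#   x' = -aJ*(I - Dg^T (Dg Dg^T)^{-1} Dg) grad J - aC * Dg^T (Dg Dg^T)^{-1} g(x).
target = np.array([2.0, 0.0])

def grad_J(x):
    return 2.0 * (x - target)

def g(x):
    return np.array([x @ x - 1.0])

def Dg(x):
    return 2.0 * x.reshape(1, 2)         # Jacobian of g, shape (1, 2)

def yamashita_step(x, aJ=1.0, aC=1.0, dt=0.01):
    G = Dg(x)
    M = np.linalg.inv(G @ G.T)           # (Dg Dg^T)^{-1}, a 1x1 matrix here
    P = np.eye(2) - G.T @ M @ G          # projector onto the null space of Dg
    xi_J = P @ grad_J(x)                 # null space (descent) direction
    xi_C = G.T @ M @ g(x)                # Gauss-Newton (feasibility) direction
    return x - dt * (aJ * xi_J + aC * xi_C)

x = np.array([3.0, 2.0])                 # infeasible initialization
for _ in range(2000):
    x = yamashita_step(x)

print(np.round(x, 3))   # converges near the constrained minimizer (1, 0)
```

The constraint violation $g(x(t))$ decays roughly like $e^{-\alpha_C t}$ along the discrete trajectory, as predicted by the continuous analysis.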
For both equality constraints $g(x) = 0$ and inequality constraints $h(x) \le 0$, we consider:

$$\dot{x} = -\alpha_J\,\xi_J(x(t)) - \alpha_C\,\xi_C(x(t))$$

with

$$\xi_J(x) := \big(I - DC_{\tilde I(x)}^T (DC_{\tilde I(x)} DC_{\tilde I(x)}^T)^{-1} DC_{\tilde I(x)}\big)(\nabla J(x)),$$
$$\xi_C(x) := DC_{\tilde I(x)}^T (DC_{\tilde I(x)} DC_{\tilde I(x)}^T)^{-1} C_{\tilde I(x)}(x),$$

where $C_{\tilde I(x)}(x) = \big(g(x)^T \mid (h_i(x))_{i\in\tilde I(x)}^T\big)^T$ stacks the equality constraints and a subset $\tilde I(x)$ of the inequality constraints. $\tilde I(x)$ is an "optimal" subset of the active or violated constraints which can be computed by means of a dual subproblem.
Definition
Let $(\lambda^*(x), \mu^*(x)) \in \mathbb{R}^p \times \mathbb{R}^{\mathrm{Card}\,\hat I(x)}$ be the solutions of the following dual minimization problem:

$$(\lambda^*(x), \mu^*(x)) := \underset{\lambda\in\mathbb{R}^p,\ \mu\in\mathbb{R}^{\mathrm{Card}\,\hat I(x)},\ \mu\ge 0}{\arg\min}\ \|\nabla J(x) + Dg(x)^T\lambda + Dh_{\hat I(x)}(x)^T\mu\|_V,$$

where $\hat I(x)$ is the set of active or violated inequality constraints. Define $\tilde I(x)$ as the set obtained by collecting the nonzero components of $\mu^*(x)$:

$$\tilde I(x) = \{ i \in \hat I(x) \mid \mu_i^*(x) > 0 \}.$$
The dual subproblem

The best descent direction $-\xi_J(x)$ must be proportional to

$$\xi^* = \underset{\xi\in V}{\arg\min}\ DJ(x)\xi \quad \text{s.t.} \quad Dg(x)\xi = 0,\ \ Dh_{\hat I(x)}(x)\xi \le 0,\ \ \|\xi\|_V \le 1,$$

where $h_{\hat I(x)}(x) = (h_i(x))_{i\in\hat I(x)}$.

Proposition
$\xi^*(x)$ is explicitly given by:

$$\xi^*(x) = -\frac{\Pi_{C_{\tilde I(x)}}(\nabla J(x))}{\|\Pi_{C_{\tilde I(x)}}(\nabla J(x))\|_V}, \quad \text{with} \quad \Pi_{C_{\tilde I(x)}}(\nabla J(x)) = \big(I - DC_{\tilde I(x)}^T (DC_{\tilde I(x)} DC_{\tilde I(x)}^T)^{-1} DC_{\tilde I(x)}\big)(\nabla J(x)),$$

whence our definition of the flow $\dot{x} = -\alpha_J\,\xi_J(x(t)) - \alpha_C\,\xi_C(x(t))$ with $\xi_J(x) := \Pi_{C_{\tilde I(x)}}(\nabla J(x))$.
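The optimality claim of the proposition is easy to test numerically in finite dimension. The sketch below (illustrative: one equality constraint, Euclidean inner product, random data) checks that no random feasible unit direction descends faster than the normalized projected negative gradient:

```python
import numpy as np

# Check numerically, for one equality constraint g(x) = 0 in R^5, that among
# unit directions xi with Dg(x) xi = 0, the normalized projected negative
# gradient xi* = -P grad J / ||P grad J|| minimizes DJ(x) xi.
rng = np.random.default_rng(0)
n = 5
gradJ = rng.standard_normal(n)
Dg = rng.standard_normal((1, n))         # Jacobian of a single constraint

P = np.eye(n) - Dg.T @ np.linalg.inv(Dg @ Dg.T) @ Dg   # null space projector
xi_star = -P @ gradJ
xi_star /= np.linalg.norm(xi_star)

best = gradJ @ xi_star                   # most negative directional derivative
for _ in range(1000):
    xi = P @ rng.standard_normal(n)      # random feasible (tangent) direction
    xi /= np.linalg.norm(xi)
    assert gradJ @ xi >= best - 1e-12    # no feasible unit direction is steeper

print("checked on 1000 random feasible directions")
```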
We can prove:

◮ $g(x(t)) = e^{-\alpha_C t}\,g(x(0))$ and $h_{\tilde I}(x(t)) \le e^{-\alpha_C t}\,h(x(0))$: the violation of the constraints decays exponentially;
◮ $J(x(t))$ decreases as soon as the constraint violation $C_{\tilde I}(x(t))$ is sufficiently small.
For shape optimization, $\dot{x} = -\alpha_J\,\xi_J(x(t)) - \alpha_C\,\xi_C(x(t))$ works the same, with

$$\xi_J(x) := \big(I - DC_{\tilde I(x)}^T (DC_{\tilde I(x)} DC_{\tilde I(x)}^T)^{-1} DC_{\tilde I(x)}\big)(\nabla J(x)), \qquad \xi_C(x) := DC_{\tilde I(x)}^T (DC_{\tilde I(x)} DC_{\tilde I(x)}^T)^{-1} C_{\tilde I(x)}(x),$$

where the transpose $^T$ and the gradient $\nabla$ must be computed with respect to the scalar product $(\cdot,\cdot)_V$, thanks to an identification problem. This encompasses the celebrated regularization and extension step of shape optimization.
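In finite dimensions, the identification problem amounts to solving a linear system with the Gram matrix of $(\cdot,\cdot)_V$. The 1-d sketch below is illustrative only (an $H^1$-type inner product on a periodic grid, not the authors' setting); it shows how identifying the gradient against such a product smooths a rough derivative, which is precisely the effect of the regularization and extension step:

```python
import numpy as np

# Identify the gradient of J from its raw derivative dJ (an L^2 density on a
# periodic 1-d "boundary") with respect to the H^1-type inner product
#   (u, v)_V = int (u v + a^2 u' v') dx,
# i.e. solve (I - a^2 d^2/dx^2) gradJ = dJ. The result is a smoothed version
# of dJ -- the classical regularization step in shape optimization.
n, a = 200, 0.1
x = np.linspace(0.0, 1.0, n, endpoint=False)
dJ = np.sign(np.sin(2 * np.pi * x))      # rough (discontinuous) derivative

# periodic 1-d Laplacian by finite differences
h = 1.0 / n
L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h**2

gradJ = np.linalg.solve(np.eye(n) - a**2 * L, dJ)

# gradJ is smoothed but remains a descent direction: (gradJ, dJ)_{L^2} > 0.
print(float(gradJ @ dJ) > 0.0)           # prints True
```

Larger values of the length scale `a` (a hypothetical tuning parameter here) produce smoother descent directions, at the price of slower resolution of fine geometric features.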
Lift-drag minimization:

$$\min_\Gamma\ -\mathrm{Lift}(\Gamma, v(\Gamma), p(\Gamma)) \quad \text{s.t.} \quad \begin{cases} \mathrm{Drag}(\Gamma, v(\Gamma), p(\Gamma)) \le \mathrm{DRAG}_0, \\ \mathrm{Vol}(\Omega_f) = V_0, \\ X(\Omega_s) := \frac{1}{|\Omega_s|}\int_{\Omega_s} x\,dx = x_0, \end{cases}$$

with

$$\mathrm{Lift}(\Gamma, v, p) := -\int_\Gamma e_y\cdot\sigma_f(v,p)\,n\,ds, \qquad \mathrm{Drag}(\Gamma, v, p) := \int_{\Omega_f} \sigma_f(v,p) : \nabla v\,dx.$$

Figure: Optimized 2-d lift-drag flow profile.
Lift-drag minimization in 3-d:
(a) Initial shape (b) Optimized design (c) Optimized design (other 3-d views)
Lift-drag minimization in 3-d, convergence histories.
Figure: (a) objective function; (b) volume constraint; (c) center of mass constraint; (d) drag constraint, plotted versus the iteration number.
Bi-tube heat exchanger with non-penetration constraint

Figure: design domain $D$, interface $\Gamma$, and minimum distance $d_{min}$ between the two tubes.

$$\min_{\Omega_f\subset D}\ J(\Omega_f) = -\int_{\Omega_{f,hot}} \rho c_p\, v\cdot\nabla T\,dx - \int_{\Omega_{f,cold}} \rho c_p\, v\cdot\nabla T\,dx$$

$$\text{s.t.} \quad \mathrm{DP}(\Omega_f) = \int_{\partial\Omega_{f,in}} p\,ds - \int_{\partial\Omega_{f,out}} p\,ds \le \mathrm{DP}_0,$$

together with a non-penetration constraint keeping the hot and cold tubes at distance at least $d_{min}$, which prevents direct heat leakage $Q_{hot\leftrightarrow cold}(\Omega_f)$.
Bi-tube heat exchanger with non-penetration constraint
(a) Initial temperature field (b) Final temperature field. (c) Intermediate iterations
3-d compliance minimization problem with fluid-structure interaction

(a) Initial shape (b) Final design

Figure: Linear elastic deformation of the final design.
3-d compliance minimization problem with fluid-structure interaction
Figure: Intermediate iterations 0, 40, 100, 125, 175 and 300.
References
◮ Feppon, F., Allaire, G., Bordeu, F., Cortial, J., and Dapogny, C. Shape optimization of a coupled thermal fluid-structure problem in a level set mesh evolution framework. SeMA Journal (2019).
◮ Feppon, F., Allaire, G., and Dapogny, C. Null space gradient flows for constrained optimization with applications to shape optimization. HAL preprint hal-01972915 (2019).
◮ Feppon, F., Allaire, G., and Dapogny, C. A variational formulation for computing shape derivatives of geometric constraints along rays. HAL preprint hal-01879571 (2019).