Shape optimization for interface identification in nonlocal models
Volker Schulz and Christian Vollmann
www.alop.uni-trier.de
Why nonlocal operators?

Because of a wealth of application fields:
- fractional diffusion (Brockmann et al. 2008; D’Elia and Gunzburger 2013; Harbir 2015; ...)
- peridynamics (Silling 2000; Du and Zhou 2010/11; D’Elia et al. 2016; ...)
- image processing (Gilboa and Osher 2009; Lou et al. 2010; Peyré et al. 2008; ...)
- cardiology (Cusimano et al. 2015; ...)
- machine learning (Rosasco et al. 2010; ...)
- finance (Levendorskii et al. 2004; ...)
- growth models in economics (Augeraud-Véron et al. 2019 [survey]; Frerick/Müller-Fürstenberger/Sachs/Somorowsky 2019; ...)

Because of interesting structures:
- full (dense) matrices lacking sparsity
- nevertheless, on structured grids, tensor-based methods exist for fractional Laplacians, limiting the overall effort to O(n log n) – also in the optimal control case (Heidel/Khoromskaia/Khoromskij/Schulz 2018)
- general nonlocal operators on structured grids provide Toeplitz structures leading to high efficiency (Vollmann/Schulz 2019)

→ The numerical solution of nonlocal shape optimization problems is a new challenge and has to be done on unstructured meshes.
[Figure: a point x ∈ Ω interacting with points y, with kernel values γ(x, y) and γ(y, x); the domain Ω, its interaction domain ΩI, and the interaction set S(x).]

Setting:
- Ω bounded domain in R^d, with interaction domain ΩI (the nonlocal analogue of a boundary)
- u(x, t) density of particles
- γ(x, y) probability/tendency that a particle moves from x to y
- f(x, t) external source density

Conservation law: Let x ∈ Ω, t ≥ 0 and consider a time horizon h > 0. Then

u(x, t + h) = u(x, t) + ∫_t^{t+h} ( ∫_{R^d} u(y, s)γ(y, x) dy − ∫_{R^d} u(x, s)γ(x, y) dy ) ds + ∫_t^{t+h} f(x, s) ds

⇔ (1/h)(u(x, t + h) − u(x, t)) = (1/h) ∫_t^{t+h} ( ∫_{R^d} (u(y, s)γ(y, x) − u(x, s)γ(x, y)) dy + f(x, s) ) ds,

and letting h → 0 yields

∂u/∂t (x, t) = ∫_{R^d} (u(y, t)γ(y, x) − u(x, t)γ(x, y)) dy + f(x, t).

For a truncated kernel γ(x, y) = φ(x, y) χ_{S(x)}(y) with interaction set S(x), the steady state (∂u/∂t = 0) becomes

0 = ∫_{S(x)} (u(y, t)φ(y, x) − u(x, t)φ(x, y)) dy + f(x, t) =: Lu(x) + f(x, t).

This leads to the steady-state nonlocal Dirichlet problem

−Lu = f in Ω,
u = g in ΩI,

and, with an interface-dependent operator and right-hand side,

−LΓu = fΓ in Ω,
u = g in ΩI.
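A minimal 1D sketch may make the steady-state problem concrete. This is an illustration only, not the talk's 2D interface setting: it assumes a constant truncated kernel φ = cδ on |x − y| ≤ δ with the scaling cδ = 3/δ³ (so L tends to the Laplacian as δ → 0), source f = 1 in Ω = (0, 1), and volume data g = 0 on ΩI = [−δ, 0] ∪ [1, 1 + δ].

```python
import numpy as np

# Minimal 1D sketch of the steady-state nonlocal Dirichlet problem.
# Assumptions (illustration only): constant truncated kernel phi = c_delta on
# |x - y| <= delta, c_delta = 3/delta**3 so that L -> d^2/dx^2 as delta -> 0,
# f = 1 in Omega = (0, 1), g = 0 on Omega_I = [-delta, 0] u [1, 1 + delta].
delta, h = 0.1, 0.01
c = 3.0 / delta**3
x = np.arange(-delta, 1 + delta + h / 2, h)
n = len(x)
interior = (x > 1e-9) & (x < 1 - 1e-9)        # nodes in Omega; the rest carry g

A = np.zeros((n, n))
for i in range(n):
    if interior[i]:
        nbr = np.abs(x - x[i]) <= delta + 1e-12   # interaction set S(x_i)
        # midpoint quadrature: (Lu)(x_i) ~ c*h * sum_{j in nbr} (u_j - u_i)
        A[i, nbr] -= c * h
        A[i, i] += c * h * nbr.sum()          # row now represents -L
    else:
        A[i, i] = 1.0                         # enforce u = g = 0 on Omega_I

f = np.where(interior, 1.0, 0.0)
u = np.linalg.solve(A, f)                     # close to the local solution x(1-x)/2
```

In the local limit −u'' = 1 with homogeneous Dirichlet data, the solution is x(1 − x)/2; for δ = 0.1 the nonlocal solution stays close to it away from the boundary layer.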
Overview
1. The optimization problem
2. Shape derivatives
3. Numerical results
1. Problem formulation

min_{(u,Γ)} (1/2)‖u − ū‖²_{L²(Ω)} + R(Γ)
s.t. LΓu − μ∆u = fΓ

on Ω = (0, 1)² = Ω1 ∪̇ Γ ∪̇ Ω2.

Well-studied local example (e.g. Schulz, Siebenborn, Welker; SIOPT 2016):
LΓu(x) = −div(kΓ∇u(x)), with
kΓ(x) = k1(x) for x ∈ Ω1, k2(x) for x ∈ Ω2,
fΓ(x) = f1(x) for x ∈ Ω1, f2(x) for x ∈ Ω2.
- shape derivative used as force for elastic mesh deformation
- in line with shape Hessian estimation on the boundary (Eppler/Harbrecht/Schneider 2007)

Here:
LΓu(x) := ∫_{Ω∪ΩI} (u(x)γΓ(x, y) − u(y)γΓ(y, x)) dy, with
γΓ(x, y) = γ1(x, y) for x ∈ Ω1, γ2(x, y) for x ∈ Ω2 ∪ ΩI,
fΓ(x) = f1(x) for x ∈ Ω1, f2(x) for x ∈ Ω2,
and μ > 0 a small perturbation parameter to guarantee regularity.

Shape definition → here: a simple closed, smooth curve Γ ⊂ Ω.

Derivative concept → here: the Eulerian derivative. For a smooth vector field V: Ω → R², let Ft := Id + tV (perturbation of identity); then we define

DJ(Γ)[V] := lim_{t↓0} (1/t)(J(Ft(Γ)) − J(Γ)).
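The Eulerian derivative can be illustrated with a difference quotient for a functional that is easy to evaluate exactly (an illustration, not part of the talk): for the area functional J(Ω) = |Ω| and the field V(x) = x, the perturbation Ft = Id + tV scales areas by (1 + t)², so DJ(Γ)[V] = 2|Ω|.

```python
import numpy as np

# Difference-quotient illustration of the Eulerian derivative: for the area
# functional J(Omega) = |Omega| and V(x) = x, F_t = Id + tV scales areas by
# (1 + t)^2, hence DJ(Gamma)[V] = 2|Omega|.
theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
poly = np.column_stack([np.cos(theta), np.sin(theta)])   # polygonal unit circle

def area(p):
    # shoelace formula for a closed polygon
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def J(t):
    return area(poly + t * poly)          # J(F_t(Gamma)) with V(x) = x

t = 1e-6
dq = (J(t) - J(0.0)) / t                  # one-sided difference quotient
exact = 2 * J(0.0)                        # analytic Eulerian derivative
```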
1. Problem formulation

Optimal control for nonlocal operators:
- M. D’Elia and M. Gunzburger: Optimal distributed control of nonlocal steady diffusion problems. SICON (2014).
- M. D’Elia and M. Gunzburger: Identification of the diffusion parameter in nonlocal steady diffusion problems. Appl. Math. Optim. (2016).
- M. D’Elia, C. Glusa, and E. Otárola: A priori error estimates for the optimal control of the integral fractional Laplacian. SICON (2019).
- E. Otárola and A.J. Salgado: Sparse optimal control for fractional diffusion. CMAM (2018).
- ...

→ The control variable is typically modeled as an element of a suitable function space.

Shape optimization for nonlocal operators:
- J. Fernández Bonder and J.F. Spedaletti: Some nonlocal optimal design problems. Jour. of Math. Anal. and Appl. (2018).
- E. Parini and A. Salort: Compactness and dichotomy in nonlocal shape optimization. arXiv:1806.01165 (2018).
- Y. Sire, J.L. Vázquez, and B. Volzone: Symmetrization for fractional elliptic and parabolic equations and an isoperimetric application. Chin. Ann. of Math., Ser. B (2017).
- J. Fernández Bonder, A. Ritorto, and A. Salort: A class of shape optimization problems for some nonlocal operators. Advances in Calculus of Variations (2017).
- A.-L. Dalibard and D. Gérard-Varet: On shape optimization problems involving the fractional Laplacian. ESAIM: COCV (2013).
- A. Burchard, R. Choksi, and I. Topaloglu: Nonlocal shape optimization via interactions of attractive and repulsive potentials. arXiv:1512.07282 (2018).
- ...

→ All of these are of a theoretical nature and do not involve numerics.
Overview
1. The optimization problem
2. Shape derivatives
3. Numerical results
2. Shape derivatives

Optimization approach → here: formal Lagrangian, analogous to (Laurain/Sturm 2016).

Reduced problem: min_Γ Jred(Γ) := J(u(Γ), Γ) s.t. c(u, Γ) = 0, where

J(u, Γ) := (1/2)‖u − ū‖²_{L²(Ω)},
c(u, Γ) := AΓ(u, ·) + μ(∇u, ∇·)_{L²} − ℓΓ(·).

Then DJred(Γ) = J_u u′_Γ + J_Γ, but the shape sensitivity u′_Γ is hard to determine. Therefore we consider the Lagrange functional

L(u, Γ, v) := (1/2)‖u − ū‖²_{L²(Ω)} + AΓ(u, v) + μ(∇u, ∇v)_{L²(Ω)} − ℓΓ(v).

We aim to find a saddle point (u, Γ, v) such that

0 ≡ D_v L(u, Γ, v) = AΓ(u, ·) + μ(∇u, ∇·)_{L²} − ℓΓ(·)   (state)
0 ≡ D_u L(u, Γ, v) = (u − ū, ·)_{L²} + A*Γ(v, ·) + μ(∇v, ∇·)_{L²}   (adjoint)
0 ≡ DΓ L(u, Γ, v) = DΓAΓ(u, v) − DΓℓΓ(v)   (design)

→ The shape derivatives of (1/2)‖u − ū‖²_{L²(Ω)} and μ(∇u, ∇v)_{L²(Ω)} are zero.
→ Crucial here: the shape derivatives of AΓ(u, v) and ℓΓ(v).
2. Shape derivatives

Goal: DJred(Γ)[V] = DΓL(u, Γ, v)[V] = DΓAΓ(u, v)[V] − DΓℓΓ(v)[V]

Splitting the integrals along the interface, the bilinear form becomes

AΓ(u, v) = (LΓu, v)_{L²(Ω)}
= ∫_Ω v(x) ∫_{Ω∪ΩI} (u(x)γΓ(x, y) − u(y)γΓ(y, x)) dy dx
= Σ_{i=1,2} Σ_{j=1,2} ∫_{Ωi} ∫_{Ωj} v(x)(u(x)γi(x, y) − u(y)γj(y, x)) dy dx
= Σ_{i=1,2} Σ_{j=1,2} ( ∫_{Ωi} ∫_{Ωj} v(x)u(x)γi(x, y) dy dx − ∫_{Ωi} ∫_{Ωj} v(x)u(y)γj(y, x) dy dx )
(interchanging x and y, and then i and j, in the second term)
= Σ_{i=1,2} Σ_{j=1,2} ( ∫_{Ωi} ∫_{Ωj} v(x)u(x)γi(x, y) dy dx − ∫_{Ωi} ∫_{Ωj} v(y)u(x)γi(x, y) dy dx )
= Σ_{i=1,2} Σ_{j=1,2} ∫_{Ωi} ∫_{Ωj} u(x)(v(x) − v(y)) γi(x, y) dy dx.

With the truncated kernels γi(x, y) = φi(x, y) χ_{Si(x)}(y) this reads

AΓ(u, v) = Σ_{i=1,2} ∫_{Ωi} ∫_{Si(x)} u(x)(v(x) − v(y)) φi(x, y) dy dx =: Σ_{i=1,2} ∫_{Ωi} ∫_{Si(x)} ψi(x, y) dy dx.

Hence, via the perturbation of identity Ft = Id + tV and the transport theorem,

DΓAΓ(u, v)[V] = d/dt|_{t=0+} A_{Ft(Γ)}(u, v)
= d/dt|_{t=0+} Σ_{i=1,2} ∫_{Ft(Ωi)} ∫_{Si(x)} ψi(x, y) dy dx
= Σ_{i=1,2} ∫_{Ωi} ∫_{Si(x)} (∇x ψi(x, y) + ∇y ψi(x, y))ᵀ V + ψi(x, y) div V dy dx.
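The key variable-swap step in the rearrangement above can be checked discretely (an illustrative check, not the paper's finite element assembly): for any kernel matrix G, with G[i, j] playing the role of γ(x_i, x_j), renaming the summation variables turns G_ji into G_ij, so the two double sums agree for arbitrary, non-symmetric G.

```python
import numpy as np

# Discrete sanity check of the rearrangement
#   sum_ij v_i (u_i G_ij - u_j G_ji)  ==  sum_ij u_i (v_i - v_j) G_ij,
# which mirrors swapping the integration variables x and y in the bilinear
# form. No symmetry of the kernel is assumed.
rng = np.random.default_rng(0)
n = 50
u = rng.standard_normal(n)
v = rng.standard_normal(n)
G = rng.random((n, n))                    # random stand-in for gamma(x_i, x_j)

lhs = np.sum(v[:, None] * (u[:, None] * G - u[None, :] * G.T))
rhs = np.sum(u[:, None] * (v[:, None] - v[None, :]) * G)
```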
2. Shape derivatives

Reduced problem: min_Γ Jred(Γ) := J(u(Γ), Γ)

Theorem (Vollmann/Schulz 2019). Let the partial kernel functions φ1, φ2 be differentiable and let the interaction sets be translation invariant, so that Si(x) = x + Si(0). Further, let u = u(Γ) and v = v(Γ) solve the state and adjoint equation, respectively. Then

DJred(Γ)[V] = − ∫_Ω (∇fΓ)ᵀV v dx − ∫_Ω fΓ v div V dx − (u − ū, (∇u)ᵀV)_{L²(Ω)}
+ μ ( ∫_Ω (∇u)ᵀ∇v div V dx − ∫_Ω (∇u)ᵀ(∇V + (∇V)ᵀ)∇v dx )
+ Σ_{i=1,2} ∫_{Ωi} ∫_{Si(x)} u(x)(v(x) − v(y)) φi(x, y) div V(x) dy dx
+ Σ_{i=1,2} ∫_{Ωi} ∫_{Si(x)} u(x)(v(x) − v(y)) (∇x φi(x, y) + ∇y φi(x, y))ᵀ V(x) dy dx.
Overview
1. The optimization problem
2. Shape derivatives
3. Numerical results
Reduced problem: min_Γ Jred(Γ) := J(u(Γ), Γ)

Shape optimization algorithm:
1: Initialize ū, γΓ, fΓ, Γ0
2: while ‖DJred(Γk)‖ > tol do
3:   (1) Assemble stiffness matrices and solve the state and adjoint equations
4:       → u(Γk), v(Γk)
5:   (2) Compute the mesh deformation (“gradient”):
6:       Assemble the shape derivative DJred(Γk)[V]
7:       Assemble the linear elasticity form aΓk and solve the deformation equation
8:       aΓk(Uk, V) = DJred(Γk)[V] for all V
9:       → Uk
10:  (3) Backtracking line search (with parameters α = 1, τ, c ∈ (0, 1)):
11:      while Jred(Γk − αUk) ≥ c·Jred(Γk) do
12:          α = τα
13:      end
14:      → αk
15:  (4) Update mesh:
16:      Ωk+1 = Ωk − αkUk(Ωk) = {x − αkUk(x) : x ∈ Ωk}
17: end
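The loop above can be sketched in a few lines. This is a schematic only: the objective j_red is a hypothetical quadratic stand-in for Jred, and `gradient` stands in for the deformation field Uk, which in the actual algorithm comes from solving the linear elasticity equation with the shape derivative as right-hand side.

```python
import numpy as np

# Schematic gradient loop with backtracking line search, mirroring steps
# (1)-(4) of the algorithm. j_red and gradient are hypothetical stand-ins
# for J_red and the elasticity-based deformation field U_k.
def j_red(g):
    return 0.5 * np.sum((g - 1.0) ** 2)

def gradient(g):
    return g - 1.0                       # stand-in for U_k

g = np.zeros(10)                         # stand-in for the discretized shape
tol, tau, c = 1e-8, 0.5, 0.99
while np.linalg.norm(gradient(g)) > tol:
    U = gradient(g)
    alpha = 1.0                          # step (3): backtracking line search
    while j_red(g - alpha * U) >= c * j_red(g):
        alpha *= tau
    g = g - alpha * U                    # step (4): Gamma_{k+1} = Gamma_k - alpha_k U_k
```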
3. Numerical results

Setup: γΓ(x, y) := (φ1(x, y) χΩ1(x) + φ2(x, y) χΩ2(x)) χ_{Bδ,∞}(y), where

φ1(x, y) = (1/1000) cδ,
φ2(x, y) = 100 cδ (1 − ‖y − x‖∞/δ)²,

and fΓ(x) := 100 χΩ1(x) + χ_{Ω\Ω1}(x).
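The kernels from this setup can be written out directly in code. The scaling constant cδ is left as a parameter, since its normalization is not specified on the slide, and the indicator functions χ are realized as if/else branches.

```python
import numpy as np

# Kernels from the numerical setup (c_delta left as a parameter; its
# normalization is not specified here).
delta, c_delta = 0.1, 1.0

def phi1(x, y):
    return c_delta / 1000.0              # constant kernel on Omega_1

def phi2(x, y):
    r = np.max(np.abs(y - x))            # ||y - x||_inf
    return 100.0 * c_delta * (1.0 - r / delta) ** 2

def gamma_kernel(x, y, x_in_omega1):
    # truncation chi_{B_{delta,inf}}(y): vanish outside the norm ball
    if np.max(np.abs(y - x)) > delta:
        return 0.0
    return phi1(x, y) if x_in_omega1 else phi2(x, y)
```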
3. Numerical results I, II, III [figure slides]
Summary (arXiv:1909.08884)

min_{(u,Γ)} (1/2)‖u − ū‖²_{L²(Ω)} + R(Γ)
s.t. LΓu − μ∆u = fΓ

Done:
- Definition of an interface-dependent nonlocal coupled system.
- Shape derivative of the corresponding nonlocal bilinear form.
- Implementation of a shape optimization algorithm.

To do:
- More analytic investigations of the shape derivative.
- Local limit: do we recover the corresponding local shape derivatives?
- Analyze different inner products for determining the gradient.

Thank you for your attention!
—————————————————————————————
Volker Schulz (volker.schulz@uni-trier.de)
Christian Vollmann (vollmann@uni-trier.de)