Shape optimization for interface identification in nonlocal models - PowerPoint PPT Presentation

SLIDE 1

Shape optimization for interface identification in nonlocal models

Volker Schulz and Christian Vollmann

www.alop.uni-trier.de

SLIDE 2

Why nonlocal operators?

Because of a wealth of application fields:

  • fractional diffusion (Brockmann et al. 2008, D’Elia and Gunzburger 2013, Harbir 2015, ...)
  • peridynamics (Silling 2000, Du and Zhou 2010/11, D’Elia et al. 2016, ...)
  • image processing (Gilboa and Osher 2009, Lou et al. 2010, Peyre et al. 2008, ...)
  • cardiology (Cusimano et al. 2015, ...)
  • machine learning (Rosasco et al. 2010, ...)
  • finance (Levendorskii et al. 2004, ...)
  • growth models in economics (Augeraud-Veron et al. 2019 [survey], Frerick/Müller-Fürstenberger/Sachs/Somorowsky 2019, ...)

SLIDE 3

Why nonlocal operators?

Because of interesting structures:

  • full matrices lacking sparsity
  • nevertheless, on structured grids, tensor-based methods exist for fractional Laplacians, limiting the overall effort to O(n log n) – also in the optimal control case (Heidel/Khoromskaia/Khoromskij/Schulz 2018)
  • general nonlocal operators on structured grids provide Toeplitz structures leading to high efficiency (Vollmann/Schulz 2019)

→ The numerical solution of nonlocal shape optimization problems is a new challenge and has to be done on unstructured meshes.
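The Toeplitz efficiency claim rests on the standard circulant-embedding trick: a Toeplitz matrix–vector product costs O(n log n) via the FFT. A minimal sketch (the function name and setup are illustrative, not taken from the talk):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(n log n) product T @ x for a Toeplitz matrix T with first
    column c and first row r (r[0] == c[0]): embed T into a 2n x 2n
    circulant matrix and multiply via the FFT."""
    n = len(x)
    # First column of the circulant embedding: [c_0..c_{n-1}, 0, r_{n-1}, ..., r_1].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))
    return y[:n].real
```

Validating against the dense product `T @ x` for a small random Toeplitz matrix confirms the embedding.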

SLIDE 4

[Figure: bounded domain Ω ⊂ R^d with complement Ω^c; a point x ∈ Ω interacting with points y via the kernel values γ(x, y) and γ(y, x)]

Ω          bounded domain in R^d
u(x, t)    density of particles
γ(x, y)    probability/tendency that a particle moves from x to y
f(x, t)    external source density

Conservation law: Let x ∈ Ω, t ≥ 0 and consider a time horizon h > 0. Then

u(x, t + h) = u(x, t) + ∫_t^{t+h} [ ∫_{R^d} u(y, s)γ(y, x) dy − ∫_{R^d} u(x, s)γ(x, y) dy ] ds + ∫_t^{t+h} f(x, s) ds


SLIDE 13

Dividing by h:

(1/h) [ u(x, t + h) − u(x, t) ] = (1/h) ∫_t^{t+h} [ ∫_{R^d} (u(y, s)γ(y, x) − u(x, s)γ(x, y)) dy + f(x, s) ] ds

and letting h → 0:

∂u/∂t (x, t) = ∫_{R^d} (u(y, t)γ(y, x) − u(x, t)γ(x, y)) dy + f(x, t)
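In discrete form this evolution equation is a master equation, and the conservation property behind the derivation can be checked directly: every loss term u(x)γ(x, y) reappears as a gain at y. A small sketch with random stand-in rates and forward-Euler time stepping:

```python
import numpy as np

# Discrete analogue of the evolution equation above: n states, rate
# matrix gamma (gamma[x, y] = rate from x to y, a random stand-in),
#   du/dt (x) = Σ_y ( u(y) γ(y, x) − u(x) γ(x, y) ).
# With f ≡ 0 the total mass Σ_x u(x) is conserved.

rng = np.random.default_rng(0)
n = 30
gamma = rng.random((n, n))
np.fill_diagonal(gamma, 0.0)

u = rng.random(n)
mass0 = u.sum()
dt = 0.001
for _ in range(1000):
    # gain:  Σ_y u(y) γ(y, x)   loss:  u(x) Σ_y γ(x, y)
    u = u + dt * (gamma.T @ u - gamma.sum(axis=1) * u)
```

After 1000 steps the total mass agrees with the initial mass up to rounding.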


SLIDE 17

In the steady state, ∂u/∂t ≡ 0, so

0 = ∫_{R^d} (u(y, t)γ(y, x) − u(x, t)γ(x, y)) dy + f(x, t)

SLIDE 18

[Figure: domain Ω with interaction domain Ω_I; a point x with its interaction neighborhood S(x) and kernel φ(x, y)]

Localize the kernel: write γ(x, y) =: φ(x, y) χ_{S(x)}(y) with an interaction neighborhood S(x) and interaction domain Ω_I. The steady state then reads

0 = ∫_{S(x)} (u(y, t)φ(y, x) − u(x, t)φ(x, y)) dy + f(x, t).

Defining Lu(x) := ∫_{S(x)} (u(y)φ(y, x) − u(x)φ(x, y)) dy, this is 0 = Lu(x) + f(x, t).

SLIDE 21

Steady-state nonlocal Dirichlet problem:

−Lu = f   in Ω
  u = g   in Ω_I
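A minimal 1D sketch of such a volume-constrained nonlocal Dirichlet problem, with an illustrative constant symmetric kernel and a plain Riemann-sum discretization (not the authors' finite element code); the scaling 3/δ³ is the standard one for which −Lu recovers −u″ as δ → 0:

```python
import numpy as np

# Solve -Lu = f on Ω = (0,1) with u = g on Ω_I = [-δ,0] ∪ [1,1+δ],
# where Lu(x) = ∫_{|y-x|≤δ} (u(y) - u(x)) γ dy with constant γ = 3/δ³.

delta, h = 0.1, 0.01
x = np.arange(-delta, 1 + delta + h / 2, h)     # grid including Ω_I
interior = (x > 1e-12) & (x < 1 - 1e-12)        # Ω = (0, 1)
gamma = 3.0 / delta**3

n = len(x)
A = np.zeros((n, n))
for i in np.where(interior)[0]:
    for j in range(n):
        if i != j and abs(x[i] - x[j]) <= delta + 1e-12:
            A[i, i] += gamma * h                # + u(x) γ  term of -Lu
            A[i, j] -= gamma * h                # - u(y) γ  term of -Lu
for i in np.where(~interior)[0]:
    A[i, i] = 1.0                               # volume constraint u = g on Ω_I

rhs = np.where(interior, 1.0, 0.0)              # f ≡ 1 on Ω, g ≡ 0 on Ω_I
u = np.linalg.solve(A, rhs)
```

The computed `u` stays close to the local solution x(1 − x)/2 of −u″ = 1 with zero boundary values, whose maximum is 0.125.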

SLIDE 23

[Figure: domain Ω with interface Γ and interaction domain Ω_I]

Interface-dependent problem:

−L_Γ u = f_Γ   in Ω
     u = g     in Ω_I

SLIDE 24

Overview

1 The optimization problem
2 Shape derivatives
3 Numerical results

SLIDE 26

1. Problem formulation

min_{(u,Γ)}  (1/2) ‖u − ū‖²_{L²(Ω)} + R(Γ)
s.t.  L_Γ u = f_Γ

where Ω = (0, 1)² = Ω₁ ∪̇ Γ ∪̇ Ω₂.

SLIDE 27

1. Problem formulation

Well-studied local example (e.g. Schulz, Siebenborn, Welker; SIOPT 2016):

L_Γ u(x) = −div(k_Γ ∇u(x)),   k_Γ(x) = { k₁(x) : x ∈ Ω₁;  k₂(x) : x ∈ Ω₂ },   f_Γ(x) = { f₁(x) : x ∈ Ω₁;  f₂(x) : x ∈ Ω₂ }

  • shape derivative used as force for elastic mesh deformation
  • in line with shape Hessian estimation on the boundary (Eppler/Harbrecht/Schneider 2007)

SLIDE 28

1. Problem formulation

[Figure: Ω₁ with kernel γ₁, Ω₂ with kernel γ₂, interface Γ, interaction domain Ω_I]

Here:

L_Γ u(x) := ∫_{Ω∪Ω_I} (u(x)γ_Γ(x, y) − u(y)γ_Γ(y, x)) dy,

γ_Γ(x, y) = { γ₁(x, y) : x ∈ Ω₁;  γ₂(x, y) : x ∈ Ω₂ ∪ Ω_I },   f_Γ(x) = { f₁(x) : x ∈ Ω₁;  f₂(x) : x ∈ Ω₂ }

Regularized state equation: L_Γ u − μΔu = f_Γ, where μ > 0 is a small perturbation parameter to guarantee regularity.

SLIDE 30

1. Problem formulation

  • Shape definition → Here: simple closed, smooth curve Γ ⊂ Ω.

  • Derivative concept → Here: Eulerian derivative. For a smooth vector field V : Ω → R², let F_t := Id + tV (perturbation of identity); then we define

DJ(Γ)[V] := lim_{t↘0} (1/t) [ J(F_t(Γ)) − J(Γ) ].
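The Eulerian derivative can be sanity-checked by finite differences on a simple shape functional. A toy sketch (not from the talk) for J(Γ) = enclosed area: for V(x) = x the transport formula gives DJ(Γ)[V] = ∫_Ω div V dx = 2|Ω|.

```python
import numpy as np

# Finite-difference check of the Eulerian derivative for J(Γ) = area
# enclosed by Γ, with perturbation of identity F_t = Id + tV, V(x) = x.

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
gamma = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit circle

def area(p):
    """Shoelace formula for a closed polygon."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

V = gamma.copy()                 # V(x) = x evaluated on the curve
t = 1e-6
fd = (area(gamma + t * V) - area(gamma)) / t   # (J(F_t(Γ)) - J(Γ)) / t
exact = 2 * area(gamma)                        # ∫_Ω div V dx = 2 |Ω|
```

The one-sided difference quotient agrees with the analytic value to roughly the step size t.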

SLIDE 32

1. Problem formulation

Optimal control for nonlocal operators

  • M. D’Elia and M. Gunzburger: Optimal distributed control of nonlocal steady diffusion problems. SICON (2014)
  • M. D’Elia and M. Gunzburger: Identification of the diffusion parameter in nonlocal steady diffusion problems. Appl. Math. Optim. (2016)
  • M. D’Elia, C. Glusa, and E. Otárola: A priori error estimates for the optimal control of the integral fractional Laplacian. SICON (2019)
  • E. Otárola and A.J. Salgado: Sparse optimal control for fractional diffusion. CMAM (2018)
  • ...

→ The control variable is typically modeled as an element of a suitable function space.

SLIDE 33

Shape optimization for nonlocal operators

  • J.F. Bonder and J.F. Spedaletti: Some nonlocal optimal design problems. Jour. of Math. Anal. and Appl. (2018)
  • E. Parini and A. Salort: Compactness and dichotomy in nonlocal shape optimization. arXiv:1806.01165 (2018)
  • Y. Sire, J.L. Vázquez, and B. Volzone: Symmetrization for fractional elliptic and parabolic equations and an isoperimetric application. Chin. Ann. of Math., Ser. B (2017)
  • J. Fernández Bonder, A. Ritorto, and A. Salort: A class of shape optimization problems for some nonlocal operators. Advances in Calculus of Variations (2017)
  • A.-L. Dalibard and D. Gérard-Varet: On shape optimization problems involving the fractional Laplacian. ESAIM: COCV (2013)
  • A. Burchard, R. Choksi, and I. Topaloglu: Nonlocal shape optimization via interactions of attractive and repulsive potentials. arXiv:1512.07282 (2018)
  • ...

→ All of these are of a theoretical nature and do not involve numerics.


SLIDE 35

2. Shape Derivatives

  • Optimization approach → Here: formal Lagrangian, analogous to (Laurain/Sturm 2016).

Reduced problem:  min_Γ J_red(Γ) := J(u(Γ), Γ)  s.t. c(u, Γ) = 0,

where

J(u, Γ) := (1/2) ‖u − ū‖²_{L²(Ω)}
c(u, Γ) := A_Γ(u, ·) + μ(∇u, ∇·)_{L²} − ℓ_Γ(·)

Then DJ_red(Γ) = J_u u′_Γ + J_Γ, but u′_Γ is hard to determine. Therefore we consider the Lagrange functional

L(u, Γ, v) := (1/2) ‖u − ū‖²_{L²(Ω)} + A_Γ(u, v) + μ(∇u, ∇v)_{L²(Ω)} − ℓ_Γ(v).

We aim to find a saddle point (u, Γ, v) such that

0 ≡ D_v L(u, Γ, v) = A_Γ(u, ·) + μ(∇u, ∇·)_{L²} − ℓ_Γ(·)                (state)
0 ≡ D_u L(u, Γ, v) = (u − ū, ·)_{L²} + A*_Γ(v, ·) + μ(∇v, ∇·)_{L²}      (adjoint)
0 ≡ D_Γ L(u, Γ, v) = D_Γ A_Γ(u, v) − D_Γ ℓ_Γ(v)                         (design)

→ Shape derivatives of (1/2)‖u − ū‖²_{L²(Ω)} and μ(∇u, ∇v)_{L²(Ω)} are zero.
→ Crucial here: shape derivatives of A_Γ(u, v) and ℓ_Γ(v).
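The saddle-point system encodes the usual adjoint recipe for the reduced gradient. A sketch on a toy scalar-parameter problem (not shape-dependent; all matrices are illustrative stand-ins): solve the state A(p)u = f, solve the adjoint Aᵀv = −(u − ū), and assemble dJ/dp = vᵀ(dA/dp)u, validated against finite differences.

```python
import numpy as np

# Toy reduced-gradient computation via an adjoint solve:
# J(u) = ½‖u − ū‖²  subject to  A(p) u = f,  A(p) = K + p·M.

rng = np.random.default_rng(0)
n = 20
K = np.eye(n) * 3 + rng.standard_normal((n, n)) * 0.1
M = rng.standard_normal((n, n)) * 0.1
f = rng.standard_normal(n)
u_bar = rng.standard_normal(n)

def J_red(p):
    u = np.linalg.solve(K + p * M, f)
    return 0.5 * np.sum((u - u_bar) ** 2)

def grad(p):
    A = K + p * M
    u = np.linalg.solve(A, f)                 # state
    v = np.linalg.solve(A.T, -(u - u_bar))    # adjoint
    return v @ (M @ u)                        # dJ/dp = vᵀ (dA/dp) u

p0, eps = 0.5, 1e-6
fd = (J_red(p0 + eps) - J_red(p0 - eps)) / (2 * eps)
```

The central finite difference `fd` and `grad(p0)` agree to the accuracy of the difference quotient.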

SLIDE 44

2. Shape Derivatives

Goal: DJ_red(Γ)[V] = D_Γ L(u, Γ, v)[V] = D_Γ A_Γ(u, v)[V] − D_Γ ℓ_Γ(v)[V]

Rewriting the bilinear form:

A_Γ(u, v) = (L_Γ u, v)_{L²(Ω)}
          = ∫_Ω v(x) [ ∫_{Ω∪Ω_I} (u(x)γ_Γ(x, y) − u(y)γ_Γ(y, x)) dy ] dx
          = Σ_{i=1,2} Σ_{j=1,2} ∫_{Ω_i} ∫_{Ω_j} v(x)(u(x)γ_i(x, y) − u(y)γ_j(y, x)) dy dx
          = Σ_{i,j} [ ∫_{Ω_i} ∫_{Ω_j} v(x)u(x)γ_i(x, y) dy dx − ∫_{Ω_i} ∫_{Ω_j} v(x)u(y)γ_j(y, x) dy dx ]
          = Σ_{i,j} [ ∫_{Ω_i} ∫_{Ω_j} v(x)u(x)γ_i(x, y) dy dx − ∫_{Ω_j} ∫_{Ω_i} v(y)u(x)γ_j(x, y) dy dx ]      (interchange x and y)
          = Σ_{i,j} [ ∫_{Ω_i} ∫_{Ω_j} v(x)u(x)γ_i(x, y) dy dx − ∫_{Ω_i} ∫_{Ω_j} v(y)u(x)γ_i(x, y) dy dx ]      (interchange i and j)
          = Σ_{i,j} ∫_{Ω_i} ∫_{Ω_j} u(x)(v(x) − v(y)) γ_i(x, y) dy dx
          = Σ_{i=1,2} ∫_{Ω_i} ∫_{S_i(x)} u(x)(v(x) − v(y)) φ_i(x, y) dy dx,      using γ_i(x, y) = φ_i(x, y) χ_{S_i(x)}(y).

With ψ_i(x, y) := u(x)(v(x) − v(y))φ_i(x, y), the shape derivative follows from the transport formula:

D_Γ A_Γ(u, v)[V] = d/dt |_{t=0⁺} A_{F_t(Γ)}(u, v)
                 = d/dt |_{t=0⁺} Σ_{i=1,2} ∫_{F_t(Ω_i)} ∫_{S_i(x)} ψ_i(x, y) dy dx
                 = Σ_{i=1,2} ∫_{Ω_i} ∫_{S_i(x)} (∇_x ψ_i(x, y) + ∇_y ψ_i(x, y))ᵀ V + ψ_i(x, y) div V dy dx
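The rearrangement of the bilinear form above can be verified in discrete form, where the integrals become sums over grid points. A small sketch with a random stand-in kernel matrix:

```python
import numpy as np

# Discrete check: with sums in place of integrals,
#   v · (L u)  for  Lu(x) = Σ_y ( u(x)γ(x,y) − u(y)γ(y,x) )
# equals the symmetrized double sum  Σ_{x,y} u(x)(v(x) − v(y)) γ(x,y).

rng = np.random.default_rng(1)
n = 50
gamma = rng.random((n, n))                    # arbitrary nonnegative kernel
u, v = rng.standard_normal(n), rng.standard_normal(n)

Lu = gamma.sum(axis=1) * u - gamma.T @ u      # Σ_y u(x)γ(x,y) − Σ_y u(y)γ(y,x)
lhs = v @ Lu
rhs = sum(u[i] * (v[i] - v[j]) * gamma[i, j]
          for i in range(n) for j in range(n))
```

Both expressions agree up to floating-point rounding, mirroring the interchange-of-variables argument on the slide.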

SLIDE 61

2. Shape Derivatives

Reduced problem:  min_Γ J_red(Γ) := J(u(Γ), Γ)

Theorem (Vollmann/Schulz 2019). Let the partial kernel functions φ₁, φ₂ be differentiable and let the interaction sets be translation invariant, i.e. S_i(x) = x + S_i(0). Further, let u = u(Γ) and v = v(Γ) solve the state and adjoint equation, respectively. Then

DJ_red(Γ)[V] = − ∫ (∇f_Γᵀ V) v dx − ∫ f_Γ v div V dx − (u − ū, ∇uᵀV)_{L²(Ω)}
               + μ [ ∫ ∇uᵀ∇v div V dx − ∫ ∇uᵀ(∇V + ∇Vᵀ)∇v dx ]
               + Σ_{i=1,2} ∫_{Ω_i} ∫_{S_i(x)} u(x)(v(x) − v(y)) φ_i(x, y) div V(x) dy dx
               + Σ_{i=1,2} ∫_{Ω_i} ∫_{S_i(x)} u(x)(v(x) − v(y)) (∇_x φ_i(x, y) + ∇_y φ_i(x, y))ᵀ V(x) dy dx.


SLIDE 63

Reduced problem:  min_Γ J_red(Γ) := J(u(Γ), Γ)

Shape optimization algorithm:

1   Initialize: ū, γ_Γ, f_Γ, Γ₀
2   while ‖DJ_red(Γ_k)‖ > tol do
3     (1) Assemble stiffness matrices and solve state and adjoint equation
4         → u(Γ_k), v(Γ_k)
5     (2) Compute the mesh deformation (“gradient”)
6         Assemble shape derivative DJ_red(Γ_k)[V]
7         Assemble linear elasticity a_{Γ_k} and solve the deformation equation
8         a_{Γ_k}(U_k, V) = DJ_red(Γ_k)[V] for all V
9         → U_k
10    (3) Backtracking line search (with parameters α = 1, τ, c ∈ (0, 1))
11        while J_red(Γ_k − αU_k) ≥ c · J_red(Γ_k) do
12          α = τα
13        end
14        → α_k
15    (4) Update mesh
16        Ω_{k+1} = Ω_k − α_k U_k(Ω_k) = {x − α_k U_k(x) : x ∈ Ω_k}
17  end
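The structure of steps (1)–(4) can be sketched on a toy problem where the "shape" is a polygon, J_red(Γ) = ½(area(Γ) − A*)², the elasticity solve is replaced by plain steepest descent, and the line-search constants are illustrative:

```python
import numpy as np

def area(p):
    """Shoelace area of a closed polygon with vertices p (n x 2)."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def J_red(p, target=0.3):
    return 0.5 * (area(p) - target) ** 2

def gradient(p, target=0.3):
    # analytic gradient of the shoelace area w.r.t. the vertices
    x, y = p[:, 0], p[:, 1]
    dA = 0.5 * np.stack([np.roll(y, -1) - np.roll(y, 1),
                         np.roll(x, 1) - np.roll(x, -1)], axis=1)
    return (area(p) - target) * dA

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
p = 0.5 * np.stack([np.cos(theta), np.sin(theta)], axis=1)   # Γ₀

tau, c = 0.5, 0.99
for k in range(300):
    U = gradient(p)                          # stand-in for the field U_k
    if np.linalg.norm(U) < 1e-10:            # ‖DJ_red(Γ_k)‖ ≤ tol
        break
    alpha = 1.0                              # (3) backtracking line search
    while J_red(p - alpha * U) >= c * J_red(p) and alpha > 1e-12:
        alpha *= tau
    p = p - alpha * U                        # (4) update mesh
```

The polygon shrinks until its area matches the target, illustrating the loop's control flow rather than the elasticity-based gradient of the talk.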

SLIDE 69

3. Numerical results

Kernel and data used in the experiments:

γ_Γ(x, y) := (φ₁(x, y) χ_{Ω₁}(x) + φ₂(x, y) χ_{Ω₂}(x)) χ_{B^∞_δ(x)}(y),

where

φ₁(x, y) = (1/1000) c_δ,
φ₂(x, y) = 100 c_δ (1 − ‖y − x‖_∞ / δ)²,

and f_Γ(x) := 100 χ_{Ω₁}(x) + χ_{Ω∖Ω₁}(x).
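A direct transcription of this kernel (the scaling constant c_δ is kept as a parameter, since its value is not given on the slide, and the indicator of Ω₁ uses a hypothetical circle purely for illustration):

```python
import numpy as np

def gamma_Gamma(x, y, delta=0.1, c_delta=1.0):
    """Piecewise kernel of the experiments: phi_1 on Ω₁, phi_2 on Ω₂,
    truncated to the ∞-norm ball of radius delta around x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    in_ball = np.max(np.abs(y - x)) <= delta             # χ_{B^∞_δ(x)}(y)
    in_omega1 = np.linalg.norm(x - 0.5) <= 0.25          # hypothetical Ω₁
    phi1 = c_delta / 1000.0
    phi2 = 100.0 * c_delta * (1 - np.max(np.abs(y - x)) / delta) ** 2
    return (phi1 if in_omega1 else phi2) * in_ball
```

For instance, the kernel vanishes whenever y lies outside the interaction ball of x, and is constant for x inside Ω₁.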

SLIDE 70

3. Numerical results I  [figure]

SLIDE 71

3. Numerical results II  [figure]

SLIDE 72

3. Numerical results III  [figure]

SLIDE 73

Summary (arXiv:1909.08884)

min_{(u,Γ)}  (1/2) ‖u − ū‖²_{L²(Ω)} + R(Γ)
s.t.  L_Γ u − μΔu = f_Γ

Done:

  • Definition of an interface-dependent nonlocal coupled system.
  • Shape derivative of the corresponding nonlocal bilinear form.
  • Implementation of a shape optimization algorithm.

SLIDE 74

To do:

  • More analytic investigations of the shape derivative.
  • Local limit: do we recover the corresponding local shape derivatives?
  • Analyze different inner products for determining the gradient.

SLIDE 75

Thank you for your attention!

—————————————————————————————

Volker Schulz (volker.schulz@uni-trier.de)
Christian Vollmann (vollmann@uni-trier.de)