Convex discretization of functionals involving the Monge-Ampère operator

Quentin Mérigot, CNRS / Université Paris-Dauphine
Joint work with J.-D. Benamou, G. Carlier and E. Oudet

1. Motivation: Gradient flows in Wasserstein space

Background: Optimal transport

◮ Wasserstein distance between µ, ν ∈ P2(R^d), where
  P2(R^d) := {µ ∈ P(R^d); ∫ ‖x‖² dµ(x) < +∞},
  P2^ac(R^d) := P2(R^d) ∩ L¹(R^d),
  Γ(µ, ν) := {π ∈ P(R^d × R^d); p1#π = µ, p2#π = ν}.

  Definition: W2²(µ, ν) := min_{π∈Γ(µ,ν)} ∫ ‖x − y‖² dπ(x, y).

  [Figure: a transport plan π on R^d × R^d, with marginals p1#π = µ and p2#π = ν.]

◮ Relation to convex functions. Def: K := finite convex functions on R^d.

  Theorem (Brenier '91): Given µ ∈ P2^ac(R^d) and ν ∈ P2(R^d), there exists φ ∈ K such that ∇φ#µ = ν, and
  W2²(µ, ν) = ∫_{R^d} ‖x − ∇φ(x)‖² dµ(x).

◮ Given any µ ∈ P2^ac(R^d), we get a "parameterization" of P2(R^d), or more precisely, an onto map K → P2(R^d), φ ↦ ∇φ#µ.
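
Once both measures are discrete, W2² is the value of a finite linear program. A minimal sketch of this Kantorovich problem (my illustration, not part of the talk; assumes NumPy/SciPy, and w2_squared is a hypothetical helper):

import numpy as np
from scipy.optimize import linprog

def w2_squared(x, mu, y, nu):
    """x: (n,d) support of mu, mu: (n,) weights; y: (m,d) support of nu, nu: (m,)."""
    n, m = len(mu), len(nu)
    # Cost c_ij = |x_i - y_j|^2, flattened row-major to match pi.ravel().
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1).ravel()
    A = np.zeros((n + m, n * m))
    for i in range(n):
        A[i, i * m:(i + 1) * m] = 1.0   # first marginal: sum_j pi_ij = mu_i
    for j in range(m):
        A[n + j, j::m] = 1.0            # second marginal: sum_i pi_ij = nu_j
    b = np.concatenate([mu, nu])
    return linprog(c, A_eq=A, b_eq=b, bounds=(0, None)).fun

# Two half-masses each moved vertically by 1: W2^2 = 1.
x = np.array([[0.0, 0.0], [1.0, 0.0]]); mu = np.array([0.5, 0.5])
y = np.array([[0.0, 1.0], [1.0, 1.0]]); nu = np.array([0.5, 0.5])
print(w2_squared(x, mu, y, nu))
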
Gas equilibrium and displacement convexity

◮ Equilibrium states of gases (ν = particle distribution):
  min_{ν∈P2(R^d)} U(ν) + E(ν), where
  U(ν) := ∫_{R^d} U(σ(x)) dx if dν = σ dH^d, +∞ if not   (internal energy),
  E(ν) := ∫_{R^d} V(x) dν(x) + ∫_{R^d×R^d} W(x − y) d[ν ⊗ ν](x, y)   (potential + interaction energy).

◮ Displacement convexity
  Definition: F is displacement-convex if for any W2-geodesic (νt) in P2^ac(R^d), the function t ↦ F(νt) is convex.

  Theorem [McCann '94]:
  r^d U(r^{-d}) convex non-increasing, U(0) = 0 ⟹ U is displacement-convex;
  V, W : R^d → R convex ⟹ E is displacement-convex.

  → strict convexity ⟹ uniqueness of the minimum.
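
As a concrete check (my computation, not on the slide), the entropy U(r) = r log r satisfies McCann's condition: r^d U(r^{-d}) = r^d · r^{-d} log(r^{-d}) = −d log r, which is convex and non-increasing on (0, ∞), with U(0) = lim_{r→0} r log r = 0.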

Convexity under generalized displacements

◮ Generalized displacement convexity
  Definition: F is convex under generalized displacement (u.g.d.) if for any µ in P2^ac(R^d), the function φ ∈ K ↦ F(∇φ#µ) is convex.

  µt = ((1 − t) id + t∇φ)#µ: geodesic for W2;
  µt = ((1 − t)∇φ0 + t∇φ1)#µ: "generalized geodesic".

  Theorem [McCann '94]:
  r^d U(r^{-d}) convex non-increasing, U(0) = 0 ⟹ U is convex u.g.d.;
  V, W : R^d → R convex ⟹ E is convex u.g.d.

◮ Proof: non-smooth change of variables formula,
  E(∇φ#ρ) = ∫ V(∇φ(x)) ρ(x) dx + ∫∫ W(∇φ(x) − ∇φ(y)) ρ(x)ρ(y) dx dy,
  U(∇φ#ρ) = ∫ U(ρ(x)/MA[φ](x)) MA[φ](x) dx,   where MA[φ](x) := det(D²φ(x)),
  together with the Minkowski determinant inequality: A ∈ SDP(R^d) ↦ det(A)^{1/d} is concave.
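
A toy 1D illustration of a generalized geodesic (my example, not from the deck; assumes NumPy), pushing samples of µ through (1 − t)∇φ0 + t∇φ1:

import numpy as np

# mu = uniform on [0,1]; phi0(x) = x^2/2 (gradient = identity), phi1(x) = x^2.
x = np.random.default_rng(0).uniform(0.0, 1.0, 10_000)
grad_phi0, grad_phi1 = x, 2.0 * x

for t in (0.0, 0.5, 1.0):
    y = (1.0 - t) * grad_phi0 + t * grad_phi1   # generalized displacement
    print(f"t={t}: support ~ [{y.min():.2f}, {y.max():.2f}], mean = {y.mean():.3f}")
# mu_t stays uniform on [0, 1+t]: it interpolates between nu0 = mu and nu1 = (2 id)#mu.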

Heat equation as a Wasserstein gradient flow

◮ Heat equation: ∂ρ/∂t = ∆ρ, ρ(0, ·) = ρ0, ρ(t, ·) ∈ P^ac(R^d).
  Solution: ρ(t, ·) = gradient flow in L²(R^d) of D(ρ) := (1/2) ∫ ‖∇ρ(x)‖² dx.
  Time-discretization using an implicit Euler scheme: for τ > 0,
  ρ^τ_{k+1} = argmin_{σ∈P^ac(R^d)} (1/2τ) ‖ρ^τ_k − σ‖²_{L²(R^d)} + D(σ).

◮ Jordan, Kinderlehrer, Otto: the heat equation is also a gradient flow, for W2(R^d), of the functional
  U(ρ) := ∫ ρ(x) log ρ(x) dx = − entropy of ρ.    [Jordan-Kinderlehrer-Otto '99]
  Corresponding time-discrete scheme:
  ρ^τ_{k+1} = argmin_{σ∈P(R^d)} (1/2τ) W2(ρ^τ_k, σ)² + U(σ).

◮ Convergence analysis for the linear Fokker-Planck equation.
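
To make the implicit Euler step concrete, here is a minimal sketch of the L² scheme above (my illustration; assumes a 1D periodic grid and NumPy), whose optimality condition is (I − τ∆)ρ^τ_{k+1} = ρ^τ_k:

import numpy as np

def implicit_euler_heat_step(rho, tau, h):
    """Solve (I - tau * Laplacian) rho_new = rho_old on a periodic 1D grid."""
    n = len(rho)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))
    lap[0, -1] = lap[-1, 0] = 1.0    # periodic boundary conditions
    return np.linalg.solve(np.eye(n) - tau * lap / (h * h), rho)

n, h, tau = 128, 1.0 / 128, 1e-3
grid = np.arange(n) * h
rho = 1.0 + 0.5 * np.cos(2 * np.pi * grid)    # a probability density on [0,1)
rho = implicit_euler_heat_step(rho, tau, h)
print(rho.sum() * h)   # mass is preserved up to round-off: ~1.0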

Diffusive PDEs as Wasserstein gradient flows

◮ Generalization to some evolution PDEs, where ρ(t, ·) ∈ P^ac(R^d):
  ∂ρ/∂t = div[ρ∇(U′(ρ) + V + W ∗ ρ)], ρ(0, ·) = ρ0.    (∗)

  Introducing the functionals
  U(ν) := ∫_{R^d} U(σ(x)) dx if dν/dH^d = σ, and U(ν) := +∞ if ν ∉ P^ac(R^d),
  E(ν) := ∫_{R^d} V(x) dν(x) + ∫_{R^d×R^d} W(x − y) d[ν ⊗ ν](x, y),
  the PDE (∗) can formally be seen as the W2-gradient flow of U + E.

◮ JKO time-discrete scheme: for τ > 0,
  ρ^τ_{k+1} = argmin_{σ∈P(R^d)} (1/2τ) W2(ρ^τ_k, σ)² + U(σ) + E(σ).

◮ Many applications: porous medium equation, cell movement via chemotaxis, crowd motion with congestion, models of cities in economics, etc.

Numerical applications of the JKO scheme

◮ Numerical applications of the JKO scheme are still limited:
  1D: monotone rearrangement, e.g. [Kinderlehrer-Walkington 99], [Blanchet-Calvez-Carrillo 08], [Agueh-Bowles 09];
  2D: optimal transport plans → diffeomorphisms [Carrillo-Moll 09];
  U = hard congestion term [Maury-Roudneff-Chupin-Santambrogio 10].

◮ Our goal is to approximate a JKO step numerically, in dimension ≥ 2: for X, Y convex bounded and µ ∈ P^ac(X),
  min_{ν∈P(Y)} (1/2τ) W2²(µ, ν) + E(ν) + U(ν).    (∗Y)

◮ Plan of the talk:
  Part 2: Convex discretization of the problem (∗Y) under McCann's hypotheses.
  Part 3: Γ-convergence results from the discrete problem to the continuous one.
  Part 4: Numerical simulations: non-linear diffusion, crowd motion.

2. Convex discretization of a JKO step

Prior work: functionals involving the gradient

◮ Problem: min_{φ∈K} ∫_X F(φ(x), ∇φ(x)) dµ(x), where K := finite convex functions on R^d.

◮ Negative results: PL functions over a fixed mesh.    [Choné-Le Meur '99]

◮ Discretization using convex interpolates:    [Carlier-Lachand-Robert-Maury '01]
  K(P) := restriction of convex functions to a finite set P ⊆ X.

◮ Discretization using convex interpolates with gradients:    [Ekeland-Moreno-Bromberg '10]
  K^G(P) := {(φ, Gφ) : P → R × R^d such that ∀p, q ∈ P, φ(q) ≥ φ(p) + ⟨q − p | Gφ(p)⟩},
  µP := Σ_{p∈P} µp δp, discretization of the measure µ,
  min_{φ∈K^G(P)} Σ_{p∈P} F(φ(p), Gφ(p)) µp.
  → |P|² linear constraints → adaptive method [Mirebeau '14];
  → exterior parameterization [Oberman '14], [Oudet-M. '14].
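
To see the |P|² supporting-plane constraints concretely, a small sketch (my illustration; is_in_KG is a hypothetical helper, assuming NumPy) that tests whether values φ and assigned gradients Gφ on P belong to K^G(P):

import numpy as np

def is_in_KG(P, phi, G, tol=1e-9):
    """P: (n,d) points, phi: (n,) values, G: (n,d) assigned gradients.
    Checks phi(q) >= phi(p) + <q - p | G(p)> for all p, q in P."""
    # diff[p, q] = phi[q] - phi[p] - <P[q] - P[p], G[p]>
    diff = (phi[None, :] - phi[:, None]
            - np.einsum('pqd,pd->pq', P[None, :, :] - P[:, None, :], G))
    return bool((diff >= -tol).all())

P = np.array([[0.0], [1.0], [2.0]])
phi = np.array([0.0, 0.5, 2.0])      # samples of x^2/2
G = np.array([[0.0], [1.0], [2.0]])  # subgradients at each point
print(is_in_KG(P, phi, G))           # True: x^2/2 is convex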

Convex functions with constrained gradient

◮ Let X, Y be bounded convex domains, and µ ∈ P^ac(X) with density ρ. Def: K_Y := {φ ∈ K; ∇φ ∈ Y a.e.}. Then
  min_{ν∈P(Y)} (1/2τ) W2²(µ, ν) + U(ν) + E(ν)
  ⟺ min_{φ∈K_Y} (1/2τ) W2²(µ, ∇φ#µ) + U(∇φ#µ) + E(∇φ#µ).

◮ Discretization of the problem: finite set P ⊆ X, µP := Σ_{p∈P} µp δp.
  Definition: K_Y(P) := {ψ|P ; ψ ∈ K_Y} ⊆ {φ : P → R} → a finite-dimensional polyhedron.
  → Any φ : P → R defines a function in K_Y:
  φ^{K_Y} := max{ψ; ψ ∈ K_Y and ψ|P ≤ φ}.
  → NB: φ ∈ K_Y(P) iff φ = φ^{K_Y}|P.

◮ Discretized problem:
  min_{φ∈K_Y(P)} (1/2τ) W2²(µP, "∇φ^{K_Y}#µP") + U("∇φ^{K_Y}#µP") + E("∇φ^{K_Y}#µP").
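
A brute-force way to evaluate φ^{K_Y} pointwise (my sketch, not the paper's construction; phi_KY is a hypothetical helper): φ^{K_Y} is the supremum of the affine functions with slope in Y lying below φ on P, so sampling slopes from Y gives an approximation from below:

import numpy as np

def phi_KY(x, P, phi, slopes):
    """x: (d,) query point, P: (n,d), phi: (n,) values, slopes: (m,d) samples of Y."""
    # Best offset keeping the plane <y|.> + a below phi on P, for each slope y:
    offsets = (phi[None, :] - slopes @ P.T).min(axis=1)
    return (slopes @ x + offsets).max()

# 1D example: P = {-1, 0, 1}, phi = |p|, gradients constrained to Y = [-1, 1].
P = np.array([[-1.0], [0.0], [1.0]]); phi = np.array([1.0, 0.0, 1.0])
slopes = np.linspace(-1.0, 1.0, 201).reshape(-1, 1)
print(phi_KY(np.array([0.5]), P, phi, slopes))   # ~0.5: here phi^{K_Y}(x) = |x|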

Absolutely continuous pushforward by ∂φ^{K_Y}

◮ Given φ ∈ K_Y(P), the function ψ = φ^{K_Y} is usually non-smooth at p ∈ P
  → subdifferential ∂ψ(x) := {y ∈ R^d; ∀z ∈ R^d, ψ(z) ≥ ψ(x) + ⟨z − x | y⟩}.

◮ Definition: For φ ∈ K_Y(P) and µP = Σ_{p∈P} µp δp, we let Vp := ∂φ^{K_Y}(p) ⊆ Y and define G^ac_φ#µP as the probability measure on Y whose density σ satisfies:
  (i) σ is constant on the subdifferentials {Vp}_{p∈P};
  (ii) for p ∈ P, ∫_{Vp} σ(x) dx = µp.
  Explicit formula: σ = Σ_{p∈P} (µp / H^d(Vp)) 1_{Vp}.

Convex discretization of the internal energy

◮ Discrete Monge-Ampère operator: MA_Y[φ](p) := H^d(∂φ^{K_Y}(p)), so that
  U(G^ac_φ#µP) = Σ_{p∈P} U(µp / MA_Y[φ](p)) MA_Y[φ](p).
  NB: similarity with U(∇φ#ρ) = ∫ U(ρ(x)/MA[φ](x)) MA[φ](x) dx.

Proposition: Under McCann's assumption, φ ∈ K_Y(P) ↦ U(G^ac_φ#µP) is convex.

◮ Proof (case U(r) = r log r): given U(σ) = ∫ U(σ(x)) dx and σ = Σ_{p∈P} [µp / MA_Y[φ](p)] 1_{∂φ^{K_Y}(p)},
  U(G^ac_φ#µP) = − Σ_{p∈P} µp log(MA_Y[φ](p)) + Σ_{p∈P} µp log(µp).

  Lemma: Given φ0, φ1 ∈ K_Y(P), φt := (1 − t)φ0 + tφ1 and φ̄t := [φt]^{K_Y}, one has
  ∂φ̄t(p) ⊇ (1 − t)∂φ̄0(p) ⊕ t∂φ̄1(p)    (⊕ = Minkowski sum).

  By Brunn-Minkowski,
  log H^d(∂φ̄t(p)) ≥ log H^d((1 − t)∂φ̄0(p) ⊕ t∂φ̄1(p)) ≥ (1 − t) log H^d(∂φ̄0(p)) + t log H^d(∂φ̄1(p)),
  ⟹ U(G^ac_{φt}#µP) ≤ (1 − t) U(G^ac_{φ0}#µP) + t U(G^ac_{φ1}#µP).
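
In code, once the cell volumes MA_Y[φ](p) are known, the entropy case is a one-liner (my sketch with hypothetical names, assuming NumPy):

import numpy as np

def internal_energy_entropy(mu, vol):
    """U(G^ac_phi # mu_P) for U(r) = r log r, given masses mu_p and cell
    volumes vol_p = MA_Y[phi](p): sum_p U(mu_p/vol_p) * vol_p."""
    return float((mu * (np.log(mu) - np.log(vol))).sum())

mu = np.array([0.25, 0.25, 0.5])
vol = np.array([0.2, 0.2, 0.6])   # volumes of the subdifferential cells
print(internal_energy_entropy(mu, vol))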

Convex discretization of the potential energy

◮ Difficulty: lack of convexity of φ ∈ K_Y(P) ↦ E(G^ac_φ#µP).
  E.g. P = {p±, q}, p± = (0, ±1), q = (2, 0), Y = [−1, 1]², φt = (1 − t)1_q, µP = 0.1(δ_{p+} + δ_{p−}) + 0.8 δq.
  [Plot: t ↦ ∫ ‖x‖² d[G^ac_{φt}#µP](x), with values ≈ .7-.85, non-convex between t = 0 and t = 1; insets show ∂[φ0]^{K_Y}(p−) and ∂[φ1]^{K_Y}(p−).]

◮ We follow Ekeland-Moreno-Bromberg '10, and include the gradient as an unknown:
  Def: K^G_Y(P) := {(φ, Gφ) : P → R × R^d; φ ∈ K_Y(P), ∀p, Gφ(p) ∈ ∂φ^{K_Y}(p)}.
  Def: Given (φ, Gφ) ∈ K^G_Y(P), we define Gφ#µP := Σ_{p∈P} µp δ_{Gφ(p)}. Then
  E(Gφ#µP) = Σ_{p∈P} V(Gφ(p)) µp + Σ_{p,q∈P} W(Gφ(p) − Gφ(q)) µp µq.
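
The discrete potential and interaction terms are then plain finite sums; a short sketch (my illustration; potential_interaction_energy is a hypothetical helper, assuming NumPy):

import numpy as np

def potential_interaction_energy(mu, G, V, W):
    """E(G_phi # mu_P) = sum_p V(G_p) mu_p + sum_{p,q} W(G_p - G_q) mu_p mu_q.
    mu: (n,) masses, G: (n,d) gradients, V, W: vectorized maps R^d -> R."""
    pot = (V(G) * mu).sum()
    diff = G[:, None, :] - G[None, :, :]   # (n,n,d) pairwise differences
    inter = (W(diff) * mu[:, None] * mu[None, :]).sum()
    return float(pot + inter)

V = lambda y: (y ** 2).sum(-1)   # quadratic potential
W = lambda z: (z ** 2).sum(-1)   # quadratic interaction
mu = np.array([0.5, 0.5])
G = np.array([[0.0, 0.0], [1.0, 0.0]])
print(potential_interaction_energy(mu, G, V, W))   # 0.5 + 2 * 0.25 * 1 = 1.0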

Summary of the convex discretization

Theorem [Carlier-Benamou-M.-Oudet]: Assume that V, W : R^d → R are convex functions and r^d U(r^{-d}) is convex non-increasing, with U(0) = 0. Then the following optimization problem is convex:
  min_{(φ,Gφ)∈K^G_Y(P)} (1/2τ) W2²(µP, Gφ#µP) + U(G^ac_φ#µP) + E(Gφ#µP).
The minimum is unique if e.g. V and r^d U(r^{-d}) are strictly convex.

◮ The two notions of push-forward seem necessary.

◮ If lim_{r→0} r U(r^{-1}) = +∞, then U is a barrier for convexity, i.e.
  U(G^ac_φ#µP) = Σ_{p∈P} U(µp / MA_Y[φ](p)) MA_Y[φ](p) < +∞ ⟹ φ is in the interior of K_Y(P).
  NB: |P| non-linear constraints vs |P|² linear constraints.

◮ Generalization to non-negatively cross-curved cost functions.    [Figalli-Kim-McCann]

◮ Another convex discretization uses wide stencils; its convergence is unclear.

3. A Γ-convergence result

Convergence theorem

JKO step: X, Y bounded and convex, µ ∈ P^ac(X) with density c^{-1} ≤ ρ ≤ c:
  min_{ν∈P(Y)} (1/2τ) W2²(µ, ν) + E(ν) + U(ν).    (∗)

Hypotheses:
  (C1) E continuous, U l.s.c. on (P(Y), W2);
  (C2) U(ρ) = ∫ U(ρ(x)) dx, with U ≥ M convex, satisfying McCann's condition for displacement-convexity;
  examples: U(r) = r log r, U(r) = r^m/(m − 1) with m > 0.

Theorem [Carlier-Benamou-M.-Oudet]: Let Pn ⊆ X be finite, µn ∈ P(Pn) with lim W2(µn, µ) = 0, and consider
  min_{(φ,Gφ)∈K^G_Y(Pn)} (1/2τ) W2²(µn, Gφ#µn) + E(Gφ#µn) + U(G^ac_φ#µn).    (∗)n
If φn minimizes (∗)n, then νn := G^ac_{φn}#µn is a minimizing sequence for (∗).
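
Both examples in (C2) can be checked against McCann's condition directly (my computation): for U(r) = r^m/(m − 1), r^d U(r^{-d}) = r^{d(1−m)}/(m − 1). For m > 1 this is a positive multiple of a negative power of r, hence convex and non-increasing; for 1 − 1/d ≤ m < 1 it equals −r^a/(1 − m) with a = d(1 − m) ∈ (0, 1], which is again convex and non-increasing.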

Proof of the convergence theorem: Lower bound

Notation: F(σ) := W2²(µ, σ) + E(σ) + U(σ), Fn(σ) := W2²(µn, σ) + E(σ) + U(σ), and
  m := min_{σ∈P(Y)} F(σ),   mn := min_{φn∈K_Y(Pn)} Fn(G^ac_{φn}#µn).

◮ Step 1: lim inf mn ≥ m.

◮ Step 2: lim sup mn ≤ m = min_{σ∈P(Y)} F(σ). Using (C2) and a convolution argument, this follows from:
  given any probability density σ ∈ C⁰(Y) with ε < σ < ε^{-1}, ∃φn ∈ K_Y(Pn)
  s.t. lim_{n→∞} ‖σ − σn‖_{L∞(Y)} = 0, where σn := d[G^ac_{φn}#µn]/dH^d.

Proof of the convergence theorem: Upper bound

◮ Step 2': given any probability density σ ∈ C⁰(Y) s.t. ε < σ < ε^{-1}, ∃φn ∈ K_Y(Pn)
  s.t. lim_{n→∞} ‖σ − σn‖_{L∞(Y)} = 0, where σn := d[G^ac_{φn}#µn]/dH^d.    (1)

  [Figure: ∇φ pushes µ on X forward to σ = ∇φ#µ on Y; ∂[φn]^{K_Y} sends spt(µn) = Pn ⊆ X to the cells (V^n_p) partitioning Y.]

  Lemma: Given µn = Σ_{p∈Pn} µp δp in P(X), there exists φn ∈ K_Y(Pn) s.t. ∀p ∈ Pn, σ(V^n_p) = µp, where V^n_p := ∂[φn]^{K_Y}(p).

  (a) σn := d[G^ac_{φn}#µn]/dH^d = Σ_{p∈Pn} (σ(V^n_p)/H^d(V^n_p)) 1_{V^n_p}.
  (b) If max_{p∈Pn} diam(V^n_p) → 0 as n → ∞, then lim_{n→∞} ‖σ − σn‖_{L∞(Y)} = 0, i.e. (1) holds.
  (c) Assume not (1): ∃pn ∈ Pn s.t. diam(V^n_{pn}) ≥ ε. Up to extraction, [φn]^{K_Y} → φ in ‖·‖∞, so that
      diam(∂φ(x)) ≥ ε, where x = limn pn ∈ X.    (2)
  (d) Moreover, ∇φ#ρ = σ. By Caffarelli's regularity theorem, φ ∈ C¹: contradiction with (2).

4. Numerical results

Computing the discrete Monge-Ampère operator

◮ Goal: fast computation of MA_Y[φ](p) and of its 1st/2nd derivatives w.r.t. φ, in
  U(G^ac_φ#µP) = Σ_{p∈P} U(µp / MA_Y[φ](p)) MA_Y[φ](p),   with MA_Y[φ](p) := H^d(∂φ^{K_Y}(p)).

◮ We rely on the notion of Laguerre (or power) cell from computational geometry.
  Definition: Given a function φ : P → R,
  Lag^φ_P(p) := {y ∈ R^d; ∀q ∈ P, φ(q) ≥ φ(p) + ⟨q − p | y⟩}.
  → For φ(p) = ‖p‖²/2, one gets the Voronoi cell Lag^φ_P(p) = {y; ∀q ∈ P, ‖q − y‖² ≥ ‖p − y‖²}.
  → Decomposition of R^d into convex polyhedra.
  → Computation in time O(|P| log |P|) in 2D.

  Lemma: For φ ∈ K_Y(P) and p ∈ P, ∂φ^{K_Y}(p) = Lag^φ_P(p) ∩ Y.
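
The Lemma makes a brute-force estimate easy (my sketch, not the talk's O(|P| log |P|) construction; laguerre_cell_volumes is a hypothetical helper): sample Y on a grid and assign each y to the p minimizing φ(p) − ⟨p | y⟩, which is exactly the Laguerre-cell condition:

import numpy as np

def laguerre_cell_volumes(P, phi, Y_grid, cell_area):
    """P: (n,d) points, phi: (n,) values, Y_grid: (m,d) samples of Y."""
    scores = phi[None, :] - Y_grid @ P.T   # (m,n): phi(p) - <p|y> for each y
    owner = scores.argmin(axis=1)          # index of the cell containing y
    return np.bincount(owner, minlength=len(P)) * cell_area

# Y = [-1,1]^2 on a 200x200 grid; phi(p) = |p|^2/2 gives clipped Voronoi cells.
t = np.linspace(-1.0, 1.0, 200)
Y_grid = np.stack(np.meshgrid(t, t), -1).reshape(-1, 2)
P = np.array([[0.0, 0.0], [0.8, 0.0], [0.0, 0.8]])
phi = 0.5 * (P ** 2).sum(1)
vols = laguerre_cell_volumes(P, phi, Y_grid, (2.0 / 199) ** 2)
print(vols, vols.sum())   # approximate MA_Y[phi](p); total ~ H^2(Y) = 4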

Computing the discrete Monge-Ampère operator

◮ Global construction of the intersections (Lag^φ_P(p) ∩ Y)_{p∈P} in 2D.
  Assumption: ∂Y = ∪_{s∈S} s, with S a finite family of segments.
  Combinatorics stored as an (abstract) triangulation T of the finite set P ∪ S, i.e.
  (p1, p2, p3) ∈ T iff Lag^φ_P(p1) ∩ Lag^φ_P(p2) ∩ Lag^φ_P(p3) ∩ Y ≠ ∅;
  (p1, p2, s1) ∈ T iff Lag^φ_P(p1) ∩ Lag^φ_P(p2) ∩ s1 ∩ Y ≠ ∅;
  (p1, s1, s2) ∈ T iff Lag^φ_P(p1) ∩ s1 ∩ s2 ∩ Y ≠ ∅.
  Computation in time O(|P| log |P| + |S|) in 2D.

◮ Computation of MA_Y(p) = H²(Lag^φ_P(p) ∩ Y) and of its derivatives. The sparsity structure of the Jacobian/Hessian is encoded in T:
  ∂MA_Y(p)/∂φ(q) ≠ 0 ⟹ (p, q) is an edge of T;
  ∂²MA_Y(p)/∂φ(r)∂φ(q) ≠ 0 ⟹ (p, q, r) is a triangle of T.

Example 1: Nonlinear diffusion on point clouds

◮ PDE: ∂ρ/∂t = ∆ρ^m in X, with no-flux boundary condition ∇ρ ⊥ n_∂X on ∂X.    (∗)
  porous medium equation: m > 1; fast diffusion equation: m ∈ [1 − 1/d, 1).
  (∗) is the gradient flow in (P(X), W2) of U(ρ) = ∫ U(ρ(x)) dx with U(r) = r^m/(m − 1).    [Otto]

◮ Algorithm (each step solved by Newton's method):
  Input: µ0 := Σ_{x∈P0} δx/|P0|, τ > 0.
  For k ∈ {0, ..., T}:
    φ ← argmin_{φ∈K^G_X(Pk)} (1/2τ) W2²(µk, Gφ#µk) + U(G^ac_φ#µk)
    µk+1 ← Gφ#µk; Pk+1 ← spt(µk+1).
  [Figure: the point cloud Pk ⊆ X is transported to Pk+1 by x ↦ Gφ(x).]

Example 2: Crowd motion and congestion

◮ Gradient flow model of crowd motion with congestion, with a JKO scheme [Maury-Roudneff-Chupin-Santambrogio 10]:
  µk+1 = argmin_{ν∈P(X)} (1/2τ) W2²(µk, ν) + E(ν) + U(ν),
  E(ν) := ∫_X V(x) dν(x),
  U(ν) := 0 if dν/dH^d ≤ 1, +∞ if not.
  Prop: the congestion term U is convex under generalized displacements.

◮ We solve this problem with a relaxed hard congestion term:
  Uα(ρ) := − ∫ ρ(x)^α log(1 − ρ(x)^{1/d}) dx.
  [Plot: r ↦ −r^α log(1 − r^{1/d}) on r ∈ [0, 1] for α = 1, 5, 10.]
  Prop: (i) Uα is convex under generalized displacements; (ii) Uα →Γ U as α → ∞, and βU1 →Γ U as β → 0.

◮ Convex optimization problem if V is λ-convex (V + λ‖·‖² convex) and τ ≤ λ/2.
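
A quick look at the relaxed congestion integrand (my sketch, assuming NumPy): it stays small away from the congestion threshold and blows up as the density approaches 1, more sharply for large α:

import numpy as np

def u_alpha(r, alpha, d=2):
    """Integrand of U_alpha: -r^alpha * log(1 - r^(1/d))."""
    return -r ** alpha * np.log(1.0 - r ** (1.0 / d))

r = np.array([0.5, 0.9, 0.99])
for alpha in (1, 5, 10):
    print(alpha, np.round(u_alpha(r, alpha), 3))
# The log factor diverges at r = 1 for every alpha, which keeps the
# discrete densities strictly below the congestion level 1.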

Example 2: Crowd motion and congestion

◮ Setting: initial density on X = [−2, 2]², P = 200 × 200 regular grid,
  potential V(x) = ‖x − (2, 0)‖² + 5 exp(−5‖x‖²/2).

◮ Algorithm:
  Input: µ0 ∈ P(P), τ > 0, α > 0, β ≥ 1.
  For k ∈ {0, ..., T}:
    φ ← argmin_{φ∈K^G_X(Pk)} (1/2τ) W2²(µk, Gφ#µk) + E(Gφ#µk) + αUβ(G^ac_φ#µk)
    ν ← Gφ#µk; µk+1 ← projection of ν|[−2,2)×[−2,2] on P.