Convex discretization of functionals involving the Monge-Ampère operator

Quentin Mérigot, CNRS / Université Paris-Dauphine
Joint work with J.-D. Benamou, G. Carlier and E. Oudet

Workshop on Optimal Transport in the Applied Sciences
December 8-12, 2014 — RICAM, Linz

1. Motivation: Gradient flows in Wasserstein space

Background: Optimal transport

◮ Wasserstein distance between $\mu, \nu \in P_2(\mathbb{R}^d)$, where

  $P_2(\mathbb{R}^d) := \{\mu \in P(\mathbb{R}^d);\ \int \|x\|^2 \,\mathrm{d}\mu(x) < +\infty\}$
  $\Gamma(\mu, \nu) := \{\pi \in P(\mathbb{R}^d \times \mathbb{R}^d);\ p_{1\#}\pi = \mu,\ p_{2\#}\pi = \nu\}$
  $P_2^{\mathrm{ac}}(\mathbb{R}^d) := P_2(\mathbb{R}^d) \cap L^1(\mathbb{R}^d)$

  Definition: $W_2^2(\mu, \nu) := \min_{\pi \in \Gamma(\mu,\nu)} \int \|x - y\|^2 \,\mathrm{d}\pi(x, y)$.

◮ Relation to convex functions. Def: $K$ := finite convex functions on $\mathbb{R}^d$.

  Theorem (Brenier '91): Given $\mu \in P_2^{\mathrm{ac}}(\mathbb{R}^d)$, the map $\phi \in K \mapsto \nabla\phi_{\#}\mu \in P_2(\mathbb{R}^d)$ is surjective, and moreover

  $W_2^2(\mu, \nabla\phi_{\#}\mu) = \int_{\mathbb{R}^d} \|x - \nabla\phi(x)\|^2 \,\mathrm{d}\mu(x)$.

  Given any $\mu \in P_2^{\mathrm{ac}}(\mathbb{R}^d)$, this yields a "parameterization" of $P_2(\mathbb{R}^d)$, as "seen" from $\mu$.
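For discrete measures, the minimization defining $W_2^2$ is a finite linear program over transport plans. A minimal numerical sketch (my own illustration, not code from the talk; `numpy` and `scipy` are assumed available):

```python
# Sketch: W_2^2(mu, nu) between two discrete measures as the linear program
#   min_pi  sum_ij pi_ij |x_i - y_j|^2   s.t.  row sums = mu, column sums = nu.
import numpy as np
from scipy.optimize import linprog

def w2_squared(x, mu, y, nu):
    """W_2^2 between mu = sum_i mu_i delta_{x_i} and nu = sum_j nu_j delta_{y_j}."""
    n, m = len(x), len(y)
    # cost vector c_ij = |x_i - y_j|^2, flattened row-major
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1).ravel()
    # marginal constraints: first n rows fix row sums, next m rows fix column sums
    A = np.zeros((n + m, n * m))
    for i in range(n):
        A[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A[n + j, j::m] = 1.0
    res = linprog(c, A_eq=A, b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun
```

For two Dirac masses this reduces to the squared distance between the two points; for identical clouds with identical weights the value is 0.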

Motivation 1: Crowd Motion Under Congestion

[Maury-Roudneff-Chupin-Santambrogio '10]

◮ JKO scheme for crowd motion with hard congestion:

  $\rho^\tau_{k+1} = \arg\min_{\sigma \in P_2(X)} \tfrac{1}{2\tau} W_2^2(\rho^\tau_k, \sigma) + E(\sigma) + U(\sigma)$   (*)

  where $X \subseteq \mathbb{R}^d$ is convex and bounded, with

  potential energy: $E(\nu) := \int_{\mathbb{R}^d} V(x) \,\mathrm{d}\nu(x)$
  congestion: $U(\nu) := 0$ if $\mathrm{d}\nu = f \,\mathrm{d}\mathcal{H}^d$ with $f \le 1$, and $+\infty$ otherwise.

◮ Assuming $\sigma = \nabla\phi_{\#}\rho^\tau_k$ with $\phi$ convex, the Wasserstein term becomes explicit:

  $\min_\phi \tfrac{1}{2\tau} \int_{\mathbb{R}^d} \|x - \nabla\phi(x)\|^2 \rho^\tau_k(x) \,\mathrm{d}x + E(\nabla\phi_{\#}\rho^\tau_k) + U(\nabla\phi_{\#}\rho^\tau_k)$

  On the other hand, the constraint becomes strongly nonlinear:

  $U(\nabla\phi_{\#}\rho^\tau_k) < +\infty \iff \det D^2\phi(x) \ge \rho_k(x)$.

Motivation 2: Nonlinear Diffusion

  $\frac{\partial\rho}{\partial t} = \operatorname{div}\left[\rho \nabla (U'(\rho) + V + W * \rho)\right]$, $\quad \rho(0, \cdot) = \rho_0$, $\quad \rho(t, \cdot) \in P^{\mathrm{ac}}(\mathbb{R}^d)$   (*)

◮ Formally, (*) can be seen as the $W_2$-gradient flow of $U + E$, with

  internal energy: $U(\nu) := \int_{\mathbb{R}^d} U(f(x)) \,\mathrm{d}x$ if $\mathrm{d}\nu = f \,\mathrm{d}\mathcal{H}^d$, and $+\infty$ otherwise (ex: $U(r) = r \log r$ gives the entropy)
  potential + interaction energy: $E(\nu) := \int_{\mathbb{R}^d} V(x) \,\mathrm{d}\nu(x) + \int_{\mathbb{R}^d \times \mathbb{R}^d} W(x - y) \,\mathrm{d}[\nu \otimes \nu](x, y)$

◮ JKO time-discrete scheme [Jordan-Kinderlehrer-Otto '98]: for $\tau > 0$,

  $\rho^\tau_{k+1} = \arg\min_{\sigma \in P(\mathbb{R}^d)} \tfrac{1}{2\tau} W_2^2(\rho^\tau_k, \sigma) + U(\sigma) + E(\sigma)$

→ Many applications: porous medium equation, cell movement via chemotaxis, Cournot-Nash equilibria, etc.

Displacement Convex Setting

For $X$ convex bounded and $\mu \in P^{\mathrm{ac}}(X)$, $\mathrm{spt}(\nu) \subseteq X$:

  $\min_{\nu \in P(X)} \tfrac{1}{2\tau} W_2^2(\mu, \nu) + E(\nu) + U(\nu)$   (*_X)

  $\iff \min_{\phi \in K_X} \tfrac{1}{2\tau} W_2^2(\mu, \nabla\phi_{\#}\mu) + U(\nabla\phi_{\#}\mu) + E(\nabla\phi_{\#}\mu)$, where $K_X := \{\phi \text{ convex};\ \nabla\phi \in X\}$.

When is the minimization problem (*_X) convex? Here

  $E(\nu) := \int_{\mathbb{R}^d} (V + W * \nu) \,\mathrm{d}\nu$, $\qquad U(\nu) := \int_{\mathbb{R}^d} U\!\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mathcal{H}^d}\right) \mathrm{d}x$.

NB: $U(\nabla\phi_{\#}\rho) = \int U\!\left(\frac{\rho(x)}{\mathrm{MA}[\phi](x)}\right) \mathrm{MA}[\phi](x) \,\mathrm{d}x$, where $\mathrm{MA}[\phi](x) := \det(D^2\phi(x))$.

Theorem [McCann '94]: (*_X) is convex if
  (H1) $V, W : \mathbb{R}^d \to \mathbb{R}$ are convex functions,
  (H2) $r^d U(r^{-d})$ is convex non-increasing, $U(0) = 0$.

Goal: a convergent and convex spatial discretization of (*_X), and numerical applications.

Numerical applications of the JKO scheme

◮ Numerical applications of the (variational) JKO scheme are still limited:

  1D: monotone rearrangement, e.g. [Kinderlehrer-Walkington '99] [Blanchet-Calvez-Carrillo '08] [Agueh-Bowles '09]
  2D: optimal transport plans → diffeomorphisms [Carrillo-Moll '09]
  U = hard congestion term [Maury-Roudneff-Chupin-Santambrogio '10]

◮ Our goal is to approximate a JKO step numerically, in dimension ≥ 2: for $X$ convex bounded and $\mu \in P^{\mathrm{ac}}(X)$,

  $\min_{\nu \in P(X)} \tfrac{1}{2\tau} W_2^2(\mu, \nu) + E(\nu) + U(\nu)$   (*_X)

◮ Plan of the talk:

  Part 2: Convex discretization of the problem (*_X) under McCann's hypotheses.
  Part 3: Γ-convergence results from the discrete problem to the continuous one.
  Part 4: Numerical simulations: non-linear diffusion, crowd motion.
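The 1D monotone-rearrangement shortcut mentioned above fits in a few lines (my own illustration with uniform weights, not code from the cited works): in 1D, $W_2^2$ is the $L^2$ distance between quantile functions, so for equally weighted samples the optimal plan simply matches ranks.

```python
# Sketch: in 1D the optimal transport map is the monotone rearrangement,
# so W_2^2 between two uniform empirical measures matches sorted samples.
import numpy as np

def w2_squared_1d(x, y):
    """W_2^2 between (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i}."""
    x, y = np.sort(x), np.sort(y)   # quantile functions of the two measures
    return np.mean((x - y) ** 2)
```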

2. Convex discretization of a JKO step

Prior work: Discretizing the Space of Convex Functions

  $\min_{\phi \in K} \int_X F(\phi(x), \nabla\phi(x)) \,\mathrm{d}\mu(x)$, $\qquad K := \{\phi \text{ convex}\}$

◮ Finite elements: piecewise-linear convex functions over a fixed mesh $P \subseteq X$
  → number of constraints is $\sim N := \mathrm{card}(P)$
  → non-convergence result [Choné-Le Meur '99]

◮ Finite differences, using convex interpolates [Carlier-Lachand-Robert-Maury '01]
  → convergent, but number of constraints is $\simeq N^2$ [Ekeland-Moreno-Bromberg '10]
  → adaptive method [Mirebeau '14]
  → exterior parameterization [Oudet-M. '14] [Oberman '14]

Our functional involves the Monge-Ampère operator $\det D^2\phi$: this will help us.

Discretizing the Space of Convex Functions

  $\min_{\phi \in K_X} \tfrac{1}{2\tau} W_2^2(\mu, \nabla\phi_{\#}\mu) + E(\nabla\phi_{\#}\mu) + U(\nabla\phi_{\#}\mu)$, $\qquad K_X := \{\phi \text{ cvx};\ \nabla\phi \in X\}$

Definition: Given $P \subseteq X$ finite, $K_X(P) := \{\psi|_P;\ \psi \in K_X\} \subseteq \{\phi : P \to \mathbb{R}\}$
  → a finite-dimensional convex set
  → extension of $\phi \in K_X(P)$: $\hat\phi := \max\{\psi;\ \psi \in K_X \text{ and } \psi|_P \le \phi\} \in K_X$
    ($\phi : P \to \mathbb{R}$ is a grid function, while $\hat\phi : \mathbb{R}^d \to \mathbb{R}$ belongs to $K_X$; the slide illustrates the case $X = [-1, 1]$)

Discrete push-forward of $\mu_P = \sum_{p \in P} \mu_p \delta_p \in P(P)$.

Definition: For $\phi \in K_X(P)$, let $V_p := \partial\hat\phi(p)$ and

  $G_{\phi\#}\mu_P := \sum_{p \in P} \frac{\mu_p}{\mathcal{H}^d(V_p)} \, \mathcal{H}^d|_{V_p} \in P^{\mathrm{ac}}(X)$.
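In dimension $d = 1$ these objects are explicit: the subdifferential $\partial\hat\phi(p)$ is the interval between the incoming and outgoing slopes at $p$, clipped to $X$. A hedged sketch (assumed setup: sorted points and values that are already convex as a sequence; not code from the talk):

```python
# Sketch: MA[phi](p) = length of V_p = subdifferential of phi_hat at p, in 1D.
# For interior p_i, V_{p_i} = [slope(p_{i-1}, p_i), slope(p_i, p_{i+1})] ∩ X;
# at the endpoints the missing slope is replaced by the boundary of X.
def discrete_ma_1d(P, phi, X=(-1.0, 1.0)):
    n = len(P)
    slopes = [(phi[i + 1] - phi[i]) / (P[i + 1] - P[i]) for i in range(n - 1)]
    ma = []
    for i in range(n):
        lo = X[0] if i == 0 else slopes[i - 1]      # left slope (or left end of X)
        hi = X[1] if i == n - 1 else slopes[i]      # right slope (or right end of X)
        lo, hi = max(lo, X[0]), min(hi, X[1])       # V_p = [lo, hi] ∩ X
        ma.append(max(hi - lo, 0.0))
    return ma
```

For $\phi(p) = p^2/2$ on $P = \{-1, 0, 1\}$ and $X = [-1, 1]$, the cell lengths are $0.5, 1, 0.5$ and sum to $|X| = 2$, as the definition of $G_{\phi\#}\mu_P$ requires.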

Convex Space-Discretization of One JKO Step

Theorem [Benamou, Carlier, M., Oudet '14]: Under McCann's hypotheses (H1) and (H2), the problem

  $\min_{\phi \in K_X(P)} \tfrac{1}{2\tau} W_2^2(\mu_P, H_{\phi\#}\mu_P) + E(H_{\phi\#}\mu_P) + U(G_{\phi\#}\mu_P)$

is convex, and the minimum is unique if $r^d U(r^{-d})$ is strictly convex.

◮ Unfortunately, two different definitions of the push-forward seem necessary.

◮ The convexity of $\phi \in K_X(P) \mapsto U(G_{\phi\#}\mu_P)$ follows from the log-concavity of the discrete Monge-Ampère operator $\mathrm{MA}[\phi](p) := \mathcal{H}^d(\partial\hat\phi(p))$.

◮ If $\lim_{r\to\infty} U(r)/r = +\infty$, the internal energy is a barrier for convexity, i.e.

  $U(G_{\phi\#}\mu_P) < +\infty \implies \forall p \in P,\ \mathrm{MA}[\phi](p) = \mathcal{H}^d(\partial\hat\phi(p)) > 0 \implies \phi$ is in the interior of $K_X(P)$.

  NB: $|P|$ non-linear constraints vs $|P|^2$ linear constraints.

Convex Space-Discretization of the Internal Energy

Discrete Monge-Ampère operator: $\mathrm{MA}[\phi](p) := \mathcal{H}^d(\partial\hat\phi(p))$.

Proposition: Under McCann's assumption, $\phi \in K_X(P) \mapsto U(G_{\phi\#}\mu_P)$ is convex.

Proof: Recall $G_{\phi\#}\mu_P = \sum_{p \in P} [\mu_p / \mathrm{MA}[\phi](p)] \, \mathbf{1}_{\partial\hat\phi(p)}$. With $U(\sigma) = \int U(\sigma(x)) \,\mathrm{d}x$,

  $U(G_{\phi\#}\mu_P) = \sum_{p \in P} U(\mu_p / \mathrm{MA}[\phi](p)) \, \mathrm{MA}[\phi](p)$

  NB: note the similarity with $U(\nabla\phi_{\#}\rho) = \int U\!\left(\frac{\rho(x)}{\mathrm{MA}[\phi](x)}\right) \mathrm{MA}[\phi](x) \,\mathrm{d}x$.

In the case $U(r) = r \log r$, this becomes

  $U(G_{\phi\#}\mu_P) = -\sum_{p \in P} \mu_p \log(\mathrm{MA}[\phi](p)) + \sum_{p \in P} \mu_p \log(\mu_p)$.

Lemma: Given $\phi_0, \phi_1 \in K_X(P)$ and $\phi_t := (1-t)\phi_0 + t\phi_1$, one has $\partial\hat\phi_t(p) \supseteq (1-t)\,\partial\hat\phi_0(p) \oplus t\,\partial\hat\phi_1(p)$ ($\oplus$ = Minkowski sum).

Hence, by the Brunn-Minkowski inequality,

  $\log \mathcal{H}^d(\partial\hat\phi_t(p)) \ge \log \mathcal{H}^d\big((1-t)\,\partial\hat\phi_0(p) \oplus t\,\partial\hat\phi_1(p)\big) \ge (1-t)\log \mathcal{H}^d(\partial\hat\phi_0(p)) + t \log \mathcal{H}^d(\partial\hat\phi_1(p))$,

so that $U(G_{\phi_t\#}\mu_P) \le (1-t)\, U(G_{\phi_0\#}\mu_P) + t\, U(G_{\phi_1\#}\mu_P)$. □
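Once the values $\mathrm{MA}[\phi](p)$ are known, the entropy formula above is a one-line sum. A small sketch (my own illustration, not code from the talk):

```python
# Sketch: discrete internal energy for U(r) = r log r,
#   U(G_phi# mu_P) = -sum_p mu_p log MA[phi](p) + sum_p mu_p log mu_p.
import math

def discrete_entropy(mu, ma):
    """mu: weights mu_p > 0; ma: MA[phi](p) = H^d(subdifferential at p) > 0."""
    return sum(m * (math.log(m) - math.log(a)) for m, a in zip(mu, ma))
```

As expected from the formula, enlarging the cells (increasing $\mathrm{MA}[\phi](p)$) decreases the energy: the measure spreads out.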

3. A Γ-convergence result

Convergence of the Space-Discretization

Setting: $X$ bounded and convex, $\mu \in P^{\mathrm{ac}}(X)$ with density $c^{-1} \le \rho \le c$:

  $\min_{\nu \in P(X)} \tfrac{1}{2\tau} W_2^2(\mu, \nu) + E(\nu) + U(\nu)$   (*)

  (C1) $E$ continuous, $U$ l.s.c. on $(P(X), W_2)$
  (C2) $U(\rho) = \int U(\rho(x)) \,\mathrm{d}x$, where $U \ge M$ is convex
       (= McCann's condition for displacement convexity)

Theorem [Benamou, Carlier, M., Oudet '14]: Let $P_n \subseteq X$ finite, $\mu_n \in P(P_n)$ with $\lim W_2(\mu_n, \mu) = 0$, and consider

  $\min_{\phi \in K_X(P_n)} \tfrac{1}{2\tau} W_2^2(\mu_n, H_{\phi\#}\mu_n) + E(H_{\phi\#}\mu_n) + U(G_{\phi\#}\mu_n)$   (*)_n

If $\phi_n$ minimizes (*)_n, then $(G_{\phi_n\#}\mu_n)$ is a minimizing sequence for (*).

◮ The proof relies on Caffarelli's regularity theorem.
◮ When $P_n$ is a regular grid, there is an alternative (and quantitative) argument.

Proof of the convergence theorem: Lower bound

JKO step: $X$ bounded and convex, $\mu \in P^{\mathrm{ac}}(X)$ with density $c^{-1} \le \rho \le c$. Let

  $F(\sigma) := W_2^2(\mu, \sigma) + E(\sigma) + U(\sigma)$, $\qquad F_n(\sigma) := W_2^2(\mu_n, \sigma) + E(\sigma) + U(\sigma)$,

and set $m := \min_{\sigma \in P(X)} F(\sigma)$ and $m_n := \min_{\phi_n \in K_X(P_n)} F_n(G_{\phi_n\#}\mu_n)$.

◮ Step 1: $\liminf m_n \ge m$.

◮ Step 2: $\limsup m_n \le m = \min_{\sigma \in P(X)} F(\sigma)$. Using (C2) and a convolution argument, this follows from:

◮ Step 2': Given any probability density $\sigma \in C^0(X)$ with $\varepsilon < \sigma < \varepsilon^{-1}$, there exist $\phi_n \in K_X(P_n)$ such that

  $\lim_{n\to\infty} \|\sigma - \sigma_n\|_{L^\infty(X)} = 0$,   (1)

where $\sigma_n$ is the density of $G_{\phi_n\#}\mu_n$. (Figure omitted: $\nabla\phi$ maps $\mu$ on $X$ to $\sigma = \nabla\phi_{\#}\mu$, while $\mathrm{spt}(\mu_n) = P_n \subseteq X$ yields the piecewise-constant density $\sigma_n$.)

  (a) Given $\mu_n = \sum_{p \in P_n} \mu_p \delta_p$, take $\phi_n \in K_X(P_n)$ an optimal transport potential between $\mu_n$ and $\sigma$: $\forall p \in P_n$, $\sigma(V^n_p) = \mu_p$, where $V^n_p := \partial\hat\phi_n(p)$.
  (b) $\sigma_n := G_{\phi_n\#}\mu_n = \sum_{p \in P_n} \frac{\sigma(V^n_p)}{\mathcal{H}^d(V^n_p)} \mathbf{1}_{V^n_p}$.
  (c) If $\max_{p \in P_n} \operatorname{diam}(V^n_p) \to 0$ as $n \to \infty$, then (1) holds.
  (d) Assume the diameters do not tend to 0: there exist $p_n \in P_n$ with $\operatorname{diam}(V^n_{p_n}) \ge \varepsilon$. Up to extraction, $\hat\phi_n \to \phi$ uniformly, so that

  $\operatorname{diam}(\partial\phi(x)) \ge \varepsilon$, where $x = \lim_n p_n \in X$.   (2)

  (e) Moreover, $\nabla\phi_{\#}\rho = \sigma$. By Caffarelli's regularity theorem, $\phi \in C^1$: this contradicts (2). □

4. Numerical results

Computing the discrete Monge-Ampère operator

  $U(G_{\phi\#}\mu_P) = \sum_{p \in P} U(\mu_p / \mathrm{MA}[\phi](p)) \, \mathrm{MA}[\phi](p)$, with $\mathrm{MA}[\phi](p) := \mathcal{H}^d(\partial\hat\phi(p))$.

◮ Goal: fast computation of $\mathrm{MA}[\phi](p)$ and its 1st/2nd derivatives w.r.t. $\phi$.

◮ We rely on the notion of Laguerre (or power) cell from computational geometry.

  Definition: Given a function $\phi : P \to \mathbb{R}$,

  $\mathrm{Lag}^\phi_P(p) := \{y \in \mathbb{R}^d;\ \forall q \in P,\ \phi(q) \ge \phi(p) + \langle q - p \mid y \rangle\}$

  Lemma: For $\phi \in K_X(P)$ and $p \in P$, $\partial\hat\phi(p) = \mathrm{Lag}^\phi_P(p) \cap X$.

  → For $\phi(p) = \|p\|^2/2$, one recovers the Voronoi cell: $\mathrm{Lag}^\phi_P(p) = \{y;\ \forall q \in P,\ \|q - y\|^2 \ge \|p - y\|^2\}$.
  → Computation in time $O(|P| \log |P|)$ in 2D.
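The Voronoi special case can be checked numerically by brute force: expanding $\|q - y\|^2 \ge \|p - y\|^2$ gives exactly the Laguerre inequality for $\phi(p) = \|p\|^2/2$. A sketch (an assumed example, not the talk's $O(|P| \log |P|)$ construction):

```python
# Sketch: point-membership tests for Laguerre and Voronoi cells.
# y ∈ Lag_P^phi(p)  iff  phi(q) >= phi(p) + <q - p | y>  for all q in P.
import numpy as np

def in_laguerre_cell(y, p, P, phi):
    return all(phi(q) >= phi(p) + np.dot(q - p, y) for q in P)

def in_voronoi_cell(y, p, P):
    return all(np.dot(q - y, q - y) >= np.dot(p - y, p - y) for q in P)
```

For $\phi(p) = \|p\|^2/2$ the two tests agree at every query point, which is the content of the remark above.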

Computing the discrete Monge-Ampère operator (continued)

◮ Global construction of the intersections $(\mathrm{Lag}^\phi_P(p) \cap X)_{p \in P}$ in 2D.

  Assumption: $\partial X = \cup_{s \in S}\, s$, with $S$ a finite family of segments.

◮ Combinatorics stored as an (abstract) triangulation $T$ of the finite set $P \cup S$, i.e.

  $(p_1, p_2, p_3) \in T$ iff $\mathrm{Lag}^\phi_P(p_1) \cap \mathrm{Lag}^\phi_P(p_2) \cap \mathrm{Lag}^\phi_P(p_3) \cap X \neq \emptyset$
  $(p_1, p_2, s_1) \in T$ iff $\mathrm{Lag}^\phi_P(p_1) \cap \mathrm{Lag}^\phi_P(p_2) \cap s_1 \cap X \neq \emptyset$
  $(p_1, s_1, s_2) \in T$ iff $\mathrm{Lag}^\phi_P(p_1) \cap s_1 \cap s_2 \cap X \neq \emptyset$

  Computation in time $O(|P| \log |P| + |S|)$ in 2D.

◮ Computation of $\mathrm{MA}(p) = \mathcal{H}^2(\mathrm{Lag}^\phi_P(p) \cap X)$ and its derivatives. The sparsity structure of the Jacobian/Hessian is encoded in $T$:

  $\frac{\partial \mathrm{MA}(p)}{\partial \phi(q)} \neq 0 \implies (p, q)$ is an edge of $T$
  $\frac{\partial^2 \mathrm{MA}(p)}{\partial \phi(r)\, \partial \phi(q)} \neq 0 \implies (p, q, r)$ is a triangle of $T$

Example 1: Nonlinear diffusion on point clouds

  $\frac{\partial\rho}{\partial t} = \Delta\rho^m$ on $X$, $\quad \nabla\rho \perp n_{\partial X}$ on $\partial X$   (*)

  fast diffusion equation: $m \in [1 - 1/d, 1)$; porous medium equation: $m > 1$.

◮ (*) is the gradient flow in $(P(X), W_2)$ of $U(\rho) = \int U(\rho(x)) \,\mathrm{d}x$ with $U(r) = \frac{r^m}{m-1}$ [Otto].

Algorithm:
  Input: $\mu_0 := \sum_{x \in P_0} \delta_x / |P_0|$, $\tau > 0$.
  For $k \in \{0, \ldots, T\}$:
    $\phi \leftarrow \arg\min_{\phi \in K_X(P_k)} \tfrac{1}{2\tau} W_2^2(\mu_k, H_{\phi\#}\mu_k) + U(G_{\phi\#}\mu_k)$   (solved by Newton's method)
    $\mu_{k+1} \leftarrow G_{\phi\#}\mu_k$; $P_{k+1} \leftarrow \mathrm{spt}(\mu_{k+1})$

  (Figure omitted: the points of $P_k$ are transported to $P_{k+1}$, each $x$ moving to $G_{\phi_k}(x)$.)
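For intuition about the target dynamics only, here is a tiny Eulerian finite-difference sketch of the porous medium equation $\partial_t \rho = \Delta \rho^m$ in 1D with no-flux boundaries. This is *not* the talk's Lagrangian JKO scheme; the grid, time step, and $m = 2$ are assumed illustrative choices.

```python
# Sketch: one explicit Euler step for drho/dt = (rho^m)_xx with no-flux
# (Neumann) boundary conditions, implemented via edge padding of rho^m.
import numpy as np

def porous_medium_step(rho, dx, dt, m=2):
    u = rho ** m
    u_pad = np.pad(u, 1, mode="edge")   # ghost cells copy boundary values: no flux
    lap = (u_pad[2:] - 2 * u_pad[1:-1] + u_pad[:-2]) / dx ** 2
    return rho + dt * lap
```

The no-flux discretization conserves total mass exactly up to rounding, mirroring the fact that the JKO scheme evolves probability measures.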

Example 2: Crowd motion and congestion

◮ Gradient flow model of crowd motion with congestion, with a JKO scheme [Maury-Roudneff-Chupin-Santambrogio '10]:

  $\mu_{k+1} = \arg\min_{\nu \in P(X)} \tfrac{1}{2\tau} W_2^2(\mu_k, \nu) + E(\nu) + U(\nu)$

  $E(\nu) := \int_X V(x) \,\mathrm{d}\nu(x)$, $\qquad U(\nu) := 0$ if $\mathrm{d}\nu / \mathrm{d}\mathcal{H}^d \le 1$ and $+\infty$ otherwise.

Prop: The congestion term $U$ is convex under generalized displacements.

We solve this problem with a relaxed hard congestion term:

  $U_\alpha(\rho) := -\int \rho(x)^\alpha \log(1 - \rho(x)^{1/d}) \,\mathrm{d}x$

  (figure omitted: graphs of $r \mapsto -r^\alpha \log(1 - r^{1/d})$ on $r \in [0, 1]$ for $\alpha = 1, 5, 10$)

Prop: (i) $U_\alpha$ is convex under generalized displacements; (ii) $U_\alpha \to U$ as $\alpha \to \infty$ and $\beta U_1 \to U$ as $\beta \to 0$, both in the sense of Γ-convergence.

◮ This yields a convex optimization problem if $V$ is $\lambda$-convex ($V + \lambda \|\cdot\|^2$ convex) and $\tau \le \lambda/2$.

Numerical setup: potential $V(x) = \|x - (2, 0)\|^2 + 5 \exp(-5\|x\|^2/2)$; initial density on $X = [-2, 2]^2$; $P$ = 200 × 200 regular grid.

Algorithm:
  Input: $\mu_0 \in P(P)$, $\tau > 0$, $\alpha > 0$, $\beta \ge 1$.
  For $k \in \{0, \ldots, T\}$:
    $\phi \leftarrow \arg\min_{\phi \in K_X(P_k)} \tfrac{1}{2\tau} W_2^2(\mu_k, H_{\phi\#}\mu_k) + E(H_{\phi\#}\mu_k) + \alpha U_\beta(G_{\phi\#}\mu_k)$
    $\nu \leftarrow G_{\phi\#}\mu_k$; $\mu_{k+1} \leftarrow$ projection of $\nu|_{[-2,2)\times[-2,2]}$ on $P$.

5. Extension to Other Convexity-like Constraints

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies.

slide-77
SLIDE 77

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. − → interior point method for shape optimization problem: Minkowski, Meissner, etc.

slide-78
SLIDE 78

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. Definition: Given a convex body K, hK : u ∈ Sd−1 → maxp∈Ku|p.

slide-79
SLIDE 79

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. n1 n2 n3 n4 n5 P = {n1, . . . , nN} ⊆ Sd−1 ⊆ Rd Definition: Given a convex body K, hK : u ∈ Sd−1 → maxp∈Ku|p. Ks(P) := {hK|P ; K bounded convex body }

slide-80
SLIDE 80

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. n1 n2 n3 n4 n5 P = {n1, . . . , nN} ⊆ Sd−1 ⊆ Rd Definition: Given a convex body K, hK : u ∈ Sd−1 → maxp∈Ku|p. Ks(P) := {hK|P ; K bounded convex body } K(h) := N

i=1{x; x|ni ≤ hi}

h2 h1 h5 h3 h4

K(h)

slide-81
SLIDE 81

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. n1 n2 n3 n4 n5 P = {n1, . . . , nN} ⊆ Sd−1 ⊆ Rd Definition: Given a convex body K, hK : u ∈ Sd−1 → maxp∈Ku|p. Ks(P) := {hK|P ; K bounded convex body } K(h) := N

i=1{x; x|ni ≤ hi}

h2 h1 h5 h3 h4 Prop: Φ(h) := − N

i=1 log(Hd−1(ith face of K(h))) is a convex barrier for Ks(P).

K(h)

slide-82
SLIDE 82

24

Support Functions of Convex Bodies

Objective: Logarithmic barrier for the space of support functions of convex bodies. n1 n2 n3 n4 n5 P = {n1, . . . , nN} ⊆ Sd−1 ⊆ Rd Definition: Given a convex body K, hK : u ∈ Sd−1 → maxp∈Ku|p. Ks(P) := {hK|P ; K bounded convex body } K(h) := N

i=1{x; x|ni ≤ hi}

h2 h1 h5 h3 h4 Prop: Φ(h) := − N

i=1 log(Hd−1(ith face of K(h))) is a convex barrier for Ks(P).

Extension to (a) radial parameterization of convex bodies, (b) reflector surfaces, etc.

K(h)
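In d = 2 the proposition is easy to check numerically: with fixed combinatorics the face lengths of K(h) are linear in h, so Φ(h) = −∑ᵢ log ℓᵢ(h) is convex wherever all faces are nondegenerate. A sketch for a regular polygon (the normal set and perturbation sizes are illustrative):

```python
import numpy as np

def barrier(h, normals):
    """Phi(h) = -sum_i log(length of the i-th face of K(h)), in dimension d = 2.

    Assumes unit normals sorted by angle; face i of K(h) joins the vertices
    v_{i-1} and v_i, where v_i solves <x|n_i> = h_i, <x|n_{i+1}> = h_{i+1}."""
    N = len(h)
    verts = np.array([
        np.linalg.solve(np.array([normals[i], normals[(i + 1) % N]]),
                        np.array([h[i], h[(i + 1) % N]]))
        for i in range(N)])
    tangents = np.stack([-normals[:, 1], normals[:, 0]], axis=1)
    # signed face lengths: linear functions of h for fixed combinatorics
    lengths = np.einsum('ij,ij->i', verts - np.roll(verts, 1, axis=0), tangents)
    if np.any(lengths <= 0):
        return np.inf  # a face degenerated: h has left Ks(P)
    return -np.sum(np.log(lengths))

N = 12
angles = 2 * np.pi * np.arange(N) / N
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
h0 = np.ones(N)  # regular 12-gon with apothem 1: every face has length 2 tan(pi/N)

assert np.isclose(barrier(h0, normals), -N * np.log(2 * np.tan(np.pi / N)))

# midpoint convexity of Phi near h0 (Phi = -log of positive linear functions)
rng = np.random.default_rng(0)
for _ in range(100):
    h1 = h0 + 0.01 * rng.standard_normal(N)
    h2 = h0 + 0.01 * rng.standard_normal(N)
    assert barrier(0.5 * (h1 + h2), normals) <= \
        0.5 * (barrier(h1, normals) + barrier(h2, normals)) + 1e-9
```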

slide-83
SLIDE 83

25

c-Concave Functions

Cost function: c : X × Y → R.
Definition: φ : X → R is c-concave if ∃ψ : Y → R s.t. φ = min_y c(·, y) + ψ(y).
Kc(P) := {φ|_P ; φ : X → R is c-concave}.

Application: generalized principal-agent problems.
→ Characterization of the cost functions such that the space Kc is convex [Figalli, Kim, McCann '10].
→ Under the same hypotheses, there exists a convex logarithmic barrier for Kc(P):

Voronoi diagram: Vor_c^ψ(p) := {x ∈ X ; ∀q ∈ P, c(x, p) + ψ(p) ≤ c(x, q) + ψ(q)} ⊆ X.
With exp_p := [∇c(·, p)]^{−1}, the set exp_p^{−1}(Vor_c^ψ(p)) ⊆ R^d is convex.

Prop: Φ(ψ) := −∑_{p∈P} log(H^d(exp_p^{−1}(Vor_c^ψ(p)))) is a convex barrier for Kc(P).
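On finite grids the c-concavity constraint is easy to test: φ has the form min_y c(·, y) + ψ(y) exactly when the double c-transform returns φ. A minimal numerical sketch (the grids and cost function are illustrative):

```python
import numpy as np

# c-transform pair on finite grids X, Y: phi = T(psi) is c-concave, and such
# phi are the fixed points of the double transform T(S(.)).
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 40)
Y = np.linspace(0, 1, 50)
C = (X[:, None] - Y[None, :]) ** 2   # cost c(x, y) = |x - y|^2

def T(psi):   # psi on Y -> c-concave phi on X: phi(x) = min_y c(x,y) + psi(y)
    return np.min(C + psi[None, :], axis=1)

def S(phi):   # phi on X -> conjugate potential on Y: max_x phi(x) - c(x,y)
    return np.max(phi[:, None] - C, axis=0)

psi = rng.standard_normal(len(Y))
phi = T(psi)
assert np.allclose(T(S(phi)), phi)   # the double transform certifies c-concavity
```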