Semi-Lagrangian Methods for Monge-Ampère Equations
Max Jensen, University of Sussex. Joint work with Xiaobing Feng (University of Tennessee). http://arxiv.org/abs/1602.04758
Slides available at https://goo.gl/v42Zv8
Part 1: Monge-Ampère
◮ Consider soil:
  ◮ f⁺(x) = height of the pile at x,
  ◮ f⁻(x) = depth of the hole at x.
[Figure: excavation at x, pile of soil at y]
◮ We want to move the soil into the holes: transport plan x → s(x).
◮ Then f⁺ = f⁻(s) det(Ds)   (D gradient).
◮ The cost of transport is related quadratically to the distance:
  ◮ the optimal transport plan s* has a potential u*: s* = ∂u*.
◮ For smooth u we have the Monge-Ampère equation f⁺ = f⁻(Du) det(D²u)   (D² Hessian); a numerical sanity check follows below.
◮ Other motivation: prescribed Gaussian curvature.
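As a quick plausibility check of the relation f⁺ = f⁻(Du) det(D²u) (my own illustration in two dimensions, not part of the talk), the sketch below pushes a standard Gaussian f⁻ forward under the gradient map of the assumed potential u(x) = |x|²/2 + |x|⁴/4 and verifies by quadrature that the total mass is preserved.

```python
import numpy as np

# Illustrative check of f+ = f-(Du) det(D^2 u): for the (assumed) potential
# u(x) = |x|^2/2 + |x|^4/4 one has Du(x) = x (1 + |x|^2) and
# det(D^2 u)(x) = (1 + |x|^2)(1 + 3|x|^2); f- is a standard Gaussian.

def f_minus(y):
    return np.exp(-0.5 * (y ** 2).sum(axis=-1)) / (2.0 * np.pi)

def f_plus(x):
    r2 = (x ** 2).sum(axis=-1)
    Du = x * (1.0 + r2)[..., None]            # gradient map s = Du
    det_D2u = (1.0 + r2) * (1.0 + 3.0 * r2)   # Hessian determinant of u
    return f_minus(Du) * det_D2u              # pushed-forward density

# Tensor-product quadrature on a box that captures essentially all the mass.
g = np.linspace(-3.0, 3.0, 401)
h = g[1] - g[0]
X = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)

print(f_plus(X).sum() * h ** 2)   # ~ 1.0, the total mass of f-
```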
◮ The simple Monge-Ampère equation is one of the model problems for the development of numerical methods.
◮ It combines the following properties:
  ◮ fully nonlinear,
  ◮ degenerate elliptic,
  ◮ anisotropic.
Let Ω ⊂ Rⁿ be strictly convex. Let u|∂Ω = g and, with f(x) ≥ 0,
  fⁿ/nⁿ − det D²u = 0.
The operator F in a differential equation F(x, u(x), Du(x), D²u(x)) = 0 is called (degenerate) elliptic if, for all x ∈ Ω, r ∈ R, p ∈ Rⁿ,
  F(x, r, p, X) ≤ F(x, r, p, Y) whenever X − Y is positive semidefinite,
where X, Y are symmetric matrices.
◮ Set F(x, r, p, X) = f(x)ⁿ/nⁿ − det X. Then F satisfies the ellipticity condition only for positive semidefinite X, Y.
◮ So a theory based on ellipticity is restricted to convex functions (see the illustration below).
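A small numerical illustration of the last two points (my addition; f = 1 and n = 2 are arbitrary choices): the monotonicity required by degenerate ellipticity holds for F(X) = fⁿ/nⁿ − det X on pairs of positive semidefinite matrices, but fails for a concrete pair with negative eigenvalues.

```python
import numpy as np

# F(X) = f^n/n^n - det X, here with f = 1, n = 2 (illustrative choice).
def F(X, f=1.0, n=2):
    return (f / n) ** n - np.linalg.det(X)

# PSD pair with X >= Y >= 0: ellipticity requires F(X) <= F(Y).
X, Y = np.diag([2.0, 1.0]), np.diag([1.0, 1.0])
print(F(X) <= F(Y))        # True: det is monotone on the PSD cone

# Pair with X - Y >= 0 but negative eigenvalues present:
X, Y = np.diag([2.0, -1.0]), np.diag([1.0, -1.0])
print(F(X) <= F(Y))        # False: F(X) = 2.25 > F(Y) = 1.25
```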
◮ Let u be convex and f ∈ C(Ω), f ≥ 0.
◮ u is a viscosity subsolution (supersolution) of fⁿ/nⁿ − det D²u = 0 if for all convex ψ ∈ C²(Ω) and x₀ ∈ Ω such that u − ψ has a local maximum (minimum) at x₀ one has, at x₀,
  fⁿ/nⁿ − det D²ψ ≤ 0   (respectively fⁿ/nⁿ − det D²ψ ≥ 0).
◮ If u is a viscosity sub- and supersolution, it is a viscosity solution.
◮ If Ω ⊂ Rⁿ is open, bounded and strictly convex and f ≥ 0 is continuous, then there exists a unique viscosity solution of the simple Monge-Ampère equation.
◮ Restrict the approximation space to convex functions?
◮ Convex finite element functions are not dense in the set of convex functions.
◮ To obtain convergence one needs to admit non-convex functions into the approximation space.
◮ What about
  ◮ ellipticity,
  ◮ uniqueness?
◮ Other methods, such as finite difference schemes, have the same density issue.
[Figure: interpolant of (x₁ + x₂)² (Aguilera, Morin; 2009)]
◮ Generally, (elliptic) Bellman equations are of the type
  Hu = 0,  where  Hw := sup_{α∈A} ( −A^α : D²w + b^α · ∇w + c^α w − r^α )   (non-divergence form).
◮ Hamilton-Jacobi-Bellman equations describe the optimal expected cost in a controlled stochastic process.
◮ Well-established field with its own range of numerical tools.
◮ We define the Hamiltonian
  H : S → R,  A ↦ sup_{B ∈ S1} ( −A : B + ⁿ√(det B) f ),
where
  ◮ S is the set of self-adjoint linear mappings Rⁿ → Rⁿ,
  ◮ S1 := {A ∈ S : trace A = 1 and A ≥ 0}.
◮ We define the Monge-Ampère mapping
  M : S → R,  A ↦ fⁿ/nⁿ − det(A).
◮ Then
  H A = 0 ⇔ M A = 0 and A ≥ 0.
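A brute-force numerical check of this equivalence in two dimensions (my own sketch; the forcing f = 2 and the sample matrices are arbitrary): the supremum over S1 is approximated by sampling the angle of e₁ and the eigenvalue λ.

```python
import numpy as np

n, f = 2, 2.0   # dimension and forcing (illustrative values)

def M(A):
    return (f / n) ** n - np.linalg.det(A)

def H(A, n_alpha=200, n_lam=201):
    # Approximate sup_{B in S1} ( -A:B + (det B)^(1/n) f ) by brute force:
    # B = lam e1 e1^T + (1-lam) e2 e2^T with e1 at angle alpha, det B = lam(1-lam).
    best = -np.inf
    for alpha in np.linspace(0.0, np.pi, n_alpha, endpoint=False):
        e1 = np.array([np.cos(alpha), np.sin(alpha)])
        e2 = np.array([-np.sin(alpha), np.cos(alpha)])
        for lam in np.linspace(0.0, 1.0, n_lam):
            B = lam * np.outer(e1, e1) + (1 - lam) * np.outer(e2, e2)
            best = max(best, -np.sum(A * B) + (lam * (1 - lam)) ** (1 / n) * f)
    return best

A_convex = np.diag([2.0, 0.5])      # det = 1 = (f/n)^n and positive definite
A_concave = np.diag([-2.0, -0.5])   # det = 1 as well, but not PSD
for A in (A_convex, A_concave):
    print(M(A), H(A))   # M = 0 for both, H ~ 0 only for the PSD matrix
```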
◮ Contour lines for diagonal A ∈ R^{2×2} and f = 1:
[Figure: contour lines of M(diag(a, b)) and of H(diag(a, b)) over the (a, b)-plane]
◮ Convexity does not need to be enforced by a constraint on the approximation space, because the HJB operator only vanishes in the first quadrant.
◮ v is a subsolution of HJB if for all smooth ψ:
  H(D²ψ(x₀)) ≤ 0 at every x₀ which maximises v − ψ with v(x₀) = ψ(x₀).
◮ Supersolutions are defined similarly. Subsolution + supersolution =: solution.
◮ The definition is similar to that for Monge-Ampère, but for HJB there is no convexity condition as ellipticity holds on the whole space.
◮ We complement the differential equation with Dirichlet conditions in the pointwise sense: u(x) = g(x), x ∈ ∂Ω.
◮ From the general theory we have existence of a viscosity solution of the Monge-Ampère problem.
◮ Let Ω be strictly convex and f ≥ 0. Then
  {u ∈ C(Ω) : u convex and solves Mu = 0 in the viscosity sense} = {u ∈ C(Ω) : u solves Hu = 0 in the viscosity sense}.
Observe, regarding the proof:
  ◮ Monge-Ampère has a smaller set of test functions (convexity).
  ◮ Monge-Ampère only allows a non-negative forcing term; otherwise the difference u − ψ could be moved into the forcing term and the classical equivalence could be used.
◮ A comparison principle gives uniqueness.
◮ Let u be a subsolution and v a supersolution of the Bellman problem.
◮ Then u ≤ v on Ω if u ≤ v on ∂Ω.
◮ Notice that the comparison is based on pointwise and not on viscosity boundary conditions.
◮ Simple Monge-Ampère equation: fⁿ/nⁿ − det D²u = 0.
◮ Conditional uniqueness: comparison principle/well-posedness only for convex functions.
◮ Numerical difficulties: e.g. the set of convex FE functions is not dense in the set of convex functions.
◮ We propose to use for the numerics the HJB formulation
  sup_{B ∈ S1} ( −B : D²u + ⁿ√(det B) f ) = 0.
◮ The HJB formulation has the same viscosity solution as the Monge-Ampère problem, but an unconditional comparison principle and well-posedness.
If a numerical method is
  ◮ monotone,
  ◮ pointwise consistent on Ω in the viscosity sense,
  ◮ stable in L∞,
and it is ensured that numerical solutions exist, then the numerical solutions converge in L∞_loc to the viscosity solution of the (elliptic) PDE.
◮ Consider a second-order elliptic PDE with an 'irrational angle of anisotropy' (even a constant-coefficient, linear PDE).
◮ Consider essentially a finite difference grid.
◮ If a numerical scheme is
  ◮ monotone,
  ◮ pointwise consistent,
then its stencil grows unboundedly as h → 0.
◮ Let
  ◮ λᵢ be the eigenvalues,
  ◮ eᵢ be the normalised eigenvectors
of B, so that B = Σᵢ λᵢ eᵢ eᵢᵀ.
◮ Idea of the discretization (see the sketch below): substitute
  B : D²φ(x) = Σᵢ λᵢ (eᵢ eᵢᵀ) : D²φ(x) = Σᵢ λᵢ ∂²_{eᵢeᵢ} φ(x)
             = Σᵢ λᵢ [φ(x + keᵢ) − 2φ(x) + φ(x − keᵢ)] / k² + O(k²).
◮ Wasow-Motzkin: one has k/h → ∞ as h → 0, where h is the diameter of the mesh elements.
◮ No mixed derivatives is good for monotonicity.
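The following sketch (my own; the test function φ, the point x and the matrix B are arbitrary) illustrates the substitution above: B : D²φ(x) is approximated by pure second differences along the eigenvectors of B, with no mixed differences.

```python
import numpy as np

# Illustrative data: a smooth phi with known Hessian, a point x, and a
# trace-one PSD matrix B with eigenpairs (lam_i, e_i).
phi = lambda x: np.sin(x[0]) * np.exp(0.5 * x[1])
def hess_phi(x):
    s, c, e = np.sin(x[0]), np.cos(x[0]), np.exp(0.5 * x[1])
    return np.array([[-s * e, 0.5 * c * e],
                     [0.5 * c * e, 0.25 * s * e]])

x = np.array([0.7, -0.3])
lam = np.array([0.7, 0.3])
theta = 0.9
E = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # columns e_1, e_2
B = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(2))

exact = np.sum(B * hess_phi(x))                      # B : D^2 phi(x)

k = 0.05                                             # directional step of the stencil
approx = sum(lam[i] * (phi(x + k * E[:, i]) - 2 * phi(x) + phi(x - k * E[:, i])) / k**2
             for i in range(2))

print(exact, approx)   # agree up to O(k^2)
```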
◮ Recall
  H(D²u) = sup_{B ∈ S1} ( −B : D²u + ⁿ√(det B) f ).
◮ In R² we can parametrise S1 by the angle α of the 'first' eigenvector and the associated eigenvalue λ.
◮ Then B = λ e₁e₁ᵀ + (1 − λ) e₂e₂ᵀ = λ₁ e₁e₁ᵀ + λ₂ e₂e₂ᵀ.
◮ Our semi-Lagrangian scheme is
  sup_α sup_{λᵢ} ( −Σ_{i=1,2} λᵢ [u(x + keᵢ) − 2u(x) + u(x − keᵢ)] / k² + √(λ₁λ₂) f(x) ) = 0.
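An illustrative evaluation of the discrete operator at a single point (my sketch, not the implementation of the paper): u(x) = |x|²/2 and f = 2 are chosen so that fⁿ/nⁿ − det D²u = 0 holds exactly, and the double supremum is approximated by a brute-force search over the angle α and the eigenvalue λ.

```python
import numpy as np

# Hand-picked data: u(x) = |x|^2/2 solves det(D^2 u) = (f/2)^2 for f = 2.
u = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
f = lambda x: 2.0

def sl_hamiltonian(u, f, x, k, n_alpha=90, n_lam=101):
    """Discrete SL operator: sup over alpha, lam of
       -sum_i lam_i [u(x+k e_i) - 2u(x) + u(x-k e_i)]/k^2 + sqrt(lam1*lam2) f(x)."""
    best = -np.inf
    for alpha in np.linspace(0.0, np.pi, n_alpha, endpoint=False):
        e1 = np.array([np.cos(alpha), np.sin(alpha)])
        e2 = np.array([-np.sin(alpha), np.cos(alpha)])
        for lam in np.linspace(0.0, 1.0, n_lam):
            d2 = (lam * (u(x + k * e1) - 2 * u(x) + u(x - k * e1)) +
                  (1 - lam) * (u(x + k * e2) - 2 * u(x) + u(x - k * e2))) / k ** 2
            best = max(best, -d2 + np.sqrt(lam * (1 - lam)) * f(x))
    return best

x = np.array([0.3, -0.2])
print(sl_hamiltonian(u, f, x, k=0.1))   # ~ 0 at the exact solution
```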
◮ We use piecewise linear finite elements as the approximation space.
◮ Near the boundary the size of the stencil is reduced.
◮ On the boundary the boundary data g are interpolated.
[Figure: unstructured mesh, with stencils shortened near ∂Ω]
◮ Oliker-Prussner (88): FD geometric method for MA.
◮ Barles-Souganidis (91): Convergence framework for general PDEs.
◮ Kuo-Trudinger (90-93): Monotone FD methods for general PDEs.
◮ Camilli-Falcone (95): SL for diffusion processes.
◮ Baginski-Whitaker (96): Homotopy approach for MA.
◮ Crandall-Lions (96): SL for mean curvature flow.
◮ Krylov (02-14): Monotone FD methods for HJB.
◮ Glowinski-Dean-Caboussat (03-13): Least-squares methods for MA.
◮ Barles-Jakobsen (07-12): Monotone FD methods for HJB.
◮ Karlsen etc. (06-12): Monotone FD methods for HJB.
◮ Feng-Neilan-Lewis (06-): Vanishing moment method for MA and HJB.
◮ Caffarelli-Souganidis (08): Monotone FD methods for general PDEs.
◮ Oberman-Froese-Benamou (08-): Wide-stencil monotone schemes for MA.
◮ Brenner-Gudi-Neilan-Sung (09-12): C⁰ DG methods for MA (classical solutions).
◮ Awanou (12-): Spline FE methods for MA.
◮ J-Smears (12): Linear FE for isotropic HJB.
◮ Jakobsen etc. (12): Monotone semi-Lagrangian methods for HJB.
◮ Smears-Süli (13): hp-DGFE for (Cordes-) HJB (H² solutions).
◮ Lakkis-Pryer ('13): FE with discrete Hessian for MA.
◮ Feng-Kao-Lewis (13): G-monotone FD framework for general PDEs.
◮ Feng-Lewis (12-): Non-standard DG framework for general PDEs.
◮ Feng-Lewis-Neilan-Schnake (13-): DG-FE numerical calculus.
◮ Nochetto-Ntogkas-Zhang (14-): Max principle and convergence rates (linear FE) for MA.
◮ Mirebeau-Benamou (14-15): Wide-stencil monotone schemes for MA.
◮ Salgado-Zhang (15): Two-scale FE method for Isaacs equation.
◮ Reisinger-Rotaetxe Arto (16): CFL conditions near the boundary for SL schemes.
The semi-smooth Newton method converges globally and superlinearly to the unique solution of the discrete equations (a generic sketch of such an iteration follows below).
◮ Proof: (Bokanowski, Maroso, Zidani, 2009) + Krein-Milman theorem.
The numerical solutions are
  ◮ uniformly bounded,
  ◮ consistent on Ω (notice that ∂Ω is not included),
  ◮ monotone.
◮ Consistency can be ensured by k/h → ∞, boundedness through a comparison function; monotonicity holds by construction.
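A generic sketch of the iteration (my own toy example, not the Monge-Ampère discretisation of the paper): for discrete Bellman systems of the form max_a(L_a u − r_a) = 0 with monotone matrices L_a, Howard's policy iteration coincides with the semi-smooth Newton method in the sense of Bokanowski-Maroso-Zidani.

```python
import numpy as np

# Toy policy iteration (Howard's algorithm) for max_a ( L_a u - r_a ) = 0.
# The 1D two-control model problem below is made up for illustration only.

N = 99                                            # interior nodes of (0, 1)
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
controls = [1.0, 2.0]                             # diffusion coefficients a
rhs = {1.0: np.sin(np.pi * x), 2.0: np.ones(N)}   # forcing terms r_a

def L(a):
    """Monotone FD matrix of -a u'' with homogeneous Dirichlet conditions."""
    return (a / h**2) * (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

Ls = {a: L(a) for a in controls}
u = np.zeros(N)
for it in range(50):
    # Policy improvement: nodewise maximiser of the Bellman residual.
    res = np.array([Ls[a] @ u - rhs[a] for a in controls])
    policy = np.argmax(res, axis=0)
    # Policy evaluation: solve the linear system of the frozen policy.
    A = np.array([Ls[controls[p]][i] for i, p in enumerate(policy)])
    b = np.array([rhs[controls[p]][i] for i, p in enumerate(policy)])
    u_new = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

bellman_residual = np.max([Ls[a] @ u - rhs[a] for a in controls], axis=0)
print(it, np.max(np.abs(bellman_residual)))   # few iterations, residual ~ 0
```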
◮ As in the Barles-Souganidis argument, define the semi-continuous envelopes
  ū(x) := lim sup_{y→x, h→0} u_h(y),   u̲(x) := lim inf_{y→x, h→0} u_h(y).
Viscosity BC
◮ Used in the original Barles-Souganidis proof.
◮ Defined as
  max(H(X), u̲(x) − g(x)) ≥ 0,  min(H(Y), ū(x) − g(x)) ≤ 0  for all x ∈ ∂Ω, (p, X) ∈ J^{2,−}_Ω u̲(x) and (q, Y) ∈ J^{2,+}_Ω ū(x).
◮ Existence of a comparison principle with viscosity BCs is unclear:
  ◮ for anisotropic degenerate equations with Dirichlet BCs it is often not available for semi-continuous functions.
◮ Do ū and u̲ satisfy the viscosity BCs?
  ◮ Wasow-Motzkin: the method looks inconsistent near the boundary.
◮ Hence it is unclear how to relate ū and u̲ to the continuous problem.
Pointwise BC
◮ u(x) = g(x) for all x ∈ ∂Ω.
◮ We have a comparison principle.
◮ Do ū and u̲ satisfy the pointwise BCs?
◮ Wasow-Motzkin says that there exists one test function for which consistency may fail, not that it fails for all.
◮ Test functions φ(x) = (M/2) |x − p|² remain consistent.
◮ Monotonicity: fix x ∈ ∂Ω. Choosing M and p suitably, and comparing at the minimiser, one obtains
  lim inf_{h→0, y→x} u_h(y) ≥ g(x)
always.
[Figure: construction near the boundary with points y, yℓ, y∞, the ball ∂Bℓ, ker P and ∂Ω]
◮ We have ū(x) = u̲(x) = g(x) for x ∈ ∂Ω.
◮ Only in the above theorem do we use convexity of the domain.
◮ ... so finally:
◮ Let Ω be strictly convex and f ≥ 0 be continuous.
◮ The numerical solutions converge uniformly to the unique viscosity solution of the Bellman/Monge-Ampère problem.
◮ Experiment 1: smooth solution v(x) = |x|⁴.
◮ Experiment 2: non-smooth solution v(x) = |x₁|.
◮ Domain: union of the unit circle and the unit square.
◮ Plot of the numerical solutions on the coarsest mesh:
[Figure: numerical solutions on the coarsest mesh]
[Convergence plots, one per experiment: relative error ‖u − u_h‖∞ / ‖u‖∞ (between 10⁻³ and 10⁻¹) against the number of refinements (1-7), for m = 2, 4, 8, 16, 32, 64.]
           m for quartic problem        m for non-smooth problem
DoF         2   4   8  16  32  64        2   4   8  16  32  64
91          5   5   5   4   5   5        4   5   6   5   5   5
329         5   5   6  10   5   5        4   5   6   7   9   6
1,249       5   5   7   9  12   5        5   5   6   6   7  11
4,865       5   6   7   9  12  13        5   5   7   7   8   9
19,201      5   6   7  11  12  16        7   5   6   7   7   7
76,289      6   6   6  10  12  15        8   6   6   7   8   8
304,129     5   6   6   9  12  15        7   6   6   7   8   8
1,214,465   5   5   7   8  10  14        8   5   7   7   8   9
The proposed method for Monge-Ampère:
◮ addresses uniqueness and convexity at the PDE level,
◮ is proved to converge to the viscosity solution,
◮ respects boundary conditions in the pointwise sense,
◮ comes with a Newton method which always converges,
◮ works without strict acuteness of the unstructured mesh,
◮ includes the 'degenerate case' f(x) = 0.
http://arxiv.org/abs/1602.04758
◮ Gauss curvature: K = κ₁ · κ₂, where the κᵢ are the principal curvatures.
◮ Represent the surface as a graph x ↦ u(x).
◮ Prescribe the Gauss curvature at x by f(x).
◮ The resulting partial differential equation is
  det D²u − f (1 + |Du|²)² = 0
(a symbolic check of the underlying curvature formula follows below).
◮ This problem arises in differential geometry, general relativity, ...
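A symbolic cross-check of the curvature formula behind this PDE (my addition, not part of the talk): for a graph x ↦ u(x) in R³, the Gauss curvature computed from the first and second fundamental forms equals det D²u / (1 + |Du|²)².

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
u = sp.Function("u")(x, y)             # generic graph z = u(x, y)

ux, uy = sp.diff(u, x), sp.diff(u, y)
uxx, uxy, uyy = sp.diff(u, x, 2), sp.diff(u, x, y), sp.diff(u, y, 2)

# First and second fundamental forms of the graph (x, y, u(x, y)).
E, F, G = 1 + ux**2, ux * uy, 1 + uy**2
W = sp.sqrt(1 + ux**2 + uy**2)
L, M, N = uxx / W, uxy / W, uyy / W

K = (L * N - M**2) / (E * G - F**2)    # Gauss curvature from fundamental forms
K_graph = (uxx * uyy - uxy**2) / (1 + ux**2 + uy**2)**2

print(sp.simplify(K - K_graph))        # 0
```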
◮ We have paths, e.g. all roads from Brighton B to Durham A.
◮ We have controls, e.g. steering wheel + accelerator + brake.
◮ We have a cost functional on the set of paths, e.g. the driving time.
◮ We denote the minimal cost to get from B to A by v(B).
◮ We can assign to every C on the map a minimal cost.
◮ This defines the value function v.
◮ We consider a system (such as a financial market, chemical process, flight trajectory, ...) which evolves in time. At time t it takes the state x(t): x : t ↦ x(t).
◮ There is a parameter which influences the evolution of the system. It is called the control. It is also a function of time: α : t ↦ α(t). Examples:
  ◮ financial market: volatility,
  ◮ chemical process: temperature,
  ◮ flight trajectory: acceleration.
◮ While mathematically irrelevant:
  ◮ in engineering the control is often (not always) a parameter we can decide;
  ◮ in finance we are in many cases not able to choose the control; it is an external parameter we are exposed to.
◮ The states and controls are linked through a differential equation giving the dynamics: α and 'initial' conditions determine the state x,
  dx(s) = b^α(x(s)) ds + σ^α(x(s)) dw(s),   x(t) = x.
◮ Cost (see the Monte Carlo sketch below):
  J(t, x, α) = expected cost = E_{tx} [ ∫_t^T r^{α(s)}(s, x(s)) ds + Ψ(x(T)) ].
◮ Starting at time t at position x we expect to pay J(t, x, α) when the control α is active.
◮ If σ^α = 0 for all α then the problem is deterministic and the stochastic terms vanish.
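An Euler-Maruyama Monte Carlo sketch of the expected cost (entirely illustrative: the one-dimensional model, the constant controls and the coefficients b, σ, r, Ψ below are made up, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D model with a constant control alpha:
b     = lambda x, alpha: -alpha * x           # drift b^alpha
sigma = lambda x, alpha: 0.5                  # diffusion sigma^alpha
r     = lambda t, x, alpha: x**2 + alpha**2   # running cost r^alpha
Psi   = lambda x: x**2                        # terminal cost

def J(t, x0, alpha, T=1.0, n_steps=200, n_paths=20000):
    """Monte Carlo / Euler-Maruyama estimate of
       J(t, x, alpha) = E[ int_t^T r ds + Psi(x(T)) ]."""
    dt = (T - t) / n_steps
    x = np.full(n_paths, x0, dtype=float)
    cost = np.zeros(n_paths)
    for k in range(n_steps):
        s = t + k * dt
        cost += r(s, x, alpha) * dt
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += b(x, alpha) * dt + sigma(x, alpha) * dw
    return (cost + Psi(x)).mean()

for alpha in (0.0, 0.5, 1.0, 2.0):
    print(alpha, J(0.0, 1.0, alpha))   # compare the costs of a few constant controls
```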
◮ Optimal control: find α̂ such that J(t, x, α̂) ≤ J(t, x, α) for all α.
◮ Besides α and, in the deterministic case, x, one often wants to find the value function.
◮ Value function: the optimal cost starting at (t, x),
  v(t, x) = min_α J(t, x, α).
◮ In other words, when starting at time t at x, v(t, x) gives us the smallest possible expected cost:
  min_α J(t, x, α) = J(t, x, α̂)
(always provided an optimal control exists).
◮ In financial applications v is often the price of the financial product.
◮ Dynamic programming principle: for any α,
  v(t, x) ≤ ∫_t^{t+h} r^{α(s)}(s, x(s)) ds + v(t + h, x(t + h)).
◮ In words: if I travel from time t optimally, then I am faster than travelling optimally only from t + h.
◮ For an optimal control α̂,
  v(t, x) = ∫_t^{t+h} r^{α̂(s)}(s, x̂(s)) ds + v(t + h, x̂(t + h)).
◮ Here both sides just give the optimal travel time from time t.
◮ Suppose v, α are smooth.
◮ From 'v(t, x) ≤ ∫_t^{t+h} r^{α(s)}(s, x(s)) ds + v(t + h, x(t + h))':
  r^{α(t)}(t, x) = lim_{h→0} (1/h) ∫_t^{t+h} r^{α(s)}(s, x(s)) ds
                 ≥ − lim_{h→0} [v(t + h, x(t + h)) − v(t, x)] / h
                 = −∂_t v − ∇v · ẋ = −∂_t v − ∇v · b^{α(t)}.
◮ Suppose an optimal control α̂ exists; then r^{α̂(t)} = −∂_t v − ∇v · b^{α̂(t)}.
◮ Hamilton-Jacobi-Bellman equation (HJB):
  −∂_t v + sup_{α∈A} ( −b^α · ∇v − r^α ) = 0.
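A worked example where the HJB equation can be verified in closed form (my own addition, not from the talk): with dynamics ẋ = α and running cost r^α(t, x) = x² + α², the supremum is attained at α = −∂ₓv/2, and v(t, x) = tanh(T − t) x² solves the resulting equation.

```python
import sympy as sp

t, x, T, a = sp.symbols("t x T alpha", real=True)

v = sp.tanh(T - t) * x**2               # candidate value function

# HJB: -v_t + sup_alpha ( -alpha * v_x - (x**2 + alpha**2) ) = 0.
inner = -a * sp.diff(v, x) - (x**2 + a**2)
a_star = sp.solve(sp.diff(inner, a), a)[0]       # maximiser alpha = -v_x/2
hjb = -sp.diff(v, t) + inner.subs(a, a_star)

print(sp.simplify(hjb))                  # 0, so v satisfies the HJB equation
print(sp.simplify(a_star))               # optimal feedback: -x*tanh(T - t)
```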
◮ Still assume that v is smooth.
◮ The Brownian motion leads to a diffusion term with a^α Id = (1/2) σ^α · (σ^α)ᵀ:
  r^{α(t)}(t, x) = lim_{h→0} (1/h) E_{tx} ∫_t^{t+h} r^{α(s)}(s, x(s)) ds
                 ≥ − lim_{h→0} [E_{tx} v(t + h, x(t + h)) − v(t, x)] / h
                 = −∂_t v − ∇v · b^α − a^α Δv.
◮ Again this is an identity if an optimal control α̂ exists.
◮ Hamilton-Jacobi-Bellman equation:
  −∂_t v + sup_{α∈A} ( −a^α Δv − b^α · ∇v − r^α ) = 0.