Semi-Lagrangian Methods for Monge-Ampère Equations (Max Jensen, PowerPoint PPT presentation)


SLIDE 1

Semi-Lagrangian Methods for Monge-Ampère Equations

Max Jensen (University of Sussex), joint work with Xiaobing Feng (University of Tennessee). http://arxiv.org/abs/1602.04758

Slides available at https://goo.gl/v42Zv8 1

SLIDE 2

Part 1 Monge-Ampère as Bellman problem

SLIDE 3

One motivation: optimal transport

◮ Consider soil:
◮ f+(x) = height of pile at x,
◮ f−(x) = depth of hole at x,
with ∫ f+ = ∫ f− < ∞.

[Figure: excavation at x, pile of soil at y]

◮ We want to move the soil into the holes: transport plan x → s(x).
◮ Then f+ = f−(s) det(Ds)   (D gradient).
◮ Cost of transport related quadratically to distance:
◮ the optimal transport plan s∗ has a potential u∗: s∗ = ∂u∗.
◮ For smooth u we have the Monge-Ampère equation f+ = f−(Du) det(D²u)   (D² Hessian).
◮ Other motivation: prescribed Gaussian curvature.


SLIDE 5

Simple Monge-Ampère equation

◮ The simple Monge-Ampère equation is one of the model problems for the development of numerical methods.
◮ It combines the following properties:
◮ fully nonlinear,
◮ degenerate elliptic,
◮ anisotropic.

The simple Monge-Ampère equation

Let Ω ⊂ Rn be strictly convex. Let u|∂Ω = g and, with f(x) ≥ 0,

(f/n)^n − det D²u = 0.

SLIDE 6

Ellipticity and convexity

Definition (elliptic)

The operator F in a differential equation F(x, u(x), Du(x), D²u(x)) = 0 is called (degenerate) elliptic if, for all x ∈ Ω, r ∈ R, p ∈ Rn,

F(x, r, p, X) ≤ F(x, r, p, Y) whenever X − Y is pos. semidefinite,

where X, Y are symmetric matrices.

◮ Set F(x, r, p, X) = (f(x)/n)^n − det X. Then F satisfies the ellipticity condition only for positive semi-definite X, Y.
◮ So a theory based on ellipticity is restricted to convex functions.
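The restriction to the convex branch can be seen numerically. A toy check in n = 2 with f = 0, so that F(X) = −det X; the matrices are arbitrary illustrative choices:

```python
import numpy as np

# F(X) = (f/n)^n - det X with f = 0, so F(X) = -det X in this toy check.
F = lambda X: -np.linalg.det(X)

# On the PSD cone, X - Y >= 0 implies F(X) <= F(Y) ...
X, Y = np.diag([2.0, 1.0]), np.diag([1.0, 1.0])     # X - Y = diag(1, 0) >= 0
print(F(X) <= F(Y))     # True: -2 <= -1

# ... but the ellipticity inequality fails once indefinite matrices are admitted.
X, Y = np.diag([1.0, -1.0]), np.diag([-1.0, -1.0])  # X - Y = diag(2, 0) >= 0
print(F(X) <= F(Y))     # False: 1 <= -1 fails
```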


SLIDE 8

Viscosity solution of the Monge-Ampère equation

Definition

◮ Let u be convex and f ∈ C(Ω), f ≥ 0.
◮ u is a viscosity subsolution (supersolution) of (f/n)^n − det D²u = 0 if for all convex ψ ∈ C²(Ω) and x0 ∈ Ω such that u − ψ has a local maximum (minimum) at x0 one has

(f/n)^n − det D²ψ ≤ 0   (resp. (f/n)^n − det D²ψ ≥ 0)   at x0.

◮ If u is both a viscosity sub- and supersolution, it is a viscosity solution.
◮ If Ω ⊂ Rn is open, bounded and strictly convex and f ≥ 0 is continuous, then there exists a unique viscosity solution of the simple Monge-Ampère equation.

SLIDE 9

Problem

◮ Restrict the approximation space to convex functions?
◮ Convex finite element functions are not dense in the set of convex functions.
◮ To obtain convergence one needs to admit non-convex functions into the approximation space.
◮ What about
◮ ellipticity,
◮ uniqueness?
◮ Other methods, such as finite difference schemes, have the same density issue.

[Figure: interpolant of (x1 + x2)² (Aguilera, Morin; 2009)]

SLIDE 10

Introduction to Hamilton-Jacobi-Bellman

◮ Generally (elliptic) Bellman equations are of the type Hu = 0, where

Hw := sup_{α∈A} (−A^α : D²w + b^α · ∇w + c^α w − r^α),

a supremum of linear, 2nd-order operators in non-divergence form.

◮ Hamilton-Jacobi-Bellman equations describe the optimal expected cost in a controlled stochastic process.
◮ Well-established field with its own range of numerical tools.

SLIDE 11

Classical equivalence

◮ We define the Hamiltonian

H : S → R,  A → sup_{B∈S1} (−B : A + (det B)^{1/n} f),

where
◮ S is the set of self-adjoint linear mappings Rn → Rn,
◮ S1 := {B ∈ S : trace B = 1 and B ≥ 0}.
◮ We define the Monge-Ampère mapping M : S → R, A → (f/n)^n − det(A).

Krylov (1987)

H(A) = 0 ⇔ M(A) = 0 and A ≥ 0.

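Krylov's identity can be probed by brute force: sample B over S1 on a grid and take the maximum. A rough sketch for n = 2 with the illustrative choice f = 2, so that M(A) = (f/n)^n − det A = 1 − det A, using the parametrisation of S1 by an angle and an eigenvalue:

```python
import numpy as np

# Brute-force check of H(A) = sup_{B in S1} ( -B:A + (det B)^{1/n} f )
# for n = 2, f = 2, so M(A) = (f/n)^n - det A = 1 - det A.
f = 2.0

def H(A, K=201):
    best = -np.inf
    for alpha in np.linspace(0.0, np.pi, K):         # eigenvector angle of B
        e1 = np.array([np.cos(alpha), np.sin(alpha)])
        e2 = np.array([-np.sin(alpha), np.cos(alpha)])
        for lam in np.linspace(0.0, 1.0, K):         # eigenvalues lam, 1 - lam
            B = lam * np.outer(e1, e1) + (1.0 - lam) * np.outer(e2, e2)
            best = max(best, -np.sum(B * A) + np.sqrt(lam * (1.0 - lam)) * f)
    return best

A_convex = np.eye(2)     # det A = 1 = (f/n)^n and A >= 0, so H(A) should vanish
A_concave = -np.eye(2)   # det A = 1 as well, but A < 0: M(A) = 0 yet H(A) > 0
print(round(H(A_convex), 8), round(H(A_concave), 8))   # 0.0 2.0
```

This illustrates the second slide of Part 1's summary: H vanishes only on the convex branch, so no convexity constraint is needed.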

SLIDE 12

Uniqueness — on the whole space!

◮ Contour lines for diagonal A ∈ R2×2 and f = 1:

[Figure: contour plots of M(diag(a, b)) and H(diag(a, b)) over the (a, b) plane]

◮ From the numerical point of view uniqueness does not need to be enforced by a constraint on the approximation space, because the HJB operator only vanishes in the first quadrant.

SLIDE 13

Viscosity solutions of HJB

◮ v is a subsolution of HJB if for all smooth ψ

H(D²ψ(x0)) ≤ 0 at every x0 which maximises v − ψ with v(x0) = ψ(x0).

◮ Supersolution similar. Subsolution + supersolution =: solution.
◮ The definition is similar to that for Monge-Ampère, but for HJB there is no convexity condition as ellipticity holds on the whole space.
◮ We complement the differential equation with Dirichlet conditions in the pointwise sense: u(x) = g(x), x ∈ ∂Ω.

SLIDE 14

Equivalence in viscosity sense

◮ From the general theory we have existence of a viscosity solution of the Monge-Ampère problem.

Theorem (Feng, J (2016))

Let Ω be strictly convex and f ≥ 0. Then

{u ∈ C(Ω) : u convex and Mu = 0 in the viscosity sense} = {u ∈ C(Ω) : Hu = 0 in the viscosity sense}.

Observe, regarding the proof:
◮ Monge-Ampère has a smaller set of test functions (convexity).
◮ Monge-Ampère only allows a non-negative forcing term; otherwise the difference u − ψ could be moved into the forcing term and the classical equivalence could be used.

SLIDE 15

Comparison principle

◮ A comparison principle gives uniqueness.

Theorem (Feng, J (2016))

◮ Let u be a subsolution and v be a supersolution of the Bellman problem.
◮ Then u ≤ v on Ω if u ≤ v on ∂Ω.

◮ Notice that comparison is based on pointwise and not on viscosity boundary conditions.

SLIDE 16

Summary of Part 1

◮ Simple Monge-Ampère equation: (f/n)^n − det D²u = 0. Conditional uniqueness: comparison principle/well-posedness only for convex functions.
◮ Numerical difficulties: e.g. the set of convex FE functions is not dense in the set of convex functions.
◮ We propose to use for numerics an HJB formulation:

sup_{B∈S1} (−B : D²u + (det B)^{1/n} f) = 0.

◮ The HJB formulation has the same viscosity solution as the Monge-Ampère problem, but an unconditional comparison principle and well-posedness.

SLIDE 17

Part 2 Semi-Lagrangian discretisation


SLIDE 18

Theorem (Barles-Souganidis (1991))

If a numerical method is
◮ monotone,
◮ pointwise consistent on Ω in the viscosity sense,
◮ stable in L∞,
and it is ensured that numerical solutions exist, then the numerical solutions converge in L∞_loc to the viscosity solution of the (elliptic) PDE.

Theorem (Wasow-Motzkin (1952); Kocan (1995))

◮ Consider a second-order elliptic PDE with 'irrational angle of anisotropy' (even constant-coefficient, linear PDE).
◮ Consider essentially a finite difference grid.
◮ If a numerical scheme is
◮ monotone,
◮ pointwise consistent,
then its stencil grows unboundedly as h → 0.


SLIDE 20

Semi-Lagrangian discretisation of B : D²φ

◮ Let
◮ λi be the eigenvalues,
◮ ei be the normalised eigenvectors
of the diffusion coefficient B = Σ_i λi ei eiᵀ.
◮ Idea of discretisation: substitute

B : D²φ(x) = Σ_i λi (ei eiᵀ) : D²φ(x) = Σ_i λi ∂²_{ei,ei} φ(x)
           = Σ_i λi (φ(x + k ei) − 2φ(x) + φ(x − k ei)) / k² + O(k²).

◮ Wasow-Motzkin: one has k/h → ∞ as h → 0, where h is the diameter of the mesh elements.
◮ No mixed derivatives is good for monotonicity.
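The substitution above is easy to sandbox: take a symmetric B, split it into eigenpairs, and compare the weighted second differences against the exact B : D²φ. Test function and coefficient matrix below are illustrative choices; halving k should roughly quarter the error:

```python
import numpy as np

# Semi-Lagrangian substitution: B : D^2 phi(x) is approximated by
# sum_i lam_i (phi(x + k e_i) - 2 phi(x) + phi(x - k e_i)) / k^2,
# where (lam_i, e_i) are the eigenpairs of B.  Illustrative test data.
phi = lambda p: np.sin(p[0]) * np.exp(p[1])

def sl_approx(B, x, k):
    lam, E = np.linalg.eigh(B)               # spectral decomposition of B
    return sum(l * (phi(x + k * e) - 2 * phi(x) + phi(x - k * e)) / k**2
               for l, e in zip(lam, E.T))    # columns of E are eigenvectors

x = np.array([0.3, 0.2])
s, c, ey = np.sin(x[0]), np.cos(x[0]), np.exp(x[1])
D2 = np.array([[-s * ey, c * ey],            # exact Hessian of phi at x
               [ c * ey, s * ey]])
B = np.array([[0.7, 0.2],                    # symmetric, trace 1, so B is in S1
              [0.2, 0.3]])
exact = np.sum(B * D2)                       # Frobenius product B : D^2 phi

err1 = abs(sl_approx(B, x, 0.10) - exact)
err2 = abs(sl_approx(B, x, 0.05) - exact)
print(err1 / err2)    # roughly 4: the O(k^2) consistency rate
```

Note that only point evaluations of φ along the eigendirections are needed, which is what makes the scheme monotone without mixed-derivative stencils.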


SLIDE 22

HJB scheme

◮ Recall

H(D²u) = sup_{B∈S1} (−B : D²u + (det B)^{1/n} f).

◮ In R² we can parametrise S1 by the angle α of the 'first' eigenvector and the associated eigenvalue λ.
◮ Then B = λ e1 e1ᵀ + (1 − λ) e2 e2ᵀ = λ1 e1 e1ᵀ + λ2 e2 e2ᵀ.
◮ Our semi-Lagrangian scheme is

sup_α sup_λ (−Σ_{i=1}^{2} λi (u(x + k ei) − 2u(x) + u(x − k ei)) / k² + f(x) (λ1 λ2)^{1/2}) = 0.
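As a sanity check, the discrete operator can be evaluated at a single point for the convex quadratic u = |x|²/2, which satisfies (f/2)² − det D²u = 0 with f = 2; since second differences are exact on quadratics, the discrete supremum should vanish to machine precision. A rough grid search stands in for the exact sup (resolution is an arbitrary choice):

```python
import numpy as np

# Pointwise evaluation of the 2d semi-Lagrangian HJB operator
#   sup_{alpha, lam} ( -sum_i lam_i D2_k u(x; e_i) + f(x) sqrt(lam_1 lam_2) )
# on the convex solution u = |x|^2 / 2 of (f/2)^2 - det D^2 u = 0 with f = 2.
u = lambda p: 0.5 * (p[0]**2 + p[1]**2)
f = lambda p: 2.0

def discrete_H(x, k, K=201):
    d2 = lambda e: (u(x + k * e) - 2.0 * u(x) + u(x - k * e)) / k**2
    best = -np.inf
    for alpha in np.linspace(0.0, np.pi, K):
        e1 = np.array([np.cos(alpha), np.sin(alpha)])
        e2 = np.array([-np.sin(alpha), np.cos(alpha)])
        for lam in np.linspace(0.0, 1.0, K):
            best = max(best, -(lam * d2(e1) + (1.0 - lam) * d2(e2))
                             + f(x) * np.sqrt(lam * (1.0 - lam)))
    return best

x = np.array([0.3, -0.1])
print(abs(discrete_H(x, k=0.1)) < 1e-10)   # True: the scheme is exact here
```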

SLIDE 23

HJB scheme

◮ We use piecewise linear finite elements as approximation space.
◮ Near the boundary the size of the stencil is reduced.
◮ On the boundary the boundary data g are interpolated.

[Figure: reduced stencils near the boundary ∂Ω]

SLIDE 24

Selected literature: HJB and Monge-Ampère

◮ Oliker-Prussner (88): FD geometric method for MA.
◮ Barles-Souganidis (91): Convergence framework for general PDEs.
◮ Kuo-Trudinger (90-93): Monotone FD methods for general PDEs.
◮ Camilli-Falcone (95): SL for diffusion processes.
◮ Baginski-Whitaker (96): Homotopy approach for MA.
◮ Crandall-Lions (96): SL for mean curvature flow.
◮ Krylov (02-14): Monotone FD methods for HJB.
◮ Glowinski-Dean-Caboussat (03-13): Least-squares methods for MA.
◮ Barles-Jakobsen (07-12): Monotone FD methods for HJB.
◮ Karlsen etc. (06-12): Monotone FD methods for HJB.
◮ Feng-Neilan-Lewis (06-): Vanishing moment method for MA and HJB.
◮ Caffarelli-Souganidis (08): Monotone FD methods for general PDEs.
◮ Böhmer (08): L2 FE methods for general PDEs.
◮ Oberman-Froese-Benamou (08-): Wide-stencil monotone schemes for MA.

SLIDE 25

Selected literature: HJB and Monge-Ampère

◮ Brenner-Gudi-Neilan-Sung (09-12): C0 DG methods for MA (classical solns).
◮ Awanou (12-): Spline FE methods for MA.
◮ J-Smears (12): Linear FE for isotropic HJB.
◮ Jakobsen etc. (12): Monotone semi-Lagrangian methods for HJB.
◮ Smears-Süli (13): hp DGFE for (Cordes-) HJB (H2 solns).
◮ Lakkis-Pryer ('13): FE with discrete Hessian for MA.
◮ Feng-Kao-Lewis (13): G-monotone FD framework for general PDEs.
◮ Feng-Lewis (12-): Non-standard DG framework for general PDEs.
◮ Feng-Lewis-Neilan-Schnake (13-): DG-FE numerical calculus.
◮ Nochetto-Ntogkas-Zhang (14-): Max principle and convergence rates (linear FE) for MA.
◮ Mirebeau-Benamou (14-15): Wide-stencil monotone schemes for MA.
◮ Salgado-Zhang (15): Two-scale FE method for Isaacs equation.
◮ Reisinger-Rotaetxe Arto (16): CFL conditions near boundary of SL schemes.

SLIDE 26

Towards convergence

Theorem (Feng, J (2014))

The semi-smooth Newton method converges globally and superlinearly to the unique solution of the discrete equations.

◮ Proof: (Bokanowski, Maroso, Zidani, 2009) + Krein-Milman theorem.

Theorem (Feng, J (2014))

The numerical solutions are
◮ uniformly bounded,
◮ consistent on Ω (notice that ∂Ω is not included),
◮ monotone.

◮ Consistency can be ensured by k/h → ∞, boundedness through a comparison function; monotonicity holds by construction.
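For discrete Bellman systems of this type, the semi-smooth Newton method coincides with Howard's policy iteration (the observation underlying Bokanowski-Maroso-Zidani). A toy sketch on a 1d Bellman equation with two controls; coefficients and costs are made-up illustrations, not the scheme of the talk:

```python
import numpy as np

# Howard's policy iteration (= semi-smooth Newton for this problem class) on a
# toy 1d Bellman equation  max_a ( -c_a u'' - r_a ) = 0  in (0,1), u(0)=u(1)=0.
# Controls and costs are illustrative, not taken from the paper.
N = 99
h = 1.0 / (N + 1)
xs = np.linspace(h, 1.0 - h, N)
c = [1.0, 2.0]                                    # diffusion per control
r = [np.ones(N), 1.5 - np.sin(np.pi * xs)]        # running cost per control

D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / h**2        # Dirichlet second difference

def residual(u, a):                               # -c_a u'' - r_a, pointwise
    return -c[a] * (D2 @ u) - r[a]

u = np.zeros(N)
for it in range(30):
    pol = np.argmax(np.stack([residual(u, 0), residual(u, 1)]), axis=0)
    A = -np.diag(np.array(c)[pol]) @ D2           # frozen-policy operator
    rhs = np.stack(r)[pol, np.arange(N)]
    u_new = np.linalg.solve(A, rhs)               # policy evaluation
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

final = np.max(np.abs(np.maximum(residual(u, 0), residual(u, 1))))
print(it, final)   # a handful of iterations; residual near solver precision
```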


SLIDE 28

How should we pose boundary conditions?

◮ As in the Barles-Souganidis argument, define the semi-continuous envelopes

ū(x) := lim sup_{y→x, h→0} u_h(y),   u̲(x) := lim inf_{y→x, h→0} u_h(y).

Viscosity BC

◮ Are used in the original Barles-Souganidis proof.
◮ Defined as

max(H(X), u̲(x) − g(x)) ≥ 0 for (p, X) ∈ J^{2,−}_Ω u̲(x),
min(H(Y), ū(x) − g(x)) ≤ 0 for (q, Y) ∈ J^{2,+}_Ω ū(x),

for all x ∈ ∂Ω.
◮ Existence of a comparison principle with viscosity BCs is unclear:
◮ comparison for anisotropic degenerate equations with Dirichlet BCs is often not available for semi-continuous functions.
◮ Do ū and u̲ satisfy viscosity BCs?
◮ Wasow-Motzkin: the method looks inconsistent near the boundary.
◮ Hence it is unclear how to relate ū and u̲ to the continuous problem.

SLIDE 29

How should we pose boundary conditions?

Pointwise BC

◮ u(x) = g(x) for all x ∈ ∂Ω.
◮ We have a comparison principle.
◮ Do ū and u̲ satisfy pointwise BCs?
◮ Wasow-Motzkin shows that there exists one test function for which consistency may fail, not that it fails for all.
◮ Test functions φ(x) = (M/2)|x − p|² remain consistent.
◮ Monotonicity: fix x ∈ ∂Ω. Choosing M, p so that the minimiser of u_h − I_hφ lies arbitrarily close to x gives

lim inf_{h→0, y→x} u_h(y) ≥ g(x)

always.

[Figure: boundary construction with ker P, y_ℓ, y_∞, 2K, y, x, ∂B_ℓ, ∂Ω]


SLIDE 31

Convergence

Theorem (Feng, J (2016))

◮ We have ū(x) = u̲(x) = g(x) for x ∈ ∂Ω.

◮ Only in the above theorem do we use convexity of the domain.
◮ ... so finally:

Theorem (Feng, J (2016))

◮ Let Ω be strictly convex and f ≥ 0 be continuous.
◮ The numerical solutions converge uniformly to the unique viscosity solution of the Bellman/Monge-Ampère problem.


SLIDE 33

Two numerical experiments

◮ Experiment 1: smooth solution v(x) = |x|⁴.
◮ Experiment 2: non-smooth solution v(x) = |x1|.
◮ Domain: union of unit circle and unit square.
◮ Plot of numerical solutions on coarsest mesh:

[Figure: numerical solutions of both experiments on the coarsest mesh]

SLIDE 34

Smooth problem: k = m · h

[Figure: relative error ‖u − u_h‖∞ / ‖u‖∞ (log scale, 10⁻³ to 10⁻¹) against the number of refinements 1 to 7, for m = 2, 4, 8, 16, 32, 64]

SLIDE 35

Non-Smooth problem: k = m · h

[Figure: relative error ‖u − u_h‖∞ / ‖u‖∞ (log scale, 10⁻³ to 10⁻¹) against the number of refinements 1 to 7, for m = 2, 4, 8, 16, 32, 64]

SLIDE 36

Number of Newton steps for 10⁻⁶ error

DoF        |  m (quartic): 2  4  8 16 32 64  |  m (non-smooth): 2  4  8 16 32 64
91         |               5  5  5  4  5  5  |                  4  5  6  5  5  5
329        |               5  5  6 10  5  5  |                  4  5  6  7  9  6
1,249      |               5  5  7  9 12  5  |                  5  5  6  6  7 11
4,865      |               5  6  7  9 12 13  |                  5  5  7  7  8  9
19,201     |               5  6  7 11 12 16  |                  7  5  6  7  7  7
76,289     |               6  6  6 10 12 15  |                  8  6  6  7  8  8
304,129    |               5  6  6  9 12 15  |                  7  6  6  7  8  8
1,214,465  |               5  5  7  8 10 14  |                  8  5  7  7  8  9

SLIDE 37

Summary of Part 2

The proposed method for Monge-Ampère:
◮ addresses uniqueness and convexity at the PDE level,
◮ is proved to converge to the viscosity solution,
◮ respects boundary conditions in the pointwise sense,
◮ comes with a Newton method which always converges,
◮ works without strict acuteness of the unstructured mesh,
◮ includes the 'degenerate case' f(x) = 0.

Thank you for the attention!

http://arxiv.org/abs/1602.04758


SLIDE 39

Extra Slides


SLIDE 40

Motivation: prescribed Gauss curvature

◮ Gauss curvature of a 2-dim surface in R3: K = κ1 · κ2, where the κi are the principal curvatures.
◮ Represent the surface as a graph x → u(x).
◮ Prescribe the Gauss curvature at x by f(x).
◮ The resulting partial differential equation is

det D²u − f (1 + |Du|²)² = 0.

◮ This problem arises in differential geometry, general relativity, ...
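A quick consistency check of this equation: the lower unit hemisphere u(x) = −√(1 − |x|²) has Gauss curvature K = 1, so with f = 1 the residual det D²u − (1 + |Du|²)² should vanish. A finite-difference spot check; step size and sample points are arbitrary choices:

```python
import numpy as np

# The lower unit hemisphere u(x) = -sqrt(1 - |x|^2) has Gauss curvature K = 1,
# so with f = 1 it should satisfy det D^2 u - f (1 + |Du|^2)^2 = 0.
u = lambda p: -np.sqrt(1.0 - p[0]**2 - p[1]**2)

def grad_hess(p, k=1e-4):                    # central differences for Du, D^2 u
    e = np.eye(2)
    g = np.array([(u(p + k*e[i]) - u(p - k*e[i])) / (2*k) for i in range(2)])
    H = np.diag([(u(p + k*e[i]) - 2*u(p) + u(p - k*e[i])) / k**2
                 for i in range(2)])
    H[0, 1] = H[1, 0] = (u(p + k*(e[0] + e[1])) - u(p + k*(e[0] - e[1]))
                         - u(p - k*(e[0] - e[1])) + u(p - k*(e[0] + e[1]))) / (4*k**2)
    return g, H

for p in [np.array([0.1, 0.2]), np.array([0.3, -0.4])]:
    g, H = grad_hess(p)
    print(abs(np.linalg.det(H) - (1.0 + g @ g)**2) < 1e-4)   # True at both points
```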

SLIDE 41

An optimal control problem

◮ We have paths, e.g. all roads from Brighton B to Durham A.
◮ We have controls, e.g. steering wheel + accelerator + brake.
◮ We have a cost functional on the set of paths, e.g. the driving time or petrol cost.
◮ We denote the minimal cost to get from B to A by v(B).
◮ We can assign to every C on the map a minimal cost.
◮ This defines the value function v.


SLIDE 43

Optimal control problem

◮ We consider a system (such as a financial market, chemical process, flight trajectory, ...) which evolves in time. At time t it takes state x(t): x : t → x(t).
◮ There is a parameter which influences the evolution of the system. It is called the control. It is also a function of time: α : t → α(t). Examples:
◮ financial market - volatility,
◮ chemical process - temperature,
◮ flight trajectory - acceleration.
◮ While mathematically irrelevant:
◮ In engineering the control is often (not always) a parameter we can decide.
◮ In finance we are in many cases not able to choose the control - it is an external parameter we are exposed to.

SLIDE 44

◮ The states and controls are linked through a differential equation, giving the dynamics: α and 'initial' conditions determine the state x:

dx(s) = b^α(x(s)) ds + σ^α(x(s)) dw(s),   x(t) = x.

◮ Cost:

J(t, x, α) = expected cost = E_tx [ ∫_t^T r^{α(s)}(s, x(s)) ds + Ψ(x(T)) ],

with running cost r and final cost Ψ.
◮ Starting at time t at position x we expect to pay J(t, x, α) when the control α is active.
◮ If σ^α = 0 for all α then the problem is deterministic and the stochastic terms (blue on the slide) vanish.

SLIDE 45

Value function

◮ Optimal control: find α̂ such that J(t, x, α̂) ≤ J(t, x, α) for all α.
◮ Besides α, and in the deterministic case x, one often wants to find the value function.
◮ Value function: optimal cost starting at (t, x):

v(t, x) = min_α J(t, x, α).

◮ In other words, when starting at time t at x then v(t, x) gives us the smallest possible expected cost: min_α J(t, x, α) = J(t, x, α̂) (always provided an optimal control exists).
◮ In financial applications v is often the price of the financial product of interest.

SLIDE 46

Dynamic programming principle

◮ Dynamic programming principle: for any α

v(t, x) ≤ ∫_t^{t+h} r^{α(s)}(s, x(s)) ds + v(t + h, x(t + h)).

◮ In words: if I travel from time t optimally then I'm faster than travelling from t + h optimally.
◮ For an optimal control α̂:

v(t, x) = ∫_t^{t+h} r^{α̂(s)}(s, x̂(s)) ds + v(t + h, x̂(t + h)).

◮ Here both sides just give the optimal travel time from time t onwards.

SLIDE 47

Deterministic Hamilton-Jacobi-Bellman equation

◮ Suppose v, α are smooth. From 'v(t, x) ≤ ∫_t^{t+h} r^{α(s)}(s, x(s)) ds + v(t + h, x(t + h))':

r^{α(t)}(t, x) = lim_{h→0} (1/h) ∫_t^{t+h} r^{α(s)}(s, x(s)) ds
             ≥ − lim_{h→0} (v(t + h, x(t + h)) − v(t, x))/h = −∂t v − ∇v · ẋ = −∂t v − ∇v · b^{α(t)}.

◮ Suppose an optimal control α̂ exists:

r^{α̂(t)} = −∂t v − ∇v · b^{α̂(t)}.

◮ Hamilton-Jacobi-Bellman equation (HJB):

−∂t v + sup_{α∈A} (−b^α · ∇v − r^α) = 0,

where the supremum defines the Hamiltonian H(t, x, ∇v).
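The derived equation can be spot-checked on a worked example. Consider the made-up deterministic problem ẋ = α with |α| ≤ 1, running cost r ≡ 1 and final cost Ψ(x) = x² at time T; its value function is v(t, x) = (T − t) + max(|x| − (T − t), 0)², and the HJB equation reduces to −∂t v + |∂x v| − 1 = 0:

```python
import numpy as np

# Toy deterministic control problem: x' = a, |a| <= 1, running cost 1,
# final cost Psi(x) = x^2 at time T.  Value function in closed form:
T = 1.0
v = lambda t, x: (T - t) + max(abs(x) - (T - t), 0.0)**2

# HJB residual -v_t + sup_{|a|<=1}(-a v_x - 1) = -v_t + |v_x| - 1,
# evaluated with central finite differences.
def hjb_residual(t, x, k=1e-6):
    vt = (v(t + k, x) - v(t - k, x)) / (2.0 * k)
    vx = (v(t, x + k) - v(t, x - k)) / (2.0 * k)
    return -vt + abs(vx) - 1.0

for t, x in [(0.2, 1.5), (0.5, 0.1), (0.7, -2.0)]:
    print(abs(hjb_residual(t, x)) < 1e-6)   # True (away from the kinks of v)
```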

SLIDE 48

Stochastic Hamilton-Jacobi-Bellman equation

◮ Still assume that v is smooth.
◮ The Brownian motion leads to a diffusion term with a^α Id = ½ σ^α · (σ^α)ᵀ:

r^{α(t)}(t, x) = lim_{h→0} (1/h) E_tx ∫_t^{t+h} r^{α(s)}(s, x(s)) ds
             ≥ − lim_{h→0} (E_tx v(t + h, x(t + h)) − v(t, x))/h = −∂t v − ∇v · b^α − a^α Δv.

◮ Again this is an identity if an optimal control α̂ exists.
◮ Hamilton-Jacobi-Bellman equation:

−∂t v + sup_{α∈A} (−b^α · ∇v − a^α Δv − r^α) = 0,

where the supremum defines the Hamiltonian H(t, x, ∇v, D²v).