SLIDE 1

Overcoming the curse of dimensionality: from nonlinear Monte Carlo to deep artificial neural networks

Arnulf Jentzen (ETH Zurich, Switzerland) Joint works with Christian Beck (ETH Zurich, Switzerland), Sebastian Becker (ZENAI AG, Switzerland), Julius Berner (University of Vienna, Austria), Patrick Cheridito (ETH Zurich, Switzerland), Weinan E (Princeton University, USA), Dennis Elbrächter (University of Vienna, Austria), Philipp Grohs (University of Vienna, Austria), Jiequn Han (Princeton University, USA), Fabian Hornung (ETH Zurich, Switzerland, & KIT, Germany), Martin Hutzenthaler (University of Duisburg-Essen, Germany), Nor Jaafari (ZENAI AG, Switzerland), Thomas Kruse (University of Giessen, Germany), Tuan Anh Nguyen (University of Duisburg-Essen, Germany), Diyora Salimova (ETH Zurich, Switzerland), Christoph Schwab (ETH Zurich, Switzerland), Timo Welti (ETH Zurich, Switzerland), and Philippe von Wurstemberger (ETH Zurich, Switzerland)

SLIDE 2

Computational problems from Financial Engineering (evaluation of risks and financial products, XVA, optimal stopping), Operations Research (optimal control, robots, game intelligence, optimal use of resources, formation of prices), and Filtering (chemical engineering, Kushner and Zakai equations) often require approximations of high-dimensional functions such as u: [0, 1]^d → R for d ∈ N large.

Approximation methods such as finite element methods, finite differences, and sparse grids suffer from the curse of dimensionality (Bellman 1957).

The Monte Carlo method based on the Feynman–Kac formula handles high-dimensional linear partial differential equations (PDEs).

Deep BSDE method: Han, J, E 2017 PNAS; E, Han, J 2017 Comm. Math. Stat.;

. . .
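The linear case above can be sketched in a few lines: for ∂u/∂t = Δ_x u with u(0, ·) = g, the Feynman–Kac formula gives u(t, x) = E[g(x + √(2t) Z)] with Z a standard normal vector in R^d, so the per-sample cost grows only linearly in d rather than exponentially. A minimal Python sketch (function and variable names are mine, not from the slides):

```python
import numpy as np

def monte_carlo_heat(g, t, x, num_samples=10**5, rng=None):
    """Feynman-Kac Monte Carlo for the d-dimensional heat equation
    du/dt = Laplace_x u with u(0, .) = g:  u(t, x) = E[g(x + sqrt(2 t) Z)],
    Z standard normal in R^d.  Cost per sample grows linearly in d."""
    rng = np.random.default_rng(rng)
    d = len(x)
    z = rng.standard_normal((num_samples, d))
    return float(np.mean(g(x + np.sqrt(2.0 * t) * z)))

# Example: g(x) = ||x||^2 has the exact solution u(t, x) = ||x||^2 + 2 d t,
# so at x = 0, t = 1, d = 100 the exact value is 200.
d, t = 100, 1.0
x = np.zeros(d)
approx = monte_carlo_heat(lambda y: np.sum(y**2, axis=-1), t, x)
exact = 2.0 * d * t
```

With 10^5 samples the statistical error here is of order 0.1, already accurate in dimension d = 100, where any tensor-product grid method would be hopeless.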

SLIDE 3

Theorem (Hutzenthaler, J, Kruse, Nguyen 2019). Let T, p, κ > 0, let f: R → R be Lipschitz, for every d ∈ N let g_d ∈ C(R^d, R) and let u_d: [0, T] × R^d → R be an at most polynomially growing solution of

∂u_d/∂t = Δ_x u_d + f(u_d)

with u_d(0, ·) = g_d, assume |g_d(x)| ≤ κ d^κ (1 + ‖x‖^κ), let A_l: R^l → R^l, l ∈ N, satisfy

A_l(x_1, . . . , x_l) = (max{x_1, 0}, . . . , max{x_l, 0}),

let

N = ∪_{L ∈ N} ∪_{l_0, . . . , l_L ∈ N} ( ×_{n=1}^{L} (R^{l_n × l_{n−1}} × R^{l_n}) ),

let R: N → ∪_{a,b=1}^{∞} C(R^a, R^b) satisfy for all L ∈ N, l_0, . . . , l_L ∈ N, Φ = ((W_1, B_1), . . . , (W_L, B_L)) ∈ ×_{n=1}^{L} (R^{l_n × l_{n−1}} × R^{l_n}), x_0 ∈ R^{l_0}, . . . , x_{L−1} ∈ R^{l_{L−1}} with ∀ n ∈ N ∩ (0, L): x_n = A_{l_n}(W_n x_{n−1} + B_n) that

(RΦ)(x_0) = W_L x_{L−1} + B_L,

let P: N → N be the number of parameters, and let (G_{d,ε})_{d ∈ N, ε ∈ (0,1]} ⊆ N satisfy

P(G_{d,ε}) ≤ κ d^κ ε^{−κ} and |g_d(x) − (R G_{d,ε})(x)| ≤ ε κ d^κ (1 + ‖x‖^κ).

Then ∃ (U_{d,ε})_{d ∈ N, ε ∈ (0,1]} ⊆ N, c > 0: ∀ d ∈ N, ε ∈ (0, 1]:

[ ∫_{[0,T] × [0,1]^d} |u_d(y) − (R U_{d,ε})(y)|^p dy ]^{1/p} ≤ ε and P(U_{d,ε}) ≤ c d^c ε^{−c}.
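The realization map R and the parameter count P in the theorem are concrete objects: a network Φ is just a list of weight/bias pairs, the A_l are componentwise ReLUs, and the last layer is affine. A minimal Python sketch of both maps (function names are mine):

```python
import numpy as np

def realization(phi, x0):
    """The map R from the theorem: phi = [(W1, B1), ..., (WL, BL)] with
    W_n of shape (l_n, l_{n-1}); ReLU after every layer except the last."""
    x = np.asarray(x0, dtype=float)
    for W, B in phi[:-1]:
        x = np.maximum(W @ x + B, 0.0)   # A_l: componentwise ReLU
    W, B = phi[-1]
    return W @ x + B                      # final affine layer, no activation

def num_parameters(phi):
    """The map P from the theorem: total count of weight and bias entries."""
    return sum(W.size + B.size for W, B in phi)

# Tiny example: a 2-layer network realizing |x| = max(x, 0) + max(-x, 0).
phi = [
    (np.array([[1.0], [-1.0]]), np.zeros(2)),  # W1: R^1 -> R^2
    (np.array([[1.0, 1.0]]), np.zeros(1)),     # W2: R^2 -> R^1
]
value = realization(phi, np.array([-3.0]))     # -> [3.0]
```

Here P(Φ) = 7 (four W-entries plus three B-entries), illustrating how the theorem measures network size.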

Linear PDEs: Grohs, Hornung, J, von Wurstemberger 2018; Berner, Grohs, J 2018; Elbrächter, Grohs, J, Schwab 2018; J, Salimova, Welti 2018

SLIDE 4

Full history recursive multilevel Picard (MLP) method: Let T > 0, L, p ≥ 0, Θ = ∪_{n=1}^{∞} Z^n, for every d ∈ N let g_d ∈ C(R^d, R) satisfy ∀ x ∈ R^d: |g_d(x)| ≤ L(1 + ‖x‖^p), let f: R → R be Lipschitz, let (Ω, F, P) be a probability space, let W^{d,θ}: [0, T] × Ω → R^d, d ∈ N, θ ∈ Θ, be i.i.d. standard Brownian motions, let S^θ: [0, T] × Ω → R, θ ∈ Θ, be i.i.d. continuous processes satisfying for all t ∈ [0, T], θ ∈ Θ that S^θ_t is U_{[t,T]}-distributed, assume that (S^θ)_{θ ∈ Θ} and (W^{d,θ})_{θ ∈ Θ, d ∈ N} are independent, let U^{d,θ}_{n,M}: [0, T] × R^d × Ω → R, n, M ∈ Z, θ ∈ Θ, d ∈ N, satisfy ∀ d, n, M ∈ N, θ ∈ Θ, t ∈ [0, T], x ∈ R^d: U^{d,θ}_{−1,M}(t, x) = U^{d,θ}_{0,M}(t, x) = 0 and

U^{d,θ}_{n,M}(t, x) = (1/M^n) Σ_{m=1}^{M^n} g_d( x + W^{d,(θ,0,−m)}_{T−t} )
  + Σ_{l=0}^{n−1} (T−t)/M^{n−l} Σ_{m=1}^{M^{n−l}} [ f( U^{d,(θ,l,m)}_{l,M}( S^{(θ,l,m)}_t, x + W^{d,(θ,l,m)}_{S^{(θ,l,m)}_t − t} ) ) − 𝟙_N(l) f( U^{d,(θ,l,m)}_{l−1,M}( S^{(θ,l,m)}_t, x + W^{d,(θ,l,m)}_{S^{(θ,l,m)}_t − t} ) ) ],

and for every d, n ∈ N let Cost_{d,n} ∈ N be the computational cost of U^{d,0}_{n,n}(0, 0).
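The recursion above can be sketched directly in code: each call draws fresh Brownian increments and uniform times (playing the role of the independent families (W^{d,θ}) and (S^θ)) and recurses on the lower levels l and l−1. A minimal Python sketch under the slide-5 normalization (standard Brownian motion, PDE with (1/2)Δ_x; function names are mine, not from the slides):

```python
import numpy as np

def mlp(n, M, t, x, T, f, g, rng):
    """One sample of the MLP approximation U_{n,M}(t, x) for the PDE
    du/dt + (1/2) Laplace_x u + f(u) = 0 with u(T, .) = g.
    U_{0,M} = U_{-1,M} = 0 is the base case of the recursion."""
    if n <= 0:
        return 0.0
    d = len(x)
    # Terminal term: (1/M^n) * sum over M^n samples of g(x + W_{T-t}).
    dW = rng.standard_normal((M**n, d)) * np.sqrt(T - t)
    u = float(np.mean(g(x + dW)))
    # Multilevel correction terms over levels l = 0, ..., n-1.
    for l in range(n):
        acc = 0.0
        for _ in range(M ** (n - l)):
            s = rng.uniform(t, T)                        # S_t ~ U[t, T]
            w = rng.standard_normal(d) * np.sqrt(s - t)  # Brownian increment
            y = x + w
            acc += f(mlp(l, M, s, y, T, f, g, rng))
            if l >= 1:                                   # indicator 1_N(l)
                acc -= f(mlp(l - 1, M, s, y, T, f, g, rng))
        u += (T - t) * acc / M ** (n - l)
    return u

# Sanity check with f = 0: the scheme collapses to plain Monte Carlo for the
# heat equation, u(0, 0) = E[g(W_T)]; g(x) = ||x||^2 gives u(0, 0) = d * T.
rng = np.random.default_rng(0)
d, T, n = 10, 1.0, 3
approx = mlp(n, n, 0.0, np.zeros(d), T,
             f=lambda v: 0.0,
             g=lambda y: np.sum(y**2, axis=-1), rng=rng)
```

Note the telescoping structure f(U_l) − f(U_{l−1}): higher levels l are sampled fewer times (M^{n−l} samples) because their summands have smaller variance, which is what keeps Cost_{d,n} polynomial in d.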

SLIDE 5

Then i) ∀ d ∈ N there exists an at most polynomially growing solution u_d: [0, T] × R^d → R of

∂u_d/∂t + (1/2) Δ_x u_d + f(u_d) = 0

with u_d(T, ·) = g_d and ii) ∀ δ > 0 there exist n: N × (0, ∞) → N and C > 0 such that ∀ d ∈ N, ε > 0:

( E[ |u_d(0, 0) − U^{d,0}_{n_{d,ε}, n_{d,ε}}(0, 0)|^2 ] )^{1/2} ≤ ε and Cost_{d, n_{d,ε}} ≤ C d^{1+p(1+δ)} ε^{−(2+δ)}.

Extensions (algorithms/simulations/proofs): fully nonlinear PDEs (Beck, E, J 2018 JNS), optimal stopping (Becker, Cheridito, J 2018 JMLR), uniform errors (Beck, Becker, Grohs, Jaafari, J 2018), semilinear PDEs/CVA (Hutzenthaler, J, von Wurstemberger 2019), . . .
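The link between the MLP recursion on slide 4 and the convergence statement in ii) is the Feynman–Kac fixed-point equation that the recursion discretizes; under the normalization of i) (standard Brownian motion W, uniform time S_t on [t, T]):

u_d(t, x) = E[ g_d(x + W_{T−t}) ] + ∫_t^T E[ f( u_d(s, x + W_{s−t}) ) ] ds
          = E[ g_d(x + W_{T−t}) + (T − t) f( u_d(S_t, x + W_{S_t−t}) ) ].

The MLP scheme is a nested Monte Carlo approximation of this identity in which the nonlinearity is telescoped across levels, f(U_l) − f(U_{l−1}), so that each correction has small variance and can be sampled with geometrically fewer draws.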