
SLIDE 1

Randomized Algorithms for Infinite-dimensional Integration

Optimal Randomized Algorithms for Integration on Function Spaces with underlying ANOVA decomposition

Michael Gnewuch¹ (University of Kaiserslautern, Germany), October 16, 2013. Based on joint work with Jan Baldeaux (UTS Sydney) & Josef Dick (UNSW Sydney)

¹Supported by the German Science Foundation DFG under Grant GN 91/3-1 and by the Australian Research Council ARC

SLIDE 2

ANOVA Decomposition (∞-Variate Functions)

Sequence space $[0,1]^{\mathbb{N}}$, endowed with the probability measure $dx = \otimes_{j \in \mathbb{N}} dx_j$. For $f \in L^2([0,1]^{\mathbb{N}})$ and $u \subset_f \mathbb{N}$ (i.e., $u$ a finite subset of $\mathbb{N}$):

$f_\emptyset(x) := \int_{[0,1]^{\mathbb{N}}} f(y)\, dy$,

$f_u(x) := \int_{[0,1]^{\mathbb{N} \setminus u}} f(x_u, y_{\mathbb{N} \setminus u})\, dy_{\mathbb{N} \setminus u} - \sum_{v \subsetneq u} f_v(x)$,

where $x_u = (x_j)_{j \in u}$ and $y_{\mathbb{N} \setminus u} = (y_j)_{j \in \mathbb{N} \setminus u}$. Then $\int_0^1 f_u(x)\, dx_j = 0$ for all $j \in u$. This implies

$f = \sum_{u \subset_f \mathbb{N}} f_u$ in $L^2([0,1]^{\mathbb{N}})$, and $\mathrm{Var}(f) = \sum_{u \subset_f \mathbb{N}} \mathrm{Var}(f_u)$.
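As a concrete low-dimensional illustration (not from the slides), the following sketch computes the ANOVA terms of $f(x_1, x_2) = x_1 x_2$ on $[0,1]^2$ on a midpoint grid and checks the variance decomposition; all names are my own.

```python
import numpy as np

# ANOVA decomposition of f(x1, x2) = x1 * x2 on [0,1]^2 via a midpoint grid.
# Closed forms: f_empty = 1/4, f_{1}(x) = x1/2 - 1/4, f_{2}(x) = x2/2 - 1/4,
# f_{1,2}(x) = (x1 - 1/2)(x2 - 1/2).
n = 1000
t = (np.arange(n) + 0.5) / n                 # midpoint nodes on [0,1]
X1, X2 = np.meshgrid(t, t, indexing="ij")
F = X1 * X2

f_empty = F.mean()                           # integrate out both variables
f1 = F.mean(axis=1) - f_empty                # integrate out x2, recenter
f2 = F.mean(axis=0) - f_empty                # integrate out x1, recenter
f12 = F - f1[:, None] - f2[None, :] - f_empty

# Each nonempty term has mean zero in each of its variables, so the terms
# are orthogonal and the variance decomposes: Var(f) = sum_u Var(f_u).
var_f = F.var()
var_sum = (f1 ** 2).mean() + (f2 ** 2).mean() + (f12 ** 2).mean()
```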

SLIDE 3

Function Spaces of Integrands

Construction of spaces of integrands $f : [0,1]^{\mathbb{N}} \to \mathbb{R}$:

  • Reproducing kernel Hilbert space $H = H(k)$ of univariate functions $f : [0,1] \to \mathbb{R}$ with $\int_0^1 f(x)\, dx = 0$.
  • Hilbert spaces $H_u$ of multivariate functions $f_u : [0,1]^u \to \mathbb{R}$: $H_u := \otimes_{j \in u} H$ for $u \subset_f \mathbb{N}$, where $H_\emptyset = \mathrm{span}\{1\}$.
  • Hilbert space $H_\gamma$ of functions of infinitely many variables: weights $\gamma = (\gamma_u)_{u \subset_f \mathbb{N}}$ with $\sum_{u \subset_f \mathbb{N}} \gamma_u < \infty$, and

$H_\gamma := \left\{ \sum_{u \subset_f \mathbb{N}} f_u \;\middle|\; f_u \in H_u,\ \|f\|^2_{H_\gamma} := \sum_{u \subset_f \mathbb{N}} \gamma_u^{-1} \|f_u\|^2_{H_u} < \infty \right\}$,

where $H_\gamma \subset L^2([0,1]^{\mathbb{N}})$ and $f = \sum_{u \subset_f \mathbb{N}} f_u$ is the ANOVA decomposition.
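A minimal sketch of the weighted norm for a function with finitely many nonzero ANOVA terms; the function name and the example numbers are mine, not from the slides.

```python
# ||f||_{H_gamma}^2 = sum_u gamma_u^{-1} ||f_u||_{H_u}^2. Keys are the finite
# subsets u; values are the pairs (gamma_u, ||f_u||_{H_u}).
def hgamma_norm_sq(terms):
    return sum(norm_u ** 2 / gamma_u for gamma_u, norm_u in terms.values())

terms = {
    frozenset({1}): (1.0, 2.0),      # gamma_{1} = 1, ||f_{1}|| = 2
    frozenset({1, 2}): (0.25, 0.5),  # gamma_{1,2} = 1/4, ||f_{1,2}|| = 1/2
}
```

Small weights $\gamma_u$ penalize the corresponding components: here the $\{1,2\}$-term contributes $0.5^2 / 0.25 = 1$ despite its small $H_u$-norm.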

SLIDE 4

Weights

Product weights $\gamma$ [Sloan & Woźniakowski '98]: Let $\gamma_1 \ge \gamma_2 \ge \gamma_3 \ge \cdots \ge 0$. Then $\gamma_u := \prod_{j \in u} \gamma_j$.

Finite-order weights $\gamma$ of order $\omega$ [Dick, Sloan, Wang & Woźniakowski '06]: $\gamma_u = 0$ for all $|u| > \omega$.

Finite-intersection weights $\gamma$ of degree $\rho$: finite-order weights with $|\{ u \subset_f \mathbb{N} \mid \gamma_u > 0,\ u \cap v \neq \emptyset \}| \le 1 + \rho$ for all $v \subset_f \mathbb{N}$ with $\gamma_v > 0$. (A subclass of the finite-intersection weights are the "finite-diameter weights" proposed by Creutzig.)

$\mathrm{decay} := \sup\left\{ p \in \mathbb{R} \;\middle|\; \sum_{u \subset_f \mathbb{N}} \gamma_u^{1/p} < \infty \right\}$.
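To make the decay parameter concrete, here is a sketch with product weights $\gamma_j = j^{-3}$ (my own example): $\sum_u \gamma_u^{1/p} = \prod_j (1 + \gamma_j^{1/p})$ is finite exactly when $\sum_j j^{-3/p} < \infty$, i.e. when $p < 3$, so decay $= 3$.

```python
# Partial sums of sum_j gamma_j^{1/p} for product weights gamma_j = j**(-3).
# The series converges iff the exponent 3/p exceeds 1, so decay = 3.
a = 3.0

def partial_sum(p, n=10 ** 6):
    return sum(j ** (-a / p) for j in range(1, n + 1))

s_conv = partial_sum(p=2.0)  # exponent 1.5 > 1: converges to zeta(1.5) ~ 2.612
s_div = partial_sum(p=3.0)   # exponent 1.0: harmonic series, grows like log n
```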

SLIDE 5

Integration, Algorithms & Cost Model

Integration functional $I$ on $H_\gamma$: $I(f) := \int_{[0,1]^{\mathbb{N}}} f(x)\, dx$.

Admissible randomized algorithms:

$Q_n(f) = \sum_{i=1}^{n} \alpha_i f(t^{(i)}_{v_i}; a)$,

where $t^{(i)}_{v_i} \in [0,1]^{v_i}$, $v_i \subset_f \mathbb{N}$, and anchor $a = 1/2$.

Nested subspace sampling [Creutzig, Dereich, Müller-Gronbach, Ritter '09]: Fix $s \ge 1$. $\mathrm{cost}_{\mathrm{nest}}(Q_n) := \sum_{i=1}^{n} (\max v_i)^s$.

Unrestricted subspace sampling [Kuo, Sloan, Wasilkowski, Woźniakowski '10]: Fix $s \ge 1$. $\mathrm{cost}_{\mathrm{unr}}(Q_n) := \sum_{i=1}^{n} |v_i|^s$.
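The two cost models can be sketched directly (function names are mine):

```python
# Nested vs. unrestricted subspace sampling cost of an algorithm that
# evaluates f at points with active-coordinate sets vs[i], cost exponent s.
def cost_nested(vs, s=1.0):
    # each sample with active set v costs (max v)^s
    return sum(max(v) ** s for v in vs)

def cost_unrestricted(vs, s=1.0):
    # each sample costs |v|^s
    return sum(len(v) ** s for v in vs)

samples = [{1, 2}, {5}, {1, 3, 7}]  # three evaluation points (my example)
```

Nested sampling charges for the whole coordinate prefix up to the largest active index, unrestricted sampling only for the coordinates actually used, so the unrestricted cost never exceeds the nested one.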

SLIDE 6

Randomized Setting

Error criterion: (worst-case) randomized error

$e^{\mathrm{ran}}(Q; H_\gamma)^2 := \sup_{\|f\|_{H_\gamma} \le 1} \mathbb{E}\left[ (I(f) - Q(f))^2 \right]$.

$N$-th minimal randomized error: for $\mathrm{mod} \in \{\mathrm{nest}, \mathrm{unr}\}$,

$e^{\mathrm{ran}}_{\mathrm{mod}}(N) := \inf\{ e^{\mathrm{ran}}(Q; H_\gamma) \mid Q \text{ admissible randomized algorithm},\ \mathrm{cost}_{\mathrm{mod}}(Q) \le N \}$.

"Convergence order" of $e^{\mathrm{ran}}_{\mathrm{mod}}(N)$:

$\lambda^{\mathrm{ran}}_{\mathrm{mod}} := \sup\left\{ t > 0 \;\middle|\; \sup_{N \in \mathbb{N}} e^{\mathrm{ran}}_{\mathrm{mod}}(N) \cdot N^t < \infty \right\}$.
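In practice the convergence order can be read off empirically from (cost, error) pairs as the negative log-log slope; a small sketch under the model $e(N) = c \cdot N^{-t}$ (names and data are mine):

```python
import math

# Least-squares slope of log(error) against log(cost); for e(N) = c * N**-t
# the fitted order recovers t.
def fitted_order(costs, errors):
    xs = [math.log(N) for N in costs]
    ys = [math.log(e) for e in errors]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return -slope

costs = [10 ** k for k in range(1, 6)]
errors = [2.0 * N ** -1.5 for N in costs]   # synthetic data with t = 3/2
```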

SLIDE 7

Nested Subspace Sampling: Multilevel Algorithms

For levels $k = 1, \dots, m$: $v_k := \{1, \dots, 2^k\}$, with sample sizes $n_1 \ge n_2 \ge n_3 \ge \cdots$.

(Unbiased) RQMC algorithms: $Q_{v_k}(g) := \frac{1}{n_k} \sum_{j=1}^{n_k} g(t^{(j,k)}_{v_k})$, $t^{(j,k)}_{v_k} \in [0,1]^{v_k}$.

Projections: $\Psi_{v_k} f(x) := f(x_{v_k}; a)$ for $k \ge 1$, and $\Psi_{v_0} f(x) := 0$.

RQMC multilevel algorithm: $Q^{\mathrm{ML}}_m(f) := \sum_{k=1}^{m} Q_{v_k}(\Psi_{v_k} f - \Psi_{v_{k-1}} f)$.

Cost: $\mathrm{cost}_{\mathrm{nest}}(Q^{\mathrm{ML}}_m) = \mathrm{cost}_{\mathrm{unr}}(Q^{\mathrm{ML}}_m) \le \sum_{k=1}^{m} 2 n_k 2^{ks}$.
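A runnable sketch of the multilevel construction, with plain Monte Carlo standing in for the slides' randomized QMC rules, and a test integrand of my own choosing: $f(x) = \sum_j 2^{-j} x_j$, anchor $a = 1/2$, exact integral $1/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                 # truncation: v_m = {1, ..., 2^m}, m = 4
W = 2.0 ** -np.arange(1, D + 1)        # f(x) = sum_j 2^{-j} x_j

def psi(x, d):
    # Psi_{v_k} f: first d coordinates from x, the rest anchored at a = 1/2
    xa = np.full(D, 0.5)
    xa[:d] = x[:d]
    return float(W @ xa)

m = 4
n = {1: 4096, 2: 1024, 3: 256, 4: 64}  # n_1 >= n_2 >= ... sample sizes
est = 0.0
for k in range(1, m + 1):
    d_hi, d_lo = 2 ** k, 2 ** (k - 1)
    diffs = []
    for _ in range(n[k]):
        x = rng.random(D)
        lo = psi(x, d_lo) if k > 1 else 0.0   # Psi_{v_0} f := 0
        diffs.append(psi(x, d_hi) - lo)
    est += float(np.mean(diffs))
# est approximates I(Psi_{v_m} f) = 0.5 * (1 - 2**-16), close to I(f) = 1/2
```

The sum telescopes in expectation, and the level differences shrink as the dimension grows, so most samples can be spent on the cheap low-dimensional levels.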

SLIDE 8

Nested Subspace Sampling: Multilevel Algorithms

Projections: $\Psi_{v_k} f(x) = f(x_{v_k}; a)$ for $k \ge 1$, and $\Psi_{v_0} f(x) = 0$. RQMC-ML algorithm: $Q^{\mathrm{ML}}_m(f) = \sum_{k=1}^{m} Q_{v_k}(\Psi_{v_k} f - \Psi_{v_{k-1}} f)$. Then

$\mathbb{E}\left[ Q^{\mathrm{ML}}_m(f) \right] = \sum_{k=1}^{m} I(\Psi_{v_k} f - \Psi_{v_{k-1}} f) = I(\Psi_{v_m} f)$,

and

$\mathbb{E}\left[ (I(f) - Q^{\mathrm{ML}}_m(f))^2 \right] = |I(f) - I(\Psi_{v_m} f)|^2 + \sum_{k=1}^{m} \mathrm{Var}\left( Q_{v_k}(\Psi_{v_k} f - \Psi_{v_{k-1}} f) \right)$.

SLIDE 9

Multilevel Algorithms

Multilevel Monte Carlo algorithms were introduced in the context of integral equations and parametric integration by Heinrich (1998) and Heinrich and Sindambiwe (1999), and in the context of stochastic differential equations by Giles (2008). Multilevel quasi-Monte Carlo algorithms were tested by Giles and Waterhouse (2009). Multilevel Monte Carlo and quasi-Monte Carlo algorithms have since been studied in a number of papers; see, e.g., Mike Giles's web page http://people.maths.ox.ac.uk/gilesm/mlmc_community.html for more recent information.

SLIDE 10

Nested Subspace Sampling

Unanchored reproducing kernel $k$ of $H$: for $x, y \in [0,1]$,

$k(x, y) = \frac{1}{3} + \frac{x^2 + y^2}{2} - \max\{x, y\}$.

$H = H(k)$ consists of the functions $f \in L^2([0,1])$ with $f$ absolutely continuous, $f^{(1)} \in L^2([0,1])$, and $\int_0^1 f(x)\, dx = 0$.

$k$ induces the ANOVA decomposition on $H_\gamma$: $f = \sum_{u \subset_f \mathbb{N}} f_u$, $f_u \in H_u$, and $\int_0^1 f_u(x)\, dx_j = 0$ if $j \in u$.

SLIDE 11

Nested Subspace Sampling: Product Weights

Theorem [Baldeaux, G. '12]. Let $\gamma$ be product weights with $\mathrm{decay} > 1$. Then:

  • $\mathrm{decay} \ge 1 + 3s$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = 3/2$.
  • $1 + 3s \ge \mathrm{decay} > 1$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = \frac{\mathrm{decay} - 1}{2s}$.

(The upper error bound is achieved by multilevel algorithms based on scrambled polynomial lattice rules (scrambling: Owen '95; polynomial lattice rules: Niederreiter '92); the lower error bound holds for general randomized algorithms.)

Comparison with previously known results for $s = 1$:

  • [Hickernell, Niu, Müller-Gronbach, Ritter '10]: multilevel algorithms $\tilde{Q}^{\mathrm{ML}}_m$ based on scrambled Niederreiter $(t, m, s)$-nets: $\mathrm{decay} \ge 11$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = 3/2$.
  • [Baldeaux '11]: multilevel algorithms $\hat{Q}^{\mathrm{ML}}_m$ based on scrambled polynomial lattice rules: $\mathrm{decay} \ge 10$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = 3/2$.
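The theorem's two cases join continuously at $\mathrm{decay} = 1 + 3s$, so the rate can be written as a single min (a trivial sketch, function name mine):

```python
# lambda_nest from the theorem: min{3/2, (decay - 1) / (2 s)} for decay > 1.
def lam_nest(decay, s):
    return min(1.5, (decay - 1.0) / (2.0 * s))
```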

SLIDE 12

Nested Subspace Sampling: Finite-Intersection Weights

Theorem [Baldeaux, G. '12]. Let $\gamma$ be finite-intersection weights with $\mathrm{decay} > 1$. Then:

  • $\mathrm{decay} \ge 1 + 3s$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = 3/2$.
  • $1 + 3s \ge \mathrm{decay} > 1$: $\lambda^{\mathrm{ran}}_{\mathrm{nest}} = \frac{\mathrm{decay} - 1}{2s}$.

(The upper error bound is achieved by multilevel algorithms based on scrambled polynomial lattice rules; the lower error bound holds for general randomized algorithms.)

SLIDE 13

Unrestricted Subspace Sampling: CDAs (alias MDMs)

Anchored decomposition: $f_{\emptyset, a} := f(a)$ and

$f_{u,a}(x) := f(x_u; a) - \sum_{v \subsetneq u} f_{v,a}(x)$.

A changing dimension algorithm (or multivariate decomposition method) $Q^{\mathrm{CD}}$ is of the form

$Q^{\mathrm{CD}}(f) = \sum_{u \subset_f \mathbb{N}} Q_{u, n_u}(f_{u,a})$,

where $Q_{u, n_u}$ uses $n_u$ samples to approximate $\int_{[0,1]^u} f_{u,a}(x_u)\, dx_u$.

$Q^{\mathrm{CD}}$ is linear if the $Q_{u, n_u}$ are linear:

$f_{u,a}(x) = \sum_{v \subseteq u} (-1)^{|u \setminus v|} f(x_v; a)$

[Kuo, Sloan, Wasilkowski, Woźniakowski '10a]. Cost for evaluating $f_{u,a}$ in the unrestricted model: $O(2^{|u|} |u|^s)$.
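The explicit alternating-sum formula is easy to implement; a sketch with anchor $a = 1/2$ and a test function of my own choosing, $f(x_1, x_2, x_3) = x_1 x_2 + x_3$ on $[0,1]^3$ (0-based indices in code):

```python
from itertools import combinations

# Anchored-decomposition term f_{u,a} via the explicit formula
# f_{u,a}(x) = sum_{v subseteq u} (-1)^{|u \ v|} f(x_v; a), anchor a = 1/2.
A = 0.5

def f(x):
    return x[0] * x[1] + x[2]

def f_u_a(x, u):
    total = 0.0
    for r in range(len(u) + 1):
        for v in combinations(sorted(u), r):
            # x_v: coordinates in v taken from x, all others set to the anchor
            x_v = [x[j] if j in v else A for j in range(len(x))]
            total += (-1) ** (len(u) - r) * f(x_v)
    return total
```

For this $f$, $f_{\{0,1\},a}(x) = (x_0 - 1/2)(x_1 - 1/2)$, and summing $f_{u,a}$ over all $u \subseteq \{0, 1, 2\}$ recovers $f(x)$; note the $2^{|u|}$ function evaluations behind the cost bound on the slide.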

SLIDE 14

Changing Dimension Algorithms

Changing dimension algorithms (alias "multivariate decomposition methods") for infinite-dimensional integration were introduced in [Kuo, Sloan, Wasilkowski, Woźniakowski '10] and refined in [Plaskota & Wasilkowski '11]. These algorithms have also been adapted to infinite-dimensional approximation problems; see the papers of Wasilkowski and of Wasilkowski & Woźniakowski. A similar idea was used for multivariate integration in [Griebel & Holtz '10] ("dimension-wise quadrature methods").

SLIDE 15

Unrestricted Subspace Sampling

Unanchored reproducing kernel $k_\chi$ of smoothness $\chi$: for $x, y \in [0,1]$,

$k_\chi(x, y) = \sum_{\tau=1}^{\chi} \frac{B_\tau(x)}{\tau!} \frac{B_\tau(y)}{\tau!} + (-1)^{\chi+1} \frac{B_{2\chi}(|x - y|)}{(2\chi)!}$,

where $B_\tau$ is the Bernoulli polynomial of degree $\tau$.

$H = H(k_\chi)$ consists of the functions $f \in L^2([0,1])$ with $f, f^{(1)}, \dots, f^{(\chi-1)}$ absolutely continuous, $f^{(\chi)} \in L^2([0,1])$, and $\int_0^1 f(x)\, dx = 0$.

$k_\chi$ induces the ANOVA decomposition on $H_\gamma$: $f = \sum_{u \subset_f \mathbb{N}} f_u$, $f_u \in H_u$, and $\int_0^1 f_u(x)\, dx_j = 0$ if $j \in u$.
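As a sanity check (my own, not on the slides): for $\chi = 1$, with $B_1(x) = x - 1/2$ and $B_2(x) = x^2 - x + 1/6$, the kernel $k_\chi$ reduces to the unanchored kernel of slide 10.

```python
# k_chi for chi = 1 versus the slide-10 kernel 1/3 + (x^2 + y^2)/2 - max{x, y}.
def b1(x):
    return x - 0.5

def b2(x):
    return x * x - x + 1.0 / 6.0

def k_chi1(x, y):
    # one product term (tau = 1) plus (-1)^(chi+1) * B_2(|x - y|) / 2!
    return b1(x) * b1(y) + b2(abs(x - y)) / 2.0

def k_slide10(x, y):
    return 1.0 / 3.0 + (x * x + y * y) / 2.0 - max(x, y)
```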

SLIDE 16

Unrestricted Subspace Sampling: Product Weights and Finite-Intersection Weights

Theorem [Dick, G. '13]. Let $\gamma$ be product weights or finite-intersection weights with $\mathrm{decay} > 1$. Then:

  • $\mathrm{decay} \ge 2(\chi + 1)$: $\lambda^{\mathrm{ran}}_{\mathrm{unr}} = \chi + 1/2$.
  • $2(\chi + 1) \ge \mathrm{decay} > 1$: $\lambda^{\mathrm{ran}}_{\mathrm{unr}} = \frac{\mathrm{decay} - 1}{2}$.

(The upper error bounds are achieved by changing dimension algorithms based on interlaced scrambled polynomial lattice rules [Dick '11; Goda & Dick '13]; the lower error bounds hold for (rather) general randomized algorithms.)

The result of the theorem still holds if a function evaluation at a point with $k$ active variables costs $O(e^{\sigma k})$ for some $\sigma \in (0, \infty)$!
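As in the nested case, the two regimes can be packed into a single min, and the branches agree at the threshold $\mathrm{decay} = 2(\chi + 1)$ (trivial sketch, name mine):

```python
# lambda_unr from the theorem: min{chi + 1/2, (decay - 1) / 2} for decay > 1.
def lam_unr(decay, chi):
    return min(chi + 0.5, (decay - 1.0) / 2.0)
```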

SLIDE 17

Generalizations

  • Similar results hold more generally for spaces $H_\gamma$ induced by more general kernels $k : D \times D \to \mathbb{R}$, $D \subseteq \mathbb{R}$, with a probability measure $\rho$ on $D$, provided $\int_D k(x, y)\, \rho(dx) = 0$ for all $y \in D$ (ANOVA case).
  • We also have results for $1 > s \ge 0$.
  • The results can also be transferred to non-ANOVA settings, such as the anchored or alternative unanchored settings [Dick, G., Hefter, Hinrichs, Ritter]:
    – for product weights (relying on results from [Hefter, Ritter '13]);
    – for more general weights (work in progress).

SLIDE 18

Thank you for your attention!
