SLIDE 1 Computing integrals in many dimensions – What’s new?
Ian H. Sloan
i.sloan@unsw.edu.au, The University of New South Wales – UNSW Australia
Shanghai Jiao Tong University, 2015
SLIDE 2
Can we think of evaluating a 100-dimensional integral,
$$I(F) := \int_0^1 \cdots \int_0^1 F(y_1, \ldots, y_{100})\, dy_1 \cdots dy_{100}?$$
Some scientists and engineers want to evaluate integrals in 100 or 1,000 or even 1 million dimensions.
SLIDE 3 How?
We cannot use a product rule! Even a 2-point Gauss rule in each of 100 dimensions would need $2^{100}$ function evaluations! In general we want to approximate (with maybe large $s$)
$$I_s(F) := \int_0^1 \cdots \int_0^1 F(y_1, \ldots, y_s)\, dy_1 \cdots dy_s.$$
THE WORKHORSE IS MONTE CARLO
SLIDE 4 Monte Carlo (MC)
Monte Carlo takes
$$Q^{\mathrm{MC}}_{N,s}(F) := \frac{1}{N} \sum_{k=1}^{N} F(t_k),$$
with $t_1, \ldots, t_N$ chosen randomly and independently from a uniform distribution on $[0,1]^s$.
Error: For $F \in L_2([0,1]^s)$,
$$\mathrm{error}^{\mathrm{MC}}(F) = \frac{\sigma(F)}{\sqrt{N}}, \qquad \text{where } \sigma^2(F) = I_s\!\left((F - I_s(F))^2\right) = I_s(F^2) - (I_s(F))^2.$$
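As a concrete illustration (not from the original slides), here is a minimal Python sketch of the plain Monte Carlo rule, applied to a toy 100-dimensional integrand whose exact integral is 1; the $\sigma(F)/\sqrt{N}$ error is estimated from the sample standard deviation.

```python
import numpy as np

def mc_integrate(F, s, N, rng=None):
    """Estimate I_s(F) by averaging F at N iid uniform points in [0,1]^s."""
    rng = np.random.default_rng() if rng is None else rng
    t = rng.random((N, s))                      # t_1, ..., t_N
    vals = F(t)
    # sigma(F)/sqrt(N), with sigma estimated from the sample:
    return vals.mean(), vals.std(ddof=1) / np.sqrt(N)

# Toy 100-dimensional integrand whose exact integral is 1:
F = lambda y: np.prod(1.0 + 0.1 * (y - 0.5), axis=1)
estimate, std_err = mc_integrate(F, s=100, N=10**5)
```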
SLIDE 5
What sort of problems?
We need to be guided by applications. Artificial problems are often too easy (or too hard)!
SLIDE 6
Uncertainty is a major source of applications
PDE with random coefficients
mathematical finance
SLIDE 7 Uncertainty as a main source of applications
PDE with random coefficients
mathematical finance
In 1995 Traub and Paskov computed values of 'mortgage-backed obligations'. In the USA people buying a house borrow money from a bank. In return, the bank holds a 'mortgage'. USA mortgages last for 30 years, and may be repaid each month, making 30 × 12 = 360 repayment possibilities. The computed quantity is a 360-dimensional expected value – hence a 360-dimensional integral. The 1995 experiments used a 'Quasi-Monte Carlo' method.
The surprising message from the finance experiments: For this problem, Quasi-Monte Carlo beats Monte Carlo.
SLIDE 8
A 64-dimensional option pricing example
Here is a result of a Monte Carlo calculation in 64 dimensions for a problem of pricing a digital Asian option:
[Figure: MC estimates of the option price against $N = 2^{12}, 2^{13}, 2^{14}$; vertical axis from 0.50 to 0.52.]
SLIDE 9
A 64-dimensional option pricing example
And here's the result of a QMC calculation for the same 64-dimensional problem of pricing a digital Asian option:
[Figure: QMC estimates against $N = 2^{12}, 2^{13}, 2^{14}$, first on the same 0.50–0.52 scale as the MC plot, then zoomed to a scale of width 0.005.]
SLIDE 10 Quasi-Monte Carlo (QMC)
For QMC we take
$$I_s(F) \approx Q_{N,s}(F) := \frac{1}{N} \sum_{k=1}^{N} F(t_k),$$
with $t_1, \ldots, t_N$ deterministic (and cleverly chosen).
How to choose $t_1, \ldots, t_N$?
Low discrepancy points (Halton, Sobol′, Faure, Niederreiter, ...)
Lattice rules
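For comparison with the Monte Carlo sketch above, a hedged sketch of a QMC rule using Sobol′ points; it assumes the `qmc` module available in recent SciPy versions and reuses the same toy integrand.

```python
import numpy as np
from scipy.stats import qmc   # assumes a recent SciPy with the qmc module

s, m = 64, 12
sobol = qmc.Sobol(d=s, scramble=False)   # deterministic Sobol' points
t = sobol.random_base2(m=m)              # N = 2^12 points in [0,1)^64

F = lambda y: np.prod(1.0 + 0.1 * (y - 0.5), axis=1)   # same toy integrand
qmc_estimate = F(t).mean()
```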
SLIDE 11 Low discrepancy points and lattice rules
Algorithm: $I_s(F) \approx \frac{1}{N} \sum_{k=1}^{N} F(t_k)$, with $t_k \in [0,1]^s$.
[Figures: left, the Monte Carlo method with 64 "random" points; right, a 2D Sobol′ sequence with 64 points.]
SLIDE 12
Lattice rules
Lattice rules were introduced by number theorists in the late 1950s and 1960s – especially Korobov, Hlawka, Zaremba, and L. K. Hua. They were designed for periodic functions. But most integrands are not periodic.
SLIDE 13 We consider only the simplest kind of lattice rule, given by
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\!\left(\left\{\frac{kz}{N}\right\}\right),$$
where $z \in \{1, \ldots, N-1\}^s$, and the braces mean that each component of the $s$-vector in the braces is to be replaced by its fractional part.
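A minimal Python sketch of this rule; the example uses the $N = 34$, $z = (1, 21)$ lattice shown on the next slide, for illustration only (good choices of $z$ are the subject of the CBC construction later in the talk).

```python
import numpy as np

def lattice_points(z, N):
    """Points t_k = {k z / N}, k = 0, ..., N-1 (braces = fractional part)."""
    k = np.arange(N).reshape(-1, 1)
    return (k * np.asarray(z) / N) % 1.0

def lattice_rule(F, z, N):
    return F(lattice_points(z, N)).mean()

# The 2D example lattice shown on the next slide: N = 34, z = (1, 21).
pts = lattice_points(z=(1, 21), N=34)
```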
SLIDE 14 Example of lattice & shifted lattice rules
[Figures on the unit square: left, the lattice points for N = 34, z = (1, 21); right, the shifted lattice points for N = 34, z = (1, 21), ∆ = (0.8, 0.1).]
SLIDE 15 We consider only the simplest kind of lattice rule, given by
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\!\left(\left\{\frac{kz}{N}\right\}\right),$$
where $z \in \{1, \ldots, N-1\}^s$, and the braces mean that each component of the $s$-vector in the braces is to be replaced by its fractional part. The corresponding shifted lattice rule is
$$Q_{N,s}(z, \Delta; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\!\left(\left\{\frac{kz}{N} + \Delta\right\}\right),$$
where $\Delta \in [0,1)^s$.
SLIDE 16 What’s new?
For an important class of problems, we now know how to DESIGN lattice rules – that is, we know how to find a good $z$ in
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\!\left(\left\{\frac{kz}{N}\right\}\right)$$
to obtain a guaranteed rate of convergence, independently of dimension. The problem class is:
PDE with random coefficients
SLIDE 17 PDE with random coefficients
PDE with random coefficients are now attracting great interest.
Example: flow through a porous medium. Darcy's law is
$$q(x) = -a(x) \nabla p(x),$$
where $p = p(x)$ is the pressure of the fluid, $q(x)$ is the velocity of the fluid, and $a = a(x)$ is the "permeability" of the medium.
Incompressibility: $\nabla \cdot q = 0$.
Together these give a second order elliptic PDE: $\nabla \cdot (a(x) \nabla p(x)) = 0$.
SLIDE 18
Modelling the permeability
Describing all the microscopic pores and channels in a real material is commonly considered much too hard. So it is common engineering practice to model the permeability as a random field.
SLIDE 19 Example – the flow problem
For each realisation of $a(x)$ we need to solve the PDE plus boundary conditions. A 2-dimensional example is:
[Figure: the unit square in the $(x_1, x_2)$ plane, with $p = 1$ on the left edge, $p = 0$ on the right edge, and $\partial p/\partial n = 0$ on the top and bottom edges.]
We have Dirichlet boundary conditions on ΓD, the left and right edges, and Neumann boundary conditions on ΓN, the top and bottom edges.
SLIDE 20 Finite element method
Standard piecewise-linear finite elements:
[Figure: a triangular finite-element mesh on the same unit square, with the boundary conditions $p = 1$, $p = 0$ and $\partial p/\partial n = 0$ as before.]
SLIDE 21 Particle paths
For a particular realisation of the permeability field, after we have found the pressure field $p$, to find the position $x$ of a particle of the fluid we solve
$$\frac{dx}{dt} = q(x) = -a(x) \nabla p(x),$$
subject to $x = (0, 0.5)$ at $t = 0$.
[Figure: the unit square with the same boundary conditions, showing a computed particle path starting from (0, 0.5).]
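An illustrative sketch of this particle-tracing step with forward Euler time stepping; the fields `a` and `grad_p` below are hypothetical smooth stand-ins for the finite-element quantities, used only to make the sketch runnable.

```python
import numpy as np

def trace_particle(a, grad_p, x0, t_end, dt=1e-4):
    """Integrate dx/dt = q(x) = -a(x) grad p(x) by forward Euler."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        q = -a(x) * grad_p(x)            # Darcy velocity at current position
        x = x + dt * q
    return x

# Hypothetical smooth fields, just to make the sketch runnable:
a = lambda x: 1.0 + 0.1 * np.sin(2 * np.pi * x[0])
grad_p = lambda x: np.array([-1.0, 0.0])   # pressure dropping left to right
x_final = trace_particle(a, grad_p, x0=(0.0, 0.5), t_end=0.1)
```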
SLIDE 22 Particle displacement after time t = 0.1
[Figure: particle displacements after time t = 0.1 for many realisations of the permeability field, plotted on the unit square.]
Note: The permeability field is different for each realisation!
SLIDE 23 A simple model problem – the “uniform” case
$$-\nabla \cdot \left( a(x, y)\, \nabla u(x, y) \right) = f(x) \ \text{in } D, \qquad u(x, y) = 0 \ \text{on } \partial D, \qquad y \in U := [0,1]^{\mathbb{N}},$$
with $D$ a bounded Lipschitz domain in $\mathbb{R}^d$, and
$$a(x, y) = \bar{a}(x) + \sum_{j=1}^{\infty} \left( y_j - \tfrac{1}{2} \right) \psi_j(x), \qquad x \in D, \ y \in U,$$
where $y_1, y_2, \ldots$ are independent random variables uniformly distributed on $[0,1]$; with $\psi_j$ such that
$$\sum_{j} \|\psi_j\|_\infty < \infty,$$
and with $\bar{a}$ large enough and $\sum_{j} \|\psi_j\|_\infty$ small enough to ensure
$$a_{\max} \ge a(x, y) \ge a_{\min} > 0,$$
making the PDE strongly elliptic for every $y$. In practice we must truncate the sum after $s$ terms, so the random part of the problem is the vector $(y_1, \ldots, y_s)$, entering through the truncated field
$$a_s(x, y) := \bar{a}(x) + \sum_{j=1}^{s} \left( y_j - \tfrac{1}{2} \right) \psi_j(x).$$
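A small sketch of evaluating the truncated field $a_s(x, y)$, with hypothetical choices $\bar{a} = 1$ and $\psi_j(x) = j^{-2}\sin(j\pi x)$ (these satisfy the summability and positivity conditions above; any $\psi_j$ with small enough summable sup-norms would do).

```python
import numpy as np

def a_s(x, y, abar=lambda x: 1.0,
        psi=lambda j, x: j**-2.0 * np.sin(j * np.pi * x)):
    """Truncated field a_s(x, y) = abar(x) + sum_{j=1}^s (y_j - 1/2) psi_j(x)."""
    s = len(y)
    return abar(x) + sum((y[j - 1] - 0.5) * psi(j, x) for j in range(1, s + 1))

# One realisation with s = 100. Since sum_j ||psi_j||_inf = pi^2/6 ~ 1.64,
# we have |a_s - 1| <= 0.83, so a_min > 0 for every y in [0,1]^100.
y = np.random.default_rng(0).random(100)
value = a_s(0.3, y)
```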
SLIDE 24
Many approaches:
polynomial chaos, generalized polynomial chaos
stochastic Galerkin, stochastic collocation
Monte Carlo
multilevel Monte Carlo
Quasi-Monte Carlo
multilevel Quasi-Monte Carlo
All methods face serious challenges when the effective dimensionality is high. And when all else fails, people turn to Monte Carlo methods. We aim to beat Monte Carlo.
SLIDE 25
Many contributors:
Many important results using various methods by:
Babuska, Nobile, Tempone; Babuska, Tempone, Zouraris; Barth, Schwab, Zollinger; Charrier; Charrier, Scheichl, Teckentrup; Cliffe, Giles, Scheichl, Teckentrup; Cliffe, Graham, Scheichl, Stals; Cohen, Chkifa, Schwab; Cohen, De Vore, Schwab; Graham, Scheichl, Ullmann; Hansen, Schwab; Harbrecht, Peters, Siebenmorgen; Hoang, Schwab; Karniadakis, Xiu; Kunoth, Schwab; Nobile, Tempone, Webster; Schwab, Todor; Schillings, Schwab; Teckentrup, Scheichl, Giles, Ullmann; Webster; ...
SLIDE 26 Application of QMC to PDE with random coefficients
Graham, Kuo, Nuyens, Scheichl, Sloan (J. Comput. Physics, 2011)
- Application of QMC to the lognormal case, no error analysis
- Use circulant embedding to avoid truncation of KL expansion
Kuo, Schwab, Sloan (SIAM J. Numer. Anal., 2012)
- Application of QMC to the uniform case
[cf. Cohen, De Vore, Schwab, 2010]
- First complete error analysis: how to choose “weights”
Kuo, Schwab, Sloan (J. FoCM, to appear)
[cf. Heinrich 1998; Giles 2008]
- A multi-level version of the analysis for the uniform case
Schwab (Proceedings of MCQMC 2012)
- Application of QMC to the uniform case for affine operator equations
Le Gia (Proceedings of MCQMC 2012)
- Application of QMC to the uniform case for PDE over the sphere
SLIDE 27 Application of QMC to PDEs with random coefficients
Dick, Kuo, Le Gia, Nuyens, Schwab (SIAM J. Numer. Anal., 2014)
- Application of higher order QMC to the uniform case
Dick, Kuo, Le Gia, Schwab (submitted)
- A multi-level version of the higher order analysis for the uniform case
Dick, Le Gia, Schwab (submitted)
- Holomorphic extension of the higher order analysis for the uniform case
Graham, Kuo, Nichols, Scheichl, Schwab, Sloan (Numer. Math., 2014)
- Application of QMC to the lognormal case, full analysis and numerics
Graham, Kuo, Scheichl, Schwab, Sloan, Ullmann (in progress)
- A multi-level version of the analysis for the lognormal case
. . .
Also QMC works by Harbrecht, Peters, Siebenmorgen
SLIDE 28 The lognormal case
Recall the permeability in the uniform case:
$$a(x, y) = \bar{a}(x) + \sum_{j=1}^{\infty} \left( y_j - \tfrac{1}{2} \right) \psi_j(x), \qquad x \in D, \ y \in U.$$
In the lognormal case
$$a(x, y) = \bar{a}(x) + \exp\!\left( \sum_{j=1}^{\infty} y_j \sqrt{\mu_j}\, \xi_j(x) \right), \qquad x \in D,$$
where the $y_j$ are i.i.d. standard normal random numbers, and $\mu_j, \xi_j$ are the eigenvalues and normalized eigenfunctions of the covariance operator for the Gaussian random field in the exponent.
SLIDE 29 The sampling (or non-intrusive) approach (MC and QMC)
In the Monte Carlo approach we:
find random realisations of the field using the given probability model (e.g. in the uniform case we make independent choices of $y_1, \ldots, y_s$ from a uniform distribution on $[0,1]$);
for each such realisation of the permeability field, find an approximate solution of the flow problem;
and, to compute any desired physical quantity, average the values obtained over all the realisations.
We aim to do better than Monte Carlo, by using Quasi-Monte Carlo. A schematic sketch follows.
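Schematically, the non-intrusive approach looks as follows; `solve_flow` is a hypothetical placeholder for a finite-element solve at one realisation $y$, not the talk's actual solver.

```python
import numpy as np

def solve_flow(y):
    """Placeholder: solve the flow problem for the realisation a(., y) and
    return one physical quantity of interest (e.g. pressure at a point)."""
    return 1.0 / (1.0 + 0.1 * np.sum(y - 0.5))    # hypothetical stand-in

def expected_value_mc(s, N, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    values = [solve_flow(rng.random(s)) for _ in range(N)]   # iid uniform y
    return float(np.mean(values))

# QMC version: replace rng.random(s) by the k-th deterministic QMC point t_k.
est = expected_value_mc(s=100, N=1000)
```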
SLIDE 30
What might we want to compute?
The mean pressure at a particular point or over a particular small region
The effective permeability
The mean 'breakthrough time'
All are expected values – and expected values are integrals. If there are many random variables, then the expected values are high-dimensional integrals.
IN THIS PROJECT WE APPROXIMATE EXPECTED VALUES USING QUASI-MONTE CARLO METHODS
SLIDE 31 More generally:
Suppose the problem is to compute the expected value of $F(y) := G(u(\cdot, y))$ for some linear functional $G$ of the solution $u$ of the PDE. The expected value is in principle an infinite-dimensional integral, where the meaning is:
$$I[F] := \lim_{s \to \infty} \int_{[0,1]^s} F\!\left( y_1, \ldots, y_s, \tfrac{1}{2}, \tfrac{1}{2}, \ldots \right) dy_1 \cdots dy_s.$$
Note that replacing $y_{s+1}, y_{s+2}, \ldots$ by $\tfrac{1}{2}$ is equivalent to replacing $a(x, y)$ by
$$a_s(x, y) := \bar{a}(x) + \sum_{j=1}^{s} \left( y_j - \tfrac{1}{2} \right) \psi_j(x).$$
SLIDE 32 But if we are to use a lattice rule, how to choose z?
Recall: the lattice rule for the integral over $[0,1]^s$ is
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\!\left(\left\{\frac{kz}{N}\right\}\right).$$
So we need to choose $z$. But there is no known formula for a "good" $z$ beyond $s = 2$. Can we construct a good $z$? Yes, it's possible!
The idea: Assume that $F$ belongs to a particular Banach space $H$, and choose $z$ to make the worst-case error in $H$ small.
SLIDE 33 Worst-case error
Definition: The worst-case error in the space $H$ of a QMC rule $Q_{N,s}(P; \cdot)$ using the point set $P = \{t_0, t_1, \ldots, t_{N-1}\}$ is
$$e_{N,s}(P; H) := \sup_{\|F\|_H \le 1} \left| I_s(F) - Q_{N,s}(P; F) \right|,$$
i.e. it is the largest error of $Q_{N,s}(P; F)$ for $F$ in the unit ball of $H$.
How to choose $H$? We need to be able to COMPUTE the worst-case error. A good choice: take $H$ to be a (Hilbert) space of functions with square-integrable mixed first derivatives:
SLIDE 34 A good choice of H
A good choice for the norm squared of $H$ is
$$\|F\|^2_{s,\gamma} := \sum_{u \subseteq \{1,\ldots,s\}} \frac{1}{\gamma_u} \int_{[0,1]^{|u|}} \left( \frac{\partial^{|u|} F}{\partial y_u}\!\left( y_u; \tfrac{1}{2} \right) \right)^2 dy_u, \qquad \text{where} \quad \left( y_u; \tfrac{1}{2} \right)_j = \begin{cases} y_j & \text{if } j \in u, \\ \tfrac{1}{2} & \text{if } j \notin u. \end{cases}$$
The norm squared is the SUM OVER ALL SUBSETS $u$ OF $\{1, \ldots, s\}$.
For example, for $u = \{1, 3\}$ the corresponding term is
$$\frac{1}{\gamma_{\{1,3\}}} \int_0^1 \int_0^1 \left( \frac{\partial^2 F}{\partial y_1 \partial y_3}\!\left( y_1, \tfrac{1}{2}, y_3, \tfrac{1}{2}, \tfrac{1}{2}, \ldots \right) \right)^2 dy_1\, dy_3.$$
SLIDE 35 Weights
$$\|F\|^2_{s,\gamma} := \sum_{u \subseteq \{1,\ldots,s\}} \frac{1}{\gamma_u} \int_{[0,1]^{|u|}} \left( \frac{\partial^{|u|} F}{\partial y_u}\!\left( y_u; \tfrac{1}{2} \right) \right)^2 dy_u.$$
The norm squared is the SUM OVER ALL SUBSETS $u$ OF $\{1, \ldots, s\}$. The term for subset $u$ is divided by the weight $\gamma_u$ – so there are $2^s$ weights! The weight $\gamma_u$ is a positive number that measures the importance of the subset $u$. A small weight forces the corresponding derivative to be small. We denote by $H = H_{s,\gamma}$ the space with weights $\{\gamma_u\}$.
SLIDE 36 What’s so good about this H?
It's a reproducing kernel Hilbert space with a very simple kernel:
$$K(y, y') = \sum_{u} \gamma_u \prod_{j \in u} \eta(y_j, y'_j), \qquad \text{with} \quad \eta(y, y') = \begin{cases} \min(y, y') - \tfrac{1}{2} & \text{if } y, y' > \tfrac{1}{2}, \\ \tfrac{1}{2} - \max(y, y') & \text{if } y, y' < \tfrac{1}{2}, \\ 0 & \text{otherwise}. \end{cases}$$
What does this mean? It means $K(y, \cdot) \in H$, and
$$\langle f(\cdot), K(y, \cdot) \rangle_H = f(y) \quad \forall y \in [0,1]^s, \ \forall f \in H.$$
The great thing about a RKHS with a known kernel is that there is a simple formula for the worst-case error:
SLIDE 37 The worst case error in a RKHS
For the QMC rule with points $P = \{t_0, \ldots, t_{N-1}\}$ the squared WCE is:
$$e^2_{N,s}(P; H) = \int_{[0,1]^s} \int_{[0,1]^s} K(y, y')\, dy\, dy' - \frac{2}{N} \sum_{k=0}^{N-1} \int_{[0,1]^s} K(t_k, y)\, dy + \frac{1}{N^2} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} K(t_i, t_k).$$
Thus in a RKHS with a known kernel the WCE can be computed – except for the little fact that $2^s$ terms are needed for each $K(y, y')$!
SLIDE 38 What’s new is that we know how to choose the weights γu
Originally (IHS and Woźniakowski, 1998) there were only product weights,
$$\gamma_u = \prod_{j \in u} \alpha_j,$$
where $\alpha_1 \ge \alpha_2 \ge \ldots > 0$ (also called weights!) were introduced to describe the decreasing importance of the successive coordinate directions. For these weights the worst-case error is easily computable, since
$$K(x, y) = \sum_{u} \prod_{j \in u} \alpha_j\, \eta(x_j, y_j) = \prod_{j=1}^{s} \left( 1 + \alpha_j\, \eta(x_j, y_j) \right).$$
BUT product weights are not appropriate for our problem of PDE with random coefficients.
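To make this concrete, here is a sketch computing $e^2_{N,s}(P; H)$ for product weights from the RKHS formula above. The one-dimensional integrals of $\eta$ used below ($\iint \eta = 1/12$ and $\int_0^1 \eta(t, \cdot)\,dy = |t - \tfrac{1}{2}|(1 - |t - \tfrac{1}{2}|)/2$) are worked out by hand from the definition of $\eta$, so treat them as part of the sketch's assumptions.

```python
import numpy as np

def eta(x, y):
    """Kernel eta from the previous slide (anchored at 1/2)."""
    both_hi = (x > 0.5) & (y > 0.5)
    both_lo = (x < 0.5) & (y < 0.5)
    return np.where(both_hi, np.minimum(x, y) - 0.5,
                    np.where(both_lo, 0.5 - np.maximum(x, y), 0.0))

def sq_wce(P, alpha):
    """Squared WCE of the QMC rule with points P (shape (N, s)),
    for product weights gamma_u = prod_{j in u} alpha_j."""
    alpha = np.asarray(alpha)
    term1 = np.prod(1 + alpha / 12)                 # double integral of K
    d = np.abs(P - 0.5)
    term2 = np.prod(1 + alpha * d * (1 - d) / 2, axis=1).mean()  # single integrals
    K = np.prod(1 + alpha * eta(P[:, None, :], P[None, :, :]), axis=2)
    return term1 - 2 * term2 + K.mean()             # K.mean() = (1/N^2) double sum

rng = np.random.default_rng(0)
e2 = sq_wce(rng.random((64, 2)), alpha=(1.0, 0.25))
```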
SLIDE 39 Product weights are nice!
For product weights $\gamma_u = \prod_{j \in u} \alpha_j$, it has long been known that:
Theorem (IHS and Woźniakowski, 1998). The necessary and sufficient condition for the existence of QMC points for which the WCE is bounded independently of $s$ is
$$\sum_{j=1}^{\infty} \alpha_j < \infty.$$
This is satisfied, for example, by $\alpha_j = 1/j^2$. It is not satisfied by $\alpha_j = 1\ \forall j$.
SLIDE 40 The convergence rate can beat MC
Theorem (Sloan and Woźniakowski, 2001). If $N$ is prime, and if
$$\sum_{j=1}^{\infty} \alpha_j^{1/2} < \infty,$$
then for each $s$ there exists a shifted lattice rule $Q^{\mathrm{lattice}}_{N,s,z,\Delta}$ such that
$$e_{N,s,\gamma}\!\left( Q^{\mathrm{lattice}}_{N,s,z,\Delta} \right) \le \frac{C_{\gamma,\delta}}{N^{1-\delta}} \quad \forall \delta > 0.$$
But the original proofs of these theorems are non-constructive! They are existence proofs only.
SLIDE 41 How to choose the shift ∆? Randomly!
In the shifted lattice rule
$$Q^{\mathrm{lattice}}_{N,s,z,\Delta}(F) = \frac{1}{N} \sum_{k=1}^{N} F\!\left(\left\{\frac{kz}{N} + \Delta\right\}\right), \qquad \Delta \in [0,1)^s,$$
there are big advantages in choosing $\Delta$ randomly – that is, $\Delta$ IS CHOSEN FROM A UNIFORM DISTRIBUTION ON $[0,1)^s$.
Why randomise? Like Monte Carlo, this family gives an unbiased estimator of $I_s(F)$. As a byproduct we get an estimate of the error – in practice we choose say 30 random shifts, and evaluate the QMC rule for each of these. Then the mean gives an estimate of the integral, and the spread gives us an estimate of the error.
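A sketch of this recipe: $q$ independent random shifts give $q$ estimates, whose mean estimates the integral and whose spread estimates the error (the talk suggests around 30 shifts).

```python
import numpy as np

def shifted_lattice_estimate(F, z, N, q=30, rng=None):
    """Mean and standard error over q randomly shifted lattice rules."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z)
    base = (np.arange(N).reshape(-1, 1) * z / N) % 1.0   # points {kz/N}
    ests = np.array([F((base + rng.random(len(z))) % 1.0).mean()
                     for _ in range(q)])
    return ests.mean(), ests.std(ddof=1) / np.sqrt(q)

F = lambda y: np.prod(1.0 + 0.1 * (y - 0.5), axis=1)     # toy integrand
est, err = shifted_lattice_estimate(F, z=(1, 21), N=34)
```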
SLIDE 42 The benefits of randomising (continued)
The worst-case error becomes much easier to compute.
The worst-case error is replaced by its RMS average over shifts,
$$e^{\mathrm{sh}}_{N,s}(z; H) := \left( \int_{[0,1)^s} e^2_{N,s,\Delta}(z; H)\, d\Delta \right)^{1/2}, \qquad e_{N,s,\Delta}(z; H) := \sup_{\|F\|_H \le 1} \left| I_s(F) - Q_{N,s}(z, \Delta; F) \right|.$$
For our particular Hilbert space $H_{s,\gamma}$ it is given by
$$e^{\mathrm{sh}}_{N,s}(z; H_{s,\gamma}) = \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \left[ \frac{1}{N} \sum_{k=0}^{N-1} \prod_{j \in u} \left( B_2\!\left(\left\{ \frac{k z_j}{N} \right\}\right) + \frac{1}{12} \right) - \left( \frac{1}{12} \right)^{|u|} \right] \right)^{1/2},$$
where $B_2(x) := x^2 - x + 1/6$ is the Bernoulli polynomial of degree 2.
For the shift-averaged WCE we can prove that a good $z$ exists. And we can even construct (fast!) a $z$ for which this holds.
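Assuming the formula as reconstructed above, for product weights $\gamma_u = \prod_{j \in u} \alpha_j$ the sum over $2^s$ subsets collapses into a product over coordinates, so the shift-averaged WCE costs only $O(sN)$ to evaluate; a sketch:

```python
import numpy as np

def B2(x):
    return x * x - x + 1.0 / 6.0            # Bernoulli polynomial B_2

def sq_shift_avg_wce(z, N, alpha):
    """Shift-averaged squared WCE of the lattice rule with generator z,
    for product weights gamma_u = prod_{j in u} alpha_j."""
    z, alpha = np.asarray(z), np.asarray(alpha)
    pts = (np.arange(N).reshape(-1, 1) * z / N) % 1.0    # {k z_j / N}
    per_point = np.prod(1 + alpha * (B2(pts) + 1.0 / 12.0), axis=1)
    return per_point.mean() - np.prod(1 + alpha / 12.0)

e2 = sq_shift_avg_wce(z=(1, 21), N=34, alpha=(1.0, 0.25))
```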
SLIDE 43 Constructing randomly shifted lattice rules
The component-by-component (or CBC) algorithm (Korobov; IHS and Reztsov; Kuo; Joe).
With CBC, a good generator $z = (z_1, \ldots, z_s)$ is constructed one component at a time:
choose $z_1 = 1$;
choose $z_2$ to minimise $e_{N,s,2}(z_1, z_2)$;
then choose $z_3$ to minimise $e_{N,s,3}(z_1, z_2, z_3)$;
then ...
so that at each step there are only (at most) $N - 1$ choices.
Here $e_{N,s,d}(z_1, \ldots, z_d)$ is related in a known way to $e^{\mathrm{sh}}_{N,d}(z_1, \ldots, z_d)$ – for details see Dick, Kuo, Sloan, Acta Numerica 2013. A naive implementation is sketched below.
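A naive $O(sN^2)$ sketch of CBC for product weights and the shift-averaged WCE above (fast CBC, discussed next, reduces the cost to $O(sN \log N)$); it exploits the fact that the subtracted constant in the squared WCE does not depend on the candidate component, so minimising the per-point product average is enough.

```python
import numpy as np

def B2(x):
    return x * x - x + 1.0 / 6.0

def cbc(N, s, alpha):
    """Choose z component by component for product weights alpha_1..alpha_s,
    minimising the shift-averaged squared WCE at each step."""
    k = np.arange(N)
    prod_so_far = np.ones(N)    # running product over already-chosen components
    z = []
    for d in range(s):
        best = None
        for zd in range(1, N):                       # at most N - 1 candidates
            pts = (k * zd % N) / N                   # {k zd / N}
            trial = prod_so_far * (1 + alpha[d] * (B2(pts) + 1.0 / 12.0))
            val = trial.mean()   # the subtracted constant is independent of zd
            if best is None or val < best[1]:
                best = (zd, val, trial)
        z.append(best[0])
        prod_so_far = best[2]
    return z

z = cbc(N=97, s=5, alpha=[1.0 / j**2 for j in range(1, 6)])
```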
SLIDE 44
Fast CBC exists for product weights and shift-averaged WCEs
For product weights, Cools and Nuyens (2006) developed fast CBC, which requires a time of order only O(s N log N). The Nuyens and Cools implementation allows the CBC algorithm for product weights to be run with s in the thousands and N in the millions.
SLIDE 45 CBC with random shifts has small error
THEOREM (Sloan, Kuo, Joe, 2002). Let $N$ be prime, and let $z_1, \ldots, z_s$ be chosen by the new algorithm. Assume product weights $\gamma_u = \prod_{j \in u} \alpha_j$ such that $\sum_{j=1}^{\infty} \alpha_j < \infty$. Then
$$e^{\mathrm{sh}}_{N,s,\gamma}(z) \le \frac{D_{\gamma}}{N^{1/2}}.$$
SLIDE 46 Numerical Example
EXAMPLE: $\gamma_j = 1/j^2$, $j = 1, 2, \ldots$

                 N = 2,003                    N = 32,003
   s      z_s    e^sh_{N,s,γ}(z)       z_s    e^sh_{N,s,γ}(z)
   1        1    2.04(-4)                1    1.28(-5)
   2      765    3.02(-4)             9376    2.03(-5)
   3      605    3.72(-4)            11835    2.58(-5)
  ...     ...    ...                   ...    ...
  31      450    6.51(-4)            13604    5.51(-5)
  32      241    6.53(-4)             3393    5.53(-5)
  ...     ...    ...                   ...    ...
  99      973    7.01(-4)            15017    6.07(-5)
 100      304    7.02(-4)            10489    6.08(-5)

The apparent rate of convergence is better than $O(1/\sqrt{N})$.
SLIDE 47 The CBC algorithm has optimal convergence!
THEOREM (Frances Kuo, J. Complexity, 2003). Let $N$ be prime, and let $z_1, z_2, \ldots, z_s$ be chosen by the new algorithm, with product weights satisfying
$$\sum_{j=1}^{\infty} \alpha_j^{1/2} < \infty.$$
Then $\forall \delta > 0$
$$e^{\mathrm{sh}}_{N,s,\gamma}(z) \le \frac{C_{\gamma,\delta}}{N^{1-\delta}}.$$
Thus the optimal rate is achieved by the new algorithm!
SLIDE 48 So there is a good story for product weights $\gamma_u = \prod_{j \in u} \alpha_j$. But as we shall see, product weights are not the right weights for PDE with random coefficients. So now back to general weights $\gamma_u$.
SLIDE 49 Theorem (many authors)
For $N$ prime, the CBC algorithm with general weights $\gamma_u$ produces a $z \in \{1, \ldots, N-1\}^s$ such that for all $\lambda \in (1/2, 1]$,
$$e^{\mathrm{sh}}_{N,s}(z; H_{s,\gamma}) \le \left( \frac{1}{N-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u^\lambda\, A_u(\lambda) \right)^{1/(2\lambda)}, \qquad A_u(\lambda) = \left( \frac{2\zeta(2\lambda)}{(2\pi^2)^\lambda} + \left( \frac{1}{12} \right)^\lambda \right)^{|u|}.$$
If $N$ is not prime, replace $N - 1$ by Euler's totient function $\varphi(N)$.
We can take $\lambda$ as close as we like to $1/2$, so convergence is $O(N^{-1+\delta})$; but note that $\zeta(2\lambda) \to \infty$ as $2\lambda \to 1^{+}$. Here $\zeta(x) := \sum_{j=1}^{\infty} j^{-x}$, $x > 1$.
SLIDE 50 So how to choose the right weights?
If product weights are not the best weights, how do we decide what the best weights are? Recall the worst-case error for integration over $[0,1]^s$:
$$e_{N,s,\gamma}(t_1, \ldots, t_N) := \sup_{\|F\|_{s,\gamma} \le 1} \left| I_s(F) - Q_{s,N}(F) \right|.$$
Now that we are given a particular $F$, we can use the corresponding error bound:
$$|I_s(F) - Q_{s,N}(F)| \le e_{N,s,\gamma}(t_1, \ldots, t_N) \times \|F\|_{s,\gamma}$$
SLIDE 51 So how to choose the right weights?
If product weights are not the best weights, how do we decide what the best weights are? Recall the worst-case error for integration over $[0,1]^s$:
$$e_{N,s,\gamma}(t_1, \ldots, t_N) := \sup_{\|F\|_{s,\gamma} \le 1} \left| I_s(F) - Q_{s,N}(F) \right|.$$
Now that we are given a particular $F$, we can use the corresponding error bound:
$$|I_s(F) - Q_{s,N}(F)| \le e_{N,s,\gamma}(t_1, \ldots, t_N) \times \|F\|_{s,\gamma},$$
and choose weights that minimize the right-hand side – or some upper bound on the right-hand side.
SLIDE 52 The norm of F
For the norm of $F$, remember that the norm involves the mixed first derivatives of $F$ with respect to the components of $y$. In our PDE case we estimate these by differentiating the PDE with respect to the components of $y$. By using PDE tricks we finish up with
$$\|F\|_{s,\gamma} \le \left( \sum_{|u| < \infty} \frac{b_u}{\gamma_u} \right)^{1/2} \frac{\|G\|\, \|f\|_{H^{-1}(D)}}{a_{\min}}, \qquad \text{where} \quad b_u = (|u|!)^2 \prod_{j \in u} \left( \frac{\|\psi_j\|_\infty}{a_{\min}} \right)^2.$$
Is this sum finite? It can be – if $\gamma_u$ decays slowly enough.
SLIDE 53 Now to choose the weights:
Recall that we want to minimise the error bound, which is the product of the shift-averaged worst-case error and the norm of $F$.
The worst-case error (for a good choice of $z$) is bounded by:
$$e^{\mathrm{sh}}_{N,s}(z; H_{s,\gamma}) \le \left( \frac{1}{N-1} \sum_{\emptyset \ne u} \gamma_u^\lambda\, A_u(\lambda) \right)^{1/(2\lambda)},$$
and the norm of $F$ is bounded by
$$\|F\|_{s,\gamma} \le \left( \sum_{|u| < \infty} \frac{B_u}{\gamma_u} \right)^{1/2} \frac{\|G\|\, \|f\|_{H^{-1}(D)}}{a_{\min}}.$$
SLIDE 54 We now choose weights to minimise
$$\left( \sum_{u} \gamma_u^\lambda A_u \right)^{1/(2\lambda)} \times \left( \sum_{|u| < \infty} \frac{B_u}{\gamma_u} \right)^{1/2} \quad \Longrightarrow \quad \gamma_u = \left( \frac{B_u}{A_u} \right)^{1/(1+\lambda)}.$$
Thus
$$\gamma_u = \left( |u|! \right)^{\frac{2}{1+\lambda}} \prod_{j \in u} \alpha_j$$
– "POD" (for "product and order dependent") weights, where
$$\alpha_j = \left( \frac{\|\psi_j\|_\infty / a_{\min}}{\sqrt{2\zeta(2\lambda)/(2\pi^2)^\lambda + (1/12)^\lambda}} \right)^{2/(1+\lambda)}.$$
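A tiny sketch of evaluating a single POD weight for a given subset $u$, with hypothetical $\alpha_j$ values:

```python
from math import factorial

def pod_weight(u, alpha, lam):
    """POD weight gamma_u = (|u|!)^(2/(1+lambda)) * prod_{j in u} alpha_j.
    u is a set of 1-based coordinate indices; alpha[j-1] is alpha_j."""
    prod = 1.0
    for j in u:
        prod *= alpha[j - 1]
    return factorial(len(u)) ** (2.0 / (1.0 + lam)) * prod

# Hypothetical alpha_j, decaying with j, and lambda a little above 1/2:
alpha = [0.9 ** j for j in range(1, 11)]
gamma_13 = pod_weight({1, 3}, alpha, lam=0.6)
```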
SLIDE 55 Fast CBC
Fast CBC also exists for POD weights (Kuo, Schwab, IHS, ANZIAM J., 2012): only a small modification is needed to an already existing CBC algorithm for "order-dependent" weights $\gamma_u = \Gamma_{|u|}$ (IHS, Wang and Woźniakowski, 2004).
SLIDE 56 For the PDE application assuming exact PDE solution
Convergence theorem. If we choose the weights $\gamma_u$ for each subset $u$ to be the POD weights above, and if $z$ is constructed by CBC, then for all $s$ and all $\delta > 0$
$$\sqrt{\mathbb{E}\!\left[\, |I_s(G(u)) - Q_{s,N,\Delta}(G(u))|^2 \,\right]} \le \frac{C_\delta}{N^{1-\delta}}, \quad \text{provided} \quad \sum_{j=1}^{\infty} \|\psi_j\|_\infty^{2/3} < \infty,$$
with appropriately slower convergence if the norms $\|\psi_j\|_\infty$ approach zero more slowly.
SLIDE 57 Numerical test
[Figure: standard error against N (from 10^2 to 10^5, log–log scale, error from 10^{-7} to 10^{-2}) for the test problem with ν = 0.75, σ² = 0.25, λ = 0.1: QMC with empirical rate r = −0.811, MC with empirical rate r = −0.55.]
SLIDE 58 More generally,
for our PDE with random coefficients, if
$$\sum_{j=1}^{\infty} \|\psi_j\|_\infty^{p} < \infty$$
for some $p < 1$, we can construct QMC rules for which the integration error in computing expected values of functionals of the solution is of order
$$N^{-(1-\delta)} \ \text{if } p \in (0, \tfrac{2}{3}], \qquad N^{-(1/p - 1/2)} \ \text{for larger } p < 1,$$
and hence provably faster than the Monte Carlo rate $N^{-1/2}$ for all $p < 1$. And the constant is bounded independently of dimension – so lifting the "curse of dimensionality".
SLIDE 59 The full error
The full error includes not only the integration error, but also:
error from the finite-element approximation;
error from truncation of the infinite expansion for $a(x, y)$ after $s$ terms.
When the field $a(\cdot, y)$ is rough, $s$ may need to be very large, e.g. in the thousands, to keep the truncation error small. But to us that doesn't matter, because the integration error is independent of $s$, and the cost of including more terms in the expansion is only linear in $s$.
SLIDE 60 What else is new?
Similar analysis of the lognormal case. Analysis of a multilevel method for both uniform and lognormal cases [Kuo, Schwab, Sloan (to appear)]:
$$I(G(u)) \approx Q_L^{*}(G(u)) = \sum_{\ell=0}^{L} Q_{s_\ell, n_\ell}\!\left( G\!\left( u^{s_\ell}_{h_\ell} - u^{s_{\ell-1}}_{h_{\ell-1}} \right) \right),$$
with total cost of order $\sum_{\ell=0}^{L} s_\ell\, n_\ell\, h_\ell^{-d}$.
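Schematically, the multilevel estimator combines per-level (Q)MC averages of differences between successive levels. In this hedged sketch, `G_level` is a hypothetical stand-in for $G$ evaluated at the level-$\ell$ finite-element/truncation solution, and plain random points stand in for the QMC points.

```python
import numpy as np

def G_level(l, y):
    """Hypothetical stand-in for G(u^{s_l}_{h_l}) at realisation y;
    meant to become more accurate as the level l grows."""
    return 1.0 / (1.0 + np.sum(y - 0.5) / (l + 1))

def multilevel_estimate(L, n, s, rng=None):
    """Telescoping sum: level 0 estimates G(u_0); level l >= 1 estimates
    the expected difference G(u_l) - G(u_{l-1}) with its own sample size n[l]."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for l in range(L + 1):
        ys = rng.random((n[l], s))               # QMC points in practice
        diffs = [G_level(l, y) - (G_level(l - 1, y) if l > 0 else 0.0)
                 for y in ys]
        total += float(np.mean(diffs))
    return total

est = multilevel_estimate(L=3, n=[4000, 1000, 250, 60], s=20)
```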
SLIDE 61 What’s newer still?
We know how to construct higher order QMC rules, with e.g. $O(N^{-2})$ convergence, and even how to apply them to the above PDE problem (Dick, Kuo, Le Gia, Nuyens and Schwab, submitted, for the uniform case). The best convergence rate achievable with a lattice rule is (close to) $O(N^{-1})$. Lattice rules are now replaced by interlaced polynomial lattice rules, and POD weights by SPOD weights (standing for "smoothness-driven product and order dependent"). Now there is no randomisation, and there are improved theoretical results when the random field is smooth enough: the convergence rate is .....
SLIDE 62 First higher order results
From Gantner and Schwab, submitted, for a model problem which is an algebraic mock-up of a PDE:
$$F(y) := \left( 1 + \theta \sum_{j=1}^{s} a_j y_j \right)^{-1}.$$
SLIDE 63 Robert Gantner
[Figure: convergence of the quadrature error for s = 128, plotted against the number of points N = 2^m (N from 10^0 to 10^7, error from 10^{-14} to 10^{-1}, log–log scale); curves for an interlaced polynomial lattice rule with α = 2, an ordinary lattice rule, and the MC L2 error over 20 repetitions, together with reference slopes of orders N^{-1} and N^{-2}.]
SLIDE 64 General references
F. Y. Kuo and I. H. Sloan, Lifting the curse of dimensionality, Notices of the American Mathematical Society, December 2005.
J. Dick, F. Y. Kuo and I. H. Sloan, High-dimensional integration: the quasi-Monte Carlo way, Acta Numerica, 2013.
J. Dick and F. Pillichshammer, Digital Nets and Sequences, Cambridge University Press, 2010.
SLIDE 65
Collaborators
Josef Dick, UNSW
Ivan Graham, Bath
Michael Griebel, Bonn
Fred Hickernell, IIT
Stephen Joe, Waikato
Frances Kuo, UNSW
Q. Thong Le Gia, UNSW
James Nichols, UNSW
Erich Novak, Jena
Dirk Nuyens, Leuven
Leszek Plaskota, Warsaw
Andrei Reztsov
Rob Scheichl, Bath
Christoph Schwab, ETH
Elisabeth Ullmann, Hamburg
Grzegorz Wasilkowski, Kentucky
Henryk Woźniakowski, Warsaw and Columbia