Quasi-Monte Carlo integration over the Euclidean space and applications

Frances Kuo
f.kuo@unsw.edu.au, University of New South Wales, Sydney, Australia
Joint work with James Nichols (UNSW). Journal of Complexity 30 (2014) 444–468.


SLIDE 1

Quasi-Monte Carlo integration over the Euclidean space and applications

Frances Kuo
f.kuo@unsw.edu.au, University of New South Wales, Sydney, Australia

Joint work with James Nichols (UNSW). Journal of Complexity 30 (2014) 444–468.

SLIDE 2

MC vs. QMC in the unit cube

\[ \int_{[0,1]^s} g(\boldsymbol{x})\,d\boldsymbol{x} \;\approx\; \frac{1}{n}\sum_{i=1}^{n} g(\boldsymbol{t}_i) \]

Monte Carlo method: t_i random uniform; n^{-1/2} convergence.
Quasi-Monte Carlo methods: t_i deterministic; close to n^{-1} convergence or better.

[Figures: 64 random points; first 64 points of a 2D Sobol′ sequence; a lattice rule with 64 points]

  • QMC is more effective for earlier variables and lower-order projections
  • MC: order of variables irrelevant
  • QMC: order of variables very important
  • Use randomized QMC methods for error estimation
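The comparison above can be sketched numerically. This is an illustrative check only (the integrand g and all parameters are my choices, not from the talk), using SciPy's scrambled Sobol′ generator for the QMC points:

```python
import numpy as np
from scipy.stats import qmc

# Illustrative smooth integrand on [0,1]^s with exact integral 1
# (a product of univariate factors, each integrating to 1).
def g(x):
    return np.prod(1.0 + (x - 0.5) / np.sqrt(x.shape[1]), axis=1)

s, n = 4, 2**10
rng = np.random.default_rng(0)

# Monte Carlo: i.i.d. uniform points t_i, error ~ n^{-1/2}
t_mc = rng.random((n, s))
err_mc = abs(g(t_mc).mean() - 1.0)

# Quasi-Monte Carlo: first n points of a scrambled s-dimensional Sobol' sequence
t_qmc = qmc.Sobol(d=s, scramble=True, seed=0).random(n)
err_qmc = abs(g(t_qmc).mean() - 1.0)
```

Scrambling the Sobol′ points is one way to get the randomized QMC error estimation mentioned in the last bullet.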

SLIDE 3

QMC

Two main families of QMC methods:

  • (t,m,s)-nets and (t,s)-sequences: having the right number of points in various sub-cubes. [Niederreiter book (1992)]
  • Lattice rules: a group under addition modulo Z that includes the integer points. [Sloan and Joe book (1994)]

[Figures: first 64 points of a 2D Sobol′ sequence (a (0,6,2)-net); a lattice rule with 64 points]

Important developments: component-by-component (CBC) construction; higher order digital nets.
[Dick and Pillichshammer book (2010); Dick, Kuo, Sloan Acta Numerica (2013)]
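As a concrete sketch of the lattice-rule family: a rank-1 lattice rule generates all n points from a single integer vector z. The generating vector below is purely illustrative (good vectors come from the CBC constructions mentioned above):

```python
import numpy as np

def lattice_points(n, z):
    """Points of a rank-1 lattice rule: t_i = frac(i * z / n), i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

# 64 points in 2D; the point set is a group under addition modulo 1
pts = lattice_points(64, [1, 19])
```

The group property means adding any two lattice points (mod 1) gives another lattice point, matching the description on the slide.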

SLIDE 4

Standard QMC analysis

Worst case error bound:

\[ \left| \int_{[0,1]^s} g(\boldsymbol{x})\,d\boldsymbol{x} - \frac{1}{n}\sum_{i=1}^{n} g(\boldsymbol{t}_i) \right| \;\le\; e^{\mathrm{wor}}(\boldsymbol{t}_1,\dots,\boldsymbol{t}_n)\,\|g\| \]

"Standard" setting: weighted Sobolev space:

\[ \left| \int_{[0,1]^s} g(\boldsymbol{x})\,d\boldsymbol{x} - \frac{1}{n}\sum_{i=1}^{n} g(\boldsymbol{t}_i) \right| \;\le\; e^{\mathrm{wor}}_{\boldsymbol{\gamma}}(\boldsymbol{t}_1,\dots,\boldsymbol{t}_n)\,\|g\|_{\boldsymbol{\gamma}} \]

\[ \|g\|_{\boldsymbol{\gamma}}^2 \;=\; \sum_{\mathfrak{u} \subseteq \{1{:}s\}} \frac{1}{\gamma_{\mathfrak{u}}} \int_{[0,1]^{|\mathfrak{u}|}} \left( \frac{\partial^{|\mathfrak{u}|} g}{\partial \boldsymbol{x}_{\mathfrak{u}}}(\boldsymbol{x}_{\mathfrak{u}};\boldsymbol{0}) \right)^{\!2} d\boldsymbol{x}_{\mathfrak{u}} \]

  • 2^s subsets; "anchor" at 0 (also an "unanchored" variant); γ_u are the "weights"
  • Mixed first derivatives are square integrable
  • Small weight γ_u means that g depends weakly on the variables x_u

Choose weights to minimize the error bound:

\[ \underbrace{\left( \frac{2}{n} \sum_{\mathfrak{u} \subseteq \{1{:}s\}} \gamma_{\mathfrak{u}}^{\lambda}\, a_{\mathfrak{u}} \right)^{1/(2\lambda)}}_{\text{bound on worst case error (CBC)}} \; \underbrace{\left( \sum_{\mathfrak{u} \subseteq \{1{:}s\}} \frac{b_{\mathfrak{u}}}{\gamma_{\mathfrak{u}}} \right)^{1/2}}_{\text{bound on norm}} \quad\Rightarrow\quad \gamma_{\mathfrak{u}} = \left( \frac{b_{\mathfrak{u}}}{a_{\mathfrak{u}}} \right)^{1/(1+\lambda)} \]

Construct points (CBC) to minimize the worst case error.
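The weight-choice step above is a small computation once the coefficients are known. In the sketch below the values a_u and b_u are toy numbers (the real ones come from the error-bound and norm-bound expressions):

```python
def optimal_weights(a, b, lam):
    """gamma_u = (b_u / a_u)^(1/(1+lambda)), minimizing the product bound."""
    return {u: (b[u] / a[u]) ** (1.0 / (1.0 + lam)) for u in a}

# Toy a_u, b_u over the nonempty subsets of {1, 2} (illustrative only)
subsets = [(1,), (2,), (1, 2)]
a = {u: 2.0 ** len(u) for u in subsets}
b = {u: 0.5 ** len(u) for u in subsets}
gammas = optimal_weights(a, b, lam=0.5)
```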

SLIDE 5

Practical problems do not fit...

Practical integral over the Euclidean space:

\[ \int_{\mathbb{R}^s} q(\boldsymbol{z})\,\rho(\boldsymbol{z})\,d\boldsymbol{z} \;=\; \int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^{s} \phi(y_j)\,d\boldsymbol{y} \]

  • 1. Transformation (translation, rotation, rescaling); φ is any univariate pdf

\[ =\; \int_{[0,1]^s} f(\Phi^{-1}(\boldsymbol{x}))\,d\boldsymbol{x} \]

  • 2. Mapping to the unit cube; φ: pdf, Φ: cdf, Φ^{-1}: inverse cdf

\[ \approx\; \frac{1}{n}\sum_{i=1}^{n} f(\Phi^{-1}(\boldsymbol{t}_i)) \]

  • 3. Applying QMC

Transformed integrand g = f ∘ Φ^{-1} rarely falls in the "standard" setting!
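The three steps can be sketched for a Gaussian toy problem (the integrand f and all parameters are my own illustration, chosen so the exact answer is known); step 2 uses the inverse normal cdf:

```python
import math

import numpy as np
from scipy.stats import norm, qmc

# Toy f with a known Gaussian integral: E[exp(0.1 * sum(y))] for y ~ N(0, I_s)
# equals exp(s * 0.1^2 / 2).
def f(y):
    return np.exp(0.1 * y.sum(axis=1))

s, n = 3, 2**12
x = qmc.Sobol(d=s, scramble=True, seed=1).random(n)  # 3. QMC points in [0,1]^s
y = norm.ppf(x)                                      # 2. map through Phi^{-1}
estimate = f(y).mean()
exact = math.exp(s * 0.01 / 2)
```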

SLIDE 6

Application 1: option pricing

Black-Scholes model: stock price follows a geometric Brownian motion.

\[ \int_{\mathbb{R}^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(\boldsymbol{z}) - K,\, 0 \right) \frac{\exp(-\tfrac{1}{2}\boldsymbol{z}^{\mathrm{T}}\Sigma^{-1}\boldsymbol{z})}{\sqrt{(2\pi)^s \det(\Sigma)}}\, d\boldsymbol{z}, \qquad S_j(\boldsymbol{z}) = \exp(\cdots a_j z_j) \]

[Figure: standard error vs. n (10^2 to 10^4) for MC, naive QMC, and QMC + PCA]

\[ =\; \int_{\mathbb{R}^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(A\boldsymbol{y}) - K,\, 0 \right) \prod_{j=1}^{s} \phi_{\mathrm{nor}}(y_j)\, d\boldsymbol{y} \]

  • 1. Transformation (RW/BB/PCA): write Σ = AA^T and substitute z = Ay

\[ =\; \int_{[0,1]^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(A\,\Phi_{\mathrm{nor}}^{-1}(\boldsymbol{x})) - K,\, 0 \right) d\boldsymbol{x} \]

  • 2. Mapping to the cube: substitute y = Φ_nor^{-1}(x)

Issues: unbounded near the boundary of the unit cube, and a kink, i.e., no square-integrable mixed first derivatives.
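The pricing pipeline above can be sketched end to end for an arithmetic-average (Asian) call. All model parameters below are illustrative, not from the talk, and the PCA factorization stands in for the RW/BB/PCA choice:

```python
import numpy as np
from scipy.stats import norm, qmc

s, n = 16, 2**12                            # monitoring dates, QMC points
T, sigma, r, S0, K = 1.0, 0.2, 0.05, 100.0, 100.0
t = (T / s) * np.arange(1, s + 1)
Sigma = sigma**2 * np.minimum.outer(t, t)   # covariance of sigma * B(t_j)

# 1. Transformation by PCA: Sigma = A A^T with eigen-columns ordered by
#    decreasing eigenvalue, so early QMC variables carry most variance.
lam, U = np.linalg.eigh(Sigma)
order = np.argsort(lam)[::-1]
A = U[:, order] * np.sqrt(lam[order])

# 2.-3. Map scrambled Sobol' points to R^s and average the discounted payoff.
x = qmc.Sobol(d=s, scramble=True, seed=2).random(n)
z = norm.ppf(x) @ A.T                       # rows are paths of sigma * B(t)
S = S0 * np.exp((r - 0.5 * sigma**2) * t + z)
price = np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0).mean()
```

Reordering the eigenvalues is what makes PCA effective here: QMC points are most uniform in their earliest coordinates, matching the "earlier variables" remark on slide 2.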

SLIDE 7

Application 2: maximum likelihood

Generalized linear mixed model [K., Dunsmuir, Sloan, Wand, Womersley (2008)]:

\[ \int_{\mathbb{R}^s} \prod_{j=1}^{s} \frac{\exp(\tau_j(\beta + z_j) - e^{\beta + z_j})}{\tau_j!} \; \frac{\exp(-\tfrac{1}{2}\boldsymbol{z}^{\mathrm{T}}\Sigma^{-1}\boldsymbol{z})}{\sqrt{(2\pi)^s \det(\Sigma)}}\, d\boldsymbol{z} \]

If we write Σ = AA^T and substitute z = Ay followed by y = Φ_nor^{-1}(x), then we get very bad results...

[Figure: no re-scaling (bad); no centering and no re-scaling (worse)]

SLIDE 8

Application 2: maximum likelihood

Generalized linear mixed model [K., Dunsmuir, Sloan, Wand, Womersley (2008)]:

\[ \int_{\mathbb{R}^s} \exp(T(\boldsymbol{z}))\, d\boldsymbol{z} \;=\; c \int_{\mathbb{R}^s} \underbrace{\exp(T(A_*\boldsymbol{y} + \boldsymbol{z}_*)) \prod_{j=1}^{s} \frac{1}{\phi(y_j)}}_{f(\boldsymbol{y})} \; \prod_{j=1}^{s} \phi(y_j)\, d\boldsymbol{y} \]

  • 1. Transformation (centering, re-scaling): substitute z = A_* y + z_*

\[ =\; c \int_{(0,1)^s} \exp(T(A_*\,\Phi^{-1}(\boldsymbol{x}) + \boldsymbol{z}_*)) \prod_{j=1}^{s} \frac{1}{\phi(\Phi^{-1}(x_j))}\, d\boldsymbol{x}, \qquad g(\boldsymbol{x}) = f(\Phi^{-1}(\boldsymbol{x})) \]

  • 2. Mapping to the cube (φ free to choose): substitute y = Φ^{-1}(x)

[Figure: φ normal (good); φ logistic (better); φ Student-t (best)]

Issues: unbounded near the boundary of the unit cube, or huge derivatives near the boundary of the unit cube.
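The boundary behaviour flagged above can be probed directly: the factor 1/φ(Φ^{-1}(x_j)) in the transformed integrand blows up as x_j approaches 0 or 1, at a rate depending on the chosen φ. A small numeric probe (my own illustration, using SciPy's standard distributions):

```python
import numpy as np
from scipy import stats

# Evaluate 1/phi(Phi^{-1}(x)) near the left boundary for three choices of phi.
x = np.array([1e-2, 1e-4, 1e-6])
growth = {}
for name, dist in [("normal", stats.norm),
                   ("logistic", stats.logistic),
                   ("student-t(5)", stats.t(5))]:
    y = dist.ppf(x)               # Phi^{-1}(x)
    growth[name] = 1.0 / dist.pdf(y)
```

For the logistic distribution the factor has the closed form 1/(x(1-x)), which the probe reproduces.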

SLIDE 9

Application 3: PDE with random coeff.

Elliptic PDE with lognormal random coefficient:

\[ -\nabla \cdot (a(x, \boldsymbol{y})\, \nabla u(x, \boldsymbol{y})) = \mathrm{forcing}(x), \qquad x \in D \subseteq \mathbb{R}^d, \; d = 1, 2, 3 \]

\[ a(x, \boldsymbol{y}) = \exp\!\left( \sum_{j=1}^{s} \sqrt{\mu_j}\, \xi_j(x)\, y_j \right), \qquad y_j \sim \text{i.i.d. normal} \]

\[ \int_{\mathbb{R}^s} G(u(\cdot, \boldsymbol{y})) \prod_{j=1}^{s} \phi_{\mathrm{nor}}(y_j)\, d\boldsymbol{y} \;=\; \int_{[0,1]^s} G(u(\cdot, \Phi_{\mathrm{nor}}^{-1}(\boldsymbol{x})))\, d\boldsymbol{x}, \qquad G \text{ a linear functional} \]

[Graham, K., Nuyens, Scheichl, Sloan (2010)]: circulant embedding
[K., Schwab, Sloan (2011, 2012, 2014)]: "uniform" case, POD weights, fast CBC, multilevel

Differentiate the PDE to estimate the norm of the integrand. Minimize the error bound ⇒ POD weights \( \gamma_{\mathfrak{u}} = \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j \).

[Dick, K., Le Gia, Nuyens, Schwab (2014)]: higher order
[Dick, K., Le Gia, Schwab (2014)]: higher order, multilevel; etc.
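The POD ("product and order dependent") weight formula above is simple to evaluate once Γ_k and γ_j are fixed; the values below are illustrative placeholders, not from the talk:

```python
import math

# POD weights: gamma_u = Gamma_{|u|} * prod_{j in u} gamma_j
def pod_weight(u, Gamma, gamma):
    p = 1.0
    for j in u:
        p *= gamma[j]
    return Gamma[len(u)] * p

Gamma = [math.factorial(k) for k in range(4)]  # e.g. Gamma_k = k!
gamma = {1: 0.9, 2: 0.5, 3: 0.25}              # per-coordinate weights
w = pod_weight((1, 3), Gamma, gamma)           # weight of u = {1, 3}
```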

SLIDE 10

A non-standard setting

Change of variables:

\[ \int_{\mathbb{R}^s} q(\boldsymbol{z})\,\rho(\boldsymbol{z})\,d\boldsymbol{z} \;=\; \int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^{s} \phi(y_j)\,d\boldsymbol{y} \;=\; \int_{[0,1]^s} f(\Phi^{-1}(\boldsymbol{x}))\,d\boldsymbol{x} \]

g = f ∘ Φ^{-1} rarely belongs to the weighted Sobolev space.

Non-standard norm [Wasilkowski & Woźniakowski (2000)]:

\[ \|f\|_{\boldsymbol{\gamma}}^2 \;=\; \sum_{\mathfrak{u} \subseteq \{1{:}s\}} \frac{1}{\gamma_{\mathfrak{u}}} \int_{\mathbb{R}^{|\mathfrak{u}|}} \left( \frac{\partial^{|\mathfrak{u}|} f}{\partial \boldsymbol{y}_{\mathfrak{u}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{0}) \right)^{\!2} \prod_{j \in \mathfrak{u}} \psi^2(y_j)\, d\boldsymbol{y}_{\mathfrak{u}} \]

with ψ a weight function.

Nichols & K. (2014), cf. [K., Sloan, Wasilkowski, Waterhouse (2010)]:

  • also an unanchored variant, with coordinate-dependent ψ_j
  • randomly shifted lattice rules
  • CBC error bound for general weights γ_u
  • convergence rate depends on the relationship between φ and ψ
  • fast CBC for POD weights \( \gamma_{\mathfrak{u}} = \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j \)

Important for applications: φ and ψ and γ_u are up to us to choose (tune).
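The randomly shifted lattice rules listed above can be sketched as follows: each uniform random shift gives an unbiased estimate, and the spread across shifts yields a practical error estimate. The generating vector and integrand are illustrative choices of mine:

```python
import numpy as np

def shifted_lattice_estimates(g, n, z, n_shifts, rng):
    """One estimate per independent random shift of a rank-1 lattice rule."""
    i = np.arange(n)[:, None]
    base = (i * np.asarray(z)[None, :] / n) % 1.0
    return np.array([g((base + rng.random(len(z))) % 1.0).mean()
                     for _ in range(n_shifts)])

rng = np.random.default_rng(3)
g = lambda x: np.prod(x, axis=1)   # toy integrand, exact integral 1/4 in 2D
ests = shifted_lattice_estimates(g, 64, [1, 19], 16, rng)
mean = ests.mean()
stderr = ests.std(ddof=1) / np.sqrt(len(ests))
```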

SLIDE 11

Application 3: PDE with random coeff.

Elliptic PDE with lognormal random coefficient:

\[ -\nabla \cdot (a(x, \boldsymbol{y})\, \nabla u(x, \boldsymbol{y})) = \mathrm{forcing}(x), \qquad x \in D \subseteq \mathbb{R}^d, \; d = 1, 2, 3 \]

\[ a(x, \boldsymbol{y}) = \exp\!\left( \sum_{j=1}^{s} \sqrt{\mu_j}\, \xi_j(x)\, y_j \right), \qquad y_j \sim \text{i.i.d. normal} \]

\[ \int_{\mathbb{R}^s} G(u(\cdot, \boldsymbol{y})) \prod_{j=1}^{s} \phi_{\mathrm{nor}}(y_j)\, d\boldsymbol{y} \;=\; \int_{[0,1]^s} G(u(\cdot, \Phi_{\mathrm{nor}}^{-1}(\boldsymbol{x})))\, d\boldsymbol{x}, \qquad G \text{ a linear functional} \]

Graham, K., Nichols, Scheichl, Schwab, Sloan (2014):

  • Differentiate the PDE to obtain a bound on the norm
  • Choose φ ≡ φ_nor
  • Choose ψ_j(y_j) = exp(−α_j y_j), α_j > 0
  • Choose POD weights \( \gamma_{\mathfrak{u}} = \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j \)

[Figure: std. err. vs. N (10^3 to 10^5) in four panels, (ν = 0.75, λ_C = 0.1), (ν = 0.75, λ_C = 1.0), (ν = 1.5, λ_C = 0.1), (ν = 1.5, λ_C = 1.0), each comparing QMC and MC for σ_C^2 = 4.0, 1.0, 0.25]

SLIDE 12

Application 2: maximum likelihood

Generalized linear mixed model:

\[ \int_{\mathbb{R}^s} \exp(T(\boldsymbol{z}))\, d\boldsymbol{z} \;=\; c \int_{\mathbb{R}^s} \underbrace{\exp(T(A_*\boldsymbol{y} + \boldsymbol{z}_*)) \prod_{j=1}^{s} \frac{1}{\phi(y_j)}}_{f(\boldsymbol{y})} \; \prod_{j=1}^{s} \phi(y_j)\, d\boldsymbol{y} \;=\; c \int_{(0,1)^s} \exp(T(A_*\,\Phi^{-1}(\boldsymbol{x}) + \boldsymbol{z}_*)) \prod_{j=1}^{s} \frac{1}{\phi(\Phi^{-1}(x_j))}\, d\boldsymbol{x} \]

with g(x) = f(Φ^{-1}(x)).

[Figure: φ normal (good); φ logistic (better); φ Student-t (best)]

Sinescu, K., Sloan (2013):

  • Differentiate the integrand to obtain a bound on the norm
  • Choose φ ≡ φ_nor or φ_logit or φ_stud
  • Choose ψ ≡ 1
  • Choose POD weights \( \gamma_{\mathfrak{u}} = \Gamma_{|\mathfrak{u}|} \prod_{j \in \mathfrak{u}} \gamma_j \)
SLIDE 13

Application 1: option pricing

Black-Scholes model:

\[ \int_{\mathbb{R}^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(\boldsymbol{z}) - K,\, 0 \right) \frac{\exp(-\tfrac{1}{2}\boldsymbol{z}^{\mathrm{T}}\Sigma^{-1}\boldsymbol{z})}{\sqrt{(2\pi)^s \det(\Sigma)}}\, d\boldsymbol{z}, \qquad S_j(\boldsymbol{z}) = \exp(\cdots a_j z_j) \]

\[ =\; \int_{\mathbb{R}^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(A\boldsymbol{y}) - K,\, 0 \right) \prod_{j=1}^{s} \phi_{\mathrm{nor}}(y_j)\, d\boldsymbol{y} \;=\; \int_{[0,1]^s} \max\!\left( \frac{1}{s} \sum_{j=1}^{s} S_j(A\,\Phi_{\mathrm{nor}}^{-1}(\boldsymbol{x})) - K,\, 0 \right) d\boldsymbol{x} \]

[Figure: standard error vs. n (10^2 to 10^4) for MC, naive QMC, and QMC + PCA]

Griebel, K., Sloan (2010, 2013, 2014):

  • ANOVA decomposition: \( g = \sum_{\mathfrak{u} \subseteq \{1{:}s\}} g_{\mathfrak{u}} \) in [0,1]^s
  • RW/BB: all g_u with |u| ≤ (s+1)/2 belong to the Sobolev space; PCA: similar
  • or \( f = \sum_{\mathfrak{u}} f_{\mathfrak{u}} \) in R^s
  • RW/BB: all f_u with u ≠ {1:s} are smooth; PCA: similar
  • ANOVA decomposition for an ∞-variate function f: all terms are smooth
  • Remains to see how to apply the theory of Nichols & K. (2014)
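The smoothing effect of the ANOVA decomposition can be checked numerically on a 2D toy kink (my own illustration, not from the papers cited): g(x1, x2) = max(x1 + x2 − 1, 0) has a kink along x1 + x2 = 1, yet its first-order ANOVA term g_{1}(x1) = x1^2/2 − 1/6 is a smooth polynomial, because conditioning on x1 integrates the kink away:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200_000
x = rng.random((m, 2))
g = np.maximum(x[:, 0] + x[:, 1] - 1.0, 0.0)   # kink along x1 + x2 = 1

g0 = g.mean()                                   # ANOVA mean term, exactly 1/6
# g_{1}(x1) = E[g | x1] - g0; the conditional expectation is x1^2/2, smooth.
x1 = 0.7
g1_at_x1 = np.maximum(x1 + x[:, 1] - 1.0, 0.0).mean() - g0
```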

SLIDE 14

Summary

Nice QMC theory over the unit cube; the transformed integrand rarely falls in the standard setting.

\[ \int_{\mathbb{R}^s} q(\boldsymbol{z})\,\rho(\boldsymbol{z})\,d\boldsymbol{z} \;=\; \int_{\mathbb{R}^s} f(\boldsymbol{y}) \prod_{j=1}^{s} \phi(y_j)\,d\boldsymbol{y} \;=\; \int_{[0,1]^s} f(\Phi^{-1}(\boldsymbol{x}))\,d\boldsymbol{x} \]

Non-standard setting:

\[ \|f\|_{\boldsymbol{\gamma}}^2 \;=\; \sum_{\mathfrak{u} \subseteq \{1{:}s\}} \frac{1}{\gamma_{\mathfrak{u}}} \int_{\mathbb{R}^{|\mathfrak{u}|}} \left( \frac{\partial^{|\mathfrak{u}|} f}{\partial \boldsymbol{y}_{\mathfrak{u}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{0}) \right)^{\!2} \prod_{j \in \mathfrak{u}} \psi^2(y_j)\, d\boldsymbol{y}_{\mathfrak{u}} \]

φ and ψ and γ_u are up to us to choose (tune).

  Application      Transformation           φ                         ψ              γ_u
  3. PDE           –                        φ_nor                     exp(−α_j y)    POD
  2. likelihood    centering, re-scaling    φ_nor, φ_logit, φ_stud    1              POD
  1. option        RW, BB, PCA              φ_nor?                    1?             ?