

SLIDE 1

Importance Sampling Methodology for Multidimensional Heavy-tailed Random Walks

Jose Blanchet (joint work with Jingchen Liu)

Columbia IEOR Department

RESIM

Blanchet (Columbia) IS for Heavy-tailed Walks 09/08 1 / 43

SLIDE 2

Agenda

Introduction
The One Dimensional Case
Multidimensional Case
Markov Random Walks
Conclusions


SLIDE 7

Introduction

What is the general goal of this talk?
Discuss simulation methodology for estimating small expectations associated with rare events in multidimensional heavy-tailed settings.

What is the objective of the talk?

1. Illustrate our methodology for multidimensional regularly varying random walks.
2. Discuss optimality properties of our proposed simulation procedure.

SLIDE 8

Initial Notation

Let X1, X2, ... be iid regularly varying random vectors in R^d (to be discussed below). EXi = η ∈ R^d, and A is an appropriate subset of R^d. Given b > 0 we write bA = {ba : a ∈ A}. Set Sn = X1 + ... + Xn (S0 = s) and TbA = inf{n ≥ 0 : Sn ∈ bA}.

Object of interest:

u_b(s, f) = E( f(S0, S1, ..., S_TbA, TbA) I(TbA < ∞) ),

for any f(·) such that 0 < δ_f ≤ f ≤ δ_f^(-1), and with u_b(s, 1) → 0 as b ↗ ∞ (to be discussed below).

slide-9
SLIDE 9

E¢ciency and Performance Guarantee Given any f for which there is δf 2 (0, ∞), so that δf f δ1

f

estimate ub (s, f ) = E (f (S0, S1, ..., STbA, TbA) I (TbA < ∞)) , with good relative precision. The relative error of an estimator Z for ub (s, f ) is Rel.Error = (Var (Z))1/2 ub (s) +

  • EZ

ub (s) 1

  • Suppose EZ = ub (s, f ) and that Var (Z) = O
  • ub (s, f )2

, then we say that Z is strongly e¢cient.

Blanchet (Columbia) IS for Heavy-tailed Walks 09/08 5 / 43

SLIDE 10

If Z is unbiased, then sampling n iid replications of Z gives an estimator û_b(n) = n^(-1) ∑_{j=1}^n Zj such that

P( |û_b(n) - u_b(s, f)| ≥ ε u_b(s, f) ) ≤ Var(Z) / ( n ε^2 u_b(s, f)^2 ).

So, it takes O( ε^(-2) δ^(-1) Var(Z) / u_b(s, f)^2 ) replications to achieve ε-relative error with (1 - δ)·100% confidence.

Any procedure for estimating u_b(s, f) (for arbitrary f) must generate (Sk : 0 ≤ k ≤ TbA); this requires on average O( Es(TbA | TbA < ∞) ) operations.
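The Chebyshev-based replication count above is easy to compute directly; a minimal sketch (the function and variable names are illustrative, not from the talk):

```python
import math

def required_replications(var_z, u_b, eps, delta):
    """n >= Var(Z) / (eps^2 * delta * u_b^2) replications give
    eps-relative error with (1 - delta)*100% confidence (Chebyshev)."""
    return math.ceil(var_z / (eps ** 2 * delta * u_b ** 2))

# Strong efficiency means Var(Z) = O(u_b^2): the required count then stays
# bounded as b grows, e.g. with Var(Z) = 4 * u_b^2 the count is the same
# no matter how small u_b is.
n = required_replications(4e-6, 1e-3, 0.1, 0.05)   # Var(Z) = 4 * (1e-3)^2
```

Note how the count depends on u_b only through the ratio Var(Z)/u_b^2, which is exactly the quantity that strong efficiency keeps bounded.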

SLIDE 11

In summary, the best performance one can expect for ε-relative precision and (1 - δ)·100% confidence, based on iid replications of a given estimator, involves O( ε^(-2) δ^(-1) Es(Tb | Tb < ∞) ) operations.

An algorithm that achieves such performance is said to be optimal.

SLIDE 12

Agenda

Introduction
The One Dimensional Case
Multidimensional Case
Markov Random Walks
Conclusions

SLIDE 13

The One Dimensional Case

There is a rich large deviations theory for heavy-tailed random walks based on subexponential random variables:

P(X1 + X2 > b) = 2 P(X1 > b) (1 + o(1)) as b → ∞.

An important class of subexponential distributions is the class of regularly varying distributions (basically of power-law type):

P(X1 > t) = t^(-α) L(t) for α > 1, where L(tβ)/L(t) → 1 as t ↗ ∞ for each β > 0.


SLIDE 16

The One Dimensional Case

Let EXi = η < 0 and set A = [1, ∞) (we write Tb = TbA); estimate the ruin probability (Pakes, Veraverbeke, Cohen, ...; see the text of Asmussen '03)

u_b(s, 1) = u_b(s) = Ps(Tb < ∞).

Strategy: use importance sampling. Consider a Markov kernel K(·) of the form

K(s0, s1 + ds1) = r^(-1)(s0, s1) P(s0 + X1 ∈ s1 + ds1),
K(s0, s1) ds1 = r^(-1)(s0, s1) f_X1(s1 - s0) ds1,   (1)

for a positive function r(·) ((1) is valid in the presence of densities). The importance sampling estimator is

Z = ∏_{j=0}^{Tb-1} r(Sj, Sj+1) · I(Tb < ∞),

where the Sn's are simulated under K(·).
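As a sketch, one replication of Z can be generated with the loop below; `sample_next` (a draw from the proposal kernel K(s, ·)) and `r` (the likelihood-ratio factor) are user-supplied callables, and the truncation at `max_steps` is an illustrative safeguard rather than part of the estimator's definition:

```python
def is_replication(s0, b, sample_next, r, max_steps=10**6):
    """One replication of Z = prod_j r(S_j, S_{j+1}) * I(T_b < infinity),
    for the level-crossing time T_b = inf{n >= 0 : S_n >= b}."""
    s, z = s0, 1.0
    for _ in range(max_steps):
        if s >= b:                 # T_b reached: return the accumulated ratio
            return z
        s_next = sample_next(s)    # S_{j+1} ~ K(s, .)
        z *= r(s, s_next)          # multiply in the likelihood-ratio factor
        s = s_next
    return 0.0                     # treat a very long path as {T_b = infinity}
```

The slides' design problem is precisely the choice of `sample_next` and `r` so that E[Z] = u_b(s0) while Var(Z) stays of order u_b(s0)^2.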


SLIDE 19

The One Dimensional Case

Classical result: The conditional distribution of the random walk, given that Tb < ∞, gives an exact (zero-variance) estimator for u_b(s).

Moral of the story: select an importance sampler that mimics the behavior of this conditional distribution.


SLIDE 21

The One Dimensional Case

Theorem (Asmussen and Klüppelberg): Conditional on Tb < ∞, we have that

( S_{[uTb]}/Tb , (S_Tb - b)/b , Tb/b ) ⇒ (ηu, Z1, Z2)

in D(0, 1) × R × R as b ↗ ∞, where Z1 and Z2 are Pareto with index α - 1.

Interpretation: Prior to ruin the random walk has drift η, and a large jump of size about b occurs suddenly, in O(b) time... So, given that the jump hasn't occurred by time k, Sk ≈ ηk, and the chance of reaching b in the next increment, given that we eventually reach b (Tb < ∞), is about

P(X > b - ηk) / ∫_0^∞ P(X > b - ηu) du = (-η) P(X > b - ηk) / ∫_b^∞ P(X > u) du = O(1/b).

SLIDE 22

The One Dimensional Case

Family of changes-of-measure (here s is the current position of the walk, f_X is the density of X; note the free parameters p(s) and a ∈ (0, 1)):

f_{X|s}(x) = p(s) f_X(x) I(x > a(b - s)) / P(X > a(b - s)) + (1 - p(s)) f_X(x) I(x ≤ a(b - s)) / P(X ≤ a(b - s)).

In other words, with s0 = s and s1 = s0 + x,

r(s0, s1)^(-1) = p(s0) I(s1 - s0 > a(b - s0)) / P(X > a(b - s0)) + (1 - p(s0)) I(s1 - s0 ≤ a(b - s0)) / P(X ≤ a(b - s0)).

SLIDE 23

The One Dimensional Case

Lyapunov Inequalities for Variance Control:

Lemma (B. & Glynn '07): Suppose that there is a positive function g(·) such that

E^K_s( g(S1) r(s, S1)^2 ) / g(s) = Es( g(S1) r(s, S1) ) / g(s) ≤ 1

for all s ≤ b, and g(s) ≥ 1 for s > b. Then,

E^K_s Z^2 = E^K_s( ∏_{j=0}^{Tb-1} r(Sj, Sj+1)^2 · I(Tb < ∞) ) ≤ g(s).

SLIDE 24

The One Dimensional Case

Testing the Inequality: We wish to achieve strong efficiency, so we pick (for some κ > 0)

g(s) = min( κ ( ∫_{b-s}^∞ P(X > u) du )^2 , 1 ).

Pick (note that we need to select θ, κ > 0 to force the Lyapunov inequality)

p(s) = θ P(X > b - s) / ∫_{b-s}^∞ P(X > u) du.


SLIDE 27

The One Dimensional Case

Testing the inequality on {g(s) < 1} (note that g ≤ 1):

Es( g(S1) r(s, S1) ) / g(s)
= E( g(s + X); X > a(b - s) ) P(X > a(b - s)) / ( p(s) g(s) )
  + E( g(s + X); X ≤ a(b - s) ) P(X ≤ a(b - s)) / ( (1 - p(s)) g(s) )
≤ P(X > a(b - s))^2 / ( p(s) g(s) ) + E( g(s + X); X ≤ a(b - s) ) / ( (1 - p(s)) g(s) )
≤ a^(-α) P(X > a(b - s)) / ( θκ ∫_{b-s}^∞ P(X > u) du ) + 1 + 2(η + θ) P(X > b - s) / ∫_{b-s}^∞ P(X > u) du.

SLIDE 28

The One Dimensional Case

Corresponding Algorithm: Select a ∈ (0, 1), then choose θ and κ based on the Lyapunov inequality.

AT EACH TIME STEP TEST:
  IF g(s) < 1, apply importance sampling according to the p(s)-mixture;
  ELSE do NOT apply importance sampling, and continue until hitting.

OUTPUT the product of likelihood ratios, Z.
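The loop above can be sketched end to end for a concrete one-dimensional walk. The sketch below uses shifted-Pareto increments X = Y - M with Y ~ Pareto(ALPHA), so all tail quantities have closed forms; the numerical choices (ALPHA, M, A_MIX, THETA, KAPPA, the truncation level BETA) are illustrative, not the tuned values discussed in the talk:

```python
import random

# Increments X = Y - M with Y ~ Pareto(ALPHA) on [1, inf): P(Y > y) = y**-ALPHA,
# so EX = ALPHA/(ALPHA - 1) - M < 0.  All parameter values are illustrative.
ALPHA, M = 2.5, 3.0
A_MIX, THETA, KAPPA = 0.9, 1.0, 1.0   # a in (0,1); theta, kappa for the Lyapunov test
BETA = 50.0                            # controlled-bias truncation level

def tail(t):
    """P(X > t)."""
    y = t + M
    return 1.0 if y <= 1.0 else y ** -ALPHA

def int_tail(t):
    """G(t) = integral_t^infinity P(X > u) du (closed form for this X)."""
    y = max(t + M, 1.0)
    flat = (1.0 - t - M) if t + M < 1.0 else 0.0
    return flat + y ** (1.0 - ALPHA) / (ALPHA - 1.0)

def g(s, b):
    """Lyapunov function g(s) = min(kappa * G(b - s)^2, 1)."""
    return min(KAPPA * int_tail(b - s) ** 2, 1.0)

def one_replication(s0, b, max_steps=10**6):
    """One importance-sampling replication of Z for u_b(s0) = P(T_b < inf)."""
    s, z = s0, 1.0
    for _ in range(max_steps):
        if s >= b:
            return z                   # T_b reached: output product of ratios
        if s < -BETA * b:
            return 0.0                 # stop far below the origin (controlled bias)
        if g(s, b) < 1.0:              # apply the p(s)-mixture change of measure
            c = A_MIX * (b - s)
            p = min(THETA * tail(b - s) / int_tail(b - s), 1.0)
            u = random.random()
            if random.random() < p:    # big jump: sample X | X > c
                x = max(c + M, 1.0) * u ** (-1.0 / ALPHA) - M
                z *= tail(c) / p       # likelihood ratio P(X > c) / p
            else:                      # small jump: sample X | X <= c
                fc = 1.0 - tail(c)
                x = (1.0 - u * fc) ** (-1.0 / ALPHA) - M
                z *= fc / (1.0 - p)    # likelihood ratio P(X <= c) / (1 - p)
            s += x
        else:                          # g(s) = 1: step under the original law
            s += random.random() ** (-1.0 / ALPHA) - M
    return 0.0
```

The truncation at -BETA·b implements the controlled bias idea from the talk; for large b the Monte Carlo average of `one_replication` should be consistent with the heavy-tailed approximation u_b(0) ≈ G(b)/(-EX).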

SLIDE 29

The One Dimensional Case

Termination of the algorithm: Intuitively, what can go wrong? Let M be the time until we get one large jump given S0 = 0:

P(M > tb) ≈ ∏_{j=0}^{tb} (1 - p(ηj)) ≈ exp( - ∑_{j=0}^{tb} θ(α - 1)/(b - ηj) )
  ≈ exp( - ∫_0^{tb} θ(α - 1)/(b - ηs) ds ) = (1 - ηt)^(-θ(α-1)/(-η)).

So, if θ < -η/(α - 1), then the EXPECTED TERMINATION TIME COULD BE INFINITE!

SLIDE 30

The One Dimensional Case

Optimal Selection of Parameters:

Lemma: For each ε1 > 0 there exist δ1, m > 0 such that if θ = -η - δ1 and a = 1 - δ1, then, if α > 2,

| E^K_0 Tb - E_0(Tb | Tb < ∞) | ≤ ε1 E_0(Tb | Tb < ∞) for all b ≥ m.

Corollary: The importance sampling algorithm based on the previous selection of parameters is optimal.


SLIDE 34

The One Dimensional Case

Implementation Issues:
Numerically test values of κ, θ and a before implementing.
Typically not too hard if one is willing to be somewhat conservative.
BUT, as we saw, this can give infinite expected hitting time.

SLIDE 35

The One Dimensional Case

Introducing a Controlled Bias: Note that

P0(Tb < ∞) = P0(Tb < T_{-βb}) + P0(T_{-βb} < Tb, Tb < ∞)
  ≤ P0(Tb < T_{-βb}) + P_{-βb}(Tb < ∞)
  ≤ P0(Tb < T_{-βb}) + g(-βb)^(1/2).

So, the relative bias satisfies

0 ≤ 1 - P0(Tb < T_{-βb}) / P0(Tb < ∞) ≤ g(-βb)^(1/2) / P0(Tb < ∞) ≈ g(-βb)^(1/2) / P0(Tb < T_{-βb}).


SLIDE 37

The One Dimensional Case

Complexity Count: RELATIVE BIAS = O(β^(-α/2)). We want relative bias less than ε/2. Then the total complexity count (for ε-relative error with (1 - δ)·100% confidence) is

O( ε^(-2) δ^(-1) ε^(-2/α) E_0(Tb | Tb < ∞) ).

Alternatively, letting β(b) ↗ ∞, one gets a complexity count of order

O( ε^(-2) δ^(-1) β(b) E_0(Tb | Tb < ∞) ).

SLIDE 38

Agenda

Introduction
The One Dimensional Case
Multidimensional Case
Markov Random Walks
Conclusions

SLIDE 39

The Multidimensional Case

Assumptions on X
Multidimensional regular variation: There exists a (Radon) measure µ(·) such that

P(X ∈ bA) / P(|X| > b) → µ(A)   (in the sense of vague convergence)

for A ∈ B(R^d \ {0}). See Resnick (2007), Hult et al. (2005), ...

Example: if X follows a standard t distribution with α degrees of freedom, then

P(X ∈ A) = ∫_A m (1 + y^T y/α)^(-(α+d)/2) dy,   µ(A) = ∫_A dy / (y^T y)^((α+d)/2).
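The limit above can be checked numerically for the t example; below is a Monte Carlo sketch. The set A = {x : x1 > 1, x2 > 1} is the example used later in the talk, the sample size is illustrative, and X is drawn via the standard representation X = Z / sqrt(G/α) with Z Gaussian and G chi-square(α):

```python
import math
import random

def t_sample(d, alpha):
    """Standard multivariate t in R^d with alpha degrees of freedom:
    X = Z / sqrt(G/alpha), Z ~ N(0, I_d), G ~ chi-square(alpha)."""
    g_chi = random.gammavariate(alpha / 2.0, 2.0)   # chi-square(alpha)
    scale = math.sqrt(alpha / g_chi)
    return [scale * random.gauss(0.0, 1.0) for _ in range(d)]

def ratio_in_bA(b, n=20000, alpha=3.0):
    """Monte Carlo estimate of P(X in bA) / P(|X| > b) for
    A = {x in R^2 : x1 > 1, x2 > 1}; it should stabilize near mu(A) as b grows."""
    in_A = in_ball = 0
    for _ in range(n):
        x1, x2 = t_sample(2, alpha)
        if x1 > b and x2 > b:       # event {X in bA}
            in_A += 1
        if math.hypot(x1, x2) > b:  # event {|X| > b}
            in_ball += 1
    return in_A / max(in_ball, 1)
```

Plotting `ratio_in_bA(b)` for increasing b gives a direct empirical look at the vague convergence to µ(A) (at the cost of the ratio becoming a rare-event estimate itself for large b, which is exactly the motivation for importance sampling).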

slide-42
SLIDE 42

The Multidimensional Case

Assumptions on X A bit of intuition about multivariate regular variation: µ () contains the information about the directions where large jumps

  • ccur

µ (fy : jyj > r; y/ jyj 2 Sg) = r ασ (S) Example: For the t distribution σ () is the uniform distribution on the unit sphere.

Blanchet (Columbia) IS for Heavy-tailed Walks 09/08 27 / 43

SLIDE 43

The Multidimensional Case

Assumptions on EX ∈ R^d and A
Assumption (CONE): There are linearly independent vectors v1, v2, ..., vm ∈ R^d (‖vi‖ = 1) and δ > 0 such that vi^T η ≤ -δ for all 1 ≤ i ≤ m, and for all z ∈ A there is at least one i for which vi^T z ≥ δ.

[Diagram illustrating Assumption CONE for a two-dimensional random walk.]

SLIDE 44

The Multidimensional Case

Assumptions on EX ∈ R^d and A
The previous assumption rules out the situation in which the drift runs parallel to A, so that the process eventually hits A (this is the analogue of having EX = 0 in d = 1).

[Diagram: drift η parallel to the target set A.]


SLIDE 46

The Multidimensional Case

Assumptions on µ(·) and A
Avoid degenerate situations where A is "very thin" relative to µ(·) (for example, A a line in the case of a t distribution)...

Assumption (POS): Assume that α > 1 (the index of regular variation) and that there exists δ1 > 0 such that

inf_{0 ≤ h ≤ δ1} µ(A - hη) > 0.

SLIDE 47

The Multidimensional Case

Ruin Probabilities (Hult, Lindskog, Mikosch and Samorodnitsky '05; Hult and Lindskog '06):

P0(T_bA < ∞) ≈ ∫_0^∞ P(X + tη ∈ bA) dt ≈ b P(|X| > b) ∫_0^∞ µ(A - tη) dt

as b ↗ ∞, where A - tη = {a - tη : a ∈ A}.

Remark: By assumption POS,

∫_0^∞ µ(A - tη) dt ∈ (0, ∞).

SLIDE 48

The Multidimensional Case

Selection of the change-of-measure:

[Diagram: the target set A, the directions v1, v2 and the drift η.]


SLIDE 50

The Multidimensional Case

Selection of the change-of-measure:

C_i^a(s, b) = {x : vi^T x > a(δb - vi^T s)},

q_{X|s}(x) = p(s) f_X(x) I( x ∈ ∪_{i=1}^m C_i^a(s, b) ) / P( X ∈ ∪_{i=1}^m C_i^a(s, b) )
  + (1 - p(s)) f_X(x) I( x ∈ ∩_{i=1}^m C_i^a(s, b)^c ) / P( X ∈ ∩_{i=1}^m C_i^a(s, b)^c ).

SLIDE 51

The Multidimensional Case

Selection of the Lyapunov Inequality: A heuristic (fluid) analysis suggests

Ps(T_bA < ∞) ≈ ∫_0^∞ P(X + s + ηt ∈ bA) dt
  ≈ ∑_{i=1}^m ∫_0^∞ P( vi^T (X + s + ηt) ≥ bδ ) dt
  = ∑_{i=1}^m (1/|vi^T η|) Gi(bδ - vi^T s),

where

Gi(bδ - vi^T s) = ∫_{bδ - vi^T s}^∞ P(vi^T X > u) du.

SLIDE 52

The Multidimensional Case

Lyapunov function to test: The heuristic (fluid) analysis suggests

g_b(s) = min( κ h_b(s)^2, 1 ) = O( b^2 P(|X| > b)^2 ),

where

h_b(s) = ∑_{i=1}^m (1/|vi^T η|) Gi(bδ - vi^T s).

Theorem: Pick p(s) = θ P( X ∈ ∪_{i=1}^m C_i^a(s, b) ) / h_b(s) and a ∈ (0, 1); then κ, θ > 0 can be chosen so that g_b(·) satisfies the Lyapunov inequality. So, applying the proposed importance sampling when g_b(s) < 1 gives a strongly efficient estimator.


SLIDE 58

The Multidimensional Case

Implementation issues:
Sampling transitions AND computing p(s) involves a rare-event simulation problem.
Right, but now the problem is finite-dimensional and static.
Sometimes one can use the spectral measure to do the sampling.
Termination is even more complicated than in one dimension.
As in d = 1, one can introduce a controlled bias.
Determination of the vi's when A is a union of polyhedra, through linear programs.

SLIDE 59

The Multidimensional Case

Example: Random walk with t-distributed increments and α = 3 degrees of freedom. Relative bias no more than 5% with at least 97% confidence.

[Diagram: A = {x > 1, y > 1}, drift η = (0, -1).]

b    Est        SD         Sample size
10   1.83e-03   1.31e-02   20000
20   4.28e-04   2.54e-03   20000
50   8.80e-05   4.74e-04   20000

SLIDE 60

The Multidimensional Case

Theorem: Assume CONE and POS. In addition, suppose that sampling from and evaluating K(·) can be done in O(1) operations (as b ↗ ∞). Then state-dependent importance sampling via Lyapunov control requires

O( ε^(-2) δ^(-1) ε^(-2/α) b )

operations to achieve ε-relative error with (1 - δ)·100% confidence.

SLIDE 61

Agenda

Introduction
The One Dimensional Case
Multidimensional Case
Markov Random Walks
Conclusions


SLIDE 65

Markov Random Walks

Summary of the methodology:
Select an appropriate change of measure.
Use Lyapunov inequalities for variance / bias control and for termination.
NOTE the difference between the light- and heavy-tailed settings: the discrete nature of the heavy-tailed case (jumps) forces control at very short time scales!
Goal: Apply Lyapunov inequalities at more than one time step.


SLIDE 69

Markov Random Walks

Basic idea (again let d = 1): To simplify, let (Yn : n ≥ 0) be an irreducible, aperiodic, finite-state-space recurrent Markov chain, with

P(Xn > t | Yn = k) = L_k(t) t^(-α).

Eπ Xn < 0 (but there may be states y = i for which Ei Xn > 0).
A direct application of the Lyapunov inequality doesn't work (it might not see Eπ Xn < 0).
So: test the Lyapunov inequality and apply the algorithm at time scales of order τ (for an appropriately defined stopping time τ).

SLIDE 70

Agenda

Introduction
The One Dimensional Case
Multidimensional Case
Markov Random Walks
Conclusions

SLIDE 71

Conclusions

Discussed rare-event simulation for multidimensional heavy-tailed random walks.
The discrete nature (sudden jumps) forces control at short time scales.
Variance control via Lyapunov functions obtained from fluid analysis.
Biased estimators to control termination, but the bias is quantifiable.
Provably efficient algorithms, and an interplay between linear programming, affine functions and regular variation.
Markov random walks (and other settings, e.g. small-noise processes?) force control at slightly larger time scales → Lyapunov functions applied to the τ-step transition kernel.