
On a non-increasing Lindley-type equation

Maria Vlasiou
EURANDOM, Eindhoven
email: vlasiou@eurandom.tue.nl

CWI Queueing Colloquium, May 27, 2005


Contents

1. The model
2. Stability
3. Successive iterations
4. Derivation of the integral equation
5. The class of separable kernels
6. The distribution of W
7. Tail behaviour

1. The model

  • Infinite supply of customers.
  • Only one customer allowed in a service point.
  • Server is necessary only during the service phase.
  • He alternates between the two service points.

[Figure: two service points, each with a preparation phase followed by a service phase.]



The model

Bn : preparation time of the n-th customer
An : service time of the n-th customer
Wn : waiting time of the server for the n-th customer

Wn+1 = max{0, Bn+1 − An − Wn}

Lindley's equation for the waiting time in a G/G/1 queue:

[Figure: timeline of the n-th arrival with its waiting time Wn and service time Bn, the interarrival time An, and the waiting time Wn+1 of the (n + 1)-st arrival.]

Wn+1 = max{0, Bn + Wn − An}

In equilibrium:

W = max{0, B − A − W}

For n ≥ 1, write Xn+1 = Bn+1 − An, and note that P[Xn < 0] > 0.
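A short simulation contrasts the two recursions. The distributions below (A exponential with rate 1, B uniform on (0, 1.5)) are illustrative assumptions only, chosen so that both recursions are stable; they are not prescribed by the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
A = rng.exponential(1.0, n)        # service times A_n (assumption: exponential)
B = rng.uniform(0.0, 1.5, n)       # preparation times B_n (assumption: uniform)

# Lindley-type recursion of the talk: W_{n+1} = max{0, B_{n+1} - A_n - W_n}.
# W_n enters with a minus sign, so the map is non-increasing in W_n.
W = np.zeros(n)
for i in range(1, n):
    W[i] = max(0.0, B[i] - A[i - 1] - W[i - 1])

# Classical Lindley recursion for the G/G/1 waiting time
# (B_n = service time, A_n = interarrival time): W_{n+1} = max{0, B_n + W_n - A_n}.
L = np.zeros(n)
for i in range(1, n):
    L[i] = max(0.0, B[i - 1] + L[i - 1] - A[i - 1])

# Unlike the usual G/G/1 waiting time, W has a substantial atom at zero.
print("estimate of P[W = 0]:", np.mean(W[1000:] == 0.0))
```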



The model

Cases that have been studied:

  • A deterministic or exponential, B uniform; Park et al. (2003)
  • A phase type, B uniform; Vlasiou et al. (2004)
  • A general, B phase type; Vlasiou et al. (2005)

Here we shall study the "M/G" case, i.e. A exponential, B general.



2. Stability

  • The existence of an invariant distribution is a consequence of the fact that the sequence of distributions P[Wn ≤ x] is tight and the function g(w, x) = max{0, x − w} is continuous in both x and w; see Foss and Konstantopoulos (2004), Theorem 4.
  • The uniqueness of the steady-state distribution and the convergence to it can be shown by a simple coupling argument.
  • Through coupling we can show that P[X ≥ 0] is a bound on the rate of convergence to the invariant distribution.
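The coupling argument can be made concrete in a few lines: two copies of the chain driven by the same sequences (An, Bn) but started from different initial waiting times coincide from the first index with Xn < 0 onward, because max{0, x − w} = 0 for every w ≥ 0 once x < 0. The distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
A = rng.exponential(1.0, n)       # assumption: A exponential
B = rng.uniform(0.0, 2.0, n)      # assumption: B uniform
X = B[1:] - A[:-1]                # X_{n+1} = B_{n+1} - A_n

def run(w0):
    """Iterate W_{n+1} = max{0, X_{n+1} - W_n} from the initial value w0."""
    w = np.empty(len(X) + 1)
    w[0] = w0
    for i, x in enumerate(X):
        w[i + 1] = max(0.0, x - w[i])
    return w

w_lo, w_hi = run(0.0), run(5.0)

# Both copies are reset to 0 at the first step with X < 0 (an event of positive
# probability) and share the driving sequence afterwards: they have coupled.
# Per step, the chance of *not* coupling is therefore at most P[X >= 0].
assert np.any(X < 0.0)
tau = 1 + int(np.argmax(X < 0.0))     # first index with X < 0
assert np.allclose(w_lo[tau:], w_hi[tau:])
```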



3. Successive iterations

We have

FW(x) = P[W ≤ x] = P[X − W ≤ x] = 1 − P[X − W > x]
      = 1 − ∫_x^∞ P[W ≤ y − x] dFX(y) = 1 − ∫_x^∞ FW(y − x) dFX(y).

Theorem 1. There is a unique measurable bounded function F : [0, ∞) → R that satisfies the functional equation

F(x) = 1 − ∫_x^∞ F(y − x) dFX(y).


Successive iterations

Proof. In L∞([0, ∞)) we define the mapping

(T F)(x) = 1 − ∫_x^∞ F(y − x) dFX(y).

Then we have

‖T F1 − T F2‖ = sup_{x ≥ 0} |(T F1)(x) − (T F2)(x)|
             = sup_{x ≥ 0} | ∫_x^∞ [F2(y − x) − F1(y − x)] dFX(y) |
             ≤ sup_{x ≥ 0} ∫_x^∞ sup_{t ≥ 0} |F2(t) − F1(t)| dFX(y)
             = ‖F1 − F2‖ sup_{x ≥ 0} (1 − FX(x))
             ≤ ‖F1 − F2‖ (1 − FX(0)) = ‖F1 − F2‖ P(B > A).



Successive iterations

  • The convergence to the invariant distribution is geometrically fast, and the rate is bounded by the probability P(B > A).
  • Since we have a contraction mapping, we can approximate the limiting distribution by successive iterations.
  • If we find a continuous solution that belongs to L∞([0, ∞)), then this solution is necessarily a distribution.
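This iteration scheme is easy to run numerically. The sketch below approximates the fixed point of T by Monte Carlo, under the illustrative assumptions A exponential with rate 1 and B uniform on (0, 2); the grid, sample size, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20_000
A = rng.exponential(1.0, n_samples)        # assumption: A exponential(mu = 1)
B = rng.uniform(0.0, 2.0, n_samples)       # assumption: B uniform(0, 2)
X = B - A                                  # X = B - A

grid = np.linspace(0.0, 8.0, 201)

def T(F):
    """Monte Carlo version of (T F)(x) = 1 - int_x^inf F(y - x) dF_X(y):
    the integral is replaced by an average over the samples of X exceeding x."""
    out = np.empty_like(grid)
    for i, x in enumerate(grid):
        tail = X[X > x]
        out[i] = 1.0 - np.interp(tail - x, grid, F).sum() / n_samples
    return out

F = np.ones_like(grid)                     # F_0 = 1; any bounded start works
for _ in range(30):                        # error shrinks like P(B > A)^k
    F = T(F)

# The fixed point is the distribution of W; note the atom F(0) = P[W = 0] > 0.
assert np.max(np.abs(T(F) - F)) < 1e-3     # numerically a fixed point
assert 0.0 < F[0] < 1.0
```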



4. Derivation of the integral equation

FW(x) = P[W ≤ x] = P[B − W − A ≤ x]
      = ∫_0^∞ ∫_0^∞ P[B ≤ x + z + y] dFA(z) dFW(y)
      = π0 ∫_0^∞ P[B ≤ x + z] dFA(z) + ∫_{0+}^∞ ∫_0^∞ P[B ≤ x + y + z] dFA(z) dFW(y).

Therefore,

fW(x) = µFW(x) − µπ0 FB(x) − µ ∫_0^∞ FB(x + y) fW(y) dy.



Derivation of the integral equation – Laplace transforms

Laplace transforms won't work. Keep in mind that W = max{0, B − W − A}.

E[e^{−sW}] = P[A > B − W] + E[e^{−s(B−W−A)}; A ≤ B − W]
           = P[A > B − W] + E[e^{−s(B−W−A)}] − E[e^{s(A−(B−W))}; A > B − W]
           = P[A > B − W] + (µ/(µ − s)) β(s)ω(−s) − (µ/(µ − s)) P[A > B − W].

But

P[A > B − W] = ∫_0^∞ ∫_w^∞ P[A > b − w] dFB(b) dFW(w)
             = ∫_0^∞ ∫_w^∞ ∫_{b−w}^∞ µe^{−µu} du dFB(b) dFW(w)
             = β(µ)ω(−µ) − ∫_0^∞ ∫_0^w e^{−µ(b−w)} dFB(b) dFW(w).



5. The class of separable kernels

Define the class M as follows: F ∈ M if and only if, for all x, y,

F̄(x + y) = 1 − F(x + y) = Σ_{i=1}^{n} gi(x) hi(y),

where F is a distribution function on [0, ∞), and gi and hi are arbitrary measurable functions (that can even be constants).

Proposition 1. F ∈ PH ⇒ F ∈ M.


The class of separable kernels – F ∈ PH

Proof. Since F is phase type, it can be viewed as the distribution of the time until absorption of an (n + 1)-state Markov chain J, where state 0 is absorbing and states {1, . . . , n} are not. Then

F̄(x) = P[J(x) is not absorbed].

So we have that

F̄(x + y) = P[J(x + y) ∈ {1, . . . , n}]
          = Σ_{i=1}^{n} P[J(x + y) ∈ {1, . . . , n} | J(x) = i] P[J(x) = i]
          = Σ_{i=1}^{n} P[J(y) ∈ {1, . . . , n} | J(0) = i] P[J(x) = i]
          = Σ_{i=1}^{n} hi(y) gi(x).



The class of separable kernels

A well-known distribution that is not phase type, but has a rational Laplace transform, is

F(x) = 1 − e^{−x}(2 + sin x + cos x)/3,

which clearly belongs to M, since for example

sin(x + y) = sin x cos y + sin y cos x.

Proposition 2. F ∈ RLT ⇒ F ∈ M.

Proof. F ∈ RLT ⇒ F̄(x) = Σ_{i=1}^{n} Σ_{j=1}^{mi} cij x^{j−1} e^{qi x} / (j − 1)!.
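For the sin/cos example above, the separable form of F̄(x + y) can be written out with n = 3 terms via the angle-addition formulas. The particular choice of gi and hi below is one possibility (my decomposition, not taken from the talk), and it is easy to verify numerically:

```python
import numpy as np

# F(x) = 1 - e^{-x}(2 + sin x + cos x)/3; check that
# Fbar(x + y) = 1 - F(x + y) = sum_i g_i(x) h_i(y) with n = 3 terms.
def Fbar(x):
    return np.exp(-x) * (2 + np.sin(x) + np.cos(x)) / 3

g = [lambda x: 2 * np.exp(-x) / 3,
     lambda x: np.exp(-x) * np.sin(x) / 3,
     lambda x: np.exp(-x) * np.cos(x) / 3]
h = [lambda y: np.exp(-y),
     lambda y: np.exp(-y) * (np.cos(y) - np.sin(y)),
     lambda y: np.exp(-y) * (np.cos(y) + np.sin(y))]

x, y = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
lhs = Fbar(x + y)
rhs = sum(gi(x) * hi(y) for gi, hi in zip(g, h))
assert np.allclose(lhs, rhs)   # F belongs to the class M
```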


6. The distribution of W

Theorem 2. Assume that FB ∈ M, is continuous, and that for every i = 1, . . . , n the functions hi(y) are bounded and ∫_0^∞ |gi(x)| dx < ∞. Then the distribution of W is given by

FW(x) = 1 − e^{µx} ∫_x^∞ e^{−µs} [ µπ0 F̄B(s) + µ Σ_{i=1}^{n} ci gi(s) ] ds.

The distribution of W

The constants π0 and ci, i = 1, . . . , n, are a solution to the linear system of equations

2π0 − µπ0 β(µ) + µ Σ_{i=1}^{n} ci γi(µ) = 1

and, for i = 1, . . . , n,

ci = µπ0 ∫_0^∞ hi(x) [ F̄B(x) − µ ∫_x^∞ e^{−µ(s−x)} F̄B(s) ds ] dx
   + µ Σ_{j=1}^{n} cj ∫_0^∞ hi(x) [ gj(x) − µ ∫_x^∞ e^{−µ(s−x)} gj(s) ds ] dx.    (1)



The distribution of W

Outline of the proof.

  • Substitute F̄B(x + y) = Σ_{i=1}^{n} gi(x) hi(y) into the integral equation.
  • Define ci = ∫_0^∞ hi(y) fW(y) dy.
  • Solve the linear differential equation that arises.
  • Show that the linear system of equations that appears in the theorem has at least one solution.
  • Show that the final solution is continuous and bounded.


Note that ∫_0^∞ |gi(x)| dx < ∞ implies that B has a finite mean, that γi(µ) and β(µ) exist and are finite numbers, and that ∫_0^∞ hi(x) F̄B(x) dx and ∫_0^∞ hi(x) gj(x) dx exist and are finite.



7. Tail behaviour

Assume that e^B is regularly varying with index −κ ≤ 0, i.e.

P[e^B > e^x · e^y] / P[e^B > e^x] → (e^y)^{−κ}   as x → ∞.

So, if κ = 0, then B is long-tailed, and thus heavy-tailed. If κ > 0, then B is light-tailed, but not lighter than an exponential tail, since

P[B > x + y] / P[B > x] → e^{−κy}   as x → ∞.

Proposition 3. Let e^B be regularly varying with index −κ. Then for the tail of W we have that

P[W > x] ∼ P[X > x] E[e^{−κW}].



Tail behaviour

Proof. From Breiman (1965), Proposition 3, and Cline & Samorodnitsky (1994), Corollary 3.6, we know that if X > 0 is a regularly varying random variable with index −κ, κ ≥ 0, and Y > 0 is independent of X with E[Y^{κ+ǫ}] finite for some ǫ > 0, then XY is regularly varying with index −κ; in particular,

P[X · Y > x] ∼ E[Y^κ] P[X > x].

So,

P[W > x] = P[B − (W + A) > x]
 ⇒ P[e^W > e^x] = P[e^B e^{−(W+A)} > e^x]
 ⇒ P[e^W > e^x] ∼ P[e^B > e^x] E[e^{−κ(W+A)}]
 ⇒ P[W > x] ∼ P[B > x] E[e^{−κW}] E[e^{−κA}].

But

P[B > x] E[e^{−κA}] ∼ P[e^B e^{−A} > e^x] = P[B − A > x] = P[X > x].
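Breiman's multiplication lemma used above is easy to sanity-check by simulation. The Pareto and uniform choices below are illustrative assumptions, and X, Y here are the generic variables of the lemma, not the X = B − A of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
kappa = 2.0
# X regularly varying with index -kappa: Pareto tail P[X > x] = x^{-kappa}, x >= 1
X = rng.pareto(kappa, n) + 1.0
Y = rng.uniform(0.0, 1.0, n)            # independent, with all moments finite

x = 5.0
lhs = np.mean(X * Y > x)                # Monte Carlo estimate of P[X Y > x]
rhs = np.mean(Y ** kappa) * x ** -kappa # E[Y^kappa] P[X > x]
# Breiman: P[X Y > x] ~ E[Y^kappa] P[X > x]
assert abs(lhs / rhs - 1.0) < 0.1
```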



Tail behaviour

Assume now that e^B is rapidly varying with index −∞, i.e.

lim_{x→∞} P[e^B > e^x · e^y] / P[e^B > e^x] = lim_{x→∞} P[B > x + y] / P[B > x] =
    0,  if y > 0;
    1,  if y = 0;
    ∞,  if y < 0.

Proposition 4. Let e^B be rapidly varying with index −∞. Then for the tail of W we have that

P[W > x] ∼ P[X > x] P[W = 0].

For the proof of this, we shall need the following lemma.

Lemma 1. e^B is rapidly varying ⇒ e^X is rapidly varying.



Tail behaviour

Proof of Proposition 4. We have

P[W > x] = P[X − W > x]
         = P[X − W > x ; W = 0] + P[X − W > x ; W > 0]
         = P[X > x] P[W = 0] + P[X − W > x ; 0 < W < ǫ] + P[X − W > x ; W ≥ ǫ].

So

lim inf_{x→∞} P[W > x] / (P[X > x] P[W = 0]) ≥ 1.

For the upper limit we first observe that

P[X − W > x ; 0 < W < ǫ] ≤ P[X > x] P[0 < W < ǫ]

and that

P[X − W > x ; W ≥ ǫ] ≤ P[X > x + ǫ] P[W ≥ ǫ].



Tail behaviour

So

P[W > x] / (P[X > x] P[W = 0]) ≤ 1 + P[0 < W < ǫ] / P[W = 0] + P[X > x + ǫ] P[W ≥ ǫ] / (P[X > x] P[W = 0]).

Since e^X is rapidly varying, we have that for ǫ > 0

P[X > x + ǫ] = o(P[X > x]).

Letting x → ∞ and then ǫ ↓ 0, so that P[0 < W < ǫ] → 0, we obtain

lim sup_{x→∞} P[W > x] / (P[X > x] P[W = 0]) ≤ 1. ✷