Asymptotic error bounds for truncated buffer approximations of a 2-node tandem queue

Eleni Vatamidou, Ivo Adan, Maria Vlasiou, and Bert Zwart


SLIDE 1

Asymptotic error bounds for truncated buffer approximations of a 2-node tandem queue

Eleni Vatamidou, Ivo Adan, Maria Vlasiou, and Bert Zwart

MAM-9 Budapest, June 29, 2016


SLIDE 3

Tandem network: M^X/M/1 → ·/M/1

[Figure: batch arrivals at rate λ feed queue 1 (service rate µ1); departures from queue 1 feed queue 2 (service rate µ2).]

◮ B: r.v. for the batch sizes; EB = Σ_{i≥1} i p_i < ∞
◮ Assumption (stability): λEB/µ_i < 1, i = 1, 2
◮ Uniformisation: λ + µ1 + µ2 = 1
◮ X_n and Y_n: queue lengths (including service) at the n-th jump epoch, so that (X_n, Y_n) ∈ ℕ²
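The uniformised dynamics above can be sketched as a discrete-time simulation. This is a minimal illustration, not the authors' code; the geometric batch sampler and the numeric rates are illustrative choices (geometric batches are the special case treated later in the talk).

```python
import random

def geometric_batch(beta, rng):
    """Sample a batch size with P(B = n) = beta * (1 - beta)**(n - 1)."""
    n = 1
    while rng.random() > beta:
        n += 1
    return n

def step(state, lam, mu1, mu2, beta, rng):
    """One jump epoch of the uniformised chain (lam + mu1 + mu2 = 1)."""
    x, y = state
    u = rng.random()
    if u < lam:                  # batch arrival at queue 1
        x += geometric_batch(beta, rng)
    elif u < lam + mu1:          # queue-1 service completion feeds queue 2
        if x > 0:
            x, y = x - 1, y + 1
    else:                        # queue-2 service completion (departure)
        if y > 0:
            y -= 1
    return x, y

# stable example: EB = 2, lam*EB/mu1 = 2/3, lam*EB/mu2 = 3/4
rng = random.Random(2016)
lam, mu1, mu2, beta = 0.15, 0.45, 0.40, 0.5
state = (0, 0)
for _ in range(10_000):
    state = step(state, lam, mu1, mu2, beta, rng)
    assert state[0] >= 0 and state[1] >= 0
```

Note that a uniformisation event falling on an empty queue is a self-loop, exactly as in the transition diagram.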

SLIDE 4

Transition diagram of the QBD

[Figure: transition diagram; from state (m1, m2), batch arrivals of size i occur at rate λp_i, queue-1 services at rate µ1, queue-2 services at rate µ2.]

Infinitesimal generator:

$$Q = \begin{pmatrix} B & A_0 & & \\ A_2 & A_1 & A_0 & \\ & A_2 & A_1 & A_0 \\ & & \ddots & \ddots \end{pmatrix}.$$
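For a finite truncation, a block-tridiagonal generator of this shape can be assembled mechanically. A sketch, with the block names taken from the slide; the handling of the last level here is a naive cut, not the lumped truncation used later in the talk:

```python
import numpy as np

def qbd_generator(B, A0, A1, A2, levels):
    """Assemble the block-tridiagonal generator
        Q = [[B, A0, 0, ...], [A2, A1, A0, 0, ...], ...]
    truncated to `levels` level blocks (all blocks m x m)."""
    m = B.shape[0]
    Q = np.zeros((levels * m, levels * m))
    for n in range(levels):
        Q[n*m:(n+1)*m, n*m:(n+1)*m] = B if n == 0 else A1   # diagonal block
        if n + 1 < levels:
            Q[n*m:(n+1)*m, (n+1)*m:(n+2)*m] = A0            # level up
        if n > 0:
            Q[n*m:(n+1)*m, (n-1)*m:n*m] = A2                # level down
    return Q
```

With 1×1 blocks B = [[-1]], A0 = [[1]], A1 = [[-2]], A2 = [[1]] this reproduces an ordinary birth-death generator, which makes the block layout easy to eyeball.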


SLIDE 7

Matrix-analytic methods (MAM)

For an irreducible and positive recurrent Markov chain, there exists a unique stationary distribution π with πQ = 0, πe = 1.

The stationary distribution: if we partition π by level (1st coordinate) into the sub-vectors π_n, n ≥ 0, then

$$\pi_0 B + \pi_1 A_2 = 0,$$
$$\pi_{n-1} A_0 + \pi_n A_1 + \pi_{n+1} A_2 = 0, \quad n \ge 1,$$
$$\sum_{n \ge 0} \pi_n e = 1,$$

where each π_n is (N + 1)-dimensional.

Requirement: a finite number of phases (2nd coordinate).
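For a finite generator, the defining system πQ = 0, πe = 1 is a plain linear solve. A minimal sketch (a dense least-squares solve, fine for small state spaces; the matrix-analytic machinery instead exploits the block structure above):

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 with the normalisation pi e = 1 for a finite CTMC generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones((1, n))])   # balance equations plus normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# two-state check: rate 1 from state 0 to 1, rate 2 back
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(stationary_distribution(Q))   # -> approximately [0.6667, 0.3333]
```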


SLIDE 10

Evaluation of (X∞, Y∞)

◮ (X_0, Y_0) = (0, 0): initial state
◮ T_{(0,0)} = inf{n ≥ 1 : X_n = Y_n = 0 | X_0 = Y_0 = 0}: the return time to the origin, or cycle length

By the renewal-reward theorem,

$$P(X_\infty \ge x, Y_\infty \ge y) = \frac{1}{E T_{(0,0)}}\, E\left[\sum_{n=1}^{T_{(0,0)}} \mathbf{1}(X_n \ge x, Y_n \ge y)\right]$$

$$= \underbrace{\frac{1}{E T_{(0,0)}}\, E\left[\sum_{n=1}^{T_{(0,0)}} \mathbf{1}(X_n \ge x, Y_n \ge y)\, \mathbf{1}\Big(\max_{1 \le l \le T_{(0,0)}} X_l < N\Big)\right]}_{=\,I}$$

$$+ \underbrace{\frac{1}{E T_{(0,0)}}\, E\left[\sum_{n=1}^{T_{(0,0)}} \mathbf{1}(X_n \ge x, Y_n \ge y)\, \mathbf{1}\Big(\max_{1 \le l \le T_{(0,0)}} X_l \ge N\Big)\right]}_{=\,II}.$$
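The cycle decomposition above doubles as a recipe for a regenerative Monte Carlo estimator. A sketch under the same uniformised dynamics; geometric batches and all parameter values are illustrative, and the indicator is evaluated at the state after each jump:

```python
import random

def geometric_batch(beta, rng):
    """P(B = n) = beta * (1 - beta)**(n - 1)."""
    n = 1
    while rng.random() > beta:
        n += 1
    return n

def estimate_tail(x, y, lam, mu1, mu2, beta, cycles, rng):
    """Renewal-reward estimate of P(X >= x, Y >= y):
    time the indicator holds over the cycles / total cycle length."""
    reward = total = 0
    for _ in range(cycles):
        qx = qy = 0
        first = True
        while first or (qx, qy) != (0, 0):   # one cycle: until return to (0, 0)
            first = False
            u = rng.random()
            if u < lam:
                qx += geometric_batch(beta, rng)
            elif u < lam + mu1 and qx > 0:
                qx, qy = qx - 1, qy + 1
            elif u >= lam + mu1 and qy > 0:
                qy -= 1
            total += 1
            reward += (qx >= x and qy >= y)
    return reward / total
```

Stability of both queues guarantees that each cycle terminates almost surely, so the estimator is well defined.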

SLIDE 11

Truncation of the state space

[Figure: truncated transition diagram; batch arrivals that would take queue 1 above level N are lumped into a single transition of rate λ Σ_{i ≥ N−m1} p_i into state N.]

$$I = E\left[\sum_{n=1}^{T^{(N)}_{(0,0)}} \mathbf{1}\big(X^{(N)}_n \ge x,\, Y^{(N)}_n \ge y\big)\, \mathbf{1}\Big(\max_{1 \le l \le T^{(N)}_{(0,0)}} X^{(N)}_l < N\Big)\right]$$

$$\le E\left[\sum_{n=1}^{T^{(N)}_{(0,0)}} \mathbf{1}\big(X^{(N)}_n \ge x,\, Y^{(N)}_n \ge y\big)\right] = E T^{(N)}_{(0,0)}\, P\big(X^{(N)} \ge x,\, Y^{(N)} \ge y\big).$$


SLIDE 13

Exceeding the truncation level

With M_{T_{(0,0)}} = max_{1 ≤ l ≤ T_{(0,0)}} X_l,

$$II = E\left[\sum_{n=1}^{T_{(0,0)}} \mathbf{1}(X_n \ge x, Y_n \ge y)\, \mathbf{1}\Big(\max_{1 \le l \le T_{(0,0)}} X_l \ge N\Big)\right] \le E\left[T_{(0,0)}\, \mathbf{1}\Big(\max_{1 \le l \le T_{(0,0)}} X_l \ge N\Big)\right]$$

$$= E\left[T_{(0,0)}\, \mathbf{1}\big(M_{T_{(0,0)}} \ge N\big)\right] = E\left[T_{(0,0)} \,\middle|\, M_{T_{(0,0)}} \ge N\right] P\big(M_{T_{(0,0)}} \ge N\big).$$

Theorem: Upper and lower bounds for the approximation

$$0 \le P(X_\infty \ge x, Y_\infty \ge y) - P\big(X^{(N)} \ge x,\, Y^{(N)} \ge y\big) \le \frac{E\left[T_{(0,0)} \mid M_{T_{(0,0)}} \ge N\right] P\big(M_{T_{(0,0)}} \ge N\big)}{E T_{(0,0)}}.$$

SLIDE 14

Asymptotic upper bound

Main theorem: As N → ∞,

$$P(X_\infty \ge x, Y_\infty \ge y) - P\big(X^{(N)} \ge x,\, Y^{(N)} \ge y\big) \lesssim K N e^{-\gamma N},$$

where

$$K = \left[\frac{1}{\mu_2 - \lambda EB}\left(\frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E\breve B - \breve\mu_1} + \frac{(\mu_1 - \mu_2)^+}{\mu_1 - \lambda EB}\right) + \frac{1}{\breve\lambda E\breve B - \breve\mu_1} + \frac{1}{\mu_1 - \lambda EB}\right] C_1 e^{\gamma}\left(1 - \frac{\lambda EB}{\mu_1}\right),$$

breves denote parameters after the exponential change of measure, and C_1 is a constant.


SLIDE 16

Proof

Step 1: Limit for the probability P(M_{T_{(0,0)}} ≥ N)
◮ T_0 = inf{n ≥ 1 : X_n = 0 | X_0 = 0}
◮ from extreme value theory, splitting the path into regeneration cycles:
$$\max_{i=1,\dots,n/E T_{(0,0)}} M^{T_{(0,0)}}_i \approx \max_{i=1,\dots,n} X_i \approx \max_{i=1,\dots,n/E T_0} M^{T_0}_i$$
◮ result:
$$\frac{P\big(M_{T_{(0,0)}} \ge N\big)}{E T_{(0,0)}} \sim \frac{P\big(M_{T_0} \ge N\big)}{E T_0}.$$

Step 2: Limit for the probability P(M_{T_0} ≥ N)
◮ a conspiracy leads to a maximum value N
◮ an exponential change of measure gives λ̆, P̆(B = n), µ̆1, and µ̆2 (γ is the solution of the Lundberg equation)
◮ Cramér–Lundberg approximation: e^{γ(N−1)} P(M_{T_0} ≥ N) → C_1

SLIDE 17

Proof (continued)

◮ ergodicity of X_n gives E T_0 = 1/P(X_∞ = 0)
◮ Little's formula: P(X_∞ = 0) = 1 − ρ_1 = 1 − λEB/µ_1


SLIDE 19

Proof (continued)

Step 3: The conditional expectation E[T_{(0,0)} | M_{T_{(0,0)}} ≥ N]

SLIDE 22

Proof (continued)

[Figure: sample paths over one cycle. #Q1 climbs to level N with drift λ̆EB̆ − µ̆1 on [0, τ1], then drains with drift λEB − µ1 on [τ1, τ2]. #Q2 rises to height h1 with drift µ̆1 − µ2, then to height h2 with drift µ1 − µ2, and drains with drift λEB − µ2 until τ3; the cycle ends at T_{(0,0)}.]

SLIDE 23

Proof (continued)

Distributions of the jumps / connection with random walks:

$$Z_n = \begin{cases} 0, & \text{with probability } \mu_2, \\ -1, & \text{with probability } \mu_1, \\ m, & \text{with probability } \lambda p_m,\ m = 1, 2, \dots, \end{cases} \qquad W_n = \begin{cases} -1, & \text{if } Z_n = 0, \\ 1, & \text{if } Z_n = -1 \text{ and } X_{n-1} > 0, \\ 0, & \text{else.} \end{cases}$$
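The coupling between the free-walk increment Z_n and the queue-2 increment W_n can be written out directly from these definitions. A sketch with hypothetical helper names; the geometric batch law inside `sample_Z` is the special case used later:

```python
import random

def sample_Z(lam, mu1, mu2, beta, rng):
    """Z_n: 0 w.p. mu2, -1 w.p. mu1, m w.p. lam * p_m (geometric p_m here)."""
    u = rng.random()
    if u < mu2:
        return 0
    if u < mu2 + mu1:
        return -1
    m = 1                            # sample a geometric batch size
    while rng.random() > beta:
        m += 1
    return m

def W_of(z, x_prev):
    """W_n as a deterministic function of Z_n and the previous queue-1 length."""
    if z == 0:
        return -1
    if z == -1 and x_prev > 0:
        return 1
    return 0
```

The point of `W_of` is that W_n depends on the past only through whether queue 1 was empty, which is what the later decoupling step (the walk W′_n) removes.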

SLIDE 24

Behaviour in the time interval [0, τ1]

[Figure: the sample-path diagram, with [0, τ1] in focus: #Q1 climbs to N with drift λ̆EB̆ − µ̆1 while #Q2 rises with drift µ̆1 − µ2.]

SLIDE 25

Behaviour in the time interval [0, τ1]

Proposition (for Q1): As N → ∞,

$$E\left[\tau_1 \mid M_{T_{(0,0)}} \ge N\right] = \frac{1}{\breve\lambda E\breve B - \breve\mu_1}\big(N + o(N)\big).$$

Proof.
◮ Let z be such that z > 1/(λ̆EB̆ − µ̆1). Then
$$E\left[\frac{\tau_1}{N} \,\middle|\, \tau_1 < T_{(0,0)}\right] = \int_0^z P\big(\tau_1 > yN \mid \tau_1 < T_{(0,0)}\big)\,dy + \int_z^\infty P\big(\tau_1 > yN \mid \tau_1 < T_{(0,0)}\big)\,dy.$$
◮ change of measure and use of $\lim_{N\to\infty} \breve E[\tau_1/N] = 1/(\breve\lambda E\breve B - \breve\mu_1)$

SLIDE 26

Behaviour in the time interval [0, τ1]

Proposition (for Q2): As N → ∞,

$$E\left[Y_{\tau_1} \mid M_{T_{(0,0)}} \ge N\right] \le \frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E\breve B - \breve\mu_1}\,N + o(N).$$

Proof.
◮ kill the dependence on X_n:
$$W'_n = \begin{cases} -1, & \text{if } Z_n = 0, \\ K, & \text{if } Z_n = -1, \\ 0, & \text{else,} \end{cases}$$
◮ use properties of 2-dimensional random walks:
$$\frac{V'_{\tau(N)}}{N} \to \frac{\breve E W'}{\breve E Z}, \quad \breve P\text{-a.s. as } N \to \infty$$

SLIDE 27

Behaviour in the time interval [τ1, τ2]

[Figure: the sample-path diagram, with [τ1, τ2] in focus: #Q1 drains from level N with drift λEB − µ1 while #Q2 rises with drift µ1 − µ2.]

SLIDE 28

Behaviour in the time interval [τ1, τ2]

Proposition (for Q1): As N → ∞,

$$E\left[\tau_2 - \tau_1 \mid M_{T_{(0,0)}} \ge N\right] = \frac{1}{\mu_1 - \lambda EB}\big(N + o(N)\big).$$

Proof. Definition of a recursive function and use of an exponential change of measure.

SLIDE 29

Behaviour in the time interval [τ1, τ2]

Proposition (for Q2): As N → ∞,

$$E\left[Y_{\tau_2} \mid M_{T_{(0,0)}} \ge N\right] = \left(\frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E\breve B - \breve\mu_1} + \frac{(\mu_1 - \mu_2)^+}{\mu_1 - \lambda EB}\right) N + o(N).$$

Proof.
◮ Q1 always has customers to feed Q2
◮ conditioning on {M_{T_{(0,0)}} ≥ N} and taking expectations:
$$E\left[Y_{\tau_2} \mid M_{T_{(0,0)}} \ge N\right] = E\left[Y_{\tau_1} \mid M_{T_{(0,0)}} \ge N\right] + E\left[\sum_{n=\tau_1+1}^{\tau_2} W'_n \,\middle|\, M_{T_{(0,0)}} \ge N\right]$$
◮ Wald's equation

SLIDE 30

Behaviour in the time interval [τ2, τ3]

[Figure: the sample-path diagram, with [τ2, τ3] in focus: #Q2 drains from height h2 with drift λEB − µ2.]

SLIDE 31

Behaviour in the time interval [τ2, τ3]

Proposition (for Q2): As N → ∞,

$$E\left[\tau_3 - \tau_2 \mid M_{T_{(0,0)}} \ge N\right] = \frac{1}{\mu_2 - \lambda EB}\left(\frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E\breve B - \breve\mu_1} + \frac{(\mu_1 - \mu_2)^+}{\mu_1 - \lambda EB}\right) N + o(N).$$

Proof.
◮ W_{n+1} = Y_{n+1} − Y_n are conditionally independent given (Z_i)_{i≥0}
◮ (X_n, S_n)_{n≥0}, with S_n = −Σ_{i=1}^n W_i and X_0 = S_0 = 0, is a Markov additive process (MAP) / Markov random walk (MRW) and satisfies
$$P\big(X_{n+1} \in A,\, S_{n+1} - S_n \in B \mid X_n, W_n\big) = P\big(X_n, A \times B\big).$$
◮ Markov renewal theorem for MAPs

SLIDE 32

Behaviour in the time interval [τ2, τ3]

Proposition (for Q1): As N → ∞,

$$E\left[X_{\tau_3} \mid M_{T_{(0,0)}} \ge N\right] = \frac{\lambda EB + \mu_2}{2(\mu_2 - \lambda EB)}\big(1 + o(1)\big).$$

Proof.
◮ define the martingale
$$A_n = \sum_{i=1}^n Z_i \mathbf{1}(Z_i > 0) - \sum_{i=1}^n \mathbf{1}(Z_i = 0) - (\lambda EB - \mu_2)n$$
◮ use Doob's optional sampling theorem
◮ and Wald's equation for Markov random walks
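That A_n has zero drift, hence is a martingale, can be checked in one line from the distribution of Z_n given on SLIDE 23:

```latex
E\big[Z_i \mathbf{1}(Z_i > 0)\big] - E\big[\mathbf{1}(Z_i = 0)\big]
  = \sum_{m \ge 1} m\,\lambda p_m - \mu_2
  = \lambda EB - \mu_2,
\qquad\text{so}\qquad
E\big[A_n - A_{n-1} \mid \mathcal{F}_{n-1}\big]
  = (\lambda EB - \mu_2) - (\lambda EB - \mu_2) = 0 .
```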

SLIDE 33

Numerical example – special case

◮ Geometric distribution for the batch sizes:
$$P(B = n) = \beta(1 - \beta)^{n-1}, \quad n = 1, 2, \dots$$
◮ $\gamma = -\ln\big((\lambda + \mu_1 - \beta\mu_1)/\mu_1\big)$
◮ asymptotic upper error bound:
$$\text{a.u.e.b.} = N\left[\frac{\beta}{\beta\mu_2 - \lambda}\left(\lambda\Big(1 - \frac{\mu_2}{\lambda + \mu_1 - \beta\mu_1}\Big)^+ + \beta(\mu_1 - \mu_2)^+\right) + \beta + \frac{\lambda}{\lambda + \mu_1 - \beta\mu_1}\right]\left(\frac{\lambda + \mu_1 - \beta\mu_1}{\mu_1}\right)^{N-1} \rho_1(1 - \rho_1).$$
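In this special case γ is explicit, so it can be evaluated directly. A quick numeric sketch with hypothetical helper names, recovering the rates from the loads under the uniformisation λ + µ1 + µ2 = 1, for the parameter choice of the numerical example (β = 0.5, ρ1 = 0.7, ρ2 = 0.8):

```python
import math

def rates_from_loads(beta, rho1, rho2):
    """Recover (lam, mu1, mu2) from rho_i = lam*EB/mu_i with EB = 1/beta
    and the uniformisation lam + mu1 + mu2 = 1."""
    EB = 1.0 / beta
    lam = 1.0 / (1.0 + EB / rho1 + EB / rho2)
    return lam, lam * EB / rho1, lam * EB / rho2

def decay_rate(beta, lam, mu1):
    """gamma = -ln((lam + mu1 - beta*mu1)/mu1); the ratio simplifies to
    beta*rho1 + 1 - beta, so it is < 1 (gamma > 0) exactly when rho1 < 1."""
    return -math.log((lam + mu1 - beta * mu1) / mu1)

lam, mu1, mu2 = rates_from_loads(0.5, 0.7, 0.8)
print(round(decay_rate(0.5, lam, mu1), 4))   # -> 0.1625
```

A small γ like this explains why fairly large truncation levels N are needed before the N e^{−γN} bound becomes tight.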


SLIDE 35

Numerical example – special case

Focus: the marginal distribution of Q2.
Parameter choice: β = 0.5, ρ1 = 0.7, ρ2 = 0.8.

 y         N = 10     N = 20     N = 30     N = 40     N = 50
 5         0.128921   0.025536   0.005539   0.001551   0.000755
 10        0.123171   0.029763   0.006556   0.001551   0.000517
 15        0.086761   0.026535   0.006317   0.001419   0.000349
 20        0.054454   0.020534   0.005432   0.001229   0.000237
 25        0.032516   0.014616   0.004358   0.001069   0.000221
 30        0.018948   0.009835   0.003276   0.000874   0.000195
 a.u.e.b.  0.617191   0.243018   0.071766   0.018839   0.004636


SLIDE 40

Conclusions

◮ The asymptotic error bound depends only on N and the model parameters; i.e., it is uniform in the values of x and y.
◮ The bound is conservative.
◮ The bound becomes more conservative as N increases.
◮ The undesirable behaviour of the bound is mostly attributed to the factor N.
◮ Still, it is a simple expression that converges to zero.

SLIDE 41

Thank you for your attention