On a storage process for fluid networks with multiple Lévy input


  1. On a storage process for fluid networks with multiple Lévy input
  Krzysztof Dębicki, University of Wrocław, Poland
  Presentation at EVA 2005, Gothenburg, Sweden, August 2005

  2. Outline of the talk
  • Two-node tandem network (Mandjes, van Uitert, K.D.)
    – New representation for $Q_2$
    – Lévy case: distribution of $Q_2$
    – Examples
  • n-node tandem network (Dieker, Rolski, K.D.)
    – Skorokhod problem
    – Stationary representation
    – Laplace transform

  3. Two-node tandem network
  • $r_1 > r_2 > 0$
  • $J(t)$ - input process with stationary increments, $\mathbb{E}J(1) < r_2$
  • $Q_2$ - stationary buffer content at the second node
  We are interested in $P(Q_2 > u)$ - Kella, Whitt, Rubin, Shalmon, Mandjes, van Uitert, ...

  4. Representation for $Q_2$
  Following Reich's representation we have
  $$Q_1 \stackrel{d}{=} \sup_{t \ge 0}\{J(t) - r_1 t\} \quad\text{and}\quad Q_{\mathrm{total}} \stackrel{d}{=} \sup_{t \ge 0}\{J(t) - r_2 t\}.$$
  Hence
  $$Q_2 \stackrel{d}{=} \sup_{t \ge 0}\{J(t) - r_2 t\} - \sup_{t \ge 0}\{J(t) - r_1 t\}.$$
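
The supremum representation above can be checked by simulation. Below is a minimal Monte Carlo sketch assuming standard Brownian input $J$ and illustrative rates $r_1 = 3$, $r_2 = 1$ (these parameter values are hypothetical, not from the talk); it samples $Q_2$ as the difference of the two discretized suprema.

```python
import math
import random

def simulate_q2(r1, r2, horizon=8.0, dt=0.01, n_paths=1000, seed=0):
    """Sample Q2 = sup{J(t) - r2 t} - sup{J(t) - r1 t} for Brownian J."""
    rng = random.Random(seed)
    n_steps = int(horizon / dt)
    sd = math.sqrt(dt)
    samples = []
    for _ in range(n_paths):
        j, t, sup2, sup1 = 0.0, 0.0, 0.0, 0.0
        for _ in range(n_steps):
            j += rng.gauss(0.0, sd)
            t += dt
            sup2 = max(sup2, j - r2 * t)  # sup of J(t) - r2 t (0 at t = 0)
            sup1 = max(sup1, j - r1 * t)  # sup of J(t) - r1 t
        samples.append(sup2 - sup1)
    return samples

samples = simulate_q2(r1=3.0, r2=1.0)
p_half = sum(s > 0.5 for s in samples) / len(samples)
p_one = sum(s > 1.0 for s in samples) / len(samples)
```

Discretization and the finite horizon bias both suprema slightly downward, so the tail estimates are rough; note also that the difference is nonnegative pathwise, since $J(t) - r_2 t \ge J(t) - r_1 t$ for all $t$.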

  5. New representation for $Q_2$
  Let
  $$t_u = \frac{u}{r_1 - r_2}.$$
  Theorem 1. For each $u \ge 0$,
  $$P(Q_2 > u) = P\left( \sup_{t \in [t_u, \infty)}\{J(t) - r_2 t\} - \sup_{t \in [0, t_u]}\{J(t) - r_1 t\} > u \right).$$
  This representation enables us to analyze the distribution of $Q_2$ for the following classes of input processes:
  • processes with independent increments
  • Gaussian processes

  6. Input with stationary independent increments
  Theorem 2. Let $\{J(t), t \in \mathbb{R}\}$ be a stochastic process with stationary independent increments and let $\mu = \mathbb{E}J(1) < r_2$. Then for each $u \ge 0$, with $J_1(\cdot)$ and $J_2(\cdot)$ independent copies of the process $J(\cdot)$,
  $$P(Q_2 > u) = P\left( \sup_{t \in [0, \infty)}\{J_1(t) - r_2 t\} > \sup_{t \in [0, t_u]}\{-J_2(t) + r_1 t\} \right).$$
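
Theorem 2 can be illustrated numerically: draw two independent Brownian paths, then compare the supremum over a truncated infinite horizon against the supremum over $[0, t_u]$. The parameters below ($r_1 = 3$, $r_2 = 1$, $u = 0.5$) are illustrative assumptions.

```python
import math
import random

def p_q2_exceeds(u, r1, r2, horizon=8.0, dt=0.01, n_paths=1000, seed=1):
    """Theorem 2 estimate of P(Q2 > u) from two independent copies of J."""
    t_u = u / (r1 - r2)
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        # sup over [0, horizon] of J1(t) - r2 t (truncating [0, inf))
        j, t, sup_long = 0.0, 0.0, 0.0
        for _ in range(int(horizon / dt)):
            j += rng.gauss(0.0, sd)
            t += dt
            sup_long = max(sup_long, j - r2 * t)
        # sup over [0, t_u] of -J2(t) + r1 t, with J2 independent of J1
        j, t, sup_short = 0.0, 0.0, 0.0
        for _ in range(int(t_u / dt)):
            j += rng.gauss(0.0, sd)
            t += dt
            sup_short = max(sup_short, -j + r1 * t)
        hits += sup_long > sup_short
    return hits / n_paths

est = p_q2_exceeds(u=0.5, r1=3.0, r2=1.0)
```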

  7. Input with spectrally positive Lévy process
  Let $J(t)$ be a spectrally positive Lévy process. Introduce
  $$\theta(s) := \log\left( \mathbb{E}\, e^{-s(J(1) - r_1)} \right).$$
  Theorem 3. Let $\{J(t), t \in \mathbb{R}\}$ be a spectrally positive Lévy process with $\mu := \mathbb{E}J(1) < r_2$. Then, for each $x > 0$,
  $$\mathbb{E}\, e^{-x Q_2} = \frac{r_2 - \mu}{r_1 - r_2} \cdot \frac{\theta^{-1}(x(r_1 - r_2))}{x - \theta^{-1}(x(r_1 - r_2))}.$$
  Remark 1. Theorem 3 can be considered as an analogue of the result of Zolotarev, who obtained the Laplace transform of $P(Q_1 < u)$ for $J(\cdot)$ a spectrally positive Lévy process.
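
For standard Brownian motion ($\mu = 0$, unit variance, an illustrative choice) $\theta(s) = r_1 s + s^2/2$, so $\theta^{-1}(y) = \sqrt{r_1^2 + 2y} - r_1$ in closed form, and Theorem 3 becomes directly computable; $r_1 = 3$, $r_2 = 1$ below are hypothetical values.

```python
import math

r1, r2, mu = 3.0, 1.0, 0.0  # standard BM input: E J(1) = 0, Var J(1) = 1

def theta(s):
    # theta(s) = log E exp(-s (J(1) - r1)) = r1 s + s^2 / 2 for standard BM
    return r1 * s + 0.5 * s * s

def theta_inv(y):
    # inverse of theta on the nonnegative branch
    return math.sqrt(r1 * r1 + 2.0 * y) - r1

def lst_q2(x):
    """E exp(-x Q2) from Theorem 3."""
    ti = theta_inv(x * (r1 - r2))
    return (r2 - mu) / (r1 - r2) * ti / (x - ti)

vals = [lst_q2(x) for x in (0.5, 1.0, 2.0)]
```

As $x \downarrow 0$ the transform tends to $1$, as it must for a proper distribution, and it is strictly decreasing in $x$.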

  8. Pollaczek-Khintchine representation
  Theorem 4. Let $\{J(t), t \in \mathbb{R}\}$ be a spectrally positive Lévy process with $\mu := \mathbb{E}J(1) < r_2$. Then
  $$P(Q_2 \le u) = (1 - \varrho) \sum_{i=1}^{\infty} \varrho^{i-1} H^{\star i}(u),$$
  where
  • $\varrho := (r_1 - r_2)/(r_1 - \mu)$
  • $H(\cdot)$ is a distribution function such that $H(x) = 0$ for $x < 0$ and
  $$\int_0^{\infty} e^{-xv}\, dH(v) = \frac{\theta^{-1}(x(r_1 - r_2))}{\varrho x}$$
  for $x \ge 0$.
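
A quick numeric sanity check, again for standard Brownian input (illustrative parameters): reading the transform of $H$ as $\theta^{-1}(x(r_1-r_2))/(\varrho x)$, the scaling under which $H$ is a proper distribution function and which is consistent with Theorem 3, the geometric Pollaczek-Khintchine sum reproduces the Laplace transform of Theorem 3.

```python
import math

r1, r2, mu = 3.0, 1.0, 0.0  # standard Brownian input (illustrative)
rho = (r1 - r2) / (r1 - mu)

def theta_inv(y):
    # inverse of theta(s) = r1 s + s^2 / 2 for standard BM
    return math.sqrt(r1 * r1 + 2.0 * y) - r1

def h_hat(x):
    # Laplace-Stieltjes transform of H
    return theta_inv(x * (r1 - r2)) / (rho * x)

def lst_pk(x, terms=200):
    # (1 - rho) * sum_{i >= 1} rho^(i-1) h_hat(x)^i, truncated
    return (1 - rho) * sum(rho ** (i - 1) * h_hat(x) ** i
                           for i in range(1, terms + 1))

def lst_thm3(x):
    ti = theta_inv(x * (r1 - r2))
    return (r2 - mu) / (r1 - r2) * ti / (x - ti)
```

The identity $(1-\varrho)\sum_{i\ge 1}\varrho^{i-1}\hat H(x)^i = \frac{(1-\varrho)\hat H(x)}{1-\varrho \hat H(x)}$ collapses to the Theorem 3 expression for any $\theta$, so the match is algebraic, not special to the Brownian case.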

  9. Examples: exact distributions
  • $J(t)$ is a standard Brownian motion. Then
  $$P(Q_2 > u) = \frac{r_1 - 2r_2}{r_1 - r_2}\, e^{-2 r_2 u} \left[ 1 - \Psi\!\left( \frac{(r_1 - 2r_2)\sqrt{u}}{\sqrt{r_1 - r_2}} \right) \right] + \frac{r_1}{r_1 - r_2}\, \Psi\!\left( \frac{r_1 \sqrt{u}}{\sqrt{r_1 - r_2}} \right),$$
  where $\Psi(x) = P(N > x)$.
  – If $r_1 > 2 r_2$, then $P(Q_2 > u) \sim \dfrac{r_1 - 2r_2}{r_1 - r_2}\, e^{-2 r_2 u}$.
  – If $r_1 \le 2 r_2$, then
  $$P(Q_2 > u) \sim \frac{1}{\sqrt{2\pi (r_1 - r_2)}}\, \frac{1}{\sqrt{u}}\, \exp\!\left( -\frac{r_1^2}{2(r_1 - r_2)}\, u \right).$$
  We can also get the exact distribution if
  • $J(t)$ is a Poisson process.
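
The Brownian formula can be encoded and cross-checked numerically (with the same illustrative values $r_1 = 3 > 2r_2 = 2$ as before): at $u = 0$ it equals $1$, for large $u$ it tracks the exponential asymptotics, and integrating the tail recovers the Laplace transform of Theorem 3 via $\mathbb{E}\,e^{-xQ_2} = 1 - x\int_0^\infty e^{-xu} P(Q_2 > u)\, du$.

```python
import math

r1, r2 = 3.0, 1.0  # illustrative: r1 > 2 r2, so the e^{-2 r2 u} regime applies

def Psi(x):
    # standard normal tail P(N > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail(u):
    """P(Q2 > u) for standard Brownian input."""
    s = math.sqrt(u / (r1 - r2))
    return ((r1 - 2 * r2) / (r1 - r2) * math.exp(-2 * r2 * u)
            * (1 - Psi((r1 - 2 * r2) * s))
            + r1 / (r1 - r2) * Psi(r1 * s))

def lst_thm3(x):
    # Theorem 3 for standard BM: theta(s) = r1 s + s^2 / 2
    ti = math.sqrt(r1 * r1 + 2.0 * x * (r1 - r2)) - r1
    return r2 / (r1 - r2) * ti / (x - ti)

def lst_numeric(x, upper=40.0, n=40000):
    # E exp(-x Q2) = 1 - x * integral_0^inf exp(-x u) P(Q2 > u) du (trapezoid)
    h = upper / n
    total = 0.5 * (tail(0.0) + math.exp(-x * upper) * tail(upper))
    for k in range(1, n):
        u = k * h
        total += math.exp(-x * u) * tail(u)
    return 1.0 - x * h * total
```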

  10. Examples: asymptotic results
  • $J(t) = X_{\alpha,1,\beta}(t)$ with $\alpha \in (1,2)$ and $\beta \in (-1,1]$. Then
  $$C_1 u^{1-\alpha} \le P(Q_2 > u) \le C_2 u^{1-\alpha} \quad \text{as } u \to \infty.$$
  • $J(t) = X_{\alpha,1,1}(t)$ with $\alpha \in (1,2)$. Then
  $$P(Q_2 > u) \sim \frac{1}{\Gamma(2-\alpha)\cos(\pi(\alpha-2)/2)}\, \frac{1}{r_2} \left( \frac{r_1}{r_1 - r_2} \right)^{1-\alpha} u^{1-\alpha}.$$
  We can also get the asymptotics for
  • $J(t)$ a compound Poisson input with regularly varying jumps.

  11. n-node tandem network
  • $J(t) = (J_1(t), \ldots, J_n(t))'$ - $n$-dimensional Lévy process with mutually independent components; $J_1(t)$ is a spectrally positive Lévy process and $J_2(t), \ldots, J_n(t)$ are subordinators
  • $r = (r_1, \ldots, r_n)'$ - output rates
  • $P = (p_{ij})_{i,j=1}^{n}$ - routing matrix; $0 < p_{i,i+1} \le 1$ and $p_{ij} = 0$ if $j \ne i+1$
  Moreover, we tacitly assume that:
  N1 (Work-conserving): $p_{i,i+1}\, r_i > r_{i+1}$,
  N2 (Stability): $(I - P')^{-1}\, \mathbb{E}J(1) < r$.
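
For this tandem routing, $P'$ is strictly subdiagonal, hence nilpotent, so $(I - P')^{-1} = I + P' + \cdots + P'^{\,n-1}$, and its $(i,k)$ entry is the product of routing fractions along the path from node $k$ to node $i$. A small sketch with hypothetical rates and routing fractions, checking N1-N2:

```python
n = 3
p = {(1, 2): 0.5, (2, 3): 0.8}   # routing fractions p_{i,i+1} (hypothetical)
r = [4.0, 1.5, 1.0]              # output rates (hypothetical)
mean_j = [1.0, 0.1, 0.1]         # E J_i(1) (hypothetical)

def inv_i_minus_pt():
    # P' is strictly subdiagonal (nilpotent), so the inverse is a finite sum;
    # entry (i, k) with k <= i is the product p_{k,k+1} ... p_{i-1,i}
    m = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for k in range(i):
            prod = 1.0
            for j in range(k, i):
                prod *= p[(j + 1, j + 2)]
            m[i][k] = prod
    return m

M = inv_i_minus_pt()
# N1 (work-conserving): p_{i,i+1} r_i > r_{i+1}
n1 = all(p[(i, i + 1)] * r[i - 1] > r[i] for i in range(1, n))
# N2 (stability): (I - P')^{-1} E J(1) < r componentwise
flow = [sum(M[i][k] * mean_j[k] for k in range(n)) for i in range(n)]
n2 = all(f < ri for f, ri in zip(flow, r))
```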

  12. n-node tandem network, ctd.
  We are interested in the transient joint distribution of
  • $Q(t) = (Q_1(t), \ldots, Q_n(t))'$ - storage process
  • $B(t) = (B_1(t), \ldots, B_n(t))'$ - running busy period process, where
  $$B_i(t) = t - \sup\{0 \le s \le t : Q_i(s) = 0\}.$$
  $Q(t)$ is the unique solution of the Skorokhod problem for $J(t) - (I - P')rt$ with reflection matrix $I - P'$, that is:
  S1 $Q(t) = w + J(t) - (I - P')rt + (I - P')L(t)$, $t \ge 0$,
  S2 $Q(t) \ge 0$, $t \ge 0$, and $Q(0) = w$,
  S3 $L(0) = 0$ and $L$ is nondecreasing,
  S4 $\sum_{i=1}^{n} \int_0^{\infty} Q_i(t)\, dL_i(t) = 0$.
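
With reflection matrix $I$ the Skorokhod problem decouples into one-dimensional reflections, where the minimal pusher is $L(t) = \max(0, -\inf_{s \le t} x(s))$ for $w = 0$. A discrete sketch on an arbitrary deterministic path, verifying the analogues of S2-S4:

```python
import math

def reflect(xs):
    """Discrete 1-D Skorokhod map (w = 0): q = x + L >= 0,
    L(t) = max(0, -min_{s<=t} x(s)) nondecreasing, L grows only when q = 0."""
    run_min, L, q = 0.0, [], []
    for x in xs:
        run_min = min(run_min, x)
        push = max(0.0, -run_min)
        L.append(push)
        q.append(x + push)
    return q, L

# an arbitrary deterministic free path that dips below zero
xs = [math.sin(0.7 * k) - 0.05 * k for k in range(200)]
q, L = reflect(xs)
```

At each step where $L$ increases, the running minimum is attained at the current point, so $q = 0$ there; the discrete complementarity sum $\sum_k q_k\,\Delta L_k$ therefore vanishes exactly.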

  13. n-node tandem network, ctd.
  Proposition 1. Suppose that $Q(t)$ is the storage process associated to the stochastic network $(J, r, P)$. Then $(I - P')^{-1} Q(t)$ is the solution to the Skorokhod problem for $X(t) = (I - P')^{-1} J(t) - rt$ with reflection matrix $I$.
  Theorem 5. The stationary distribution of $(Q(\infty), B(\infty))$ is the same as the distribution of $((I - P')X, G)$, where $X = (X_1, \ldots, X_n)'$ and $G = (G_1, \ldots, G_n)'$ with
  $$X_i = \sup_{t \ge 0}\left\{ \sum_{k=1}^{i} \left( \prod_{j=k}^{i-1} p_{j,j+1} \right) J_k(t) - r_i t \right\},$$
  $$G_k = \sup\{ t \ge 0 : X_k(t) = X_k \},$$
  where $X_k(t)$ denotes the $k$-th component of $X(t)$.
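
Proposition 1 gives a concrete recipe: reflect each component of $X(t) = (I - P')^{-1}J(t) - rt$ separately (identity reflection), then map back by $(I - P')$. A two-node sketch with Brownian $J_1$, no external input at node 2, and $p_{12} = 1$ (all illustrative assumptions); in particular $Q_2 = \tilde Q_2 - p_{12}\tilde Q_1$ comes out nonnegative, as it must.

```python
import math
import random

# Two-node tandem, p12 = 1, Brownian J1, no external input at node 2
# (all parameter choices illustrative)
p12, r1, r2 = 1.0, 3.0, 1.0
dt, n_steps = 0.01, 500
rng = random.Random(2)

j1 = [0.0]
for _ in range(n_steps):
    j1.append(j1[-1] + rng.gauss(0.0, math.sqrt(dt)))

# components of X(t) = (I - P')^{-1} J(t) - r t
x1 = [j1[k] - r1 * (k * dt) for k in range(n_steps + 1)]
x2 = [p12 * j1[k] - r2 * (k * dt) for k in range(n_steps + 1)]

def reflect(xs):
    # 1-D Skorokhod map with identity reflection, started at w = 0
    run_min, out = 0.0, []
    for x in xs:
        run_min = min(run_min, x)
        out.append(x + max(0.0, -run_min))
    return out

qt1, qt2 = reflect(x1), reflect(x2)
# map back: Q = (I - P') Qtilde, i.e. Q1 = Qtilde1, Q2 = Qtilde2 - p12 Qtilde1
q1 = qt1
q2 = [b - p12 * a for a, b in zip(qt1, qt2)]
```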

  14. n-node tandem network, ctd.
  Theorem 6. Consider a tandem stochastic network $(J, r, P)$ such that N1-N2 hold. Then for $\alpha = (\alpha_1, \ldots, \alpha_n)' > 0$, $\beta = (\beta_1, \ldots, \beta_n)' > 0$,
  $$\mathbb{E}\, e^{-\langle \alpha, Q(\infty) \rangle - \langle \beta, B(\infty) \rangle} = \mathbb{E}\, e^{-\alpha_n X_n - \beta_n G_n} \times \prod_{j=1}^{n-1} \frac{\mathbb{E}\, e^{-\alpha_j X_j - \left[ \sum_{\ell=j+1}^{n} \Psi_{J_\ell}(\alpha_\ell) + \sum_{\ell=j+1}^{n} (p_{\ell-1,\ell}\, r_{\ell-1} - r_\ell)\alpha_\ell + \sum_{p=j}^{n} \beta_p \right] G_j}}{\mathbb{E}\, e^{-p_{j,j+1} \alpha_{j+1} X_j - \left[ \sum_{\ell=j+1}^{n} \Psi_{J_\ell}(\alpha_\ell) + \sum_{\ell=j+1}^{n} (p_{\ell-1,\ell}\, r_{\ell-1} - r_\ell)\alpha_\ell + \sum_{p=j+1}^{n} \beta_p \right] G_j}},$$
  where $X(t) = (I - P')^{-1} J(t) - rt$ and
  $$\Psi_{J_i}(\lambda) = -\log\left( \mathbb{E}\, e^{-\lambda J_i(1)} \right).$$
  The formula can be made explicit by the use of fluctuation identities, but it is a bit long...
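
The exponents $\Psi_{J_i}$ are Laplace exponents of the inputs. For a compound-Poisson subordinator with jump rate $\nu$ and $\mathrm{Exp}(\gamma)$ jumps (a hypothetical illustrative choice, not from the talk), $\Psi(\lambda) = \nu\lambda/(\gamma + \lambda)$; the sketch below checks it against a Monte Carlo estimate of $\mathbb{E}\, e^{-\lambda J(1)}$.

```python
import math
import random

nu, gamma = 1.0, 2.0  # jump rate and Exp-jump parameter (hypothetical)

def psi(lam):
    # Laplace exponent Psi(lam) = -log E exp(-lam J(1)) for a
    # compound-Poisson subordinator with Exp(gamma) jumps
    return nu * lam / (gamma + lam)

rng = random.Random(3)

def sample_j1():
    # J(1): Poisson(nu) many Exp(gamma) jumps, via exponential interarrivals
    acc = 0.0
    t = rng.expovariate(nu)
    while t < 1.0:
        acc += rng.expovariate(gamma)
        t += rng.expovariate(nu)
    return acc

lam = 1.0
est = sum(math.exp(-lam * sample_j1()) for _ in range(20000)) / 20000
```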
