  1. Tail behaviour of stationary distribution for Markov chains with asymptotically zero drift. D. Denisov, University of Manchester (jointly with D. Korshunov and V. Wachtel). April 2014.

  2. Outline: (1) Statement of problem; (2) Examples, main results and known results; (3) General approach - random walk example; (4) Harmonic functions and change of measure; (5) Renewal theorem; (6) Further developments.

  3. Object of study. One-dimensional homogeneous Markov chain $X_n$, $n = 0, 1, 2, \ldots$, on $\mathbb{R}_+$. Let $\xi(x)$ be a random variable corresponding to a jump at point $x$, i.e. $P(\xi(x) \in B) = P(X_{n+1} - X_n \in B \mid X_n = x)$. Let $m_k(x) := E[\xi(x)^k]$. Main assumptions: small drift, $m_1(x) \sim -\mu/x$ as $x \to \infty$; finite variance, $m_2(x) \to b$ as $x \to \infty$.
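
As a concrete illustration (my sketch, not from the talk), a chain of this type can be simulated directly: Gaussian jumps with mean $-\mu/x$ and variance $b$, reflected at $0$ so the chain stays on $\mathbb{R}_+$, with the moment assumptions checked empirically. All parameter values and function names below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, b = 2.0, 1.0                      # drift constant and limiting variance (illustrative)

    def jump(x, size=1):
        # xi(x): Gaussian jump with mean ~ -mu/x and variance b
        return rng.normal(-mu / max(x, 1.0), np.sqrt(b), size=size)

    def step(x):
        # one transition of the chain, reflected at 0 so that X_n stays in R_+
        return abs(x + jump(x)[0])

    # empirical check of the moment assumptions at a fixed large level x
    x = 50.0
    xi = jump(x, size=200_000)
    print("m1(x) ~", xi.mean(), "   target -mu/x =", -mu / x)
    print("m2(x) ~", (xi**2).mean(), "   target ~ b =", b)

    # a short trajectory of the chain itself
    path = [10.0]
    for _ in range(1_000):
        path.append(step(path[-1]))
    print("X_1000 =", path[-1])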

  4. Questions. (1) If $2x\,m_1(x) + m_2(x) \le -\varepsilon$ then $X_n$ is ergodic (Lamperti). What can one say about the stationary distribution? (2) If $2x\,m_1(x) - m_2(x) \ge \varepsilon$ then $X_n$ is transient. What can one say about the renewal (Green) function $H(x) = \sum_{n=0}^{\infty} P(X_n \le x)$ as $x \to \infty$?
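
Since $H(x) = \sum_{n \ge 0} P(X_n \le x)$ is the expected total time the chain spends in $[0, x]$, it can be estimated by Monte Carlo in the transient case. Below is a hedged sketch (not from the talk) for a nearest-neighbour chain with drift $+\mu/x$, which is transient by criterion (2); the chain, parameters and path-length truncation are all illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    mu = 2.0               # jumps are +/-1, so m2 = 1 and 2*x*m1(x) - m2(x) -> 2*mu - 1 > 0: transient

    def step(x):
        # nearest-neighbour chain on Z_+ with upward drift ~ +mu/x (illustrative example)
        p_up = 0.5 * (1.0 + mu / max(x, mu + 1.0))
        return x + 1 if rng.random() < p_up else max(x - 1, 0)

    def occupation_below(level, n_steps=50_000, start=1):
        # time spent in [0, level]; for a transient chain this approximates H(level)
        x, count = start, 0
        for _ in range(n_steps):
            if x <= level:
                count += 1
            x = step(x)
        return count

    for level in (5, 10, 20):
        est = np.mean([occupation_below(level) for _ in range(40)])
        print(f"H({level}) ~ {est:.1f}")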

  5. Continuous time - Bessel-like diffusions. Let $X_t$ be the solution to the SDE $dX_t = -\frac{\mu(X_t)}{X_t}\,dt + \sigma(X_t)\,dW_t$, $X_0 = x > 0$, where $\mu(x) \to \mu$ and $\sigma(x) \to \sigma$. For Bessel processes $\mu(x) = \mathrm{const}$ and $\sigma(x) = \mathrm{const}$. We can use the forward Kolmogorov equation $0 = \frac{d}{dx}\Big[\frac{\mu(x)}{x}\,p(x)\Big] + \frac{1}{2}\,\frac{d^2}{dx^2}\big[\sigma^2(x)\,p(x)\big]$ to find the exact stationary density $p(x) = \frac{c}{\sigma^2(x)}\exp\Big(-\int_0^x \frac{2\mu(y)}{y\,\sigma^2(y)}\,dy\Big)$, $c > 0$. Then $p(x) \approx C\exp\Big(-\int_1^x \frac{2\mu}{b\,y}\,dy\Big) \sim C x^{-2\mu/b}$ and $\pi(x, +\infty) = \int_x^\infty p(y)\,dy \approx C x^{-2\mu/b+1}$.
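
A quick numerical sanity check of this computation (my sketch; constant coefficients $\mu(x) \equiv \mu$ and $\sigma^2(x) \equiv b$ are assumed): evaluate the density formula, integrate its tail, and compare the decay with the exponent $-2\mu/b + 1$.

    import numpy as np

    mu, b = 2.0, 1.0      # illustrative constants; the predicted tail exponent is -2*mu/b + 1 = -3

    def p(x):
        # stationary density with mu(y) = mu, sigma^2(y) = b:
        # p(x) ~ exp(-int_1^x 2*mu/(b*y) dy) / b = x**(-2*mu/b) / b  (the lower limit only shifts the constant)
        return x ** (-2.0 * mu / b) / b

    def tail(x, upper=1e8, n=5_000):
        # pi(x, +infinity) = int_x^infinity p(y) dy, via the trapezoid rule on a logarithmic grid
        grid = np.geomspace(x, upper, n)
        vals = p(grid)
        return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

    xs = np.array([10.0, 20.0, 40.0, 80.0])
    slope = np.polyfit(np.log(xs), np.log([tail(x) for x in xs]), 1)[0]
    print("fitted tail exponent:", round(slope, 3), "   theory:", -2 * mu / b + 1)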

  6. Simple Markov chain. Markov chain on $\mathbb{Z}$ with $P_x(X_1 = x+1) = p_+(x)$ and $P_x(X_1 = x-1) = p_-(x)$. Then the stationary probabilities $\pi(x)$ satisfy $\pi(x) = \pi(x-1)\,p_+(x-1) + \pi(x+1)\,p_-(x+1)$, with solution $\pi(x) = \pi(0)\prod_{k=1}^{x}\frac{p_+(k-1)}{p_-(k)} = \pi(0)\exp\Big(\sum_{k=1}^{x}\big(\log p_+(k-1) - \log p_-(k)\big)\Big)$.
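
A small worked example (assumptions mine): take $p_+(x) = \tfrac{1}{2}\big(1 - \mu/(x + 2\mu)\big)$, so that $m_1(x) \approx -\mu/x$ and $m_2(x) = 1$, with the chain held at $0$ with the remaining probability so that it lives on $\mathbb{Z}_+$. The product formula then gives the stationary distribution, and the balance equation can be verified numerically.

    import numpy as np

    mu = 2.0                                 # illustrative drift constant; jumps are +/-1, so m2(x) = 1

    def p_plus(x):                           # chosen so that m1(x) = p_plus(x) - p_minus(x) ~ -mu/x
        return 0.5 * (1.0 - mu / (x + 2.0 * mu))

    def p_minus(x):
        return 1.0 - p_plus(x)

    # product-formula solution pi(x) = pi(0) * prod_{k=1}^{x} p_plus(k-1) / p_minus(k)
    N = 2_000
    pi = np.empty(N + 1)
    pi[0] = 1.0
    for x in range(1, N + 1):
        pi[x] = pi[x - 1] * p_plus(x - 1) / p_minus(x)
    pi /= pi.sum()                           # normalise; the tail beyond N is negligible here

    # check the stationary equation pi(x) = pi(x-1) p_plus(x-1) + pi(x+1) p_minus(x+1)
    for x in (1, 10, 100):
        print(x, pi[x], pi[x - 1] * p_plus(x - 1) + pi[x + 1] * p_minus(x + 1))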

  7. Asymptotics for the tail of the stationary measure. Theorem. Suppose that, as $x \to \infty$, $m_1(x) \sim -\mu/x$, $m_2(x) \to b$ and $2\mu > b$ (1). Suppose also some technical conditions, that $m_3(x) \to m_3 \in (-\infty, \infty)$ as $x \to \infty$ (2), and that, for some $A < \infty$, $E\{\xi^{2\mu/b+3+\delta}(x);\ \xi(x) > Ax\} = O(x^{2\mu/b})$ (3). Then there exists a constant $c > 0$ such that $\pi(x, \infty) \sim c\,x\,e^{-\int_0^x r(y)\,dy} = c\,x^{-2\mu/b+1}\,\ell(x)$ as $x \to \infty$.
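
For the toy nearest-neighbour chain above ($b = 1$, $\mu = 2$), the theorem predicts a tail exponent $-2\mu/b + 1 = -3$. A hedged numerical check of that exponent (illustrative only; it does not verify the technical conditions):

    import numpy as np

    mu = 2.0                                 # +/-1 jumps give b = 1, so the predicted exponent is -2*mu + 1 = -3

    def p_plus(x): return 0.5 * (1.0 - mu / (x + 2.0 * mu))
    def p_minus(x): return 1.0 - p_plus(x)

    N = 20_000
    pi = np.empty(N + 1)
    pi[0] = 1.0
    for x in range(1, N + 1):
        pi[x] = pi[x - 1] * p_plus(x - 1) / p_minus(x)
    pi /= pi.sum()

    tail = pi[::-1].cumsum()[::-1]           # tail[x] = sum_{y >= x} pi(y), i.e. pi([x, infinity))
    xs = np.array([100, 200, 400, 800, 1600])
    slope = np.polyfit(np.log(xs), np.log(tail[xs]), 1)[0]
    print("fitted exponent:", round(slope, 3), "   theorem predicts:", -2 * mu + 1)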

  8. Known results. Menshikov and Popov (1995) investigated Markov chains on $\mathbb{Z}_+$ with bounded jumps and showed that $c_-\,x^{-2\mu/b-\varepsilon} \le \pi(\{x\}) \le c_+\,x^{-2\mu/b+\varepsilon}$. Korshunov (2011) obtained the following estimate: $\pi(x, \infty) \le c(\varepsilon)\,x^{-2\mu/b+1+\varepsilon}$.

  9. General approach - random walk example. Consider a classical example, the Lindley recursion $W_{n+1} = (W_n + \xi_n)^+$, $n = 0, 1, 2, \ldots$, $W_0 = 0$, assuming that $E\xi = -a < 0$. This is an ergodic Markov chain (note that the drift is not small). A classical approach consists of three key steps. Step 1: reverse time and consider the random walk $S_n = \xi_1 + \cdots + \xi_n$, $n = 1, 2, \ldots$, $S_0 = 0$. Then $W_n \stackrel{d}{\to} W = \sup_{n \ge 0} S_n$.
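
A simulation sketch of this step (illustrative; the increment law $\xi \sim \mathrm{Normal}(-a, 1)$ is my assumption): for large $n$ the quantiles of $W_n$ should match those of $\sup_{0 \le k \le n} S_k$.

    import numpy as np

    rng = np.random.default_rng(2)
    a, n, m = 0.5, 2_000, 20_000             # drift, time horizon, number of Monte Carlo copies (illustrative)

    # W_n via the Lindley recursion, vectorised over m independent copies
    W = np.zeros(m)
    for _ in range(n):
        W = np.maximum(W + rng.normal(-a, 1.0, size=m), 0.0)

    # running supremum of the random walk S_k = xi_1 + ... + xi_k (same increment law)
    S, M = np.zeros(m), np.zeros(m)
    for _ in range(n):
        S += rng.normal(-a, 1.0, size=m)
        M = np.maximum(M, S)

    # W_n and sup_k S_k should have (approximately) the same distribution
    for q in (0.5, 0.9, 0.99):
        print(q, round(float(np.quantile(W, q)), 2), round(float(np.quantile(M, q)), 2))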

  10. Random walks ctd. Step 2: exponential change of measure. Assuming that there exists $\kappa > 0$ such that $E[e^{\kappa\xi}] = 1$ and $E[\xi e^{\kappa\xi}] < \infty$, one can perform the change of measure $P(\widehat\xi_n \in dx) = e^{\kappa x}\,P(\xi_n \in dx)$, $n = 1, 2, \ldots$. Under the new measure $\widehat S_n = \widehat\xi_1 + \cdots + \widehat\xi_n$ and $E\widehat\xi_1 > 0$, so $\widehat S_n \to +\infty$ while $S_n \to -\infty$.
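
A numerical sketch of Step 2 (illustrative; $\xi \sim \mathrm{Normal}(-a, 1)$ is assumed, for which the Cramér root is known in closed form, $\kappa = 2a$): solve $E[e^{\kappa\xi}] = 1$ and check that the tilted increment has positive mean.

    import numpy as np
    from scipy.optimize import brentq

    a = 0.5                                  # xi ~ Normal(-a, 1); the exact Cramér root is kappa = 2a = 1

    def mgf(k):
        # E[exp(k * xi)] for xi ~ Normal(-a, 1)
        return np.exp(-a * k + 0.5 * k**2)

    # positive root of E[exp(kappa * xi)] = 1
    kappa = brentq(lambda k: mgf(k) - 1.0, 1e-6, 10.0)
    print("kappa =", kappa, "   exact 2a =", 2 * a)

    # Monte Carlo check: under the tilted law P^(d xi) = e^{kappa xi} P(d xi) the mean is positive
    rng = np.random.default_rng(3)
    xi = rng.normal(-a, 1.0, size=1_000_000)
    w = np.exp(kappa * xi)                   # Radon-Nikodym weights; their mean is ~ 1 since E e^{kappa xi} = 1
    print("tilted mean ~", np.average(xi, weights=w), "   exact +a =", a)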

  11. Random walks ctd. Step 3: use the renewal theorem for $\widehat S_n$. This step uses the ladder-heights construction and represents $P(M \in dx) = C\,H(dx) = C\,e^{-\kappa x}\,\widehat H(dx)$. Now one can apply the standard renewal theorem, $\widehat H(dy) \sim dy/c$, to obtain $P(M \in dx) \sim c\,e^{-\kappa x}\,dx$ as $x \to \infty$.
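
A hedged check of the resulting exponential tail (same assumed Gaussian increments as above, so $\kappa = 2a$): $\log P(M > x)$ should be approximately linear in $x$ with slope $-\kappa$.

    import numpy as np

    rng = np.random.default_rng(4)
    a, kappa = 0.5, 1.0                      # xi ~ Normal(-a, 1); Cramér exponent kappa = 2a
    n, m = 2_000, 100_000                    # horizon and number of walks (illustrative)

    # running supremum M = max(0, S_1, ..., S_n) for m independent walks
    S, M = np.zeros(m), np.zeros(m)
    for _ in range(n):
        S += rng.normal(-a, 1.0, size=m)
        M = np.maximum(M, S)

    xs = np.array([1.0, 2.0, 3.0, 4.0])
    logp = np.log([np.mean(M > x) for x in xs])
    slope = np.polyfit(xs, logp, 1)[0]
    print("fitted slope of log P(M > x):", round(slope, 3), "   -kappa =", -kappa)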

  12. Asymptotically homogeneous Markov chains. One can repeat this programme for asymptotically homogeneous Markov chains. Namely, assume $\xi(x) \stackrel{d}{\to} \xi$ as $x \to \infty$, where $E[e^{\kappa\xi}] = 1$ and $E[\xi e^{\kappa\xi}] < \infty$. Borovkov and Korshunov (1996) showed that if $\sup_x E\,e^{\kappa\xi(x)} < \infty$ and $\int_0^\infty\Big(\int_{\mathbb{R}} e^{\kappa t}\,\big|P(\xi(x) < t) - P(\xi < t)\big|\,dt\Big)\,dx < \infty$, then $\pi(x, \infty) \sim C e^{-\kappa x}$ as $x \to \infty$.

  13. Problems in our case. Problem 1 (easier): in our case the drift $E\xi(x) \to 0$ as $x \to \infty$. Hence, for $1 = E\exp\{\kappa\,\xi(x)\} \approx 1 + \kappa\,E\xi(x)$, $x \to \infty$, to hold we need $\kappa = \kappa(x) \to \infty$ as $x \to \infty$. Therefore, the exponential change of measure does not work.

  14. Problems in our case. Problem 2: suppose we managed to make a change of measure. As a result, $\widehat X_n \stackrel{a.s.}{\to} +\infty$ and $E\,\widehat\xi(x) \to 0$. Then there is no renewal theorem for $\widehat H(x) = \sum_{n=0}^{\infty} P(\widehat X_n \le x)$. The main reason is that $\widehat X_n / n^{c} \stackrel{d}{\to} \Gamma(\alpha, \beta)$, which makes the problem difficult.

  15. Harmonic function. Step 1: change of measure via a harmonic function. Let $B$ be a Borel set in $\mathbb{R}_+$ with $\pi(B) > 0$; in our case $B = [0, x_0]$. Let $\tau_B := \min\{n \ge 1 : X_n \in B\}$. Note that $E_x\tau_B < \infty$ for every $x$. $V(x)$ is a harmonic function for $X_n$ killed at the time of the first visit to $B$ if $V(x) = E_x\{V(X_1); \tau_B > 1\} = E_x\{V(X_1); X_1 \notin B\}$. If $V$ is harmonic then $V(x) = E_x\{V(X_n); \tau_B > n\}$ for every $n$ (4).
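
A toy sanity check of identity (4) (my example, not the chain of the talk): for the simple symmetric $\pm 1$ walk killed on its first visit to $B = \{x : x \le 0\}$, the function $V(x) = x$ is harmonic, so Monte Carlo estimates of $E_x\{V(X_n); \tau_B > n\}$ should return $V(x)$ for every $n$.

    import numpy as np

    rng = np.random.default_rng(5)

    def V(x):
        # candidate harmonic function for the symmetric walk killed on B = {x <= 0}
        return x

    def killed_expectation(x, n, samples=200_000):
        # Monte Carlo estimate of E_x[ V(X_n); tau_B > n ] for the simple symmetric +/-1 walk,
        # where tau_B is the first time the walk enters B (i.e. reaches 0 or below)
        steps = rng.choice([-1, 1], size=(samples, n))
        paths = x + np.cumsum(steps, axis=1)
        alive = (paths > 0).all(axis=1)      # event {tau_B > n}
        return float(np.mean(V(paths[:, -1]) * alive))

    for x in (1, 3):
        for n in (1, 5, 20):
            print(f"x={x}, n={n}:  E_x[V(X_n); tau_B > n] ~ {killed_expectation(x, n):.3f}   V(x) = {V(x)}")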
