From persistent random walk to the telegraph noise


  1. From persistent random walk to the telegraph noise
P. Vallois, Institut Élie Cartan Nancy, Nancy-Université
Joint work with C. Tapiero (NYU) and S. Herrmann (Institut Élie Cartan Nancy)
Roscoff, March 22, 2010

  2. Outline
- Introduction
- Results at a fixed time and applications
- From discrete to continuous time
- A few extensions

  3. 1. Introduction
Let $(Y_n)_{n\ge 0}$ be a Markov chain taking its values in $\{-1,1\}$ with transition matrix (rows and columns indexed by the states $-1, +1$):
$$\pi = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}, \qquad 0<\alpha<1,\quad 0<\beta<1.$$
Associated with $(Y_n)$, consider the process
$$X_n := Y_0 + Y_1 + \cdots + Y_n, \qquad n \ge 0.$$
$(X_n)$ is said to be a persistent random walk. Two particular cases are interesting:
- $\beta = 1-\alpha$: $(X_n)$ is a classical random walk whose increment is distributed as $(1-\alpha)\,\delta_{-1} + \alpha\,\delta_1$.
- $\beta = \alpha$: $(X_n)$ is a Kac random walk: $Y_{n+1} = Y_n$ with probability $1-\alpha$ and $-Y_n$ otherwise.
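To make the model concrete, here is a minimal Python sketch of the persistent random walk under the transition convention above (the helper name `simulate_persistent_walk` is mine, not from the slides):

```python
import numpy as np

def simulate_persistent_walk(alpha, beta, t, y0=-1, rng=None):
    """Simulate (Y_0, ..., Y_t) and return X_t = Y_0 + ... + Y_t.

    Convention: from state -1 the chain jumps to +1 with probability alpha,
    from state +1 it jumps to -1 with probability beta.
    """
    rng = np.random.default_rng() if rng is None else rng
    y, x = y0, y0
    for _ in range(t):
        if y == -1:
            y = 1 if rng.random() < alpha else -1
        else:
            y = -1 if rng.random() < beta else 1
        x += y
    return x

# beta = 1 - alpha reproduces the classical walk, beta = alpha the Kac walk.
```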

  4. 2. Study at a fixed time and applications
Proposition 1. Let $\rho := 1-\alpha-\beta$ be the asymmetry factor. Then:
$$E[X_t \mid Y_0 = -1] = \frac{\alpha-\beta}{1-\rho}\,(t+1) - \frac{2\alpha}{(1-\rho)^2}\,\bigl(1-\rho^{\,t+1}\bigr),$$
$$E[X_t \mid Y_0 = +1] = \frac{\alpha-\beta}{1-\rho}\,(t+1) + \frac{2\beta}{(1-\rho)^2}\,\bigl(1-\rho^{\,t+1}\bigr).$$
Remark
1) In the classical random walk case we have $\rho = 0$.
2) It is actually possible to compute explicitly the second moment of $X_t$; see C. Tapiero and P.V., Memory-based persistence in a counting random walk process, Physica A, 2007.
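A quick numerical check of Proposition 1 (a sketch; the values of alpha, beta, t are arbitrary): the closed form for $Y_0 = -1$ is compared with the mean obtained by propagating the law of $Y_n$ under the transition convention of the previous slide.

```python
import numpy as np

def mean_Xt_exact(alpha, beta, t, y0=-1):
    """E[X_t | Y_0 = y0] obtained by propagating the law of Y_n."""
    # p = (P(Y_n = -1), P(Y_n = +1)); transition rows ordered (-1, +1)
    P = np.array([[1 - alpha, alpha], [beta, 1 - beta]])
    p = np.array([1.0, 0.0]) if y0 == -1 else np.array([0.0, 1.0])
    mean = -p[0] + p[1]              # contribution of Y_0
    for _ in range(t):
        p = p @ P
        mean += -p[0] + p[1]
    return mean

def mean_Xt_formula(alpha, beta, t):
    """Closed form of Proposition 1 for Y_0 = -1."""
    rho = 1 - alpha - beta
    return (alpha - beta) / (1 - rho) * (t + 1) \
           - 2 * alpha / (1 - rho) ** 2 * (1 - rho ** (t + 1))

alpha, beta, t = 0.3, 0.6, 25
print(mean_Xt_exact(alpha, beta, t), mean_Xt_formula(alpha, beta, t))
```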

  5. Let us introduce:
$$\Phi(\lambda, t) = E\bigl[\lambda^{X_t}\bigr], \qquad \lambda > 0.$$
Proposition 2. The generating function of $X_t$ equals
$$\Phi(\lambda, t) = a_+\,\theta_+^{\,t} + a_-\,\theta_-^{\,t}$$
with, when $X_0 = Y_0 = -1$,
$$a_+ = \frac{1-\alpha+\lambda(\lambda\alpha-\theta_-)}{\lambda^2\sqrt D}, \qquad a_- = \frac{1}{\lambda} - a_+,$$
and
$$\theta_\pm := \frac{1}{2}\left[\frac{1-\alpha}{\lambda} + (1-\beta)\lambda \pm \sqrt D\right], \qquad D = \left(\frac{1-\alpha}{\lambda} + (1-\beta)\lambda\right)^2 - 4(1-\alpha-\beta).$$
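The formulas of Proposition 2 can be checked numerically; the sketch below evaluates the closed form and compares it with $E[\lambda^{X_t}]$ computed by iterating the recursive relations of the next slide (function names are mine, and the transition convention of slide 3 is assumed).

```python
import numpy as np

def phi_closed_form(lam, t, alpha, beta):
    """Proposition 2: Phi(lam, t) = a_+ theta_+^t + a_- theta_-^t  (X_0 = Y_0 = -1)."""
    trace = (1 - alpha) / lam + (1 - beta) * lam
    D = trace ** 2 - 4 * (1 - alpha - beta)
    sqD = np.sqrt(D)
    theta_p, theta_m = (trace + sqD) / 2, (trace - sqD) / 2
    a_p = (1 - alpha + lam * (lam * alpha - theta_m)) / (lam ** 2 * sqD)
    a_m = 1 / lam - a_p
    return a_p * theta_p ** t + a_m * theta_m ** t

def phi_by_recursion(lam, t, alpha, beta):
    """E[lam^{X_t}] computed by iterating the one-step recursion for Phi_- and Phi_+."""
    phi_minus, phi_plus = 1 / lam, 0.0          # X_0 = Y_0 = -1
    for _ in range(t):
        phi_minus, phi_plus = ((1 - alpha) / lam * phi_minus + beta / lam * phi_plus,
                               alpha * lam * phi_minus + (1 - beta) * lam * phi_plus)
    return phi_minus + phi_plus

print(phi_closed_form(1.3, 12, 0.3, 0.6), phi_by_recursion(1.3, 12, 0.3, 0.6))
```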

  6. Sketch of the proof of Proposition 2
We decompose $\Phi(\lambda,t)$ as follows:
$$\Phi(\lambda,t) = \Phi_-(\lambda,t) + \Phi_+(\lambda,t), \quad \text{with} \quad \Phi_-(\lambda,t) = E\bigl[\lambda^{X_t}\,1_{\{Y_t=-1\}}\bigr], \quad \Phi_+(\lambda,t) = E\bigl[\lambda^{X_t}\,1_{\{Y_t=1\}}\bigr].$$
Then, we obtain the recursive relations:
$$\Phi_-(\lambda,t+1) = \frac{1-\alpha}{\lambda}\,\Phi_-(\lambda,t) + \frac{\beta}{\lambda}\,\Phi_+(\lambda,t),$$
$$\Phi_+(\lambda,t+1) = \alpha\lambda\,\Phi_-(\lambda,t) + (1-\beta)\lambda\,\Phi_+(\lambda,t). \qquad \square$$

  7. An application to insurance
- A "normal claim" is labelled 0 and its value at time $i$ is $Z^0_i$; an "unusual claim" (for instance a "large" one) is labelled 1 and equals $Z^1_i$ at time $i$.
- The claims $(Z^0_i,\ i \ge 1)$ are i.i.d., $(Z^1_i,\ i \ge 1)$ are i.i.d., and the two families of r.v.'s are independent.
- The process which attributes the labels is $(Y'_i)$. So $Y'_i = 0$ if a normal claim occurs at time $i$. Note that $Y'_i \in \{0,1\}$. Set $Y_i = 2Y'_i - 1$. Then $Y_i \in \{-1,1\}$ and $Y'_i = 1 \Leftrightarrow Y_i = 1$.
- We suppose that:
  * $(Y'_i)$ is a Markov chain (then $(Y_i)$ is a Markov chain as above);
  * all the claims $(Z^j_i,\ j = 0,1,\ i \ge 1)$ and $(Y'_i)$ are independent.

  8. The sum of the claims at time $t$ is:
$$\xi_t = \sum_{i=0}^{t} Z^1_i\,1_{\{Y'_i = 1\}} + \sum_{i=0}^{t} Z^0_i\,1_{\{Y'_i = 0\}}.$$
Proposition 3.
1) The first moment of $\xi_t$ is
$$E(\xi_t) = (t+1)\,E(Z^0_1) + \bigl(E(Z^1_1) - E(Z^0_1)\bigr)\,E(X'_t), \qquad \text{where } X'_t = \sum_{i=0}^{t} Y'_i.$$
2) The Laplace transform of $\xi_t$ equals
$$E\bigl[e^{-\lambda\xi_t}\bigr] = \Bigl(E\bigl[e^{-\lambda Z^0_1}\bigr]\Bigr)^{t+1}\,\widetilde\Phi(z,t), \qquad \lambda > 0,$$
where
$$z := \frac{E\bigl[e^{-\lambda Z^1_1}\bigr]}{E\bigl[e^{-\lambda Z^0_1}\bigr]}, \qquad \widetilde\Phi(z,t) := E\bigl[z^{X'_t}\bigr].$$
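As a sanity check of part 1) of Proposition 3, the sketch below simulates the claims process with exponential claim sizes (the means mu0, mu1 and the chain parameters are arbitrary choices of mine, not from the slides) and compares the Monte Carlo mean of $\xi_t$ with the right-hand side of the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, t = 0.2, 0.5, 20        # illustrative values
mu0, mu1 = 1.0, 10.0                 # exponential claim means: normal / unusual claims

def one_path():
    """One realisation of (xi_t, X'_t) with Y'_0 = 0 (i.e. Y_0 = -1)."""
    yp, xp, xi = 0, 0, 0.0
    for i in range(t + 1):
        if i > 0:                    # Y'_0 is fixed, then Markov dynamics
            if yp == 0:
                yp = 1 if rng.random() < alpha else 0
            else:
                yp = 0 if rng.random() < beta else 1
        xp += yp
        xi += rng.exponential(mu1) if yp == 1 else rng.exponential(mu0)
    return xi, xp

samples = np.array([one_path() for _ in range(100_000)])
xi_mc, xp_mc = samples.mean(axis=0)
print("Monte Carlo   E(xi_t):", xi_mc)
print("Proposition 3 E(xi_t):", (t + 1) * mu0 + (mu1 - mu0) * xp_mc)
```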

  9. Remark
1) Reference: C. Tapiero and P.V., A claims persistence process and insurance, Insurance: Mathematics and Economics (2009).
2) Recall that $X_t = 2X'_t - (t+1)$. Then
$$E(X'_t) = \frac{1}{2}\bigl(E(X_t) + t + 1\bigr), \qquad E\bigl[z^{X'_t}\bigr] = z^{(t+1)/2}\,E\bigl[z^{X_t/2}\bigr].$$

  10. 3. From discrete to continuous time
S. Herrmann and P.V.: From persistent random walks to the telegraph noise. Accepted in Stochastics and Dynamics (2009).
3.1 Notations
a) Denote by $\alpha_0, \beta_0$ two real numbers with $0 \le \alpha_0 \le 1$, $0 \le \beta_0 \le 1$.
b) $\Delta x$ is a "small" parameter such that $\alpha := \alpha_0 + c_0\,\Delta x \in [0,1]$ and $\beta := \beta_0 + c_1\,\Delta x \in [0,1]$.
c) $(Y_t,\ t \in \mathbb N)$ is a Markov chain which takes its values in $\{-1,1\}$ with transition matrix
$$\pi_\Delta = \begin{pmatrix} 1-\alpha_0-c_0\Delta x & \alpha_0+c_0\Delta x \\ \beta_0+c_1\Delta x & 1-\beta_0-c_1\Delta x \end{pmatrix}.$$

  11. d) The renormalized random walk associated with $(Y_t)$ is defined as
$$Z^\Delta_s = \Delta x\, X_{s/\Delta t}, \qquad s \in \Delta t\,\mathbb N \quad (\Delta t > 0).$$
e) $(\widehat Z^\Delta_s,\ s \ge 0)$ is the continuous-time process obtained from $(Z^\Delta_s)$ by linear interpolation.
f) Set $\rho_0 = 1 - \alpha_0 - \beta_0$; $\rho_0$ takes into account the "distance" of the persistent random walk from the classical random walk.
Remark. Note that $\rho_0 = 1 \Leftrightarrow 1-\alpha_0-\beta_0 = 1 \Leftrightarrow \alpha_0+\beta_0 = 0 \Leftrightarrow \alpha_0 = \beta_0 = 0$.
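The rescaling of d) and e) can be written down directly; a minimal sketch (the function name `renormalized_walk` and the numerical values are mine):

```python
import numpy as np

def renormalized_walk(alpha0, beta0, c0, c1, dx, dt, horizon, rng=None):
    """Sample the rescaled walk Z^Delta on the grid dt*N up to `horizon`.

    alpha = alpha0 + c0*dx and beta = beta0 + c1*dx, as in the slides.
    Returns the grid times and the values Z^Delta_s = dx * X_{s/dt}.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha, beta = alpha0 + c0 * dx, beta0 + c1 * dx
    n = int(horizon / dt)
    y = -1
    ys = [y]
    for _ in range(n):
        if y == -1:
            y = 1 if rng.random() < alpha else -1
        else:
            y = -1 if rng.random() < beta else 1
        ys.append(y)
    times = dt * np.arange(n + 1)
    Z = dx * np.cumsum(ys)
    return times, Z

# The interpolated process \hat Z^Delta evaluated at arbitrary times:
times, Z = renormalized_walk(0.3, 0.5, 1.0, 2.0, dx=0.01, dt=0.0001, horizon=1.0)
Z_hat = lambda s: np.interp(s, times, Z)
```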

  12. 3.2 Convergence to the Brownian motion with drift, $\rho_0 \ne 1$
Theorem 4. We assume that $\alpha_0, \beta_0 > 0$ (so that $\rho_0 \ne 1$) and
$$r\,\Delta t = \Delta x^2 \qquad (r > 0).$$
Then the processes
$$\xi^\Delta_t = \widehat Z^\Delta_t + \frac{\sqrt r\,\eta_0}{(1-\rho_0)\sqrt{\Delta t}}\,t$$
converge in distribution, as $\Delta x \to 0$, to the process $(\xi^0_t,\ t \ge 0)$ with
$$\xi^0_t = r\left(\frac{c\,\eta_0}{(1-\rho_0)^2} - \frac{\bar c}{1-\rho_0}\right) t + \sqrt{\frac{r(1+\rho_0)}{1-\rho_0}\left(1 - \frac{\eta_0^2}{(1-\rho_0)^2}\right)}\; W_t,$$
where $(W_t,\ t \ge 0)$ stands for a standard Brownian motion and
$$\eta_0 = \beta_0 - \alpha_0, \qquad c = c_0 + c_1, \qquad \bar c = c_1 - c_0.$$
Remark. $\eta_0 = 0$ corresponds to the Kac random walk.
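A rough numerical illustration of Theorem 4 (a sketch only; the parameter values are arbitrary and, since $\Delta x$ is not very small, only approximate agreement should be expected): simulate $\xi^\Delta_1$ over many paths and compare its empirical mean and variance with the drift and variance of $\xi^0_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha0, beta0, c0, c1, r = 0.3, 0.5, 1.0, 2.0, 1.0
dx = 0.05
dt = dx ** 2 / r                         # r * dt = dx^2
alpha, beta = alpha0 + c0 * dx, beta0 + c1 * dx
rho0, eta0 = 1 - alpha0 - beta0, beta0 - alpha0
c, cbar = c0 + c1, c1 - c0
n = int(1.0 / dt)                        # look at time t = 1

def xi_at_1():
    y, x = -1, -1
    for _ in range(n):
        if y == -1:
            y = 1 if rng.random() < alpha else -1
        else:
            y = -1 if rng.random() < beta else 1
        x += y
    z = dx * x                                                    # Z^Delta_1
    return z + np.sqrt(r) * eta0 / ((1 - rho0) * np.sqrt(dt))     # xi^Delta_1

samples = np.array([xi_at_1() for _ in range(10_000)])
drift = r * (c * eta0 / (1 - rho0) ** 2 - cbar / (1 - rho0))
sigma2 = r * (1 + rho0) / (1 - rho0) * (1 - eta0 ** 2 / (1 - rho0) ** 2)
print("empirical mean", samples.mean(), "  theoretical drift", drift)
print("empirical var ", samples.var(),  "  theoretical var  ", sigma2)
```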

  13. Theorem 4 is a consequence of the central limit theorem:
$$Z^\Delta_1 = \frac{Y_1 + \cdots + Y_n}{\sqrt n} \qquad \Bigl(\Delta t = \tfrac1n,\ \Delta x = \tfrac1{\sqrt n}\Bigr)$$
$$= \frac{1}{\sqrt n}\Bigl[\bigl(Y_1 - E(Y_1)\bigr) + \cdots + \bigl(Y_n - E(Y_n)\bigr)\Bigr] + R_n, \qquad \text{with } R_n := \frac{E(Y_1)+\cdots+E(Y_n)}{\sqrt n}.$$
Since $\nu := \frac{\beta}{\alpha+\beta}\,\delta_{-1} + \frac{\alpha}{\alpha+\beta}\,\delta_1$ is the invariant probability measure associated with the Markov chain $(Y_n)$, we have ($\nu_0$ denoting $\nu$ for $\alpha = \alpha_0$, $\beta = \beta_0$):
$$\lim_{n\to\infty} E(Y_n) = \int x\,\nu_0(dx) = \frac{\alpha_0-\beta_0}{\alpha_0+\beta_0} = -\frac{\eta_0}{1-\rho_0}. \qquad \square$$
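The invariant measure $\nu$ and the limit of $E(Y_n)$ used in this argument can be verified directly (a small sketch with arbitrary parameters):

```python
import numpy as np

alpha, beta = 0.3, 0.5
P = np.array([[1 - alpha, alpha], [beta, 1 - beta]])   # states ordered (-1, +1)

nu = np.array([beta, alpha]) / (alpha + beta)          # candidate invariant measure
print("nu P == nu ?", np.allclose(nu @ P, nu))

# lim E(Y_n | Y_0 = -1) by propagating the law of Y_n
p = np.array([1.0, 0.0])
for _ in range(200):
    p = p @ P
print("E(Y_n) for large n:", p @ np.array([-1, 1]),
      "   (alpha-beta)/(alpha+beta):", (alpha - beta) / (alpha + beta))
```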

  14. 3.3 Convergence when $\rho_0 = 1$
In this case, the transition matrix of $(Y_t)$ equals
$$\pi_\Delta = \begin{pmatrix} 1-c_0\Delta x & c_0\Delta x \\ c_1\Delta x & 1-c_1\Delta x \end{pmatrix} \qquad (c_0, c_1 > 0).$$
Consider a sequence $(e_n,\ n \ge 1)$ of independent r.v.'s such that $(e_{2n},\ n \ge 1)$ (resp. $(e_{2n-1},\ n \ge 1)$) are i.i.d. with common exponential distribution of parameter $c_1$ (resp. $c_0$), i.e. $E[e_{2n}] = \frac1{c_1}$ (resp. $E[e_{2n-1}] = \frac1{c_0}$). Let
$$N^{c_0,c_1}_t = \sum_{k \ge 1} 1_{\{e_1 + \cdots + e_k \le t\}}, \qquad t \ge 0,$$
be the counting process associated with $(e_n;\ n \ge 1)$.
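A sketch of the counting process $N^{c_0,c_1}$ built from the alternating exponential times (helper names are mine):

```python
import numpy as np

def telegraph_clock(c0, c1, horizon, rng=None):
    """Jump times e_1 + ... + e_k (k >= 1) of the counting process N^{c0,c1}.

    Odd-indexed e's are Exp(c0), even-indexed ones are Exp(c1), as in the slides.
    """
    rng = np.random.default_rng() if rng is None else rng
    jump_times, s, k = [], 0.0, 1
    while True:
        rate = c0 if k % 2 == 1 else c1
        s += rng.exponential(1.0 / rate)    # numpy parametrises by the mean 1/rate
        if s > horizon:
            return np.array(jump_times)
        jump_times.append(s)
        k += 1

def N(t, jump_times):
    """N^{c0,c1}_t = number of jump times <= t."""
    return np.searchsorted(jump_times, t, side="right")
```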

  15. Theorem 5. We suppose $\alpha_0 = \beta_0 = 0$, $Y_0 = -1$ and $\Delta x = \Delta t$. Then the interpolated persistent random walk $(\widehat Z^\Delta_s,\ s \ge 0)$ converges in distribution, as $\Delta x \to 0$, to the process $\bigl(-Z^{c_0,c_1}_s\bigr)$, where
$$Z^{c_0,c_1}_s = \int_0^s (-1)^{N^{c_0,c_1}_u}\,du, \qquad s \ge 0.$$
In the case where $c_0 = c_1$, $\bigl(N^{c_0,c_1}_u\bigr)$ is the Poisson process with parameter $c_0$.
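The limit process of Theorem 5, the integrated telegraph noise, is easy to simulate exactly since the integrand is piecewise constant; a sketch (the function name and the values of $c_0, c_1$ are mine):

```python
import numpy as np

def integrated_telegraph(c0, c1, s, rng=None):
    """Z^{c0,c1}_s = int_0^s (-1)^{N^{c0,c1}_u} du, computed segment by segment."""
    rng = np.random.default_rng() if rng is None else rng
    value, t, sign, k = 0.0, 0.0, 1, 1      # N_0 = 0, so the integrand starts at +1
    while t < s:
        rate = c0 if k % 2 == 1 else c1
        hold = rng.exponential(1.0 / rate)  # holding time before the next jump
        value += sign * min(hold, s - t)
        t += hold
        sign, k = -sign, k + 1
    return value

# Theorem 5: -Z^{c0,c1}_s is the limit of the interpolated walk started at Y_0 = -1.
samples = np.array([-integrated_telegraph(1.0, 2.0, 1.0) for _ in range(5000)])
print("mean of -Z^{c0,c1}_1:", samples.mean())
```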

  16. Remarks
In the symmetric case, Theorem 5 is a stochastic version of analytical approaches developed by Kac (1974). See for instance the book by G. Weiss (1994).
The process $\bigl(Z^{c,c}_s\bigr)$ has already been introduced by D. Stroock (1982).
The convergence at the level of continuous processes allows one to obtain, for instance, the convergence in distribution of $\max_{0\le s\le 1} \widehat Z^\Delta_s$ to the r.v. $\max_{0\le s\le 1}\bigl(-Z^{c_0,c_1}_s\bigr)$, as $\Delta x \to 0$.

  17. Sketch of the proof of Theorem 5
We only consider $Y_0 = -1$. Let
$$T_1 = \inf\{n \ge 1;\ Y_n = 1\}.$$
Then $T_1 \sim \mathcal G(c_0\,\Delta x)$ (geometric distribution). Consequently, the first instant at which the rescaled walk moves upward is
$$T'_1 := \inf\{n\Delta t;\ Y_n = 1\} = T_1\,\Delta t.$$
It is easy to deduce the convergence in distribution of $T'_1$ to $e_1$, as $\Delta x = \Delta t \to 0$. Recall that $e_1$ is exponentially distributed with mean $1/c_0$.
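The key step, the convergence of $T'_1 = T_1\,\Delta t$ to an exponential time, can be illustrated numerically (a sketch with arbitrary $c_0$ and $\Delta x$):

```python
import numpy as np

rng = np.random.default_rng(2)
c0, dx = 2.0, 0.001                      # dt = dx in this regime
# T_1 is geometric with success probability c0*dx, so T'_1 = T_1 * dt = T_1 * dx
T1_prime = rng.geometric(c0 * dx, size=100_000) * dx
print("empirical mean of T'_1:", T1_prime.mean(), "   1/c0 =", 1 / c0)
print("empirical P(T'_1 > 1):", (T1_prime > 1).mean(), "   exp(-c0) =", np.exp(-c0))
```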
