

  1. LAN property for diffusion processes with jumps with discrete observations
Arturo Kohatsu-Higa (Ritsumeikan University, Japan), joint work with Tran Ngoc Khue (University of Paris 13) and Eulalia Nualart (Universitat Pompeu Fabra, Barcelona)
Tokyo Workshop, Tokyo, 3 September 2013

  2. Outline
1. Introduction to the LAMN and LAN property
2. LAN property for a linear model with jumps
3. LAN property for a diffusion process with jumps

  3. Parametric statistical model
Consider a parametric statistical model (𝒳^n, B(𝒳^n), {P^n_θ, θ ∈ Θ}):
- a probability space (Ω, F, P),
- a parameter space Θ: a closed rectangle of R^k, for some integer k ≥ 1,
- a random vector X^n = (X_1, X_2, ..., X_n): Ω × Θ → 𝒳^n ⊂ R^n, (ω, θ) ↦ X^n(ω, θ),
- B(𝒳^n): the Borel σ-algebra of observable events,
- P^n_θ: the probability measure on (𝒳^n, B(𝒳^n)) induced by X^n under θ.
Suppose that X^n has a density p_n(x; θ), x ∈ R^n, for all θ ∈ Θ.
An estimator is a map T: 𝒳^n → Θ, x ↦ T(x).

  4. Motivation: LAMN and LAN property
We are interested in parametric estimation based on continuous-time and discrete-time observations of diffusion processes with jumps.
Our objective: settle the question of asymptotic efficiency of the estimators. This problem is closely linked to the LAMN and LAN properties.
Efficiency of an unbiased estimator: its variance achieves the Cramér-Rao lower bound in the Cramér-Rao inequality.

  5. Cramér-Rao's inequality via Malliavin calculus
Consider a random vector X^(n) = (X_1, ..., X_n).
Theorem (Corcuera and Kohatsu-Higa '11). Suppose that X_i ∈ D^{1,2} and that there exists a stochastic process u ∈ Dom(δ) such that
∫_0^T D_t X_i u(t) dt = ∂_θ X_i for all i = 1, ..., n.   (1)
Let T be an unbiased estimator of θ. Under regularity hypotheses on the parametric statistical model,
Var_θ(T(X^(n))) ≥ 1 / Var_θ(E_θ[δ(u) | X^(n)]).   (2)
Furthermore, if X^(n) admits a density p_n(x; θ), then
E_θ(δ(u) | X^(n) = x) = ∂_θ log p_n(x; θ).   (3)
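To make (1)-(3) concrete, here is a minimal Monte Carlo sketch (an addition for illustration, not from the slides) of the simplest case: a single observation X = θ + B_T. Taking u(t) = 1/T gives ∫_0^T D_t X u(t) dt = 1 = ∂_θ X and δ(u) = B_T/T = (X − θ)/T, which is the score ∂_θ log p(x; θ) of the N(θ, T) density; the bound (2) then reads Var_θ(T(X)) ≥ T and is attained by T(X) = X.

    # Minimal Monte Carlo sketch (illustration only, not from the slides).
    # Single observation X = theta + B_T: with u(t) = 1/T, delta(u) = B_T / T = (X - theta) / T
    # is the score of the N(theta, T) density, and the bound (2) Var(T(X)) >= 1/Var(delta(u)) = T
    # is attained by the unbiased estimator T(X) = X.
    import numpy as np

    rng = np.random.default_rng(0)
    theta, T, reps = 0.7, 4.0, 200_000

    B_T = rng.normal(0.0, np.sqrt(T), size=reps)   # B_T ~ N(0, T)
    X = theta + B_T
    score = B_T / T                                # delta(u) = (X - theta) / T

    print(score.var(), 1.0 / T)                    # Var(delta(u)) = Fisher information = 1/T
    print(X.var(), T)                              # Var(X) = T attains the Cramer-Rao bound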

  6. Same formula explained in another form
An IBP formula (in ∞ dimensions): for any f ∈ C_b^∞ and any G ∈ L^1(Ω) there exists an L^1(Ω) random variable H(F, G) such that
E[f′(F) G] = E[f(F) H(F, G)].
Such formulas have been obtained for various (∞-dimensional) stochastic differential equations (with jumps, or even with correlation structure in the driving noise).
Then, if T: C[0, 1] → R is an unbiased estimator of a parameter µ,
1 = ∂_µ E[T(X)] = E[⟨T′(X), ∂_µ X⟩] = E[T(X) H(X, ∂_µ X)] ≤ √( Var(T(X)) Var(H(X, ∂_µ X)) ),
and the Cramér-Rao bound follows. Interestingly, controlling, estimating and approximating H(X, ∂_µ X) is well studied in this area; as explained before, it is closely related to the logarithmic derivative of the density.
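As a toy illustration of the duality E[f′(F) G] = E[f(F) H(F, G)] (an addition, not from the slides): in the one-dimensional Gaussian case F = B_1 with G = 1 the weight is simply H(F, G) = F (Stein's identity), which a short Monte Carlo run confirms.

    # Monte Carlo check (illustration only) of E[f'(F) G] = E[f(F) H(F, G)]
    # in the simplest Gaussian case F = B_1, G = 1, where H(F, G) = F.
    import numpy as np

    rng = np.random.default_rng(1)
    F = rng.normal(0.0, 1.0, size=500_000)

    f = np.tanh                                   # a smooth bounded test function
    f_prime = lambda x: 1.0 - np.tanh(x) ** 2

    print(f_prime(F).mean(), (f(F) * F).mean())   # both sides agree up to Monte Carlo error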

  7. Definition: LAMN and LAN property
Let Θ be a closed rectangle of R^k, for some integer k ≥ 1.
Definition.
1. The sequence (𝒳^n, B(𝒳^n), {P^n_θ, θ ∈ Θ}) has the local asymptotic mixed normality (LAMN) property at θ if there exist positive definite k × k matrices ϕ_n(θ) with ϕ_n(θ) → 0 as n → ∞ and a k × k symmetric positive definite random matrix Γ(θ) such that, for any u ∈ R^k, as n → ∞,
log (dP^n_{θ + ϕ_n(θ)u} / dP^n_θ)(X^n) → u^T Γ(θ)^{1/2} N(0, I_k) − (1/2) u^T Γ(θ) u  in law under P_θ,   (4)
where N(0, I_k) is a centered R^k-valued Gaussian variable, independent of Γ(θ), and Γ(θ) is the asymptotic Fisher information matrix.
2. When Γ(θ) is deterministic, we have the local asymptotic normality (LAN) property at θ.
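For orientation, a textbook example of (4) with deterministic Γ (an addition, not from the slides): for n i.i.d. N(θ, 1) observations, ϕ_n(θ) = 1/√n and Γ(θ) = 1, and the log-likelihood ratio equals u·(1/√n)Σ_i(X_i − θ) − u²/2 exactly, i.e. u N(0, 1) − u²/2 in law under P_θ.

    # Numerical sketch (illustration only): LAN for the i.i.d. N(theta, 1) location model,
    # where phi_n(theta) = 1/sqrt(n), Gamma(theta) = 1 and the expansion (4) is exact.
    import numpy as np

    rng = np.random.default_rng(2)
    theta, u, n, reps = 0.5, 1.3, 200, 20_000

    X = rng.normal(theta, 1.0, size=(reps, n))     # samples under P_theta
    loglik = lambda x, th: -0.5 * np.sum((x - th) ** 2, axis=1)
    Z = loglik(X, theta + u / np.sqrt(n)) - loglik(X, theta)   # log-likelihood ratio

    # The limit law u*N(0,1) - u^2/2 has mean -u^2/2 and variance u^2.
    print(Z.mean(), -u ** 2 / 2)
    print(Z.var(), u ** 2)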

  8. Consequences of the LAMN property
The problem reduces to studying the convergence in law under P_θ of the log-likelihood ratio
log (dP^n_{θ + ϕ_n(θ)u} / dP^n_θ)(X^n) = log ( p_n(X^n; θ + ϕ_n(θ)u) / p_n(X^n; θ) ).
Conditional convolution theorem: suppose that the LAMN property holds at θ. If (θ̃_n)_{n≥1} is a regular sequence of estimators of θ, i.e. for all u ∈ R^k,
ϕ_n(θ)^{−1} (θ̃_n − (θ + ϕ_n(θ)u)) → V(θ)  in law under P_{θ + ϕ_n(θ)u} as n → ∞,
for some R^k-valued random variable V(θ), then L(V(θ) | Γ(θ)) = N(0, Γ(θ)^{−1}) ⋆ G_{Γ(θ)}.
Therefore, (θ̃_n)_{n≥1} is called asymptotically efficient if, as n → ∞,
ϕ_n(θ)^{−1} (θ̃_n − θ) → Γ(θ)^{−1/2} N(0, I_k)  in law under P_θ.   (5)
The minimax theorem implies that Γ(θ)^{−1} gives the lower bound for the asymptotic variance of estimators.
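Continuing the same toy model (an addition, not from the slides): in the i.i.d. N(θ, 1) case the sample mean is asymptotically efficient in the sense of (5), since √n(θ̃_n − θ) is exactly N(0, 1) = Γ(θ)^{−1/2} N(0, I_1).

    # Numerical sketch (illustration only) of the efficiency criterion (5) in the
    # i.i.d. N(theta, 1) model: sqrt(n) * (sample mean - theta) ~ N(0, 1).
    import numpy as np

    rng = np.random.default_rng(3)
    theta, n, reps = 0.5, 400, 50_000

    X = rng.normal(theta, 1.0, size=(reps, n))
    err = np.sqrt(n) * (X.mean(axis=1) - theta)
    print(err.mean(), err.var())                   # approximately 0 and 1, matching Gamma(theta) = 1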

  9. Several references
1. Gobet '01 derives the LAMN property in the non-ergodic case:
X^θ_t = X_0 + ∫_0^t b(θ, s, X^θ_s) ds + ∫_0^t σ(θ, s, X^θ_s) dB_s,  t ∈ [0, 1].   (6)
2. Gobet '02 shows the LAN property in the ergodic case:
X^{α,β}_t = X_0 + ∫_0^t b(α, X^{α,β}_s) ds + ∫_0^t σ(β, X^{α,β}_s) dB_s,  t ≥ 0.   (7)
3. Delattre et al. '11 have established the LAMN property for
X^λ_t = X_0 + ∫_0^t b(s, X^λ_s) ds + ∫_0^t a(s, X^λ_s) dB_s + Σ_{k: T_k ≤ t} c(X^λ_{T_k−}, λ),  t ∈ [0, 1],   (8)
where the jump times T_1, T_2, ..., T_K are given.
For the proof of these results, they use tools of Malliavin calculus and upper and lower Gaussian-type estimates of the transition densities of the diffusion processes. In the jump case, these estimates are not satisfied!

  10. The density estimates
One has the tendency to believe that "good" upper and lower estimates of the density should essentially solve the problem. We started computing some of these estimates and they do not work.
In fact, even in simpler one-dimensional situations (Gaussian-type jumps), the upper density estimate is of the type
(C/√t) exp( −c |y − x| |ln(|y − x| / t)| ).
The lower density estimates, for x ≠ y, are of the type
C e^{−λ t} exp( −c |y − x| |ln(|y − x| / t)| ),
with a different estimate on the diagonal.
The last result shows that, in general, the estimate for large/small |y − x| should be different. This is a first negative result!

  11. A linear model with jumps
X_t = x + θ t + B_t + N_t − λ t.   (9)
B = (B_t, t ≥ 0) is a standard Brownian motion, N = (N_t, t ≥ 0) is a Poisson process with intensity λ > 0.
(θ, λ) ∈ Θ × Λ ⊂ R × R*_+ are unknown parameters to be estimated.
High-frequency observations X^n = (X_{t_0}, X_{t_1}, ..., X_{t_n}), where t_k = kΔ_n:
- number of observations n → ∞,
- distance between observations Δ_n → 0,
- horizon nΔ_n → ∞.
X^n admits a density p_n(x; (θ, λ)).
p^{(θ,λ)}(t, ·, ·): transition density of X_t conditionally on X_0 under (θ, λ).
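The following sketch (an addition, not from the slides; names are illustrative) simulates the model (9) on the grid t_k = kΔ_n and evaluates the transition density p^{(θ,λ)}(t, x, y) explicitly as a Poisson mixture of Gaussians, since the jumps of N have size one.

    # Simulation of (9) and its exact transition density (illustration only).
    # Conditionally on k unit jumps in [0, t], X_t - X_0 ~ N(theta*t - lambda*t + k, t).
    import numpy as np
    from math import factorial

    def simulate_path(x0, theta, lam, n, delta, rng):
        """Observations (X_{t_0}, ..., X_{t_n}) of (9) on the grid t_k = k*delta."""
        dB = rng.normal(0.0, np.sqrt(delta), size=n)       # Brownian increments
        dN = rng.poisson(lam * delta, size=n)               # Poisson increments
        incr = theta * delta + dB + dN - lam * delta        # compensated jump part
        return x0 + np.concatenate(([0.0], np.cumsum(incr)))

    def transition_density(t, x, y, theta, lam, kmax=60):
        """p^{(theta,lambda)}(t, x, y) as a Poisson mixture of Gaussian densities."""
        k = np.arange(kmax)
        w = np.exp(-lam * t) * (lam * t) ** k / np.array([factorial(int(i)) for i in k])
        g = np.exp(-(y - x - theta * t + lam * t - k) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
        return float(np.sum(w * g))

    rng = np.random.default_rng(4)
    X = simulate_path(x0=0.0, theta=1.0, lam=2.0, n=10_000, delta=0.01, rng=rng)
    print(X[-1], transition_density(0.01, X[0], X[1], theta=1.0, lam=2.0))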

  12. LAN property
Theorem. For all (θ, λ) ∈ Θ × Λ and (u, v) ∈ R^2, as n → ∞,
log ( p_n(X^n; (θ + u/√(nΔ_n), λ + v/√(nΔ_n))) / p_n(X^n; (θ, λ)) )
→ (u, v) N(0, Γ(θ, λ)) − (1/2) (u, v) Γ(θ, λ) (u, v)^T  in law under P_(θ,λ),
where N(0, Γ(θ, λ)) is a centered R^2-valued Gaussian variable with covariance matrix
Γ(θ, λ) = [[1, −1], [−1, 1 + 1/λ]].
For simplicity, let (θ_0(n), λ_0(n)) := (θ + u/√(nΔ_n), λ + v/√(nΔ_n)).
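For concreteness (an addition, not from the slides), Γ(θ, λ) is positive definite for every λ > 0 (its determinant is 1/λ), and its inverse gives the lower bound for the asymptotic covariance of √(nΔ_n)-normalized estimators:

    # The asymptotic Fisher information of the theorem and its inverse (illustration only).
    import numpy as np

    def fisher_info(lam):
        """Gamma(theta, lambda) for the linear model with jumps (independent of theta)."""
        return np.array([[1.0, -1.0],
                         [-1.0, 1.0 + 1.0 / lam]])

    Gamma = fisher_info(lam=2.0)
    print(np.linalg.eigvalsh(Gamma))   # both eigenvalues positive: Gamma is positive definite
    print(np.linalg.inv(Gamma))        # lower bound for the asymptotic covariance of estimators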

  13. Sketch of the proof
Step 1. By the Markov property and a mean-value (interpolation) argument in the parameters, first in θ with λ fixed and then in λ with θ = θ_0(n),
log ( p_n(X^n; (θ_0(n), λ_0(n))) / p_n(X^n; (θ, λ)) )
= (u/√(nΔ_n)) Σ_{k=0}^{n−1} ∫_0^1 (∂_θ p^{(θ_0(n,ℓ), λ)} / p^{(θ_0(n,ℓ), λ)}) (Δ_n, X_{t_k}, X_{t_{k+1}}) dℓ
+ (v/√(nΔ_n)) Σ_{k=0}^{n−1} ∫_0^1 (∂_λ p^{(θ_0(n), λ_0(n,ℓ))} / p^{(θ_0(n), λ_0(n,ℓ))}) (Δ_n, X_{t_k}, X_{t_{k+1}}) dℓ,
where θ_0(n, ℓ) := θ + ℓu/√(nΔ_n) and λ_0(n, ℓ) := λ + ℓv/√(nΔ_n).
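The Step 1 identity can be checked numerically for a single transition (an addition, not from the slides): below, p is the Poisson-mixture transition density of model (9), the ∂_θ and ∂_λ derivatives are approximated by finite differences, and the dℓ integrals are computed by quadrature.

    # Numerical check (illustration only) of the Step 1 decomposition for n = 1.
    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    def p(t, x, y, theta, lam, kmax=60):
        """Transition density of model (9) as a Poisson mixture of Gaussians."""
        k = np.arange(kmax)
        w = np.exp(-lam * t) * (lam * t) ** k / np.array([factorial(int(i)) for i in k])
        g = np.exp(-(y - x - theta * t + lam * t - k) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
        return float(np.sum(w * g))

    def dlog_dtheta(t, x, y, theta, lam, h=1e-6):
        return (np.log(p(t, x, y, theta + h, lam)) - np.log(p(t, x, y, theta - h, lam))) / (2 * h)

    def dlog_dlam(t, x, y, theta, lam, h=1e-6):
        return (np.log(p(t, x, y, theta, lam + h)) - np.log(p(t, x, y, theta, lam - h))) / (2 * h)

    t, x, y = 0.1, 0.0, 0.3
    theta, lam, u, v, rate = 1.0, 2.0, 0.5, 0.7, 5.0        # rate plays the role of sqrt(n*Delta_n)
    theta0, lam0 = theta + u / rate, lam + v / rate

    lhs = np.log(p(t, x, y, theta0, lam0)) - np.log(p(t, x, y, theta, lam))
    rhs = (u / rate) * quad(lambda l: dlog_dtheta(t, x, y, theta + l * u / rate, lam), 0, 1)[0] \
        + (v / rate) * quad(lambda l: dlog_dlam(t, x, y, theta0, lam + l * v / rate), 0, 1)[0]
    print(lhs, rhs)                                          # agree up to discretization error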
