# Efficient estimation for ergodic diffusions sampled at high frequency


## Efficient estimation for ergodic diffusions sampled at high frequency

Michael Sørensen
Department of Mathematical Sciences, University of Copenhagen, Denmark
http://www.math.ku.dk/~michael


## Discretely observed diffusion

$$dX_t = b(X_t; \theta)\,dt + \sigma(X_t; \theta)\,dW_t, \qquad \theta \in \Theta \subseteq \mathbb{R}^p$$

Data: $X_{t_1}, \dots, X_{t_n}$, with $t_1 < \cdots < t_n$.

Review papers:
- Helle Sørensen (2004), Int. Stat. Rev.
- Bibby, Jacobsen and Sørensen (2004)
- Sørensen (2007)
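As a concrete running example (a hypothetical choice, not taken from the talk): the Ornstein-Uhlenbeck process $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ has an exactly Gaussian transition law, so a discretely observed sample $X_{t_1}, \dots, X_{t_n}$ can be simulated without discretization error. A minimal sketch:

```python
import numpy as np

def simulate_ou(theta, sigma, delta, n, x0=0.0, seed=0):
    """Simulate X_{t_0},...,X_{t_n} at equidistant times t_i = i*delta for the
    OU process dX_t = -theta*X_t dt + sigma dW_t, using the exact transition
    X_{t+d} | X_t = x  ~  N(x*exp(-theta*d), sigma^2*(1-exp(-2*theta*d))/(2*theta))."""
    rng = np.random.default_rng(seed)
    mean_factor = np.exp(-theta * delta)
    cond_sd = np.sqrt(sigma**2 * (1.0 - np.exp(-2.0 * theta * delta)) / (2.0 * theta))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(1, n + 1):
        x[i] = mean_factor * x[i - 1] + cond_sd * rng.standard_normal()
    return x

path = simulate_ou(theta=1.0, sigma=0.5, delta=0.1, n=5000)
```

The stationary law of this OU process is $N(0, \sigma^2/(2\theta))$, which gives a quick sanity check on a long simulated path.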


## Likelihood inference

Model and data as above. Likelihood function:

$$L_n(\theta) = \prod_{i=1}^n p(\Delta_i, X_{t_{i-1}}, X_{t_i}; \theta),$$

where $t_0 = 0$ and $\Delta_i = t_i - t_{i-1}$. Here $y \mapsto p(\Delta, x, y; \theta)$ is the probability density function of the conditional distribution of $X_{t+\Delta}$ given that $X_t = x$.

Score function:

$$U_n(\theta) = \partial_\theta \log L_n(\theta) = \sum_{i=1}^n \partial_\theta \log p(\Delta_i, X_{t_{i-1}}, X_{t_i}; \theta).$$

Under weak regularity conditions, the score function is a $P_\theta$-martingale.
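For most diffusions $p(\Delta, x, y; \theta)$ has no closed form, which is what motivates the approximations below. For the hypothetical OU example $dX_t = -\theta X_t\,dt + \sigma\,dW_t$, however, the transition density is Gaussian and the log-likelihood is explicit. A sketch, conditioning on the first observation:

```python
import numpy as np

def ou_loglik(theta, sigma, delta, x):
    """log L_n(theta) = sum_i log p(Delta, X_{t_{i-1}}, X_{t_i}; theta) for the OU
    model, whose transition law is N(x*e^{-theta*delta}, sigma^2*(1-e^{-2*theta*delta})/(2*theta))."""
    m = x[:-1] * np.exp(-theta * delta)                    # conditional means
    v = sigma**2 * (1.0 - np.exp(-2.0 * theta * delta)) / (2.0 * theta)  # conditional variance
    r = x[1:] - m
    return -0.5 * np.sum(np.log(2.0 * np.pi * v) + r**2 / v)

def ou_score_fd(theta, sigma, delta, x, h=1e-6):
    """Central finite-difference approximation to the score U_n(theta)."""
    return (ou_loglik(theta + h, sigma, delta, x)
            - ou_loglik(theta - h, sigma, delta, x)) / (2.0 * h)

# Exactly simulated OU path (theta=1, sigma=0.5, delta=0.1) to evaluate on.
rng = np.random.default_rng(1)
e0 = np.exp(-0.1)
s0 = np.sqrt(0.25 * (1.0 - np.exp(-0.2)) / 2.0)
x = np.zeros(4001)
for i in range(1, 4001):
    x[i] = e0 * x[i - 1] + s0 * rng.standard_normal()
```

On such a path the log-likelihood peaks near the true $\theta$, and the finite-difference score changes sign there, consistent with $U_n$ being a (near-)unbiased estimating function.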


## Quadratic estimating functions

Approximate likelihood function:

$$L_n(\theta) \doteq M_n(\theta) = \prod_{i=1}^n q(\Delta_i, X_{t_{i-1}}, X_{t_i}; \theta),$$

$$p(\Delta, x, y; \theta) \doteq q(\Delta, x, y; \theta) = \frac{1}{\sqrt{2\pi\,\Phi(\Delta, x; \theta)}} \exp\!\left( -\frac{(y - F(\Delta, x; \theta))^2}{2\,\Phi(\Delta, x; \theta)} \right),$$

where $F(\Delta, x; \theta) = E_\theta(X_\Delta \mid X_0 = x)$ and $\Phi(\Delta, x; \theta) = \mathrm{Var}_\theta(X_\Delta \mid X_0 = x)$.

Approximate score function:

$$\partial_\theta \log M_n(\theta) = \sum_{i=1}^n \left\{ \frac{\partial_\theta F(\Delta_i, X_{t_{i-1}}; \theta)}{\Phi(\Delta_i, X_{t_{i-1}}; \theta)} \left[ X_{t_i} - F(\Delta_i, X_{t_{i-1}}; \theta) \right] + \frac{\partial_\theta \Phi(\Delta_i, X_{t_{i-1}}; \theta)}{2\,\Phi(\Delta_i, X_{t_{i-1}}; \theta)^2} \left[ \left( X_{t_i} - F(\Delta_i, X_{t_{i-1}}; \theta) \right)^2 - \Phi(\Delta_i, X_{t_{i-1}}; \theta) \right] \right\}$$
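A sketch of this quasi-score for the hypothetical OU example, where the conditional moments are explicit: $F(\Delta, x; \theta) = x e^{-\theta\Delta}$ and $\Phi(\Delta, x; \theta) = \sigma^2(1 - e^{-2\theta\Delta})/(2\theta)$. (For the OU process $q$ coincides with the true transition density, so this "approximation" happens to be exact there, which makes it a convenient test case.)

```python
import numpy as np

def ou_quasi_score(theta, sigma, delta, x):
    """Quadratic quasi-score d/dtheta log M_n(theta) for the OU example, using the
    closed forms F = x*e^{-theta*delta} and Phi = sigma^2*(1-e^{-2*theta*delta})/(2*theta)."""
    e = np.exp(-theta * delta)
    F = x[:-1] * e
    dF = -delta * x[:-1] * e                               # dF/dtheta
    Phi = sigma**2 * (1.0 - e**2) / (2.0 * theta)
    dPhi = sigma**2 * (2.0 * delta * theta * e**2 - (1.0 - e**2)) / (2.0 * theta**2)  # dPhi/dtheta
    r = x[1:] - F
    return np.sum(dF / Phi * r + dPhi / (2.0 * Phi**2) * (r**2 - Phi))

def solve_quasi_score(sigma, delta, x, lo=0.05, hi=10.0, tol=1e-10):
    """Bisection for the root of the quasi-score; assumes a sign change on [lo, hi]."""
    flo = ou_quasi_score(lo, sigma, delta, x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = ou_quasi_score(mid, sigma, delta, x)
        if flo * fmid > 0:
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Exactly simulated OU path (theta=1, sigma=0.5, delta=0.1).
rng = np.random.default_rng(2)
e0 = np.exp(-0.1)
s0 = np.sqrt(0.25 * (1.0 - np.exp(-0.2)) / 2.0)
x = np.zeros(4001)
for i in range(1, 4001):
    x[i] = e0 * x[i - 1] + s0 * rng.standard_normal()
theta_hat = solve_quasi_score(0.5, 0.1, x)
```

The root `theta_hat` of the quasi-score estimates the drift parameter; for this model it is also the exact maximum-likelihood estimator in $\theta$ with $\sigma$ held fixed.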


## Martingale estimating functions

$$G_n(\theta) = \sum_{i=1}^n g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta),$$

$$g(\Delta, y, x; \theta) = \sum_{j=1}^N a_j(x, \Delta; \theta) \left[ f_j(y; \theta) - \pi_\theta^\Delta f_j(x; \theta) \right],$$

where the weights $a_j$ are $p$-dimensional and the base functions $f_j$ are real-valued. Transition operator: $\pi_\theta^\Delta f(x; \theta) = E_\theta( f(X_\Delta; \theta) \mid X_0 = x )$.

$G_n(\theta)$ is a $P_\theta$-martingale:

$$E_\theta\!\left( a_j(X_{t_{i-1}}, \Delta_i; \theta) \left[ f_j(X_{t_i}; \theta) - \pi_\theta^{\Delta_i} f_j(X_{t_{i-1}}; \theta) \right] \,\middle|\, X_{t_1}, \dots, X_{t_{i-1}} \right) = 0.$$

$G_n$-estimator(s): $G_n(\hat{\theta}_n) = 0$. Bibby and Sørensen (1995, 1996).
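A minimal sketch of a $G_n$-estimator, again using the hypothetical OU example with $N = 1$, base function $f_1(y) = y$ and weight $a_1(x, \Delta; \theta) = x$, so that $g(\Delta, y, x; \theta) = x\,(y - x e^{-\theta\Delta})$. Here $G_n(\hat\theta_n) = 0$ even solves in closed form:

```python
import numpy as np

def ou_gn_estimator(delta, x):
    """Closed-form root of G_n(theta) = sum_i X_{t_{i-1}}*(X_{t_i} - X_{t_{i-1}}*e^{-theta*delta}):
    e^{-theta*delta} is estimated by sum(X_{t_i}*X_{t_{i-1}}) / sum(X_{t_{i-1}}^2).
    Valid when the ratio is positive, which happens with probability tending to one."""
    ratio = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return -np.log(ratio) / delta

# Demo on an exactly simulated OU path (theta=1, sigma=0.5, delta=0.1).
rng = np.random.default_rng(4)
e0 = np.exp(-0.1)
s0 = np.sqrt(0.25 * (1.0 - np.exp(-0.2)) / 2.0)
x = np.zeros(4001)
for i in range(1, 4001):
    x[i] = e0 * x[i - 1] + s0 * rng.standard_normal()
theta_hat = ou_gn_estimator(0.1, x)
```

This particular choice of $f_1$ and $a_1$ reduces the estimating equation to an autoregression of $X_{t_i}$ on $X_{t_{i-1}}$; other choices of base functions and weights trade simplicity against efficiency.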


## Martingale estimating functions

With $G_n(\theta)$ and $g$ as above:

- Easy asymptotics by martingale limit theory
- Simple expression for the Godambe-Heyde optimal estimating function
- Approximates the score function, which is a $P_\theta$-martingale
- A particular, and the most efficient, instance of GMM


## Asymptotics - ergodic diffusions

$$dX_t = b(X_t; \theta)\,dt + \sigma(X_t; \theta)\,dW_t, \qquad v(x; \theta) = \sigma^2(x; \theta)$$

Assume that

$$\int_\ell^{x^\#} s(x; \theta)\,dx = \int_{x^\#}^r s(x; \theta)\,dx = \infty \quad \text{and} \quad \int_\ell^r \tilde{\mu}_\theta(x)\,dx = A(\theta) < \infty,$$

where $x^\#$ is an arbitrary point in the state space $(\ell, r)$,

$$s(x; \theta) = \exp\!\left( -2 \int_{x^\#}^x \frac{b(y; \theta)}{v(y; \theta)}\,dy \right) \quad \text{and} \quad \tilde{\mu}_\theta(x) = \left[ s(x; \theta)\, v(x; \theta) \right]^{-1}.$$

Then $X$ is ergodic with invariant density $\mu_\theta(x) = \tilde{\mu}_\theta(x)/A(\theta)$. Write

$$Q_\theta^\Delta(x, y) = \mu_\theta(x)\, p(\Delta, x, y; \theta)$$

for the joint density of a pair of consecutive observations under stationarity.
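These conditions can be checked numerically. For the hypothetical OU example ($b(x; \theta) = -\theta x$, $v(x; \theta) = \sigma^2$, state space $(\ell, r) = (-\infty, \infty)$, $x^\# = 0$) one gets $s(x; \theta) = e^{\theta x^2/\sigma^2}$, whose integrals diverge at both ends, while $\tilde{\mu}_\theta(x) = e^{-\theta x^2/\sigma^2}/\sigma^2$ is integrable; the normalized $\mu_\theta$ is then the $N(0, \sigma^2/(2\theta))$ density. A sketch on a truncated grid:

```python
import numpy as np

theta, sigma = 1.0, 0.5
grid = np.linspace(-5.0, 5.0, 100001)                    # truncation; the tails are negligible here
dx = grid[1] - grid[0]

s = np.exp(theta * grid**2 / sigma**2)                   # scale density: blows up at both ends
mu_tilde = 1.0 / (s * sigma**2)                          # unnormalised invariant density 1/(s*v)

def trap(f):
    """Trapezoidal rule on the uniform grid (avoids the np.trapz/np.trapezoid naming split)."""
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * dx

A = trap(mu_tilde)                                       # A(theta) < infinity
mu = mu_tilde / A                                        # invariant density mu_theta
var_mu = trap(grid**2 * mu)                              # should equal sigma^2/(2*theta) = 0.125
```

Analytically $A(\theta) = \sqrt{\pi/\theta}/\sigma$ for this model, and the numerically integrated variance of $\mu_\theta$ matches $\sigma^2/(2\theta)$, confirming the ergodicity conditions for this example.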


## Asymptotics - low frequency

Assume equidistant sampling times, $t_i = i\Delta$, the identifiability condition that

$$Q_{\theta_0}^\Delta\!\left( g(\Delta, \theta) \right) = 0 \quad \text{if and only if} \quad \theta = \theta_0,$$

and weak regularity conditions. Then a consistent estimator $\hat{\theta}_n$ that solves the estimating equation $G_n(\theta) = 0$ exists and is unique in any compact subset of $\Theta$ containing $\theta_0$, with a probability that goes to one as $n \to \infty$. Moreover,

$$\sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{\;\mathcal{D}\;} N\!\left( 0,\; S_{\theta_0}^{-1} V_{\theta_0} \left( S_{\theta_0}^T \right)^{-1} \right) \quad \text{under } P_{\theta_0},$$

where

$$S_\theta = Q_{\theta_0}^\Delta\!\left( \partial_{\theta_j} g_i(\Delta; \theta) \right) \quad \text{and} \quad V_\theta = Q_{\theta_0}^\Delta\!\left( g(\Delta, \theta)\, g(\Delta, \theta)^T \right).$$


## GMM estimator

$\tilde{\theta}_n$ is found by minimizing

$$K_n(\theta) = \left( \frac{1}{n} \sum_{i=1}^n g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta) \right)^{\!T} W_n^{-1} \left( \frac{1}{n} \sum_{i=1}^n g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta) \right),$$

where

$$W_n = \frac{1}{n} \sum_{i=1}^n g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \bar{\theta}_n)\, g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \bar{\theta}_n)^T$$

and $\bar{\theta}_n$ is a consistent (pilot) estimator. The minimizer solves the first-order condition (up to a factor of 2)

$$\partial_\theta K_n(\theta) = \left( \frac{1}{n} \sum_{i=1}^n \partial_\theta g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta)^T \right) W_n^{-1} \left( \frac{1}{n} \sum_{i=1}^n g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta) \right) = 0.$$
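A sketch of $K_n$ for the hypothetical OU example with $\sigma$ known and two moment functions, $g_1 = x\,(y - F(\Delta, x; \theta))$ and $g_2 = (y - F(\Delta, x; \theta))^2 - \Phi(\Delta, x; \theta)$: an over-identified case (two moments, one parameter) where the weight matrix $W_n$ matters. The pilot value $\bar\theta_n = 1.2$ below is fixed purely for illustration; in practice it comes from a preliminary consistent estimator.

```python
import numpy as np

def ou_moments(theta, sigma, delta, x):
    """n x 2 matrix with rows g(Delta, X_{t_i}, X_{t_{i-1}}; theta) for the OU example:
    g1 = x*(y - F), g2 = (y - F)^2 - Phi (two moments, one parameter: over-identified)."""
    e = np.exp(-theta * delta)
    F = x[:-1] * e
    Phi = sigma**2 * (1.0 - e**2) / (2.0 * theta)
    r = x[1:] - F
    return np.column_stack([x[:-1] * r, r**2 - Phi])

def K_n(theta, sigma, delta, x, theta_bar):
    """GMM criterion; W_n is built from the moments at the pilot value theta_bar."""
    g_pilot = ou_moments(theta_bar, sigma, delta, x)
    W = g_pilot.T @ g_pilot / g_pilot.shape[0]
    gbar = ou_moments(theta, sigma, delta, x).mean(axis=0)
    return float(gbar @ np.linalg.solve(W, gbar))

# Exactly simulated OU path (theta=1, sigma=0.5, delta=0.1), then a grid search.
rng = np.random.default_rng(3)
e0 = np.exp(-0.1)
s0 = np.sqrt(0.25 * (1.0 - np.exp(-0.2)) / 2.0)
x = np.zeros(4001)
for i in range(1, 4001):
    x[i] = e0 * x[i - 1] + s0 * rng.standard_normal()

thetas = np.linspace(0.2, 3.0, 141)
vals = np.array([K_n(t, 0.5, 0.1, x, theta_bar=1.2) for t in thetas])
theta_gmm = thetas[np.argmin(vals)]
```

The criterion is a nonnegative quadratic form in the averaged moments, and its minimizer lands near the true drift parameter on a well-behaved path.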


In matrix form, $g(\Delta, y, x; \theta) = A_\theta(x)\left( f_\theta(y) - \pi_\theta^\Delta f_\theta(x) \right)$, where $A_\theta$ collects the weights $a_j$ and $f_\theta = (f_1, \dots, f_N)^T$. Since

$$\frac{1}{n} \sum_{i=1}^n \partial_{\theta^T} g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta) \to S_\theta \qquad \text{and} \qquad W_n \to V_\theta,$$

the first-order condition behaves asymptotically like a martingale estimating function:

$$\partial_\theta K_n(\theta) \sim \frac{1}{n} \sum_{i=1}^n S_\theta^T V_\theta^{-1}\, g(\Delta_i, X_{t_i}, X_{t_{i-1}}; \theta) = \frac{1}{n} \sum_{i=1}^n S_\theta^T V_\theta^{-1} A_\theta(X_{t_{i-1}}) \left( f_\theta(X_{t_i}) - \pi_\theta^\Delta f_\theta(X_{t_{i-1}}) \right).$$