

  1. On a Resampling Scheme for Empirical Copula. Hideatsu Tsukahara (tsukahar@seijo.ac.jp), Dept. of Economics, Seijo University. September 4, 2013. Asymptotic Statistics and Related Topics: Theories and Methodologies.

  2. Contents
     1. Introduction to Copula Models
     2. Empirical Copula
     3. Bootstrap Approximations for Empirical Copula
     4. A New Scheme by Prof. Sibuya

  3. 1. Introduction to Copula Models

     Copula: a df $C$ on $[0,1]^d$ with uniform marginals.

     Sklar's Theorem. For any $d$-dimensional df $F$ with 1-dimensional marginals $F_1, \dots, F_d$, there exists a copula $C$ such that
     $$F(x_1, \dots, x_d) = C\bigl(F_1(x_1), \dots, F_d(x_d)\bigr).$$
     $C$ is called a copula associated with $F$. For continuous $F$, $C$ is unique and is given by
     $$C(u_1, \dots, u_d) = F\bigl(F_1^{-1}(u_1), \dots, F_d^{-1}(u_d)\bigr).$$
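As a minimal numerical sketch of Sklar's theorem (the marginals and all names here are illustrative, not from the slides): for a df with independent Exp(1) and Exp(2) marginals, the copula extracted via $C(u, v) = F(F_1^{-1}(u), F_2^{-1}(v))$ is the product copula $uv$.

```python
import math

# Sketch: extract the copula of a df with independent Exp(1) and Exp(2)
# marginals via C(u, v) = F(F1^{-1}(u), F2^{-1}(v)); independence gives uv.
def F1(x): return 1 - math.exp(-x)
def F2(y): return 1 - math.exp(-2 * y)
def F(x, y): return F1(x) * F2(y)          # independence: F = F1 * F2

def F1_inv(u): return -math.log(1 - u)
def F2_inv(u): return -math.log(1 - u) / 2

def C(u, v):
    return F(F1_inv(u), F2_inv(v))

print(abs(C(0.3, 0.7) - 0.3 * 0.7) < 1e-12)  # True
```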

  4. Examples of Bivariate Copulas

     1. Clayton family:
     $$C_\theta(u, v) = \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta}, \quad \theta > 0$$
     2. Gumbel–Hougaard family:
     $$C_\theta(u, v) = \exp\Bigl(-\bigl[(-\log u)^\theta + (-\log v)^\theta\bigr]^{1/\theta}\Bigr), \quad \theta \ge 1$$
     3. Frank family:
     $$C_\theta(u, v) = \frac{1}{\theta} \log\left(1 + \frac{(e^{\theta u} - 1)(e^{\theta v} - 1)}{e^{\theta} - 1}\right), \quad \theta \in \mathbb{R} \setminus \{0\}$$

  5. 4. Plackett family:
     $$C_\theta(u, v) = \frac{1 + (\theta - 1)(u + v) - \sqrt{\{1 + (\theta - 1)(u + v)\}^2 - 4uv\theta(\theta - 1)}}{2(\theta - 1)}, \quad \theta > 0,\ \theta \ne 1,$$
     and $C_1(u, v) = uv$ for $\theta = 1$.

     5. Gaussian family:
     $$C_\theta(u, v) = \Phi_\theta\bigl(\Phi^{-1}(u), \Phi^{-1}(v)\bigr), \quad -1 \le \theta \le 1,$$
     where $\Phi_\theta$ is the df of $N\!\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}1 & \theta\\ \theta & 1\end{pmatrix}\right)$ and $\Phi$ is the $N(0, 1)$ df.
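The closed-form families above can be coded directly. A quick sanity check (a sketch; the parameter values are illustrative) is the uniform-marginals property $C(u, 1) = u$ that every copula must satisfy:

```python
import math

# The parametric families above, coded directly for d = 2.
def clayton(u, v, theta):
    return (u**-theta + v**-theta - 1) ** (-1 / theta)

def gumbel(u, v, theta):
    s = (-math.log(u))**theta + (-math.log(v))**theta
    return math.exp(-s ** (1 / theta))

def frank(u, v, theta):
    num = (math.exp(theta * u) - 1) * (math.exp(theta * v) - 1)
    return math.log(1 + num / (math.exp(theta) - 1)) / theta

def plackett(u, v, theta):
    if theta == 1:
        return u * v
    a = 1 + (theta - 1) * (u + v)
    return (a - math.sqrt(a * a - 4 * u * v * theta * (theta - 1))) / (2 * (theta - 1))

# Sanity check: every copula has uniform marginals, i.e. C(u, 1) = u.
for C in [lambda u, v: clayton(u, v, 2.0),
          lambda u, v: gumbel(u, v, 1.5),
          lambda u, v: frank(u, v, 3.0),
          lambda u, v: plackett(u, v, 4.0)]:
    assert abs(C(0.4, 1.0) - 0.4) < 1e-9
print("all marginal checks passed")
```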

  6. Advantages of Copula Modeling
     • Better understanding of (scale-free) dependence
     • Separate modeling of marginals and dependence structure in non-Gaussian multivariate distributions
     • Easy simulation of multivariate random samples

     Books on copulas
     • R. B. Nelsen, An Introduction to Copulas, 2nd ed., Springer, 2006.
     • H. Joe, Multivariate Models and Dependence Concepts, Chapman & Hall, 1997.
     • D. Drouet Mari and S. Kotz, Correlation and Dependence, Imperial College Press, 2001.

  7. Semiparametric Estimation Problem

     $X_k = (X_{k1}, \dots, X_{kd})$, $k = 1, \dots, n$, iid with continuous df $F = C_\theta(F_1, \dots, F_d)$
     • $\{C_\theta\}_{\theta \in \Theta \subset \mathbb{R}^m}$: given parametric family of copulas
     • Marginals $F_1, \dots, F_d$: unknown (nonparametric part)

     Semiparametric estimators of $\theta$ have asymptotic variances which depend on the unknown $C_{\theta_0}$.

  8. Goodness-of-fit Tests

     $X_k = (X_{k1}, \dots, X_{kd})$, $k = 1, \dots, n$, iid with continuous df $F = C(F_1, \dots, F_d)$. For a given $C_0$, test $H_0\colon C = C_0$ vs. $H_1\colon C \ne C_0$. One can utilize
     • Cramér–von Mises distance: $\rho_{\mathrm{CvM}}(C, D) = \int_{[0,1]^d} [C(u) - D(u)]^2 \, du$
     • Kolmogorov–Smirnov distance: $\rho_{\mathrm{KS}}(C, D) = \sup_{u \in [0,1]^d} |C(u) - D(u)|$
     to devise test statistics.
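Both distances are easy to approximate on a grid. As a sketch (the two copulas compared here, independence $uv$ and comonotone $\min(u,v)$, are illustrative choices of mine):

```python
# Grid approximations of the CvM and KS distances between the independence
# copula uv and the comonotone copula min(u, v).
def C(u, v): return u * v
def D(u, v): return min(u, v)

m = 200
grid = [(i + 0.5) / m for i in range(m)]   # midpoint rule on [0, 1]^2
rho_cvm = sum((C(u, v) - D(u, v)) ** 2 for u in grid for v in grid) / m**2
rho_ks = max(abs(C(u, v) - D(u, v)) for u in grid for v in grid)

# The sup is attained near u = v = 1/2, where min(u, v) - uv = 1/4.
print(round(rho_ks, 2))  # 0.25
```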

  9. 2. Empirical Copula

     $X_k = (X_{k1}, \dots, X_{kd})$, $k = 1, \dots, n$, iid with continuous df $F = C(F_1, \dots, F_d)$. Recall $C(u_1, \dots, u_d) = F\bigl(F_1^{-1}(u_1), \dots, F_d^{-1}(u_d)\bigr)$.

     Definition.
     $$C_n(u) := F_n\bigl(F_{n1}^{-1}(u_1), \dots, F_{nd}^{-1}(u_d)\bigr),$$
     where
     $$F_n(x) := \frac{1}{n} \sum_{k=1}^n 1\{X_{k1} \le x_1, \dots, X_{kd} \le x_d\}, \qquad F_{ni}(x_i) := \frac{1}{n} \sum_{k=1}^n 1\{X_{ki} \le x_i\}.$$
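A direct transcription of the definition for $d = 2$ (a sketch on simulated positively dependent data; the data-generating choices and names are mine):

```python
import math, random

# Empirical copula C_n(u) = F_n(F_{n1}^{-1}(u_1), F_{n2}^{-1}(u_2)) for d = 2,
# computed by brute force from simulated positively dependent data.
random.seed(0)
n = 500
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + 0.5 * random.gauss(0, 1) for x in xs]

def quantile(sample, u):
    # left-continuous empirical quantile F_{ni}^{-1}(u) = inf{x : F_{ni}(x) >= u}
    s = sorted(sample)
    return s[max(0, math.ceil(n * u) - 1)]

def C_n(u1, u2):
    q1, q2 = quantile(xs, u1), quantile(ys, u2)
    return sum(1 for x, y in zip(xs, ys) if x <= q1 and y <= q2) / n

# Positive dependence lifts C_n(u, u) above the independence value u^2.
print(C_n(0.5, 0.5) > 0.25)  # True
```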

  10. $\mathcal{L}(C_n)$ is the same for all $F$ whose copula is $C$ ⇒ enough to consider $\xi_k = (\xi_{k1}, \dots, \xi_{kd})$: iid with df $C$ ($k = 1, \dots, n$). Put
     $$G_n(u) := \frac{1}{n} \sum_{k=1}^n 1\{\xi_{k1} \le u_1, \dots, \xi_{kd} \le u_d\}, \qquad G_{ni}(u_i) := \frac{1}{n} \sum_{k=1}^n 1\{\xi_{ki} \le u_i\}.$$
     • $U_n^C(u) := \sqrt{n}\,\bigl(G_n(u) - C(u)\bigr)$: multivariate empirical process
     • $D_n^C(u) := \sqrt{n}\,\bigl(C_n(u) - C(u)\bigr)$: empirical copula process

  11. Asymptotic representation theorem

     Assume $C$ is differentiable with continuous $i$th partial derivatives $\partial_i C(u) := \partial C(u) / \partial u_i$, $i = 1, \dots, d$. Then we have
     $$D_n^C(u) = U_n^C(u) - \sum_{i=1}^d \partial_i C(u)\, U_n^C(1, u_i, 1) + R_n(u),$$
     where $(1, u_i, 1)$ denotes the vector with $u_i$ in the $i$th coordinate and 1 elsewhere, and $\sup_u |R_n(u)| = o_P(1)$ as $n \to \infty$.

     With stronger conditions on $C$, one can show
     $$\sup_u |R_n(u)| = O\bigl(n^{-1/4} (\log n)^{1/2} (\log\log n)^{1/4}\bigr) \quad \text{a.s.}$$
     [Tsukahara (2005), with Erratum (2011)]

  12. Proof: Write
     $$R_n(u) = D_n^C - U_n^C + \sum_{i=1}^d \partial_i C(u)\, U_n^C(1, u_i, 1) =: R_{1n}(u) + R_{2n}(u),$$
     where
     $$R_{1n}(u) := U_n^C\bigl(G_{n1}^{-1}(u_1), \dots, G_{nd}^{-1}(u_d)\bigr) - U_n^C(u),$$
     $$R_{2n}(u) := \sqrt{n}\left[ C\bigl(G_{n1}^{-1}(u_1), \dots, G_{nd}^{-1}(u_d)\bigr) - C(u) + \sum_{i=1}^d \partial_i C(u)\bigl(G_{ni}(u_i) - u_i\bigr) \right].$$

  13. $\sup_u |R_{1n}(u)| \to 0$ a.s.: Use
     • Probability inequality for the oscillation of $U_n^C$ [Einmahl (1987)]
     • Smirnov LIL: $\sup_u |G_{ni}^{-1}(u) - u| = O_P\bigl(n^{-1/2} (\log\log n)^{1/2}\bigr)$

     $\sup_u |R_{2n}(u)| \to 0$: Use
     • Mean value theorem and $0 \le \partial_i C \le 1$ (Lipschitz continuity of $C$)
     • Kiefer (1970):
     $$\sup_{u_i} \left| \sqrt{n}\bigl(G_{ni}^{-1}(u_i) - u_i + G_{ni}(u_i) - u_i\bigr) \right| = O\bigl(n^{-1/4} (\log n)^{1/2} (\log\log n)^{1/4}\bigr) \quad \text{a.s.}$$

  14. Weak convergence

     $$D_n^C \xrightarrow{\ \mathcal{L}\ } D^C \ \text{in } D([0,1]^d) \ \text{as } n \to \infty,$$
     where
     $$D^C(u) := U^C(u) - \sum_{i=1}^d \partial_i C(u)\, U^C(1, u_i, 1)$$
     and $U^C$ is a centered Gaussian process with
     $$\mathrm{Cov}\bigl(U^C(u), U^C(v)\bigr) = C(u \wedge v) - C(u)\,C(v).$$

  15. 3. Bootstrap Approximations for Empirical Copula

     Define
     $$\tilde{C}_n(u) := \frac{1}{n} \sum_{k=1}^n 1\{F_{n1}(X_{k1}) \le u_1, \dots, F_{nd}(X_{kd}) \le u_d\}.$$
     Noting that
     $$C_n(u) = \frac{1}{n} \sum_{k=1}^n 1\{X_{k1} \le F_{n1}^{-1}(u_1), \dots, X_{kd} \le F_{nd}^{-1}(u_d)\},$$
     one can show
     $$\sup_{u \in [0,1]^d} |\tilde{C}_n(u) - C_n(u)| \le \frac{d}{n}.$$

  16. (i) Traditional Bootstrap (Fermanian–Radulović–Wegkamp (2004))

     Define
     $$C_n^\#(u) := F_n^\#\bigl(F_{n1}^{\#\,-1}(u_1), \dots, F_{nd}^{\#\,-1}(u_d)\bigr),$$
     where
     $$F_n^\#(x) := \frac{1}{n} \sum_{k=1}^n W_{nk}\, 1\{X_{k1} \le x_1, \dots, X_{kd} \le x_d\}, \qquad F_{ni}^\#(x_i) := \frac{1}{n} \sum_{k=1}^n W_{nk}\, 1\{X_{ki} \le x_i\},$$
     $$(W_{n1}, \dots, W_{nn}) \sim \text{Multinomial}(n;\, 1/n, \dots, 1/n).$$
     Then
     $$\sqrt{n}\,\bigl(C_n^\# - C_n\bigr) \rightsquigarrow_{P_W} D^C.$$
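One bootstrap replicate can be sketched by realizing the multinomial weights implicitly, i.e. resampling the $n$ pairs with replacement (data and evaluation point below are illustrative):

```python
import math, random

# One traditional-bootstrap replicate: resample the n pairs with replacement
# (equivalent to multinomial weights) and rebuild the empirical copula.
random.seed(2)
n = 200
data = [(random.random(), random.random()) for _ in range(n)]

def emp_copula(pairs, u1, u2):
    m = len(pairs)
    xs = sorted(p[0] for p in pairs)
    ys = sorted(p[1] for p in pairs)
    q1 = xs[max(0, math.ceil(m * u1) - 1)]
    q2 = ys[max(0, math.ceil(m * u2) - 1)]
    return sum(1 for x, y in pairs if x <= q1 and y <= q2) / m

star = [random.choice(data) for _ in range(n)]   # bootstrap sample
repl = math.sqrt(n) * (emp_copula(star, 0.5, 0.5) - emp_copula(data, 0.5, 0.5))

# The difference of two copula values is below 1 in absolute value.
print(abs(repl) < math.sqrt(n))  # True
```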

  17. (ii) Multiplier with Derivative Estimates (Rémillard–Scaillet (2009))

     $$C_n^*(u) := \frac{1}{n} \sum_{k=1}^n Z_k\, 1\{F_{n1}(X_{k1}) \le u_1, \dots, F_{nd}(X_{kd}) \le u_d\},$$
     where $Z_1, \dots, Z_n$ are iid with mean 0 and variance 1
     $$\Rightarrow\ \beta_n := \sqrt{n}\,\bigl(C_n^* - \bar{Z}_n \tilde{C}_n\bigr) \rightsquigarrow U^C \quad \text{(unconditionally)}.$$
     Estimate the partial derivatives by
     $$\widehat{\partial_i C}(u) := \frac{C_n(u_1, \dots, u_i + h, \dots, u_d) - C_n(u_1, \dots, u_i - h, \dots, u_d)}{2h}$$
     with $h := n^{-1/2}$. Then
     $$\beta_n(u) - \sum_{i=1}^d \widehat{\partial_i C}(u)\, \beta_n(1, u_i, 1) \rightsquigarrow D^C \quad \text{(unconditionally)}.$$
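A sketch of this scheme for $d = 2$ (standard normal multipliers, simulated independent uniforms as data; all names are mine):

```python
import math, random

# Multiplier sketch (d = 2): weight each indicator with iid N(0,1) multipliers
# Z_k and correct with finite-difference estimates of the partials of C.
random.seed(3)
n = 400
data = [(random.random(), random.random()) for _ in range(n)]
xs = [p[0] for p in data]
ys = [p[1] for p in data]

def ecdf(sample, x):
    return sum(1 for s in sample if s <= x) / n

def C_tilde(u1, u2):
    return sum(1 for x, y in data
               if ecdf(xs, x) <= u1 and ecdf(ys, y) <= u2) / n

Z = [random.gauss(0, 1) for _ in range(n)]
Zbar = sum(Z) / n

def C_star(u1, u2):
    return sum(z for z, (x, y) in zip(Z, data)
               if ecdf(xs, x) <= u1 and ecdf(ys, y) <= u2) / n

def beta(u1, u2):
    return math.sqrt(n) * (C_star(u1, u2) - Zbar * C_tilde(u1, u2))

h = n ** -0.5                                   # bandwidth h = n^{-1/2}
def d1(u1, u2): return (C_tilde(u1 + h, u2) - C_tilde(u1 - h, u2)) / (2 * h)
def d2(u1, u2): return (C_tilde(u1, u2 + h) - C_tilde(u1, u2 - h)) / (2 * h)

u1, u2 = 0.5, 0.5
# One approximate draw from the limit process D^C at (u1, u2).
beta_hat = beta(u1, u2) - d1(u1, u2) * beta(u1, 1.0) - d2(u1, u2) * beta(1.0, u2)
```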

  18. (iii) Multiplier Bootstrap (Bücher–Dette (2010))

     Define
     $$C_n^\flat(u) := F_n^\flat\bigl(F_{n1}^{\flat\,-1}(u_1), \dots, F_{nd}^{\flat\,-1}(u_d)\bigr),$$
     where
     $$F_n^\flat(x) := \frac{1}{n} \sum_{k=1}^n \frac{\xi_k}{\bar{\xi}_n}\, 1\{X_{k1} \le x_1, \dots, X_{kd} \le x_d\}, \qquad F_{ni}^\flat(x_i) := \frac{1}{n} \sum_{k=1}^n \frac{\xi_k}{\bar{\xi}_n}\, 1\{X_{ki} \le x_i\},$$
     and $\xi_1, \dots, \xi_n$ are iid positive rv's with $E(\xi_i) = \mu$, $\mathrm{Var}(\xi_i) = \tau^2 > 0$. Then
     $$\sqrt{n}\,\frac{\mu}{\tau}\,\bigl(C_n^\flat - C_n\bigr) \rightsquigarrow_{P_\xi} D^C.$$
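A sketch for $d = 2$ with Exp(1) multipliers (so $\mu = \tau = 1$); the weighted ecdf and its quantile mirror the unweighted construction of $C_n$ (data and names are mine):

```python
import random

# Bücher–Dette-style sketch (d = 2): iid positive multipliers xi_k reweight
# the ecdfs, and C♭_n is built from the weighted ecdfs exactly as C_n is
# built from the unweighted ones.
random.seed(5)
n = 200
data = [(random.random(), random.random()) for _ in range(n)]
xi = [random.expovariate(1.0) for _ in range(n)]   # E xi = 1, Var xi = 1
S = sum(xi)
pairs = list(zip(xi, data))

def wquantile(coord, u):
    # weighted empirical quantile: smallest x with weighted ecdf >= u
    acc = 0.0
    for w, p in sorted(pairs, key=lambda wp: wp[1][coord]):
        acc += w / S
        if acc >= u:
            return p[coord]
    return max(p[coord] for _, p in pairs)

def C_flat(u1, u2):
    q1, q2 = wquantile(0, u1), wquantile(1, u2)
    return sum(w for w, (x, y) in pairs if x <= q1 and y <= q2) / S

# All of the (normalized) weight sits below the top corner.
print(abs(C_flat(1.0, 1.0) - 1.0) < 1e-9)  # True
```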

  19. 4. A New Scheme by Prof. Sibuya

     Let $d = 2$ for simplicity. $(X_1, Y_1), \dots, (X_n, Y_n)$: iid with continuous df $F(x, y) = C(F_1(x), F_2(y))$. For each $i = 1, \dots, n$,
     • $R_{ni}$ := rank of $X_i$ among $X_1, \dots, X_n$
     • $Q_{ni}$ := rank of $Y_i$ among $Y_1, \dots, Y_n$

     The vectors of ranks $(R_{n1}, Q_{n1}), \dots, (R_{nn}, Q_{nn})$ are sufficient for $C$ ⇒ why don't we resample based only on $(R_{n1}, Q_{n1}), \dots, (R_{nn}, Q_{nn})$?

  20. Let $U_1, \dots, U_n, V_1, \dots, V_n$ be independent U(0,1) random variables, independent of $(X_1, Y_1), \dots, (X_n, Y_n)$, and let
     • $U_{1:n} < \cdots < U_{n:n}$: order statistics of $U_1, \dots, U_n$
     • $V_{1:n} < \cdots < V_{n:n}$: order statistics of $V_1, \dots, V_n$

     For each $i = 1, \dots, n$, put
     $$\tilde{U}_{ni} := U_{R_{ni}:n}, \qquad \tilde{V}_{ni} := V_{Q_{ni}:n}.$$
     One can easily see that
     1. $(\tilde{U}_{n1}, \tilde{V}_{n1}), \dots, (\tilde{U}_{nn}, \tilde{V}_{nn})$ are NOT independent;
     2. $(\tilde{U}_{n1}, \tilde{V}_{n1}), \dots, (\tilde{U}_{nn}, \tilde{V}_{nn})$ are identically distributed, with the distribution varying with $n$.
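The construction above is straightforward to simulate (a sketch; the data-generating distribution is an illustrative choice of mine):

```python
import random

# Sibuya's scheme (d = 2): keep only the rank vectors, draw fresh independent
# uniforms, and plug the observed ranks into their order statistics.
random.seed(4)
n = 300
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def ranks(values):
    order = sorted(range(len(values)), key=lambda k: values[k])
    r = [0] * len(values)
    for pos, k in enumerate(order):
        r[k] = pos + 1          # rank of values[k], from 1 to n
    return r

R = ranks([p[0] for p in data])     # ranks of X_1, ..., X_n
Q = ranks([p[1] for p in data])     # ranks of Y_1, ..., Y_n

U = sorted(random.random() for _ in range(n))   # U_{1:n} < ... < U_{n:n}
V = sorted(random.random() for _ in range(n))

# (U~_{ni}, V~_{ni}) := (U_{R_{ni}:n}, V_{Q_{ni}:n})
resampled = [(U[R[i] - 1], V[Q[i] - 1]) for i in range(n)]

# The first coordinates are exactly the multiset {U_1, ..., U_n}: the scheme
# only re-pairs fresh uniforms according to the observed rank structure.
print(sorted(u for u, _ in resampled) == U)  # True
```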

  21. Marginal df:
     $$P(\tilde{U}_{n1} \le u) = E\bigl[P(U_{R_{n1}:n} \le u \mid R_{n1})\bigr] = \frac{1}{n} \sum_{r=1}^n P(U_{r:n} \le u)$$
     $$= \sum_{r=1}^n \binom{n-1}{r-1} \int_0^u t^{r-1} (1-t)^{n-r}\, dt = \int_0^u \sum_{\nu=0}^{n-1} p_{n-1,\nu}(t)\, dt = u,$$
     where
     $$p_{n,k}(t) = \binom{n}{k} t^k (1-t)^{n-k}.$$
     $\Rightarrow\ \tilde{U}_{ni} \sim U(0,1)$, $\tilde{V}_{ni} \sim U(0,1)$ ($i = 1, \dots, n$).
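A numerical companion to this computation (a sketch; names are mine): $P(U_{r:n} \le u)$ equals the binomial tail $P(\mathrm{Bin}(n, u) \ge r)$, so averaging over $r$ gives $E[\mathrm{Bin}(n, u)]/n = u$, matching the integral identity above.

```python
import math

# P(U_{r:n} <= u) = P(Bin(n, u) >= r); averaging the tails over r = 1..n
# gives E[Bin(n, u)] / n = u, i.e. the marginal is uniform.
def order_stat_cdf(nn, r, u):
    return sum(math.comb(nn, k) * u**k * (1 - u)**(nn - k) for k in range(r, nn + 1))

nn, u = 10, 0.37
avg = sum(order_stat_cdf(nn, r, u) for r in range(1, nn + 1)) / nn
print(abs(avg - u) < 1e-12)  # True
```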

  22. Joint df: $H_n(u, v) := P(\tilde{U}_{ni} \le u, \tilde{V}_{ni} \le v)$
     $$H_n(u, v) = E\bigl[P(U_{R_{ni}:n} \le u, V_{Q_{ni}:n} \le v \mid R_{ni}, Q_{ni})\bigr] = \sum_{r,q=1}^n P(U_{r:n} \le u)\, P(V_{q:n} \le v)\, P(R_{ni} = r, Q_{ni} = q)$$
     $$= \sum_{r,q=1}^n \frac{n!}{(r-1)!(n-r)!}\, \frac{n!}{(q-1)!(n-q)!} \int_0^u \int_0^v t^{r-1}(1-t)^{n-r}\, s^{q-1}(1-s)^{n-q}\, P(R_{ni} = r, Q_{ni} = q)\, dt\, ds$$
     $$=: \int_0^u \int_0^v J(s, t)\, dt\, ds.$$
     Let $K_n(u, v) := P(R_{ni} \le nu, Q_{ni} \le nv)$. Then
     $$P(R_{ni} = r, Q_{ni} = q) = P\left(\frac{r-1}{n} < \frac{R_{ni}}{n} \le \frac{r}{n},\ \frac{q-1}{n} < \frac{Q_{ni}}{n} \le \frac{q}{n}\right).$$
