  1. Measuring tail dependence for collateral losses using bivariate Lévy process
  Jiwook Jang, Actuarial Studies, Faculty of Commerce and Economics, University of New South Wales, Sydney, AUSTRALIA
  Actuarial Studies Research Symposium, 11 November 2005

  2. Overview
  • Collateral losses: Worldwide, when a storm or earthquake strikes, it damages property and motor vehicles and interrupts businesses. Several losses occurred simultaneously from the World Trade Centre (WTC) catastrophe.
  • Bivariate Lévy process with a copula, i.e. a bivariate compound Poisson process with a member of the Farlie-Gumbel-Morgenstern copula family for the dependence between losses.
  • Calculation of the coefficient of (upper) tail dependence using the Fast Fourier transform.

  3. Bivariate aggregate losses
  • Insurance companies experience dependent losses from one specific event such as a flood, windstorm, hail, earthquake or terrorist attack. So for the bivariate risk case, we can model
  $L_t^{(1)} = \sum_{i=1}^{N_t} X_i$,  $L_t^{(2)} = \sum_{i=1}^{N_t} Y_i$,

  4. where $L_t^{(1)}$ is the total loss arising from risk type 1, $L_t^{(2)}$ is the total loss arising from risk type 2 and $N_t$ is the total number of collateral losses up to time $t$. $X_i$ and $Y_i$, $i = 1, 2, \cdots$, are the loss amounts, which are assumed to be dependent on each other, where $H(x)$ is the common distribution function of the $X_i$ and $H(y)$ is the common distribution function of the $Y_i$.
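
The aggregate-loss pair above can be simulated directly once a sampler for the dependent pairs $(X_i, Y_i)$ is available. The sketch below is not part of the original slides: it draws a shared Poisson count $N_t$ with rate $\mu$ and sums the pairs, and the function names (`simulate_collateral_losses`, `sample_pairs`) and the independent-exponential placeholder are illustrative assumptions, to be replaced by the FGM sampler sketched after the next slide.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_collateral_losses(t, mu, sample_pairs, n_paths=10_000):
    """Simulate paths of (L_t^(1), L_t^(2)) = (sum X_i, sum Y_i) over a
    shared Poisson count N_t with rate mu; sample_pairs(n) must return n
    dependent (X, Y) loss pairs."""
    counts = rng.poisson(mu * t, size=n_paths)
    l1, l2 = np.zeros(n_paths), np.zeros(n_paths)
    for k, n in enumerate(counts):
        if n > 0:
            x, y = sample_pairs(n)
            l1[k], l2[k] = x.sum(), y.sum()
    return l1, l2

# placeholder pair sampler: independent Exp(alpha), Exp(beta) losses
# (to be replaced by the FGM-copula sampler sketched after the next slide)
def independent_pairs(n, alpha=1.0, beta=2.0):
    return rng.exponential(1 / alpha, n), rng.exponential(1 / beta, n)

L1, L2 = simulate_collateral_losses(t=1.0, mu=5.0, sample_pairs=independent_pairs)
print(L1.mean(), L2.mean())  # close to mu*t/alpha = 5 and mu*t/beta = 2.5
```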

  5. A point process and a copula
  • We assume that the collateral loss arrival process $N_t$ follows a Poisson process with loss frequency rate $\mu$. It is also assumed that $N_t$ is independent of $X_i$ and $Y_i$.
  • We employ the Farlie-Gumbel-Morgenstern family copula, given by
  $C(u, v) = uv + \theta uv(1 - u)(1 - v)$,
  where $u \in [0, 1]$, $v \in [0, 1]$ and $\theta \in [-1, 1]$, to capture the dependence between the collateral losses $X$ and $Y$.
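
A common way to draw from the FGM copula is conditional inversion: given $U = u$, the conditional distribution of $V$ is $\partial C/\partial u = v + \theta(1 - 2u)v(1 - v)$, and setting it equal to a uniform draw $w$ gives a quadratic in $v$. The sketch below is an implementation assumption, not material from the slides; it uses that root and then inverts exponential margins (anticipating the margin choice made on slide 10), with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def fgm_exponential_pairs(n, alpha=1.0, beta=2.0, theta=0.9):
    """Draw n dependent pairs (X, Y) with Exp(alpha) and Exp(beta) margins
    joined by the FGM copula C(u, v) = uv + theta*u*v*(1-u)*(1-v)."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    # conditional inversion: solve v + theta*(1-2u)*v*(1-v) = w for v,
    # i.e. the root of a*v^2 - (1+a)*v + w = 0 lying in [0, 1]
    a = theta * (1.0 - 2.0 * u)
    disc = np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)
    v = np.where(np.abs(a) < 1e-12, w, (1.0 + a - disc) / (2.0 * a))
    # invert the exponential margins H(x), H(y)
    return -np.log(1.0 - u) / alpha, -np.log(1.0 - v) / beta

x, y = fgm_exponential_pairs(200_000)
print(np.corrcoef(x, y)[0, 1])  # around theta/4 = 0.225 for exponential margins
```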

  6. Copula
  • A general approach to modelling dependence between random variables is to specify the joint distribution of the variables using copulas.
  • Dependence between random variables is usually completely described by their multivariate distribution function. To define a copula more formally, consider $u = (u_1, \cdots, u_n)$ in the $n$-cube $[0, 1]^n$. A copula $C(u)$ is a function with domain $[0, 1]^n$ and range $[0, 1]$ that is a multivariate cumulative distribution function whose univariate marginals are uniform $U(0, 1)$.

  7. • As a consequence of this definition, we see that
  $C(u_1, \cdots, u_{k-1}, 0, u_{k+1}, \cdots, u_n) = 0$
  and
  $C(1, \cdots, 1, u_k, 1, \cdots, 1) = u_k$
  for all $k = 1, 2, \cdots, n$. Any copula $C$ is therefore the distribution of a multivariate uniform random vector.

  8. Sklar's theorem
  • Let $F$ be a two-dimensional distribution function with margins $F_1, F_2$. Then there exists a two-dimensional copula $C$ such that for all $x \in \mathbb{R}^2$,
  $F(x_1, x_2) = C(F_1(x_1), F_2(x_2))$.   (1)
  • If $F_1$ and $F_2$ are continuous then $C$ is unique, i.e.
  $C(u_1, u_2) = F(F_1^{-1}(u_1), F_2^{-1}(u_2))$,

  9. where $F_1^{-1}, F_2^{-1}$ denote the quantile functions of the univariate margins $F_1, F_2$. Otherwise $C$ is uniquely determined on $\operatorname{Ran} F_1 \times \operatorname{Ran} F_2$.
  • Conversely, if $C$ is a copula and $F_1$ and $F_2$ are distribution functions, then the function $F$ defined by (1) is a two-dimensional distribution function with margins $F_1$ and $F_2$.
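
As a concrete illustration of Sklar's theorem (not from the slides), one can build $F$ from the FGM copula and two exponential margins and then recover the copula through the quantile functions; the parameter values below are arbitrary.

```python
import numpy as np

def fgm_copula(u, v, theta=0.5):
    return u * v + theta * u * v * (1 - u) * (1 - v)

# illustrative margins: exponential with rates alpha, beta
alpha, beta = 1.0, 2.0
F1 = lambda x: 1 - np.exp(-alpha * x)
F2 = lambda y: 1 - np.exp(-beta * y)
F1_inv = lambda u: -np.log(1 - u) / alpha  # quantile functions
F2_inv = lambda v: -np.log(1 - v) / beta

# joint distribution built from the copula as in equation (1)
F = lambda x, y: fgm_copula(F1(x), F2(y))

# recovering the copula: C(u1, u2) = F(F1^{-1}(u1), F2^{-1}(u2))
u1, u2 = 0.3, 0.8
print(F(F1_inv(u1), F2_inv(u2)), fgm_copula(u1, u2))  # the two values agree
```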

  10. Farlie-Gumbel-Morgenstern family copula with exponential margins
  • In order to obtain an explicit expression for the function $F(x, y)$, the two-dimensional distribution function with margins $H(x)$ and $H(y)$, we let $X$ and $Y$ be exponential random variables, i.e. $H(x) = 1 - e^{-\alpha x}$ ($\alpha > 0$, $x > 0$) and $H(y) = 1 - e^{-\beta y}$ ($\beta > 0$, $y > 0$). Then the joint distribution function $F(x, y)$ is given by
  $F(x, y) = C(1 - e^{-\alpha x}, 1 - e^{-\beta y}) = 1 - e^{-\beta y} - e^{-\alpha x} + e^{-\alpha x - \beta y} + \theta e^{-\alpha x - \beta y} - \theta e^{-\alpha x - 2\beta y} - \theta e^{-2\alpha x - \beta y} + \theta e^{-2\alpha x - 2\beta y}$.

  11. • Its density, obtained by differentiation, is given by
  $dF(x, y) = dC(1 - e^{-\alpha x}, 1 - e^{-\beta y}) = (1 + \theta)\alpha\beta e^{-\alpha x - \beta y} - 2\theta\alpha\beta e^{-\alpha x - 2\beta y} - 2\theta\alpha\beta e^{-2\alpha x - \beta y} + 4\theta\alpha\beta e^{-2\alpha x - 2\beta y}$.   (2)
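
Both the expansion of $F(x, y)$ on the previous slide and the density (2) can be checked symbolically. A minimal sketch, assuming `sympy` is available, expands $C(1 - e^{-\alpha x}, 1 - e^{-\beta y})$ and takes the mixed partial derivative, confirming (2).

```python
import sympy as sp

x, y, alpha, beta, theta = sp.symbols('x y alpha beta theta', positive=True)

u = 1 - sp.exp(-alpha * x)  # H(x)
v = 1 - sp.exp(-beta * y)   # H(y)
C = u * v + theta * u * v * (1 - u) * (1 - v)  # FGM copula at the margins

F = sp.expand(C)                 # joint distribution function F(x, y)
f = sp.expand(sp.diff(F, x, y))  # joint density, equation (2)

target = (alpha * beta * ((1 + theta) * sp.exp(-alpha*x - beta*y)
                          - 2*theta*sp.exp(-alpha*x - 2*beta*y)
                          - 2*theta*sp.exp(-2*alpha*x - beta*y)
                          + 4*theta*sp.exp(-2*alpha*x - 2*beta*y)))
print(sp.simplify(f - target))   # prints 0
```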

  12. Upper tail dependence of collateral losses
  • We examine the upper tail dependence of the collateral losses $X$ and $Y$, as in practice insurance companies' concerns are with extreme losses.
  • We adopt the coefficient of upper tail dependence, $\lambda_U$, used by Embrechts, Lindskog and McNeil (2003),
  $\lambda_U = \lim_{u \nearrow 1} P\left( L_t^{(2)} > G_{L_t^{(2)}}^{-1}(u) \mid L_t^{(1)} > G_{L_t^{(1)}}^{-1}(u) \right)$,
  provided that the limit $\lambda_U \in [0, 1]$ exists, where $G_{L_t^{(1)}}$ and $G_{L_t^{(2)}}$ are the marginal distribution functions of $L_t^{(1)}$ and $L_t^{(2)}$.
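
The slides compute $\lambda_U$ via the Fast Fourier transform; as a rough, finite-level cross-check one can also estimate the conditional exceedance probability $P(L_t^{(2)} > G_{L_t^{(2)}}^{-1}(u) \mid L_t^{(1)} > G_{L_t^{(1)}}^{-1}(u))$ by simulation at levels $u$ close to 1. The sketch below is such an approximation only (the true $\lambda_U$ is a limit), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def fgm_exp_pairs(n, alpha, beta, theta):
    """Dependent (X, Y) with exponential margins via FGM conditional inversion."""
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    a = theta * (1 - 2 * u)
    v = np.where(np.abs(a) < 1e-12, w,
                 (1 + a - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a))
    return -np.log(1 - u) / alpha, -np.log(1 - v) / beta

def aggregate_losses(t, mu, alpha, beta, theta, n_paths):
    counts = rng.poisson(mu * t, n_paths)
    l1, l2 = np.zeros(n_paths), np.zeros(n_paths)
    for k, n in enumerate(counts):
        if n:
            x, y = fgm_exp_pairs(n, alpha, beta, theta)
            l1[k], l2[k] = x.sum(), y.sum()
    return l1, l2

# empirical conditional exceedance probability at level u (finite-u proxy for lambda_U)
l1, l2 = aggregate_losses(t=1.0, mu=5.0, alpha=1.0, beta=2.0, theta=1.0, n_paths=200_000)
for u in (0.95, 0.99, 0.995):
    q1, q2 = np.quantile(l1, u), np.quantile(l2, u)
    print(u, np.mean(l2[l1 > q1] > q2))
```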

  13. The generator of the process $\left( L_t^{(1)}, L_t^{(2)}, t \right)$
  • The generator of the process $\left( L_t^{(1)}, L_t^{(2)}, t \right)$ acting on a function $f\left( l^{(1)}, l^{(2)}, t \right)$ belonging to its domain is given by
  $\mathcal{A} f\left( l^{(1)}, l^{(2)}, t \right) = \frac{\partial f}{\partial t} + \mu \left[ \int_0^\infty \int_0^\infty f\left( l^{(1)} + x, l^{(2)} + y, t \right) dC(H(x), H(y)) - f\left( l^{(1)}, l^{(2)}, t \right) \right]$.

  14. A suitable martingale
  • Considering constants $\nu \ge 0$ and $\xi \ge 0$,
  $\exp\left( -\nu L_t^{(1)} \right) \exp\left( -\xi L_t^{(2)} \right) \exp\left[ \int_0^t \mu \left\{ 1 - \hat{c}(\nu, \xi) \right\} ds \right]$
  is a martingale, where $\hat{c}(\nu, \xi) = \int_0^\infty \int_0^\infty e^{-\nu x} e^{-\xi y} \, dC(H(x), H(y))$.

  15. The joint Laplace transform of the distribution of $L_t^{(1)}$ and $L_t^{(2)}$
  • Using the martingale obtained above, the joint Laplace transform of the distribution of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$ is given by
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \mid L_0^{(1)}, L_0^{(2)} \right\} = \exp\left( -\nu L_0^{(1)} \right) \exp\left( -\xi L_0^{(2)} \right) \times \exp\left[ -\int_0^t \mu \left\{ 1 - \hat{c}(\nu, \xi) \right\} ds \right]$.

  16. • For simplicity, we assume that $L_0^{(1)} = 0$ and $L_0^{(2)} = 0$; then it is given by
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} = \exp\left[ -\int_0^t \mu \left\{ 1 - \hat{c}(\nu, \xi) \right\} ds \right]$,
  where $\hat{c}(\nu, \xi) = \int_0^\infty \int_0^\infty e^{-\nu x} e^{-\xi y} \, dC(H(x), H(y))$.

  17. • In order to obtain an explicit expression for the joint Laplace transform of the distribution of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$, let us use the joint density function $f(x, y)$ given by (2); then it is given by
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} = \exp\left[ -\mu t \left\{ \frac{(\alpha\xi + \beta\nu + \nu\xi)(2\alpha + \nu)(2\beta + \xi) - \theta\alpha\beta\nu\xi}{(\alpha + \nu)(\beta + \xi)(2\alpha + \nu)(2\beta + \xi)} \right\} \right]$.   (3)
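
Equation (3) can be cross-checked numerically by computing $\hat{c}(\nu, \xi)$ as a double integral of $e^{-\nu x - \xi y}$ against the density (2) and comparing $\exp[-\mu t\{1 - \hat{c}(\nu, \xi)\}]$ with the closed form. A minimal sketch, assuming `scipy` is available and with arbitrary parameter values:

```python
import numpy as np
from scipy.integrate import dblquad

alpha, beta, theta, mu, t = 1.0, 2.0, 0.7, 5.0, 1.0  # illustrative values
nu, xi = 0.4, 0.9

def density(x, y):
    """FGM / exponential joint density, equation (2)."""
    return alpha * beta * ((1 + theta) * np.exp(-alpha*x - beta*y)
                           - 2*theta * np.exp(-alpha*x - 2*beta*y)
                           - 2*theta * np.exp(-2*alpha*x - beta*y)
                           + 4*theta * np.exp(-2*alpha*x - 2*beta*y))

# c_hat(nu, xi) = E[exp(-nu X - xi Y)] by numerical integration
c_hat, _ = dblquad(lambda y, x: np.exp(-nu*x - xi*y) * density(x, y),
                   0, np.inf, 0, np.inf)
numeric = np.exp(-mu * t * (1 - c_hat))

closed = np.exp(-mu * t
                * ((alpha*xi + beta*nu + nu*xi) * (2*alpha + nu) * (2*beta + xi)
                   - theta*alpha*beta*nu*xi)
                / ((alpha + nu) * (beta + xi) * (2*alpha + nu) * (2*beta + xi)))
print(numeric, closed)  # agree to integration tolerance
```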

  18. • If we set $\xi = 0$, then the Laplace transform of the distribution of $L_t^{(1)}$ is given by
  $E\left\{ e^{-\nu L_t^{(1)}} \right\} = \exp\left\{ -\mu t \left( \frac{\nu}{\alpha + \nu} \right) \right\}$   (4)
  and if we set $\nu = 0$, then the Laplace transform of the distribution of $L_t^{(2)}$ is given by
  $E\left\{ e^{-\xi L_t^{(2)}} \right\} = \exp\left\{ -\mu t \left( \frac{\xi}{\beta + \xi} \right) \right\}$,   (5)

  19. which are the Laplace transforms of the distributions of compound Poisson processes with exponential loss sizes. Due to the dependence of the collateral losses $X$ and $Y$, which share the loss frequency rate $\mu$, it is obvious that
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} \ne E\left\{ e^{-\nu L_t^{(1)}} \right\} E\left\{ e^{-\xi L_t^{(2)}} \right\}$.

  20. When $\theta = 0$, i.e. no dependence in loss sizes
  • If $\theta = 0$, then we have
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} = \exp\left[ -\mu t \left\{ \frac{\alpha\xi + \beta\nu + \nu\xi}{(\alpha + \nu)(\beta + \xi)} \right\} \right]$,   (6)
  which is the case in which the two losses $X$ and $Y$ occur at the same time from the shared loss frequency rate $\mu$, but their sizes are independent of each other.
  • If loss $X$ occurs with its own frequency rate $\mu^{(x)}$ and loss $Y$ occurs with its own frequency rate $\mu^{(y)}$, respectively, and everything is independent of everything else,

  21. then we can easily derive the explicit expression of the joint Laplace transform of the distribution of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$, i.e.
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} = E\left\{ e^{-\nu L_t^{(1)}} \right\} E\left\{ e^{-\xi L_t^{(2)}} \right\} = \exp\left\{ -\mu^{(x)} t \left( \frac{\nu}{\alpha + \nu} \right) \right\} \exp\left\{ -\mu^{(y)} t \left( \frac{\xi}{\beta + \xi} \right) \right\}$.   (7)

  22. • If we set $\mu = \mu^{(x)} = \mu^{(y)}$, i.e. the frequency rates for losses $X$ and $Y$ are the same, then (7) becomes
  $E\left\{ e^{-\nu L_t^{(1)}} e^{-\xi L_t^{(2)}} \right\} = E\left\{ e^{-\nu L_t^{(1)}} \right\} E\left\{ e^{-\xi L_t^{(2)}} \right\} = \exp\left\{ -\mu t \left( \frac{\nu}{\alpha + \nu} \right) \right\} \exp\left\{ -\mu t \left( \frac{\xi}{\beta + \xi} \right) \right\} = \exp\left[ -\mu t \left\{ \frac{\alpha\xi + \beta\nu + 2\nu\xi}{(\alpha + \nu)(\beta + \xi)} \right\} \right]$.   (8)
  Equation (8) looks similar to (6) because the loss sizes $X$ and $Y$ are independent and the frequency rates are the same. However, the joint Laplace transform of the distribution of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$ expressed by (8) covers the case in which the losses occur independently, not collaterally as in (6).
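
The algebra behind (8), namely that $\frac{\nu}{\alpha+\nu} + \frac{\xi}{\beta+\xi} = \frac{\alpha\xi + \beta\nu + 2\nu\xi}{(\alpha+\nu)(\beta+\xi)}$, can be confirmed with a short symbolic check (assuming `sympy`):

```python
import sympy as sp

nu, xi, alpha, beta = sp.symbols('nu xi alpha beta', positive=True)
lhs = nu / (alpha + nu) + xi / (beta + xi)  # exponents added in (7) with mu^(x) = mu^(y)
rhs = (alpha*xi + beta*nu + 2*nu*xi) / ((alpha + nu) * (beta + xi))  # exponent in (8)
print(sp.simplify(lhs - rhs))  # prints 0
```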

  23. Covariance and linear correlation of collateral losses
  • Differentiating (3) w.r.t. $\nu$ and $\xi$ and setting $\nu = 0$ and $\xi = 0$, we can easily derive the joint expectation of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$, i.e.
  $E\left\{ L_t^{(1)} L_t^{(2)} \right\} = \frac{\mu^2}{\alpha\beta} t^2 + \frac{\mu}{\alpha\beta} \left( 1 + \frac{\theta}{4} \right) t$.
  • Also, from (4) and (5) we can easily derive the expectations of $L_t^{(1)}$ and $L_t^{(2)}$ at time $t$, i.e.
  $E\left\{ L_t^{(1)} \right\} = \frac{\mu}{\alpha} t$ and $E\left\{ L_t^{(2)} \right\} = \frac{\mu}{\beta} t$.
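
The cross moment stated on this slide follows from differentiating the joint Laplace transform (3). A symbolic sketch mirroring the slide's derivation (assuming `sympy`):

```python
import sympy as sp

nu, xi, alpha, beta, theta, mu, t = sp.symbols('nu xi alpha beta theta mu t', positive=True)

# joint Laplace transform (3)
lt = sp.exp(-mu * t
            * ((alpha*xi + beta*nu + nu*xi) * (2*alpha + nu) * (2*beta + xi)
               - theta*alpha*beta*nu*xi)
            / ((alpha + nu) * (beta + xi) * (2*alpha + nu) * (2*beta + xi)))

# E{L1 L2} = d^2/(d nu d xi) of the transform, evaluated at nu = xi = 0
cross = sp.simplify(sp.diff(lt, nu, xi).subs({nu: 0, xi: 0}))
target = mu**2 * t**2 / (alpha * beta) + mu * t * (1 + theta / 4) / (alpha * beta)
print(sp.simplify(cross - target))  # prints 0
```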
