Kernel Based Estimation of Inequality Indices and Risk Measures

Arthur Charpentier, arthur.charpentier@univ-rennes1.fr, http://freakonometrics.hypotheses.org/

Rennes, SMART Workshop, 2014

based on joint work with E. Flachaire


initiated by some joint work with A. Oulidi, J.D. Fermanian, O. Scaillet, G. Geenens and D. Paindaveine (Université de Rennes 1, 2015)


Stochastic Dominance and Related Indices

  • First Order Stochastic Dominance (cf. standard stochastic order, $\preceq_{st}$)

$X \preceq_1 Y \iff F_X(t) \geq F_Y(t),\ \forall t \iff \mathrm{VaR}_X(u) \leq \mathrm{VaR}_Y(u),\ \forall u$

  • Convex Stochastic Dominance (cf. martingale property)

$X \preceq_{cx} Y \iff \mathbb{E}[\widetilde{Y} \mid \widetilde{X}] = \widetilde{X} \iff \mathrm{ES}_X(u) \leq \mathrm{ES}_Y(u),\ \forall u$ and $\mathbb{E}(X) = \mathbb{E}(Y)$

(for some $\widetilde{X} \stackrel{d}{=} X$ and $\widetilde{Y} \stackrel{d}{=} Y$)

  • Second Order Stochastic Dominance (cf. submartingale property, stop-loss order, $\preceq_{icx}$)

$X \preceq_2 Y \iff \mathbb{E}[\widetilde{Y} \mid \widetilde{X}] \geq \widetilde{X} \iff \mathrm{ES}_X(u) \leq \mathrm{ES}_Y(u),\ \forall u$

  • Lorenz Stochastic Dominance (cf. dilatation order)

$X \preceq_L Y \iff \dfrac{X}{\mathbb{E}[X]} \preceq_{cx} \dfrac{Y}{\mathbb{E}[Y]} \iff L_X(u) \leq L_Y(u),\ \forall u$
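The first-order condition can be checked empirically by comparing the two empirical distribution functions on a grid. A minimal pure-Python sketch (the sample construction and grid are illustrative, not from the talk):

```python
import random

def ecdf(sample):
    """Empirical distribution function of a sample, as a callable."""
    xs = sorted(sample)
    n = len(xs)
    return lambda t: sum(1 for x in xs if x <= t) / n

def dominates_first_order(x, y, grid):
    """X dominates Y at first order iff F_X(t) <= F_Y(t) for all t."""
    Fx, Fy = ecdf(x), ecdf(y)
    return all(Fx(t) <= Fy(t) for t in grid)

random.seed(1)
y = [random.gauss(0, 1) for _ in range(2000)]
x = [v + 1.0 for v in y]                      # a location shift: X = Y + 1
grid = [i / 10 for i in range(-50, 60)]
print(dominates_first_order(x, y, grid))      # True: the shifted sample dominates
```

Here dominance holds exactly, since shifting every observation up by 1 moves the whole empirical cdf to the right.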


Stochastic Dominance and Related Indices

  • Parametric Model(s)

E.g. $\mathcal{N}(\mu_X, \sigma_X^2) \preceq_1 \mathcal{N}(\mu_Y, \sigma_Y^2) \iff \mu_X \leq \mu_Y$ and $\sigma_X^2 = \sigma_Y^2$

E.g. $\mathcal{N}(\mu_X, \sigma_X^2) \preceq_{cx} \mathcal{N}(\mu_Y, \sigma_Y^2) \iff \mu_X = \mu_Y$ and $\sigma_X^2 \leq \sigma_Y^2$

E.g. $\mathcal{N}(\mu_X, \sigma_X^2) \preceq_2 \mathcal{N}(\mu_Y, \sigma_Y^2) \iff \mu_X \leq \mu_Y$ and $\sigma_X^2 \leq \sigma_Y^2$

Or some other parametric distribution, e.g. a lognormal distribution for losses.

[Figure: density and cumulative distribution function of the fitted parametric model]


Stochastic Dominance and Related Indices

  • Non-parametric Model(s)

[Figure: nonparametric estimates of the density and of the cumulative distribution function]


Nonparametric estimation of the density

[Figure: nonparametric density estimate of a positive, skewed sample]
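As a baseline for everything that follows, the classical estimator can be sketched in a few lines; a minimal pure-Python Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth (sample and evaluation point are illustrative):

```python
import math, random

def gaussian_kde(sample, h):
    """Classical kernel density estimator with a Gaussian kernel."""
    n = len(sample)
    c = n * h * math.sqrt(2 * math.pi)
    return lambda x: sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                         for xi in sample) / c

random.seed(42)
sample = [random.lognormvariate(0, 0.5) for _ in range(1000)]
n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
h = 1.06 * sd * n ** (-1 / 5)        # Silverman's rule of thumb
f_hat = gaussian_kde(sample, h)
print(round(f_hat(1.0), 3))
```

On a positive, skewed sample such as this one, the estimator necessarily leaks some mass onto the negative half-line, which is exactly the issue the talk addresses.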


Agenda

sample $\{y_1, \cdots, y_n\} \to \widehat{f}_n(\cdot) \to \widehat{F}_n(\cdot)$ or $\widehat{F}_n^{-1}(\cdot) \to R(\widehat{f}_n)$

  • Estimating densities of copulas
  • Beta kernels
  • Transformed kernels
  • Combining transformed and Beta kernels
  • Moving around the Beta distribution
  • Mixtures of Beta distributions
  • Bernstein Polynomials
  • Some probit type transformations



Non parametric estimation of copula density

see C., Fermanian & Scaillet (2005), bias of kernel estimators at endpoints

[Figure: kernel based estimation of the uniform density on [0,1], biased at the boundaries]


Non parametric estimation of copula density

e.g. $\mathbb{E}[\widehat{c}(0, 0, h)] = \frac{1}{4}\, c(0, 0) - \frac{1}{2}\,[c_1(0, 0) + c_2(0, 0)] \int_0^1 \omega K(\omega)\, d\omega \cdot h + o(h)$

[Figure: true Frank copula density vs. its estimation with a symmetric kernel (here a Gaussian kernel)]


Non parametric estimation of copula density

[Figure: standard Gaussian kernel estimator of the copula density on the diagonal, n = 100, 1000 and 10000, with the density of the estimator]

Nice asymptotic properties, see Fermanian et al. (2005)... but still: on finite samples, bad behavior at the borders.


Beta kernel idea (for copulas)

see Chen (1999, 2000), Bouezmarni & Rolin (2003),

$K_{x_i}(u) \propto \exp\left(-\frac{(u - x_i)^2}{h^2}\right)$ vs. $K_{x_i}(u) \propto u_1^{x_{1,i}/b}\,[1 - u_1]^{(1 - x_{1,i})/b} \cdot u_2^{x_{2,i}/b}\,[1 - u_2]^{(1 - x_{2,i})/b}$
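In one dimension the idea can be sketched directly: each observation contributes a Beta density with parameters driven by the observation, so the kernel lives on $[0,1]$ and no mass leaks outside. A pure-Python sketch using the parametrisation above (the bivariate case is the product of two such kernels; bandwidth and sample are illustrative):

```python
import math, random

def beta_pdf(u, a, b):
    """Density of the Beta(a, b) distribution at u in (0, 1)."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_B)

def beta_kernel_estimator(sample, b):
    """Beta kernel estimator on [0, 1], parametrised as on the slide."""
    n = len(sample)
    return lambda u: sum(beta_pdf(u, xi / b, (1 - xi) / b)
                         for xi in sample) / n

random.seed(0)
sample = [random.random() for _ in range(2000)]   # true density: uniform on [0, 1]
g_hat = beta_kernel_estimator(sample, 0.05)
print(round(g_hat(0.5), 2))                       # close to 1 in the interior
```

Unlike the Gaussian kernel of the earlier slides, this estimator stays close to 1 even near 0 and 1 for uniform data, since every kernel integrates to one over $[0,1]$.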
[Figure: 3×3 grid of independent bivariate Beta kernels, centered at x, y ∈ {0.0, 0.2, 0.5}]

[Figure: Frank copula density vs. Beta kernel estimates of the copula density, b = 0.1 and b = 0.05]

[Figure: Beta kernel estimator on the diagonal, b = 0.05 / 0.02 / 0.005, n = 100 / 1000 / 10000]


Transformed kernel idea (for copulas)

$[0, 1] \times [0, 1] \to \mathbb{R} \times \mathbb{R} \to [0, 1] \times [0, 1]$

[Figure: Frank copula density and its transformed kernel estimates]

[Figure: transformed kernel estimator (Gaussian) on the diagonal, n = 100, 1000 and 10000]

see Geenens, C. & Paindaveine (2014) for more details on the probit transformation for copulas.


Combining the two approaches

See Devroye & Györfi (1985) and Devroye & Lugosi (2001)... use the transformed kernel the other way round, $\mathbb{R} \to [0, 1] \to \mathbb{R}$

Devroye & Györfi (1985) - Devroye & Lugosi (2001)

Interesting point: the optimal transformation $T$ should be $F$ itself; thus, $T$ can be taken to be a parametric estimate $\widehat{F}_{\theta}$.


Heavy Tailed distribution

Let X denote a (heavy-tailed) random variable with tail index $\alpha \in (0, \infty)$, i.e. $\mathbb{P}(X > x) = x^{-\alpha} L_1(x)$ where $L_1$ is some slowly varying function. Let $T$ denote an $\mathbb{R} \to [0, 1]$ function such that $1 - T$ is regularly varying at infinity, with tail index $\beta \in (0, \infty)$. Define $Q(x) = T^{-1}(1 - x^{-1})$, the associated tail quantile function; then $Q(x) = x^{1/\beta} L_2^{\star}(1/x)$, where $L_2^{\star}$ is some slowly varying function (the de Bruijn conjugate of the slowly varying function associated with $T$). Assume here that $Q(x) = b\, x^{1/\beta}$. Let $U = T(X)$. Then, as $u \to 1$, $\mathbb{P}(U > u) \sim (1 - u)^{\alpha/\beta}$.
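The tail relation can be illustrated by simulation. A sketch where X is Pareto($\alpha$) and $T$ is taken to be the Pareto($\beta$) cdf, in which case $\mathbb{P}(U > u) = (1-u)^{\alpha/\beta}$ holds exactly, not just asymptotically (the parameter values are illustrative):

```python
import random

alpha, beta = 1.5, 2.0            # tail index of X, and of 1 - T
T = lambda x: 1 - x ** (-beta)    # Pareto(beta) cdf on [1, infinity)

random.seed(7)
X = [random.random() ** (-1 / alpha) for _ in range(200_000)]  # Pareto(alpha) by inversion
U = [T(x) for x in X]

u = 0.99
empirical = sum(1 for v in U if v > u) / len(U)
theoretical = (1 - u) ** (alpha / beta)
print(round(empirical, 4), round(theoretical, 4))
```

With $\alpha/\beta = 0.75 < 1$, the transformed sample still has an unbounded density near 1, which is why the tail index of $T$ matters so much on the next slides.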

see C. & Oulidi (2007), $\alpha = 0.75^{-1}$: transformations $T_{0.75^{-1}}$, $T_{0.65^{-1}}$ (lighter tail), $T_{0.85^{-1}}$ (heavier tail) and $T_{\widehat{\alpha}}$

[Figure: densities of the four transformed samples on [0,1]]

see C. & Oulidi (2007), impact on quantile estimation?

[Figure: distribution of the quantile estimates]

see C. & Oulidi (2007), impact on quantile estimation? bias? m.s.e.?


Which transformation?

GB2: $t(y;\ a, b, p, q) = \dfrac{|a|\, y^{ap - 1}}{b^{ap}\, B(p, q)\, [1 + (y/b)^a]^{p+q}}$, for $y > 0$,

with its special and limiting cases:

  • GB2 → GG ($q \to \infty$), Beta2 ($a = 1$), Singh-Maddala ($p = 1$), Dagum ($q = 1$)
  • GG → Lognormal ($a \to 0$), Gamma ($a = 1$), Weibull ($p = 1$)
  • Beta2 → Gamma ($q \to \infty$); SM → Weibull ($q \to \infty$) and Champernowne ($q = 1$); Dagum → Champernowne ($p = 1$)


Estimating a density on R+

  • Stay on $\mathbb{R}_+$: work directly on the $x_i$'s
  • Get on $[0, 1]$: $u_i = T_{\widehat{\theta}}(x_i)$ (distribution as uniform as possible), then use
    - Beta kernels on the $u_i$'s
    - mixtures of Beta distributions on the $u_i$'s
    - Bernstein polynomials on the $u_i$'s
  • Get on $\mathbb{R}$: use standard kernels (e.g. Gaussian)
    - on $x_i^{\star} = \log(x_i)$
    - on $x_i^{\star} = \mathrm{BoxCox}_{\lambda}(x_i)$
    - on $x_i^{\star} = \Phi^{-1}[T_{\widehat{\theta}}(x_i)]$
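The last route can be sketched end to end: fit a parametric $T_{\widehat\theta}$ (here, hypothetically, a lognormal fitted by the moments of $\log x$), probit-transform the sample, run a Gaussian kernel on the transformed values, and map the estimate back by the change-of-variables formula:

```python
import math, random
from statistics import NormalDist

N01 = NormalDist()

random.seed(3)
x = [random.lognormvariate(0.0, 0.5) for _ in range(2000)]

# hypothetical parametric step: fit a lognormal T_theta by moments of log x
logs = [math.log(v) for v in x]
mu = sum(logs) / len(logs)
sig = math.sqrt(sum((l - mu) ** 2 for l in logs) / (len(logs) - 1))
T = lambda v: N01.cdf((math.log(v) - mu) / sig)          # fitted cdf T_theta

# probit transform, then a Gaussian kernel on the z_i's
z = [N01.inv_cdf(T(v)) for v in x]
h = 1.06 * len(z) ** (-1 / 5)    # Silverman; the sd of z is 1 by construction

def f_hat(v):
    zv = N01.inv_cdf(T(v))
    kde = sum(N01.pdf((zv - zi) / h) for zi in z) / (len(z) * h)
    dT = N01.pdf((math.log(v) - mu) / sig) / (sig * v)    # density of T_theta
    return kde * dT / N01.pdf(zv)   # back-transform (change of variables)

print(round(f_hat(1.0), 3))
```

The back-transform uses $\widehat f_X(x) = \widehat f_Z(z)\, T'(x)/\varphi(z)$ with $z = \Phi^{-1}(T(x))$; when $T_{\widehat\theta}$ is close to the true $F$, the transformed sample is close to standard normal and the kernel step is easy.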


Beta kernel

$\widehat{g}(u) = \frac{1}{n} \sum_{i=1}^n b\!\left(u;\ \frac{U_i}{h},\ \frac{1 - U_i}{h}\right), \quad u \in [0, 1],$

with some possible boundary correction, as suggested in Chen (1999),

$\frac{u}{h} \to \rho(u, h) = 2h^2 + 2.5 - \left(4h^4 + 6h^2 + 2.25 - u^2 - \frac{u}{h}\right)^{1/2}$

Problem: choice of the bandwidth $h^{\star}$? Standard loss function

$L(h) = \int [\widehat{g}_n(u) - g(u)]^2\, du = \int [\widehat{g}_n(u)]^2\, du - 2 \int \widehat{g}_n(u)\, g(u)\, du + \int [g(u)]^2\, du$

where the first two terms are estimated by

$CV(h) = \int [\widehat{g}_n(u)]^2\, du - \frac{2}{n} \sum_{i=1}^n \widehat{g}_{(-i)}(U_i)$
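The cross-validation criterion can be sketched by evaluating $\int \widehat g_n^2$ on a midpoint grid and computing the leave-one-out term directly (the bandwidth grid and sample are illustrative; the Beta kernel follows the slide's parametrisation):

```python
import math, random

def beta_pdf(u, a, b):
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_B)

def g_hat(u, data, h):
    # beta kernel estimator, parametrised as on the slide
    return sum(beta_pdf(u, ui / h, (1 - ui) / h) for ui in data) / len(data)

def cv(h, data, m=200):
    """CV(h) = int g_n^2 - (2/n) sum_i g_(-i)(U_i); integral on a midpoint grid."""
    int_g2 = sum(g_hat((j + 0.5) / m, data, h) ** 2 for j in range(m)) / m
    n = len(data)
    loo = sum(g_hat(ui, [v for v in data if v is not ui], h)   # leave one out
              for ui in data) / n
    return int_g2 - 2 * loo

random.seed(5)
data = [random.betavariate(2, 2) for _ in range(200)]
scores = {h: cv(h, data) for h in (0.02, 0.05, 0.1, 0.2)}
h_star = min(scores, key=scores.get)
print(h_star)
```

Minimizing $CV(h)$ over the grid is equivalent, up to the unknown constant $\int g^2$, to minimizing the integrated squared error $L(h)$.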


Mixture of Beta distributions

$\widehat{g}(u) = \sum_{j=1}^k \pi_j \cdot b(u;\ \alpha_j, \beta_j), \quad u \in [0, 1].$

Problem: choice of the number of components k (and estimation...). Use of a stochastic EM algorithm (or some variant of it), see Celeux & Diebolt (1985).

Bernstein approximation

$\widehat{g}(u) = \sum_{k=1}^m \omega_k \cdot b(u;\ k,\ m - k + 1), \quad u \in [0, 1], \quad \text{where } \omega_k = G\!\left(\frac{k}{m}\right) - G\!\left(\frac{k - 1}{m}\right).$
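A sketch of the Bernstein approximation, with $G$ replaced by the empirical cdf $G_n$ of the transformed sample (the choice $m = 30$ is illustrative):

```python
import math, random

def beta_pdf(u, a, b):
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_B)

def bernstein_estimator(sample, m):
    """Density from the Bernstein smoothing of the empirical cdf G_n."""
    xs = sorted(sample)
    n = len(xs)
    G = lambda t: sum(1 for v in xs if v <= t) / n
    w = [G(k / m) - G((k - 1) / m) for k in range(1, m + 1)]
    return lambda u: sum(w[k - 1] * beta_pdf(u, k, m - k + 1)
                         for k in range(1, m + 1))

random.seed(9)
sample = [random.betavariate(2, 5) for _ in range(1000)]
g_hat = bernstein_estimator(sample, 30)
print(round(g_hat(0.2), 2))   # the true Beta(2, 5) density at 0.2 is about 2.46
```

Since the weights $\omega_k$ sum to one and each Beta density integrates to one, the estimate is a genuine density on $[0,1]$; the degree $m$ plays the role of an (inverse) bandwidth.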


On the log-transform

With a standard Gaussian kernel,

$\widehat{f}_X(x) = \frac{1}{n} \sum_{i=1}^n \varphi(x;\ x_i, h)$

With a Gaussian kernel on a log transform,

$\widehat{f}_X(x) = \frac{1}{x}\, \widehat{f}_{X^{\star}}(\log x) = \frac{1}{n} \sum_{i=1}^n \lambda(x;\ \log x_i, h)$

where $\lambda(\cdot;\ \mu, \sigma)$ is the density of the log-normal distribution. Here, near 0,

$\mathrm{bias}[\widehat{f}_X(x)] \sim \frac{h^2}{2}\left[f_X(x) + 3x\, f'_X(x) + x^2\, f''_X(x)\right]$ and $\mathrm{Var}[\widehat{f}_X(x)] \sim \frac{f_X(x)}{x\, n\, h}$
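The log-transform estimator is a one-liner on top of the standard kernel; a pure-Python sketch (bandwidth and sample are illustrative):

```python
import math, random

def kde_log_scale(sample, h):
    """Gaussian KDE on log(x_i), mapped back: f_X(x) = f_{X*}(log x) / x."""
    logs = [math.log(v) for v in sample]
    n = len(logs)
    c = n * h * math.sqrt(2 * math.pi)
    def f_hat(x):
        lx = math.log(x)
        return sum(math.exp(-0.5 * ((lx - l) / h) ** 2) for l in logs) / (c * x)
    return f_hat

random.seed(11)
sample = [random.lognormvariate(0, 1) for _ in range(3000)]
f_hat = kde_log_scale(sample, 0.25)
# unlike a plain Gaussian kernel, this puts no mass on the negative half-line
print(round(f_hat(0.5), 3))
```

The factor $1/x$ is the Jacobian of the log transform; it is also what inflates the variance near 0, as in the expansion above.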


On the Box-Cox-transform

More generally, instead of the transformed sample $Y_i = \log[X_i]$, consider $Y_i = \dfrac{X_i^{\lambda} - 1}{\lambda}$ when $\lambda \neq 0$. Find the optimal transformation using standard regression techniques (least squares): $X_i^{\star} = \dfrac{X_i^{\lambda^{\star}} - 1}{\lambda^{\star}}$ when $\lambda^{\star} \neq 0$ and $X_i^{\star} = \log[X_i]$ if $\lambda^{\star} = 0$. The density estimation is here

$\widehat{f}_X(x) = x^{\lambda^{\star} - 1}\ \widehat{f}_{X^{\star}}\!\left(\frac{x^{\lambda^{\star}} - 1}{\lambda^{\star}}\right)$
slide-29
SLIDE 29

Arthur CHARPENTIER - Rennes, SMART Workshop, 2014

Illustration with Log-normal samples

Standard kernel (− Silvermans’s rule h⋆)

Density 2 4 6 8 10 12 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Density 2 4 6 8 10 12 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Density 2 4 6 8 10 12 0.0 0.1 0.2 0.3 0.4 0.5 0.6

29

Log transform, $x_i^{\star} = \log x_i$ + Gaussian kernel

[Figure: density estimates on the log scale (top) and back-transformed to the original scale (bottom)]

Probit-type transform, $x_i^{\star} = \Phi^{-1}[T_{\widehat{\theta}}(x_i)]$ + Gaussian kernel

[Figure: density estimates on the transformed scale (top) and back on the original scale (bottom)]

Box-Cox transform, $x_i^{\star} = \mathrm{BoxCox}_{\lambda}(x_i)$ + Gaussian kernel

[Figure: density estimates on the transformed scale (top) and back on the original scale (bottom)]

$u_i = T_{\widehat{\theta}}(x_i)$ + mixture of Beta distributions

[Figure: density estimates on [0,1] (top) and back on the original scale (bottom)]

$u_i = T_{\widehat{\theta}}(x_i)$ + Beta kernel estimation

[Figure: density estimates on [0,1] (top) and back on the original scale (bottom)]


Quantities of interest

Standard statistical quantities

  • miae, $\int \left|\widehat{f}_n(x) - f(x)\right| dx$
  • mise, $\int \left[\widehat{f}_n(x) - f(x)\right]^2 dx$
  • weighted miae, $\int \left|\widehat{f}_n(x) - f(x)\right| |x|\, dx$
  • weighted mise, $\int \left[\widehat{f}_n(x) - f(x)\right]^2 x^2\, dx$
slide-36
SLIDE 36

Arthur CHARPENTIER - Rennes, SMART Workshop, 2014

Quantities of interest

Inequality indices and risk measures, based on $F(x) = \int_0^x f(t)\, dt$,

  • Gini, $\dfrac{1}{\mu} \int_0^{\infty} F(t)[1 - F(t)]\, dt$
  • Theil, $\int_0^{\infty} \dfrac{t}{\mu} \log \dfrac{t}{\mu}\, f(t)\, dt$
  • Entropy, $-\int_0^{\infty} f(t) \log[f(t)]\, dt$
  • VaR-quantile, x such that $F(x) = \mathbb{P}(X \leq x) = \alpha$, i.e. $F^{-1}(\alpha)$
  • TVaR-expected shortfall, $\mathbb{E}[X \mid X > F^{-1}(\alpha)]$

where $\mu = \int_0^{\infty} [1 - F(x)]\, dx$.
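Sample counterparts of these quantities are a few lines each; a sketch on raw data (the talk's simulations instead plug smoothed estimates $\widehat F_n^{(s)}$ into these formulas), using the classical order-statistics formula for the Gini index:

```python
import random

def gini(sample):
    """Sample Gini index, via the classical order-statistics formula."""
    xs = sorted(sample)
    n = len(xs)
    mu = sum(xs) / n
    s = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * s / (n * n * mu) - (n + 1) / n

def var_q(sample, alpha):
    """Empirical VaR: inf{x : alpha <= F_n(x)}."""
    xs = sorted(sample)
    return xs[min(len(xs) - 1, int(alpha * len(xs)))]

def tvar(sample, alpha):
    """Empirical expected shortfall: mean of the observations above the VaR."""
    q = var_q(sample, alpha)
    tail = [x for x in sample if x > q]
    return sum(tail) / len(tail)

random.seed(13)
sample = [random.lognormvariate(0, 1) for _ in range(50_000)]
print(round(gini(sample), 3), round(var_q(sample, 0.95), 2),
      round(tvar(sample, 0.95), 2))
```

For the standard lognormal the targets are known in closed form (Gini $= 2\Phi(\sigma/\sqrt{2}) - 1 \approx 0.52$, VaR$_{95\%} = e^{1.645} \approx 5.18$), which makes it a convenient test case.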


Computational Aspects

Here, for each method, we return two functions,

  • the estimated density $\widehat{f}_n(\cdot)$
  • a random generator for the distribution $\widehat{f}_n(\cdot)$

  • H-transform and Gaussian kernel: draw $i \in \{1, \cdots, n\}$ and $X = H^{-1}(Z)$ where $Z \sim \mathcal{N}(H(x_i), b^2)$
  • H-transform and Beta kernel: draw $i \in \{1, \cdots, n\}$ and $X = H^{-1}(U)$ where $U \sim \mathcal{B}(H(x_i)/h,\ [1 - H(x_i)]/h)$
  • H-transform and Beta mixture: draw $k \in \{1, \cdots, K\}$ and $X = H^{-1}(U)$ where $U \sim \mathcal{B}(\alpha_k, \beta_k)$
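For instance, with $H = \log$ the first generator is a smoothed bootstrap; a sketch (the bandwidth is illustrative):

```python
import math, random

def sampler_log_gaussian(sample, b):
    """Smoothed bootstrap with H = log: draw i, then X = exp(N(log x_i, b^2))."""
    def draw():
        xi = random.choice(sample)
        return math.exp(random.gauss(math.log(xi), b))
    return draw

random.seed(17)
sample = [random.lognormvariate(0, 1) for _ in range(1000)]
draw = sampler_log_gaussian(sample, 0.2)
resample = [draw() for _ in range(5000)]
print(all(v > 0 for v in resample))   # stays on the positive half-line
```

Drawing from $\widehat f_n$ this way is what allows the Monte Carlo distributions of the indices on the next slides to be computed without ever integrating $\widehat f_n$ explicitly.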

  • 'standard' Gaussian kernel (benchmark): draw $i \in \{1, \cdots, n\}$ and $X \sim \mathcal{N}(x_i, b^2)$, (almost) up to some normalization, $\cdot \mapsto \widehat{f}_n(\cdot) \Big/ \int \widehat{f}_n(x)\, dx$

[Figure: Gaussian kernel estimate of a positive sample, putting some mass on the negative half-line]

MISE

$\int \left[\widehat{f}_n^{(s)}(x) - f(x)\right]^2 dx$

[Figure: boxplots of the MISE for the eight estimators (standard kernel, log kernel, log mix, boxcox kernel, boxcox mix, probit kernel, beta kernel, beta mix), for Singh-Maddala and mixed Singh-Maddala samples]

MIAE

$\int_0^{\infty} \left|\widehat{f}_n^{(s)}(x) - f(x)\right| dx$

[Figure: boxplots of the MIAE for the eight estimators, for Singh-Maddala and mixed Singh-Maddala samples]

Gini Index

$\frac{1}{\widehat{\mu}_n^{(s)}} \int \widehat{F}_n^{(s)}(t) \left[1 - \widehat{F}_n^{(s)}(t)\right] dt$

[Figure: boxplots of the estimated Gini index for the eight estimators, for Singh-Maddala and mixed Singh-Maddala samples]

Value-at-Risk, 95%

$\widehat{Q}_n^{(s)}(\alpha) = \inf\{x,\ \alpha \leq \widehat{F}_n^{(s)}(x)\}$

[Figure: boxplots of the estimated 95% quantile for the eight estimators, for Singh-Maddala and mixed Singh-Maddala samples]

Value-at-Risk, 99%

$\widehat{Q}_n^{(s)}(\alpha) = \inf\{x,\ \alpha \leq \widehat{F}_n^{(s)}(x)\}$

[Figure: boxplots of the estimated 99% quantile for the eight estimators, for Singh-Maddala and mixed Singh-Maddala samples]

Tail Value-at-Risk, 95%

$\mathbb{E}[X \mid X > \widehat{Q}_n^{(s)}(\alpha)]$

[Figure: boxplots of the estimated 95% expected shortfall for the eight estimators, for Singh-Maddala and mixed Singh-Maddala samples]


Possible conclusion?

  • estimating densities on transformed data is definitely a good idea
  • but we need to find a good transformation

parametric + beta, parametric + probit, log-transform, Box-Cox

[Figure: two maps (longitude × latitude) of estimated density level curves, comparing transformations]

(joint work with E. Gallic)