

SLIDE 1

When uniform weak convergence fails: Empirical processes for dependence functions and residuals via epi- and hypographs

Axel Bücher, Johan Segers and Stanislav Volgushev

Université catholique de Louvain and Ruhr-Universität Bochum

Van Dantzig Seminar, Mathematical Institute, Leiden University, 11 Apr 2014

1/ 32


SLIDE 7

Motivation

Uniform convergence of bounded functions

Strong implications
◮ Implies pointwise, continuous, and Lp-convergence . . .
◮ Well-developed weak convergence theory — a great success story in mathematical statistics [Van der Vaart and Wellner (1996): Weak Convergence and Empirical Processes]
◮ Many applications through the continuous mapping theorem and the functional delta method

vs. Restricted applicability
◮ Continuous functions cannot converge uniformly to jump functions
[Figure: a sequence of continuous functions approaching a jump function on [0, 1]]
◮ Questions: Weaker metric? Weak convergence theory? Applications?

2/ 32

SLIDE 8

Empirical processes via epi- and hypographs

◮ The empirical copula process
◮ Weak convergence with respect to the uniform metric
◮ Non-smooth copulas: when weak convergence fails
◮ The hypi-semimetric and weak convergence
◮ Applications

3/ 32

SLIDE 9

Section: The empirical copula process

4/ 32


SLIDE 13

Copulas

◮ A d-variate copula C is a d-variate distribution function with uniform (0, 1) margins.

◮ Sklar's (1959) theorem: If F is a d-variate distribution function with margins F1, ..., Fd, then there exists a copula C such that F(x1, ..., xd) = C{F1(x1), ..., Fd(xd)}.

◮ Moreover, if the margins are continuous, then C is unique and is given by the distribution function of (F1(X1), ..., Fd(Xd)), with (X1, ..., Xd) ∼ F:

C(u1, ..., ud) = P[F1(X1) ≤ u1, ..., Fd(Xd) ≤ ud]
              = P[X1 ≤ F1−(u1), ..., Xd ≤ Fd−(ud)]
              = F{F1−(u1), ..., Fd−(ud)}

with Fj−(u) = inf{x : Fj(x) ≥ u} the generalized inverse (quantile function).

◮ Usage: Modelling dependence between the components X1, ..., Xd, irrespective of their marginal distributions.

5/ 32


SLIDE 17

The empirical copula

◮ Situation: (Xi)i=1,...,n i.i.d. random vectors, Xi ∼ F = C(F1, ..., Fd), continuous marginals Fj. [Hence C(u) = F{F1−(u1), ..., Fd−(ud)} with the generalized inverse Fj−(u) = inf{x : Fj(x) ≥ u}.]

◮ Goal: Estimate C nonparametrically.

◮ Simple plug-in estimation: the empirical cdfs

Fn(x) := (1/n) Σ_{i=1}^n 1(Xi1 ≤ x1, ..., Xid ≤ xd),   Fnj(xj) := (1/n) Σ_{i=1}^n 1(Xij ≤ xj)

yield the empirical copula

Cn(u) = Fn{Fn1−(u1), ..., Fnd−(ud)}
      = n⁻¹ Σ_{i=1}^n 1{Xi1 ≤ Fn1−(u1), ..., Xid ≤ Fnd−(ud)}
      = n⁻¹ Σ_{i=1}^n 1(Ûi1 ≤ u1, ..., Ûid ≤ ud) + O(n⁻¹)

[where Ûij = rank(Xij)/n are 'pseudo-observations' of C (rescaled ranks)]

6/ 32
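The rank-based formula above is easy to sketch in code. A minimal Python illustration (the helper name `empirical_copula` and the correlated Gaussian test data are our choices, not from the talk):

```python
import numpy as np

def empirical_copula(X, u):
    """Rank-based empirical copula C_n evaluated at points u.

    X : (n, d) data matrix; u : (m, d) evaluation points in [0, 1]^d.
    Uses the pseudo-observations U_hat[i, j] = rank(X[i, j]) / n.
    """
    n, d = X.shape
    # ranks 1..n per column via double argsort (assumes no ties)
    U_hat = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n
    # C_n(u) = n^{-1} #{i : U_hat[i, :] <= u componentwise}
    return np.array([(U_hat <= ui).all(axis=1).mean() for ui in np.atleast_2d(u)])

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 2))
X = np.column_stack([Z[:, 0], 0.5 * Z[:, 0] + np.sqrt(0.75) * Z[:, 1]])  # correlation 0.5
print(empirical_copula(X, np.array([[0.5, 0.5], [1.0, 1.0]])))  # last entry is exactly 1
```

Because Cn depends on the data only through the ranks, the same function works for any continuous margins.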


SLIDE 22

The empirical copula process

u ↦ ℂn(u) = √n {Cn(u) − C(u)} ∈ ℓ∞([0, 1]d) is called the empirical copula process. [ℓ∞([0, 1]d) is the space of bounded functions on [0, 1]d.]

Many applications.

◮ Testing for structural assumptions. Example: symmetry [Genest, Nešlehová, Quessy (2012)]. Null hypothesis: C(u, v) = C(v, u) for all u, v.

Tn = n ∫∫ {Cn(u, v) − Cn(v, u)}² du dv,  which under H0 equals  ∫∫ {ℂn(u, v) − ℂn(v, u)}² du dv

◮ Minimum-distance estimators of parametric copulas [Tsukahara (2005)]. {Cθ | θ ∈ Θ} a class of parametric candidate models. Estimator:

θ̂ := argmin_θ ∫∫ {Cθ(u, v) − Cn(u, v)}² du dv

◮ Goodness-of-fit tests, asymptotics of estimators for the Pickands dependence function, ...

Derivation of asymptotic distributions: process convergence of ℂn

7/ 32
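Under H0 the symmetry statistic Tn can be approximated on a grid. A minimal Python sketch (the function name and the Riemann approximation of the double integral are our choices):

```python
import numpy as np

def symmetry_statistic(X, grid_size=50):
    """Cramér–von Mises-type symmetry statistic:
    T_n ≈ n * grid average of {C_n(u, v) - C_n(v, u)}^2,
    with C_n the rank-based empirical copula."""
    n, d = X.shape
    assert d == 2
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n  # pseudo-observations
    g = np.arange(1, grid_size + 1) / grid_size
    uu, vv = np.meshgrid(g, g)
    C = lambda a, b: ((U[:, 0][:, None] <= a.ravel()) &
                      (U[:, 1][:, None] <= b.ravel())).mean(axis=0)
    diff = C(uu, vv) - C(vv, uu)
    return n * np.mean(diff ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))  # exchangeable components: H0 holds
print(symmetry_statistic(X))
```

For exchangeable data as above the statistic stays small; large values indicate asymmetry of the underlying copula.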

SLIDE 23

Section: Weak convergence with respect to the uniform metric

8/ 32

SLIDE 24

Key quantities

Vector of quantile functions:

F−(u) = (F1−(u1), ..., Fd−(ud)),   Fn−(u) = (Fn,1−(u1), ..., Fn,d−(ud))

Copula and empirical copula:

C(u) = F{F−(u)},   Cn(u) = Fn{Fn−(u)}

Empirical process:

αn(x) = √n {Fn(x) − F(x)}

9/ 32

SLIDE 25

Standard empirical process theory

◮ Since the empirical copula is rank-based, we can assume without loss of generality that the margins are uniform, hence F = C.

◮ Classical empirical process theory yields

αn(u) = √n {Fn(u) − C(u)} ⇝ BC(u)   in (ℓ∞([0, 1]d), ‖·‖∞),

where BC is a C-Brownian bridge.

◮ The Bahadur–Kiefer theorem links the empirical quantile and distribution functions:

√n {Fn,j−(uj) − uj} = −√n {Fn,j(uj) − uj} + oP(1) ⇝ −BC,j(uj), where BC,j(uj) = BC(1, ..., 1, uj, 1, ..., 1)

10/ 32
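The Bahadur–Kiefer duality is easy to verify numerically for a uniform sample. A sketch (sample size, seed and evaluation grid are our choices):

```python
import numpy as np

# Numerical check of sqrt(n){F_n^-(u) - u} = -sqrt(n){F_n(u) - u} + o_P(1)
# for an i.i.d. uniform(0, 1) sample.
rng = np.random.default_rng(2)
n = 100_000
U = np.sort(rng.uniform(size=n))
u = np.linspace(0.05, 0.95, 181)

F_n = np.searchsorted(U, u, side="right") / n                 # empirical cdf at u
idx = np.minimum((np.ceil(n * u) - 1).astype(int), n - 1)
Q_n = U[idx]                                                  # F_n^-(u) = U_(ceil(n u))
lhs = np.sqrt(n) * (Q_n - u)
rhs = -np.sqrt(n) * (F_n - u)
print(np.max(np.abs(lhs - rhs)))  # much smaller than the O(1) size of each process
```

Both sides fluctuate on the scale of a Brownian bridge, while their difference shrinks at the Bahadur–Kiefer rate n^{-1/4} (up to log factors).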

SLIDE 26

Decomposition of the empirical copula process

Fundamental decomposition:

ℂn(u) = √n {Cn(u) − C(u)}
      = √n [Fn{Fn−(u)} − F{F−(u)}]
      = √n [Fn{Fn−(u)} − F{Fn−(u)}] + √n [F{Fn−(u)} − F{F−(u)}]

Recall F = C (uniform margins). We find

ℂn(u) = αn{Fn−(u)} + √n [C{Fn−(u)} − C(u)]

Treat each of the two terms separately:

αn{Fn−(u)} = αn(u) + oP(1)

√n [C{Fn−(u)} − C(u)] = Σ_{j=1}^d Ċj(u) √n {Fn,j−(uj) − uj} + oP(1)

11/ 32


SLIDE 28

Weak convergence of the empirical copula process in the topology of uniform convergence

Theorem [Weak uniform convergence of ℂn]
Suppose that

(S1) Ċj = ∂C/∂uj exists and is continuous for u ∈ [0, 1]d with uj ∈ (0, 1).

Then, in (ℓ∞([0, 1]d), ‖·‖∞),

√n (Cn − C)(u) ⇝ ℂC(u) := BC(u) − Σ_{j=1}^d Ċj(u) BC,j(uj),

where BC is a C-Brownian bridge and BC,j(uj) = BC(1, ..., 1, uj, 1, ..., 1).

Discussion
◮ Dating back to Rüschendorf (1976), Gaenssler and Stute (1987)
◮ Assumption (S1) due to Segers (2012)
◮ Possible relaxation: stationary and short-range dependent observations instead of i.i.d.

12/ 32

SLIDE 29

Section: Non-smooth copulas: when weak convergence fails

13/ 32


SLIDE 32

'Non-smooth' copulas: examples

◮ (Un)fortunately: The assumption

(S1) Ċj exists and is continuous for u ∈ [0, 1]d with uj ∈ (0, 1)

is satisfied by many, but not by all, interesting copulas.

◮ Example: C(u) := λ u1 u2 + (1 − λ) min(u1, u2). Here

Ċ1(u) = λ u2 + (1 − λ) 1{u1 < u2},   Ċ2(u) = λ u1 + (1 − λ) 1{u1 > u2}   for u1 ≠ u2,

and the partial derivatives do not exist for u1 = u2.

[Figure: contour plot of C on [0, 1]²]

◮ Other examples:
◮ Extreme-value copulas with non-differentiable Pickands dependence function
◮ Marshall–Olkin copulas
◮ Archimedean copulas with non-smooth generators
◮ ...

14/ 32
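Sampling from this example is straightforward, since C is a mixture of the independence copula (weight λ) and the comonotone copula M(u1, u2) = min(u1, u2) (weight 1 − λ). A Python sketch (the function name is ours):

```python
import numpy as np

def sample_mixture_copula(n, lam, rng):
    """Draw n pairs from C(u1, u2) = lam*u1*u2 + (1 - lam)*min(u1, u2):
    with probability lam an independent pair, with probability 1 - lam
    a comonotone pair (V = U)."""
    U = rng.uniform(size=n)
    V_indep = rng.uniform(size=n)
    comonotone = rng.uniform(size=n) >= lam
    V = np.where(comonotone, U, V_indep)
    return np.column_stack([U, V])

rng = np.random.default_rng(3)
X = sample_mixture_copula(100_000, lam=0.5, rng=rng)
# C(0.5, 0.5) = 0.5 * 0.25 + 0.5 * 0.5 = 0.375; the empirical frequency agrees
print(((X[:, 0] <= 0.5) & (X[:, 1] <= 0.5)).mean())
```

The comonotone component puts mass on the diagonal, which is exactly where the partial derivatives of C fail to exist.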


SLIDE 35

Non-smooth copulas: pointwise vs. functional weak convergence

◮ Pointwise limit for the previous example:

ℂn(u) ⇝ ℂ*C(u) = BC(u) − Ċ1(u) BC(u1, 1) − Ċ2(u) BC(1, u2)   apart from the diagonal, and

ℂn(u) ⇝ ℂ*C(u) = BC(u) − λu {BC(u, 1) + BC(1, u)} − (1 − λ) max{BC(u, 1), BC(1, u)}   on the diagonal (u1 = u2 = u).

◮ Question: Can we have ℂn ⇝ ℂ*C in (ℓ∞([0, 1]²), ‖·‖∞)?

◮ Answer: Lemma [Bücher, Segers, Volgushev, 2013]: If ℂn converges weakly with respect to ‖·‖∞, then the limit must have continuous trajectories, a.s. This is not the case here!

15/ 32


SLIDE 37

Lack of uniform convergence

[Figure: three panels over the window [0.48, 0.52] × {0.5}.]

◮ Left: sample paths of the candidate limit process (based on n = 100,000) on [0.48, 0.52] × {0.5}.

◮ Middle and right: 'typical realizations' of the empirical copula process, n = 10,000 and n = 100,000.

Suggestion: Weak convergence may hold with respect to a metric for which jump functions can be 'close' to continuous functions. Generalize Skorohod's M2 metric.

16/ 32

SLIDE 38

Section: The hypi-semimetric and weak convergence

17/ 32


SLIDE 41

Painlevé–Kuratowski convergence

Sequence of sets An in a metric space (T, d):

lim inf_{n→∞} An = {x ∈ T | ∃ xn ∈ An : xn → x}
lim sup_{n→∞} An = {x ∈ T | ∃ a subsequence x_{n_k} ∈ A_{n_k} : x_{n_k} → x}

Painlevé–Kuratowski convergence: An → A if A = lim inf_{n→∞} An = lim sup_{n→∞} An.

Properties:
◮ Necessarily, A is closed.
◮ An → A iff cl(An) → A.
◮ Metrizable if (T, d) is locally compact and separable: Fell topology.
◮ If (T, d) is compact, then PK convergence is convergence in the Hausdorff metric.

18/ 32


SLIDE 43

Introducing hypi-convergence

◮ Epi- and hypograph of a function f ∈ ℓ∞([0, 1]d):

epi f := {(u, t) ∈ [0, 1]d × R | f(u) ≤ t}
hypo f := {(u, t) ∈ [0, 1]d × R | f(u) ≥ t}

◮ The hypi-semimetric is defined as

dhypi(f, g) = max{dF(cl(epi f), cl(epi g)), dF(cl(hypo f), cl(hypo g))},

where dF is a metric on closed sets inducing Painlevé–Kuratowski convergence.

[Figure: two functions together with their epigraphs and hypographs]

19/ 32



SLIDE 47

Pointwise criteria for hypi-convergence

Define the lower and upper semicontinuous hulls of f:

f∧(x) = sup_{ε>0} inf{f(x′) : ‖x′ − x‖ < ε}
f∨(x) = inf_{ε>0} sup{f(x′) : ‖x′ − x‖ < ε}

Then dhypi(fn, f) → 0 iff the following two conditions hold:

1. f∧ and f∨ provide asymptotic bounds for fn:

∀ x ∈ [0, 1]d, ∀ xn → x : f∧(x) ≤ lim inf_{n→∞} fn(xn) ≤ lim sup_{n→∞} fn(xn) ≤ f∨(x)

2. f∧ and f∨ are asymptotically attainable by fn:

∀ x ∈ [0, 1]d : ∃ xn → x with lim inf_{n→∞} fn(xn) = f∧(x), and ∃ xn → x with lim sup_{n→∞} fn(xn) = f∨(x)

21/ 32
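Criterion 1 (the asymptotic sandwich) can be illustrated with the motivating example of steep ramps converging to a jump function. A discretized sketch: the grid, the example, and the choice of a single small ε in place of the sup/inf over ε are ours.

```python
import numpy as np

# f_n(x) = clip(n (x - 0.5), 0, 1) stays uniformly far from f = 1{x > 0.5},
# yet is eventually squeezed between the hulls: f_wedge <= f_n <= f_vee.
x = np.linspace(0.0, 1.0, 2001)
f = (x > 0.5).astype(float)
eps = 0.01

f_wedge = np.array([f[np.abs(x - xi) < eps].min() for xi in x])  # lsc hull (approx.)
f_vee = np.array([f[np.abs(x - xi) < eps].max() for xi in x])    # usc hull (approx.)

for n in (4, 100):
    fn = np.clip(n * (x - 0.5), 0.0, 1.0)
    unif = np.max(np.abs(fn - f))       # stays near 1: no uniform convergence
    below = np.max(f_wedge - fn)        # > 0 means the sandwich is violated
    above = np.max(fn - f_vee)
    print(n, unif, below, above)
```

For n = 4 the sandwich is still violated near the jump, while for n = 100 (ramp width below ε) both violations vanish even though the uniform distance remains close to 1.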


SLIDE 51

Hypi-convergence: Useful at all?

Theorem [Handy implications of hypi-convergence]
Let fn, f ∈ ℓ∞([0, 1]d) and dhypi(fn, f) → 0.

◮ Let µ be a finite measure on [0, 1]d. If µ(discontinuity points of f) = 0, then ‖fn − f‖_{Lp(µ)} → 0 for any p ∈ [1, ∞).
◮ sup fn → sup f and inf fn → inf f.
◮ If f is continuous at x, then fn(xn) → f(x) whenever xn → x. Also uniformly over compact sets.

Interpretation: dhypi is 'between' ‖·‖∞ and ‖·‖p with p < ∞. It adapts to the regularity of the limit function.

22/ 32


SLIDE 55

Comments on hypi-convergence

◮ hypi = epi + hypo:

dhypi(fn, f) → 0 ⟺ fn epi-converges to f∧ (i.e., epi fn → epi f∧) and fn hypo-converges to f∨ (i.e., hypo fn → hypo f∨).

Epi- and hypo-convergence have a long history in the analysis of minimizers and maximizers of functions (Rockafellar & Wets 1998, Molchanov 2005).

◮ dhypi only defines a semimetric:

dhypi(f, g) = 0 ⟺ f∧ = g∧ and f∨ = g∨.

Care must be taken when considering weak convergence.

◮ Addition is not continuous! Extra work is needed to deal with convergence of sums.
◮ Can be generalized to functions on locally compact, separable metric spaces.

23/ 32

SLIDE 56

Weak hypi-convergence of the empirical copula process

Theorem [Bücher, Segers, Volgushev, 2013]
Let D(C) := {u ∈ [0, 1]d | Ċj(u) does not exist or is not continuous for some 1 ≤ j ≤ d} and suppose that

(S2) D(C) is a Lebesgue-null set.

Then

[ℂn]dhypi = [√n (Cn − C)]dhypi ⇝ [ℂC]dhypi   in (L∞([0, 1]d), dhypi),

where ℂC(u) = BC(u) + dC_{(−BC,1, ..., −BC,d)}(u) and where, for a = (a1, ..., ad) with aj : [0, 1] → R continuous,

dC_a(u) = lim_{ε→0} inf{ Σ_{j=1}^d Ċj(v) aj(vj) : v ∈ [0, 1]d \ D(C), |v − u| < ε }.

◮ Recall (S1): Ċj(u) exists and is continuous for u with uj ∈ (0, 1).
◮ Recall ℂC(u) := BC(u) − Σ_{j=1}^d Ċj(u) BC,j(uj) under (S1).

24/ 32


SLIDE 59

Consequences of hypi-convergence of the empirical copula process

Consequences for ℂn through the continuous mapping theorem:

◮ Hypi-convergence implies uniform convergence if the limit is continuous
⇒ retrieve the usual weak convergence result under (S1)

◮ Hypi-convergence implies Lp-convergence for p < ∞
⇒ weak convergence with respect to Lp ⇒ Cramér–von Mises-type statistics

◮ Hypi-convergence implies convergence of infima and suprema
⇒ weak convergence of Kolmogorov–Smirnov statistics

25/ 32

SLIDE 60

Section: Applications

26/ 32


SLIDE 63

Comparing test statistics via local power curves

◮ Test for H0 : C = C0, where C0 is a given copula (e.g., C0 = Π, the independence copula).

◮ Two competing test statistics:

Sn = n ∫ {Cn − C0}² dΠ   (Cramér–von Mises)
Tn = ‖√n (Cn − C0)‖∞   (Kolmogorov–Smirnov)

◮ Comparing the quality of tests: local power curves. How well does a test detect alternatives that converge to the null hypothesis?

27/ 32
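Both statistics can be approximated from data on a grid. A minimal sketch for C0 = Π (the function name and the Riemann approximation of the dΠ-integral are our choices):

```python
import numpy as np

def gof_statistics(X, C0, grid_size=40):
    """Cramér–von Mises S_n and Kolmogorov–Smirnov T_n for H0: C = C0,
    with C_n the rank-based empirical copula evaluated on a grid."""
    n, d = X.shape
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n  # pseudo-observations
    axes = [np.arange(1, grid_size + 1) / grid_size] * d
    grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, d)
    Cn = (U[:, None, :] <= grid[None, :, :]).all(axis=2).mean(axis=0)
    diff = Cn - np.apply_along_axis(C0, 1, grid)
    S_n = n * np.mean(diff ** 2)           # grid approximation of n ∫ (C_n - C0)^2 dΠ
    T_n = np.sqrt(n) * np.max(np.abs(diff))
    return S_n, T_n

indep = lambda u: np.prod(u)               # C0 = Π, the independence copula
rng = np.random.default_rng(4)
X = rng.uniform(size=(300, 2))             # data drawn under H0
print(gof_statistics(X, indep))
```

Critical values for either statistic would be obtained from the limit distribution (or a resampling scheme); the sketch only computes the observed values.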


SLIDE 66

Local power curves of simple goodness-of-fit tests

◮ Local alternatives in direction Λ: Let (Xi(n))i=1,...,n be row-wise i.i.d. with copula C(n). Assume ∆n = √n (C(n) − C0) → δΛ uniformly, with δ > 0 and Λ ≢ 0.

◮ Proposition. If C0 satisfies (S2), then

√n (Cn − C0) ⇝ ℂC0 + δΛ   in (L∞([0, 1]d), dhypi).

Consequence: limit distributions of the test statistics under the local alternatives:

Sn ⇝ Sδ = ∫ {ℂC0 + δΛ}² dΠ,   Tn ⇝ Tδ = ‖ℂC0 + δΛ‖∞

◮ Local power curves in direction Λ: 'δ ↦ asymptotic power(δ)' at significance level α:

δ ↦ Pr{Sδ > qS0(1 − α)},   δ ↦ Pr{Tδ > qT0(1 − α)}

28/ 32


SLIDE 69

Minimum L2-distance estimators à la Tsukahara

◮ Let {Cθ | θ ∈ Θ ⊂ Rp} be a class of parametric candidate models. Estimator:

θ̂ := argmin_θ ∫ (Cθ − Cn)² dµ

Proposition (Asymptotic normality of θ̂): Suppose that (S2) holds and that µ(D(C)) = 0. Under the usual regularity conditions on the model:

(i) for correctly specified models (θ0 the 'true' parameter):

√n (θ̂ − θ0) ⇝ [∫ ∇Cθ0 ∇Cθ0ᵀ dµ]⁻¹ ∫ ∇Cθ0 ℂC dµ,

(ii) for incorrectly specified models:

√n (θ̂ − θ0) ⇝ [∫ {∇Cθ0 ∇Cθ0ᵀ + (Cθ0 − C) Jθ0} dµ]⁻¹ ∫ ∇Cθ0 ℂC dµ,

where θ0 = argmin_θ ∫ (Cθ − C)² dµ.

29/ 32
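A toy minimum-distance fit, using the one-parameter mixture family from Slide 32 as the candidate model and a uniform grid measure in place of µ (the whole setup, including names and grid sizes, is our illustration, not Tsukahara's construction):

```python
import numpy as np

# Fit C_lam(u1, u2) = lam*u1*u2 + (1 - lam)*min(u1, u2) by minimizing the
# discretized L2 distance between C_lam and the empirical copula C_n.
rng = np.random.default_rng(5)
n, true_lam = 2000, 0.3
U1 = rng.uniform(size=n)
V = np.where(rng.uniform(size=n) < true_lam, rng.uniform(size=n), U1)
X = np.column_stack([U1, V])

Uh = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n  # pseudo-observations
g = np.arange(1, 41) / 41
uu, vv = np.meshgrid(g, g)
Cn = ((Uh[:, 0][:, None] <= uu.ravel()) &
      (Uh[:, 1][:, None] <= vv.ravel())).mean(axis=0)

def C_theta(lam):
    return lam * uu.ravel() * vv.ravel() + (1 - lam) * np.minimum(uu.ravel(), vv.ravel())

lams = np.linspace(0.0, 1.0, 501)
lam_hat = lams[np.argmin([np.mean((C_theta(l) - Cn) ** 2) for l in lams])]
print(lam_hat)  # close to true_lam = 0.3, up to sampling error
```

Note that this target copula has non-smooth partial derivatives on the diagonal, so the hypi-based asymptotics of the proposition (rather than the classical uniform theory under (S1)) are the relevant ones here.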


SLIDE 71

Beyond copulas. . .

Helpful for different problems?

◮ The hypi-semimetric can be defined for real-valued, locally bounded functions on a compact, separable, metrizable domain.

◮ Might help whenever a (pointwise) candidate limit has discontinuities that are not exactly matched at finite n. Example: empirical processes of residuals (measurement error in the ordinates).

30/ 32

SLIDE 72

Conclusion

◮ Weak convergence w.r.t. the topology of uniform convergence: a great success story in mathematical statistics.
◮ Occasionally, it fails: continuous functions cannot converge to functions with jumps.
◮ Alternative: weak convergence with respect to a new topology, hypi = epi ∩ hypo:
◮ implies uniform convergence for continuous limits
◮ implies convergence of infima and suprema
◮ adapts to limit functions with jumps
◮ stronger than Lp convergence
◮ Potentially useful for empirical processes based on estimated data. Examples: empirical copula processes, empirical processes of regression residuals.

31/ 32

SLIDE 73

Thank you!

A. Bücher, J. Segers & S. Volgushev (2013). When uniform weak convergence fails: Empirical processes for dependence functions via epi- and hypographs. Submitted for publication, arXiv:1305.6408.

32/ 32