SLIDE 1

Estimation of the self-similarity and the stability indices through negative power variations

Thi To Nhu DANG, joint work with Jacques ISTAS

Jean Kuntzmann Laboratory, Université Grenoble Alpes

Colloque JPS, 2016

SLIDE 2

Introduction Main results Conclusion

Outline

1. Introduction
   - State of the art
   - Preliminary
2. Main results
   - H-sssi, SαS-stable random processes: settings and assumptions; estimation of H and α; examples
   - H-sssi, SαS-stable random fields: settings; results and examples
   - Multifractional stable processes
3. Conclusion


SLIDE 4

State of the art

Self-similar processes are important in probability: they are connected to limit theorems, are of great interest in modeling, and appear in geophysics, hydrology, turbulence, economics... Stable distributions are the only distributions that can be obtained as limits of normalized sums of i.i.d. random variables.
SLIDES 5-7

State of the art

Let a = (a_0, ..., a_K), K, L ∈ N, be such that, for q = 0, ..., L,

    Σ_{k=0}^{K} k^q a_k = 0,    Σ_{k=0}^{K} k^{L+1} a_k ≠ 0,

e.g. K = 2, L = 1: (a_0, a_1, a_2) = (−1, 2, −1). The increments of the process X with respect to a are defined by

    △_{p,n}X = Σ_{k=0}^{K} a_k X((k + p)/n).    (1)

A usual statistical tool is the φ-variations:

    V_n(φ, X) = (1/(n − K + 1)) Σ_{p=0}^{n−K} φ(|△_{p,n}X|).
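As a concrete illustration of the increments (1) and the φ-variations, here is a minimal Python sketch (the names `increments` and `variations` are ours, not from the talk):

```python
import numpy as np

def increments(path, a):
    """Generalized increments of a sampled path.

    path: array of X(j/n), j = 0..n, on a regular grid of [0, 1].
    a:    filter (a_0, ..., a_K) satisfying the vanishing-moment conditions.
    Returns the array (Delta_{p,n} X)_{p=0..n-K} with
    Delta_{p,n} X = sum_{k=0}^{K} a_k X((k+p)/n).
    """
    a = np.asarray(a, dtype=float)
    path = np.asarray(path, dtype=float)
    K = len(a) - 1
    n = len(path) - 1
    return sum(a[k] * path[k:k + n - K + 1] for k in range(K + 1))

def variations(path, a, phi):
    """V_n(phi, X) = (1/(n-K+1)) * sum_p phi(|Delta_{p,n} X|)."""
    return np.mean(phi(np.abs(increments(path, a))))
```

With the filter a = (−1, 2, −1) (K = 2, L = 1), △_{p,n}X is a second difference; for the deterministic path X(t) = t² every increment equals −2/n², so V_n(φ, X) = φ(2/n²), which gives a quick sanity check.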

SLIDES 8-12

State of the art

- For an fBm with finite variance, generalized quadratic variations (φ(x) = x²) are used ([Istas1997]).
- Wavelets: the increments of the process X are replaced by wavelet coefficients ([Bardet2010], [Lacaux2007], [Cohen2013]).
- p-variations (φ(x) = x^p, 0 < p < α) are used for fBm and for other H-sssi processes with infinite variance (e.g. α-stable processes).
- Log-variations (φ(x) = log|x|) ([Istas2012b]): require the existence of logarithmic moments, and the rate of convergence is slow.
- Complex variations (φ(x) = x^{iM}, M ∈ R) ([Istas2012a]).

SLIDES 13-14

State of the art

For estimating α: [LeGuével2013] used p-variations (p ∈ (0, c), c = min_{u∈U} α(u)) to estimate the stability functions of multistable processes.

Objective: estimate both H and α, using β-variations with β ∈ (−1/2, 0).


SLIDES 16-17

H-sssi process

A real-valued process X is H-self-similar (H-ss) if, for all a > 0,

    {X(at), t ∈ R} =^{(d)} a^H {X(t), t ∈ R},

and has stationary increments (si) if, for all s ∈ R,

    {X(t + s) − X(s), t ∈ R} =^{(d)} {X(t) − X(0), t ∈ R}.

SLIDES 18-21

α-stable process

A random variable X is said to have a symmetric α-stable (SαS) distribution if there are parameters 0 < α ≤ 2 and σ > 0 such that its characteristic function has the form

    E e^{iθX} = exp(−σ^α |θ|^α).

We write X ∼ S_α(σ, 0, 0); when σ = 1, the SαS variable is said to be standard.

X = (X_1, ..., X_n) is a symmetric stable random vector if every linear combination of the components of X is symmetric α-stable (α ∈ (0, 2]).

{X(t), t ∈ T} is symmetric stable if all of its finite-dimensional distributions are symmetric stable.
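Such variables can be simulated with the Chambers-Mallows-Stuck representation (here the symmetric case with σ = 1 and α ≠ 1; the function name `sas_sample` is ours). A minimal sketch, with a Monte Carlo check of the characteristic function:

```python
import numpy as np

def sas_sample(alpha, size, rng):
    """Standard SaS samples (sigma = 1, alpha != 1) via the
    Chambers-Mallows-Stuck representation for the symmetric case."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    W = rng.exponential(1.0, size)                # unit-mean exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

# E cos(theta X) should be close to exp(-|theta|^alpha) when sigma = 1.
rng = np.random.default_rng(0)
x = sas_sample(1.5, 400_000, rng)
print(np.mean(np.cos(x)), np.exp(-1.0))  # both close to 0.368
```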


SLIDES 23-24

Settings and assumptions

Let X be an H-sssi, SαS random process (α ∈ (0, 2]). The increments of X with respect to a are defined by

    △_{p,n}X = Σ_{k=0}^{K} a_k X((k + p)/n).    (2)

Let β ∈ R, −1/2 < β < 0, and set

    V_n(β) = (1/(n − K + 1)) Σ_{p=0}^{n−K} |△_{p,n}X|^β,    (3)

    W_n(β) = n^{βH} V_n(β),    (4)

    Ĥ_n = (1/β) log₂( V_{n/2}(β) / V_n(β) ).    (5)
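A minimal sketch of the estimator (5) on a simulated Brownian motion (H = 1/2, α = 2). The filter (−1, 2, −1), the choice β = −0.3 and the sample size are illustrative assumptions; `estimate_H` is our name:

```python
import numpy as np

def neg_variations(path, a, beta):
    """V_n(beta) of eq. (3): mean of |Delta_{p,n} X|^beta."""
    a = np.asarray(a, dtype=float)
    K = len(a) - 1
    n = len(path) - 1
    d = sum(a[k] * path[k:k + n - K + 1] for k in range(K + 1))
    return np.mean(np.abs(d) ** beta)

def estimate_H(path, a, beta):
    """H_n = (1/beta) log2( V_{n/2}(beta) / V_n(beta) ), eq. (5);
    the n/2 grid is obtained by keeping every other sample."""
    return np.log2(neg_variations(path[::2], a, beta)
                   / neg_variations(path, a, beta)) / beta

# Brownian motion sampled at j/n, j = 0..n (n even).
rng = np.random.default_rng(1)
n = 2 ** 15
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1 / n), n))])
print(estimate_H(path, (-1, 2, -1), beta=-0.3))  # close to H = 0.5
```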

SLIDES 25-30

An estimator of α

Let u, v ∈ R be such that 0 < v < u. Define

    g_{u,v} : (0, +∞) → R,    g_{u,v}(x) = u ln Γ(1 + vx) − v ln Γ(1 + ux),

    h_{u,v} : (0, +∞) → (−∞, 0),    h_{u,v}(x) = g_{u,v}(1/x),

    ψ_{u,v} : R⁺ × R⁺ → R,    ψ_{u,v}(x, y) = −v ln x + u ln y + C(u, v),

where

    C(u, v) = ((u − v)/2) ln π + u ln Γ(1 + v/2) + v ln Γ((1 − u)/2) − v ln Γ(1 + u/2) − u ln Γ((1 − v)/2),

and

    ϕ_{u,v} : R → [0, +∞),    ϕ_{u,v}(x) = 0 if x ≥ 0,  h_{u,v}^{-1}(x) if x < 0.

Let β1, β2 ∈ R, −1/2 < β1 < β2 < 0. Let α̂_n be defined by

    α̂_n = ϕ_{−β1,−β2}( ψ_{−β1,−β2}( V_n(β1), V_n(β2) ) ).
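Numerically, ψ can be evaluated with log-gamma, and ϕ = h^{-1} (on the negative half-line) obtained by bisection, since h_{u,v} is strictly increasing from −∞ to 0. A stdlib-only sketch (all names are ours); `sas_neg_moment` is the limiting value E|X|^γ of a standard SαS variable from Lemma 3 of the talk, and feeding those exact limits into ψ should recover α:

```python
import math

def h(x, u, v):
    """h_{u,v}(x) = g_{u,v}(1/x) = u ln Gamma(1 + v/x) - v ln Gamma(1 + u/x)."""
    return u * math.lgamma(1 + v / x) - v * math.lgamma(1 + u / x)

def psi(x, y, u, v):
    """psi_{u,v}(x, y) = -v ln x + u ln y + C(u, v)."""
    C = (0.5 * (u - v) * math.log(math.pi)
         + u * math.lgamma(1 + v / 2) + v * math.lgamma((1 - u) / 2)
         - v * math.lgamma(1 + u / 2) - u * math.lgamma((1 - v) / 2))
    return -v * math.log(x) + u * math.log(y) + C

def alpha_hat(V1, V2, beta1, beta2):
    """alpha_n = phi_{-b1,-b2}(psi_{-b1,-b2}(V_n(beta1), V_n(beta2)))."""
    u, v = -beta1, -beta2
    s = psi(V1, V2, u, v)
    if s >= 0:          # phi maps [0, +inf) to 0
        return 0.0
    lo, hi = 1e-3, 1e3  # h is strictly increasing: plain bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid, u, v) < s else (lo, mid)
    return 0.5 * (lo + hi)

def sas_neg_moment(gamma, alpha):
    """E|X|^gamma for a standard SaS variable, -1 < gamma < 0 (Lemma 3)."""
    return math.exp(gamma * math.log(2) + math.lgamma((gamma + 1) / 2)
                    + math.lgamma(1 - gamma / alpha)
                    - 0.5 * math.log(math.pi) - math.lgamma(1 - gamma / 2))

# Round trip: the exact limiting moments recover alpha = 1.5.
print(alpha_hat(sas_neg_moment(-0.4, 1.5), sas_neg_moment(-0.2, 1.5), -0.4, -0.2))
```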
SLIDES 31-32

Assumptions

    lim_{n→∞} (1/n) Σ_{p∈Z, |p|≤n} |cov(|△_{p,1}X|^β, |△_{0,1}X|^β)| = 0.    (6)

There exists a sequence {b_n, n ∈ N} with lim_{n→+∞} b_n = 0 such that

    lim sup_{n→∞} (1/(n b_n²)) Σ_{p∈Z, |p|≤n} |cov(|△_{p,1}X|^β, |△_{0,1}X|^β)| ≤ C²,    (7)

where C is a constant.

SLIDES 33-34

Estimation of H and α

Theorem 1
1. Assume (6); then Ĥ_n → H and α̂_n → α in probability as n → +∞.
2. Assume (7); then Ĥ_n − H = O_P(b_n) and α̂_n − α = O_P(b_n).

SLIDES 35-37

Examples

Well-balanced linear fractional stable motion. Let M be a SαS random measure (0 < α < 2) with Lebesgue control measure, and

    X(t) = ∫_R ( |t − s|^{H−1/α} − |s|^{H−1/α} ) M(ds),

with H ∈ (0, 1), H ≠ 1/α.

Takenaka's processes. For t ∈ R, set C_t = {(x, r) ∈ R × R⁺ : |x − t| ≤ r} and S_t = C_t △ C_0. Let M be a SαS random measure (0 < α < 2) with control measure m(dx, dr) = r^{ν−2} dx dr (0 < ν < 1), and

    X(t) = ∫_{R×R⁺} 1_{S_t}(x, r) M(dx, dr).

The process X is ν/α-sssi.

SLIDES 38-39

Examples

Theorem 1 holds for well-balanced linear fractional stable motions, with

    b_n = n^{−1/2}               if H < L + 1 − 2/α,
    b_n = n^{(αH−(L+1)α)/4}      if H > L + 1 − 2/α,    (8)
    b_n = √(ln n / n)            if H = L + 1 − 2/α,

and for Takenaka's processes, with

    b_n = n^{(ν−1)/2},  ν ∈ (0, 1).    (9)

SLIDES 40-41

CLT for fractional Brownian motions (α = 2) and SαS Lévy motions

Fractional Brownian motion is the unique (up to a constant) centered Gaussian H-sssi process, with H ∈ (0, 1]. Its covariance is given by

    R(t, s) = (C/2) ( |s|^{2H} + |t|^{2H} − |s − t|^{2H} ).

A process {X(t), t ≥ 0} such that X(0) = 0 a.s., X has independent increments, and X(t) − X(s) ∼ S_α((t − s)^{1/α}, 0, 0) for any 0 ≤ s < t < ∞ and 0 < α ≤ 2, is called a SαS Lévy motion.

SLIDE 42

CLT for fractional Brownian motions (α = 2) and SαS Lévy motions

Theorem 2. Let X be an fBm (or a SαS Lévy motion). Then:
a) Ĥ_n → H and α̂_n → α in probability as n → +∞;
b) √n (Ĥ_n − H) converges in distribution, as n → +∞, to a centered Gaussian variable;
c) √n (α̂_n − α) converges in distribution, as n → +∞, to a centered Gaussian variable.
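Putting the pieces together for the setting of Theorem 2: one simulated SαS Lévy motion (α = 1.5, so H = 1/α = 2/3), with both estimators applied to the same path. A self-contained sketch; the tuning choices (filter (−1, 2, −1), β1 = −0.4, β2 = −0.2, n = 2^15) and all function names are illustrative assumptions:

```python
import math
import numpy as np

def sas(alpha, size, rng):
    """Standard SaS sampler (Chambers-Mallows-Stuck, symmetric, alpha != 1)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def V(path, beta):
    """Negative power variations, eq. (3), with the filter a = (-1, 2, -1)."""
    d = -path[:-2] + 2 * path[1:-1] - path[2:]
    return np.mean(np.abs(d) ** beta)

def H_hat(path, beta):
    """Eq. (5); the n/2 grid keeps every other sample."""
    return math.log2(V(path[::2], beta) / V(path, beta)) / beta

def h(x, u, v):
    return u * math.lgamma(1 + v / x) - v * math.lgamma(1 + u / x)

def alpha_hat(V1, V2, beta1, beta2):
    u, v = -beta1, -beta2
    C = (0.5 * (u - v) * math.log(math.pi)
         + u * math.lgamma(1 + v / 2) + v * math.lgamma((1 - u) / 2)
         - v * math.lgamma(1 + u / 2) - u * math.lgamma((1 - v) / 2))
    s = -v * math.log(V1) + u * math.log(V2) + C
    if s >= 0:
        return 0.0
    lo, hi = 1e-3, 1e3  # bisection: h is strictly increasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid, u, v) < s else (lo, mid)
    return 0.5 * (lo + hi)

# SaS Levy motion on [0, 1]: i.i.d. SaS increments scaled by n^{-1/alpha}.
rng = np.random.default_rng(2)
alpha0, n = 1.5, 2 ** 15
steps = sas(alpha0, n, rng) * n ** (-1 / alpha0)
path = np.concatenate([[0.0], np.cumsum(steps)])
beta1, beta2 = -0.4, -0.2
print(H_hat(path, beta1))                                       # near 2/3
print(alpha_hat(V(path, beta1), V(path, beta2), beta1, beta2))  # near 1.5
```

Note that the negative power |△|^β always has a finite moment (Lemma 3 of the talk), which is what makes this work despite the infinite variance of the path.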


SLIDES 44-46

Settings

Let a = (a_0, ..., a_K) be such that, for q = 0, ..., L,

    Σ_{k=0}^{K} k^q a_k = 0,    Σ_{k=0}^{K} k^{L+1} a_k ≠ 0.

For p = (p_1, ..., p_d) ∈ N^d with p_i = 0, ..., K, set a_p = a_{p_1} ··· a_{p_d}. For k = (k_1, ..., k_d) ∈ N^d,

    △_{k,n}X = Σ_{p=(p_1,...,p_d), p_i=0,...,K} a_p X((k + p)/n).

SLIDES 47-48

Settings

Fix −1/2 < β < 0 and let

    V_n(β) = (1/(n − K + 1)^d) Σ_{k=(k_1,...,k_d), k_i=0,...,n−K} |△_{k,n}X|^β,

    W_n(β) = n^{βH} V_n(β),

    Ĥ_n = (1/β) log₂( V_{n/2}(β) / V_n(β) ).

Fix −1/2 < β1 < β2 < 0:

    α̂_n = ϕ_{−β1,−β2}( ψ_{−β1,−β2}( V_n(β1), V_n(β2) ) ).
SLIDES 49-50

Estimation of H and α

Assumptions:

    lim_{n→+∞} (1/n^d) Σ_{k=(k_1,...,k_d)∈Z^d, |k_i|≤n} |cov(|△_{k,1}X|^β, |△_{0,1}X|^β)| = 0.    (10)

There exist a sequence {b_n, n ∈ N} and a constant C such that lim_{n→+∞} b_n = 0, b_{n/2} = O(b_n), and

    lim_{n→+∞} (1/(n^d b_n²)) Σ_{k=(k_1,...,k_d)∈Z^d, |k_i|≤n} |cov(|△_{k,1}X|^β, |△_{0,1}X|^β)| ≤ C².    (11)

SLIDES 51-52

Estimation of H and α

Theorem 3
1. Assume (10); then Ĥ_n → H and α̂_n → α in probability as n → +∞.
2. Assume (11); then Ĥ_n − H = O_P(b_n) and α̂_n − α = O_P(b_n).

SLIDES 53-54

Examples

Theorem 3 holds for:

- the Lévy fractional Brownian field, with b_n = n^{−d/2};
- the well-balanced linear fractional stable field, with

      b_n = n^{−d/2}               if (αH − (L+1)αd)/2 < −d,
      b_n = n^{(αH−(L+1)αd)/4}     if −d < (αH − (L+1)αd)/2 < 0,
      b_n = √(ln n / n^d)          if (αH − (L+1)αd)/2 = −d;

- the Takenaka random field, with b_n = n^{(ν−1)/2}.


SLIDES 56-58

Definition

Let 0 < α ≤ 2 and let H : U → (0, 1) be an infinitely differentiable function on a closed interval U ⊂ R. Let

    X(t) = ∫_R ( |t − s|^{H(t)−1/α} − |s|^{H(t)−1/α} ) M_α(ds),    (12)

where M_α is a symmetric α-stable random measure on R whose control measure ds is Lebesgue measure.

- For 0 < α < 2, X(t) is called a linear multifractional stable motion.
- For α = 2, X(t) is called a multifractional Brownian motion (M(du) is the standard Gaussian measure on R).

SLIDE 59

Settings

Let △_{p,n}X = Σ_{k=0}^{K} a_k X((k + p)/n). Let γ be fixed such that

    0 < sup_{t∈U} H(t) < γ < 1.

Define the set ν_{γ,n}(u) and its cardinality by

    ν_{γ,n}(u) := { k ∈ Z : ∀p = 0, ..., K, |(k + p)/n − u| ≤ n^{−γ} },    υ_{γ,n}(u) := #ν_{γ,n}(u),

so that { (k + p)/n : k ∈ ν_{γ,n}(u), p = 0, ..., K } ⊂ U.

SLIDES 60-62

Settings

Let β ∈ (−1/2, 0) be fixed, and set

    V_{u,n}(β) = (1/υ_{γ,n}(u)) Σ_{k∈ν_{γ,n}(u)} |△_{k,n}X|^β,

    W_{u,n}(β) = n^{βH(u)} V_{u,n}(β),

    Ĥ_n(u) := (1/β) log₂( V_{u,n/2}(β) / V_{u,n}(β) ).

Fix −1/2 < β1 < β2 < 0:

    α̂_n = ϕ_{−β1,−β2}( ψ_{−β1,−β2}( V_{u,n}(β1), V_{u,n}(β2) ) ).
SLIDE 63

Estimation of H and α

Theorem 4. Let X be a linear multifractional stable motion or a multifractional Brownian motion. For fixed u ∈ U,

    lim_{n→+∞} Ĥ_n(u) = H(u),    Ĥ_n(u) − H(u) = O_P(d_n),    α̂_n − α = O_P(d_n),

SLIDE 64

Estimation of H and α

where d_n is defined, for the linear multifractional stable motion, by

    d_n = n^{α(H(u)−γ)/4}             if H(u) < L + 1 − 2/α and H(u) < γ ≤ (2 + αH(u))/(2 + α),
    d_n = n^{(γ−1)/2}                 if H(u) < L + 1 − 2/α and γ > (2 + αH(u))/(2 + α),
    d_n = n^{α(1−γ)(H(u)−(L+1))/4}    if H(u) > L + 1 − 2/α and γ ≥ (L + 1)/(L + 2 − H(u)),
    d_n = n^{α(H(u)−γ)/4}             if H(u) > L + 1 − 2/α and H(u) < γ < (L + 1)/(L + 2 − H(u)),
    d_n = n^{α(H(u)−γ)/4}             if H(u) = L + 1 − 2/α and H(u) < γ < (L + 1)α/(2 + α),
    d_n = n^{(γ−1)/2} ln(n)           if H(u) = L + 1 − 2/α and γ ≥ (L + 1)α/(2 + α),

and, for the multifractional Brownian motion, by

    d_n = n^{H(u)−γ}                  if H(u) < γ ≤ (1 + 2H(u))/3,
    d_n = n^{(γ−1)/2}                 if γ > (1 + 2H(u))/3.

SLIDE 65

Perspectives

Improve the results in the case of multifractional stable motions? Other multifractional multistable processes? ...

SLIDE 66

Thank you for your attention!

SLIDES 67-69

References

[Bardet2010] J. M. Bardet and C. Tudor. A wavelet analysis of the Rosenblatt process: chaos expansion and estimation of the self-similarity parameter. Stochastic Processes and their Applications, 120(12):2331-2362, 2010.

[Cohen2013] S. Cohen and J. Istas. Fractional fields and applications. Springer-Verlag, Berlin, 2013.

[Falconer2009] K. J. Falconer and J. Lévy Véhel. Multifractional, multistable, and other processes with prescribed local form. Journal of Theoretical Probability, 22(2):375-401, 2009.

[Istas1997] J. Istas and G. Lang. Quadratic variations and estimation of the Hölder index of a Gaussian process. Ann. Inst. H. Poincaré Probab. Statist., 33(4):407-436, 1997.

[Istas2012a] J. Istas. Estimating self-similarity through complex variations. Electronic Journal of Statistics, 6:1392-1408, 2012.

[Istas2012b] J. Istas. Manifold indexed fractional fields. ESAIM: Probability and Statistics, 16:222-276, 2012.

[LeGuével2013] R. Le Guével. An estimation of the stability and the localisability functions of multistable processes. Electronic Journal of Statistics, 7:1129-1166, 2013.

[Lacaux2007] C. Lacaux and J.-M. Loubes. Hurst exponent estimation of fractional Lévy motions. ALEA Lat. Am. J. Probab. Math. Stat., 3:143-161, 2007.

[Nourdin2010] I. Nourdin, D. Nualart, and C. Tudor. Central and non-central limit theorems for weighted power variations of fractional Brownian motion. Ann. Inst. H. Poincaré Probab. Statist., 46(4):1055-1079, 2010.

SLIDES 70-71

Sketch of proofs - Auxiliary lemmas

(S, µ): a measure space; f, g ∈ L^α(S, µ); M: a SαS random measure on S with control measure µ. Set

    U = ∫_S f(s) M(ds),    V = ∫_S g(s) M(ds),    ||U||_α^α = ||V||_α^α = 1,

    ∫_S f(s) g(s) ds ≤ η < 1,    C_β = 2^{β+1/2} Γ((β + 1)/2) / Γ(−β/2).

SLIDES 72-73

Sketch of proofs - Auxiliary lemmas

Lemma 2

    E|U|^β = (C_β / √(2π)) ∫_R E e^{iUy} / |y|^{1+β} dy,

    E|U|^β |V|^β = (C_β C_β / (2π)) ∫_{R²} E e^{ixU+iyV} / (|x|^{1+β} |y|^{1+β}) dx dy,

in the sense of distributions. There exists a constant C(η) such that

    |cov(|U|^β, |V|^β)| ≤ C(η) ∫_S |f(s) g(s)|^{α/2} ds.

SLIDE 74

Sketch of proofs - Auxiliary lemmas

Lemma 3. Let X be a standard SαS random variable with 0 < α ≤ 2, and let −1 < γ < 0. Then E|X|^γ < +∞, and moreover

    E|X|^γ = 2^γ Γ((γ + 1)/2) Γ(1 − γ/α) / ( √π Γ(1 − γ/2) ).
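For α = 2 a standard SαS variable has characteristic function exp(−θ²), i.e. it is Gaussian N(0, 2), and the formula reduces to 2^γ Γ((γ + 1)/2)/√π since the Γ(1 − γ/2) factors cancel. A quick Monte Carlo sanity check of Lemma 3 in that case (a sketch; the name `neg_moment` is ours):

```python
import math
import numpy as np

def neg_moment(gamma, alpha):
    """E|X|^gamma for a standard SaS variable X (Lemma 3), -1 < gamma < 0."""
    return math.exp(gamma * math.log(2) + math.lgamma((gamma + 1) / 2)
                    + math.lgamma(1 - gamma / alpha)
                    - 0.5 * math.log(math.pi) - math.lgamma(1 - gamma / 2))

# alpha = 2: standard SaS is N(0, 2); compare the formula with Monte Carlo.
rng = np.random.default_rng(3)
x = rng.normal(0.0, math.sqrt(2.0), 1_000_000)
gamma = -0.4
print(np.mean(np.abs(x) ** gamma), neg_moment(gamma, 2.0))  # both near 1.279
```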

SLIDES 75-77

Sketch of proofs - Auxiliary lemmas

g_{u,v} is strictly decreasing on (0, +∞), with lim_{x→0} g_{u,v}(x) = 0 and lim_{x→+∞} g_{u,v}(x) = −∞.

h_{u,v} is strictly increasing on (0, +∞), with lim_{x→+∞} h_{u,v}(x) = 0 and lim_{x→0} h_{u,v}(x) = −∞.

h_{u,v} is invertible, and h_{u,v}^{-1} is continuous and differentiable on (−∞, 0).

SLIDES 78-79

Sketch of proofs - Auxiliary lemmas

ψ_{−β1,−β2}(W_n(β1), W_n(β2)) converges in probability, as n → +∞, to ψ_{−β1,−β2}(E|△_{0,1}X|^{β1}, E|△_{0,1}X|^{β2}) = h_{−β1,−β2}(α).

ψ_{−β1,−β2}(W_n(β1), W_n(β2)) − h_{−β1,−β2}(α) = O_P(b_n).

ϕ_{−β1,−β2} ∘ ψ_{−β1,−β2} is continuous at x_0 = (E|△_{0,1}X|^{β1}, E|△_{0,1}X|^{β2}).