Self-similarity · Integral representations · Covariance · Increments · Existence of the covariance · Identification · Some applications
Multivariate fractional Brownian motion
Frédéric Lavancier
Laboratoire de Mathématiques Jean Leray
Axiomatic definition

Definition. A multivariate fractional Brownian motion (mfBm) with parameter H ∈ (0, 1)^p is a p-multivariate process satisfying the three following properties:
1. Gaussianity
2. Self-similarity with parameter H = (H_1, …, H_p) ∈ (0, 1)^p:
(X_1(λt), …, X_p(λt))_{t∈R} =^{fidi} (λ^{H_1} X_1(t), …, λ^{H_p} X_p(t))_{t∈R}, ∀λ > 0
3. Stationarity of the increments

→ This is a particular case of operator self-similar processes, X(λt) =^{fidi} λ^H X(t) for some matrix H [Laha, Rohatgi'81; Hudson, Mason'82], obtained by taking H = diag(H_1, …, H_p).
→ Each component (X_j(t))_{t∈R} is a fractional Brownian motion.
→ Motivation: study of correlated fractional Brownian motions.
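As a quick numerical sanity check (an addition, not from the slides): the covariance of a standard fBm component, R(s, t) = ½(|s|^{2H} + |t|^{2H} − |s − t|^{2H}), satisfies the scaling R(λs, λt) = λ^{2H} R(s, t) demanded by self-similarity.

```python
def fbm_cov(s, t, H):
    """Covariance of a standard fBm with Hurst parameter H (unit variance at t = 1)."""
    return 0.5 * (abs(s) ** (2 * H) + abs(t) ** (2 * H) - abs(s - t) ** (2 * H))

# Self-similarity in distribution implies R(lam*s, lam*t) = lam^{2H} R(s, t).
H, lam = 0.7, 3.0
for s, t in [(1.0, 2.0), (-1.5, 0.5), (2.0, 2.0)]:
    assert abs(fbm_cov(lam * s, lam * t, H) - lam ** (2 * H) * fbm_cov(s, t, H)) < 1e-12
```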
Some examples: below, the correlation between all pairs of fBm's is 0.6. [Figures: sample paths for H ∈ [0.3, 0.4] (decentered), H ∈ [0.8, 0.9], H ∈ [0.4, 0.8].]
1. Justification of joint self-similarity
2. Integral representations
3. Covariance structure of the mfBm
4. The increment process
5. Existence of the covariance function
6. Identification
7. Some applications
Justification of joint self-similarity
As for the univariate case, we have the following Lamperti results:

Theorem (Lamperti'62; Hudson, Mason'82)
1. Stationarity ↔ Self-similarity. (Y(t))_{t∈R} is a p-multivariate stationary process if and only if there exists H ∈ (0, 1)^p such that (t^{H_1} Y_1(log t), …, t^{H_p} Y_p(log t))_{t∈R} is a p-multivariate self-similar process.
2. Self-similarity in limiting processes. Let (X(t))_{t∈R} be a p-multivariate proper process, continuous in probability. If there exist a p-multivariate process (Y(t))_{t∈R} and p real functions a_1, …, a_p such that (a_1(n) Y_1(nt), …, a_p(n) Y_p(nt)) →^{fidi} X(t) as n → ∞, then the multivariate process (X(t)) is self-similar. Conversely, any multivariate self-similar process can be obtained as such a limit.
An example: partial sums of superlinear processes
Let (ε_j(k))_{k∈Z}, j = 1, …, p, be p independent i.i.d. sequences with zero mean and unit variance. Consider the superlinear processes

Z_i(t) = Σ_{j=1}^p Σ_{k∈Z} ψ_{i,j}(t − k) ε_j(k),  i = 1, …, p,  where Σ_{k∈Z} ψ_{i,j}²(k) < ∞.

Assume that ψ_{i,j}(k) = ψ⁺_{i,j}(k) + ψ⁻_{i,j}(k), where ψ⁺_{i,j}(k) satisfies one of the following conditions:

(i) ψ⁺_{i,j}(k) = (α⁺_{i,j} + o(1)) k_+^{d⁺_{i,j} − 1} as |k| → ∞, with 0 < d⁺_{i,j} < 1/2 and α⁺_{i,j} ≠ 0;
(ii) ψ⁺_{i,j}(k) = (α⁺_{i,j} + o(1)) k_+^{d⁺_{i,j} − 1} as |k| → ∞, with −1/2 < d⁺_{i,j} < 0, Σ_{k∈Z} ψ⁺_{i,j}(k) = 0 and α⁺_{i,j} ≠ 0;
(iii) Σ_{k∈Z} |ψ⁺_{i,j}(k)| < ∞; then let α⁺_{i,j} := Σ_{k∈Z} ψ⁺_{i,j}(k) ≠ 0 and d⁺_{i,j} := 0.

ψ⁻_{i,j}(k) is assumed to satisfy (i), (ii) or (iii) with k_−, d⁻_{i,j} and α⁻_{i,j}.
An example: partial sums of superlinear processes
Proposition. Let d_i = max(d⁺_{i,1}, d⁻_{i,1}, …, d⁺_{i,p}, d⁻_{i,p}), for i = 1, …, p. Consider the vector of partial sums, for τ ∈ R,

S_n(τ) = ( n^{−d_1 − 1/2} Σ_{t=1}^{[nτ]} Z_1(t), …, n^{−d_p − 1/2} Σ_{t=1}^{[nτ]} Z_p(t) ).

Then the finite-dimensional distributions of (S_n(τ))_{τ∈R} converge in law towards a p-mfBm (X(τ))_{τ∈R} with Hurst exponent (d_1 + 1/2, …, d_p + 1/2). Conversely, any p-mfBm with Hurst exponent (H_1, …, H_p) such that H_i ≠ 1/2 for all i can be obtained as such a limit.
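A minimal simulation sketch of this construction (the kernel choice ψ_i(k) = k^{d_i − 1}, the truncation K and the sample size n are illustrative assumptions, not from the talk): two superlinear processes driven by a shared noise, and their normalized partial sums at τ = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 2000                      # sample size and kernel truncation (assumptions)
d = np.array([0.2, 0.3])              # memory parameters d_i in (0, 1/2), case (i)
eps = rng.standard_normal(n + K)      # one shared i.i.d. noise sequence (single driver)

# Hypothetical kernels psi_i(k) = k^{d_i - 1} for k >= 1, satisfying condition (i)
k = np.arange(1, K + 1)
psi = [k ** (di - 1) for di in d]

# Z_i(t) = sum_k psi_i(k) eps(t - k), truncated at K lags and computed by convolution
Z = [np.convolve(eps, p, mode="valid")[:n] for p in psi]

# Normalized partial sums S_n(1) and the resulting Hurst exponents H_i = d_i + 1/2
S = np.array([n ** (-di - 0.5) * Zi.sum() for di, Zi in zip(d, Z)])
H = d + 0.5
print(H)        # [0.7 0.8]
print(S.shape)  # (2,)
```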
Proof. Cramér-Wold device + convergence of discrete stochastic integrals [Surgailis'82].

Idea of the latter (univariate case): if Z(t) = Σ_{k∈Z} ψ(t − k) ε(k), then

n^{−d−1/2} Σ_{t=1}^n Z(t) = n^{−d−1/2} Σ_{t=1}^n Σ_{k∈Z} ψ(t − k) ε(k)
  = Σ_{k∈Z} n^{−d} Σ_{t=1}^n ψ(t − k) n^{−1/2} ε(k)
  = Σ_{k∈Z} n^{−d} Σ_{t=1}^n ψ(t − k) W_n([k/n, (k+1)/n)),

where for any interval I, W_n(I) = n^{−1/2} Σ_{j∈nI} ε(j). Hence

n^{−d−1/2} Σ_{t=1}^n Z(t) = Σ_{k∈Z} n^{−d} Σ_{t=1}^n ψ(t − k) ∫_{k/n}^{(k+1)/n} W_n(dx) = ∫ f_n(x) W_n(dx),

where f_n(x) = n^{−d} Σ_{t=1}^n ψ(t − ⌊nx⌋).

Lemma (Surgailis'82). If f_n →^{L²} f, then ∫ f_n(x) W_n(dx) →^{L} ∫ f(x) W(dx), where W is a Gaussian white noise.

So the proof reduces to studying the L² convergence of f_n … provided we have an integral representation for the mfBm.
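A toy illustration of the function f_n in this argument, under the simplest short-memory assumption ψ(k) = 1{k = 0} (so d = 0): f_n is then the indicator of [1/n, 1 + 1/n), whose L² limit f is the indicator of (0, 1], i.e. the integrand of a standard Brownian motion at time 1.

```python
import math

def f_n(x, n, psi, d=0.0):
    """f_n(x) = n^{-d} * sum_{t=1}^n psi(t - floor(n x))."""
    k0 = math.floor(n * x)
    return n ** (-d) * sum(psi(t - k0) for t in range(1, n + 1))

psi = lambda k: 1.0 if k == 0 else 0.0   # short-memory toy kernel, d = 0
n = 1000
print(f_n(0.5, n, psi))   # 1.0  (inside (0, 1])
print(f_n(1.5, n, psi))   # 0.0  (outside)
print(f_n(-0.3, n, psi))  # 0.0
```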
Integral representations
Spectral representation
Theorem (Didier, Pipiras'2010). Let (X(t))_{t∈R} be a p-mfBm with parameter (H_1, …, H_p) ∈ (0, 1)^p. There exists a p × p complex matrix A such that each component admits the representation

X_i(t) = Σ_{j=1}^p ∫ ((e^{itx} − 1)/(ix)) (A_{ij} x_+^{−H_i + 1/2} + Ā_{ij} x_−^{−H_i + 1/2}) W̃_j(dx),   (1)

where the W̃_j's are independent Gaussian complex measures such that, for all j, W̃_j = W̃_{j,1} + i W̃_{j,2} with W̃_{j,1}(x) = W̃_{j,1}(−x), W̃_{j,2}(x) = −W̃_{j,2}(−x), W̃_{j,1} and W̃_{j,2} independent, and E(W̃_{j,i}(dx) W̃_{j,i}(dx)′) = dx, i = 1, 2.

When p = 1, any fBm can be equivalently represented as in (1) where the integral involves x_+ only, or x_− only, or |x| instead of x_+ and x_−. When p > 1, as long as we wish a representation involving independent measures W̃_j, all terms are necessary to cover all p-mfBm's. However, the matrix A in (1) is not unique.
Representation in time domain
Theorem (Didier, Pipiras'2010). Assume that H_i ≠ 1/2 for all i ∈ {1, …, p}. There exist two p × p real matrices M⁺, M⁻ such that

X_i(t) = Σ_{j=1}^p ∫_R [ M⁺_{i,j} ((t − x)_+^{H_i − 1/2} − (−x)_+^{H_i − 1/2}) + M⁻_{i,j} ((t − x)_−^{H_i − 1/2} − (−x)_−^{H_i − 1/2}) ] W_j(dx),   (2)

where W(dx) = (W_1(dx), …, W_p(dx)) is a Gaussian white noise with zero mean, independent components and covariance E(W_i(dx) W_j(dx)) = δ_{i,j} dx.
Covariance structure of the mfBm
Problem: find the explicit form of the cross-covariances R_{i,j}(s, t) = E(X_i(s) X_j(t)).
Denote σ_i² = E(X_i(1)²) and ρ_{i,j} = ρ_{j,i} = corr(X_i(1), X_j(1)).

First, by self-similarity of (X_i(t), X_j(t)):
E(X_i(t) X_j(t)) = |t|^{H_i + H_j} E(X_i(1) X_j(1)) = ρ_{i,j} σ_i σ_j |t|^{H_i + H_j}.
Second, by stationarity of the increments:
E((X_i(s) − X_i(t))(X_j(s) − X_j(t))) = E(X_i(s − t) X_j(s − t)) = ρ_{i,j} σ_i σ_j |s − t|^{H_i + H_j}.
Third, by expanding the left-hand term:
R_{i,j}(s, t) + R_{i,j}(t, s) = ρ_{i,j} σ_i σ_j (|t|^{H_i + H_j} + |s|^{H_i + H_j} − |s − t|^{H_i + H_j}).   (3)

If i = j, R_{i,i}(s, t) = R_{i,i}(t, s), and we get the expression of R_{i,i}(s, t). If i ≠ j, take s = 1 in (3) and let g(t) = R_{i,j}(1, t). By self-similarity, if t > 0, R_{i,j}(t, 1) = |t|^{H_i + H_j} g(1/t), so we have to solve
g(t) + |t|^{H_i + H_j} g(1/t) = ρ_{i,j} σ_i σ_j (|t|^{H_i + H_j} + 1 − |t − 1|^{H_i + H_j}).
Difficulty: find all solutions of this functional equation.
Covariance structure of mfBm
Theorem. Denote σ_i² = E(X_i(1)²) and ρ_{i,j} = ρ_{j,i} = corr(X_i(1), X_j(1)).

If H_i + H_j ≠ 1, there exists η_{i,j} = −η_{j,i} such that
R_{i,j}(s, t) = (σ_i σ_j / 2) [ (ρ_{i,j} + η_{i,j} sign(s)) |s|^{H_i + H_j} + (ρ_{i,j} − η_{i,j} sign(t)) |t|^{H_i + H_j} − (ρ_{i,j} − η_{i,j} sign(t − s)) |t − s|^{H_i + H_j} ].

If H_i + H_j = 1, there exists η̃_{i,j} = −η̃_{j,i} such that
R_{i,j}(s, t) = (σ_i σ_j / 2) [ ρ_{i,j} (|s| + |t| − |s − t|) + η̃_{i,j} (t log |t| − s log |s| − (t − s) log |t − s|) ].

Remark: Gaussianity is not necessary to obtain this result.
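A direct numerical check (an added sketch) that the stated formula for H_i + H_j ≠ 1 is consistent with the functional identity R_{i,j}(s, t) + R_{i,j}(t, s) = ρ_{i,j} σ_i σ_j (|t|^H + |s|^H − |s − t|^H), H = H_i + H_j, derived earlier: the η terms cancel in the symmetrized sum.

```python
def sign(x):
    return (x > 0) - (x < 0)

def R(s, t, Hi, Hj, si, sj, rho, eta):
    """Cross-covariance R_{i,j}(s, t) of a mfBm, case Hi + Hj != 1."""
    H = Hi + Hj
    return 0.5 * si * sj * ((rho + eta * sign(s)) * abs(s) ** H
                            + (rho - eta * sign(t)) * abs(t) ** H
                            - (rho - eta * sign(t - s)) * abs(t - s) ** H)

Hi, Hj, si, sj, rho, eta = 0.3, 0.6, 1.0, 2.0, 0.5, 0.1
H = Hi + Hj
for s, t in [(1.0, 2.5), (-1.0, 0.5), (2.0, -3.0)]:
    lhs = R(s, t, Hi, Hj, si, sj, rho, eta) + R(t, s, Hi, Hj, si, sj, rho, eta)
    rhs = rho * si * sj * (abs(t) ** H + abs(s) ** H - abs(s - t) ** H)
    assert abs(lhs - rhs) < 1e-12
```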
Comments
Assume H_i + H_j ≠ 1 (the same kind of comments holds when H_i + H_j = 1).

The covariance of X_i and X_j depends on two parameters (up to σ_i and σ_j):
ρ_{i,j} = corr(X_i(1), X_j(1)),
η_{i,j} = (corr(X_i(1), X_j(−1)) − corr(X_i(−1), X_j(1))) / (2 − 2^{H_i + H_j}).

η_{i,j} quantifies the dissymmetry between X_i and X_j:
η_{i,j} = 0 ∀i, j ⇔ R_{i,j}(s, t) = R_{j,i}(s, t) ∀i, j ⇔ X(t) =^L X(−t) ⇔ the integral representations are well-balanced:
Spectral: ∀i, X_i(t) = Σ_{j=1}^p ∫ ((e^{itx} − 1)/(ix)) A_{ij} |x|^{−H_i + 1/2} W̃_j(dx).
Time domain: ∀i, X_i(t) = Σ_{j=1}^p ∫_R M_{i,j} (|t − x|^{H_i − 1/2} − |x|^{H_i − 1/2}) W_j(dx).

Question: given σ_i, σ_j, H_i, H_j, what are the possible values for ρ_{i,j} and η_{i,j}? In other words, how correlated can two fBm's be? → see in a few slides.
Covariance of the increment process
Consider the increments: ΔX(t) = X(t + 1) − X(t). From the covariance function of X(t), we deduce the covariance of ΔX(t).

Asymptotic behavior of the cross-covariance of the increments:
E(ΔX_i(t) ΔX_j(t + h)) ∼ |h|^{H_i + H_j − 2} κ_{i,j}(sign(h)) as |h| → +∞,
where κ_{i,j}(sign(h)) is a constant depending on sign(h).

Each component, say ΔX_i, is short-range dependent if H_i < 1/2, independent if H_i = 1/2, and long-range dependent if H_i > 1/2. Two components of the increments are either independent or:
- if H_i + H_j < 1, they are short-range interdependent;
- if H_i + H_j ≥ 1, they are long-range interdependent, even if one of them is short-range dependent.
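For a single component this is the classical fBm fact: the increment autocovariance γ(h) = ½(|h+1|^{2H} + |h−1|^{2H} − 2|h|^{2H}) behaves like H(2H − 1)|h|^{2H−2} as |h| → ∞. A quick univariate check (added sketch, unit variance):

```python
def incr_cov(h, H):
    """Autocovariance of unit-lag fBm increments, gamma(h), for sigma = 1."""
    return 0.5 * (abs(h + 1) ** (2 * H) + abs(h - 1) ** (2 * H) - 2 * abs(h) ** (2 * H))

# At large lags, gamma(h) / (H (2H - 1) h^{2H - 2}) -> 1 (long memory for H > 1/2)
H, h = 0.7, 1000
ratio = incr_cov(h, H) / (H * (2 * H - 1) * h ** (2 * H - 2))
assert abs(ratio - 1) < 1e-5
```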
Spectral density of the increment process
Let S_{i,j} be the (cross-)spectral density of the increments ΔX_i and ΔX_j. S_{i,j} is obtained by Fourier transform of the cross-covariance:
S_{i,j}(ω) = ((1 − cos ω) / |ω|^{H_i + H_j + 1}) τ_{i,j}(sign(ω)),
where τ_{i,j}(sign(ω)) is a complex constant depending on sign(ω).

Asymptotic behaviour of the cross-spectral density of the increments:
|S_{i,j}(ω)| ∼ c_{i,j} / |ω|^{H_i + H_j − 1} as ω → 0,
where c_{i,j} is a positive constant. The interdependence between ΔX_i and ΔX_j depends on the singularity of S_{i,j} at 0 ⇒ same remarks as in the previous slide.
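The stated singularity at 0 follows from the elementary expansion 1 − cos ω ∼ ω²/2, so (1 − cos ω)/|ω|^{H+1} ∼ ½ |ω|^{−(H−1)} with H = H_i + H_j. A numerical check of this step (added sketch):

```python
import math

H = 1.3          # plays the role of Hi + Hj
w = 1e-4         # small frequency
val = (1 - math.cos(w)) / abs(w) ** (H + 1)
# (1 - cos w)/|w|^{H+1} ~ 0.5 |w|^{1-H}, i.e. a |w|^{-(H-1)} singularity at 0
assert abs(val / (0.5 * abs(w) ** (1 - H)) - 1) < 1e-6
```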
Existence of the covariance function
From the spectral representation
X_i(t) = Σ_{j=1}^p ∫ ((e^{itx} − 1)/(ix)) (A_{ij} x_+^{−H_i + 1/2} + Ā_{ij} x_−^{−H_i + 1/2}) W̃_j(dx),  i = 1, …, p,
we obtain an alternative expression for the spectral densities S_{i,j}. By identification with the previous formula for S_{i,j}, we get (if H_i + H_j ≠ 1):
(AA*)_{i,j} = (σ_i σ_j / 2π) Γ(H_i + H_j + 1) [ ρ_{i,j} sin(π(H_i + H_j)/2) − i η_{i,j} cos(π(H_i + H_j)/2) ].

Theorem (if H_i + H_j ≠ 1). The p × p matrix with entries R_{i,j}(s, t) is a covariance matrix if and only if the matrix with entries
Γ(H_i + H_j + 1) [ ρ_{i,j} sin(π(H_i + H_j)/2) − i η_{i,j} cos(π(H_i + H_j)/2) ]
is Hermitian and positive semidefinite.

If H_i + H_j = 1, a similar result can be proved.
Case p = 2
When p = 2, the condition reduces to ρ_{1,2} = ρ_{2,1}, η_{1,2} = −η_{2,1} and

(Γ(H_1 + H_2 + 1)² / (Γ(2H_1 + 1) Γ(2H_2 + 1))) · (ρ²_{1,2} sin²(π(H_1 + H_2)/2) + η²_{1,2} cos²(π(H_1 + H_2)/2)) / (sin(πH_1) sin(πH_2)) ≤ 1.

So, given σ_1, σ_2, H_1, H_2, the possible values for ρ_{1,2} and η_{1,2} lie inside an ellipse. [Figures: examples of these ellipses when σ_1 = σ_2 = 1, for |H_2 − H_1| = 0, …, 0.8 and for H_1 = 0.1 with H_2 = 0.1, …, 0.9.]
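This condition is easy to test numerically. A sketch (the function name and the example values are illustrative, not from the talk); it reproduces the qualitative picture of the figures: equal Hurst indices allow |ρ| up to 1, very different indices cap it well below 1.

```python
import math

def is_feasible(H1, H2, rho, eta):
    """Check the p = 2 ellipse condition for (rho, eta), assuming H1 + H2 != 1."""
    G = math.gamma(H1 + H2 + 1) ** 2 / (
        math.gamma(2 * H1 + 1) * math.gamma(2 * H2 + 1)
        * math.sin(math.pi * H1) * math.sin(math.pi * H2))
    h = math.pi * (H1 + H2) / 2
    return G * (rho ** 2 * math.sin(h) ** 2 + eta ** 2 * math.cos(h) ** 2) <= 1

print(is_feasible(0.5, 0.5, 0.99, 0.0))  # True: H1 = H2 allows |rho| up to 1
print(is_feasible(0.2, 0.9, 0.9, 0.0))   # False: rho = 0.9 is too large here
print(is_feasible(0.2, 0.9, 0.3, 0.0))   # True: a moderate rho is admissible
```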
[Figure: maximal possible value of |ρ_{1,2}| in terms of H_1 and H_2 when η_{1,2} = 0.]
Identification
Let X = (X(1), …, X(n)) be a discrete sample of a p-mfBm.
Aim: estimate σ_i, H_i (∀i = 1, …, p) and ρ_{i,j}, η_{i,j} (∀i, j = 1, …, p).

Filtering techniques [Amblard, Coeurjolly'11]. Let:
- a = (a_1, …, a_ℓ): finite filter with range ℓ and ν zero moments, i.e. Σ_q a_q q^r = 0 for r = 0, …, ν − 1 (ex: increments, Daubechies, …);
- a^m: the filter a dilated m times;
- a^m[X_i]: X_i filtered with a^m, i.e. a^m[X_i](t) = Σ_{k=1}^ℓ a^m_k X_i(t + k − 1).

Example:
a[X_i](t) = X_i(t) − 2X_i(t + 1) + X_i(t + 2),
a²[X_i](t) = X_i(t) − 2X_i(t + 2) + X_i(t + 4).
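The dilation and filtering operations can be sketched as follows (an added illustration; the dilation convention, inserting m − 1 zeros between taps, reproduces the example a²[X_i](t) = X_i(t) − 2X_i(t+2) + X_i(t+4) above, and the two zero moments kill linear trends):

```python
import numpy as np

def dilate(a, m):
    """Dilate filter a by m: the taps of a are separated by m - 1 zeros."""
    am = np.zeros(m * (len(a) - 1) + 1)
    am[::m] = a
    return am

def apply_filter(x, am):
    """Filtered series am[x](t) = sum_k am_k x(t + k - 1) (0-based here)."""
    L = len(am)
    return np.array([np.dot(am, x[t:t + L]) for t in range(len(x) - L + 1)])

a = np.array([1.0, -2.0, 1.0])        # second-order increments: nu = 2 zero moments
print(dilate(a, 2))                   # [ 1.  0. -2.  0.  1.]
x = np.arange(10.0)                   # a linear trend is annihilated by the filter
print(apply_filter(x, dilate(a, 2)))  # all zeros
```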
Let γ^{m1,m2}_{i,j}(k) = E(a^{m1}[X_i](t) a^{m2}[X_j](t + k)).

Explicit expression of γ^{m1,m2}_{i,j} [Amblard, Coeurjolly'11]:
γ^{m1,m2}_{i,j}(k) = F(k, m1, m2; σ_i, σ_j, H_i, H_j, ρ_{i,j}, η_{i,j}),
where F is explicitly known. In particular, denoting ℓ the range of the filter a, we have the identities:
log |γ^{m,m}_{i,j}(0)| = (H_i + H_j) log(m) + f(σ_i, σ_j, H_i, H_j, ρ_{i,j}),
log |γ^{m,m}_{i,j}(mℓ) − γ^{m,m}_{j,i}(mℓ)| = (H_i + H_j) log(m) + f̃(σ_i, σ_j, H_i, H_j, η_{i,j}),
where f and f̃ are explicitly known.
⇒ Estimation of all parameters can be done by least-squares regression.

Simulation results from [Amblard, Coeurjolly'11]:
- To estimate H_i, marginal estimation (i.e. using γ^{m,m}_{i,i}(0)) is sufficient; adding γ^{m,m}_{i,j} (i.e. the cross-covariance) does not improve the estimation of H_i.
- ρ_{i,j} can be fairly estimated, but η_{i,j} seems difficult to identify by this procedure.
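A sketch of the marginal regression step: for a zero-sum filter, the theoretical variance of the m-dilated filtered fBm scales exactly as m^{2H_i}, so the slope of log γ^{m,m}_{i,i}(0) against log m recovers 2H_i. Here γ is computed from the theoretical fBm covariance, with no simulated data; the filter and the value of H are illustrative.

```python
import numpy as np

def gamma0(a, m, H):
    """Theoretical variance of the m-dilated filtered fBm (unit sigma):
    E[(a^m[X](t))^2] = -0.5 * sum_{q,r} a^m_q a^m_r |q - r|^{2H} for a zero-sum filter."""
    am = np.zeros(m * (len(a) - 1) + 1)
    am[::m] = a
    q = np.arange(len(am))
    return -0.5 * (am[:, None] * am[None, :]
                   * np.abs(q[:, None] - q[None, :]) ** (2 * H)).sum()

H_true = 0.65
a = np.array([1.0, -2.0, 1.0])           # zero-sum filter with 2 zero moments
ms = np.arange(1, 6)
logg = np.log([gamma0(a, m, H_true) for m in ms])
slope = np.polyfit(np.log(ms), logg, 1)[0]   # least-squares slope = 2 * H
H_hat = slope / 2
print(round(H_hat, 10))  # 0.65
```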
Application 1: Model for functional magnetic resonance imaging
Data (time series) are collected from different parts of the brain.
1. The time series seem to be of fractal nature.
2. The time series are correlated.
⇒ A mfBm may be a good model to study cross-dependencies (initial motivation of [Amblard, Coeurjolly'11]).
Self-similarity Integral representations Covariance Increments Existence of the covariance Identification Some applications
Application 2: Model for estimating turbulent motions.
Setting: turbulent flows of incompressible fluids.
Problem: from a sequence of images, estimate the vector field of deformation.
Example: a sequence of two images, the initial image $y_0$ and the final image $y_1$, together with the vector field of deformation $\vec u = (u_1, u_2)$. [Figures: $y_0$, $y_1$ and the deformation field]
Aim: estimate $\vec u$ from the noisy observation of $y_0$ and $y_1$.
Application 2: Model for estimating turbulent motions.
At each location x, if there is no noise: $y_0(x) = y_1(x + \vec u(x))$.
In practice, the measurements are noisy, and $\vec u$ can be estimated by
$$\hat{\vec u} = \underset{\vec u}{\operatorname{argmin}} \; \big\|y_1(x + \vec u(x)) - y_0(x)\big\|^2.$$
But this problem is not identifiable ⇒ a priori information on $\vec u$ is needed:
1. Turbulent motion ⇒ $\vec u$ is assumed to be self-similar.
2. Incompressibility ⇒ $\vec u$ must be divergence-free.
In [Héas, L., Kadri-Harouna'13], $\vec u$ is a divergence-free bivariate fBm, defined by
$$\vec u(x) = \frac{\sigma}{2\pi} \int_{\mathbb{R}^2} \frac{e^{ik\cdot x} - 1}{\|k\|^{H+1}} \left(I - \frac{kk^T}{\|k\|^2}\right) \begin{pmatrix} \tilde W_1(dk) \\ \tilde W_2(dk) \end{pmatrix}, \quad x \in \mathbb{R}^2,$$
where I denotes the identity matrix and $\tilde W = (\tilde W_1, \tilde W_2)^T$ denotes a bivariate standard Gaussian spectral measure.
Remark: the divergence-free constraint makes the components of $\vec u$ correlated.
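The divergence-free constraint is encoded by the matrix $I - kk^T/\|k\|^2$ in the integrand: in Fourier space, divergence-free means $k \cdot \hat u(k) = 0$, and this matrix projects the spectral measure onto the line orthogonal to k. A small numpy check of this projector (an illustration only, not the simulation method of [Héas, L., Kadri-Harouna'13]):

```python
import numpy as np

rng = np.random.default_rng(1)
k = rng.standard_normal(2)                      # an arbitrary nonzero frequency
P = np.eye(2) - np.outer(k, k) / (k @ k)        # I - k k^T / |k|^2

# P annihilates the component along k (P k = 0), which is exactly the
# divergence-free condition k . u_hat(k) = 0, applied frequency by frequency.
assert np.allclose(P @ k, 0)
# P is idempotent and symmetric: an orthogonal projection onto span(k)^perp.
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)
```

Because both components of the integrand are built from the same projected spectral measure, the coordinates $u_1$ and $u_2$ are necessarily correlated, as noted in the remark above.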
Application 3: mfBm as a limiting process
As shown at the beginning of the talk, the mfBm arises in particular as the limit of normalized partial sums of multivariate (linear) long-memory processes. Consequently, it appears e.g. in:
- the limit distribution of the OLS estimator $\hat\beta$ in a linear regression model with long-memory exogenous variables [Cheung'02]:
$$Y(t) = \beta_0 + \sum_{i=1}^{p} \beta_i Z_i(t) + \varepsilon(t), \quad t = 1, \dots, n,$$
where Y is a univariate process, the $Z_i$'s are correlated long-memory processes, and ε is an independent error term;
- the null distribution in the test $H_0: d_1 = d_2$ vs $H_1: d_1 \neq d_2$ between two long-memory time series with respective parameters $d_1$ and $d_2$ [see next talk].
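The partial-sum scheme can be sketched as follows (a minimal illustration under simple assumptions, not taken from the talk): two ARFIMA(0, d, 0) series driven by correlated Gaussian innovations, whose normalized partial sums jointly approximate an mfBm with $H_i = d_i + 1/2$. The recursion for the MA(∞) coefficients $\psi_j = \Gamma(j+d)/(\Gamma(j+1)\Gamma(d))$ is standard.

```python
import numpy as np

def arfima_psi(n, d):
    """MA(inf) coefficients of ARFIMA(0,d,0): psi_j = Gamma(j+d)/(Gamma(j+1)Gamma(d)),
    computed by the stable recursion psi_j = psi_{j-1} * (j-1+d) / j."""
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return psi

def arfima0d0(d, eps):
    """Long-memory series X_t = sum_{j<=t} psi_j eps_{t-j} (truncated MA representation)."""
    n = len(eps)
    return np.convolve(eps, arfima_psi(n, d))[:n]

rng = np.random.default_rng(0)
n, d1, d2, rho = 4000, 0.3, 0.2, 0.6

# Correlated innovations create the cross-dependence between the two components
e1 = rng.standard_normal(n)
e2 = rho * e1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
Z1, Z2 = arfima0d0(d1, e1), arfima0d0(d2, e2)

# Normalized partial sums: (S1, S2) jointly approximates an mfBm with
# H1 = d1 + 1/2 = 0.8 and H2 = d2 + 1/2 = 0.7
S1 = np.cumsum(Z1) / n**(d1 + 0.5)
S2 = np.cumsum(Z2) / n**(d2 + 0.5)
```

Feeding such correlated regressors into the regression above is what produces the mfBm in the limit distribution of $\hat\beta$.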
Some references
P.-O. Amblard, J.-F. Coeurjolly (2011), Identification of the multivariate fractional Brownian motion, IEEE Transactions on Signal Processing 59, 5152-5168.
P.-O. Amblard, J.-F. Coeurjolly, F. Lavancier, A. Philippe (2013), Basic properties of the multivariate fractional Brownian motion, Séminaires et Congrès 28, 65-87.
G. Didier, V. Pipiras (2011), Integral representations and properties of operator fractional Brownian motions, Bernoulli 17, 1-33.
F. Lavancier, A. Philippe, D. Surgailis (2009), Covariance function of vector self-similar process, Statistics and Probability Letters 79, 2415-2421.
P. Héas