Some sufficient condition for the ergodicity of the Lévy transform
Vilmos Prokaj
Eötvös Loránd University, Hungary
Probability, Control and Finance, A Conference in Honor of Ioannis Karatzas, 2012, New York
◮ β is a Brownian motion.
◮ T is a transformation of the path space.
◮ T preserves the Wiener measure.
◮ Is T ergodic?
◮ A deep result of Marc Malric claims that the Lévy transform is topologically recurrent: the orbit of a typical sample path meets every open subset of the path space.
◮ We use only a weaker form, also due to Marc Malric: the density of zeros, that is, the union of the zero sets of the iterated paths is a.s. dense in [0, ∞).
◮ T is ergodic, if for every f ∈ L¹, (1/N) Σ_{k=0}^{N−1} f ∘ T^k → E f a.s.,
◮ or, (1/N) Σ_{k=0}^{N−1} P(A ∩ T^{−k}B) → P(A)P(B) for all events A, B,
◮ or, the invariant σ–field is trivial.
◮ T is strongly mixing if P(A ∩ T^{−n}B) → P(A)P(B) for all events A, B.
◮ T is ergodic iff (1/n) Σ_{k=0}^{n−1} P ∘ (T^0, T^k)^{−1} →^w P ⊗ P.
◮ T is strongly mixing iff P ∘ (T^0, T^n)^{−1} →^w P ⊗ P.
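These definitions can be seen in action on a toy example that is not from the talk: a Python sketch comparing an irrational rotation of [0, 1) (ergodic but not strongly mixing) with the doubling map (strongly mixing). The function and variable names are mine; the Monte Carlo estimator is the obvious one.

```python
import math
import random

# Toy contrast of the two notions on [0,1) with Lebesgue measure:
# the irrational rotation x -> x + a (mod 1) is ergodic but not strongly
# mixing, while the doubling map x -> 2x (mod 1) is strongly mixing.

def correlation(T, f, n, samples=100_000, seed=0):
    """Monte Carlo estimate of E[f * (f o T^n)] - (E f)^2 under Lebesgue measure."""
    rnd = random.Random(seed)
    s_fg = s_f = s_g = 0.0
    for _ in range(samples):
        x = rnd.random()
        y = x
        for _ in range(n):
            y = T(y)
        s_fg += f(x) * f(y)
        s_f += f(x)
        s_g += f(y)
    return s_fg / samples - (s_f / samples) * (s_g / samples)

a = (5 ** 0.5 - 1) / 2                  # irrational rotation number
rotation = lambda x: (x + a) % 1.0
doubling = lambda x: (2.0 * x) % 1.0
f = lambda x: math.cos(2 * math.pi * x)

# For the doubling map the correlations tend to 0 (strong mixing); for the
# rotation they oscillate like cos(2*pi*n*a)/2, and only their Cesaro
# averages vanish (ergodicity without mixing).
for n in (1, 2, 3):
    print(n, correlation(rotation, f, n), correlation(doubling, f, n))
```

The doubling-map correlations are already essentially zero at n = 1, while the rotation correlations stay of order 1, which is exactly the gap between the two conditions on this slide.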
◮ β is the canonical process on Ω = C[0, ∞),
◮ h : [0, ∞) × C[0, ∞) → R is progressive, |h| = 1 dt ⊗ dP a.e.
◮ β^(n) = T^n β is the n-th iterated path.
◮ h^(0) = 1, h^(n)_s = Π_{k=0}^{n−1} h(s, β^(k)) for n > 0, so β^(n)_t = ∫_0^t h^(n)_s dβ_s.
◮ The distribution of (β, β^(n)) is P ∘ (T^0, T^n)^{−1}.
◮ Let κ_n be uniform on {0, 1, . . . , n − 1} and independent of β. Then the distribution of (β, β^(κ_n)) is (1/n) Σ_{k=0}^{n−1} P ∘ (T^0, T^k)^{−1}.
◮ T is strongly mixing, iff (β, β^(n)) converges in finite dimensional distributions to a pair of independent Brownian motions.
◮ Similarly, T is ergodic, iff (β, β^(κ_n)) converges in finite dimensional distributions to a pair of independent Brownian motions.
◮ Reason: tightness + convergence of finite dimensional distributions = weak convergence.
◮ Fix t_1, . . . , t_r ≥ 0 and α = (a_1, . . . , a_r, b_1, . . . , b_r) ∈ R^{2r}.
◮ The characteristic function of (β_{t_1}, . . . , β_{t_r}, β^(n)_{t_1}, . . . , β^(n)_{t_r}) at α is
  φ_n(α) = E exp( i ∫_0^∞ (f(s) + g(s) h^(n)_s) dβ_s ),
  where f = Σ_{j=1}^r a_j 1_{[0,t_j]} and g = Σ_{j=1}^r b_j 1_{[0,t_j]}.
◮ The finite dimensional marginals have the right limit, if for all such choices with r ≥ 1,
  φ_n(α) → φ(α) = exp( −(1/2) ∫_0^∞ f²(s) + g²(s) ds ),   resp.   (1/n) Σ_{k=0}^{n−1} φ_k(α) → φ(α).
◮ M_t = ∫_0^t (f(s) + h^(n)_s g(s)) dβ_s.
◮ M is a closed martingale and so is Z = exp( iM + (1/2)⟨M⟩ ).
◮ Z_0 = 1 = E Z_∞.
◮ ⟨M⟩_∞ = ∫_0^∞ f²(s) + g²(s) ds + 2 ∫_0^∞ h^(n)_s f(s) g(s) ds.
◮ fg = Σ_j a_j b_j 1_{[0,t_j]}. Then with X_n(t) = ∫_0^t h^(n)_s ds,
  ∫_0^∞ f(s) g(s) h^(n)_s ds = Σ_{j=1}^r a_j b_j X_n(t_j),
  and from 1 = E Z_∞,
  |φ_n(α) − φ(α)| ≤ C Σ_{j=1}^r |a_j b_j| E|X_n(t_j)|.
◮ So it is enough that X_n(t) →^p 0, resp. (1/n) Σ_{k=0}^{n−1} X²_k(t) →^p 0, for all t ≥ 0.
◮ The only missing part is the convergence of finite dimensional marginals.
◮ If X_n(t) →^p 0 for all t, then |φ_n − φ| ≤ C Σ_j E|X_n(t_j)| → 0, hence
◮ (β, β^(n)) → a pair of independent Brownian motions in finite dimensional distributions, hence
◮ (β, β^(n)) → in distribution, that is, T is strongly mixing, provided that X_n(t) →^p 0 for all t.
◮ By Cauchy-Schwarz and |X_k(t)| ≤ t,
  (1/n) Σ_{k=0}^{n−1} |X_k(t)| ≤ ( (1/n) Σ_{k=0}^{n−1} X²_k(t) )^{1/2}.
◮ Then |(1/n) Σ_{k=0}^{n−1} φ_k − φ| ≤ C Σ_j E[ (1/n) Σ_{k=0}^{n−1} |X_k(t_j)| ] → 0, hence
◮ with κ_n uniform on {0, . . . , n − 1} and independent of β, (β, β^(κ_n)) → a pair of independent Brownian motions in finite dimensional distributions, hence
◮ (β, β^(κ_n)) → in distribution, that is, T is ergodic, provided that (1/n) Σ_{k=0}^{n−1} X²_k(t) →^p 0 for all t.
◮ Fix 0 < s < t. Then the following limits exist a.s. and in L²:
  Z_u = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} h^(k)_s h^(k)_u,   Z = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} h^(k)_s (β^(k)_t − β^(k)_s).
◮ Then h^(k)_s (β^(k)_t − β^(k)_s) = ∫_s^t h^(k)_s h^(k)_u dβ_u and Z = ∫_s^t Z_u dβ_u.
◮ Z ∼ N(0, σ²) since |Z_u| is non-random. But |Z| is also non-random, which forces σ = 0, hence Z = 0.
◮ This is what is behind the condition (1/n) Σ_{k=0}^{n−1} X²_k(t) → 0.
◮ T is a measure preserving transformation of Ω,
◮ ε_0 is a r.v. taking values in {−1, +1}, ε_k = ε_0 ∘ T^k.
◮ For ξ ∈ L²(Ω), Uξ = ε_0 · (ξ ∘ T) is an isometry.
◮ von Neumann's mean ergodic theorem says that (1/N) Σ_{k=0}^{N−1} U^k ξ converges in L².
◮ What is U^k ξ? U^k ξ = (Π_{j=0}^{k−1} ε_j) · ξ ∘ T^k.
◮ Almost sure convergence also holds by the subadditive ergodic theorem.
◮ The Lévy transform corresponds to h(s, w) = sign(w_s): Tβ = ∫ sign(β_s) dβ_s = |β| − L, where L is the local time of β at zero.
◮ As before β^(n) = β ∘ T^n, h^(n)_t = Π_{k=0}^{n−1} sign(β^(k)_t), X_n(t) = ∫_0^t h^(n)_s ds.
◮ To prove: X_n(1) →^p 0, resp. (1/n) Σ_{k=0}^{n−1} X²_k(1) →^p 0.
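These objects are easy to simulate; below is a rough discretisation sketch in Python (my own code, not the author's; the left-endpoint sign rule and the function name `levy_X` are choices of the sketch). It iterates the Lévy transform through the sign representation h^(n+1)_s = h^(n)_s · sign(β^(n)_s) and records X_n(1).

```python
import numpy as np

# Sketch: simulate beta on a grid, iterate the Levy transform via the sign
# representation beta^(n)_t = int_0^t h^(n)_s dbeta_s with
# h^(n+1)_s = h^(n)_s * sign(beta^(n)_s), and record X_n(1) = int_0^1 h^(n)_s ds.

rng = np.random.default_rng(0)

def levy_X(n_steps=4000, n_iter=12):
    """Return [X_0(1), ..., X_{n_iter-1}(1)] along one simulated path."""
    dt = 1.0 / n_steps
    db = rng.normal(0.0, np.sqrt(dt), n_steps)    # increments of beta
    h = np.ones(n_steps)                          # h^(0) = 1 on the grid
    xs = []
    for _ in range(n_iter):
        xs.append(h.sum() * dt)                   # X_n(1) = int_0^1 h^(n)_s ds
        path = np.cumsum(h * db)                  # beta^(n) on the grid
        sgn = np.sign(np.concatenate(([1.0], path[:-1])))  # sign at left endpoints
        sgn[sgn == 0] = 1.0
        h = h * sgn                               # h^(n+1) = h^(n) * sign(beta^(n))
    return xs

# The empirical sd of X_n(1) over many paths decays with n.
sample = np.array([levy_X() for _ in range(50)])
print(sample.std(axis=0))
```

Over many simulated paths the empirical standard deviation of X_n(1) shrinks as n grows, consistent with the simulation figure in the talk.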
[Figure: empirical standard deviation of X_n(1) = ∫_0^1 Π_{k=0}^{n−1} sign(˜β^(k)_s) ds from simulated paths ˜β, for n up to 100; the fit suggests sd[X_n(1)] ≈ π/(n√2) ≈ 2.22/n.]
◮ Goal: X_n(1) = ∫_0^1 h^(n)_s ds →^p 0.
◮ Enough: X_n(1) → 0 in L².
◮ E X²_n(1) = 2 ∫_0^1 ∫_u^1 E[h^(n)_u h^(n)_v] dv du.
◮ Enough: E[h^(n)_s h^(n)_1] → 0 for almost every s ∈ (0, 1).
◮ New goal: fixing s ∈ (0, 1), show that P(h^(n)_s h^(n)_1 = 1) − P(h^(n)_s h^(n)_1 = −1) → 0.
◮ Assume that S : C[0, ∞) → C[0, ∞) preserves P. Denote by ˜β = Sβ, and by ˜β^(n), ˜h^(n) the objects computed from ˜β.
◮ Assume also that there is an event A such that on A the sequences sign(β^(n)_s) sign(β^(n)_1) and sign(˜β^(n)_s) sign(˜β^(n)_1) eventually differ at exactly one index, so that
◮ lim_{n→∞} h^(n)_s h^(n)_1 + ˜h^(n)_s ˜h^(n)_1 = 0 on A.
◮ Since S preserves P, E[h^(n)_s h^(n)_1] = E[˜h^(n)_s ˜h^(n)_1], and the display above gives lim sup_{n→∞} |E[h^(n)_s h^(n)_1]| ≤ P(A^c).
◮ Pick s < τ < 1 such that
◮ there exists ν < ∞ with β^(ν)_τ = 0,
◮ and min_{0≤k<ν} |β^(k)_τ| > C.
◮ S reflects β after τ: (Sβ)_t = β_t for t ≤ τ and (Sβ)_t = 2β_τ − β_t for t > τ.
◮ A = {sup_{t∈[τ,1]} |β_t − β_τ| ≤ C}.
◮ Then A^c ⊂ {2 sup_{s∈[0,1]} |β_s| > C}, so P(A^c) is small for C large.
◮ We need that on A the sequences sign(β^(n)_s β^(n)_1) and sign(˜β^(n)_s ˜β^(n)_1) differ exactly at the single index n = ν.
◮ Recall that Tβ = |β| − L. On A none of β^(0), . . . , β^(ν−1) hits zero on [τ, 1], so the reflection passes through the first ν iterations; at level ν it flips the sign of β^(ν) after the zero at τ, which changes sign(β^(ν)_1) but leaves |β^(ν)|, and hence every later iterate, unchanged.
◮ τ_n = inf{t > s : β^(n)_t = 0, min_{0≤k<n} |β^(k)_t| ≥ C},
◮ ˜τ = inf_n τ_n.
◮ τ_n and ˜τ are stopping times.
◮ By the condition, s ≤ ˜τ.
◮ If for some ω ∈ Ω, ˜τ(ω) is attained by no finite n, then inf_{n≥0} |β^(n)_˜τ| ≥ C.
◮ This can only happen with probability zero due to Malric's density of zeros theorem.
◮ A(C, s) = {t > 0 : ∃n, ∃γ ∈ (st, t), β^(n)_γ = 0, min_{0≤k<n} |β^(k)_γ| > C√(t − γ)}.
◮ P(1 ∈ A(C, s)) = 1 ⇔ A(C, s) has full Lebesgue measure almost surely (by Brownian scaling and Fubini).
[Figure: a simulated path β^(0) and its iterates β^(1), β^(2), β^(3) on [0, 1].]
◮ If β^(n)_γ = 0 and min_{0≤k<n} |β^(k)_γ| = ξ > 0, then every t ∈ (γ, γ + ξ²/C²) with st < γ belongs to A(C, s), since then C√(t − γ) < ξ.
◮ H ⊂ R is porous at x, if lim sup_{r→0} p(x, r, H)/r > 0, where p(x, r, H) is the supremum of the radii of the open intervals contained in (x − r, x + r) \ H.
◮ H is Borel and porous at Lebesgue almost every point of R ⇒ H is Lebesgue null.
◮ For H = [0, ∞) \ A(C, s), the set of bad time points, we check porosity at almost every point.
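The porosity criterion invoked here is classical; a sketch of the underlying lemma, in my own formulation, via the Lebesgue density theorem:

```latex
% Porosity => null, via the Lebesgue density theorem.
\textbf{Lemma.} Let $H \subset \mathbb{R}$ be Borel and porous at Lebesgue
almost every point of $\mathbb{R}$, i.e.\ for a.e.\ $x$
\[
  \limsup_{r \to 0+} \frac{p(x, r, H)}{r} > 0,
\]
where $p(x,r,H)$ denotes the supremum of the radii of open intervals contained
in $(x - r, x + r) \setminus H$. Then $\lambda(H) = 0$.

\textbf{Sketch.} If $H$ is porous at $x$ with constant $c > 0$, pick
$r_n \to 0$ such that $(x - r_n, x + r_n) \setminus H$ contains an interval of
radius $c\, r_n$. Then
\[
  \frac{\lambda\bigl(H \cap (x - r_n, x + r_n)\bigr)}{2 r_n} \le 1 - c ,
\]
so $x$ is not a density point of $H$. By the Lebesgue density theorem
$\lambda$-almost every point of $H$ is a density point of $H$; hence
$\lambda(H) = 0$.
```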
◮ Here γ_n = sup{t < 1 : β^(n)_t = 0} and γ*_n = max_{0≤k≤n} γ_k.
◮ By Malric's density theorem γ*_n → 1.
◮ With γ = γ_n, every t ∈ I = (γ_n, γ_n + C^{−2} min_{k<n} |β^(k)_{γ_n}|²) close enough to 1 belongs to A(C, s).
◮ On the event {2 lim sup_{n→∞} min_{k<n} |β^(k)_{γ_n}| / √(1 − γ*_n) > 2C}, for infinitely many n the length of I is at least proportional to (1 − γ_n) = δ′.
◮ So I ⊂ A(C, s) with |I| ∝ δ′ and I arbitrarily close to 1: the complement of A(C, s) is porous at 1, and A(C, s) is of full Lebesgue measure for all C, s, etc.
◮ Here Z_n = min_{0≤k<n} |β^(k)_1|.
◮ This condition is obtained similarly, by considering the right endpoint t = 1 and windows of length 2r ≈ x²/C² with x = Z_n.
[Figure: the values β^(0)_1, β^(1)_1, . . . , β^(n)_1 around level 0 and the window of length 2r used in the argument.]
◮ The relevant quantities: Z_n = min_{0≤k<n} |β^(k)_1|, γ*_n = max_{0≤k≤n} γ_k, γ_k = sup{t < 1 : β^(k)_t = 0}.
◮ Set Y = lim sup_{n→∞} Z_n / √(1 − γ*_n), with Z_n = min_{0≤k<n} |β^(k)_1|, γ*_n = max_{0≤k≤n} γ_k, γ_k = sup{t < 1 : β^(k)_t = 0}, and X = lim sup_{x↘0} |β^(ν(x))_1| / x.
◮ Either Y = 0 a.s., or 0 < P(Y = 0) < 1 and T is not ergodic, or Y > 0 a.s., and then Y = ∞ a.s. and T is strongly mixing.
◮ Either X = 1 a.s., or 0 < P(X = 1) < 1 and T is not ergodic, or X < 1 a.s., and then X = 0 a.s., Y = ∞ and T is strongly mixing.
◮ In particular, P(X = 1) > 0 ⇒ T is not ergodic.
◮ Both X and Y characterize ergodicity: X < 1 ⇔ Y > 0 ⇔ T is ergodic ⇔ T is strongly mixing.
◮ Here ν(x) = inf{n ≥ 1 : |β^(n)_1| < x}; by definition |β^(ν(x))_1| < x, so X = lim sup_{x↘0} |β^(ν(x))_1| / x ≤ 1.
◮ Claim. {xν(x) : x ∈ (0, 1)} is tight ⇒ P(X = 1) = 0 ⇒ T is strongly mixing.
◮ Proof: 1_{(X>1−δ)} ≤ lim inf_{x→0+} 1_{(|β^(ν(x))_1|/x > 1−δ)}. By Fatou's lemma,
  P(X > 1 − δ) ≤ lim inf_{x→0+} P(|β^(ν(x))_1| / x > 1 − δ).
◮ On {ν(x) ≤ K/x} the event forces 1 − δ < |β^(n)_1| / x < 1 for some n ≤ K/x; each β^(n)_1 is standard normal and P(1 − δ < |β_1| / x < 1) ≤ xδ, so
  P(|β^(ν(x))_1| / x > 1 − δ) ≤ P(xν(x) > K) + (1 + K/x) P(1 − δ < |β_1| / x < 1) ≤ sup_{x∈(0,1)} P(xν(x) > K) + (1 + K)δ.
◮ Letting δ → 0 and then K → ∞: P(X = 1) ≤ inf_K sup_{x∈(0,1)} P(xν(x) > K) = 0 by tightness.
[Figure: simulated E(ν(x)) and p(x)·E(ν(x)) against x, where p(x) = P(|β_1| < x); the product stays close to 1.]
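The return-time statistics in these plots can be reproduced in outline; below is a rough Monte Carlo sketch (my own code with hypothetical names; the crude grid discretisation of the iterated transform is an assumption of the sketch).

```python
from math import erf, sqrt
import numpy as np

# Rough Monte Carlo for the return time nu(x) = inf{n >= 1 : |beta^(n)_1| < x}
# on a discretised path; used to probe the tightness of {x nu(x)} and the
# Kac-type behaviour p(x) E[nu(x)] ~ 1 with p(x) = P(|beta_1| < x).

rng = np.random.default_rng(1)

def nu_of_x(x, n_steps=2000, n_max=500):
    """First n >= 1 with |beta^(n)_1| < x along one path (n_max if not seen)."""
    dt = 1.0 / n_steps
    db = rng.normal(0.0, np.sqrt(dt), n_steps)
    h = np.ones(n_steps)
    for n in range(1, n_max + 1):
        path = np.cumsum(h * db)                  # beta^(n-1) on the grid
        sgn = np.sign(np.concatenate(([1.0], path[:-1])))
        sgn[sgn == 0] = 1.0
        h = h * sgn                               # advance to h^(n)
        if abs((h * db).sum()) < x:               # |beta^(n)_1| < x ?
            return n
    return n_max

x = 0.3
nus = [nu_of_x(x) for _ in range(200)]
p = erf(x / sqrt(2.0))                            # P(|N(0,1)| < x)
print(np.mean(nus), p * np.mean(nus))             # product should be near 1
```

With this crude scheme the product p(x)·E(ν(x)) comes out close to 1, in line with the figure above.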
◮ Consider the natural extension of (Ω, B, P, T). Then T becomes an invertible measure preserving transformation.
◮ Concretely: Ω = C[0, ∞)^ℤ; for ω = (ω_n)_{n∈ℤ}, (Tω)_n = ω_{n+1} and β^(n)(ω) = ω_n; P is such that β^(k), β^(k+1), . . . has the same joint law as (β, T¹β, . . . ) for every k ∈ ℤ.
◮ Put ν*(x) = inf{n ≥ 1 : |β^(−n)_1| < x}, the backward return time to {|β_1| < x}.
◮ Then by the tower decomposition of Ω, for A ⊂ C[0, ∞), the law of ν(x) is linked to that of ν*(x) through expectations of the form E[ · 1_{|β^(0)_1| < x} 1_{β^(0) ∈ A} ].
◮ The density f_x of (1/x)|β^(ν(x))_1| given |β^(0)_1| = yx can be explored by simulation.
[Figure: histograms of the rescaled return value (1/x)|β^(ν(x))_1| and of ν*(x) given |β^(0)_1| < x, for thresholds U < 0.001, U < 5·10^{−4}, U < 10^{−4}, U < 5·10^{−5}; both rescaled to [0, 1]; horizontal axis: rescaled return time.]
◮ (1/x)|β^(0)_1| seems to be conditionally independent of xν*(x), given |β^(0)_1| < x.
◮ Conjecture: (1/x)|β^(ν(x))_1| converges in distribution as x → 0+.
◮ Playing with the two types of expected return times one can show that lim_{x→0+} p(x) E(ν(x)) = 1.
◮ This is enough to control, in the mean, the quantities lim_{x→0+} appearing in the tightness claim.
◮ Recall that then both criteria would follow: lim sup_{x→0+} |β^(ν(x))_1| / x < 1 and lim sup_{n→∞} min_{k<n} |β^(k)_1| / √(1 − γ*_n) > 0.
◮ Marc Malric has proved that the orbit of a typical sample path meets every open subset of the path space.
◮ To prove strong mixing, only certain open sets have to be considered.
◮ For these open sets a quantitative form of the recurrence, such as the conditions above, would suffice.