Small ball probabilities and metric entropy
Frank Aurzada, TU Berlin
MCQMC, Sydney, February 2012
Outline
1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples
Small ball probabilities
Let (X_t)_{t≥0} be a stochastic process with X_0 = 0.
Goal: find the asymptotic rate of
P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ?,   as ε → 0.
(Figure: a sample path of X confined to the band [−ε, ε] on [0, 1].)
In many examples,
P[ sup_{0≤t≤1} |X_t| ≤ ε ] = e^{−κ ε^{−γ} (1+o(1))},   as ε → 0, with γ > 0 and κ > 0.
Small ball probabilities
Therefore, we study
φ_X(ε) := − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] = κ ε^{−γ} (1 + o(1)),   as ε → 0,
the so-called small ball function of X.
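As a numerical aside (my own sketch, not part of the talk): the small ball probability of Brownian motion can be estimated by Monte Carlo on a discrete grid, and the empirical −log of the probability compared with the classical rate φ_B(ε) = (π²/8) ε^{−2} (1 + o(1)). The grid size, sample count, and ε below are arbitrary choices; the discretization slightly overestimates the probability, since excursions between grid points are missed.

```python
import numpy as np

# Monte Carlo estimate of P[sup_{0<=t<=1} |B_t| <= eps] for Brownian motion
# on a discrete grid, compared with the leading-order small ball rate
# phi_B(eps) = (pi^2 / 8) * eps^(-2).
rng = np.random.default_rng(0)
n_steps, n_paths, eps = 500, 10000, 0.5

# Brownian increments: N(0, dt) with dt = 1/n_steps.
increments = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)   # B at grid points t_1, ..., t_n
sup_norm = np.abs(paths).max(axis=1)    # sup norm over the grid

p_hat = np.mean(sup_norm <= eps)        # Monte Carlo probability estimate
phi_hat = -np.log(p_hat)                # empirical small ball function
phi_theory = np.pi**2 / (8 * eps**2)    # leading-order prediction

print(f"p_hat = {p_hat:.4f}, phi_hat = {phi_hat:.2f}, theory = {phi_theory:.2f}")
```

For moderate ε the agreement is only rough, since the o(1) correction and the grid bias are still visible; for smaller ε the probability is exponentially small and plain Monte Carlo becomes infeasible.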
Entropy numbers
Let X be a centred Gaussian random variable with values in a separable Banach space (E, ‖·‖); i.e. ⟨X, g⟩ is Gaussian for all g ∈ E′.
There is a linear operator u : L₂[0,1] → E associated with X such that
E e^{i⟨X,g⟩} = exp( −(1/2) ‖u′(g)‖₂² ),   g ∈ E′.
Note: u(L₂[0,1]) is the RKHS of X.
Example: X Brownian motion in E = C[0,1]: (uf)(t) = ∫₀ᵗ f(s) ds, u : L₂[0,1] → C[0,1].
Entropy numbers / small ball function
On the one hand, we consider the small ball function:
φ_X(ε) = − log P[ ‖X‖_E ≤ ε ] = − log P[ sup_{0≤t≤1} |X_t| ≤ ε ].
On the other hand, the entropy numbers of u:
e_n(u) := inf{ ε > 0 | there exists an ε-net of 2^{n−1} points for u(B_{L₂[0,1]}) in E },
where B_{L₂[0,1]} is the unit ball in L₂[0,1] (entropy numbers are the inverse of covering numbers).
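To make the definition concrete, here is a tiny illustration (my own, not from the talk) for the deterministic set K = [0, 1] in (ℝ, |·|): with 2^{n−1} evenly spaced centers a radius of 2^{−n} suffices and is optimal, so e_n(K) = 2^{−n}. The helper names are hypothetical.

```python
# Entropy numbers of the interval K = [0, 1] in (R, |.|): an eps-net of
# 2^(n-1) evenly spaced points covers K exactly when eps >= 2^(-n).

def covers(centers, radius, points):
    """Check that every point lies within `radius` of some center."""
    return all(min(abs(p - c) for c in centers) <= radius for p in points)

def entropy_number_interval(n):
    """e_n of [0, 1]: smallest radius of a net with 2^(n-1) points."""
    m = 2 ** (n - 1)                        # allowed number of centers
    radius = 1.0 / (2 * m)                  # = 2^(-n), evenly spaced optimum
    centers = [(2 * k + 1) * radius for k in range(m)]
    grid = [i / 1000 for i in range(1001)]  # test points in [0, 1]
    assert covers(centers, radius + 1e-12, grid)
    return radius

for n in range(1, 5):
    print(n, entropy_number_interval(n))    # 0.5, 0.25, 0.125, 0.0625
```

For a Gaussian process, the relevant set u(B_{L₂[0,1]}) is an infinite-dimensional ellipsoid in E, and the same definition applies with ‖·‖_E in place of |·|.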
Asymptotics
We use the following notation.
Weak asymptotics: a(ε) ⪯ b(ε), ε → 0, means lim sup_{ε→0} a(ε)/b(ε) < ∞; and a(ε) ≈ b(ε), ε → 0, means a(ε) ⪯ b(ε) and b(ε) ⪯ a(ε).
Strong asymptotics: a(ε) ≲ b(ε), ε → 0, means lim sup_{ε→0} a(ε)/b(ε) ≤ 1; and a(ε) ∼ b(ε), ε → 0, means a(ε) ≲ b(ε) and b(ε) ≲ a(ε).
Similarly for n → ∞.
The small ball – entropy connection
Theorem (Kuelbs/Li ’93, Li/Linde ’99, A./Ibragimov/Lifshits/van Zanten ’08)
For r > 0 and δ ∈ ℝ:
φ_X(ε) ⪯ ε^{−r} |log ε|^δ  ⇔  e_n(u) ⪯ n^{−1/2−1/r} (log n)^{δ/r},
φ_X(ε) ≈ ε^{−r} |log ε|^δ  ⇔  e_n(u) ≈ n^{−1/2−1/r} (log n)^{δ/r},
where the first ⇐ requires φ_X(ε) ≈ φ_X(2ε).
Further, for δ > 0 and κ > 0,
φ_X(ε) ≲ κ |log ε|^δ  ⇔  − log e_n(u) ≳ κ^{−1/δ} n^{1/δ},
φ_X(ε) ∼ κ |log ε|^δ  ⇔  − log e_n(u) ∼ κ^{−1/δ} n^{1/δ}.
small ball probabilities (probabilistic) ↔ entropy numbers (functional analytic)
The small ball - entropy connection
Example: X Riemann-Liouville process in C[0,1]: (uf)(t) = ∫₀ᵗ (t−s)^{H−1/2} f(s) ds, u : L₂[0,1] → C[0,1].
One has
φ_X(ε) ≈ ε^{−1/H}   and   e_n(u) ≈ n^{−1/2−H}.
In particular, for X Brownian motion (H = 1/2):
φ_X(ε) ≈ ε^{−2}   and   e_n(u) ≈ n^{−1}.
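A quick sanity check on the exponents (my own sketch): in the polynomial regime, φ_X(ε) ≈ ε^{−r} corresponds to e_n(u) ≈ n^{−1/2−1/r}; for the Riemann-Liouville process r = 1/H, so the entropy exponent is 1/2 + H.

```python
# Exponent correspondence in the polynomial regime:
# small ball exponent r in phi_X(eps) ~ eps^(-r) maps to the
# entropy-number decay exponent 1/2 + 1/r in e_n(u) ~ n^(-1/2 - 1/r).

def entropy_exponent(r):
    """Entropy exponent matched to the small ball exponent r."""
    return 0.5 + 1.0 / r

for H in (0.25, 0.5, 0.75):
    r = 1.0 / H                       # Riemann-Liouville: r = 1/H
    print(H, r, entropy_exponent(r))  # H = 1/2 (BM): r = 2, exponent = 1
```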
Outline
1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples
Connections of small ball prob. to other questions
In the setup of Gaussian processes, there are various connections to:
- entropy of function classes
- convergence rate of series representations
- coding quantities for the process
- approximation quantities for the process
- Chung’s law of the iterated logarithm
- statistical problems
- ...
Generally, the small ball function increases more slowly the better the process can be approximated, i.e. the smoother the process is.
Connections of small ball prob. to other questions
- path regularity: Gaussian process n-times differentiable ⇒ γ ≤ 1/n
- functional analysis: entropy numbers of linear operators between Banach spaces
- approximation of stochastic processes: X^{(n)}_t = Σ_{i=1}^n ξ_i ψ_i(t) → X_t, with error ‖X^{(n)} − X‖ → 0
- coding, quantisation, quadrature: E[f(X)] ≈ Σ_{i=1}^N f(X̂_i) q_i
- law of the iterated logarithm: lim inf_{t→0} sup_{s≤t} |X_s| / b(t) = c
- PDE problems
- other approximation quantities such as Kolmogorov widths, etc.
All relate to φ_X(ε) = − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] = κ ε^{−γ} (1 + o(1)).
Connection to smoothness of process
Theorem (A.’11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process and n an integer. If (a modification of) X is n-times differentiable with X^{(n)} ∈ L₂[0,1], then
φ_X(ε) = − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ⪯ ε^{−1/n}.
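The qualitative message of the theorem, that smoother paths make the small ball function grow more slowly, can be illustrated numerically (my own sketch, not from the talk). At the same ε, integrated Brownian motion Y_t = ∫₀ᵗ B_s ds, which is once differentiable with Y′ = B ∈ L₂[0,1], has a far larger small ball probability than Brownian motion itself. Grid size, sample count, and ε are arbitrary choices.

```python
import numpy as np

# Monte Carlo comparison: at the same eps, the smoother process (integrated
# Brownian motion) has a much larger small ball probability than BM.
rng = np.random.default_rng(1)
n_steps, n_paths, eps = 500, 10000, 0.5
dt = 1.0 / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
bm = np.cumsum(increments, axis=1)   # Brownian paths on the grid
ibm = np.cumsum(bm, axis=1) * dt     # Riemann sums for Y_t = int_0^t B_s ds

p_bm = np.mean(np.abs(bm).max(axis=1) <= eps)
p_ibm = np.mean(np.abs(ibm).max(axis=1) <= eps)
print(f"P[sup|B| <= {eps}] ~ {p_bm:.4f},  P[sup|Y| <= {eps}] ~ {p_ibm:.4f}")
```

This matches the exponents: φ_B(ε) ≈ ε^{−2} for Brownian motion, while for once-differentiable processes the theorem gives φ_X(ε) ⪯ ε^{−1}.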
Now, what happens when n (above) is non-integer?
Connection to smoothness of process
Define fractional differentiation (let γ > 0; recall X_0 = 0):
X^{(γ)}_t = x(t)   if   X_t = ∫₀ᵗ (t−s)^{γ−1} x(s) ds.
Theorem (A.’11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process and γ > 1/2. If X^{(γ)} exists and X^{(γ)} ∈ L₂[0,1], then
φ_X(ε) = − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ⪯ ε^{−1/γ}.
“Example”: Brownian motion X is γ-times “differentiable” (Hölder) for γ < 1/2, and
− log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ε^{−2} = ε^{−1/(1/2)}.
Similar results hold for other norms.
Connection to smoothness of process
Corollary (A.’11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process. If X has a C^∞ modification, then for any δ > 0,
lim_{ε→0} ε^δ ( − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ) = 0.
Let X and Y be (not necessarily independent) centred Gaussian processes such that
φ_X(ε) = − log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ε^{−γ},
and one has Y^{(α)} ∈ L₂[0,1] with α > 1/γ and α > 1/2. Then
φ_{X+Y}(ε) = − log P[ sup_{0≤t≤1} |X_t + Y_t| ≤ ε ] ≈ ε^{−γ}.
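The perturbation statement can be probed by simulation (my own sketch, with arbitrary parameters): take X = Brownian motion (γ = 2) and Y its own integral, so Y^{(1)} = X ∈ L₂[0,1] with α = 1 > 1/2 = 1/γ; the empirical small ball functions of X and X + Y should then be of the same order.

```python
import numpy as np

# Monte Carlo sketch: a smooth (and here deliberately dependent) perturbation
# Y does not change the small ball rate of X. X is Brownian motion and
# Y_t = int_0^t X_s ds is its integral.
rng = np.random.default_rng(2)
n_steps, n_paths, eps = 500, 10000, 0.6
dt = 1.0 / n_steps

inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
x = np.cumsum(inc, axis=1)      # X: Brownian paths on the grid
y = np.cumsum(x, axis=1) * dt   # Y: integrated BM, dependent on X

phi_x = -np.log(np.mean(np.abs(x).max(axis=1) <= eps))
phi_xy = -np.log(np.mean(np.abs(x + y).max(axis=1) <= eps))
print(f"phi_X ~ {phi_x:.2f},  phi_(X+Y) ~ {phi_xy:.2f}")  # same order
```

Only the order of magnitude is comparable at a single moderate ε; the ≈ in the corollary is an asymptotic statement as ε → 0.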
Outline
1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples
The entropy method: recent results
There are several recent results using the entropy connection in the case of slowly varying φ_X, i.e. exponentially decreasing e_n(u).
Kühn ’11 (L₂ and L∞ case): E X_t X_s = e^{−σ² ‖t−s‖²}, t, s ∈ ℝ^d.
Result: φ_X(ε) ≈ |log ε|^{d+1} / (log |log ε|)^d.
A./Gao/Kühn/Li/Shao ’11+ (L₂ and L∞ case): E X_t X_s = 2^{2β+1} (ts)^α / (t+s)^{2β+1}, t, s > 0.
Result: φ_X(ε) ≈ |log ε|³.
The entropy method: recent results (cont’d)
The spectral measure F of a stationary Gaussian process is given by
E X_t X_s = E X_{t−s} X_0 = k(t−s) = ∫ e^{i(t−s)u} dF(u).
A./Ibragimov/Lifshits/van Zanten ’08: spectral measures
dF(u) = e^{−|u|^ν} du   and   F̃ = Σ_{k∈ℤ} e^{−|k|^ν} δ_{2πk}.
Result for the L∞ norm:
φ_X(ε) ≈ |log ε|² log |log ε|   for ν > 1, F;
φ_X(ε) ≈ |log ε|^{1+1/ν}   for 0 < ν ≤ 1, F, or ν > 0, F̃.