

SLIDE 1

Small ball probabilities and metric entropy

Frank Aurzada, TU Berlin

Sydney, February 2012, MCQMC

SLIDE 2

Outline

1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples


SLIDE 4

Small ball probabilities

Let (X_t)_{t≥0} be a stochastic process with X_0 = 0.

Goal: find the asymptotic rate of
P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ?,  as ε → 0.

[Figure: a sample path of X confined to the band [−ε, ε] on [0, 1].]

In many examples,
P[ sup_{0≤t≤1} |X_t| ≤ ε ] = e^{−κ ε^{−γ} (1+o(1))},  as ε → 0,
with γ > 0 and κ > 0.


SLIDE 6

Small ball probabilities

Therefore, we study the so-called small ball function of X:
φ_X(ε) := −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] = κ ε^{−γ} (1 + o(1)),  as ε → 0.
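The small ball probability above can be explored numerically. The following is a minimal Monte Carlo sketch (not part of the talk; function name and parameters are illustrative) estimating P[sup_{0≤t≤1} |B_t| ≤ ε] for Brownian motion via a random-walk discretisation:

```python
import numpy as np

def small_ball_prob(eps, n_steps=500, n_paths=10000, seed=0):
    """Monte Carlo estimate of P[ sup_{0<=t<=1} |B_t| <= eps ] for
    Brownian motion B, discretised on an equidistant grid."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    # Gaussian increments; cumulative sums give the discretised paths
    incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)
    stays_inside = np.abs(paths).max(axis=1) <= eps
    return stays_inside.mean()

for eps in (1.0, 0.75, 0.5):
    p = small_ball_prob(eps)
    print(f"eps = {eps}: P ~ {p:.4f}, -log P ~ {-np.log(p):.2f}")
```

The probabilities collapse quickly as ε shrinks, consistent with the e^{−κ ε^{−γ}} behaviour; note that the discretised maximum slightly underestimates the true supremum, so the estimates are biased upwards.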



SLIDE 9

Entropy numbers

Let X be a centred Gaussian random variable with values in a separable Banach space (E, ‖·‖), i.e. ⟨X, g⟩ is Gaussian for all g ∈ E′.

There is a linear operator u : L₂[0,1] → E associated with X such that
E e^{i⟨X,g⟩} = exp( −(1/2) ‖u′(g)‖_{L₂}² ),  g ∈ E′.

Note: u(L₂[0,1]) is the RKHS of X.

Example: X Brownian motion in E = C[0,1]:
(uf)(t) = ∫₀ᵗ f(s) ds;  u : L₂[0,1] → C[0,1].


SLIDE 11

Entropy numbers / small ball function

On the one hand, we consider the small ball function:
φ_X(ε) = −log P[ ‖X‖_E ≤ ε ] = −log P[ sup_{0≤t≤1} |X_t| ≤ ε ].

On the other hand, the entropy numbers of u:
e_n(u) := inf{ ε > 0 | there exists an ε-net of 2^{n−1} points for u(B_{L₂[0,1]}) in E },
where B_{L₂[0,1]} is the unit ball in L₂[0,1] (inverses of the covering numbers).
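To make the definition of e_n concrete, here is a toy computation (an illustration, not the operator u from the slides): for the interval [0, 1] in R, 2^{n−1} evenly spaced centres form an optimal ε-net, giving e_n = 2^{−n}.

```python
import numpy as np

def en_interval(n, resolution=20001):
    """Entropy number e_n of [0,1] as a subset of R: the smallest eps
    such that 2^(n-1) points form an eps-net, evaluated numerically."""
    k = 2 ** (n - 1)
    grid = np.linspace(0.0, 1.0, resolution)
    # for an interval, the optimal centres are evenly spaced midpoints
    centers = (np.arange(k) + 0.5) / k
    # required eps: worst-case distance from a point to its nearest centre
    dists = np.abs(grid[:, None] - centers[None, :]).min(axis=1)
    return dists.max()

for n in range(1, 5):
    print(n, en_interval(n))  # matches 2**(-n) up to grid resolution
```

For the Brownian-motion operator u itself, the nets live in C[0,1] and the decay is only polynomial, e_n(u) ≈ n^{−1}.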


SLIDE 13

Asymptotics

We use the following notation.

Weak asymptotics:
a(ε) ≲ b(ε), ε → 0, means lim sup_{ε→0} a(ε)/b(ε) < ∞;
a(ε) ≈ b(ε), ε → 0, means a(ε) ≲ b(ε) and b(ε) ≲ a(ε).

Strong asymptotics:
a(ε) ⪅ b(ε), ε → 0, means lim sup_{ε→0} a(ε)/b(ε) ≤ 1;
a(ε) ∼ b(ε), ε → 0, means a(ε) ⪅ b(ε) and b(ε) ⪅ a(ε).

Similarly for n → ∞.

SLIDE 14

The small ball – entropy connection

Theorem (Kuelbs/Li '93, Li/Linde '99, A./Ibragimov/Lifshits/van Zanten '08)

For r > 0 and δ ∈ R:
φ_X(ε) ≲ ε^{−r} |log ε|^δ ⇔ e_n(u) ≲ n^{−1/2−1/r} (log n)^{δ/r},
φ_X(ε) ≳ ε^{−r} |log ε|^δ ⇔ e_n(u) ≳ n^{−1/2−1/r} (log n)^{δ/r},
where the first ⇐ requires φ_X(ε) ≲ φ_X(2ε).

Further, for δ > 0 and κ > 0,
φ_X(ε) ⪅ κ |log ε|^δ ⇔ −log e_n(u) ⪆ κ^{−1/δ} n^{1/δ},
φ_X(ε) ⪆ κ |log ε|^δ ⇔ −log e_n(u) ⪅ κ^{−1/δ} n^{1/δ}.

small ball probabilities (probabilistic) ↔ entropy numbers (functional analytic)

SLIDE 15

The small ball – entropy connection

Example: X Riemann–Liouville process in C[0,1]:
(uf)(t) = ∫₀ᵗ (t − s)^{H−1/2} f(s) ds;  u : L₂[0,1] → C[0,1].

One has
φ_X(ε) ≈ ε^{−1/H},  e_n(u) ≈ n^{−1/2−H}.

In particular, for X Brownian motion (H = 1/2):
φ_X(ε) ≈ ε^{−2},  e_n(u) ≈ n^{−1}.
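The exponent γ = 2 for Brownian motion can be recovered numerically by fitting log φ_X(ε) against log(1/ε). This sketch is not from the talk; the parameters are illustrative, and the constant κ and the discretisation push the fitted slope slightly above 2:

```python
import numpy as np

def neg_log_small_ball(eps, n_steps=500, n_paths=10000, seed=1):
    """Monte Carlo estimate of phi_X(eps) = -log P[sup |B_t| <= eps]."""
    rng = np.random.default_rng(seed)
    incr = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)
    p = (np.abs(paths).max(axis=1) <= eps).mean()
    return -np.log(p)

eps_vals = np.array([1.0, 0.8, 0.6, 0.5])
phi = np.array([neg_log_small_ball(e) for e in eps_vals])
# fit phi ~ const * eps^(-gamma) on a log-log scale
gamma = np.polyfit(np.log(1.0 / eps_vals), np.log(phi), 1)[0]
print(f"fitted gamma ~ {gamma:.2f} (theory: 2)")
```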

SLIDE 16

Outline

1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples

SLIDE 17

Connections of small ball probabilities to other questions

In the setup of Gaussian processes, there are various connections to:
- entropy of function classes
- convergence rates of series representations
- coding quantities for the process
- approximation quantities for the process
- Chung's law of the iterated logarithm
- statistical problems
- ...

Generally, the small ball rate increases more slowly
- the better the process can be approximated,
- the smoother the process is.

SLIDE 18

Connections of small ball probabilities to other questions

Central object:
φ_X(ε) = −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] = κ ε^{−γ} (1 + o(1)).

Connected quantities:
- Path regularity: a Gaussian process that is n-times differentiable has γ ≤ 1/n.
- Functional analysis: entropy numbers of linear operators between Banach spaces.
- Approximation of stochastic processes: X_t^{(n)} = Σ_{i=1}^n ξ_i ψ_i(t) → X_t, with error ‖X^{(n)} − X‖ → 0.
- Coding, quantisation, quadrature: E[f(X)] ≈ Σ_{i=1}^N f(X̂_i) q_i.
- Chung's law of the iterated logarithm: lim inf_{t→0} sup_{s≤t} |X_s| / b(t) = c.
- PDE problems.
- Other approximation quantities, such as Kolmogorov widths.
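The series-representation item can be made concrete with the Karhunen–Loève expansion of Brownian motion, a standard fact not specific to the talk: truncating the series gives approximations X^{(n)} → X whose L₂ error decreases as terms are added.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
n_max = 200
xi = rng.normal(size=n_max)                      # i.i.d. N(0,1) coefficients
freq = (np.arange(1, n_max + 1) - 0.5) * np.pi   # frequencies (k - 1/2) * pi

def kl_bm(n):
    """Karhunen-Loeve partial sum: B_t ~ sum_k xi_k sqrt(2) sin(f_k t) / f_k."""
    basis = np.sqrt(2.0) * np.sin(np.outer(t, freq[:n])) / freq[:n]
    return basis @ xi[:n]

full = kl_bm(n_max)
for n in (5, 20, 80):
    err = np.sqrt(np.mean((full - kl_bm(n)) ** 2))  # discrete L2 error on the grid
    print(f"n = {n:3d}: L2 error vs {n_max}-term path = {err:.4f}")
```

Because the basis functions are orthogonal in L₂[0,1], the truncation error is pathwise monotone in n, mirroring the "error ‖X^{(n)} − X‖ → 0" item above.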

SLIDE 22

Connection to smoothness of the process

Theorem (A. '11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process and n an integer. If (a modification of) X is n-times differentiable with X^{(n)} ∈ L₂[0,1], then
φ_X(ε) = −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≲ ε^{−1/n}.

Now, what happens when n (above) is non-integer?



SLIDE 26

Connection to smoothness of the process

Define fractional differentiation: for γ > 0 (recall X_0 = 0), set
X_t^{(γ)} = x(t)  if  X_t = ∫₀ᵗ (t − s)^{γ−1} x(s) ds.

Theorem (A. '11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process and γ > 1/2. If X^{(γ)} exists and X^{(γ)} ∈ L₂[0,1], then
φ_X(ε) = −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≲ ε^{−1/γ}.

"Example": Brownian motion X is γ-times "differentiable" (Hölder) for every γ < 1/2, and
−log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ε^{−2} = ε^{−1/(1/2)}.

Similar results hold for different norms.
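The effect of smoothness can also be seen numerically: integrating Brownian motion once yields a smoother process whose small ball rate is γ = 2/3 instead of γ = 2, so at the same ε its small ball probability is far larger. A Monte Carlo sketch (not from the talk; parameters are illustrative):

```python
import numpy as np

def stay_prob(eps, integrate=False, n_steps=500, n_paths=10000, seed=2):
    """P[ sup_{0<=t<=1} |X_t| <= eps ] for X Brownian motion or, with
    integrate=True, once-integrated Brownian motion (a smoother process)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)
    if integrate:
        paths = np.cumsum(paths, axis=1) * dt   # crude Riemann sum of B
    return (np.abs(paths).max(axis=1) <= eps).mean()

eps = 0.3
p_bm = stay_prob(eps)                     # Brownian motion: gamma = 2
p_int = stay_prob(eps, integrate=True)    # integrated BM: gamma = 2/3
print(f"BM: {p_bm:.4f}, integrated BM: {p_int:.4f}")
```

At ε = 0.3 the Brownian path essentially never stays inside the band, while the integrated path does so with substantial probability.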

SLIDE 28

Connection to smoothness of the process

Corollary (A. '11)
Let (X_t)_{t∈[0,1]} be a centred Gaussian process. If X has a C^∞ modification, then for any δ > 0,
lim_{ε→0} ε^δ ( −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ) = 0.

Let X and Y be (not necessarily independent) centred Gaussian processes such that
φ_X(ε) = −log P[ sup_{0≤t≤1} |X_t| ≤ ε ] ≈ ε^{−γ},
and Y^{(α)} ∈ L₂[0,1] with α > 1/γ and α > 1/2. Then
φ_{X+Y}(ε) = −log P[ sup_{0≤t≤1} |X_t + Y_t| ≤ ε ] ≈ ε^{−γ}.
SLIDE 29

Outline

1. Small ball probabilities vs. metric entropy
2. Connection to other questions
3. Recent results for concrete examples

SLIDE 30

The entropy method: recent results

Several recent results use the entropy connection in the case of slowly varying φ_X, i.e. exponentially decreasing e_n(u).

Kühn '11 (L₂ and L_∞ case):
E X_t X_s = e^{−σ²‖t−s‖²},  t, s ∈ R^d.
Result: φ_X(ε) ≈ |log ε|^{d+1} / (log |log ε|)^d.

A./Gao/Kühn/Li/Shao '11+ (L₂ and L_∞ case):
E X_t X_s = 2^{2β+1} (ts)^α / (t + s)^{2β+1},  t, s > 0.
Result: φ_X(ε) ≈ |log ε|³.


SLIDE 33

The entropy method: recent results (cont'd)

The spectral measure F of a stationary Gaussian process is given by
E X_t X_s = E X_{t−s} X_0 = k(t − s) = ∫ e^{i(t−s)u} dF(u).

A./Ibragimov/Lifshits/van Zanten '08: spectral measures
dF(u) = e^{−|u|^ν} du,  F̃ = Σ_{k∈Z} e^{−|k|^ν} δ_{2πk}.
Result for the L_∞ norm:
φ_X(ε) ≈ |log ε|² log |log ε|  for ν > 1, F;
φ_X(ε) ≈ |log ε|^{1+1/ν}  for 0 < ν ≤ 1, F, or for ν > 0, F̃.

Karol'/Nazarov '11+: rather general spectral measures, R^d-indexed, L₂ case:
dF(u) = e^{−G(u)} du; relate the behaviour of G at ∞ to the small ball probabilities.

SLIDE 34

Thank you for your attention!

Frank Aurzada
Technische Universität Berlin
page.math.tu-berlin.de/~aurzada/