Small ball probabilities and metric entropy


  1. Small ball probabilities and metric entropy. Frank Aurzada, TU Berlin. MCQMC, Sydney, February 2012.

  2. Outline: (1) Small ball probabilities vs. metric entropy, (2) Connection to other questions, (3) Recent results for concrete examples.


  4. Small ball probabilities. Let $(X_t)_{t \ge 0}$ be a stochastic process with $X_0 = 0$.
  Goal: find the asymptotic rate of
  $P\left[\sup_{0 \le t \le 1} |X_t| \le \varepsilon\right] \approx\ ?\,, \qquad \varepsilon \to 0.$
  [Figure: a sample path of $X$ confined to the band $[-\varepsilon, \varepsilon]$ on $[0,1]$.]
  In many examples,
  $P\left[\sup_{0 \le t \le 1} |X_t| \le \varepsilon\right] = e^{-\kappa \varepsilon^{-\gamma}(1 + o(1))}, \qquad \varepsilon \to 0,$
  with $\gamma > 0$ and $\kappa > 0$.


  6. Small ball probabilities. Let $(X_t)_{t \ge 0}$ be a stochastic process with $X_0 = 0$.
  Goal: find the asymptotic rate of $P[\sup_{0 \le t \le 1} |X_t| \le \varepsilon]$ as $\varepsilon \to 0$.
  Therefore, we study
  $\varphi_X(\varepsilon) := -\log P\left[\sup_{0 \le t \le 1} |X_t| \le \varepsilon\right] = \kappa \varepsilon^{-\gamma}(1 + o(1)), \qquad \varepsilon \to 0,$
  the so-called small ball function of $X$.
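For Brownian motion the small ball function is known to first order: $\varphi_X(\varepsilon) \sim \tfrac{\pi^2}{8}\,\varepsilon^{-2}$, i.e. $\gamma = 2$ and $\kappa = \pi^2/8$ in the sup-norm. The following Python sketch (added for this write-up, not from the talk; the function name, grid size and sample count are arbitrary illustrative choices) estimates $\varphi_X(\varepsilon)$ by crude Monte Carlo on a discretised path and compares it with $\pi^2/(8\varepsilon^2)$. For very small $\varepsilon$ the event is far too rare for plain Monte Carlo, so only moderate values of $\varepsilon$ are used.

```python
import numpy as np

def small_ball_prob_bm(eps, n_steps=500, n_paths=100_000, seed=0):
    """Crude Monte Carlo estimate of P[ sup_{0<=t<=1} |B_t| <= eps ] for Brownian
    motion B, using a random-walk discretisation with n_steps time steps."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    hits, done, batch = 0, 0, 5_000
    while done < n_paths:
        m = min(batch, n_paths - done)
        paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(m, n_steps)), axis=1)
        hits += int(np.sum(np.max(np.abs(paths), axis=1) <= eps))
        done += m
    return hits / n_paths

if __name__ == "__main__":
    for eps in (0.6, 0.5, 0.4):           # smaller eps would need far more paths
        p = small_ball_prob_bm(eps)
        phi_mc = -np.log(p) if p > 0 else float("inf")
        phi_th = np.pi**2 / (8 * eps**2)  # first-order theory: phi_X(eps) ~ (pi^2/8) eps^-2
        print(f"eps={eps}: phi_MC ~ {phi_mc:.2f},  (pi^2/8) eps^-2 = {phi_th:.2f}")
```

The discretised maximum slightly underestimates the true supremum, so the Monte Carlo value of $\varphi_X(\varepsilon)$ tends to be a little below the theoretical one; this is only meant to make the rate $\varepsilon^{-2}$ visible.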


  9. Entropy numbers. Let $X$ be a centred Gaussian random variable with values in a separable Banach space $(E, \|\cdot\|)$, i.e. $\langle X, g\rangle$ is Gaussian for all $g \in E'$.
  There is a linear operator $u : L^2[0,1] \to E$ belonging to $X$ such that
  $E\, e^{i\langle X, g\rangle} = \exp\!\left(-\tfrac{1}{2}\,\|u'(g)\|_2^2\right), \qquad g \in E'.$
  Note: $u(L^2[0,1])$ is the RKHS of $X$.
  Example: $X$ BM in $E = C[0,1]$: $(uf)(t) = \int_0^t f(s)\,ds$; $u : L^2[0,1] \to C[0,1]$.
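A worked check of the Brownian-motion example (standard, but not spelled out on the slide): for $g \in E'$, i.e. a signed measure on $[0,1]$, one has $(u'g)(s) = g([s,1])$, and
$\|u'(g)\|_2^2 = \int_0^1 g([s,1])^2\, ds = \int_0^1\!\!\int_0^1 \min(s,t)\, dg(s)\, dg(t) = \operatorname{Var}\,\langle X, g\rangle,$
so $E\, e^{i\langle X,g\rangle} = \exp(-\tfrac12 \|u'(g)\|_2^2)$ indeed characterises BM. Moreover $u(L^2[0,1]) = \{F : F(0) = 0,\ F' \in L^2[0,1]\}$ is the Cameron–Martin space, i.e. the RKHS of BM.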


  11. Entropy numbers / small ball function. On the one hand, we consider the small ball function:
  $\varphi_X(\varepsilon) = -\log P[\|X\|_E \le \varepsilon] \;\Big(= -\log P\big[\sup_{0\le t\le 1}|X_t| \le \varepsilon\big]\Big).$
  On the other hand, the entropy numbers of $u$:
  $e_n(u) := \inf\{\varepsilon > 0 \mid \exists\ \varepsilon\text{-net of } 2^{n-1} \text{ points of } u(B_{L^2[0,1]}) \text{ in } E\},$
  where $B_{L^2[0,1]}$ is the unit ball in $L^2[0,1]$ (inverse of covering numbers).
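Unpacking the definition (nothing beyond the slide): if $N(\varepsilon)$ denotes the minimal number of $\varepsilon$-balls of $E$ needed to cover $u(B_{L^2[0,1]})$, then $e_n(u) \le \varepsilon$ essentially means $N(\varepsilon) \le 2^{n-1}$ (up to the boundary case where the infimum is not attained), so $n \mapsto e_n(u)$ is the inverse of the covering-number function $\varepsilon \mapsto 1 + \log_2 N(\varepsilon)$.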


  13. Asymptotics. We use the following notation.
  Weak asymptotics:
  $a(\varepsilon) \preceq b(\varepsilon)$, $\varepsilon \to 0$, means $\limsup_{\varepsilon \to 0} a(\varepsilon)/b(\varepsilon) < \infty$;
  $a(\varepsilon) \approx b(\varepsilon)$, $\varepsilon \to 0$, means $a(\varepsilon) \preceq b(\varepsilon)$ and $b(\varepsilon) \preceq a(\varepsilon)$.
  Strong asymptotics:
  $a(\varepsilon) \lesssim b(\varepsilon)$, $\varepsilon \to 0$, means $\limsup_{\varepsilon \to 0} a(\varepsilon)/b(\varepsilon) \le 1$;
  $a(\varepsilon) \sim b(\varepsilon)$, $\varepsilon \to 0$, means $a(\varepsilon) \lesssim b(\varepsilon)$ and $b(\varepsilon) \lesssim a(\varepsilon)$.
  Similarly for $n \to \infty$.

  14. The small ball – entropy connection.
  Theorem (Kuelbs/Li '93, Li/Linde '99, A./Ibragimov/Lifshits/van Zanten '08). For $r > 0$ and $\delta \in \mathbb{R}$:
  $\varphi_X(\varepsilon) \preceq \varepsilon^{-r}|\log\varepsilon|^{\delta} \iff e_n(u) \preceq n^{-1/2-1/r}(\log n)^{\delta/r},$
  $\varphi_X(\varepsilon) \succeq \varepsilon^{-r}|\log\varepsilon|^{\delta} \iff e_n(u) \succeq n^{-1/2-1/r}(\log n)^{\delta/r},$
  where the first '$\Leftarrow$' requires $\varphi_X(\varepsilon) \preceq \varphi_X(2\varepsilon)$.
  Further, for $\delta > 0$ and $\kappa > 0$,
  $\varphi_X(\varepsilon) \lesssim \kappa|\log\varepsilon|^{\delta} \iff -\log e_n(u) \gtrsim \kappa^{-1/\delta}\, n^{1/\delta},$
  $\varphi_X(\varepsilon) \gtrsim \kappa|\log\varepsilon|^{\delta} \iff -\log e_n(u) \lesssim \kappa^{-1/\delta}\, n^{1/\delta}.$
  In short: small ball probabilities (probabilistic) $\leftrightarrow$ entropy numbers (functional analytic).
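How the dictionary is used in practice (a plug-in computation, not on the slide): for the Riemann–Liouville process on the next slide one knows $\varphi_X(\varepsilon) \approx \varepsilon^{-1/H}$, i.e. $r = 1/H$ and $\delta = 0$, and the first part gives
$e_n(u) \approx n^{-1/2 - 1/r} = n^{-1/2 - H}.$
In the logarithmic regime, e.g. $\varphi_X(\varepsilon) \sim \kappa |\log \varepsilon|^2$ (so $\delta = 2$), the second part gives $-\log e_n(u) \sim \kappa^{-1/2}\, n^{1/2}$, i.e. $e_n(u) = e^{-\kappa^{-1/2}\sqrt{n}\,(1+o(1))}$.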

  15. The small ball – entropy connection.
  Example: $X$ Riemann–Liouville process in $C[0,1]$:
  $(uf)(t) = \int_0^t (t-s)^{H-1/2} f(s)\,ds; \qquad u : L^2[0,1] \to C[0,1].$
  One has $\varphi_X(\varepsilon) \approx \varepsilon^{-1/H}$ and $e_n(u) \approx n^{-1/2-H}$.
  In particular for $X$ BM, $H = 1/2$: $\varphi_X(\varepsilon) \approx \varepsilon^{-2}$ and $e_n(u) \approx n^{-1}$.
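A small numerical illustration (a sketch added for this write-up, not from the talk): discretise the operator $(uf)(t) = \int_0^t (t-s)^{H-1/2} f(s)\,ds$ on a uniform grid and inspect the decay of its singular values. For this Volterra-type operator the singular values are expected to decay at the same polynomial order $n^{-1/2-H}$ as the entropy numbers (taken here as a plausibility check, not a proof), so the fitted log-log slope should be close to $-(1/2+H)$; the grid size and the midpoint quadrature are arbitrary choices.

```python
import numpy as np

def rl_operator_matrix(H, n=400):
    """Midpoint-rule discretisation of (u f)(t) = int_0^t (t-s)^(H-1/2) f(s) ds,
    viewed as an n x n matrix acting on the grid values of f."""
    dt = 1.0 / n
    t = (np.arange(n) + 0.5) * dt                       # midpoints of the grid cells
    K = np.zeros((n, n))
    for i in range(n):
        s = t[: i + 1]
        # shift by dt/2 to avoid the (integrable) singularity of the kernel at s = t
        K[i, : i + 1] = (t[i] - s + 0.5 * dt) ** (H - 0.5) * dt
    return K

def fitted_decay_exponent(H, n=400):
    """Least-squares slope of log(singular values) vs log(index)."""
    sv = np.linalg.svd(rl_operator_matrix(H, n), compute_uv=False)
    k = np.arange(5, n // 4)                            # drop the first few and the tail
    return np.polyfit(np.log(k + 1.0), np.log(sv[k]), 1)[0]

if __name__ == "__main__":
    for H in (0.25, 0.5, 0.75):
        print(f"H={H}: fitted exponent {fitted_decay_exponent(H):+.2f}, "
              f"expected -(1/2+H) = {-(0.5 + H):+.2f}")
```

For $H = 1/2$ the matrix is just the discretised integration operator of the BM example, whose singular values decay like $n^{-1}$.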

  16. Outline: (1) Small ball probabilities vs. metric entropy, (2) Connection to other questions, (3) Recent results for concrete examples.

  17. Connections of small ball prob. to other questions.
  In the setup of Gaussian processes, there are various connections to:
  - entropy of function classes
  - convergence rate of series representations
  - coding quantities for the process
  - approximation quantities for the process
  - Chung's law of the iterated logarithm (spelled out for Brownian motion below)
  - statistical problems
  - ...
  Generally: the better the process can be approximated, and the smoother it is, the slower the small ball function increases.
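One concrete instance of the LIL connection (a classical fact added for orientation, not on the slide): for Brownian motion the small ball constant $\kappa = \pi^2/8$ reappears in Chung's law of the iterated logarithm,
$\liminf_{t \to \infty} \sqrt{\tfrac{\log\log t}{t}}\ \sup_{s \le t} |X_s| = \frac{\pi}{\sqrt{8}} \quad \text{a.s.};$
sharp small ball asymptotics, including the constant, thus translate into sharp $\liminf$ results for the running maximum.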

  18. Connections of small ball prob. to other questions.
  [Diagram: the small ball function $\varphi_X(\varepsilon) = -\log P[\sup_{0\le t\le 1}|X_t|\le\varepsilon] = \kappa\varepsilon^{-\gamma}(1+o(1))$ in the centre, linked to:]
  - approximation of stochastic processes: series representations $X_t^{(n)} = \sum_{i=1}^n \xi_i\,\psi_i(t) \to X_t$, error $\|X^{(n)} - X\| \to 0$
  - coding, quantisation, quadrature: $E[f(X)] \approx \sum_{i=1}^N f(\hat X_i)\, q_i$
  - law of the iterated logarithm: $\liminf_{t\to 0} \sup_{s\le t}|X_s| / b(t) = c$
  - path regularity: Gaussian process $n$-times differentiable $\Rightarrow \gamma \le 1/n$
  - functional analysis: entropy numbers of linear operators between Banach spaces; other approximation quantities such as Kolmogorov widths, etc.
  - PDE problems



  22. Connection to smoothness of process.
  Theorem (A.'11). Let $(X_t)_{t\in[0,1]}$ be a centred Gaussian process and $n$ an integer. If (a modification of) $X$ is $n$-times differentiable with $X^{(n)} \in L^2[0,1]$, then
  $\varphi_X(\varepsilon) = -\log P\left[\sup_{0\le t\le 1}|X_t| \le \varepsilon\right] \preceq \varepsilon^{-1/n}.$
  Now, what happens when $n$ (above) is non-integer?
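Before turning to non-integer smoothness, a consistency check for the integer case (known rates, not stated on the slide): the $m$-times integrated Brownian motion has $X^{(m)} = $ BM $\in L^2[0,1]$, and its small ball function is known to satisfy $\varphi_X(\varepsilon) \approx \varepsilon^{-2/(2m+1)}$, which is indeed $\preceq \varepsilon^{-1/m}$ since $\tfrac{2}{2m+1} < \tfrac{1}{m}$; the theorem thus gives the correct qualitative behaviour, though not the exact exponent.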


  25. Connection to smoothness of process.
  Define fractional differentiation: let $\gamma > 0$ (recall $X_0 = 0$):
  $X^{(\gamma)}_t = x(t) \quad \text{if} \quad X_t = \int_0^t (t-s)^{\gamma-1} x(s)\,ds.$
  Theorem (A.'11). Let $(X_t)_{t\in[0,1]}$ be a centred Gaussian process and $\gamma > 1/2$. If $X^{(\gamma)}$ exists and $X^{(\gamma)} \in L^2[0,1]$, then
  $\varphi_X(\varepsilon) = -\log P\left[\sup_{0\le t\le 1}|X_t| \le \varepsilon\right] \preceq \varepsilon^{-1/\gamma}.$
  "Example": Brownian motion $X$ is $\gamma$-times "differentiable" (Hölder) for every $\gamma < 1/2$, and indeed
  $-\log P\left[\sup_{0\le t\le 1}|X_t| \le \varepsilon\right] \approx \varepsilon^{-2} = \varepsilon^{-1/(1/2)},$
  i.e. the rate $\varepsilon^{-1/\gamma}$ is consistent at the boundary value $\gamma = 1/2$.
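A rigorous instance of the fractional statement (a short computation combining the definition above with the Riemann–Liouville example of slide 15): take $X_t = \int_0^t (t-s)^{\gamma-1} B_s\, ds$ for a Brownian motion $B$ and some $\gamma > 1/2$, so that $X^{(\gamma)} = B \in L^2[0,1]$. By stochastic Fubini, $X_t = \tfrac{1}{\gamma}\int_0^t (t-u)^{\gamma}\, dB_u$, i.e. $X$ is a multiple of the Riemann–Liouville process with $H = \gamma + 1/2$, and therefore $\varphi_X(\varepsilon) \approx \varepsilon^{-1/(\gamma + 1/2)} \preceq \varepsilon^{-1/\gamma}$, in line with the theorem.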
