  1. Universally Typical Sets for Ergodic Sources of Multidimensional Data
  Tyll Krüger, Guido Montúfar, Ruedi Seiler and Rainer Siegmund-Schultze
  http://arxiv.org/abs/1105.0393

  2–6. Universal Lossless Encoding Algorithms
  • data modeled by a stationary/ergodic random process
  • lossless: the algorithm ensures exact reconstruction
  • main idea (Shannon): encode a typical but small set
  • universal: the algorithm does not involve specific properties of the random process
  • main idea here: construction of universally typical sets

  7–10. Entropy Typical Set
  The sequences $x_1^n$ with $-\frac{1}{n}\log\mu(x_1^n) \sim h(\mu)$ have the Asymptotic Equipartition Property:
  • all such $x_1^n$ have roughly the same probability $e^{-n h(\mu)}$
  • the set has small size $e^{n h(\mu)}$, but still
  • nearly full measure
  • output sequences with probability much higher or smaller than $e^{-n h(\mu)}$ will rarely be observed
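A minimal numeric sketch (my own illustration, not part of the talk) of the AEP in the simplest case, an i.i.d. Bernoulli(p) source with entropy measured in nats: for a long typical sample, $-\frac{1}{n}\log\mu(x_1^n)$ concentrates near $h(\mu)$.

```python
# AEP sketch for an i.i.d. Bernoulli(p) source (entropy in nats).
import math
import random

def entropy_rate(p):
    """Shannon entropy h(mu) of a Bernoulli(p) source."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def log_prob_rate(x, p):
    """-1/n log mu(x_1^n) under the i.i.d. Bernoulli(p) measure."""
    n = len(x)
    log_mu = sum(math.log(p) if s == 1 else math.log(1 - p) for s in x)
    return -log_mu / n

random.seed(0)
p, n = 0.3, 100_000
x = [1 if random.random() < p else 0 for _ in range(n)]
h = entropy_rate(p)
r = log_prob_rate(x, p)
# By the AEP, r should be close to h for large n.
print(h, r)
```

For this memoryless case the concentration is just the law of large numbers applied to $-\log\mu(x_i)$; the Shannon-McMillan-Breiman theorem on the next slides extends it to general ergodic processes.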

  11–13. Shannon–McMillan–Breiman
  For $\mathbb{Z}$-ergodic processes, $-\frac{1}{n}\log\mu(x_1^n) \to h(\mu)$:
  • in probability (Shannon)
  • pointwise almost surely (McMillan, Breiman)
  • for amenable groups and $\mathbb{Z}^d$ (Kieffer, Ornstein and Weiss)

  14–15. Notation
  $d$-dimensional setting:
  • $\Lambda_n := \{(i_1,\ldots,i_d) \in \mathbb{Z}_+^d : 0 \le i_j \le n-1,\ j \in \{1,\ldots,d\}\}$
  • $\Sigma_n := \mathcal{A}^{\Lambda_n}$, $\Sigma := \mathcal{A}^{\mathbb{Z}^d}$, with $\mathcal{A}$ a finite alphabet
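A small sketch of the notation (my own illustration): $\Lambda_n$ is the $d$-dimensional $n \times \cdots \times n$ grid of sites, and $\Sigma_n = \mathcal{A}^{\Lambda_n}$ is the set of patterns on it, so $|\Sigma_n| = |\mathcal{A}|^{n^d}$.

```python
# Lambda_n: all sites (i_1, ..., i_d) with 0 <= i_j <= n - 1.
from itertools import product

def Lambda(n, d):
    """Enumerate the sites of the d-dimensional n-grid."""
    return list(product(range(n), repeat=d))

A = ['0', '1']            # a finite alphabet, |A| = 2
n, d = 2, 2
sites = Lambda(n, d)
print(len(sites))             # n^d = 4 sites
print(len(A) ** len(sites))   # |Sigma_n| = |A|^(n^d) = 16 patterns
```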

  16–18. Result
  Theorem (Universally typical sets). For any given $h_0$ with $0 < h_0 \le \log|\mathcal{A}|$ one can construct a sequence of subsets $\{\mathcal{T}_n(h_0) \subset \Sigma_n\}_n$ such that for all $\mu \in \mathcal{P}_{\mathrm{erg}}$ with $h(\mu) < h_0$ the following holds:
  1. $\lim_{n\to\infty} \mu_n(\mathcal{T}_n(h_0)) = 1$,
  2. $\lim_{n\to\infty} \frac{\log|\mathcal{T}_n(h_0)|}{n^d} = h_0$,
  3. optimality.

  19–22. Remarks about the Proof
  • for each $x \in \Sigma$, empirical measures $\tilde\mu_x^{k,n}$, $k \le n$, on $\mathcal{A}^{\Lambda_k}$
  • $\mathcal{T}_n(h_0) := \Pi_n \{x \in \Sigma : \frac{1}{k^d} H(\tilde\mu_x^{k,n}) \le h_0\}$
  • $k^d \le \frac{1}{(1+\epsilon)\log|\mathcal{A}|}\, n^d$, $\epsilon > 0$
  • $\limsup_n \frac{\log|\mathcal{T}_n(h_0)|}{n^d} \le h_0$
  (Speaker note: explain the empirical measure $\tilde\mu_x^{k,n}$ by a drawing on the blackboard.)

  23. Theorem (Empirical-Entropy Theorem)
  Let $\mu \in \mathcal{P}_{\mathrm{erg}}$. For any sequence $\{k_n\}$ satisfying $k_n \xrightarrow{n\to\infty} \infty$ and $k_n^d = o(n^d)$ we have
  $\lim_{n\to\infty} \frac{1}{k_n^d} H(\tilde\mu_x^{k_n,n}) = h(\mu)$, $\mu$-almost surely.
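A numerical sketch of this convergence (my own check, not from the talk), taking $d = 2$, an i.i.d. uniform binary field with $h(\mu) = \log 2$, and the empirical measure over overlapping $k \times k$ blocks of an $n \times n$ window:

```python
# Per-site empirical block entropy (1/k^2) H(mu~_x^{k,n}) for an i.i.d.
# uniform binary field; it should be close to h(mu) = log 2 for k^2 << n^2.
import math
import random
from collections import Counter

def empirical_block_entropy(field, k):
    """Entropy (nats) of the empirical distribution of overlapping k x k blocks."""
    n = len(field)
    counts = Counter()
    for i in range(n - k + 1):
        for j in range(n - k + 1):
            block = tuple(field[i + a][j:j + k] for a in range(k))
            counts[block] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

random.seed(1)
n, k = 200, 2
field = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(n)]
per_site = empirical_block_entropy(field, k) / k**2
print(per_site, math.log(2))  # the two values should be close
```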

  24. Main references
  • Paul C. Shields: The Ergodic Theory of Discrete Sample Paths, Graduate Studies in Mathematics, Vol. 13, AMS.
  • Article to appear in Kybernetika.

  25. Background Material
  Lemma (Packing Lemma). Consider, for any fixed $0 < \delta \le 1$, integers $k$ and $m$ related through $k \ge d \cdot m/\delta$. Let $C \subset \Sigma_m$ and $x \in \Sigma$ with the property that $\tilde\mu_{x,\mathrm{overl}}^{m,k}(C) \ge 1-\delta$. Then there exists $p \in \Lambda_m$ such that:
  a) $\tilde\mu_x^{p,m,k}(C) \ge 1-2\delta$, and also
  b) $\tilde\mu_x^{p,m,k}(C) \ge (1-4\delta)\,\bigl\lfloor\tfrac{k}{m}\bigr\rfloor^d \big/ \bigl(\bigl\lfloor\tfrac{k}{m}\bigr\rfloor + 2\bigr)^d$.

  26. Theorem
  Given any $\mu \in \mathcal{P}_{\mathrm{erg}}$ and any $\alpha \in (0, \frac{1}{2})$ we have the following:
  • For all $k$ larger than some $k_0 = k_0(\alpha)$ there is a set $\mathcal{T}_k(\alpha) \subset \Sigma_k$ satisfying $\frac{\log|\mathcal{T}_k(\alpha)|}{k^d} \le h(\mu) + \alpha$, and such that for $\mu$-a.e. $x$ the following holds: $\tilde\mu_x^{k,n}(\mathcal{T}_k(\alpha)) > 1-\alpha$ for all $n$ and $k$ such that $\frac{k}{n} < \varepsilon$ for some $\varepsilon = \varepsilon(\alpha) > 0$ and $n$ larger than some $n_0(x)$.
  • (optimality)

  27. Definition (Entropy-typical-sets)
  Let $\delta < \frac{1}{2}$. For some $\mu$ with entropy rate $h(\mu)$, the entropy-typical-sets are defined as:
  $$C_m(\delta) := \bigl\{x \in \Sigma_m : 2^{-m^d(h(\mu)+\delta)} \le \mu_m(\{x\}) \le 2^{-m^d(h(\mu)-\delta)}\bigr\}. \qquad (1)$$
  We will use these sets as basic sets for the typical-sampling-sets defined below.

  28. Definition (Typical-sampling-sets)
  Consider some $\mu$ and some $\delta < \frac{1}{2}$. For $k \ge m$, we define a typical-sampling-set $\mathcal{T}_k(\delta, m)$ as the set of elements of $\Sigma_k$ that have a regular $m$-block partition in which the words belonging to the $\mu$-entropy-typical-set $C_m = C_m(\delta)$ contribute at least a $(1-\delta)$-fraction of the (slightly modified) number of partition elements; more precisely, they occupy at least a $(1-\delta)$-fraction of all sites in $\Lambda_k$:
  $$\mathcal{T}_k(\delta, m) := \Bigl\{x \in \Sigma_k : \sum_{r} 1_{[C_m]}(\sigma_{r+p}\, x) \ge (1-\delta)\,\bigl\lfloor\tfrac{k}{m}\bigr\rfloor^d \text{ for some } p,\ r \in m\cdot\mathbb{Z}^d : (\Lambda_m + r + p) \cap \Lambda_k \ne \emptyset\Bigr\}$$
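A sketch of the defining condition (my own illustration with hypothetical helper names; $d = 2$ and the offset $p$ fixed to $0$ for brevity): at least a $(1-\delta)$-fraction of the $m$-blocks in a regular $m$-block partition of a $k \times k$ pattern must belong to a given set $C_m$.

```python
# Membership check for the typical-sampling-set condition (d = 2, p = 0).
def in_sampling_set(pattern, C_m, m, delta):
    """pattern: k x k tuple of tuples; C_m: set of m x m blocks (tuples)."""
    k = len(pattern)
    # Regular m-block partition: non-overlapping m x m blocks.
    blocks = [
        tuple(pattern[i + a][j:j + m] for a in range(m))
        for i in range(0, k - m + 1, m)
        for j in range(0, k - m + 1, m)
    ]
    good = sum(1 for b in blocks if b in C_m)
    return good >= (1 - delta) * len(blocks)

# Toy usage: with C_m containing only the all-zeros 2 x 2 block, a mostly
# zero 4 x 4 pattern (3 of 4 partition blocks typical) passes at delta = 0.5.
zeros = ((0, 0), (0, 0))
pattern = ((0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 1, 1), (0, 0, 1, 1))
print(in_sampling_set(pattern, {zeros}, m=2, delta=0.5))  # True
```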
