Permutation Channels
Anuran Makur
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
10 July 2020

Outline: 1. Introduction; 2. Achievability and Converse for the BSC; 3. General Achievability Bound; 4. General Converse Bounds; 5. Conclusion.


  1. Example: Coding for Random Deletion Network. Consider a communication network where packets can be dropped: SENDER → [RANDOM DELETION NETWORK] → [RANDOM PERMUTATION] → RECEIVER.
     - Abstraction: an n-length codeword = a sequence of n packets.
     - Random deletion channel: delete each symbol/packet independently with probability p ∈ (0, 1).
     - Random permutation block: randomly permute the packets of the codeword.

  2. Same network, abstracted: SENDER → [ERASURE CHANNEL] → [RANDOM PERMUTATION] → RECEIVER, with erased packets shown as '?'.
     - Equivalent erasure channel: erase each symbol/packet independently with probability p ∈ (0, 1).
     - Random permutation block: randomly permute the packets of the codeword.

  3. Same setup; the slide's example shows the codeword 1 2 3 becoming 1 ? 3 after the erasure channel and 3 1 ? after the random permutation.
     - Coding: add sequence numbers (packet size = b + log(n) bits, alphabet size = n·2^b).

  4. Coding: add sequence numbers and use standard coding techniques.

  5. More refined coding techniques simulate sequence numbers [Mit06], [Met09].

  6. Key question: how do you code over such channels without increasing the alphabet size? (A simulation sketch of the sequence-numbering baseline follows below.)
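
To make the abstraction concrete, here is a minimal Python sketch (not from the talk; the function and variable names are illustrative) of sequence-numbered packets passing through an erasure channel followed by a random permutation. Sorting the survivors by sequence number undoes the permutation and reduces the problem to a plain erasure channel.

```python
import random

def erasure_permutation_network(packets, p):
    """Erase each packet independently with prob p, then randomly permute."""
    out = [pkt if random.random() > p else None for pkt in packets]  # None = '?'
    random.shuffle(out)
    return out

# Sequence-numbered packets: (seq, payload). The receiver sorts the survivors
# by sequence number, so only the erasures remain to be coded against.
n = 8
sent = [(i, f"data{i}") for i in range(n)]
received = erasure_permutation_network(sent, p=0.3)
survivors = sorted(pkt for pkt in received if pkt is not None)
print(survivors)
```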

  7. Permutation Channel Model: M → [ENCODER] → X_1^n → [CHANNEL] → Z_1^n → [RANDOM PERMUTATION] → Y_1^n → [DECODER] → M̂.
     - Sender sends message M ∼ Uniform(𝓜); n = blocklength.

  8. - Randomized encoder f_n : 𝓜 → 𝓧^n produces the codeword X_1^n = (X_1, …, X_n) = f_n(M).

  9. - Discrete memoryless channel P_{Z|X} with input and output alphabets 𝓧 and 𝓨 produces Z_1^n:
         P_{Z_1^n|X_1^n}(z_1^n | x_1^n) = \prod_{i=1}^n P_{Z|X}(z_i | x_i).

  10. - Random permutation π generates Y_1^n from Z_1^n: Y_{π(i)} = Z_i for i ∈ {1, …, n}.

  11. - Randomized decoder g_n : 𝓨^n → 𝓜 ∪ {error} produces the estimate M̂ = g_n(Y_1^n) at the receiver.
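
The whole pipeline can be summarized in a short simulation sketch (illustrative, not from the talk). `P` is any row-stochastic matrix representing P_{Z|X}; the encoder and decoder are left abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_channel(x, P, rng):
    """DMC followed by a uniformly random permutation: Y_{pi(i)} = Z_i."""
    z = np.array([rng.choice(P.shape[1], p=P[xi]) for xi in x])  # memoryless P_{Z|X}
    return rng.permutation(z)

# Example: a length-10 codeword through a BSC(0.1) permutation channel
P_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
x = rng.integers(0, 2, size=10)   # stand-in for f_n(M)
y = permutation_channel(x, P_bsc, rng)
print(x, y)
```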

  12. What if we analyze the "swapped" model, where the permutation is applied before the channel: M → [ENCODER] → X_1^n → [RANDOM PERMUTATION] → V_1^n → [CHANNEL] → W_1^n → [DECODER] → M̂?

  13. Proposition (Equivalent Models). If the channel P_{W|V} is equal to the channel P_{Z|X}, then the channel P_{W_1^n|X_1^n} is equal to the channel P_{Y_1^n|X_1^n}.

  14. Remarks: the proof follows from direct calculation.

  15. Remarks (continued): hence we can analyze either model!
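
The proposition is easy to sanity-check numerically; the following sketch (illustrative parameters) empirically compares the output law of "channel, then permute" with "permute, then channel" for a fixed codeword:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])                   # a DMC, used as both P_{Z|X} and P_{W|V}

def dmc(x):
    return np.array([rng.choice(2, p=P[xi]) for xi in x])

x = np.array([0, 0, 1])                      # fixed codeword x_1^n
a, b = Counter(), Counter()
for _ in range(20000):
    a[tuple(rng.permutation(dmc(x)))] += 1   # original model: channel, then permute
    b[tuple(dmc(rng.permutation(x)))] += 1   # swapped model: permute, then channel
print(sorted(a.items()), sorted(b.items()))  # empirical laws agree up to sampling noise
```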

  16. Coding for the Permutation Channel. General Principle: "Encode the information in an object that is invariant under the [permutation] transformation." [KV13]

  17. Multiset codes are studied in [KV13], [KV15], and [KT18].

  18. What are the fundamental information-theoretic limits of this model?

  19. Information Capacity of the Permutation Channel.
      - Average probability of error: P^n_error ≜ P(M ≠ M̂).

  20. - "Rate" of a coding scheme (f_n, g_n): R ≜ log(|𝓜|) / log(n).

  21. - Equivalently, |𝓜| = n^R.

  22. - The n^R scaling is the right one because the number of empirical distributions of Y_1^n is poly(n) (see the counting sketch below).
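
The poly(n) count is the standard type-counting bound; a short derivation (standard, though not spelled out on the slide): since the permutation is uniformly random, the empirical distribution (type) of Y_1^n is a sufficient statistic for decoding, and the number of types of a length-n sequence over the alphabet 𝓨 is
\[
\binom{n + |\mathcal{Y}| - 1}{|\mathcal{Y}| - 1} \le (n+1)^{|\mathcal{Y}| - 1},
\]
so at most polynomially many messages can be distinguished, i.e.
\[
\log |\mathcal{M}| \le (|\mathcal{Y}| - 1)\log(n+1) = O(\log n).
\]
Normalizing by log(n) rather than n therefore yields a nontrivial notion of rate.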

  23. - A rate R ≥ 0 is achievable ⇔ there exist {(f_n, g_n)}_{n∈ℕ} such that lim_{n→∞} P^n_error = 0.

  24. Definition (Permutation Channel Capacity). C_perm(P_{Z|X}) ≜ sup{R ≥ 0 : R is achievable}.

  25. Main Question: What is the permutation channel capacity of a general P_{Z|X}?

  26. Example: Binary Symmetric Channel. The channel is the binary symmetric channel, denoted BSC(p):
         for all z, x ∈ {0, 1}:  P_{Z|X}(z|x) = 1 − p if z = x, and p if z ≠ x.

  27. - Alphabets: 𝓧 = 𝓨 = {0, 1}.

  28. - Assume crossover probability p ∈ (0, 1) and p ≠ 1/2.

  29. Question: What is the permutation channel capacity of the BSC?

  30. Outline: 1. Introduction; 2. Achievability and Converse for the BSC (Encoder and Decoder; Testing between Converging Hypotheses; Second Moment Method for TV Distance; Fano's Inequality and CLT Approximation); 3. General Achievability Bound; 4. General Converse Bounds; 5. Conclusion.

  31. Warm-up: Sending Two Messages. Fix a message m ∈ {0, 1}. (The slide shows the encoder parameters q_0 = 1/3 and q_1 = 2/3 on the unit interval.)

  32. Fix a message m ∈ {0, 1} and encode it as f_n(m) = X_1^n with X_i i.i.d. ∼ Ber(q_m), where q_0 = 1/3 and q_1 = 2/3.

  33. - The memoryless BSC(p) outputs Z_1^n with Z_i i.i.d. ∼ Ber(p ∗ q_m), where p ∗ q_m ≜ p(1 − q_m) + q_m(1 − p) is the convolution of p and q_m.

  34. - The random permutation generates Y_1^n with Y_i i.i.d. ∼ Ber(p ∗ q_m).

  35. - Maximum likelihood (ML) decoder: M̂ = 𝟙{(1/n) \sum_{i=1}^n Y_i ≥ 1/2} (for p < 1/2).

  36. - Since (1/n) \sum_{i=1}^n Y_i → p ∗ q_m in probability as n → ∞ and p ∗ q_0 ≠ p ∗ q_1, we get lim_{n→∞} P^n_error = 0. (A simulation sketch of this scheme follows below.)
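
A quick simulation of the warm-up scheme (illustrative parameters; the threshold 1/2 is the midpoint of p ∗ q_0 and p ∗ q_1, since q_0 + q_1 = 1):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.1                                      # BSC crossover probability
q = {0: 1/3, 1: 2/3}                         # encoder parameters q_0, q_1

def send(m, n):
    x = rng.random(n) < q[m]                 # X_1^n i.i.d. Ber(q_m)
    z = np.where(rng.random(n) < p, ~x, x)   # memoryless BSC(p)
    return rng.permutation(z)                # uniformly random permutation

def ml_decode(y):                            # threshold test (valid for p < 1/2)
    return int(y.mean() >= 0.5)

n, trials = 200, 2000
msgs = rng.integers(0, 2, size=trials)
errors = sum(ml_decode(send(m, n)) != m for m in msgs)
print(errors / trials)                       # estimated P_error; -> 0 as n grows
```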

  37. Encoder and Decoder. Suppose 𝓜 = {1, …, n^R} for some R > 0.

  38. - Randomized encoder: given m ∈ 𝓜, f_n(m) = X_1^n with X_i i.i.d. ∼ Ber(m / n^R). (The slide depicts the n^R candidate parameters spread over [0, 1].)

  39. - Given m ∈ 𝓜, Y_1^n has Y_i i.i.d. ∼ Ber(p ∗ (m / n^R)) (as before).

  40. - ML decoder: for y_1^n ∈ {0, 1}^n, g_n(y_1^n) = arg max_{m∈𝓜} P_{Y_1^n|M}(y_1^n | m).

  41. - Challenge: although (1/n) \sum_{i=1}^n Y_i → p ∗ (m / n^R) in probability as n → ∞, consecutive messages become indistinguishable, since the gap between their parameters, (m + 1)/n^R − m/n^R = 1/n^R, tends to 0.

  42. - Fact: if consecutive messages are distinguishable, then lim_{n→∞} P^n_error = 0.

  43. - So what is the largest R such that two consecutive messages can be distinguished? (A simulation of this encoder/decoder pair is sketched below.)
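
A sketch of this encoder/decoder pair (illustrative; `M = int(round(n**R))` stands in for n^R, and since the number of ones in Y_1^n is a sufficient statistic, the ML rule reduces to a binomial likelihood comparison):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
p, R, n = 0.1, 0.4, 500
M = int(round(n ** R))                       # message set {1, ..., n^R}

def conv(a, b):                              # a * b = a(1-b) + b(1-a)
    return a * (1 - b) + b * (1 - a)

def transmit(m):
    """Number of ones in Y_1^n, which is Binomial(n, p * (m/n^R))."""
    return rng.binomial(n, conv(p, m / M))

def ml_decode(k):
    ll = [binom.logpmf(k, n, conv(p, m / M)) for m in range(1, M + 1)]
    return 1 + int(np.argmax(ll))

errors = sum(ml_decode(transmit(m)) != m for m in range(1, M + 1))
print(errors, "errors over", M, "messages")  # few errors when R < 1/2
```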

  44. Testing between Converging Hypotheses. Binary hypothesis testing: consider a hypothesis H ∼ Ber(1/2), i.e., with uniform prior.

  45. - For any n ∈ ℕ, q ∈ (0, 1), and R > 0, consider the likelihoods:
          given H = 0: X_1^n with X_i i.i.d. ∼ P_{X|H=0} = Ber(q);
          given H = 1: X_1^n with X_i i.i.d. ∼ P_{X|H=1} = Ber(q + 1/n^R).

  46. - Define the zero-mean sufficient statistic of X_1^n for H:
          T_n ≜ (1/n) \sum_{i=1}^n X_i − q − 1/(2 n^R).

  47. - Let Ĥ^n_ML(T_n) denote the ML decoder for H based on T_n, with minimum probability of error P^n_ML ≜ P(Ĥ^n_ML(T_n) ≠ H).

  48. - Want: the largest R > 0 such that lim_{n→∞} P^n_ML = 0.
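
To see why T_n is zero-mean under the uniform prior (a one-line check, implicit on the slide):
\[
\mathbb{E}[T_n \mid H = h] = q + \frac{h}{n^R} - q - \frac{1}{2n^R} = \frac{2h - 1}{2n^R},
\]
so
\[
\mathbb{E}[T_n] = \frac{1}{2}\left(-\frac{1}{2n^R}\right) + \frac{1}{2}\left(\frac{1}{2n^R}\right) = 0,
\]
and the conditional means are separated by exactly 1/n^R, the quantity used in the CLT picture that follows.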

  49. Intuition via the Central Limit Theorem. For large n, P_{T_n|H}(·|0) and P_{T_n|H}(·|1) are approximately Gaussian distributions. (The slide plots the two conditional densities of T_n, centered at ∓1/(2n^R).)

  50. - The means are separated by |E[T_n | H = 0] − E[T_n | H = 1]| = 1/n^R.

  51. - The standard deviations are Θ(1/√n).

  52. - Case R < 1/2: the mean separation 1/n^R is much larger than the Θ(1/√n) spread.

  53. - Case R < 1/2: decoding is possible ✓.

  54. - Case R > 1/2: the mean separation 1/n^R is much smaller than the Θ(1/√n) spread.

  55. - Case R > 1/2: decoding is impossible ✗. (A numerical illustration of this Gaussian heuristic follows below.)
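
The two cases can be quantified with the Gaussian heuristic: the midpoint test errs with probability roughly Q(Δ/(2σ)), where Δ = 1/n^R and σ = Θ(1/√n), so the argument of Q scales as n^{1/2 − R}. A small numerical sketch (the values of q and R are illustrative):

```python
import math

def gaussian_error_estimate(n, R, q=0.25):
    """CLT heuristic for the midpoint test: P_error ~ Q(delta / (2*sigma))."""
    delta = n ** (-R)                         # mean separation 1/n^R
    sigma = math.sqrt(q * (1 - q) / n)        # per-hypothesis std dev, Theta(1/sqrt(n))
    z = delta / (2 * sigma)
    return 0.5 * math.erfc(z / math.sqrt(2))  # Gaussian tail Q(z)

for n in (10**2, 10**4, 10**6):
    print(n, gaussian_error_estimate(n, R=0.4), gaussian_error_estimate(n, R=0.6))
# R = 0.4 < 1/2: estimate -> 0;  R = 0.6 > 1/2: estimate -> 1/2 (useless test)
```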

  56. Second Moment Method for TV Distance.
      Lemma (2nd Moment Method [EKPS00]).
          ‖P_{T_n|H=1} − P_{T_n|H=0}‖_TV ≥ (E[T_n | H = 1] − E[T_n | H = 0])^2 / (4 VAR(T_n)),
      where ‖P − Q‖_TV = (1/2)‖P − Q‖_1 denotes the total variation (TV) distance between the distributions P and Q.

  57. Proof: let T_n^+ ∼ P_{T_n|H=1} and T_n^− ∼ P_{T_n|H=0}. Then
          (E[T_n^+] − E[T_n^−])^2 = ( \sum_t t (P_{T_n|H}(t|1) − P_{T_n|H}(t|0)) )^2.

  58. Proof (continued): insert \sqrt{P_{T_n}(t)} and its reciprocal inside the sum:
          (E[T_n^+] − E[T_n^−])^2 = ( \sum_t t \sqrt{P_{T_n}(t)} · (P_{T_n|H}(t|1) − P_{T_n|H}(t|0)) / \sqrt{P_{T_n}(t)} )^2.

  59. Proof (continued): by the Cauchy-Schwarz inequality,
          (E[T_n^+] − E[T_n^−])^2 ≤ ( \sum_t t^2 P_{T_n}(t) ) ( \sum_t (P_{T_n|H}(t|1) − P_{T_n|H}(t|0))^2 / P_{T_n}(t) ).

  60. Proof (continued): since T_n is zero-mean, \sum_t t^2 P_{T_n}(t) = VAR(T_n), so the bound becomes
          (E[T_n^+] − E[T_n^−])^2 ≤ VAR(T_n) \sum_t (P_{T_n|H}(t|1) − P_{T_n|H}(t|0))^2 / P_{T_n}(t).

  61. Proof (continued): as in the Hammersley-Chapman-Robbins bound, rewrite this as
          4 VAR(T_n) · (1/4) \sum_t (P_{T_n|H}(t|1) − P_{T_n|H}(t|0))^2 / P_{T_n}(t),
      where the second factor is the Vincze-Le Cam distance between P_{T_n|H=1} and P_{T_n|H=0}.

  62. Proof (continued): the Vincze-Le Cam distance is at most the TV distance, so
          (E[T_n^+] − E[T_n^−])^2 ≤ 4 VAR(T_n) ‖P_{T_n|H=1} − P_{T_n|H=0}‖_TV,
      which rearranges to the lemma. ∎ (A numerical check of the lemma is sketched below.)
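
The lemma is easy to check numerically in the Bernoulli setting of this talk. The sketch below (illustrative parameters) computes the exact TV distance between the two conditional laws of the sufficient statistic and compares it with the second-moment lower bound:

```python
import numpy as np
from scipy.stats import binom

n, R, q = 1000, 0.4, 0.3
k = np.arange(n + 1)
P0 = binom.pmf(k, n, q)                   # law of sum(X_i) under H = 0
P1 = binom.pmf(k, n, q + n ** (-R))       # law of sum(X_i) under H = 1

# T_n is an invertible affine function of sum(X_i), so the TV distance between
# the conditional laws of T_n equals that between the binomial laws above.
tv = 0.5 * np.abs(P1 - P0).sum()

t = k / n - q - 0.5 * n ** (-R)           # values taken by T_n
Pmix = 0.5 * (P0 + P1)                    # law of T_n under the uniform prior
var_T = float((Pmix * t**2).sum())        # T_n is zero-mean, so E[T^2] = VAR(T_n)
bound = (n ** (-R)) ** 2 / (4 * var_T)
print(tv, ">=", bound)
```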

  63. BSC Achievability Proof.
      Proposition (BSC Achievability). For any 0 < R < 1/2, consider the binary hypothesis testing problem with H ∼ Ber(1/2) and, given H = h ∈ {0, 1}, X_1^n with X_i i.i.d. ∼ Ber(q + h/n^R).
      Proof: start with Le Cam's relation,
          P^n_ML = (1/2)(1 − ‖P_{T_n|H=1} − P_{T_n|H=0}‖_TV).

  64. Proof (continued): apply the second moment method lemma:
          P^n_ML ≤ (1/2)(1 − (E[T_n | H = 1] − E[T_n | H = 0])^2 / (4 VAR(T_n))).

  65. Proof (continued): after explicit computation and simplification...

  66. Proof (continued): ...for any 0 < R < 1/2,
          P^n_ML ≤ 3 / (2 n^{1−2R}).

  67. Proof (continued): hence lim_{n→∞} P^n_ML = 0, since 3/(2 n^{1−2R}) → 0 as n → ∞ when R < 1/2.

  68. Conclusion: lim_{n→∞} P^n_ML = 0, which implies C_perm(BSC(p)) ≥ 1/2. (A sketch of the variance computation behind this bound follows below.)
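
One way to carry out the "explicit computation" step (a sketch; the slide's constant 3/2 presumably comes from the speaker's exact bookkeeping, while the crude bounds below already give the same n^{1−2R} decay): decompose the variance under the uniform prior as
\[
\mathrm{VAR}(T_n) = \underbrace{\mathbb{E}[\mathrm{VAR}(T_n \mid H)]}_{\le\, 1/(4n)} + \underbrace{\mathrm{VAR}(\mathbb{E}[T_n \mid H])}_{=\, 1/(4n^{2R})},
\]
so that
\[
1 - \frac{(1/n^R)^2}{4\,\mathrm{VAR}(T_n)} \le 1 - \frac{n^{-2R}}{n^{-1} + n^{-2R}} = \frac{n^{-1}}{n^{-1} + n^{-2R}} \le n^{2R - 1},
\]
and hence P^n_ML ≤ (1/2) n^{−(1−2R)} → 0 for R < 1/2, consistent with the 3/(2 n^{1−2R}) bound on the slide.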

  69. Outline: 1. Introduction; 2. Achievability and Converse for the BSC (Encoder and Decoder; Testing between Converging Hypotheses; Second Moment Method for TV Distance; Fano's Inequality and CLT Approximation); 3. General Achievability Bound; 4. General Converse Bounds; 5. Conclusion.

  70. Recall: Basic Definitions of Information Measures. Consider discrete random variables X ∈ 𝓧 and Y ∈ 𝓨 with joint distribution P_{X,Y}.

  71. - Shannon entropy: H(X) ≜ −\sum_{x∈𝓧} P_X(x) log(P_X(x)).

  72. - Conditional Shannon entropy: H(X|Y) ≜ −\sum_{x∈𝓧} \sum_{y∈𝓨} P_{X,Y}(x, y) log(P_{X|Y}(x|y)).

  73. - Mutual information: I(X; Y) ≜ \sum_{x∈𝓧} \sum_{y∈𝓨} P_{X,Y}(x, y) log( P_{X,Y}(x, y) / (P_X(x) P_Y(y)) ).

  74. - Equivalently, I(X; Y) = H(X) − H(X|Y). (Helper functions computing these quantities are sketched below.)
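
These definitions translate directly into code; a small sketch (in bits, i.e., log base 2) computing entropy and mutual information from a joint pmf matrix:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (or array), in bits."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(Pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint pmf matrix Pxy[x, y]."""
    Px, Py = Pxy.sum(axis=1), Pxy.sum(axis=0)
    return entropy(Px) + entropy(Py) - entropy(Pxy)

Pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
print(mutual_information(Pxy))   # equals H(X) - H(X|Y) as on the slide
```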

  75. Recall: Two Information Inequalities. Consider discrete random variables X, Y, Z that form a Markov chain X → Y → Z.

  76. Lemma (Data Processing Inequality [CT06]). I(X; Z) ≤ I(X; Y), with equality if and only if Z is a sufficient statistic of Y for X, i.e., X → Z → Y also forms a Markov chain.

  77. Lemma (Fano's Inequality [CT06]). If X takes values in the finite alphabet 𝓧, then H(X|Z) ≤ 1 + P(X ≠ Z) log(|𝓧|), where we perceive Z as an estimator of X based on Y.
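
The data processing inequality can likewise be verified on a toy Markov chain X → Y → Z built from two channel matrices (illustrative numbers):

```python
import numpy as np

def mi(Pj):
    """I(X;Y) in bits from a joint pmf matrix Pj[x, y]."""
    Px = Pj.sum(axis=1, keepdims=True)
    Py = Pj.sum(axis=0, keepdims=True)
    nz = Pj > 0
    return float(np.sum(Pj[nz] * np.log2(Pj[nz] / (Px @ Py)[nz])))

Px = np.array([0.5, 0.5])                  # prior on X
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # channel P_{Y|X}
B = np.array([[0.7, 0.3], [0.3, 0.7]])     # channel P_{Z|Y}
Pxy = Px[:, None] * A                      # joint of (X, Y)
Pxz = Pxy @ B                              # joint of (X, Z), marginalizing out Y
print(mi(Pxz), "<=", mi(Pxy))              # data processing inequality
```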
