Lecture 6: Polar Coding


  1. Lecture 6: Polar Coding. I-Hsiang Wang, Department of Electrical Engineering, National Taiwan University, ihwang@ntu.edu.tw. December 5, 2016.

  2. In Pursuit of Shannon's Limit. Since 1948, Shannon's theory has drawn the sharp boundary between the possible and the impossible in data compression and data transmission. Once the fundamental limits are characterized, the next natural question is: how do we achieve these limits with acceptable complexity? For lossless source coding, it did not take long to find optimal schemes with low complexity: Huffman coding (1952), optimal for memoryless sources, and Lempel-Ziv (1977), optimal for stationary ergodic sources. For channel coding and lossy source coding, on the other hand, it turns out to be much harder. Finding codes that achieve Shannon's limit with low complexity has been the holy grail for coding theorists.

  3. In Pursuit of Capacity-Achieving Codes. Two barriers stand in the way of low-complexity capacity-achieving codes: (1) Lack of explicit construction: Shannon's proof only shows that there exist coding schemes that achieve capacity. (2) Lack of structure to reduce complexity: the proofs of the coding theorems neglect complexity, while it is hard to prove that structured codes achieve capacity. Since the 1990s, several practical codes have been found to approach capacity, e.g., turbo codes and low-density parity-check (LDPC) codes. They perform well empirically but lack a rigorous proof of optimality. The first provably capacity-achieving coding scheme with acceptable complexity is the polar code, introduced by Erdal Arıkan in 2007. Later, in 2012, spatially coupled LDPC codes were also shown to achieve capacity (Shrinivas Kudekar, Tom Richardson, and Rüdiger Urbanke).

  4. "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels," Erdal Arıkan, IEEE Transactions on Information Theory, vol. 55, no. 7, July 2009, p. 3051. The paper won the 2010 Information Theory Society Best Paper Award.

  5. Overview. When Arıkan introduced polar codes in 2007, he focused on achieving capacity for general binary-input memoryless symmetric channels (BMSCs), including the BSC, the BEC, etc. Polar codes were later shown to be optimal in many other settings, including lossy source coding, non-binary-input channels, multiple access channels, channel coding with encoder side information (Gelfand-Pinsker), and source coding with side information (Wyner-Ziv). Instead of giving a comprehensive introduction, we focus on polar coding for channel coding. The outline is as follows: (1) we introduce the concept of channel polarization; (2) we explore polar coding for binary-input channels; (3) we briefly discuss polar coding for source coding (source polarization).

  6. Notations. In channel coding, we use the DMC N times, where N is the blocklength of the coding scheme. Since the channel is the main focus, we use the following notation throughout this lecture: W denotes the channel P_{Y|X}, P denotes the input distribution P_X, and I(P, W) denotes I(X; Y). Since we focus on BMSCs, and X ~ Ber(1/2) achieves the channel capacity of any BMSC, we use I(W) (a slight abuse of notation) to denote I(P, W) when the input P is Ber(1/2). In other words, the channel capacity of a BMSC W is I(W).
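To make the notation concrete, here is a minimal sketch (ours, not from the lecture) that computes I(P, W) from an input distribution and a channel transition matrix, and evaluates I(W) for a BSC and a BEC under the uniform (Ber(1/2)) input. The helper name `mutual_information` is our own.

```python
import numpy as np

def mutual_information(P, W):
    """I(P, W) = I(X; Y) in bits, for input distribution P (length |X|)
    and transition matrix W with W[x, y] = Pr(Y = y | X = x)."""
    P, W = np.asarray(P, dtype=float), np.asarray(W, dtype=float)
    PY = P @ W                                   # output distribution
    I = 0.0
    for x in range(W.shape[0]):
        for y in range(W.shape[1]):
            if P[x] > 0 and W[x, y] > 0:
                I += P[x] * W[x, y] * np.log2(W[x, y] / PY[y])
    return I

# I(W) for a BSC(p) and a BEC(eps) with uniform input:
p, eps = 0.11, 0.5
bsc = np.array([[1 - p, p], [p, 1 - p]])
bec = np.array([[1 - eps, eps, 0.0], [0.0, eps, 1 - eps]])  # outputs {0, ?, 1}
uniform = [0.5, 0.5]
print(mutual_information(uniform, bsc))   # equals 1 - H_b(p)
print(mutual_information(uniform, bec))   # equals 1 - eps
```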

  7. Outline: 1 Polarization (Basic Channel Transformation; Channel Polarization); 2 Polar Coding (Encoding and Decoding Architectures; Performance Analysis).

  8. Polarization. [Figure] Single use of channel W: X passes through W to give Y. N uses of channel W: the message M is encoded (ENC) into X_1, ..., X_N, each X_i passes through a copy of W producing Y_1, ..., Y_N, and the decoder (DEC) outputs the estimate M̂.

  9. Polarization: Arıkan's Idea. [Figure] A pre-processing block maps U_1, ..., U_N to the channel inputs X_1, ..., X_N; each X_i passes through a copy of W; a post-processing block maps the outputs Y_1, ..., Y_N to V_1, ..., V_N. Apply special transforms to both input and output.

  10. Polarization: Arıkan's Idea. [Figure] The combined system is equivalent to N synthetic channels W_1, ..., W_N, where W_i takes input U_i and produces output V_i.

  11. Polarization: Arıkan's Idea. [Figure] Among the synthetic channels W_1, ..., W_N, roughly N I(W) channels have capacity close to 1.

  12. Polarization: Arıkan's Idea. [Figure] Roughly N I(W) of the synthetic channels have capacity close to 1, and roughly N (1 - I(W)) have capacity close to 0. Equivalently, we obtain some perfect channels and some useless channels: polarization. Coding then becomes extremely simple: use the perfect channels for uncoded transmission and throw the useless channels away.

  13. Outline reprise; current subsection: Polarization, Basic Channel Transformation.

  14. Arıkan's Basic Channel Transformation. Consider two uses of the channel W: [Figure] X_1 passes through W to give Y_1, and X_2 passes through W to give Y_2.

  15. Arıkan's Basic Channel Transformation. Consider two uses of the channel W, and apply the pre-processor X_1 = U_1 ⊕ U_2, X_2 = U_2, where U_1 ⊥ U_2 and U_1, U_2 ~ Ber(1/2). This induces two synthetic channels: W− : U_1 → V_1 ≜ (Y_1, Y_2) and W+ : U_2 → V_2 ≜ (Y_1, Y_2, U_1). The transform exhibits two crucial phenomena: I(W−) ≤ I(W) ≤ I(W+) (polarization) and I(W−) + I(W+) = 2 I(W) (conservation of information).
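As an illustration (ours, not part of the slides), here is a minimal sketch of one basic transformation step: it draws U_1, U_2 ~ Ber(1/2), applies the pre-processor, sends X_1, X_2 through two independent uses of a BSC(p) standing in for a generic BMSC, and returns the observations available to the synthetic channels W− and W+. The channel choice and function names are our own.

```python
import random

def bsc(x, p):
    """One use of a BSC with crossover probability p (stand-in for a generic BMSC W)."""
    return x ^ (random.random() < p)

def basic_transform_step(p=0.11):
    # i.i.d. uniform inputs to the synthetic channels
    u1, u2 = random.randint(0, 1), random.randint(0, 1)
    # Arikan's pre-processor: X1 = U1 xor U2, X2 = U2
    x1, x2 = u1 ^ u2, u2
    # two independent uses of W
    y1, y2 = bsc(x1, p), bsc(x2, p)
    # W- must guess U1 from (Y1, Y2); W+ must guess U2 from (Y1, Y2, U1)
    return {"u1": u1, "u2": u2, "W-_output": (y1, y2), "W+_output": (y1, y2, u1)}

print(basic_transform_step())
```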

  16. Example 1: Binary Erasure Channel. Let W be a BEC with erasure probability ε ∈ (0, 1), so I(W) = 1 - ε. Find I(W−) and I(W+), and verify the above properties. Solution: intuitively, W− is worse than W and W+ is better than W. For W−, the input is U_1 and the output is (Y_1, Y_2): U_1 can be recovered only when neither Y_1 nor Y_2 is erased, so W− is a BEC with erasure probability 1 - (1 - ε)² = 2ε - ε². For W+, the input is U_2 and the output is (Y_1, Y_2, U_1): U_2 can be recovered as long as at least one of Y_1, Y_2 is not erased, so W+ is a BEC with erasure probability ε². Hence I(W−) = 1 - 2ε + ε² and I(W+) = 1 - ε².
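The following sketch (ours) checks these formulas numerically: for a grid of ε it verifies the ordering I(W−) ≤ I(W) ≤ I(W+) and the conservation I(W−) + I(W+) = 2 I(W).

```python
import numpy as np

eps = np.linspace(0.01, 0.99, 99)
I_W = 1 - eps                      # capacity of BEC(eps)
I_minus = 1 - (2 * eps - eps**2)   # W- is a BEC(2*eps - eps^2)
I_plus = 1 - eps**2                # W+ is a BEC(eps^2)

assert np.all(I_minus <= I_W) and np.all(I_W <= I_plus)   # polarization
assert np.allclose(I_minus + I_plus, 2 * I_W)             # conservation
print("BEC: polarization and conservation verified for eps in (0, 1).")
```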

  17. Example 2: Binary Symmetric Channel. Let W be a BSC with crossover probability p ∈ (0, 1), so I(W) = 1 - H_b(p). Find the values of I(W−) and I(W+).
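The slide leaves this computation as an exercise. The sketch below (ours) uses the standard observation that for a BSC, Y_1 ⊕ Y_2 = U_1 ⊕ (Z_1 ⊕ Z_2) with Z_1 ⊕ Z_2 ~ Ber(2p(1 - p)) and carries all the information about U_1, so I(W−) = 1 - H_b(2p(1 - p)); I(W+) then follows from conservation of information. It also checks the result against the BSC "stretch" formula quoted on the next slide.

```python
import numpy as np

def Hb(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p = 0.11
I_W = 1 - Hb(p)
# Y1 xor Y2 is equivalent to one use of a BSC with crossover 2p(1-p) for U1:
I_minus = 1 - Hb(2 * p * (1 - p))
# Conservation of information gives the other synthetic channel:
I_plus = 2 * I_W - I_minus

print(f"I(W)  = {I_W:.4f}")
print(f"I(W-) = {I_minus:.4f}, I(W+) = {I_plus:.4f}")
# Sanity check against the lower-boundary (BSC) stretch formula of the next slide:
assert np.isclose(I_plus - I_minus, 2 * Hb(2 * p * (1 - p)) - 2 * Hb(p))
```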

  18. Basic Properties. Theorem 1: For any BMSC W and the synthetic channels {W−, W+} induced by Arıkan's basic transformation, (i) I(W−) ≤ I(W) ≤ I(W+), with equality iff I(W) = 0 or 1; (ii) I(W−) + I(W+) = 2 I(W). Proof: We prove the conservation of information first: I(W−) + I(W+) = I(U_1; Y_1, Y_2) + I(U_2; Y_1, Y_2, U_1) = I(U_1; Y_1, Y_2) + I(U_2; Y_1, Y_2 | U_1) = I(U_1, U_2; Y_1, Y_2) = I(X_1, X_2; Y_1, Y_2) = I(X_1; Y_1) + I(X_2; Y_2) = 2 I(W), where the second equality uses U_1 ⊥ U_2, the fourth uses the fact that (U_1, U_2) ↦ (X_1, X_2) is a bijection, and the fifth uses the memorylessness of the channel together with X_1 ⊥ X_2. For the ordering, I(W+) = I(U_2; Y_1, Y_2, U_1) = I(X_2; Y_1, Y_2, U_1) ≥ I(X_2; Y_2) = I(W), and combined with conservation this gives the first property. (The proof of the equality condition is left as an exercise.)

  19. Extremal Channels. If we plot the "information stretch" I(W+) - I(W−) against the original I(W), it turns out that among all BMSCs, the BEC maximizes the stretch and the BSC minimizes it. Lower boundary (BSC): 2 H_b(2p(1 - p)) - 2 H_b(p), where p = H_b^{-1}(1 - I(W)). Upper boundary (BEC): 2 I(W) (1 - I(W)). [Figure: I(W+) - I(W−) in bits versus I(W) in bits, showing the BEC and BSC boundary curves; taken from Chap. 12.1 of Moser [4].]
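For reference, a small sketch (ours) that evaluates both boundary curves over I(W) ∈ (0, 1); the inverse binary entropy H_b^{-1} is computed by bisection on [0, 1/2], and the helper names are our own.

```python
import numpy as np

def Hb(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def Hb_inv(h):
    """Inverse of the binary entropy on [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Hb(mid) < h else (lo, mid)
    return (lo + hi) / 2

I = np.linspace(0.01, 0.99, 99)
upper = 2 * I * (1 - I)                        # BEC (upper) boundary
p = np.array([Hb_inv(1 - i) for i in I])
lower = 2 * Hb(2 * p * (1 - p)) - 2 * Hb(p)    # BSC (lower) boundary

assert np.all(lower <= upper + 1e-9)
print(f"max BEC stretch = {upper.max():.3f} bits, attained at I(W) = 0.5")
```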

  20. Outline reprise; current subsection: Polarization, Channel Polarization.

  21. Recursive Application of Arıkan's Transformation. Duplicate W, apply the transformation, and get W− and W+. [Figure: two copies of W.]

  22. Recursive Application of Arıkan's Transformation. Duplicate W, apply the transformation, and get W− and W+. Then duplicate W− (and W+). [Figure: four copies of W.]

  23. Recursive Application of Arıkan's Transformation. Duplicate W, apply the transformation, and get W− and W+. Duplicate W− (and W+), then apply the transformation on W− to get W−− and W−+. [Figure: four copies of W.]
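To see where this recursion leads, here is a minimal sketch (ours, BEC-specific) that applies the basic transformation n times using the erasure-probability recursion from Example 1 (ε maps to 2ε - ε² on the minus branch and to ε² on the plus branch). After n levels there are N = 2^n synthetic channels, and the fraction with capacity close to 1 approaches I(W) = 1 - ε, illustrating channel polarization.

```python
import numpy as np

def polarize_bec(eps, n):
    """Erasure probabilities of the 2^n synthetic channels after n levels
    of the basic transformation, starting from a BEC(eps)."""
    e = np.array([eps])
    for _ in range(n):
        e = np.concatenate([2 * e - e**2, e**2])   # minus branch, plus branch
    return e

eps, n = 0.5, 12                     # I(W) = 1 - eps = 0.5, N = 4096
e = polarize_bec(eps, n)
I = 1 - e                            # capacities of the synthetic channels
good = np.mean(I > 0.99)             # fraction of near-perfect channels
bad = np.mean(I < 0.01)              # fraction of near-useless channels
print(f"fraction with I ~ 1: {good:.3f}  (limit: I(W) = {1 - eps})")
print(f"fraction with I ~ 0: {bad:.3f}   (limit: 1 - I(W) = {eps})")
```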

