  1. List Decoding for Universal Polar Codes
     Boaz Shuval and Ido Tal

  2. Overview
     - Universal polarization:
       - memoryless setting: Şaşoğlu & Wang, ISIT 2011
       - settings with memory: Shuval & Tal, ISIT 2019
     - Error probability analysis based on successive cancellation (SC) decoding
     - Potential for better results using successive cancellation list (SCL) decoding
     This talk: an efficient implementation of SCL for universal polarization

  3. Setting
     - Communication with uncertainty:
       - Encoder: knows the channel belongs to a set of channels
       - Decoder: knows the channel statistics (e.g., via estimation)
     - Memory:
       - in the channels
       - in the input distribution
     - A universal polar code achieves:
       - vanishing error probability over the set
       - the best rate (the infimal information rate over the set)

  4. List decoding improves probability of error!
     [Figure, built up over several animation frames: word error rate versus capacity for the BSC, BEC, and AWGN channels. Each plot compares the SC upper bound with SCL performance at list sizes L = 1, 2, 4, 8, 16; the word-error-rate axis runs from 10^0 down to 10^-5 and the capacity axis from 0.25 to 0.5, with larger lists giving lower error rates.]

  5. SCL in a nutshell
     - Parameter: list size L
     - Successively decode u_0, u_1, ..., u_{N-1}
     - Decoding path: the sequence of decoding decisions made so far, û^{i-1} = û_0 û_1 ... û_{i-1}
     - If u_i is not frozen, split each decoding path into two paths, (û^{i-1}, 0) and (û^{i-1}, 1)
     - For each i, keep the L best decoding paths
     - Final decoding: choose the best path
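For readers who want the loop in code: below is a minimal, self-contained sketch of the generic SCL procedure just described. The function branch_score is a hypothetical stand-in for the per-bit likelihood computation (which a real polar decoder obtains from the transform layers); it and the frozen set are illustrative assumptions, not the talk's implementation.

    # Minimal sketch of the SCL loop. `branch_score` and `frozen` are
    # illustrative assumptions, not the talk's actual code.
    def scl_decode(N, frozen, branch_score, L):
        """Track up to L decoding paths while deciding u_0, ..., u_{N-1}.

        frozen: set of indices whose bit is known (taken to be 0 here).
        branch_score(bits, b): penalty for extending path `bits` with bit b;
                               a lower total penalty means a likelier path.
        """
        paths = [((), 0.0)]  # (decided bits, path metric)
        for i in range(N):
            candidates = []
            for bits, metric in paths:
                options = (0,) if i in frozen else (0, 1)  # split if not frozen
                for b in options:
                    candidates.append((bits + (b,), metric + branch_score(bits, b)))
            candidates.sort(key=lambda c: c[1])  # for each i, keep the L best
            paths = candidates[:L]
        return min(paths, key=lambda p: p[1])[0]  # final decoding: best path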

  6. The Universal Construction
     - Concatenation of two transform blocks
     [Block diagram, built up over two frames: U_0^{N-1} enters the fast transform, then the slow transform, then the channel, which outputs Y_0^{N-1}; X_0^{N-1} is the channel input, and the label "universal" spans the slow transform and the channel.]
     - Slow — for universality
     - Fast (Arıkan) — for vanishing error probability
     - Each block contains multiple copies of the basic transforms
     - Polar: certain indices of U_0^{N-1} are data bits, the other indices are "frozen" (Honda & Yamamoto 2013)
     - Universal: data bit locations stay the same regardless of the channel
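For concreteness, the fast block's basic operation is Arıkan's 2x2 kernel F = [1 0; 1 1], and n recursive stages map N = 2^n inputs as x = u F^{⊗n} over GF(2) (the common bit-reversed variant differs only by a permutation). The butterfly sketch below computes that standard mapping; the slow transform's kernel is not spelled out on these slides, so only the fast block is shown.

    def arikan_transform(u):
        """Compute x = u F^{\otimes n} over GF(2), F = [[1, 0], [1, 1]].

        len(u) must be a power of two. Each stage XORs pairs at stride
        `half`; this butterfly is the structure the fast block repeats.
        """
        x = list(u)
        n = len(x)
        half = 1
        while half < n:
            for start in range(0, n, 2 * half):
                for j in range(start, start + half):
                    x[j] ^= x[j + half]  # (a, b) -> (a XOR b, b)
            half *= 2
        return x

    # Example: arikan_transform([1, 0, 1, 1]) == [1, 1, 0, 1]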

  7. Fast Transform vs. Slow Transform
     [Diagrams, built up over several frames, of the basic fast (Arıkan) transform and the basic slow transform, side by side. The slow transform's terminals are labeled "lateral" and "medial", and layer indices i and i+1 mark which layers each transform connects.]

  8. The Rules of Slow
     - Any medial on the left is a child of a medial− and a medial+ on the right
     - An update of a medial on the left updates both the medial− and the medial+ on the right
     Corollary: an update of a medial on the left results in a cascade of updates, all the way to the channel on the far right.
     [Diagram, built up over several frames: a tree of lateral, medial−, and medial+ nodes illustrating these rules.]
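To make the corollary concrete, here is a toy model (an assumption for illustration; the slide does not give the graph's indexing) in which each step of the cascade touches one medial− and one medial+ in the next layer to the right, consistent with the later observation that at most two indices per layer are updated.

    def cascade(n_layers):
        """Log the updates triggered by one medial update at the far left."""
        log = [(0, "medial")]              # the update that starts the cascade
        for layer in range(1, n_layers):
            log.append((layer, "medial-"))  # first parent in the next layer
            log.append((layer, "medial+"))  # second parent in the next layer
        log.append((n_layers, "channel"))   # the cascade reaches the channel
        return log

    print(cascade(3))
    # [(0, 'medial'), (1, 'medial-'), (1, 'medial+'),
    #  (2, 'medial-'), (2, 'medial+'), (3, 'channel')]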

  9. List Decoding — Fast Transform vs. Slow Transform
     - Fast transform: the layer at depth λ gets updated once every 2^λ writes to the U vector, so deep layers get updated infrequently. Good news for sharing data between list decoding paths.
     - Slow transform: the layer at depth λ gets updated every other write to the U vector, so all layers get updated frequently. Bad news for sharing data between list decoding paths.

  10. Is All Hope Lost?
      - Fast transform: the layer at depth λ is updated once every 2^λ writes; when it is, 2^λ indices are updated at once
      - Slow transform: the layer at depth λ is updated at every other write; when it is, at most two indices are updated
      Our Saving Grace: the updates are cyclic!
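A small, illustrative tally (not from the talk) contrasts the two schedules just described: per layer, both transforms do comparable total work over a run of writes, but the fast transform batches it into rare, large, easily shared updates, while the slow transform spreads it into constant small ones.

    def tally(writes, n_layers):
        """Print (touches, indices per touch) per layer for both regimes."""
        for lam in range(n_layers):
            fast = (writes // 2 ** lam, 2 ** lam)  # rare, large updates
            slow = (writes // 2, 2)                # every other write, <= 2 indices
            print(f"depth {lam}: fast {fast[0]} x {fast[1]}, "
                  f"slow {slow[0]} x {slow[1]}")

    tally(writes=16, n_layers=4)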

  11. A Bespoke Data Structure
      - Aim: mimic the fast transform's update regime
        - frequent updates are small
        - infrequent updates are large
      - Bespoke data structure: the Cyclic Exponential Array
      - Main idea: an array comprised of sub-arrays, growing exponentially in length (see the code sketch after the example below)

  12. Example
      [Animation, one write per frame: values 0, 1, 2, ... are written successively. Two views are shown side by side: the "entire array" as a flat vector and the corresponding cyclic exponential array, with the last written value marked. In the final frame both views read 0 1 2 3 4 4 5 (note the repeated 4).]
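Finally, a guessed sketch of what a Cyclic Exponential Array might look like in code. Everything here (class name, API, the exact cyclic-write rule) is an assumption built only from the stated main idea; the slide's final frame shows a repeated value (... 4 4 5), which hints at an overwrite rule this sketch does not capture.

    # Guessed sketch: the logical array is split into sub-arrays of lengths
    # 1, 2, 4, ..., and writes advance cyclically, so a sub-array of length
    # 2^k is fully rewritten only about once every 2^k writes -- frequent
    # updates are small, infrequent ones large, as in the fast transform.
    class CyclicExponentialArray:
        def __init__(self, total_len):
            self.subs = []                  # sub-arrays of length 1, 2, 4, ...
            size = 1
            while total_len > 0:
                self.subs.append([None] * min(size, total_len))
                total_len -= len(self.subs[-1])
                size *= 2
            self.length = sum(len(s) for s in self.subs)
            self.pos = 0                    # cyclic write position

        def _locate(self, i):
            """Map logical index i to (sub-array, offset within it)."""
            for s in self.subs:
                if i < len(s):
                    return s, i
                i -= len(s)
            raise IndexError(i)

        def write_next(self, value):
            """Write at the cyclic position, then advance it (wrapping)."""
            s, off = self._locate(self.pos)
            s[off] = value
            self.pos = (self.pos + 1) % self.length

        def read(self, i):
            s, off = self._locate(i)
            return s[off]

    # Usage: seven successive writes fill the sub-arrays in cyclic order.
    cea = CyclicExponentialArray(7)
    for v in range(7):
        cea.write_next(v)
    print(cea.subs)  # [[0], [1, 2], [3, 4, 5, 6]]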
