

  1. Polar Codes: Speed of Polarization & Polynomial Gap to Capacity. Venkatesan Guruswami, Carnegie Mellon University (currently visiting Microsoft Research New England). Based on joint work with Patrick Xia. Charles River Science of Information Day, MIT, April 28, 2014

  2. Discrete Memoryless Channel
A discrete channel W takes an input x from a finite input alphabet 𝓨 and produces an output y from a finite output alphabet 𝓩, with y ~ W(∙ | x).
Example transition probabilities (inputs {0, 1}, outputs {a, b, c, d}):
        a     b     c     d
  0    0.1   0.4   0.2   0.3
  1    0.4   0.1   0.3   0.2
Memoryless channels: the channel's behavior on the i'th bit is independent of the rest.
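To make the definition concrete, here is a minimal Python sketch (not part of the talk) of a DMC as a table of transition probabilities W(y|x), applied independently to each input symbol; the probabilities are the toy values from the slide.

```python
# A minimal sketch of a discrete memoryless channel: a row of the table gives
# Pr[output y | input x], and the channel is applied independently per symbol.
import random

W = {
    0: {'a': 0.1, 'b': 0.4, 'c': 0.2, 'd': 0.3},
    1: {'a': 0.4, 'b': 0.1, 'c': 0.3, 'd': 0.2},
}

def transmit(x_seq, channel=W):
    """Apply the channel independently to each input symbol (memorylessness)."""
    out = []
    for x in x_seq:
        symbols, probs = zip(*channel[x].items())
        out.append(random.choices(symbols, weights=probs, k=1)[0])
    return out

print(transmit([0, 1, 1, 0]))   # e.g. ['b', 'c', 'a', 'd']
```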

  3. Noisy Coding Theorem [Shannon '48]
Every discrete memoryless channel W has a capacity I(W) such that one can communicate at asymptotic rate I(W) - ε with vanishing probability of miscommunication, for any desired gap to capacity ε > 0. Conversely, reliable communication is not possible at rate I(W) + ε.
Asymptotic rate: communicate (I(W) - ε)N bits in N uses of the channel, in the limit of large block length N.
[Figure: a message m ∈ ℳ is encoded into x_1, ..., x_N, sent through the DMC W to produce y_1, ..., y_N, and decoded to m' (hopefully = m).] Rate = (log |ℳ|)/N.

  4. Shannon's Theorem
Shows that (if the channel isn't completely noisy) a constant-factor overhead suffices for negligible decoding error probability, provided we tolerate some delay.
• Delay/block length N ≈ 1/ε^2 suffices for rate within ε of capacity
• Miscommunication probability ≈ exp(-ε^2 N)

  5. Binary Memoryless Symmetric (BMS) channel
• 𝓨 = {0,1} (binary inputs)
• Symmetric: output symbols can be paired up {y, y'} such that W(y|b) = W(y'|1-b)
Most important example: BSC_p (binary symmetric channel with crossover probability p), which flips each input bit with probability p: W(1|0) = W(0|1) = p and W(0|0) = W(1|1) = 1-p.

  6. Capacity of BMS channels
Denote H(W) := H(X|Y), where X ~ Uniform{0,1} and Y ~ W(·|X). Shannon capacity I(W) = 1 - H(W).
Two well-known examples:
• BEC_α (binary erasure channel): each bit is erased to '?' with probability α and delivered intact with probability 1-α. Capacity = 1 - α.
• BSC_p: each bit is flipped with probability p and delivered intact with probability 1-p. Capacity = 1 - h(p).
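As a small worked example (Python, not from the talk), the two capacity formulas quoted above can be evaluated directly; h is the binary entropy function.

```python
# Capacities quoted on the slide: BEC(alpha) has capacity 1 - alpha,
# BSC(p) has capacity 1 - h(p), with h the binary entropy function.
from math import log2

def h(p):
    """Binary entropy in bits; h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bec(alpha):
    return 1 - alpha

def capacity_bsc(p):
    return 1 - h(p)

print(capacity_bec(0.3))   # 0.7
print(capacity_bsc(0.11))  # ~0.5
```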

  7. Realizing Shannon
• Shannon's theorem is non-constructive: random codes, exponential-time decoding
★ Challenge: explicit coding schemes with efficient encoding/decoding algorithms to communicate at information rates ≈ capacity
‣ Has occupied coding & information theorists for 60+ years

  8. "Achieving" capacity
In the asymptotic limit of large block length N, it is not hard to approach capacity within any fixed ε > 0.
✦ Code concatenation (Forney '66): an outer code of rate ≈ 1 - ε over blocks of B ≈ 1/ε^2 bits, able to correct a positive fraction of worst-case errors, concatenated with an ensemble of inner codes of rate ≈ capacity - ε.
Decoding time ≈ N·exp(1/ε^2) (brute-force maximum-likelihood decoding of the inner blocks). Complexity scales poorly with the gap ε to capacity.

  9. Achieving capacity: a precise theoretical formalism
Given channel W and desired gap to capacity ε, construct Enc : {0,1}^RN → {0,1}^N and Dec : {0,1}^N → {0,1}^RN for rate R = I(W) - ε such that
• ∀ messages m, Pr[Dec(W(Enc(m))) ≠ m] ≪ ε (say ε^100)
• Block length N ≤ poly(1/ε)
• Runtime of Enc and Dec bounded by poly(1/ε)
That is, we seek complexity polynomially bounded in the single parameter ε, the gap to capacity.

  10. Our Main Result
Polar codes [Arikan, 2008] give a solution to this challenge: deterministic polynomial-time constructible binary linear codes approaching the capacity of BMS channels W within ε, with complexity O(N log N), for block length N ≥ (1/ε)^c.
‣ c = absolute constant independent of W
‣ Decoding error probability exp(-N^0.49)
✦ The first (and so far only) construction to achieve capacity with such a theoretically proven guarantee.
✦ Provides a complexity-theoretic basis for the statement "polar codes are the first constructive capacity-achieving codes".

  11. Other "capacity achievers"
• Forney's concatenated codes (1966) - decoding complexity exp(1/ε) due to the brute-force inner decoder
• LDPC codes + variants (Gallager 1963, revived ~1995 onwards) - proven to approach capacity arbitrarily closely only for erasures; an ensemble to draw from, rather than explicit codes
• Turbo codes (1993) - excellent empirical performance; not known to approach capacity arbitrarily closely as block length N → ∞
• Spatially coupled LDPC codes (Kudekar-Richardson-Urbanke, 2012) - asymptotically achieve the capacity of all BMS channels! Polynomial convergence to the limit not yet known

  12. Weren't polar codes already shown to achieve capacity?
• Yes, in the limit of large block length
‣ Can approach rate I(W) as N → ∞ [Arikan]
• We need to bound the speed of convergence to capacity
‣ What block length N = N(ε) is needed for rate I(W) - ε?
• We show N(ε) ≤ poly(1/ε)
‣ Mentioned as an open problem, e.g., in [Korada '09; Kudekar-Richardson-Urbanke '12; Shpilka '12; Tal-Vardy '13]
‣ Independently shown in [Hassani-Alishahi-Urbanke '13]

  13. Finite-length analysis
• The asymptotic nature of previous analyses is due to the use of a convergence theorem for supermartingales
• We give an elementary analysis, leading to effective bounds on the speed of convergence

  14. Roadmap
• Polarizing matrices & capacity-achieving codes
• Arikan's recursive polarizing matrix construction
• Analysis: rough polarization
• Remaining issues, fine polarization

  15. Source coding setting & polarization
Focus on BSC_p. Suppose C ⊂ {0,1}^N is a linear code of rate R ≈ 1 - h(p); C is the kernel of a (1-R)N × N parity-check matrix H_N, i.e., C = {c ∈ {0,1}^N : H_N c = 0}.
C is a good channel code for BSC_p ⟺ H_N gives an optimal lossless source code for compressing a Bernoulli(p) source:
• x_0 x_1 … x_{N-1} are i.i.d. samples from the source X = Bernoulli(p)
• They can be recovered w.h.p. from the ≈ h(p)N bits H_N (x_0 x_1 … x_{N-1})^T
If we complete the rows of H_N to a basis, the resulting N × N invertible matrix P_N is "polarizing".
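The compression direction of this equivalence is easy to illustrate. The sketch below (Python, illustrative only) compresses a Bernoulli(p) block to its syndrome H·x over GF(2); the matrix H here is a random placeholder rather than the parity-check matrix of a good code, so efficient recovery, which is what the polarizing construction later provides, is omitted.

```python
# Hedged sketch of syndrome-based source compression: N source bits are mapped
# to ~ h(p)*N syndrome bits. H below is a *random* stand-in, not a good code.
import numpy as np

p, N = 0.11, 1024
rows = 512                                    # ~ h(p)*N bits, since h(0.11) ~ 0.5

rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(rows, N))        # placeholder parity-check matrix
x = (rng.random(N) < p).astype(int)           # Bernoulli(p) source block

syndrome = H @ x % 2                          # compressed representation H x^T
print(f"{N} source bits -> {rows} syndrome bits")
```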

  16. Coding needs polarization
Source coding setting: X_0, X_1, …, X_{N-1} are i.i.d. copies of X, and the invertible matrix P_N maps (X_0, …, X_{N-1}) to (U_0, …, U_{N-1}).
‣ (For general channel coding, work with the conditional r.v.'s X_i | Y_i and handle some subtleties.)
P_N has the following polarizing property: for all but a vanishing fraction of indices i, the conditional entropy H(U_i | U_0, …, U_{i-1}) is close to either 0 or 1.
Polarizing matrices are implied by linear capacity-achieving codes.

  17. Insights in Polar Coding
[Figure: a polarizing invertible matrix maps (X_0, …, X_{N-1}) to (U_0, …, U_{N-1}).]
1. Sufficiency of such matrices
‣ No need to output U_i for good indices i (when H(U_i | U_0 … U_{i-1}) ≈ 0)
2. Recursive construction of polarizing matrices, along with a low-complexity decoder

  18. 2 × 2 polarization
Suppose X ~ Bernoulli(p), and let X_0, X_1 be two i.i.d. copies with U_0 = X_0 ⊕ X_1 and U_1 = X_1. Then
• H(U_0) = h(2p(1-p)) > h(p) (unless h(p) = 0 or 1)
• H(U_1 | U_0) = 2h(p) - H(U_0) < h(p)
If X is neither fully deterministic nor fully random, the two output entropies are separated from each other.
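A quick numeric check of these two entropy formulas (Python, assuming U_0 = X_0 ⊕ X_1 and U_1 = X_1 as above):

```python
# Numeric check of the 2x2 polarization step for two i.i.d. Bernoulli(p) bits:
# H(U0) = h(2p(1-p)) rises above h(p), H(U1 | U0) = 2h(p) - H(U0) drops below it,
# and the total entropy stays at 2h(p).
from math import log2

def h(q):
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

p = 0.11
H_U0 = h(2 * p * (1 - p))        # entropy of the xor of two Bernoulli(p) bits
H_U1_given_U0 = 2 * h(p) - H_U0  # chain rule: H(U0) + H(U1|U0) = H(X0) + H(X1)

print(f"h(p)       = {h(p):.4f}")
print(f"H(U0)      = {H_U0:.4f}   (larger)")
print(f"H(U1 | U0) = {H_U1_given_U0:.4f}   (smaller)")
```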

  19. An explicit polarizing matrix [Arikan] (for N = 2^n)
[Figure: the matrix, built by recursion from the 2 × 2 kernel G_2 = (1 0; 1 1), shown for n = 2.]
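For readers who want the matrix itself, the following Python sketch builds the standard Arikan matrix B_n·G_2^{⊗n} over GF(2); the exact row/column convention of the picture on the slide may differ.

```python
# Sketch of the standard Arikan construction: the n-fold Kronecker power of
# G2 = [[1,0],[1,1]] composed with the bit-reversal permutation B_n, over GF(2).
import numpy as np

G2 = np.array([[1, 0],
               [1, 1]], dtype=int)

def bit_reversal_perm(n):
    """Permutation sending index i to the index with reversed n-bit binary expansion."""
    N = 1 << n
    return [int(format(i, f'0{n}b')[::-1], 2) for i in range(N)]

def arikan_matrix(n):
    K = np.array([[1]], dtype=int)
    for _ in range(n):
        K = np.kron(K, G2) % 2                               # G2^{(kron) n}
    B = np.eye(1 << n, dtype=int)[bit_reversal_perm(n)]      # bit-reversal permutation matrix
    return B @ K % 2

print(arikan_matrix(2))   # a 4 x 4 example (n = 2), up to the slide's row/column convention
```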

  20. Recursive Polarization
[Figure: the N = 4 circuit — one G_2 box maps (X_0, X_1) to (V_0, V_1) and another maps (X_2, X_3) to (T_0, T_1); a second layer of G_2 boxes maps (V_0, T_0) to (U_0, U_1) and (V_1, T_1) to (U_2, U_3).]
The pairs (V_0, V_1) and (T_0, T_1) are i.i.d.
General recursion: apply G_2 to (V_i, T_i) to obtain (U_{2i}, U_{2i+1}), where (V_0, …, V_{N/2-1}) and (T_0, …, T_{N/2-1}) come from two independent copies of the half-size construction. B_n = bit reversal permutation.
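The recursive structure is what makes the transform cheap to apply. Below is a hedged Python sketch of the standard O(N log N) butterfly evaluation of u ↦ u·G_2^{⊗n} (row-vector convention, bit-reversal permutation omitted); it is one common way to implement the recursion, not necessarily the exact indexing used on the slide.

```python
# O(N log N) evaluation of the n-fold Kronecker power of G2 = [[1,0],[1,1]]
# over GF(2), in the row-vector convention x = u . G2^{(kron) n}.
def polar_transform(bits):
    """Apply log N butterfly stages; each stage xors one half of a pair into the other."""
    x = list(bits)
    N = len(x)
    step = 1
    while step < N:
        for start in range(0, N, 2 * step):
            for i in range(start, start + step):
                x[i] ^= x[i + step]     # the G2 "xor" branch of each butterfly
        step *= 2
    return x

print(polar_transform([1, 0, 1, 1]))    # expected: [1, 1, 0, 1]
```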

  21. Proof idea
Channel = a pair W of (correlated) random variables (A; B), with A ∈ {0,1}.
• Channel entropy H(W) = H(A|B)
Abstracting each step in the recursion (channel splitting):
‣ Given a channel W = (A; B)
‣ Take two i.i.d. copies (A_0; B_0) and (A_1; B_1) of W
‣ Output two channels W− = (A_0+A_1; B_0, B_1) and W+ = (A_1; A_0+A_1, B_0, B_1)
Then H(W−) + H(W+) = 2H(W) and H(W+) ≤ H(W) ≤ H(W−).
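A concrete instance (Python): for the erasure channel the split has a well-known closed form, W− = BEC(2α - α^2) and W+ = BEC(α^2), which is not stated on the slide but lets us check the two relations exactly, since H(BEC(α)) = α.

```python
# Sanity check of channel splitting for W = BEC(a), where the channel entropy
# H(W) = H(A|B) equals the erasure probability a.
def bec_split(a):
    """Return the erasure probabilities of (W-, W+) when W = BEC(a)."""
    return 2 * a - a * a, a * a

a = 0.3
minus, plus = bec_split(a)
assert abs((minus + plus) - 2 * a) < 1e-12     # H(W-) + H(W+) = 2 H(W)
assert plus <= a <= minus                      # H(W+) <= H(W) <= H(W-)
print(f"H(W) = {a}, H(W-) = {minus}, H(W+) = {plus}")
```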

  22. Channels produced by recursion
Input = 2^n i.i.d. copies of W (here W = (X; 0) where X is the source, so H(W) = H(X)).
The channels evolve along a binary tree: W splits into W− and W+, these split into W−−, W−+, W+−, W++, and so on, down to the 2^n channels W^(s) for sign sequences s ∈ {−,+}^n at depth n.
Therefore H(U_i | U_0, U_1, …, U_{i-1}) = H(W^(s(i))), where s(i) is the sign sequence determined by the index i.
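The tree of channels can be enumerated explicitly for the BEC, again using the closed-form erasure split as a stand-in for a general BMS channel (Python sketch, not from the talk):

```python
# Enumerate the entropies of all 2^n depth-n channels in the tree for W = BEC(a),
# using the split W- = BEC(2a - a^2), W+ = BEC(a^2). Each leaf corresponds to one
# sign sequence, i.e. to one conditional entropy H(U_i | U_0,...,U_{i-1}).
def channel_entropies(a, n):
    """Erasure probabilities (= channel entropies) of all 2^n depth-n channels."""
    hs = [a]
    for _ in range(n):
        nxt = []
        for h in hs:
            nxt.append(2 * h - h * h)   # the "-" child
            nxt.append(h * h)           # the "+" child
        hs = nxt
    return hs

hs = channel_entropies(0.5, 10)          # W = BEC(0.5), 10 recursion levels
near0 = sum(h < 0.01 for h in hs)
near1 = sum(h > 0.99 for h in hs)
print(f"{len(hs)} channels: {near0} with H < 0.01, {near1} with H > 0.99")
```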

  23. Polarization: Asymptotic Analysis
Consider a random walk down the tree of channels, moving left (−) or right (+) uniformly at random at each step. Let H_n be the r.v. equal to the entropy of the channel reached at depth n.
‣ H_0, H_1, H_2, … is a bounded martingale
⟹ it converges almost surely to a r.v. H_∞ (martingale convergence theorem).
H_∞ is {0,1}-valued: the only fixed points of the entropy evolution H(W) → H(W−), H(W+) are 0 and 1 (deterministic / fully noisy channels).
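A small simulation of this random walk for W = BEC(0.5) (Python, using the same erasure split as above) illustrates both the martingale property, the empirical mean of H_n staying near H_0, and the clustering of H_n near 0 and 1:

```python
# Simulate uniformly random root-to-leaf walks in the BEC channel tree.
import random

def random_walk_entropy(a, n):
    """Entropy of the channel reached after n random -/+ steps, starting from BEC(a)."""
    h = a
    for _ in range(n):
        h = 2 * h - h * h if random.random() < 0.5 else h * h   # minus / plus child
    return h

random.seed(1)
samples = [random_walk_entropy(0.5, 12) for _ in range(20000)]
mean = sum(samples) / len(samples)
polarized = sum(s < 0.05 or s > 0.95 for s in samples) / len(samples)
print(f"empirical E[H_n] = {mean:.3f}  (martingale: should stay near 0.5)")
print(f"fraction of walks with H_n near 0 or 1: {polarized:.3f}")
```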

  24. Entropy increase lemma [Sasoglu]
If H(W) ∈ (δ, 1-δ) for some δ > 0, then H(W−) ≥ H(W) + γ(δ) for some γ(δ) > 0.
That is, if (X_1, Y_1), (X_2, Y_2) are i.i.d. with X_i ∈ {0,1} and H(X_i | Y_i) ∈ (δ, 1-δ), then H(X_1+X_2 | Y_1, Y_2) ≥ H(X_1 | Y_1) + γ(δ).
Note: we saw this for X_i ~ Bernoulli(p) (without any Y_i) earlier: h(2p(1-p)) > h(p) unless h(p) ∈ {0,1}.

  25. Polarization: A direct analysis
Lemma: there is a Λ < 1 such that for all "channels" W: (✺)
Corollary: n = O(log(1/ε)) recursive steps (and thus N = poly(1/ε)) suffice for Pr[H_n(1-H_n) ≥ ε] ≤ ε (and therefore Pr[H_n ≤ ε] ≥ 1 - H(X) - ε)
‣ "rough polarization"
The proof of the Lemma has two steps:
1. H(W−) − H(W) ≥ θ·H(W)(1-H(W)) for some θ > 0
   • a quantitative version of the entropy increase lemma
2. Use 1. plus calculations to deduce (✺)
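A small numeric sketch of this corollary for W = BEC(0.5) (Python; the closed-form erasure split is an assumption standing in for a general W, and the constants are not taken from the paper): find the smallest depth n at which the fraction of channels with H_n(1-H_n) ≥ ε drops to ≤ ε, and observe it growing with log(1/ε), consistent with n = O(log(1/ε)).

```python
# Rough-polarization depth for the BEC channel tree: smallest n with
# Pr[H_n (1 - H_n) >= eps] <= eps, where the probability is uniform over leaves.
from math import log2

def depth_needed(a, eps, max_n=22):
    hs = [a]
    for n in range(1, max_n + 1):
        hs = [v for h in hs for v in (2 * h - h * h, h * h)]   # erasure-channel split
        if sum(h * (1 - h) >= eps for h in hs) / len(hs) <= eps:
            return n
    return None   # not reached within max_n levels

for eps in (0.1, 0.05, 0.03):
    print(f"eps = {eps:<5} log2(1/eps) = {log2(1/eps):4.2f}  depth needed: {depth_needed(0.5, eps)}")
```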
