An Upgrading Algorithm with Optimal Power Law (Or Ordentlich, Ido Tal)


  1. An Upgrading Algorithm with Optimal Power Law
     Or Ordentlich (Hebrew University), Ido Tal (Technion)

  2. Big picture first
     In this talk:
     ◮ An upgrading algorithm for channels with non-binary input
     ◮ Optimal power law
     ◮ Achieved by reduction to the binary-input case
     ◮ Important for constructing polar codes

  3. Constructing vanilla polar codes
     ◮ Underlying channel: a binary-input, symmetric, memoryless channel W : X → Y, where X = {0, 1}
     ◮ Derive N = 2^n synthetic channels W_j^(n) : X → Y^N × X^(j−1), where 1 ≤ j ≤ N
     ◮ Constructing a vanilla polar code ≡ finding which synthetic channels W_j^(n) are 'almost noiseless'
     ◮ Problem: output alphabet Y^N × X^(j−1) is intractably large
     ◮ Solution (a minimal degrading sketch follows this slide):
       ◮ Replace W_j^(n) with Q_j^(n), having output alphabet size L
       ◮ Have Q_j^(n) be (stochastically) degraded with respect to W_j^(n)
     ◮ [Figure: W_j^(n) with intractably large output → Φ → Q_j^(n) with output size = L]
     ◮ Q_j^(n) almost noiseless ⇒ W_j^(n) almost noiseless
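     To make the degrading step concrete, here is a minimal sketch in Python: merging output symbols of a channel always yields a stochastically degraded channel, so any merge schedule that stops at L output symbols produces a valid Q ≤ W. The function name, the matrix representation, and the closest-likelihood-ratio heuristic are illustrative choices, not the specific merging procedure used in the talk or in the polar-coding literature.

        import numpy as np

        def degrade_by_merging(p_y_given_x, L):
            # p_y_given_x: shape (2, M) matrix of a binary-input channel W,
            # each row summing to 1.  Repeatedly merge the two output symbols
            # whose likelihood ratios are closest, until at most L remain.
            cols = [p_y_given_x[:, j].astype(float) for j in range(p_y_given_x.shape[1])]
            cols.sort(key=lambda c: c[0] / (c[0] + c[1] + 1e-300))   # sort by LR
            while len(cols) > L:
                lr = [c[0] / (c[0] + c[1] + 1e-300) for c in cols]
                i = int(np.argmin(np.diff(lr)))        # adjacent pair with closest LRs
                cols[i:i + 2] = [cols[i] + cols[i + 1]]  # merging keeps the LR order sorted
            return np.column_stack(cols)               # a degraded channel Q with <= L outputs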

  4. Constructing vanilla polar codes
     ◮ We write Q ≤ W if Q is degraded with respect to W
     ◮ Alternatively, we write W ≥ Q and say that W is upgraded with respect to Q
     ◮ Previous slide: Q_j^(n) ≤ W_j^(n)
     ◮ We can also approximate W_j^(n) "from above" by an upgraded channel R_j^(n) having output alphabet size at most L
     ◮ Sandwich property: Q_j^(n) ≤ W_j^(n) ≤ R_j^(n) (one illustrative use is sketched after this slide)
     ◮ In the vanilla setting, R_j^(n) has secondary importance...
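     One way the sandwich is typically exploited, written out as a small sketch (the function and variable names are mine, and the error-probability proxy is an illustrative choice, not taken from the talk): degrading can only worsen a channel and upgrading can only improve it, so quality estimates computed on Q_j^(n) and R_j^(n) bracket that of W_j^(n), and a synthetic channel can be certified good or bad without ever touching its intractable output alphabet.

        def classify_synthetic_channels(pe_Q, pe_R, threshold):
            # pe_Q[j], pe_R[j]: error-probability proxies on the degraded and
            # upgraded approximations of synthetic channel j, so that
            # pe_R[j] <= pe_W[j] <= pe_Q[j].
            good, bad, undecided = [], [], []
            for j in range(len(pe_Q)):
                if pe_Q[j] <= threshold:     # even the pessimistic bound is small
                    good.append(j)
                elif pe_R[j] > threshold:    # even the optimistic bound is large
                    bad.append(j)
                else:                        # the sandwich is too loose to decide
                    undecided.append(j)
            return good, bad, undecided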

  5. Constructing generalized polar codes
     ◮ Polar codes have been generalized beyond the vanilla setting:
       ◮ Asymmetric channels (with an asymmetric input distribution)
       ◮ Wiretap channels
       ◮ Channels with memory (the input distribution can have memory as well)
     ◮ In all these settings, upgrading is as important as degrading for constructing the code
     ◮ For settings with memory, the "effective input alphabet" is non-binary

  6. Problem statement
     ◮ Given: the joint distribution of channel and input, P_{X,Y}(x, y)
       ◮ x ∈ X, the input alphabet, and y ∈ Y, the output alphabet
       ◮ P_{X,Y}(x, y) = P_X(x) · P_{Y|X}(y|x), i.e., input distribution times channel
     ◮ Find: P*_{X,Z,Y}(x, z, y) such that
       ◮ Marginalization: Σ_z P*_{X,Z,Y}(x, z, y) = P_{X,Y}(x, y)
       ◮ Upgrading: X − Z − Y is a Markov chain
       ◮ Tractable output alphabet size: z ∈ Z and |Z| ≤ L
     ◮ [Figure: X → W → Y on the left; X → R → Z → Φ → Y on the right, so that Φ applied to Z reproduces W]
     ◮ Figure of merit: H(X|Y) − H(X|Z) = I(X; Z) − I(X; Y) should be 'small' (a numerical check of these requirements is sketched after this slide)
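     To pin down what is being asked, here is a small numerical check of the three requirements and of the figure of merit, assuming the distributions are stored as NumPy arrays P_xy (shape |X| x |Y|) and P_xzy (shape |X| x |Z| x |Y|); the array names and the tolerance are illustrative choices of this sketch.

        import numpy as np

        def cond_entropy(p_ab):
            # H(A | B) in bits for a joint distribution given as p_ab[a, b].
            h, p_b = 0.0, p_ab.sum(axis=0)
            for a in range(p_ab.shape[0]):
                for b in range(p_ab.shape[1]):
                    if p_ab[a, b] > 0:
                        h -= p_ab[a, b] * np.log2(p_ab[a, b] / p_b[b])
            return h

        def check_upgrade(P_xy, P_xzy, L, tol=1e-9):
            # Marginalization: summing P*(x, z, y) over z must recover P(x, y).
            assert np.allclose(P_xzy.sum(axis=1), P_xy, atol=tol)
            # Tractable alphabet: |Z| <= L.
            assert P_xzy.shape[1] <= L
            # Markov chain X - Z - Y: P(x, z, y) = P(x, z) * P(z, y) / P(z).
            P_xz, P_zy = P_xzy.sum(axis=2), P_xzy.sum(axis=0)
            P_z = P_xz.sum(axis=0)
            for z in range(P_xzy.shape[1]):
                if P_z[z] > tol:
                    assert np.allclose(P_xzy[:, z, :],
                                       np.outer(P_xz[:, z], P_zy[z, :]) / P_z[z],
                                       atol=tol)
            # Figure of merit: H(X|Y) - H(X|Z).
            return cond_entropy(P_xy) - cond_entropy(P_xz)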

  7. Power law
     ◮ Previous results:
       ◮ Recall: the output alphabet size of the upgraded channel is |Z| ≤ L
       ◮ There exists a 'hard to upgrade' joint distribution P_{X,Y}: H(X|Y) − H(X|Z) = Ω(L^(−2/(|X|−1)))
       ◮ For binary input, |X| = 2, and any P_{X,Y}, there exists an upgrading algorithm such that H(X|Y) − H(X|Z) = O(L^(−2)) = O(L^(−2/(|X|−1)))
     ◮ New result:
       ◮ Also for non-binary input, we can upgrade any P_{X,Y} and achieve H(X|Y) − H(X|Z) = O(L^(−2/(|X|−1))) (the exponent arithmetic is spelled out after this slide)
     ◮ Main idea: use the binary-input algorithm as a black box (reduction)
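     Reading the two sets of bullets together (this display is my paraphrase of the slide, not an additional claim): the lower bound is attained by some hard P_{X,Y}, while the new upper bound holds for every P_{X,Y}, so in the worst case over joint distributions the best achievable entropy gap is

        % q = |X|; the infimum is over valid upgradings P* with |Z| <= L.
        \[
          \sup_{P_{X,Y}} \;\inf_{|\mathcal{Z}| \le L}
            \bigl( H(X \mid Y) - H(X \mid Z) \bigr)
          \;=\; \Theta\!\left( L^{-2/(q-1)} \right).
        \]
        % For example, the exponent is -2 for q = 2, -1 for q = 3, -1/2 for q = 5.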

  8. One-hot representation
     ◮ Denote q = |X|. Assume X = {1, 2, ..., q}
     ◮ For x ∈ X, define g(x) = (x_1, x_2, ..., x_{q−1}), the one-hot representation (a short code transcription follows this slide):
         g(1)     = (1, 0, 0, ..., 0, 0)
         g(2)     = (0, 1, 0, ..., 0, 0)
           ...
         g(q − 1) = (0, 0, 0, ..., 0, 1)
         g(q)     = (0, 0, 0, ..., 0, 0)
     ◮ Abuse notation and write x = g(x) = (x_1, x_2, ..., x_{q−1})
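     A direct transcription of g and its inverse in Python (the helper names are mine):

        def g(x, q):
            # Map x in {1, ..., q} to its (q - 1)-bit one-hot representation;
            # the all-zeros vector encodes the last symbol q.
            return tuple(1 if i == x else 0 for i in range(1, q))

        def g_inv(bits):
            # Recover x from (x_1, ..., x_{q-1}).
            return bits.index(1) + 1 if 1 in bits else len(bits) + 1

        # Example with q = 4:
        # g(1, 4) == (1, 0, 0), g(4, 4) == (0, 0, 0), g_inv((0, 1, 0)) == 2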

  9. P_{X,Y} ⇒ α^(i) ⇒ β^(i) ⇒ γ^(i) ⇒ P*_{X,Z,Y}
     ◮ We are given P_{X,Y}, where |X| = q
     ◮ Need to produce P*_{X,Z,Y} by reducing to binary-input upgrading
     ◮ Denote X' = {0, 1}
     ◮ Let X = (X_1, X_2, ..., X_{q−1}) and Y be distributed according to P_{X,Y}
     ◮ First step: define, for 1 ≤ i ≤ q − 1, the joint distribution (a code transcription follows this slide)
         α^(i)_{X_i, Y}(x', y) = P(X_i = x', Y = y | X_1^{i−1} = 0^{i−1})
     ◮ The joint distribution α^(i)_{X_i, Y}(x', y) has binary input, x' ∈ X'
     ◮ We may apply our binary-input upgrading procedure
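     The first step in code, assuming P_XY is stored as a dict mapping (x, y) to probability with x in {1, ..., q}, and reusing g from the previous sketch (the representation choices are mine; the formula is the one on the slide):

        from collections import defaultdict

        def alpha(P_XY, q, i):
            # alpha^(i)(x', y) = P(X_i = x', Y = y | X_1^{i-1} = 0^{i-1});
            # assumes the conditioning event has positive probability.
            joint, cond = defaultdict(float), 0.0
            for (x, y), p in P_XY.items():
                bits = g(x, q)                         # one-hot (x_1, ..., x_{q-1})
                if all(b == 0 for b in bits[:i - 1]):  # condition: X_1^{i-1} = 0^{i-1}
                    joint[(bits[i - 1], y)] += p       # bits[i - 1] is x_i
                    cond += p
            return {key: p / cond for key, p in joint.items()}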

  10. P_{X,Y} ⇒ α^(i) ⇒ β^(i) ⇒ γ^(i) ⇒ P*_{X,Z,Y}
     ◮ Recall our binary-input joint distribution: for 1 ≤ i ≤ q − 1,
         α^(i)_{X_i, Y}(x', y) = P(X_i = x', Y = y | X_1^{i−1} = 0^{i−1})
     ◮ Define Λ = ⌊L^(1/(q−1))⌋
     ◮ Second step (the alphabet-budget arithmetic is spelled out after this slide):
       ◮ Apply our binary-input upgrading procedure to α^(i)_{X_i, Y}(x', y), resulting in β^(i)_{X_i, Z_i, Y}(x', z, y), where |Z_i| ≤ Λ
       ◮ The difference in entropies is O(Λ^(−2))
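     A rough budget accounting behind the choice of Λ, spelled out for the record (the slides state the ingredients; the chaining into a single bound here is only a sketch, not the formal proof): the final Z is the vector (Z_1, ..., Z_{q−1}), so |Z| ≤ Λ^(q−1) ≤ L by the choice Λ = ⌊L^(1/(q−1))⌋, and each of the q − 1 binary-input upgradings costs O(Λ^(−2)), where

        \[
          \Lambda^{-2} \;=\; \Theta\!\left( L^{-2/(q-1)} \right),
        \]
        % so accumulating q - 1 = O(1) such per-coordinate terms keeps the total
        % entropy gap at O(L^{-2/(q-1)}), which is the claimed power law.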

  11. P_{X,Y} ⇒ α^(i) ⇒ β^(i) ⇒ γ^(i) ⇒ P*_{X,Z,Y}
     ◮ Recall that we have produced β^(i)_{X_i, Z_i, Y}(x', z, y), where x' ∈ X' is binary
     ◮ Third step: define the conditional distribution (a code transcription follows this slide)
         γ^(i)_{X_i | Z_i, X_1^{i−1}}(x_i | z_i, x_1^{i−1}) =
           β^(i)_{X_i | Z_i}(x_i | z_i)   if x_1^{i−1} = 0^{i−1},
           1                              if x_1^{i−1} ≠ 0^{i−1} and x_i = 0,
           0                              otherwise
     ◮ That is, if x_1^{i−1} is non-zero, force x_i to zero, in accordance with the one-hot representation
     ◮ Otherwise, if x_1^{i−1} is zero, use β^(i)_{X_i | Z_i}
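     The third step in code, assuming beta_x_given_z is a dict mapping (x', z) to β^(i)_{X_i|Z_i}(x' | z), as obtained from the binary-input upgrading (again, the names are illustrative):

        def gamma(beta_x_given_z, x_i, z_i, x_prefix):
            # gamma^(i)(x_i | z_i, x_1^{i-1}) as defined on the slide.
            if all(b == 0 for b in x_prefix):      # x_1^{i-1} = 0^{i-1}: use beta
                return beta_x_given_z[(x_i, z_i)]
            return 1.0 if x_i == 0 else 0.0        # otherwise force x_i = 0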

  12. P_{X,Y} ⇒ α^(i) ⇒ β^(i) ⇒ γ^(i) ⇒ P*_{X,Z,Y}
     ◮ Last step: define (a code sketch follows this slide)
         P*_{X,Z,Y}(x, z, y) = P_Y(y) · [ ∏_{i=1}^{q−1} β^(i)_{Z_i | Y}(z_i | y) ] · [ ∏_{i=1}^{q−1} γ^(i)_{X_i | Z_i, X_1^{i−1}}(x_i | z_i, x_1^{i−1}) ]
     ◮ A valid upgrade, with the optimal power law: H(X|Y) − H(X|Z) = O(L^(−2/(|X|−1)))
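     Assembling P* from the pieces, as a sketch that follows the displayed product term by term; it assumes dicts P_Y[y], beta_z_given_y[i][(z, y)] and beta_x_given_z[i][(x', z)] for 1 ≤ i ≤ q − 1, and reuses gamma from the previous sketch:

        def p_star(x, z, y, P_Y, beta_z_given_y, beta_x_given_z, q):
            # x = (x_1, ..., x_{q-1}) in one-hot form, z = (z_1, ..., z_{q-1}).
            p = P_Y[y]
            for i in range(1, q):
                p *= beta_z_given_y[i][(z[i - 1], y)]              # beta^(i)_{Z_i|Y}
                p *= gamma(beta_x_given_z[i], x[i - 1], z[i - 1],
                           x[:i - 1])                              # gamma^(i)
            return p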

  13. A graphical description of P_{X,Y}
     [Figure: Y is drawn first; for each 1 ≤ i ≤ q − 1, a channel α^(i)_{X~_i | Y} produces X~_i from Y, and then X_i = f_i(X~_1^i)]
     where f_i(x~_1^i) ≜ x~_i · 1{x~_1^{i−1} = 0^{i−1}} (the map f_i is transcribed in code after this slide)
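     The clamping map f_i from the figure, plus a small sampler that follows the figure's description of P_{X,Y}. The sampler assumes, as the parallel channels in the figure suggest, that the X~_i are drawn independently given Y, and the dict layout alpha_x_given_y[i][(1, y)] for α^(i)_{X~_i|Y}(1 | y) is an assumption of this sketch.

        import random

        def f(tilde_prefix):
            # f_i(x~_1^i) = x~_i * 1{x~_1^{i-1} = 0^{i-1}}
            *head, last = tilde_prefix
            return last if all(b == 0 for b in head) else 0

        def sample_x_given_y(y, alpha_x_given_y, q):
            # X~_i ~ alpha^(i)_{X~_i|Y}(. | y) independently, then X_i = f_i(X~_1^i).
            tilde = [1 if random.random() < alpha_x_given_y[i][(1, y)] else 0
                     for i in range(1, q)]
            return tuple(f(tilde[:i]) for i in range(1, q))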

  14. A graphical description of P_{X,Z,Y}
     [Figure: Y is drawn first; for each 1 ≤ i ≤ q − 1, the channel β^(i)_{Z_i | Y} produces Z_i from Y, the channel β^(i)_{X~_i | Z_i} produces X~_i from Z_i, and then X_i = f_i(X~_1^i)]
     where f_i(x~_1^i) ≜ x~_i · 1{x~_1^{i−1} = 0^{i−1}}
