SLIDE 7: Introduction | Pruning Neural BP Decoders | Numerical Results | Conclusion
Optimizing the Weights
[Figure: unrolled neural BP decoder. The channel LLRs λch,0, …, λch,6 are fed into every decoding iteration; iteration i produces intermediate soft outputs λ̂(i), and the final layer yields the outputs λ̂0, …, λ̂6.]
- Binary classification for each of the n bits.
- Loss function: binary cross-entropy or soft bit error rate.
- Training data: transmission of the all-zero codeword over the channel.
- Convergence is improved by averaging the loss over all I decoder iterations (the multiloss of [1]): \bar{\Gamma} = \frac{1}{I} \sum_{i=1}^{I} \Gamma(x, \hat{\lambda}^{(i)})
- The contribution of the intermediate layers to the loss decays over the course of training.
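The training-data bullet above can be sketched as follows. This is a minimal illustration assuming BPSK over an AWGN channel; the function name and the SNR/rate parameterization are illustrative assumptions, not taken from the slide.

```python
import numpy as np

def allzero_training_batch(n, snr_db, batch_size, rate, rng=None):
    """All-zero-codeword training data: BPSK over an AWGN channel.

    Returns a (batch_size, n) array of channel LLRs lambda_ch, which
    is assumed to be the input to the first decoder layer.
    """
    rng = np.random.default_rng() if rng is None else rng
    # All-zero codeword maps to all-(+1) BPSK symbols.
    sigma2 = 1.0 / (2 * rate * 10 ** (snr_db / 10))  # noise variance
    y = 1.0 + np.sqrt(sigma2) * rng.standard_normal((batch_size, n))
    # Channel LLRs, convention log p(y|bit=0) / p(y|bit=1).
    llr_ch = 2.0 * y / sigma2
    return llr_ch
```

Training only on the all-zero codeword is possible because BP decoding of a linear code is symmetric in the transmitted codeword, so the learned weights generalize to all codewords.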
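A multiloss of the form above, averaging the per-iteration binary cross-entropy over all I intermediate outputs, can be sketched like this. The function name and the LLR sign convention (positive LLR favors bit 0) are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiloss(llrs_per_iter, x):
    """Average binary cross-entropy over the decoder's I iterations.

    llrs_per_iter: list of length I; entry i holds the n output LLRs
                   lambda_hat^(i) after decoding iteration i.
    x:             transmitted codeword bits (all-zero during training).
    """
    eps = 1e-12
    losses = []
    for llr in llrs_per_iter:
        p1 = sigmoid(-llr)  # P(bit = 1) under llr = log p0/p1
        ce = -(x * np.log(p1 + eps) + (1 - x) * np.log(1 - p1 + eps))
        losses.append(ce.mean())
    return float(np.mean(losses))
```

Attaching the loss to every iteration, rather than only to the final output, gives each unrolled layer a direct gradient signal, which is what improves convergence during training.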
[1] E. Nachmani, Y. Be'ery, and D. Burshtein, "Learning to decode linear codes using deep learning," in Proc. Annu. Allerton Conf. Commun., Control, Comput., Allerton, IL, USA, Sep. 2016, pp. 341-346.
Pruning Neural BP Decoders | ager, H. D. Pfister, L. Schmalen, A. Graell i Amat | 6 / 15