

SLIDE 1

Compute-Forward Multiple Access

University of Melbourne Jingge Zhu Joint work with Erixhen Sula, Sung Hoon Lim, Adriano Pastore, Michael Gastpar

1 / 33

SLIDE 2

Overview
◮ Overview: Compute-Forward Multiple Access (CFMA)
◮ Theory
◮ A Practical Implementation

SLIDE 3

Overview: Compute-Forward Multiple Access (CFMA)

SLIDE 4

Capacity of the two-user Gaussian multiple access channel
◮ y = h1 x1 + h2 x2 + z
◮ power constraint E[||xk||^2] ≤ nP

MAC capacity region:
R1 < I(X1; Y | X2)
R2 < I(X2; Y | X1)
R1 + R2 < I(X1, X2; Y)
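Since all three bounds evaluate to 1/2 log(1 + SNR) expressions for Gaussian inputs, the corner points of the region are easy to compute numerically. A minimal sketch (unit-variance noise and the illustrative values h1 = h2 = 1, P = 10 are my assumptions, as is the helper name `c`):

```python
import math

def c(snr):
    """Gaussian capacity 0.5 * log2(1 + SNR), in bits per real symbol."""
    return 0.5 * math.log2(1 + snr)

# Two-user Gaussian MAC, unit-variance noise (illustrative values)
h1, h2, P = 1.0, 1.0, 10.0

R1_max = c(h1**2 * P)              # I(X1; Y | X2)
R2_max = c(h2**2 * P)              # I(X2; Y | X1)
Rsum   = c(h1**2 * P + h2**2 * P)  # I(X1, X2; Y)

print(R1_max, R2_max, Rsum)
```

Note that Rsum < R1_max + R2_max, which is why the sum-rate constraint carves out a nontrivial dominant face of the region.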

SLIDE 6

Compute-Forward Multiple Access (CFMA)

To achieve capacity:
◮ "Multi-user decoding", e.g.
  ◮ maximum likelihood decoder
  ◮ joint typicality decoder
◮ "Single-user decoding", e.g.
  ◮ successive cancellation decoding + time-sharing
  ◮ rate-splitting [Rimoldi-Urbanke '96]

Features of CFMA:
◮ capacity achieving
◮ "single-user decoder"
◮ no time-sharing or rate-splitting needed

SLIDE 7

CFMA
◮ First decode x1 + x2 using y
◮ Then decode x1 (or x2) using y and x1 + x2
◮ Solve for x1, x2 (equivalently w1, w2)

SLIDE 8

Theory

SLIDE 9

The Compute-and-Forward Problem [Nazer-Gastpar '09]
◮ x1, x2 are drawn from nested lattice codes in [Nazer-Gastpar]
◮ the decoder aims to decode the sum x1 + x2 ∈ R^n

SLIDE 10

Nested lattices Two lattices Λ and Λ′ are nested if Λ′ ⊆ Λ
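As a concrete example (my toy instance, not from the slides): the scaled lattice 2Z^2 is nested in Z^2. A brute-force check of the definition over a finite window:

```python
import itertools

def in_fine(p):
    """Membership in the fine lattice Z^2: both coordinates are integers."""
    return all(float(x).is_integer() for x in p)

# Points of the coarse lattice 2Z^2 inside a finite window
coarse_points = [(2 * i, 2 * j) for i, j in itertools.product(range(-3, 4), repeat=2)]

# Nesting: every coarse-lattice point is also a fine-lattice point
nested = all(in_fine(p) for p in coarse_points)
print(nested)  # True
```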

SLIDE 11

Nested lattice codes [Erez-Zamir '04]
◮ The fine lattice Λ protects against noise.
◮ The coarse lattice Λ′ enforces the power constraint.

SLIDE 12

Transmission

(figure from M. Gastpar and B. Nazer)

SLIDE 14

Lattice decoding
◮ Quantize y with respect to the fine lattice
◮ Thanks to the algebraic structure (lattices are closed under addition), the decoding complexity does not depend on the number of users
◮ Single-user decoder!
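To see why this quantization step is a "single-user" operation, consider the simplest case where the fine lattice is Z^n: decoding the noisy sum is just coordinate-wise rounding, and the rounding is identical no matter how many codewords were added. A toy sketch (my example; it assumes h1 = h2 = 1, integer codewords, and small noise, not the full nested-lattice construction):

```python
def quantize_Zn(y):
    """Nearest point of the lattice Z^n: coordinate-wise rounding."""
    return [round(v) for v in y]

x1 = [1, 0, -2, 3]
x2 = [0, 1, 1, -1]
noise = [0.2, -0.3, 0.1, 0.25]
y = [a + b + n for a, b, n in zip(x1, x2, noise)]

# One quantization recovers the sum x1 + x2 directly from y
decoded_sum = quantize_Zn(y)
print(decoded_sum)  # [1, 1, -1, 2]
```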

SLIDE 16

Compute-and-forward [Nazer-Gastpar '09]
◮ Theorem: The sum a1 x1 + a2 x2 can be decoded reliably if the rate R of the codebooks satisfies
  R < 1/2 log(1 + P ||h||^2) − 1/2 log( ||a||^2 + P( ||h||^2 ||a||^2 − (h^T a)^2 ) )
  where h := [h1, h2] and a := [a1, a2] ∈ N^2.
◮ Not enough for CFMA: no flexibility in terms of rates
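The rate bound above is straightforward to evaluate numerically. A sketch (base-2 logs and the function name are my choices; the rate is clipped at zero following the usual log⁺ convention):

```python
import math

def computation_rate(h, a, P):
    """Computation rate for decoding a1*x1 + a2*x2 at power P."""
    h2 = sum(x * x for x in h)               # ||h||^2
    a2 = sum(x * x for x in a)               # ||a||^2
    hTa = sum(x * y for x, y in zip(h, a))   # h^T a
    denom = a2 + P * (h2 * a2 - hTa**2)
    return max(0.0, 0.5 * math.log2((1 + P * h2) / denom))

# Symmetric example: h = a = [1, 1], P = 10
R = computation_rate([1.0, 1.0], [1, 1], 10.0)
print(R)
```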

SLIDE 18

General compute-and-forward [Zhu-Gastpar '14]
◮ Theorem: The sum a1 x1 + a2 x2 can be decoded reliably if the rate Ri of codebook i satisfies
  Ri < 1/2 log( βi^2 (1 + P ||h||^2) ) − 1/2 log( ||ã||^2 + P( ||h||^2 ||ã||^2 − (h^T ã)^2 ) )
  where ã := [β1 a1, β2 a2], for any β1, β2 ∈ R.
◮ The β parameters are crucial for adjusting the individual rates

SLIDE 20

Compute-Forward Multiple Access (CFMA) [Zhu-Gastpar '17]

Theorem: Define A := h1 h2 P / sqrt(1 + h1^2 P + h2^2 P). If A ≥ 1, every rate pair on the dominant face can be achieved via CFMA.

◮ (SNR ≥ 1 + √2 if h1 = h2)
◮ Each rate pair corresponds to an appropriately chosen β
◮ 2× lattice decoding
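The threshold is easy to check numerically: for h1 = h2 = 1 the condition A ≥ 1 holds exactly from P = 1 + √2 onward, matching the SNR remark. A sketch (the function name is mine, and the placement of the square root in A is my reading of the theorem):

```python
import math

def cfma_condition(h1, h2, P):
    """A = h1*h2*P / sqrt(1 + h1^2 P + h2^2 P); dominant face achievable if A >= 1."""
    return h1 * h2 * P / math.sqrt(1 + h1**2 * P + h2**2 * P)

# For h1 = h2 = 1, A >= 1 is equivalent to P >= 1 + sqrt(2)
P_star = 1 + math.sqrt(2)
print(cfma_condition(1.0, 1.0, P_star))       # ~1.0 exactly at the threshold
print(cfma_condition(1.0, 1.0, 5.0) >= 1.0)   # True above it
```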

SLIDE 21

A Practical Implementation

SLIDE 23

Practical CFMA?
◮ If we directly apply the above scheme...
  ◮ nested lattice codes do not seem very practical
  ◮ lattice decoding (quantization in a high-dimensional space) is hard to implement
◮ Different codes, same spirit:
  ◮ off-the-shelf codes (e.g. binary LDPC codes)
  ◮ efficient decoding algorithms (e.g. the sum-product algorithm)

SLIDE 24

A practical implementation of CFMA
◮ C1, C2 are binary LDPC codes with C2 ⊆ C1
◮ u1 ∈ C1, u2 ∈ C2
◮ Decode the modulo sum s = u1 ⊕ u2.
◮ Decode u1 using the modulo sum as side information.
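Once both decoding steps succeed, recovering the second codeword costs nothing extra, because XOR is its own inverse. A toy illustration with arbitrary bit vectors:

```python
# Binary version of the CFMA recovery step: given the decoded modulo
# sum s = u1 XOR u2 and then u1, the second codeword follows for free.
u1 = [1, 0, 1, 1, 0]
u2 = [0, 0, 1, 0, 1]

s = [a ^ b for a, b in zip(u1, u2)]       # decoded first: u1 XOR u2
u2_hat = [a ^ b for a, b in zip(s, u1)]   # then u2 = s XOR u1

print(u2_hat == u2)  # True
```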

SLIDE 25

Nested LDPC codes
◮ How to construct a nested code C1 from C2 such that C2 ⊆ C1?
◮ Replacing two rows hi^T, hj^T of the parity-check matrix of C2 by the single new row (hi ⊕ hj)^T, we obtain a new code C1.
◮ Rate 1/2 → 5/8
◮ C2 ⊆ C1!
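At toy block lengths the row-merging construction can be verified exhaustively: any codeword satisfying both old checks also satisfies the merged check, so the nesting holds by construction. A sketch (the 3×6 parity-check matrix is an arbitrary toy example of mine, giving rate 1/2 → 2/3 rather than the slide's 1/2 → 5/8):

```python
import itertools

def merge_rows(H, i, j):
    """Replace rows i and j of parity-check matrix H by their XOR."""
    merged = [a ^ b for a, b in zip(H[i], H[j])]
    return [row for k, row in enumerate(H) if k not in (i, j)] + [merged]

def codewords(H, n):
    """Brute-force codebook enumeration (toy sizes only)."""
    return {c for c in itertools.product((0, 1), repeat=n)
            if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)}

H2 = [[1, 1, 0, 1, 0, 0],   # toy parity-check matrix of C2 (n = 6, rate 1/2)
      [0, 1, 1, 0, 1, 0],
      [1, 0, 0, 0, 1, 1]]
H1 = merge_rows(H2, 0, 1)   # parity-check matrix of C1 (rate 2/3)

C2 = codewords(H2, 6)
C1 = codewords(H1, 6)
print(C2 <= C1, len(C2), len(C1))  # True 8 16
```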

SLIDE 26

CFMA with binary LDPC codes
◮ Decode s:   ŝi = argmax_{si ∈ {0,1}} p(si | y)
◮ Decode u1:  û1,i = argmax_{u1,i ∈ {0,1}} p(u1,i | y, s)

SLIDE 28

First Step: Decode the Modulo Sum
◮ s = u1 ⊕ u2 ∈ C̃, where C̃ is the code with the larger rate among C1, C2.
◮ So decoding s is essentially the same as decoding one codeword!
◮ Bit-wise MAP estimation for the modulo sum:
  ŝi = argmax_{si ∈ {0,1}} Σ_{∼si} Π_{j=1}^{n} p(yj | sj) ✶{ C̃: H̃ s = 0 }
  where the summation is over all bits of s except si.
◮ This expression should remind you of the sum-product algorithm...

SLIDE 29

Second Step: Decode One of the Codewords
◮ Bit-wise MAP estimation for one of the codewords:
  û1,i = argmax_{u1,i ∈ {0,1}} p(u1,i | y, s)
       = argmax_{u1,i ∈ {0,1}} Σ_{∼u1,i} Π_{j=1}^{n} p(yj | u1,j, sj) ✶{ H1 u1 = 0 }.
◮ Same form as in the first step. Again, the sum-product algorithm applies.

SLIDE 30

Implementation by Standard SPA
◮ Both decoding steps can be implemented efficiently using a standard SPA with modified initial log-likelihood ratios (LLRs).

LLR1 := log [ p(yi | si = 0) / p(yi | si = 1) ]
      = log [ cosh( yi √P (h1 + h2) ) / cosh( yi √P (h1 − h2) ) ] − 2 P h1 h2

LLR2 := log [ p(yi | u1,i = 0, si) / p(yi | u1,i = 1, si) ]
      = −2 yi √P (h1 + h2)   for si = 0
      = −2 yi √P (h1 − h2)   for si = 1

Algorithm: CFMA with binary LDPC
1. ŝ = SPA(H̃, LLR1)
2. û1 = SPA(H1, LLR2)
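The two initial LLRs can be computed directly from the expressions above. A sketch (it assumes the BPSK-style mapping xk = √P(1 − 2uk) and unit-variance noise, under which the cosh form of LLR1 follows; a production implementation would evaluate log cosh in a numerically stable way):

```python
import math

def llr1(y, h1, h2, P):
    """Initial LLR for decoding the modulo sum s at channel output y."""
    sP = math.sqrt(P)
    return (math.log(math.cosh(y * sP * (h1 + h2)))
            - math.log(math.cosh(y * sP * (h1 - h2)))
            - 2 * P * h1 * h2)

def llr2(y, s, h1, h2, P):
    """Initial LLR for decoding u1 once the sum bit s is known."""
    sP = math.sqrt(P)
    return -2 * y * sP * (h1 + h2) if s == 0 else -2 * y * sP * (h1 - h2)

# At y = 0 the cosh terms cancel, leaving only the offset -2*P*h1*h2
print(llr1(0.0, 1.0, 1.0, 1.0))  # -2.0
```

Note that LLR1 is even in y (cosh is even), which reflects that the sum bit only depends on whether the two BPSK symbols agree, not on their common sign.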

SLIDE 31

Simulations: binary LDPC codes

SLIDE 32

Simulations: binary LDPC codes

SLIDE 34

Extension to higher rate
◮ How to support higher rates with off-the-shelf binary codes?
◮ CFMA with multilevel binary codes:
  ◮ Each u_i^(ℓ) is a codeword of a binary LDPC code; the levels are combined via higher-order modulation.
  ◮ For each level ℓ, decode the sum s^(ℓ) and then u1^(ℓ)
SLIDE 35

CFMA with multilevel binary codes
◮ Punchline: only a single-user SPA is used!

SLIDE 36

Simulations: multilevel LDPC codes

SLIDE 37

Simulations: multilevel LDPC codes

SLIDE 38

CFMA summary
◮ "Theoretical" CFMA with nested lattice codes:
  ◮ capacity achieving
  ◮ "single-user decoder" (lattice decoding)
  ◮ no time-sharing or rate-splitting needed
◮ "Practical" CFMA with binary LDPC codes:
  ◮ "single-user decoder" (SPA)
  ◮ no time-sharing or rate-splitting needed
  ◮ see [Lim et al. '18] for a performance characterization
◮ K-user case: complexity grows linearly (instead of exponentially) with K.

SLIDE 39

Take-home message

Two key components of CFMA:
◮ Codes with algebraic structure (so that decoding the sum has the same complexity as decoding one codeword)
◮ An efficient decoding algorithm for the "base code"

We have shown:
◮ nested lattice codes + lattice decoding
◮ binary LDPC codes + sum-product algorithm

The same methodology readily applies to, for example:
◮ convolutional/Turbo codes + Viterbi algorithm
◮ polar codes + Arıkan's successive cancellation decoder
◮ ...

Could be a candidate for NOMA technology?

SLIDE 40

References
◮ B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inf. Theory, 2011.
◮ J. Zhu and M. Gastpar, "Gaussian Multiple Access via Compute-and-Forward," IEEE Trans. Inf. Theory, 2017.
◮ E. Sula, J. Zhu, A. Pastore, S. H. Lim, and M. Gastpar, "Compute-Forward Multiple Access (CFMA): Practical Implementations," IEEE Trans. Commun., 2018.
◮ S. H. Lim, C. Feng, A. Pastore, B. Nazer, and M. Gastpar, "A Joint Typicality Approach to Compute-Forward," IEEE Trans. Inf. Theory, 2018.
