SLIDE 1
Towards an Algebraic Network Information Theory

Bobak Nazer (BU)
Joint work with Sung Hoon Lim (EPFL), Chen Feng (UBC), and Michael Gastpar (EPFL).
DIMACS Workshop on Network Coding: The Next 15 Years, December 17th, 2015

SLIDE 8

Network Information Theory

Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Classical Approach:

  • Generate codewords elementwise i.i.d.
  • Powerful generalizations including superposition coding, dirty paper coding, block Markov coding, and many more...
  • Rate regions described in terms of (single-letter) information measures optimized over pmfs.
  • Many important successes: multiple-access channels, (degraded) broadcast channels, Slepian-Wolf compression, network coding, and many more...
  • State-of-the-art elegantly captured in the recent textbook of El Gamal and Kim.
  • Codes with algebraic structure are sought after to mimic the performance of random i.i.d. codes.
SLIDE 16

Network Information Theory

Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Algebraic Approach:

  • Utilize linear or lattice codebooks.
  • Compelling examples starting from the work of Körner and Marton on distributed compression and, more recently, many papers on physical-layer network coding, distributed dirty paper coding, and interference alignment.
  • Coding schemes exhibit behavior not found via i.i.d. ensembles.
  • However, some classical coding techniques are still unavailable.
  • Most of the initial efforts have focused on Gaussian networks and have employed nested lattice codebooks.
  • Are these just a collection of intriguing examples or elements of a more general theory?

This Talk: We build on previous work and propose a joint typicality approach to algebraic network information theory.

SLIDE 19

Compute-and-Forward

Goal: Send a linear combination of the messages to the receiver.

[Diagram: encoders E1, ..., EK map messages m1, ..., mK to channel inputs X1^n, ..., XK^n; the channel output Y^n is fed to a decoder D, which outputs an estimate t̂, where ν(·) denotes q-ary expansion and]

ν(t) = ⊕_{k=1}^K ak ν(mk)
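To make the map ν(·) concrete: a minimal sketch in Python/NumPy, assuming a prime field size q; the expansion length κ, coefficients, and messages are illustrative, not values from the talk.

```python
import numpy as np

q, kappa = 5, 4   # prime field size and expansion length (illustrative)

def nu(m):
    """q-ary expansion of an integer message m into a length-kappa vector over F_q."""
    return np.array([(m // q**j) % q for j in range(kappa)])

# The decoder's target: nu(t) = a1*nu(m1) + a2*nu(m2), with arithmetic over F_q.
a1, a2 = 2, 3          # coefficients in F_q
m1, m2 = 17, 42        # integer messages
nu_t = (a1 * nu(m1) + a2 * nu(m2)) % q
print(nu_t)            # the q-ary expansion of the linear combination t
```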

SLIDE 22

Compute-and-Forward

Goal: Send linear combinations of the messages to the receivers.

  • Compute-and-forward can serve as a framework for communicating messages across a network (e.g., relaying, MIMO uplink/downlink, interference alignment).
  • Much of the recent work has focused on Gaussian networks.

[Diagram: encoders E1, ..., EK map m1, ..., mK to X1^n, ..., XK^n; a Gaussian channel with matrix H and noises Z1^n, ..., ZK^n produces Y1^n, ..., YK^n; decoder Dℓ observes Yℓ^n and outputs t̂ℓ, where ν(·) denotes q-ary expansion and]

ν(tℓ) = ⊕_{k=1}^K aℓ,k ν(mk)

SLIDE 23

The Usual Approach

[Figure-only slide.]
SLIDE 30

Computation over Gaussian MACs

  • Symmetric Gaussian MAC.
  • Equal power constraints: E‖Xℓ‖² ≤ nP.
  • Use nested lattice codes.

[Diagram: encoders E1, ..., EK map m1, ..., mK to X1^n, ..., XK^n; the channel adds noise Z^n and the decoder D outputs t̂, where ν(t) = ⊕_{k=1}^K ν(mk).]

  • Wilson-Narayanan-Pfister-Sprintson ’10, Nazer-Gastpar ’11: Decoding is successful if the rates satisfy Rk < ½ log⁺(½ + P).
  • Cut-set upper bound is ½ log(1 + P).
  • What about the “1+”? Still open! (Ice wine problem.)
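A quick numerical check of the two expressions above (a sketch; the power values are arbitrary):

```python
import numpy as np

def computation_rate(P):
    """Achievable rate 1/2 log2+(1/2 + P) from the slide above."""
    return max(0.0, 0.5 * np.log2(0.5 + P))

def cutset_bound(P):
    """Cut-set upper bound 1/2 log2(1 + P)."""
    return 0.5 * np.log2(1.0 + P)

for P in [0.5, 1.0, 10.0, 100.0]:
    print(P, computation_rate(P), cutset_bound(P))
# The gap is at most 1/2 bit for all P, since (1 + P)/(1/2 + P) <= 2;
# whether the "1+" itself is achievable is the open problem above.
```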
SLIDE 35

Computation over Gaussian MACs

  • How about general Gaussian MACs?
  • Model using unequal power constraints: E‖Xℓ‖² ≤ nPℓ.

[Diagram: as before, with the decoder recovering t̂, where ν(t) = ⊕_{k=1}^K ν(mk).]

  • Nam-Chung-Lee ’11: At each transmitter, use the same fine lattice and a different coarse lattice, chosen to meet the power constraint.
  • Decoding is successful if the rates satisfy

        Rℓ < ½ log⁺( Pℓ / (∑_{i=1}^L Pi) + Pℓ )

  • Nazer-Cadambe-Ntranos-Caire ’15: Expanded compute-and-forward framework to link the unequal power setting to finite fields.
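The unequal-power rate expression, evaluated numerically (a sketch; the power profile is an arbitrary illustration):

```python
import numpy as np

def ncl_rate(ell, powers):
    """Nam-Chung-Lee rate R_ell < 1/2 log2+( P_ell / sum_i(P_i) + P_ell )."""
    P = np.asarray(powers, dtype=float)
    return max(0.0, 0.5 * np.log2(P[ell] / P.sum() + P[ell]))

powers = [1.0, 4.0, 16.0]   # unequal powers P_1, ..., P_L (illustrative)
print([ncl_rate(k, powers) for k in range(len(powers))])
# With equal powers P, the expression reduces to 1/2 log2(1/L + P),
# matching the symmetric case above.
```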

SLIDE 38

Point-to-Point Channels

[Diagram: M → Encoder → X^n → p(y|x) → Y^n → Decoder → M̂]

  • Messages: m ∈ [2^{nR}] ≜ {0, . . . , 2^{nR} − 1}
  • Encoder: a mapping x^n(m) ∈ X^n for each m ∈ [2^{nR}]
  • Decoder: a mapping m̂(y^n) ∈ [2^{nR}] for each y^n ∈ Y^n

Theorem (Shannon ’48)

C = max_{pX(x)} I(X; Y)

  • Proof relies on random i.i.d. codebooks combined with joint typicality decoding.

SLIDE 39

Random i.i.d. Codebooks

[Figure: codewords scattered across X^n, concentrating in the typical set T_ǫ^{(n)}(X).]

Random i.i.d. Codes

  • Codewords are independent of one another.
  • Can directly target an input distribution pX(x).
SLIDE 47

Point-to-Point Channels: Linear Codes

[Diagram: M → Linear Code → U^n → x(u) → X^n → p(y|x) → Y^n → Decoder → M̂]

Code Construction:

  • Pick a finite field Fq and a symbol mapping x : Fq → X.
  • Set κ = nR/ log(q).
  • Draw a random generator matrix G ∈ Fq^{κ×n} elementwise i.i.d. Unif(Fq). Let G be a realization.
  • Draw a random shift (or “dither”) D^n elementwise i.i.d. Unif(Fq). Let d^n be a realization.
  • Take the q-ary expansion of message m into the vector ν(m) ∈ Fq^κ.
  • Linear codeword for message m is u^n(m) = ν(m)G ⊕ d^n.
  • Channel input at time i is xi(m) = x(ui(m)).
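A minimal sketch of this construction in Python/NumPy (the prime q, blocklength n, rate R, and the PAM-like symbol mapping are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, R = 5, 12, 1.0                        # prime field size, blocklength, rate
kappa = int(np.ceil(n * R / np.log2(q)))    # message length over F_q

G = rng.integers(0, q, size=(kappa, n))     # generator matrix, i.i.d. Unif(F_q)
d = rng.integers(0, q, size=n)              # dither, i.i.d. Unif(F_q)

def nu(m):
    """q-ary expansion of integer m into a length-kappa vector over F_q."""
    return np.array([(m // q**j) % q for j in range(kappa)])

def codeword(m):
    """Linear codeword u^n(m) = nu(m) G (+) d^n, with arithmetic mod q."""
    return (nu(m) @ G + d) % q

u = codeword(3)
x = u - (q - 1) / 2    # one illustrative symbol mapping x : F_q -> X (PAM-like)
```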
SLIDE 48

Random i.i.d. Codebooks

[Figure: codewords in X^n relative to the typical set T_ǫ^{(n)}(X), as before.]

Random Linear Codes

  • Codewords are pairwise independent of one another.
  • Codewords are uniformly distributed over Fq^n.

SLIDE 53

Point-to-Point Channels: Linear Codes

[Diagram: M → Linear Code → U^n → x(u) → X^n → p(y|x) → Y^n → Decoder → M̂]

  • Well known that a direct application of linear coding is not sufficient to reach the point-to-point capacity, Ahlswede ’71.
  • Gallager ’68: Pick Fq with q ≫ |X| and choose the symbol mapping x(u) to reach the capacity-achieving input distribution (c.a.i.d.) from Unif(Fq). This can attain the capacity.
  • This will not work for us. Roughly speaking, if each encoder has a different input distribution, the symbol mappings may be quite different, which will disrupt the linear structure of the codebook.
  • Padakandla-Pradhan ’13: It is possible to shape the input distribution using nested linear codes.
  • Basic idea: Generate many codewords to represent one message. Search in this “bin” to find a codeword with the desired type, i.e., multicoding.

SLIDE 59

Point-to-Point Channels: Linear Codes + Multicoding

[Diagram: M → Linear Code + Multicoding → U^n → x(u) → X^n → p(y|x) → Y^n → Decoder → M̂]

Code Construction:

  • Messages m ∈ [2^{nR}] and auxiliary indices l ∈ [2^{nR̂}].
  • Set κ = n(R + R̂)/ log(q).
  • Pick generator matrix G and dither d^n as before.
  • Take q-ary expansions [ν(m) ν(l)] ∈ Fq^κ.
  • Linear codewords: u^n(m, l) = [ν(m) ν(l)]G ⊕ d^n.
SLIDE 67

Point-to-Point Channels: Linear Codes + Multicoding

[Diagram: M → Linear Code + Multicoding → U^n → x(u) → X^n → p(y|x) → Y^n → Decoder → M̂]

Encoding:

  • Fix p(u) and x(u).
  • Multicoding: For each m, find an index l such that u^n(m, l) ∈ T_ǫ′^{(n)}(U).
  • Succeeds w.h.p. if R̂ > D(pU‖pq) (where pq is uniform over Fq).
  • Transmit xi = x(ui(m, l)).

Decoding:

  • Joint Typicality Decoding: Find the unique index m̂ such that (u^n(m̂, l̂), y^n) ∈ T_ǫ^{(n)}(U, Y) for some index l̂.
  • Succeeds w.h.p. if R + R̂ < I(U; Y) + D(pU‖pq).
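A sketch of the multicoding step (Python; the target pmf p_U, the crude ε-typicality test, and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 5, 40
kappa_m, kappa_l = 3, 6      # digits for nu(m) and nu(l); kappa = kappa_m + kappa_l
G = rng.integers(0, q, size=(kappa_m + kappa_l, n))
d = rng.integers(0, q, size=n)

def nu(x, k):
    """q-ary expansion of integer x into k digits."""
    return np.array([(x // q**j) % q for j in range(k)])

def codeword(m, l):
    """u^n(m, l) = [nu(m) nu(l)] G (+) d^n over F_q."""
    return (np.concatenate([nu(m, kappa_m), nu(l, kappa_l)]) @ G + d) % q

p_U = np.array([0.4, 0.3, 0.1, 0.1, 0.1])   # hypothetical target pmf on F_q

def is_typical(u, p, eps=0.3):
    """Crude strong-typicality test: empirical frequencies close to p."""
    freq = np.bincount(u, minlength=q) / len(u)
    return bool(np.all(np.abs(freq - p) <= eps * p))

def multicode(m):
    """Search the bin {u^n(m, l)} over l for a codeword that is typical w.r.t. p_U."""
    for l in range(q**kappa_l):
        u = codeword(m, l)
        if is_typical(u, p_U):
            return l, u
    return None   # w.h.p. the search only fails when R_hat < D(p_U || Unif(F_q))
```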

SLIDE 70

Point-to-Point Channels: Linear Codes + Multicoding

[Diagram: M → Linear Code + Multicoding → U^n → x(u) → X^n → p(y|x) → Y^n → Decoder → M̂]

Theorem (Padakandla-Pradhan ’13)

Any rate R satisfying R < max_{p(u), x(u)} I(U; Y) is achievable. This is equal to the capacity if q ≥ |X|.

  • This is the basic coding framework that we will use for each transmitter.
  • Next, let’s examine a two-transmitter, one-receiver “compute-and-forward” network.

SLIDE 76

Nested Linear Coding Architecture

[Diagram: M1 → Linear Code + Multicoding → U1^n → x1(u1) → X1^n and M2 → Linear Code + Multicoding → U2^n → x2(u2) → X2^n, both feeding p(y|x1, x2) → Y^n → Decoder → T̂]

Code Construction:

  • Messages mk ∈ [2^{nRk}] and auxiliary indices lk ∈ [2^{nR̂k}], k = 1, 2.
  • Set κ = n(max{R1 + R̂1, R2 + R̂2})/ log(q).
  • Pick generator matrix G and dithers d1^n, d2^n as before.
  • Take q-ary expansions η(mk, lk) = [ν(mk) ν(lk) 0 ··· 0] ∈ Fq^κ, where the shorter expansion is zero-padded to the common length κ.
  • Linear codewords: u1^n(m1, l1) = η(m1, l1)G ⊕ d1^n and u2^n(m2, l2) = η(m2, l2)G ⊕ d2^n.

SLIDE 82

Nested Linear Coding Architecture

[Diagram: as above.]

Encoding:

  • Fix p(u1), p(u2), x1(u1), and x2(u2).
  • Multicoding: For each mk, find an index lk such that uk^n(mk, lk) ∈ T_ǫ′^{(n)}(Uk).
  • Succeeds w.h.p. if R̂k > D(pUk‖pq).
  • Transmit xki = xk(uki(mk, lk)).
SLIDE 85

Nested Linear Coding Architecture

[Diagram: as above.]

Computation Problem:

  • Consider the coefficients a = [a1, a2] ∈ Fq².
  • For mk ∈ [2^{nRk}], lk ∈ [2^{nR̂k}], the linear combination of codewords with coefficient vector a is

        a1 u1^n(m1, l1) ⊕ a2 u2^n(m2, l2)
            = ( a1 η(m1, l1) ⊕ a2 η(m2, l2) ) G ⊕ a1 d1^n ⊕ a2 d2^n
            = ν(t) G ⊕ dw^n
            = w^n(t),    t ∈ [2^{n max{R1 + R̂1, R2 + R̂2}}]
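This closure property is easy to verify numerically; a sketch (all sizes, digit vectors, and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
q, n, kappa = 5, 16, 6
G = rng.integers(0, q, size=(kappa, n))
d1 = rng.integers(0, q, size=n)
d2 = rng.integers(0, q, size=n)

def eta(digits):
    """Zero-pad a digit vector over F_q to the common length kappa."""
    return np.pad(np.asarray(digits), (0, kappa - len(digits)))

eta1, eta2 = eta([1, 4, 2, 3]), eta([0, 2, 1])   # stand-ins for eta(m_k, l_k)
u1 = (eta1 @ G + d1) % q      # both codewords use the SAME generator matrix
u2 = (eta2 @ G + d2) % q

a1, a2 = 2, 3
lhs = (a1 * u1 + a2 * u2) % q
rhs = (((a1 * eta1 + a2 * eta2) % q) @ G + (a1 * d1 + a2 * d2)) % q
assert np.array_equal(lhs, rhs)   # a1*u1 (+) a2*u2 is again a dithered codeword
```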

SLIDE 89

Nested Linear Coding Architecture

[Diagram: as above.]

Computation Problem:

  • Let Mk be the chosen message and Lk the chosen index from the multicoding step.
  • Decoder wants a linear combination of the codewords: W^n(T) = a1 U1^n(M1, L1) ⊕ a2 U2^n(M2, L2)
  • Decoder: t̂(y^n) ∈ [2^{n max{R1 + R̂1, R2 + R̂2}}], y^n ∈ Y^n
  • Probability of Error: Pe^{(n)} = P{T ≠ T̂}
  • A rate pair is achievable if there exists a sequence of codes such that Pe^{(n)} → 0 as n → ∞.

SLIDE 90

Nested Linear Coding Architecture

[Diagram: as above.]

Decoding:

  • Joint Typicality Decoding: Find an index t ∈ [2^{n max{R1 + R̂1, R2 + R̂2}}] such that (w^n(t), y^n) ∈ T_ǫ^{(n)}.

SLIDE 92

Nested Linear Coding Architecture

[Diagram: as above.]

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)

A rate pair (R1, R2) is achievable if

    R1 < I(W; Y) − I(W; U2),
    R2 < I(W; Y) − I(W; U1),

for some p(u1)p(u2) and functions x1(u1), x2(u2), where Uk ∈ Fq, k = 1, 2, and W = a1U1 ⊕ a2U2.

  • Padakandla-Pradhan ’13: Special case where R1 = R2.
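A toy instance of this theorem (an assumption for illustration: the noiseless binary adder MAC Y = X1 + X2 with Xk = Uk ∼ Unif(F2) and a = [1, 1], so W = U1 ⊕ U2). Each user can then send a full bit while the receiver computes the XOR, even though decoding both messages would cap the sum rate at I(X1, X2; Y) = H(Y) = 1.5 bits:

```python
import numpy as np
from itertools import product

def H(pmf):
    """Entropy in bits of a pmf stored as {outcome: probability}."""
    return -sum(p * np.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    """Marginalize a joint pmf onto the coordinates in idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

# Joint pmf of (U1, U2, Y, W) for the noiseless binary adder MAC.
joint = {}
for u1, u2 in product([0, 1], repeat=2):
    outcome = (u1, u2, u1 + u2, u1 ^ u2)
    joint[outcome] = joint.get(outcome, 0.0) + 0.25

I_WY = H(marginal(joint, [3])) + H(marginal(joint, [2])) - H(marginal(joint, [2, 3]))
I_WU2 = H(marginal(joint, [3])) + H(marginal(joint, [1])) - H(marginal(joint, [1, 3]))
print(I_WY - I_WU2)   # 1.0 bit: the bound R1 < I(W;Y) - I(W;U2) from the theorem
```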
SLIDE 98

Proof Sketch

  • WLOG assume M = {M1 = 0, M2 = 0, L1 = 0, L2 = 0}.
  • Union bound:

        Pe^{(n)} ≤ ∑_{t ≠ 0} P{ (W^n(t), Y^n) ∈ T_ǫ^{(n)} | M }.

  • Notice that the Lk depend on the codebook, so Y^n and W^n(t) are not independent.
  • To get around this issue, we analyze

        P(E) = ∑_{t ≠ 0} P{ (W^n(t), Y^n) ∈ T_ǫ^{(n)}, U1^n(0, 0) ∈ T_ǫ^{(n)}, U2^n(0, 0) ∈ T_ǫ^{(n)} | M }.

  • Conditioned on M, Y^n → (U1^n(0, 0), U2^n(0, 0)) → W^n(t) forms a Markov chain.
  • P(E) tends to zero as n → ∞ if

        Rk + R̂k + R̂1 + R̂2 < I(W; Y) + D(pW‖pq) + D(pU1‖pq) + D(pU2‖pq).

SLIDE 104

Compute-and-Forward over a Gaussian MAC

  • Consider a Gaussian MAC with real-valued channel output Y = h1X1 + h2X2 + Z.
  • Want to recover a1X1^n + a2X2^n for some integers a1, a2.
  • Gaussian noise: Z ∼ N(0, 1)
  • Usual power constraint: E[Xk²] ≤ P
  • Via Gaussian quantization arguments, we can recover the following theorem.

Theorem (Nazer-Gastpar ’11)

For any channel vector h and integer coefficient vector a, any rate tuple satisfying Rk < Rcomp(h, a) for all k such that ak ≠ 0 is achievable, where

    Rcomp(h, a) = ½ log⁺( P / (aᵀ (P⁻¹ I + h hᵀ)⁻¹ a) )
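A direct numerical sketch of Rcomp (illustrative parameters; the case h = a = [1, 1] recovers the symmetric expression ½ log⁺(½ + P) from earlier):

```python
import numpy as np

def r_comp(h, a, P):
    """Computation rate 1/2 log2+( P / (a^T (P^{-1} I + h h^T)^{-1} a) )."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    M = np.linalg.inv(np.eye(len(h)) / P + np.outer(h, h))
    return max(0.0, 0.5 * np.log2(P / (a @ M @ a)))

P = 10.0
print(r_comp([1.0, 1.0], [1.0, 1.0], P))   # equals 0.5 * log2(0.5 + P)
print(r_comp([1.3, 0.6], [1.0, 1.0], P))   # a mismatched channel lowers the rate
```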

SLIDE 110

Beyond One Linear Combination

  • In some scenarios, it is of interest to decode two or more linear combinations at each receiver.
  • For example, Ordentlich-Erez-Nazer ’14 approximates the sum capacity of the symmetric Gaussian interference channel via decoding two linear combinations.
  • Ordentlich-Erez-Nazer ’13 improves upon compute-and-forward for two or more linear combinations via successive cancellation.
  • What about jointly decoding the linear combinations?
  • Ordentlich-Erez ’13 derived bounds for lattice-based codes.
  • This talk: We can analyze this via joint typicality decoding to get an achievable rate region.

SLIDE 114

Jointly Decoding Two Linear Combinations of K Codewords

  • At node k ∈ [1 : K], the message Mk is encoded using the nested linear coding architecture.
  • Let Lk be the chosen index from the multicoding step.
  • The objective of the receiver is to compute two linear combinations of the codewords,

        W1^n(T1) = ⊕_{k=1}^K a1k uk^n(Mk, Lk)
        W2^n(T2) = ⊕_{k=1}^K a2k uk^n(Mk, Lk),

    with vanishing probability of error.

  • Key Technical Issue: Random linear codewords are pairwise independent, but not 4-wise independent!

SLIDE 116

Jointly Decoding Two Linear Combinations of K Codewords

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)

A rate tuple (R1, . . . , RK) is achievable for computing two linear combinations if

    Rk < min{H(Uk) − H(V |Y), H(Uk) − H(W1, W2|Y, V)},  k ∈ K1,
    Rj < I(W2; Y, W1) − H(W2) + H(Uj),  j ∈ K2,
    Rk + Rj < I(W1, W2; Y) − H(W1, W2) + H(Uk) + H(Uj),  k ∈ K1, j ∈ K2,

or

    Rk < I(W1; Y, W2) − H(W1) + H(Uk),  k ∈ K1,
    Rj < min{H(Uj) − H(V |Y), H(Uj) − H(W1, W2|Y, V)},  j ∈ K2,
    Rk + Rj < I(W1, W2; Y) − H(W1, W2) + H(Uk) + H(Uj),  k ∈ K1, j ∈ K2,

for some ∏_{k=1}^K p(uk) and xk(uk) and non-zero vector b ∈ Fq², where Kj = {k ∈ [1 : K] : ajk ≠ 0}, j = 1, 2, and V = b1W1 ⊕ b2W2.

  • The auxiliary linear combination V plays a key role in classifying dependent competing pairs in the error analysis.

SLIDE 117

Multiple-Access via Nested Linear Codes

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)

A rate pair (R1, R2) is achievable for the discrete memoryless multiple-access channel if

    R1 < max_{a ≠ 0} min{H(U1) − H(W|Y), H(U1) − H(U1, U2|Y, W)},
    R2 < I(X2; Y |X1),
    R1 + R2 < I(X1, X2; Y),

or

    R1 < I(X1; Y |X2),
    R2 < max_{a ≠ 0} min{H(U2) − H(W|Y), H(U2) − H(U1, U2|Y, W)},
    R1 + R2 < I(X1, X2; Y),

for some p(u1)p(u2) and x1(u1), x2(u2), where W = a1U1 ⊕ a2U2.

SLIDE 118

Multiple-Access Rate Region

[Figure: region R1 in the (R1, R2) plane.]

R1 < I1, R2 < I(X2; Y |X1), R1 + R2 < I(X1, X2; Y), where
I1 = max_{a ≠ 0} min{H(U1) − H(W|Y), H(U1) − H(U1, U2|Y, W)}

SLIDE 119

Multiple-Access Rate Region

[Figure: region R2 in the (R1, R2) plane.]

R1 < I(X1; Y |X2), R2 < I2, R1 + R2 < I(X1, X2; Y), where
I2 = max_{a ≠ 0} min{H(U2) − H(W|Y), H(U2) − H(U1, U2|Y, W)}

SLIDE 120

Multiple-Access Rate Region

[Figure: union of the two regions in the (R1, R2) plane.]

  • Multiple-access rate region via nested linear codes: R1 ∪ R2

SLIDE 121

“Two Help One”

  • Even if the receiver is only interested in recovering one linear combination, it can sometimes help to decode two!

[Figure sequence in the (R1, R2) plane: the MAC capacity region; the rate pairs (CF1, CF2) for decoding one linear combination; multiple-access via nested linear codes; and the union of the above.]

SLIDE 125

Case Study: Two-Sender, Two-Receiver Network

[Diagram: encoders E1 and E2 send X1^n and X2^n over a Gaussian network with gains 1, √2, 1, 1; receiver D1 observes Y1^n (noise Z1^n) and decodes both messages (M̂1, M̂2), while receiver D2 observes Y2^n (noise Z2^n) and recovers the sum X1^n + X2^n.]

SLIDE 126

Case Study: Two-Sender, Two-Receiver Network

[Figure sequence: achievable (R1, R2) regions (axes 0.5 to 2.5), showing in turn the two MAC capacity regions, the nested-linear-code regions at each receiver, and finally the nested linear codes region against lattice-based SC-CF and the union of MACs.]

SLIDE 130

Concluding Remarks

  • First steps towards bringing algebraic network information theory back into the realm of joint typicality.
  • Joint decoding rate region for compute-and-forward that outperforms parallel and successive decoding.