Towards an Algebraic Network Information Theory

Bobak Nazer, Boston University
Charles River Information Theory Day, April 28, 2014

Network Information Theory

Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Classical Approach:

  • Generate codebooks according to some i.i.d. distributions.
  • Powerful generalizations including superposition coding, dirty paper coding, block Markov coding, and many more...
  • Rate regions described in terms of (single-letter) information measures optimized over pdfs.
  • Many important successes: multiple-access channels, (degraded) broadcast channels, Slepian-Wolf compression, network coding, and many more...
  • State-of-the-art elegantly captured in the recent textbook of El Gamal and Kim.
  • Codes with algebraic structure are sought after to mimic the performance of i.i.d. random codes.

Network Information Theory

Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Algebraic Approach:

  • Utilize linear or lattice codebooks.
  • Compelling examples starting from the work of Körner and Marton on distributed compression and, more recently, many papers on physical-layer network coding, distributed dirty paper coding, and interference alignment.
  • Coding schemes exhibit behavior not found via i.i.d. ensembles.
  • However, some classical coding techniques are still unavailable.
  • No general theory as of yet... but we are making progress...
  • Most of the initial efforts have focused on Gaussian networks.

Road Map

  • Algebraic Network Source Coding: Classic example of Körner and Marton.
  • Algebraic Network Channel Coding: Compute-and-forward and an application to interference alignment.

Slepian-Wolf Problem

[Figure: encoders E1 and E2 compress s1 and s2 at rates R1 and R2 for a joint decoder D that outputs (ŝ1, ŝ2)]

  • Joint i.i.d. sources: p(s1, s2) = ∏_{i=1}^{n} p_{S1S2}(s1i, s2i)
  • Rate Region: Set of rates (R1, R2) such that the encoders can send s1 and s2 to the decoder with vanishing probability of error: P{(ŝ1, ŝ2) ≠ (s1, s2)} → 0 as n → ∞

Random Binning

  • Codebook 1: Independently and uniformly assign each source sequence s1 to a label in {1, 2, . . . , 2^{nR1}}.
  • Codebook 2: Independently and uniformly assign each source sequence s2 to a label in {1, 2, . . . , 2^{nR2}}.
  • Decoder: Look for a jointly typical pair (ŝ1, ŝ2) within the received bin. Union bound:

    P{some jointly typical (s̃1, s̃2) ≠ (s1, s2) lies in bin (ℓ1, ℓ2)} ≤ Σ_{jointly typical (s̃1, s̃2)} 2^{−n(R1+R2)} ≤ 2^{n(H(S1,S2)+ε)} · 2^{−n(R1+R2)}

  • Need R1 + R2 > H(S1, S2).
  • Similarly, R1 > H(S1|S2) and R2 > H(S2|S1).
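The union bound above is easy to evaluate numerically. The sketch below (my own check, not from the talk; the block length and rates are illustrative) computes the bound 2^{n(H(S1,S2)+ε)} · 2^{−n(R1+R2)} for the doubly symmetric binary source that appears later in the talk:

```python
import math

def h_binary(t):
    """Binary entropy function hB(t) in bits."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def pair_bound(n, R1, R2, H_joint, eps=0.01):
    """Union bound on the probability that some other jointly typical pair
    lands in the received bin pair: 2^{n(H+eps)} * 2^{-n(R1+R2)}."""
    return 2.0 ** (n * (H_joint + eps - (R1 + R2)))

# Doubly symmetric binary source with crossover theta = 0.1:
# H(S1, S2) = 1 + hB(0.1), roughly 1.47 bits.
H = 1 + h_binary(0.1)

# Sum rate above H(S1, S2): the bound decays exponentially in n.
ok = pair_bound(200, 0.8, 0.8, H)    # far below 1
# Sum rate below H(S1, S2): the bound is useless (and decoding in fact fails).
bad = pair_bound(200, 0.6, 0.6, H)   # far above 1
```

The sign of the exponent H(S1, S2) + ε − (R1 + R2) is all that matters, which is exactly the condition R1 + R2 > H(S1, S2) on the slide.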
Slepian-Wolf Problem: Binning Illustration

[Figure: grid of bin pairs, 2^{nR1} bins for s1 by 2^{nR2} bins for s2]

Random Linear Binning

  • Assume we have chosen an injective mapping from the source alphabets to Fp.
  • Codebook 1: Generate a matrix G1 with i.i.d. uniform entries drawn from Fp. Each sequence s1 is binned via matrix multiplication, w1 = G1 s1.
  • Codebook 2: Generate a matrix G2 with i.i.d. uniform entries drawn from Fp. Each sequence s2 is binned via matrix multiplication, w2 = G2 s2.
  • Bin assignments are uniform and pairwise independent (except for sℓ = 0).
  • Can apply the same union bound analysis as random binning.
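A minimal sketch of linear binning (toy sizes, assumed for illustration): the bin index is a matrix product over Fp, and two fixed distinct sequences collide with probability p^{−k}, which is the pairwise-independence property the union bound needs.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 2, 10, 2          # toy field size and dimensions (assumed)

def bin_index(G, s):
    """Linear binning over F_p: the bin of s is w = G s (mod p)."""
    return tuple(G @ s % p)

s1 = rng.integers(0, p, size=n)
s2 = s1.copy()
s2[0] = (s2[0] + 1) % p     # a second sequence differing in one position

# Over many random choices of G, the two sequences land in the same bin
# with probability p^{-k}: each row of G is orthogonal to the nonzero
# difference s1 - s2 with probability 1/p, independently across rows.
trials = 20000
collisions = sum(
    bin_index(G, s1) == bin_index(G, s2)
    for G in (rng.integers(0, p, size=(k, n)) for _ in range(trials))
)
rate = collisions / trials  # empirically close to p**-k = 0.25
```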
Slepian-Wolf Rate Region

Slepian-Wolf Theorem: Reliable compression is possible if and only if:

R1 ≥ H(S1|S2)
R2 ≥ H(S2|S1)
R1 + R2 ≥ H(S1, S2)

Random linear binning is as good as random i.i.d. binning.

Example: Doubly Symmetric Binary Source: S1 ∼ Bern(1/2), U ∼ Bern(θ), S2 = S1 ⊕ U. Here H(S1|S2) = H(S2|S1) = hB(θ) and H(S1, S2) = 1 + hB(θ).

[Figure: (R1, R2) Slepian-Wolf region with corner values hB(θ) and sum-rate boundary R1 + R2 = 1 + hB(θ)]

Körner-Marton Problem

  • Binary sources:
  • s1 is i.i.d. Bernoulli(1/2)
  • s2 is s1 corrupted by Bernoulli(θ) noise
  • Decoder wants the modulo-2 sum u = s1 ⊕ s2.

[Figure: encoders E1 and E2 at rates R1 and R2; decoder D outputs û]

Rate Region: Set of rates (R1, R2) such that there exist encoders and decoders with vanishing probability of error: P{û ≠ u} → 0 as n → ∞.

Are any rate savings possible over sending s1 and s2 in their entirety?

Random Binning

  • Sending s1 and s2 with random binning requires R1 + R2 > 1 + hB(θ).
  • What happens if we use rates such that R1 + R2 < 1 + hB(θ)?
  • There will be exponentially many pairs (s1, s2) in each bin!
  • This would be fine if all pairs in a bin had the same sum s1 ⊕ s2. But the probability of that event goes to zero exponentially fast!

Körner-Marton Problem: Random Binning Illustration

[Figure: binning grid; pairs landing in the same bin pair generally have different sums]

Linear Binning

  • Use the same random matrix G for linear binning at each encoder: w1 = G s1, w2 = G s2.
  • Idea from Körner-Marton ’79: the decoder adds up the bins: w1 ⊕ w2 = G s1 ⊕ G s2 = G(s1 ⊕ s2) = G u
  • G is good for compressing u if R > H(U) = hB(θ).

Körner-Marton Theorem: Reliable compression of the sum is possible if and only if: R1 ≥ hB(θ) and R2 ≥ hB(θ).
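The Körner-Marton identity w1 ⊕ w2 = G u can be verified numerically in a few lines (toy dimensions, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 16, 8                           # toy block and bin-index lengths (assumed)

G  = rng.integers(0, 2, size=(k, n))   # the SAME binning matrix at both encoders
s1 = rng.integers(0, 2, size=n)
u  = rng.integers(0, 2, size=n)        # noise pattern, so that u = s1 XOR s2
s2 = (s1 + u) % 2

w1 = G @ s1 % 2                        # encoder 1's bin index
w2 = G @ s2 % 2                        # encoder 2's bin index

# Decoder adds the bins; by linearity this is exactly a linear binning of u:
# w1 XOR w2 = G s1 XOR G s2 = G (s1 XOR s2) = G u.
assert np.array_equal((w1 + w2) % 2, G @ u % 2)
```

The crucial design choice is that both encoders use the same G; with independent binning matrices the sum of the bin indices would carry no such structure.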

Körner-Marton Problem: Linear Binning Illustration

[Figure: with a common linear binning matrix, every pair in a given bin pair has the same sum]

Körner-Marton Rate Region

[Figure: (R1, R2) plane showing the Körner-Marton corner at (hB(θ), hB(θ)) strictly inside the Slepian-Wolf region]

Linear codes can improve performance! (for distributed computation of dependent sources)

(Algebraic) Network Source Coding

  • Krithivasan-Pradhan ’09: Nested lattice coding framework for distributed Gaussian source coding.
  • Krithivasan-Pradhan ’11: Nested group coding framework for distributed source coding for discrete memoryless sources.
  • Can show that these rate regions sometimes outperform the Berger-Tung region (best known performance via i.i.d. ensembles).
  • Now let’s take a look at an algebraic framework for network channel coding.

Compute-and-Forward

Goal: Convert noisy Gaussian networks into noiseless finite field ones.

[Figure: messages w1, . . . , wK ∈ Fp^k are encoded to x1, . . . , xK ∈ R^n, pass through channel gains h1, . . . , hK plus noise z, and the decoder D outputs ŵ1, . . . , ŵK; compute-and-forward replaces this noisy channel with a noiseless end-to-end channel Q over Fp^k that delivers linear combinations u1, . . . , uK]

  • Which linear combinations can be sent over a given channel?
  • Where can this help us?
Compute-and-Forward: Problem Statement

[Figure: L encoders with channel gains hℓ and noise z; the decoder D outputs û1, . . . , ûM, where um = Σ_{ℓ=1}^{L} qmℓ wℓ over Fp]

  • Messages are finite field vectors, wℓ ∈ Fp^k.
  • Real-valued inputs and outputs, xℓ, y ∈ R^n.
  • Power constraint: (1/n) E‖xℓ‖² ≤ P.
  • Gaussian noise: z ∼ N(0, I).
  • Equal rates: R = (k/n) log2 p.
  • Decoder wants M linear combinations of the messages with vanishing probability of error: lim_{n→∞} P{∪m {ûm ≠ um}} = 0.
  • Receiver can use its channel state information (CSI) to match the linear combination coefficients qmℓ ∈ Fp to the channel coefficients hℓ ∈ R. Transmitters do not require CSI.
  • What rates are achievable as a function of hℓ and qmℓ?
Computation Rate

  • Want to characterize achievable rates as a function of hℓ and qmℓ.
  • Easier to think about integer rather than finite field coefficients.
  • The linear combination with integer coefficient vector am = [am1 am2 · · · amL]^T ∈ Z^L corresponds to um = Σ_{ℓ=1}^{L} qmℓ wℓ where qmℓ = [amℓ] mod p (where we assume an implicit mapping between Fp and Zp).
  • Key Definition: The computation rate region described by Rcomp(h, a) is achievable if, for any ε > 0 and n, p large enough, a receiver can decode any linear combinations with integer coefficient vectors a1, . . . , aM ∈ Z^L for which the message rate R satisfies R < min_m Rcomp(h, am).
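The reduction qmℓ = [amℓ] mod p can be sketched directly (the coefficient values and toy sizes below are assumed for illustration): working with integer coefficients and reducing mod p gives the same linear combination as working over Fp from the start.

```python
import numpy as np

p = 5                                   # toy field size (assumed prime)
rng = np.random.default_rng(2)
L, k = 3, 4
a = np.array([2, -1, 7])                # integer coefficient vector a_m in Z^L
w = rng.integers(0, p, size=(L, k))     # messages w_l in F_p^k

q = a % p                               # field coefficients q_ml = [a_ml] mod p
# The integer combination, reduced mod p, matches the field combination.
assert np.array_equal(a @ w % p, q @ w % p)
```

Note that Python's `%` returns a nonnegative residue even for negative integers, so `-1 % 5 == 4`, which is exactly the representative in Zp that the slide's mapping needs.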

slide-56
SLIDE 56

Compute-and-Forward: Achievable Rates

Theorem (Nazer-Gastpar ’11)

The computation rate region described by Rcomp(h, a) = max

α∈R

1 2 log+

  • P

α2 + Pαh − a2

  • is achievable.
slide-57
SLIDE 57

Compute-and-Forward: Achievable Rates

Theorem (Nazer-Gastpar ’11)

The computation rate region described by Rcomp(h, a) = 1 2 log+

  • P

aT P −1I + hhT−1a

  • is achievable.
slide-58
SLIDE 58

Compute-and-Forward: Achievable Rates

Theorem (Nazer-Gastpar ’11)

The computation rate region described by Rcomp(h, a) = 1 2 log+

  • P

aT P −1I + hhT−1a

  • is achievable.

w1 E1 x1 h1 wL EL xL hL

. . . . . .

z y D

Rn Fk

p

Compute-and-Forward w1 wL

Q

ˆ u1 ˆ uM

. . . . . .

Fk

p

if R < min

m

Rcomp(h, am) for some am ∈ ZL satisfying [am] mod p = qm.
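The two forms of the theorem can be cross-checked numerically. The sketch below (my own check, using the closed-form MMSE-optimal scaling α* = P hᵀa / (1 + P‖h‖²), which minimizes the effective-noise denominator) evaluates both expressions on the example channel used in the illustration slides:

```python
import numpy as np

def rcomp_alpha(h, a, P):
    """First form: max over alpha of (1/2) log+ ( P / (alpha^2 + P ||alpha h - a||^2) ),
    evaluated at the MMSE-optimal alpha."""
    alpha = P * (h @ a) / (1 + P * (h @ h))
    denom = alpha**2 + P * np.sum((alpha * h - a) ** 2)
    return max(0.0, 0.5 * np.log2(P / denom))

def rcomp_matrix(h, a, P):
    """Second form: (1/2) log+ ( P / (a^T (P^{-1} I + h h^T)^{-1} a) )."""
    M = np.linalg.inv(np.eye(len(h)) / P + np.outer(h, h))
    return max(0.0, 0.5 * np.log2(P / (a @ M @ a)))

h = np.array([1.4, 2.1])   # channel vector from the illustration slides
a = np.array([2.0, 3.0])   # integer coefficient vector from the illustration slides
P = 10.0

assert np.isclose(rcomp_alpha(h, a, P), rcomp_matrix(h, a, P))

# Perfect-match special case h = a: the rate reduces to (1/2) log+(1/||a||^2 + P).
b = np.array([1.0, 2.0])
assert np.isclose(rcomp_alpha(b, b, P), 0.5 * np.log2(1 / (b @ b) + P))
```

The equivalence of the two forms follows from the Sherman-Morrison identity applied to (P^{−1} I + h h^T)^{−1}.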

Compute-and-Forward: Achievable Rates

Special Cases:

  • Perfect Match: Rcomp(a, a) = (1/2) log⁺( 1/‖a‖² + P )
  • Decode a Message: Rcomp(h, em) = (1/2) log( 1 + hm² P / (1 + P Σ_{ℓ≠m} hℓ²) ), where em = [0 · · · 0 1 0 · · · 0]^T has a 1 in position m (after m−1 zeros).

Compute-and-Forward: Effective Noise

y = Σ_{ℓ=1}^{L} hℓ xℓ + z = Σ_{ℓ=1}^{L} aℓ xℓ + Σ_{ℓ=1}^{L} (hℓ − aℓ) xℓ + z

The first sum is decoded to the linear combination Σ_{ℓ=1}^{L} qℓ wℓ; the second sum plus z acts as effective noise.

Desired Codebook:

  • Closed under integer linear combinations ⇒ lattice codebook.
  • Independent effective noise ⇒ dithering.
  • Isomorphic to Fp^k ⇒ nested lattice codebook.

Nested Lattices

  • A lattice is a discrete subgroup of R^n.
  • Nearest neighbor quantizer: QΛ(x) = argmin_{λ ∈ Λ} ‖x − λ‖²
  • Two lattices Λ and ΛFINE are nested if Λ ⊂ ΛFINE.
  • Quantization error serves as a modulo operation: [x] mod Λ = x − QΛ(x)
  • Distributive Law: [x1 + a [x2] mod Λ] mod Λ = [x1 + a x2] mod Λ for all a ∈ Z.
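For the simplest coarse lattice Λ = qZ^n, the quantizer and the distributive law each take one line; a sketch (the scaling q and the test vectors are arbitrary choices, not from the talk):

```python
import numpy as np

q = 4.0   # coarse lattice Lambda = q Z^n (simple scalar lattice, assumed)

def mod_lattice(x, q):
    """Quantization error [x] mod Lambda = x - Q_Lambda(x) for Lambda = q Z^n."""
    return x - q * np.round(x / q)

rng = np.random.default_rng(3)
x1 = rng.uniform(-10, 10, size=5)
x2 = rng.uniform(-10, 10, size=5)
a = 3     # any integer coefficient

# Distributive law: reducing x2 first does not change the final residue,
# because a * Q_Lambda(x2) is itself a lattice point.
lhs = mod_lattice(x1 + a * mod_lattice(x2, q), q)
rhs = mod_lattice(x1 + a * x2, q)
assert np.allclose(lhs, rhs)
```

This is the property that lets the receiver work with integer combinations of lattice codewords modulo the coarse lattice.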

Nested Lattice Codes

  • Nested Lattice Code: Formed by taking all elements of ΛFINE that lie in the fundamental Voronoi region of Λ.
  • Fine lattice ΛFINE protects against noise.
  • Coarse lattice Λ enforces the power constraint: codewords lie within the ball B(0, √(nP)).
  • Existence of good nested lattice codes: Loeliger ’97, Forney-Trott-Chung ’00, Erez-Litsyn-Zamir ’05, Ordentlich-Erez ’12.
  • Erez-Zamir ’04: Nested lattice codes can achieve the point-to-point Gaussian capacity.

Compute-and-Forward: Illustration

Running example: h = [1.4 2.1], am = [2 3]. All users employ the same nested lattice code.

  • Choose message vectors over the finite field, wℓ ∈ Fp^k.
  • Map each wℓ to a lattice point tℓ = φ(wℓ).
  • Transmit the lattice points over the channel: the codewords are scaled by the channel coefficients and added together plus noise.
  • Non-integer channel coefficients incur an extra noise penalty: effective noise 1 + P‖h − am‖².
  • Scale the output by α to reduce the non-integer noise penalty: effective noise α² + P‖αh − am‖².
  • Decode to the closest lattice point.
  • Recover the integer linear combination mod ΛC.
  • Map back to the linear combination of the messages, Σ_{ℓ=1}^{L} qmℓ wℓ.
Random i.i.d. codes are not good for computation

2^{nR} codewords each; 2^{n·2R} possible sums of codewords.

[Figure: two i.i.d. codebooks transmitted as x1 and x2 through noise z to output y; the set of codeword sums has no exploitable structure]

(Algebraic) Network Channel Coding

  • Compute-and-forward is a useful setting to develop algebraic multi-user coding techniques.
  • Ordentlich-Erez-Nazer ’13: In a K-user Gaussian multiple-access channel, the sum of the K best computation rates is exactly equal to the multiple-access sum capacity. Algebraic successive cancellation gives this operational meaning.
  • Upcoming work on a compute-and-forward framework for discrete memoryless networks.
  • Let’s take a look at an application of compute-and-forward to interference alignment.

Interference-Free Capacity

[Figure: K transmitter-receiver pairs, each link operating without interference at rate 1]

Time Division

[Figure: users 1 through K take turns using the channel]
SLIDE 101

Interference Alignment 1 2 K 1 2 K

  • Cadambe-Jafar ’08: Alignment can achieve K/2 degrees-of-freedom

for the K-user interference channel.

  • Birk-Kol ’98: Alignment for index coding. Maddah-Ali - Motahari -

Khandani ’08: Alignment for the MIMO X channel. See Jafar ’11

monograph (or recent e-book) for a richer history.



slide-110
SLIDE 110

Symmetric K-User Gaussian Interference Channel

[Figure: encoders E1, …, EK map messages w1, …, wK to inputs x1, …, xK; decoders D1, …, DK form estimates ŵ1, …, ŵK from outputs y1, …, yK with noise z1, …, zK. The channel matrix has 1 on the diagonal and g everywhere else:

    H = [ 1  g  ···  g
          g  1  ···  g
          ⋮  ⋮   ⋱  ⋮
          g  g  ···  1 ] ]

  • Signal space alignment (e.g., beamforming) is infeasible.
  • Signal scale alignment attains K/2 degrees-of-freedom for almost all channel gains, Motahari et al. ’09, Wu-Shamai-Verdu ’11.
  • At finite SNR, the approximate capacity is known in some special cases: two-user Etkin-Tse-Wang ’08, many-to-one and one-to-many Bresler-Parekh-Tse ’10, cyclic Zhou-Yu ’13.
  • Let’s look at the symmetric case.
slide-111
SLIDE 111


Effective Multiple-Access Channel

  • Each receiver sees an effective two-user multiple-access channel,

        yk = xk + g ∑_{ℓ≠k} xℓ + zk.

Successive Cancellation Decoding:

  • Decode and subtract the interference ∑_{ℓ≠k} xℓ, then decode xk.
  • Only optimal when the interference is very strong, Sridharan et al. ’08.

Joint Decoding:

  • Direct analysis is hindered by dependencies between codeword pairs.
  • Existing work only applies at very high SNR, Ordentlich-Erez ’13.
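The effective channel above is easy to simulate symbol-by-symbol. A minimal sketch; the gain g, inputs, and noise level below are illustrative choices, not values from the slides:

```python
import random

def effective_mac_outputs(x, g, noise_std=0.0):
    """Return y_k = x_k + g * sum_{l != k} x_l + z_k for each receiver k."""
    total = sum(x)
    # For receiver k, the interference is the total minus its own signal.
    return [xk + g * (total - xk) + random.gauss(0.0, noise_std)
            for xk in x]

x = [1.0, -2.0, 0.5]                 # one channel use of K = 3 transmitters
y = effective_mac_outputs(x, g=0.7)  # noiseless, for clarity
print(y)
```

With noise_std = 0 the outputs are exactly x_k + g times the sum of the other inputs, which makes the “each receiver sees a two-user MAC” structure (own signal plus one lumped interference term) explicit.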
slide-114
SLIDE 114

Example: Two-User Lattice Alignment

[Figure: linear combination with coefficients 1 and √2.]

  • Two lattice codewords can be recovered from their linear combination if the ratio of the coefficients is irrational.

slide-115
SLIDE 115

Example: Two-User Lattice Alignment

[Figure: linear combination with coefficients 1 and 2.]

  • Two lattice codewords can be recovered from their linear combination if the ratio of the coefficients is irrational.
  • If the ratio is rational, it is not always possible to uniquely identify the pair of codewords.
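The irrational-versus-rational distinction can be checked by brute force on a tiny one-dimensional integer “codebook” (a toy stand-in for a lattice codebook; the codebook size and coefficients are illustrative):

```python
import math

codebook = range(4)  # each user's codeword is an integer in {0, 1, 2, 3}

def num_distinct(alpha):
    """Count distinct received values u + alpha * v over all codeword pairs."""
    return len({u + alpha * v for u in codebook for v in codebook})

# Irrational coefficient ratio: every pair (u, v) maps to a distinct value,
# so both codewords can be recovered from the combination.
print(num_distinct(math.sqrt(2)))  # 16, all pairs distinguishable

# Rational ratio 2: e.g. (2, 0) and (0, 1) collide, so pairs are ambiguous.
print(num_distinct(2))             # 10 < 16
```

The collision for ratio 2 is exactly the “bad rational” phenomenon: distinct codeword pairs land on the same point of the received constellation.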

slide-118
SLIDE 118

Alignment via Two Equations

  • High SNR behavior: K/2 degrees-of-freedom can be attained up to a set of channel gains of measure zero. Loss of degrees-of-freedom for rational coefficients. Etkin-Ordentlich ’09, Motahari et al. ’09, Wu-Shamai-Verdu ’11.

  • Ordentlich-Erez-Nazer ’14: Decode two linear combinations,

        a1 xk + a2 ∑_{ℓ≠k} xℓ   and   b1 xk + b2 ∑_{ℓ≠k} xℓ,

    using the compute-and-forward framework. If the coefficient vectors are linearly independent, we can solve for the desired message.

  • The set of “bad rationals” depends on the SNR. Only rationals with denominator SNR^{1/4} or smaller cause issues.
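Once both combinations are decoded, recovering the desired codeword is just a 2×2 linear solve. A toy numerical sketch; real arithmetic stands in for the actual lattice/finite-field solve, and the coefficients and ground-truth values are made up for illustration:

```python
def recover(c1, c2, a, b):
    """Solve a1*xk + a2*s = c1 and b1*xk + b2*s = c2 for (xk, s),
    where s stands for the sum of the interfering codewords."""
    det = a[0] * b[1] - a[1] * b[0]
    if det == 0:
        raise ValueError("coefficient vectors must be linearly independent")
    # Cramer's rule on the 2x2 system.
    xk = (c1 * b[1] - c2 * a[1]) / det
    s = (a[0] * c2 - b[0] * c1) / det
    return xk, s

a, b = (1, 2), (0, 1)    # integer coefficients of the two decoded combinations
xk, s = 5.0, -3.0        # ground truth: desired codeword and interference sum
c1 = a[0] * xk + a[1] * s
c2 = b[0] * xk + b[1] * s
print(recover(c1, c2, a, b))  # (5.0, -3.0)
```

Linear independence of (a1, a2) and (b1, b2) is exactly the determinant condition in the code; when it fails, the two equations carry no more information than one.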

slide-119
SLIDE 119

Symmetric K-User Gaussian Interference Channel

[Plot: symmetric rate vs. cross-channel gain g (g from 1 to 4, rate from 0.5 to 3.5), comparing the two-user upper bound with lattice alignment at 15 dB and 25 dB.]


slide-121
SLIDE 121

Approximate Capacity Results: Strong Regime

  • Using the fact that the sum of the computation rates is nearly equal to the multiple-access sum capacity, we can approximate the sum capacity of the symmetric K-user Gaussian interference channel in all regimes:

        Rsym > (1/2) log(1 + (1 + 2g²)SNR) − max_{a ∈ ℤ²} Rcomp([1 g]ᵀ, a) − 1.

  • Via basic results from Diophantine approximation, we can approximate the sum capacity up to an outage set.

  • Sample Result: In the strong interference regime,

        (1/4) log⁺(g²SNR) − c/2 − 3 ≤ Csym ≤ (1/4) log⁺(g²SNR) + 1

    for all channel gains except for an outage set whose measure is a fraction 2^{−c} of the interval 1 < |g| < √SNR, for any c > 0.
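The sample result's bounds are straightforward to evaluate numerically; the gap between them is the constant c/2 + 4, independent of g and SNR. A quick sketch, with the bound form transcribed from the slide and illustrative values of g, SNR, and the outage parameter c:

```python
import math

def log2_plus(x):
    """log+ in base 2: max(0, log2(x))."""
    return max(0.0, math.log2(x))

def csym_bounds(g, snr, c):
    """Lower/upper bounds on the symmetric capacity, strong regime."""
    base = 0.25 * log2_plus(g * g * snr)
    return base - c / 2 - 3, base + 1

lo, hi = csym_bounds(g=4.0, snr=1000.0, c=2.0)
print(lo, hi)  # the two bounds differ by the constant gap c/2 + 4 = 5
```

Larger c shrinks the outage set (its measure falls as 2^{−c}) at the price of a looser lower bound, which is the trade-off the slide's "for any c > 0" is pointing at.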

slide-122
SLIDE 122


Generalizations

  • What about beyond the symmetric case?
  • Ntranos-Cadambe-Nazer-Caire ’13: Framework for lattice interference alignment for any setting where we have “stream-by-stream” alignment.

slide-124
SLIDE 124

Algebraic Structure in Network Information Theory

Some topics we did not have a chance to cover:

  • Relaying: Wilson-Narayanan-Pfister-Sprintson ’10, Nam-Chung-Lee ’10, ’11, Goseling-Gastpar-Weber ’11, Song-Devroye ’13, Nokleby-Aazhang ’12
  • Cellular and MIMO Networks: Sanderovich-Peleg-Shamai ’11, Nazer-Sanderovich-Gastpar-Shamai ’09, Zhan-Nazer-Erez-Gastpar ’12, Hong-Caire ’13, Ordentlich-Erez ’13
  • Distributed Dirty-Paper Coding: Philosof-Zamir ’09, Philosof-Zamir-Erez-Khisti ’11, Wang ’12
  • Joint Source-Channel Coding: Kochman-Zamir ’09, Nazer-Gastpar ’07, ’08, Soundararajan-Vishwanath ’12
  • Physical-Layer Secrecy: He-Yener ’11, ’14, Kashyap-Shashank-Thangaraj ’12

slide-125
SLIDE 125

Concluding Remarks

  • Codes with algebraic structure can sometimes outperform i.i.d. ensembles.
  • Ongoing efforts towards developing an algebraic framework for network source and channel coding.
  • Preliminary efforts have focused on the Gaussian case, but discrete memoryless analogues of these results now seem within reach.
  • An open question: How should we choose the underlying algebraic structure?
  • Tutorial slides from the 2014 European School of Information Theory are available on my website.
  • Upcoming textbook by Ram Zamir on “Lattice Coding for Signals and Networks.”