

slide-1
SLIDE 1

Outline

  • I. Discrete Alphabets
  • II. AWGN Channels
  • III. Network Applications
slide-2
SLIDE 2

Gaussian Multiple-Access Channel

Rate Region

R1 < (1/2) log(1 + P1/N)

R2 < (1/2) log(1 + P2/N)

R1 + R2 < (1/2) log(1 + (P1 + P2)/N)

[Diagram: encoders E1, E2 map w1, w2 to x1, x2; the channel adds noise z; decoder D outputs ŵ1, ŵ2.]

Power constraints P1, P2. Noise variance N.

Successive Cancellation

Corner point (on axes R1, R2):

( (1/2) log(1 + P1/(N + P2)), (1/2) log(1 + P2/N) )
  • 1. Decode x1, treating x2 as noise.
  • 2. Subtract x1 from y.
  • 3. Decode x2.
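The rate-region bounds and the successive-cancellation corner point can be sketched numerically; the helper names and example powers below are my own choices, not from the slides.

```python
import math

def c(snr):
    # AWGN capacity term (1/2) log2(1 + snr), in bits per channel use
    return 0.5 * math.log2(1 + snr)

def sc_corner_point(P1, P2, N):
    # Successive cancellation: decode x1 treating x2 as noise, then decode x2 cleanly.
    R1 = c(P1 / (N + P2))
    R2 = c(P2 / N)
    return R1, R2

P1, P2, N = 4.0, 2.0, 1.0
R1, R2 = sc_corner_point(P1, P2, N)
# The corner point satisfies the individual bounds and lies on the sum-rate boundary.
assert R1 <= c(P1 / N) and R2 <= c(P2 / N)
assert abs((R1 + R2) - c((P1 + P2) / N)) < 1e-12
```

The final assertion checks that the corner point sits exactly on the sum-rate face of the pentagon.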
slide-3
SLIDE 3

Lattice Achievability “Recipe” – Multiple-Access Corner Point

Codebook Generation. Select a nested lattice code:

  • Coarse lattice Λ = BZ^n for shaping.
  • Fine lattice from q-ary linear code G for coding.

Encoding

  • Map messages w1, w2 to lattice points t1, t2:
    Tx 1: t1 = [BγGw1] mod Λ
    Tx 2: t2 = [BγGw2] mod Λ
  • Choose independent dithers d1, d2 uniformly over the Voronoi region V.
  • Add the dithers to the lattice points and take mod Λ to get the transmitted signals:
    Tx 1: x1 = [t1 + d1] mod Λ
    Tx 2: x2 = [t2 + d2] mod Λ
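A toy numeric sketch of the encoding map t = [BγGw] mod Λ, using the cubic coarse lattice Λ = Z^n and a tiny q-ary code; the field size, generator matrix, and scaling below are my own illustrative choices.

```python
import numpy as np

q = 5                                  # field size (illustrative)
gamma = 1.0 / q                        # scaling so the fine lattice refines Λ
B = np.eye(2)                          # coarse lattice Λ = B Z^n, here just Z^2
G = np.array([[1, 0], [2, 1]])         # generator matrix of a q-ary linear code (my choice)

def mod_lattice(x):
    # [x] mod Λ for Λ = Z^n: reduce each coordinate to the cube [0, 1)
    return x - np.floor(x)

def encode(w):
    # t = [B γ G w] mod Λ; the codeword Gw is computed over F_q first
    return mod_lattice(B @ (gamma * ((G @ w) % q)))

t1 = encode(np.array([3, 1]))
# Every codeword is a fine-lattice point (a multiple of 1/q) inside the Voronoi cell of Λ
assert np.allclose((t1 * q) % 1.0, 0.0)
assert np.all((t1 >= 0) & (t1 < 1))
```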

slide-9
SLIDE 9

Lattice Achievability “Recipe” – Multiple-Access Corner Point

Receiver observes y = x1 + x2 + z.

Decoding

  • Scale by α.
  • Subtract dither d1.
  • Take mod Λ.
  • Decode to the nearest codeword.

[αy − d1] mod Λ = [α(x1 + x2 + z) − d1] mod Λ
= [x1 − d1 + αz + αx2 − (1 − α)x1] mod Λ
= [ [t1 + d1] mod Λ − d1 + αz + αx2 − (1 − α)x1 ] mod Λ
= [ t1 + αz + αx2 − (1 − α)x1 ] mod Λ

Effective noise: αz + αx2 − (1 − α)x1.
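The chain of equalities can be checked numerically, with the cubic lattice Λ = Z^n standing in for a good nested lattice; all parameter values here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
mod_lat = lambda x: x - np.floor(x)     # [x] mod Λ for the cubic lattice Λ = Z^n

t1, t2 = rng.random(n), rng.random(n)   # stand-ins for lattice codewords in [0, 1)^n
d1, d2 = rng.random(n), rng.random(n)   # dithers, uniform over the Voronoi region
x1, x2 = mod_lat(t1 + d1), mod_lat(t2 + d2)
z = 0.01 * rng.standard_normal(n)
y = x1 + x2 + z
alpha = 0.6

lhs = mod_lat(alpha * y - d1)
rhs = mod_lat(t1 + alpha * z + alpha * x2 - (1 - alpha) * x1)
# Compare modulo the lattice (circular distance), so boundary rounding cannot bite
diff = (lhs - rhs) % 1.0
assert np.allclose(np.minimum(diff, 1.0 - diff), 0.0, atol=1e-9)
```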

slide-15
SLIDE 15

Lattice Achievability “Recipe” – Multiple-Access Corner Point

  • Effective noise after scaling is N_EFFEC = α²(N + P2) + (1 − α)²P1.
  • Minimized by setting α to be the MMSE coefficient:

    α_MMSE = P1 / (N + P1 + P2)

  • Plugging in, we get

    N_EFFEC = (N + P2)P1 / (N + P1 + P2)

  • Resulting rate is

    R = (1/2) log(P1 / N_EFFEC) = (1/2) log(1 + P1/(N + P2))

  • To obtain different rates for x1 and x2, use nested linear codes G1 and G2 inside the Voronoi region V.
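A quick numeric check (example powers are my own) that the MMSE coefficient minimizes the effective noise and reproduces the corner-point rate:

```python
import math

def n_eff(alpha, P1, P2, N):
    # Effective noise after the receiver scales by alpha
    return alpha ** 2 * (N + P2) + (1 - alpha) ** 2 * P1

P1, P2, N = 4.0, 2.0, 1.0
a_mmse = P1 / (N + P1 + P2)

# The MMSE coefficient beats a grid of alternatives
assert all(n_eff(a_mmse, P1, P2, N) <= n_eff(i / 100, P1, P2, N) + 1e-12
           for i in range(101))

# The resulting rate equals the corner-point rate (1/2) log2(1 + P1/(N + P2))
R = 0.5 * math.log2(P1 / n_eff(a_mmse, P1, P2, N))
assert abs(R - 0.5 * math.log2(1 + P1 / (N + P2))) < 1e-12
```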

slide-16
SLIDE 16

AWGN Two-Way Relay Channel – Symmetric Rates

[Diagram: User 1 has w1 and wants w2; User 2 has w2 and wants w1; the two users communicate through a relay.]

slide-17
SLIDE 17

AWGN Two-Way Relay Channel – Symmetric Rates

[Diagram: User 1 sends x1 and User 2 sends x2 over a multiple-access channel with noise z_MAC; the relay observes y_MAC and broadcasts x_BC; User 1 receives through noise z1 and decodes ŵ2, User 2 receives through noise z2 and decodes ŵ1.]

  • Equal power constraints P.
  • Equal noise variances N.
  • Equal rates R.
  • Upper Bound:

    R ≤ (1/2) log(1 + P/N)

  • Decode-and-Forward: Relay decodes w1, w2 and transmits w1 ⊕ w2.

    R = (1/4) log(1 + 2P/N)

  • Compress-and-Forward: Relay transmits quantized y.

    R = (1/2) log(1 + (P/N) · P/(3P + N))

slide-19
SLIDE 19

AWGN Two-Way Relay Channel – Symmetric Rates

[Figure: Rate per user vs. SNR (5–20 dB): Upper Bound, Compress, Decode.]
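Using the rate expressions from the slides (with snr = P/N, so N is normalized to 1), the strategies can be compared at any SNR; the helper name is my own.

```python
import math

def twrc_rates(snr):
    # Upper bound, decode-and-forward, compress-and-forward (snr = P/N)
    upper = 0.5 * math.log2(1 + snr)
    decode = 0.25 * math.log2(1 + 2 * snr)
    compress = 0.5 * math.log2(1 + snr * snr / (3 * snr + 1))
    return upper, decode, compress

u, d, c = twrc_rates(10 ** (20 / 10))   # 20 dB
assert d < u and c < u                  # both strategies fall short of the upper bound
```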

slide-20
SLIDE 20

Decoding the Sum of Lattice Codewords

Encoders use the same nested lattice codebook and transmit lattice codewords: x1 = t1, x2 = t2.

[Diagram: t1 → E1 → x1, t2 → E2 → x2; the channel adds z; decoder D outputs v̂, where v = [t1 + t2] mod Λ.]

Decoder recovers the modulo sum:

[y] mod Λ = [x1 + x2 + z] mod Λ
= [t1 + t2 + z] mod Λ
= [ [t1 + t2] mod Λ + z ] mod Λ     (distributive law)
= [v + z] mod Λ

R = (1/2) log(P/N)

slide-21
SLIDE 21

Decoding the Sum of Lattice Codewords – MMSE Scaling

Encoders use the same nested lattice codebook and transmit dithered codewords: x1 = [t1 + d1] mod Λ, x2 = [t2 + d2] mod Λ.

[Diagram: t1 → E1 → x1, t2 → E2 → x2; the channel adds z; decoder D outputs v̂, where v = [t1 + t2] mod Λ.]

Decoder scales by α, removes the dithers, and recovers the modulo sum:

[αy − d1 − d2] mod Λ = [α(x1 + x2 + z) − d1 − d2] mod Λ
= [x1 + x2 − (1 − α)(x1 + x2) + αz − d1 − d2] mod Λ
= [ [t1 + t2] mod Λ − (1 − α)(x1 + x2) + αz ] mod Λ
= [v − (1 − α)(x1 + x2) + αz] mod Λ

Effective noise: N_EFFEC = (1 − α)²·2P + α²N

slide-22
SLIDE 22

Decoding the Sum of Lattice Codewords – MMSE Scaling

  • Effective noise after scaling is N_EFFEC = (1 − α)²·2P + α²N.
  • Minimized by setting α to be the MMSE coefficient:

    α_MMSE = 2P / (N + 2P)

  • Plugging in, we get

    N_EFFEC = 2NP / (N + 2P)

  • Resulting rate is

    R = (1/2) log(P / N_EFFEC) = (1/2) log(1/2 + P/N)

  • Getting the full “one plus” term is an open challenge. It does not seem possible with nested lattices.
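Plugging the MMSE coefficient back in confirms the (1/2 + P/N) rate; the example values are my own.

```python
import math

P, N = 10.0, 1.0
alpha = 2 * P / (N + 2 * P)                       # MMSE coefficient
n_eff = (1 - alpha) ** 2 * 2 * P + alpha ** 2 * N
assert abs(n_eff - 2 * N * P / (N + 2 * P)) < 1e-12
R = 0.5 * math.log2(P / n_eff)
assert abs(R - 0.5 * math.log2(0.5 + P / N)) < 1e-12
```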

slide-23
SLIDE 23

From Messages to Lattice Points and Back

  • Map messages to lattice points:

    t1 = φ(w1) = [BγGw1] mod Λ
    t2 = φ(w2) = [BγGw2] mod Λ

  • The mapping between finite field messages and lattice codewords preserves linearity:

    φ⁻¹( [t1 + t2] mod Λ ) = w1 ⊕ w2

  • This means that after decoding a mod-Λ equation of lattice points, we can immediately recover the finite field equation of the messages. See Nazer-Gastpar ’11 for more details.
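The linearity-preserving property can be verified exhaustively in a one-dimensional toy version of the map, φ(w) = γw mod 1 with γ = 1/q; the (prime) field size is my own choice.

```python
q = 7                  # prime field size (illustrative)
gamma = 1.0 / q

def phi(w):            # scalar Construction-A map: message symbol -> lattice point in [0, 1)
    return (gamma * w) % 1.0

def phi_inv(t):        # inverse map back to the field
    return round(t * q) % q

for w1 in range(q):
    for w2 in range(q):
        v = (phi(w1) + phi(w2)) % 1.0          # [t1 + t2] mod Λ, with coarse lattice Λ = Z
        assert phi_inv(v) == (w1 + w2) % q     # equals w1 ⊕ w2 over F_q
```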

slide-24
SLIDE 24

Finite Field Computation over a Gaussian MAC

Map messages to lattice points: t1 = φ(w1), t2 = φ(w2). Transmit dithered codewords: x1 = [t1 + d1] mod Λ, x2 = [t2 + d2] mod Λ.

[Diagram: w1 → E1 → x1, w2 → E2 → x2; the channel adds z; decoder D outputs û, where u = w1 ⊕ w2.]

  • If the decoder can recover [t1 + t2] mod Λ, it can also get the sum of the messages: w1 ⊕ w2 = φ⁻¹( [t1 + t2] mod Λ ).
  • Achievable rate: R = (1/2) log(1/2 + P/N).
slide-25
SLIDE 25

AWGN Two-Way Relay Channel – Symmetric Rates

[Diagram: User 1 has w1 and wants w2; User 2 has w2 and wants w1; the two users communicate through the relay (MAC noise z_MAC, broadcast noises z1, z2).]

  • Equal power constraints P.
  • Equal noise variances N.
  • Equal rates R.
  • Upper Bound:

    R ≤ (1/2) log(1 + P/N)

  • Compute-and-Forward: Relay decodes w1 ⊕ w2 and retransmits.

    R = (1/2) log(1/2 + P/N)

  • Wilson-Narayanan-Pfister-Sprintson ’10: Applies nested lattice codes to the two-way relay channel.

slide-27
SLIDE 27

AWGN Two-Way Relay Channel – Symmetric Rates

[Figure: Rate per user vs. SNR (5–20 dB): Upper Bound, Compute, Compress, Decode.]

slide-28
SLIDE 28

Compute-and-Forward Illustration

[Illustration: w1 and w2 are encoded to x1 and x2; the channel adds z; the receiver decodes w1 ⊕ w2 from y.]


slide-30
SLIDE 30

Random i.i.d. codes are not good for computation

2^(nR) codewords each; 2^(2nR) possible sums of codewords.

[Illustration: codewords x1 and x2 are transmitted; the channel adds z to produce y.]

slide-34
SLIDE 34

Multiple-Access Networks

[Diagram: message w is multicast through a network with noises Z1, Z2, Z3; each destination decodes ŵ.]

  • Multicast demands
  • Multi-access interference
  • No broadcast constraints
  • Compute-and-forward is well-suited for multicasting over multiple-access networks.
  • Equal transmitter powers: Nazer-Gastpar ’07. Unequal transmitter powers: Nam-Chung-Lee ’09.

slide-35
SLIDE 35

Exercise: Sum-Difference Network

[Diagram: transmitters send x1 and x2; receiver 1 observes y1 (noise z1) and receiver 2 observes y2 (noise z2); both x1 and x2 must be recovered over backhaul links.]

Find the achievable rate for compute-and-forward. What is the requirement on the backhaul rates?

slide-36
SLIDE 36

Outline

  • I. Discrete Alphabets
  • II. AWGN Channels
  • III. Network Applications
slide-37
SLIDE 37

Dirty Paper Coding

s is interference known noncausally to the encoder. Assume s is i.i.d. Gaussian with very large variance P_S.

Erez-Shamai-Zamir ’05:

Encoder subtracts αs, dithers, and takes mod Λ: x = [t − αs + d] mod Λ.

[Diagram: w → E → x; the channel adds s and z; decoder D outputs ŵ.]

Decoder scales by α, removes the dither, takes mod Λ, and recovers t. The interference is cancelled:

[αy − d] mod Λ = [x + αs + αz − d − (1 − α)x] mod Λ
= [ [t − αs + d] mod Λ + αs + αz − d − (1 − α)x ] mod Λ
= [ t + αz − (1 − α)x ] mod Λ
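The interference cancellation can be verified numerically with Λ = Z^n as the coarse lattice; all parameter values here are my own, and the check is a sketch, not the scheme itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
mod_lat = lambda x: x - np.floor(x)     # [x] mod Λ for Λ = Z^n

t = rng.random(n)                       # lattice codeword stand-in
d = rng.random(n)                       # dither
s = 100.0 * rng.standard_normal(n)      # strong interference, known at the encoder
z = 0.01 * rng.standard_normal(n)
alpha = 0.9

x = mod_lat(t - alpha * s + d)          # encoder pre-subtracts alpha * s
y = x + s + z                           # channel adds the full interference anyway
lhs = mod_lat(alpha * y - d)
rhs = mod_lat(t + alpha * z - (1 - alpha) * x)   # no trace of s remains
diff = (lhs - rhs) % 1.0
assert np.allclose(np.minimum(diff, 1.0 - diff), 0.0, atol=1e-6)
```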
slide-41
SLIDE 41

Dirty Gaussian Multiple-Access Channel

[Diagram: encoder E1 knows s1 and sends x1; encoder E2 knows s2 and sends x2; the channel adds s1, s2, and z; decoder D outputs ŵ1, ŵ2.]

Philosof-Zamir-Erez-Khisti ’11:

  • Encoder 1 knows interference s1.
  • Encoder 2 knows interference s2.
  • Need to cancel out interference in a distributed fashion.
  • Assume i.i.d. Gaussian interference with very large variance PS.

Random i.i.d. methods yield rate that goes to 0 as PS goes to infinity.

slide-42
SLIDE 42

Exercise: Dirty Gaussian Multiple-Access Channel

Subtract (part of) the interference signals ahead of time:

x1 = [t1 − αs1 + d1] mod Λ
x2 = [t2 − αs2 + d2] mod Λ

Decoder removes the dithers:

[αy − d1 − d2] mod Λ = [α(x1 + x2 + s1 + s2 + z) − d1 − d2] mod Λ
= [x1 + x2 + α(s1 + s2) − (1 − α)(x1 + x2) + αz − d1 − d2] mod Λ
= [ t1 + t2 − (1 − α)(x1 + x2) + αz ] mod Λ

Select α = 2P/(2P + N) to obtain

R1 + R2 ≤ (1/2) log⁺(1/2 + P/N)
slide-45
SLIDE 45

Computation over Fading Channels

Transmitters do not know the channel realization. Encoders use the same nested lattice codebook. Transmit dithered codewords: xℓ = [tℓ + dℓ] mod Λ.

[Diagram: encoders E1, …, EK map t1, …, tK to x1, …, xK, sent over gains h1, …, hK; the channel adds z; decoder D outputs v̂.]

  • Decoder removes the dithers and recovers an integer combination

    v = [ Σ_{ℓ=1}^{K} aℓ tℓ ] mod Λ

  • The receiver can use its knowledge of the channel gains to match the equation coefficients aℓ to the channel coefficients hℓ.

slide-46
SLIDE 46

Distributive Law

  • The distributive law also holds for integer combinations. Let a, b ∈ Z.

    [ a·[x1] mod Λ + b·[x2] mod Λ ] mod Λ
    = [ a(x1 − Q_Λ(x1)) + b(x2 − Q_Λ(x2)) ] mod Λ
    = [ ax1 + bx2 − aQ_Λ(x1) − bQ_Λ(x2) ] mod Λ
    = [ ax1 + bx2 ] mod Λ

  • The last step follows since aQ_Λ(x1) and bQ_Λ(x2) are elements of the lattice Λ.
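The integer-combination distributive law can be checked directly for the cubic lattice, where Q_Λ(x) = floor(x); the coefficients and seed are my own.

```python
import numpy as np

rng = np.random.default_rng(2)
mod_lat = lambda x: x - np.floor(x)     # [x] mod Λ for Λ = Z^n; Q_Lambda(x) = floor(x)

x1, x2 = 10 * rng.standard_normal(4), 10 * rng.standard_normal(4)
a, b = 2, -3                            # integer equation coefficients

lhs = mod_lat(a * mod_lat(x1) + b * mod_lat(x2))
rhs = mod_lat(a * x1 + b * x2)
diff = (lhs - rhs) % 1.0
assert np.allclose(np.minimum(diff, 1.0 - diff), 0.0, atol=1e-9)
```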

slide-47
SLIDE 47

Computation over Fading Channels

  • Transmit dithered codewords xℓ = [tℓ + dℓ] mod Λ.
  • Decoder removes the dithers and recovers an integer combination:

    [ y − Σ_{ℓ=1}^{K} aℓ dℓ ] mod Λ
    = [ Σ_{ℓ=1}^{K} hℓ xℓ + z − Σ_{ℓ=1}^{K} aℓ dℓ ] mod Λ
    = [ Σ_{ℓ=1}^{K} aℓ(xℓ − dℓ) + Σ_{ℓ=1}^{K} (hℓ − aℓ)xℓ + z ] mod Λ
    = [ [ Σ_{ℓ=1}^{K} aℓ tℓ ] mod Λ + Σ_{ℓ=1}^{K} (hℓ − aℓ)xℓ + z ] mod Λ     (distributive law)

    Effective noise: Σ_{ℓ=1}^{K} (hℓ − aℓ)xℓ + z.

slide-48
SLIDE 48

Computation over Fading Channels – Effective Noise

  • Effective noise due to mismatch between the channel coefficients h = [h1 · · · hK]^T and the equation coefficients a = [a1 · · · aK]^T:

    N_EFFEC = N + P‖h − a‖²

    R = (1/2) log( P / (N + P‖h − a‖²) )

slide-49
SLIDE 49

Computation over Fading Channels – Effective Noise

  • Effective noise due to mismatch between the channel coefficients h = [h1 · · · hK]^T and the equation coefficients a = [a1 · · · aK]^T:

    N_EFFEC = N + P‖h − a‖²
    R = (1/2) log( P / (N + P‖h − a‖²) )

  • Can do better with MMSE scaling:

    N_EFFEC = α²N + P‖αh − a‖²

    R = max_α (1/2) log( P / (α²N + P‖αh − a‖²) )
      = (1/2) log( (N + P‖h‖²) / (N‖a‖² + P(‖h‖²‖a‖² − (h^T a)²)) )

  • See Nazer-Gastpar ’11 for more details.
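The closed-form rate with MMSE scaling can be cross-checked against a brute-force search over α; the channel vector matches the (h, a) example used later in the deck, and the power values are mine.

```python
import numpy as np

def rate_cf(h, a, P, N):
    # Closed-form computation rate with MMSE scaling (positive part omitted)
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = N + P * (h @ h)
    den = N * (a @ a) + P * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return 0.5 * np.log2(num / den)

h, a, P, N = [1.4, 2.1], [2.0, 3.0], 10.0, 1.0
closed = rate_cf(h, a, P, N)

# Brute-force the inner maximization over the scaling alpha
alphas = np.linspace(0.5, 2.5, 20001)
hv, av = np.array(h), np.array(a)
n_eff = alphas ** 2 * N + P * ((alphas[:, None] * hv - av) ** 2).sum(axis=1)
brute = 0.5 * np.log2(P / n_eff.min())
assert abs(closed - brute) < 1e-4
```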
slide-50
SLIDE 50

Computation over Fading Channels – Special Cases

  • The rate expression simplifies in some special cases.

    R = (1/2) log( (N + P‖h‖²) / (N‖a‖² + P(‖h‖²‖a‖² − (h^T a)²)) )

  • Integer channels, h = a:

    R = (1/2) log( 1/‖a‖² + P/N )

  • Recovering a single message: set a = δm, the m-th unit vector.

    R = (1/2) log( 1 + hm²P / (N + P Σ_{ℓ≠m} hℓ²) )

slide-51
SLIDE 51

Finite Field Computation over Fading Channels

Transmitters do not know the channel realization. Encoders use the same nested lattice codebook. Transmit dithered codewords: xℓ = [tℓ + dℓ] mod Λ.

[Diagram: encoders E1, …, EK map w1, …, wK to x1, …, xK, sent over gains h1, …, hK; the channel adds z; decoder D outputs û, where u = ⊕_{ℓ=1}^{K} aℓ wℓ.]

  • Recall that the mapping tℓ = φ(wℓ) between messages and lattice points preserves linearity:

    φ⁻¹( [ Σ_{ℓ=1}^{K} aℓ tℓ ] mod Λ ) = [ Σ_{ℓ=1}^{K} aℓ wℓ ] mod q = ⊕_{ℓ=1}^{K} aℓ wℓ

  • Digital interface that fits well with network coding.
slide-52
SLIDE 52

Computation Coding

All users pick the same nested lattice code:

slide-53
SLIDE 53

Computation Coding

Choose messages over the field: wℓ ∈ F_q^k.

slide-54
SLIDE 54

Computation Coding

Map wℓ to the lattice point tℓ = φ(wℓ).

slide-55
SLIDE 55

Computation Coding

Transmit the lattice points over the channel. (Example: h = [1.4, 2.1], a = [2, 3].)

slide-57
SLIDE 57

Computation Coding

Lattice codewords are scaled by the channel coefficients.

slide-58
SLIDE 58

Computation Coding

Scaled codewords are added together, plus noise.

slide-60
SLIDE 60

Computation Coding

Extra noise penalty for non-integer channel coefficients. Effective noise: N + P‖h − a‖².

slide-61
SLIDE 61

Computation Coding

Scale the output by α to reduce the non-integer noise penalty. Effective noise: α²N + P‖αh − a‖².

slide-63
SLIDE 63

Computation Coding

Decode to the closest lattice point.

slide-64
SLIDE 64

Computation Coding

Compute the sum of lattice points modulo the coarse lattice.

slide-65
SLIDE 65

Computation Coding

Map back to an equation of message symbols over the field: ⊕_{ℓ=1}^{K} aℓ wℓ.
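The whole pipeline can be sketched in one scalar example over an integer channel (gains equal to a, so there is no non-integer penalty); the field size, messages, and coefficients are my own illustrative picks.

```python
import numpy as np

q = 11                        # prime field size (illustrative)
mod1 = lambda x: x % 1.0      # [x] mod Λ for the coarse lattice Λ = Z

w1, w2 = 4, 9                             # message symbols in F_q
t1, t2 = (w1 / q) % 1.0, (w2 / q) % 1.0   # lattice points t = phi(w)
a = np.array([2, 3])                      # integer coefficients; channel gains equal a here
z = 0.001                                 # small noise sample

y = a[0] * t1 + a[1] * t2 + z
v_hat = round(mod1(y) * q) % q            # reduce mod Λ, decode to the fine lattice / field
assert v_hat == (a[0] * w1 + a[1] * w2) % q   # recovers 2*w1 + 3*w2 over F_q
```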

slide-66
SLIDE 66

Computation over Fading Channels – Multiple Receivers

[Diagram: encoders E1, …, EK send x1, …, xK through channel matrix H; receiver k observes yk (noise zk), and decoder Dk outputs ûk.]

  • Equal rates R. No channel state information (CSI) at the transmitters.
  • Receivers use their CSI to select coefficients and decode linear equations:

    uk = ⊕_{ℓ=1}^{K} akℓ wℓ

  • Reliable decoding is possible if

    R < min_{k,ℓ : akℓ ≠ 0} (1/2) log( (N + P‖hk‖²) / (N‖ak‖² + P(‖hk‖²‖ak‖² − (hk^T ak)²)) )

slide-67
SLIDE 67

Case Study – Hadamard Relay Network

[Diagram: encoders E1, …, EK send x1, …, xK through H to K relays (noises z1, …, zK); relay k forwards xkR over an independent link (noise zkR); the final decoder outputs ŵ1, …, ŵK.]

  • Equal rates R. H is a Hadamard matrix, HH^T = KI.

    Upper Bound:           (1/2) log(1 + P/N)
    Compute-and-Forward:   (1/2) log(1/K + P/N)
    Compress-and-Forward:  (1/2) log(1 + (P/N) · P/(N + KP))
    Decode-and-Forward:    (1/(2K)) log(1 + KP/N)

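The four Hadamard-network rate expressions can be evaluated side by side; the helper name and the (P, N, K) values are my own.

```python
import math

def hadamard_rates(P, N, K):
    # Rate expressions from the slide, per strategy
    upper = 0.5 * math.log2(1 + P / N)
    compute = 0.5 * math.log2(1 / K + P / N)
    compress = 0.5 * math.log2(1 + (P / N) * P / (N + K * P))
    decode = (1 / (2 * K)) * math.log2(1 + K * P / N)
    return upper, compute, compress, decode

u, cf, cp, df = hadamard_rates(P=100.0, N=1.0, K=4)
assert max(cf, cp, df) <= u          # the upper bound dominates
assert cf > cp > df                  # at this SNR, compute-and-forward is best
```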

slide-69
SLIDE 69

Computation over Fading Channels – No CSIT

[Diagram: encoders E1, E2, E3 map w1, w2, w3 to x1, x2, x3, sent over gains h1, h2, h3; the channel adds z; decoder D outputs û, where u = ⊕_{ℓ=1}^{K} aℓ wℓ.]

  • Three transmitters that do not know the fading coefficients.
  • Average rate plotted for i.i.d. Gaussian fading. The relay either decodes some linear function of the messages or an individual message.

[Figure: Average rate (bits per channel use) vs. transmitter power (5–25 dB): Decode an Equation, Decode a Message, Interference as Noise.]

slide-70
SLIDE 70

Computation over Fading Channels – No CSIT

  • Receiver observes y = x1 + hx2 + z.
  • Recovers aw1 ⊕ bw2 for a, b ≠ 0.

[Figures: message rate R vs. channel coefficient h ∈ (0, 1] at 10, 20, 30, 40, and 50 dB: Upper Bound, Compute, Decode Both.]

slide-75
SLIDE 75

Diophantine Approximation

  • Choose equation coefficients to maximize the rate:

    R_COMP = max_{a ∈ Z^K} max_α (1/2) log( P / (α²N + P‖αh − a‖²) )

  • Equivalently, min_{a ∈ Z^K} min_α α²N + P‖αh − a‖².
  • Closely connected to Diophantine approximation, i.e. approximating irrationals with rationals.


slide-77
SLIDE 77

Diophantine Approximation

  • For each receiver, choose equation coefficients to maximize the rate:

    R_COMP = max_{a ∈ Z^K} max_α (1/2) log( P / (α²N + P‖αh − a‖²) )

  • Equivalently, min_{a ∈ Z^K} min_α α²N + P‖αh − a‖².
  • Closely connected to Diophantine approximation, i.e. approximating irrationals with rationals.
  • When the K linear combinations decoded by the K receivers are required to be linearly independent, Niesen-Whiting ’11 shows that

    DoF = lim_{P→∞} R_COMP / ((1/2) log(1 + P)) ≤ 2

  • It also shows that combining compute-and-forward with interference alignment can raise the DoF to K (but this requires channel state information at the transmitters).

slide-78
SLIDE 78

Static Linear Pre-processing: Integer-forcing Receiver

[Diagram: encoders E1, …, EK send x1, …, xK through channel matrix H; the receiver applies a linear pre-processing matrix B to the observations y1, …, yK (noises z1, …, zK) before decoders D1, …, DK output û1, …, ûK.]

slide-80
SLIDE 80

Integer-forcing: Achievable rate

Theorem

Consider the MIMO channel with channel matrix H ∈ R^{2N×2M}. Under the integer-forcing architecture, the following rate is achievable:

R < min_m 2M·R(H, am, bm)

R(H, am, bm) = (1/2) log⁺( SNR / (‖bm‖² + SNR·‖H^T bm − am‖²) )

for any full-rank integer matrix A ∈ Z^{2M×2M} and any matrix B ∈ R^{2M×2N}.
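The theorem's rate expression can be evaluated directly. The example channel below matches the 2 × 2 matrix used later in the deck; the integer matrix A, the exact zero-forcing choice of B, and the SNR are my own illustrative picks.

```python
import numpy as np

def if_rate(H, A, B, snr):
    # Minimum per-stream rate from R(H, a_m, b_m) = (1/2) log2+( snr / (||b_m||^2 + snr*||H^T b_m - a_m||^2) )
    rates = []
    for a, b in zip(A, B):                       # rows a_m of A and b_m of B
        denom = b @ b + snr * np.sum((H.T @ b - a) ** 2)
        rates.append(max(0.0, 0.5 * np.log2(snr / denom)))
    return min(rates)

H = np.array([[0.7, 1.3], [0.8, 1.5]])           # near-singular example channel
snr = 1e6
A = np.array([[1.0, 2.0], [1.0, 1.0]])           # a full-rank integer matrix (my choice)
B = A @ np.linalg.inv(H)                         # zero-forcing rows: H^T b_m = a_m exactly
r = if_rate(H, A, B, snr)
assert r > 0.0
```

With exact zero-forcing the residual term vanishes and only the noise amplification ‖b_m‖² remains; the point of integer-forcing is that choosing good integer rows of A keeps that amplification small even for ill-conditioned H.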

slide-81
SLIDE 81

Integer-forcing: Diophantine Approximation Problem

Example: H = [0.7 1.3; 0.8 1.5].

We find the eigenvalues and eigenvectors to be

λ1 ≈ 2.1954, v1 ≈ (−0.6561, −0.7547)^T
λ2 ≈ 0.0046, v2 ≈ (−0.8818, 0.4717)^T

slide-82
SLIDE 82

Integer-forcing: Diophantine Approximation Problem

Now there is more wiggle room in the Diophantine approximation...

[Figure: two scatter plots of integer coefficient pairs (a1, a2), with axes (1/λ_MAX)·v_MAX and (1/λ_MIN)·v_MIN.]

slide-83
SLIDE 83

Integer-forcing: Diophantine Approximation Problem

[Figure: Achievable rate (bits per real symbol) vs. SNR (10–50 dB): Joint ML, Integer, MMSE-SIC, MMSE, Decorrelator.]

slide-84
SLIDE 84

High-SNR (Example: DMT for 4 × 4 with Rayleigh fading)

[Figure: diversity gain d vs. multiplexing gain r: Joint ML, Integer, V-BLAST III, V-BLAST II, V-BLAST I, Decorrelator.]

slide-85
SLIDE 85

Concluding Remarks

  • Algebraic structure appears to play a role in Network Information Theory (good news or bad news?).
  • In particular, codes with algebraic structure lead to the highest known achievable rates for some communication scenarios of great interest.
  • We have only considered linear/lattice codes in the “compute-and-forward” channel coding perspective.
  • Similar insights apply to source coding and to joint source-channel coding.
  • Another (though related) form of algebraic structure appears in interference alignment.

slide-86
SLIDE 86

Resources

For an extended list of references (as well as proofs), see

  • B. Nazer and M. Gastpar, “Computation over multiple-access channels,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3498–3516, Oct. 2007.
  • B. Nazer and M. Gastpar, “Reliable physical layer network coding,” Proceedings of the IEEE, vol. 99, no. 3, pp. 438–460, March 2011.
  • B. Nazer and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.