SLIDE 1 Outline
- I. Discrete Alphabets
- II. AWGN Channels
- III. Network Applications
SLIDE 2 Gaussian Multiple-Access Channel
[Block diagram: w1 → E1 → x1, w2 → E2 → x2; y = x1 + x2 + z; decoder D outputs ŵ1, ŵ2.] Power constraints P1, P2. Noise variance N.
Rate Region
R1 < 1/2 log(1 + P1/N)
R2 < 1/2 log(1 + P2/N)
R1 + R2 < 1/2 log(1 + (P1 + P2)/N)
Successive Cancellation – Corner Point
R1 = 1/2 log(1 + P1/(N + P2))
R2 = 1/2 log(1 + P2/N)
- 1. Decode x1, treating x2 as noise.
- 2. Subtract x1 from y.
- 3. Decode x2.
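A quick numeric sanity check of the corner point (a sketch in Python; the power and noise values are illustrative): the two successive-cancellation rates add up exactly to the sum capacity.

```python
import numpy as np

# Illustrative values (not from the slides).
P1, P2, N = 4.0, 2.0, 1.0

R1 = 0.5 * np.log2(1 + P1 / (N + P2))  # decode x1, treating x2 as noise
R2 = 0.5 * np.log2(1 + P2 / N)         # after subtracting x1, decode x2
# The corner point sits on the sum-rate boundary:
assert np.isclose(R1 + R2, 0.5 * np.log2(1 + (P1 + P2) / N))
```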
SLIDES 3–8 Lattice Achievability “Recipe” – Multiple-Access Corner Point
Codebook Generation Select a nested lattice code:
- Coarse lattice Λ = BZ^n for shaping.
- Fine lattice from a q-ary linear code G for coding.
Encoding
- Map messages w1, w2 to lattice points t1, t2: t1 = [BγGw1] mod Λ (Tx 1), t2 = [BγGw2] mod Λ (Tx 2).
- Choose independent dithers d1, d2 uniformly over the Voronoi region V.
- Add the dithers to the lattice points and take mod Λ to get the transmitted signals: x1 = [t1 + d1] mod Λ (Tx 1), x2 = [t2 + d2] mod Λ (Tx 2).
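The recipe can be made concrete with a small numerical sketch. Everything below is a toy instantiation: the dimension, field size, and random generator matrix G are illustrative choices, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, q = 4, 2, 5                      # lattice dimension, message length, field size
B = 2.0 * np.eye(n)                    # coarse (shaping) lattice: Λ = B·Z^n
gamma = 1.0 / q                        # Construction-A scaling for the fine lattice
G = rng.integers(0, q, size=(n, k))    # q-ary linear code generator matrix

def mod_lattice(x, B):
    """[x] mod Λ: subtract the nearest point of Λ = B·Z^n."""
    return x - B @ np.round(np.linalg.solve(B, x))

def to_lattice_point(w):
    """φ(w) = [B·γ·G·w] mod Λ, with G·w taken over F_q."""
    return mod_lattice(B @ (gamma * ((G @ w) % q)), B)

# Independent dithers, uniform over the Voronoi region of B·Z^n (a cube here).
d1 = B @ (rng.random(n) - 0.5)
d2 = B @ (rng.random(n) - 0.5)

w1 = rng.integers(0, q, size=k)
w2 = rng.integers(0, q, size=k)
t1, t2 = to_lattice_point(w1), to_lattice_point(w2)
x1 = mod_lattice(t1 + d1, B)           # x1 = [t1 + d1] mod Λ
x2 = mod_lattice(t2 + d2, B)           # x2 = [t2 + d2] mod Λ
```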
SLIDES 9–14 Lattice Achievability “Recipe” – Multiple-Access Corner Point
Receiver observes y = x1 + x2 + z. Decoding (Rx)
- Scale by α.
- Subtract dither d1.
- Take mod Λ.
- Decode to the nearest codeword.
[αy − d1] mod Λ = [α(x1 + x2 + z) − d1] mod Λ
= [x1 − d1 + αz + αx2 − (1 − α)x1] mod Λ
= [[t1 + d1] mod Λ − d1 + αz + αx2 − (1 − α)x1] mod Λ
= [t1 + αz + αx2 − (1 − α)x1] mod Λ
where αz + αx2 − (1 − α)x1 is the effective noise.
SLIDE 15 Lattice Achievability “Recipe” – Multiple-Access Corner Point
- Effective noise after scaling is NEFFEC = α²(N + P2) + (1 − α)²P1.
- Minimized by setting α to be the MMSE coefficient:
αMMSE = P1/(N + P1 + P2)
NEFFEC = (N + P2)P1/(N + P1 + P2)
R = 1/2 log(P1/NEFFEC) = 1/2 log(1 + P1/(N + P2))
- To obtain different rates for x1 and x2, use nested linear codes G1 and G2 inside the Voronoi region V.
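Continuing the toy sketch above, the receiver side looks as follows (brute-force nearest-codeword search; with these arbitrary parameters correct decoding is not guaranteed, it only illustrates the steps).

```python
from itertools import product

P1, P2, N = 4.0, 2.0, 1.0              # illustrative values
alpha = P1 / (N + P1 + P2)             # MMSE coefficient α

z = np.sqrt(N) * rng.standard_normal(n)
y = x1 + x2 + z                        # receiver observation

s = mod_lattice(alpha * y - d1, B)     # [αy − d1] mod Λ = [t1 + eff. noise] mod Λ

# Enumerate all q^k codewords (fine for toy sizes) and pick the nearest
# in the mod-Λ sense.
codebook = {tuple(w): to_lattice_point(np.array(w))
            for w in product(range(q), repeat=k)}
w1_hat = min(codebook,
             key=lambda w: np.linalg.norm(mod_lattice(s - codebook[w], B)))
```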
SLIDE 16
AWGN Two-Way Relay Channel – Symmetric Rates
[Illustration: User 1 has w1 and wants w2; User 2 has w2 and wants w1; all traffic passes through the relay.]
SLIDES 17–18 AWGN Two-Way Relay Channel – Symmetric Rates
[Block diagram: User 1 sends x1 (message w1) and decodes ŵ2; User 2 sends x2 (message w2) and decodes ŵ1; the relay receives yMAC = x1 + x2 + zMAC and broadcasts xBC over links with noises z1, z2.]
- Equal power constraints P.
- Equal noise variances N.
- Equal rates R.
- Upper Bound: R ≤ 1/2 log(1 + P/N)
- Decode-and-Forward: Relay decodes w1, w2 and transmits w1 ⊕ w2. R = 1/4 log(1 + 2P/N)
- Compress-and-Forward: Relay transmits quantized y. R = 1/2 log(1 + P²/(N(3P + N)))
SLIDE 19
AWGN Two-Way Relay Channel – Symmetric Rates
[Plot: rate per user vs. SNR (dB) for the Upper Bound, Compress, and Decode strategies.]
SLIDE 20 Decoding the Sum of Lattice Codewords
Encoders use the same nested lattice codebook. Transmit lattice codewords: x1 = t1, x2 = t2.
[Block diagram: t1 → E1 → x1, t2 → E2 → x2; y = x1 + x2 + z; D outputs v̂, where v = [t1 + t2] mod Λ.]
The decoder recovers the modulo sum:
[y] mod Λ = [x1 + x2 + z] mod Λ = [t1 + t2 + z] mod Λ
= [[t1 + t2] mod Λ + z] mod Λ (Distributive Law)
= [v + z] mod Λ
R = 1/2 log(P/N)
SLIDE 21 Decoding the Sum of Lattice Codewords – MMSE Scaling
Encoders use the same nested lattice codebook. Transmit dithered codewords: x1 = [t1 + d1] mod Λ, x2 = [t2 + d2] mod Λ.
[Block diagram: t1 → E1 → x1, t2 → E2 → x2; y = x1 + x2 + z; D outputs v̂, where v = [t1 + t2] mod Λ.]
The decoder scales by α, removes the dithers, and recovers the modulo sum:
[αy − d1 − d2] mod Λ = [α(x1 + x2 + z) − d1 − d2] mod Λ
= [x1 + x2 − (1 − α)(x1 + x2) + αz − d1 − d2] mod Λ
= [[t1 + t2] mod Λ − (1 − α)(x1 + x2) + αz] mod Λ
= [v − (1 − α)(x1 + x2) + αz] mod Λ
Effective noise: −(1 − α)(x1 + x2) + αz, with NEFFEC = (1 − α)²·2P + α²N.
SLIDE 22 Decoding the Sum of Lattice Codewords – MMSE Scaling
- Effective noise after scaling is NEFFEC = (1 − α)²·2P + α²N.
- Minimized by setting α to be the MMSE coefficient:
αMMSE = 2P/(N + 2P)
NEFFEC = 2NP/(N + 2P)
R = 1/2 log(P/NEFFEC) = 1/2 log(1/2 + P/N)
- Getting the full “one plus” term is an open challenge. It does not seem possible with nested lattices.
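In the same toy setup as before, recovering the modulo sum with MMSE scaling is a small change at the receiver (parameters again illustrative; the codebook is reused from the earlier decoding sketch):

```python
P, N = 1.0 / 3.0, 0.01                 # per-dim power of the cubic Voronoi region; toy noise
alpha = 2 * P / (N + 2 * P)            # α_MMSE for decoding the sum

z = np.sqrt(N) * rng.standard_normal(n)
y = x1 + x2 + z
s = mod_lattice(alpha * y - d1 - d2, B)   # ≈ [v + effective noise] mod Λ

v = mod_lattice(t1 + t2, B)               # target: v = [t1 + t2] mod Λ
v_hat = min(codebook.values(),
            key=lambda t: np.linalg.norm(mod_lattice(s - t, B)))
```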
SLIDE 23 From Messages to Lattice Points and Back
- Map messages to lattice points:
t1 = φ(w1) = [BγGw1] mod Λ, t2 = φ(w2) = [BγGw2] mod Λ
- The mapping between finite field messages and lattice codewords preserves linearity:
φ⁻¹([t1 + t2] mod Λ) = w1 ⊕ w2
- This means that after decoding a mod-Λ equation of lattice points, we can immediately recover the finite field equation of the messages. See Nazer-Gastpar ’11 for more details.
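A numerical check of this linearity in the toy setup above, with φ⁻¹ implemented by brute-force table lookup purely for illustration:

```python
def phi_inv(t):
    """Invert φ by searching the toy codebook for a mod-Λ match."""
    for w, tw in codebook.items():
        if np.allclose(mod_lattice(t - tw, B), 0):
            return np.array(w)
    raise ValueError("not a fine-lattice codeword")

# φ⁻¹([t1 + t2] mod Λ) = w1 ⊕ w2 (componentwise addition over F_q)
assert np.array_equal(phi_inv(mod_lattice(t1 + t2, B)), (w1 + w2) % q)
```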
SLIDE 24 Finite Field Computation over a Gaussian MAC
Map messages to lattice points: t1 = φ(w1), t2 = φ(w2). Transmit dithered codewords: x1 = [t1 + d1] mod Λ, x2 = [t2 + d2] mod Λ.
[Block diagram: w1 → E1 → x1, w2 → E2 → x2; y = x1 + x2 + z; D outputs û, where u = w1 ⊕ w2.]
- If the decoder can recover [t1 + t2] mod Λ, it also gets the sum of the messages, w1 ⊕ w2 = φ⁻¹([t1 + t2] mod Λ), at rate
R = 1/2 log(1/2 + P/N)
SLIDES 25–26 AWGN Two-Way Relay Channel – Symmetric Rates
[Block diagram: User 1 (has w1, wants w2) sends x1; User 2 (has w2, wants w1) sends x2; the relay receives yMAC = x1 + x2 + zMAC and broadcasts xBC over links with noises z1, z2.]
- Equal power constraints P.
- Equal noise variances N.
- Equal rates R.
- Upper Bound: R ≤ 1/2 log(1 + P/N)
- Compute-and-Forward: Relay decodes w1 ⊕ w2 and retransmits it.
R = 1/2 log(1/2 + P/N)
- Wilson-Narayanan-Pfister-Sprintson ’10: Applies nested lattice codes to the two-way relay channel.
SLIDE 27
AWGN Two-Way Relay Channel – Symmetric Rates
[Plot: rate per user vs. SNR (dB) for the Upper Bound, Compute, Compress, and Decode strategies.]
SLIDES 28–29 Compute-and-Forward Illustration
[Illustration: codewords x1, x2 for messages w1, w2 pass through the channel, which adds noise z; from y the receiver decodes w1 ⊕ w2.]
SLIDES 30–33 Random i.i.d. codes are not good for computation
2^(nR) codewords each. 2^(2nR) possible sums of codewords.
[Illustration: the noisy sum y = x1 + x2 + z relative to the 2^(2nR) possible sums of codewords.]
SLIDE 34 Multiple-Access Networks
[Network diagram: message w is multicast over a multiple-access network with noises Z1, Z2, Z3 to three receivers, each producing ŵ; annotations: demands, interference, constraints.]
- Compute-and-forward is well-suited for multicasting over
multiple-access networks.
- Equal transmitter powers: Nazer-Gastpar ’07.
Unequal transmitter powers: Nam-Chung-Lee ’09.
SLIDE 35
Exercise: Sum-Difference Network
[Diagram: transmitters send x1, x2; one receiver observes y1 (noise z1), the other y2 (noise z2); both forward over backhaul links to a decoder that must recover x1 and x2.] Find the achievable rate for compute-and-forward. What is the requirement on the backhaul rates?
SLIDE 36 Outline
- I. Discrete Alphabets
- II. AWGN Channels
- III. Network Applications
SLIDES 37–40 Dirty Paper Coding
s is interference known noncausally to the encoder. Assume s is i.i.d. Gaussian with very large variance PS.
Erez-Shamai-Zamir ’05:
The encoder subtracts αs, dithers, and takes mod Λ: x = [t − αs + d] mod Λ.
[Block diagram: w → E → x; y = x + s + z; D outputs ŵ.]
The decoder scales by α, removes the dither, takes mod Λ, and recovers t. The interference is cancelled:
[αy − d] mod Λ = [x + αs + αz − d − (1 − α)x] mod Λ
= [[t − αs + d] mod Λ + αs − d + αz − (1 − α)x] mod Λ
= [t + αz − (1 − α)x] mod Λ
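A toy sketch of the scheme, reusing mod_lattice, to_lattice_point, B, rng, n, and w1 from the multiple-access sketch; the interference and noise variances are arbitrary:

```python
N, P_S = 0.01, 100.0                   # toy noise and interference variances
P = 1.0 / 3.0                          # second moment of the cubic Voronoi region
alpha = P / (P + N)                    # MMSE coefficient

s = np.sqrt(P_S) * rng.standard_normal(n)  # interference, known at the encoder
d = B @ (rng.random(n) - 0.5)              # dither
t = to_lattice_point(w1)

x = mod_lattice(t - alpha * s + d, B)      # x = [t − αs + d] mod Λ
z = np.sqrt(N) * rng.standard_normal(n)
y = x + s + z

r = mod_lattice(alpha * y - d, B)          # = [t + αz − (1 − α)x] mod Λ: no s term
```

Note that r no longer depends on s at all, even though the variance PS is huge; that is the whole point of pre-subtracting αs inside the mod-Λ operation.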
SLIDE 41 Dirty Gaussian Multiple-Access Channel
[Block diagram: w1 → E1 → x1 with known interference s1; w2 → E2 → x2 with known interference s2; y = x1 + x2 + s1 + s2 + z; D outputs ŵ1, ŵ2.]
Philosof-Zamir-Erez-Khisti ’11:
- Encoder 1 knows interference s1.
- Encoder 2 knows interference s2.
- Need to cancel out interference in a distributed fashion.
- Assume i.i.d. Gaussian interference with very large variance PS.
Random i.i.d. coding methods yield rates that go to 0 as PS goes to infinity.
SLIDES 42–44 Exercise: Dirty Gaussian Multiple-Access Channel
Subtract (part of) the interference signals ahead of time: x1 = [t1 − αs1 + d1] mod Λ, x2 = [t2 − αs2 + d2] mod Λ.
Decoder scales and removes the dithers:
[αy − d1 − d2] mod Λ = [α(x1 + x2 + s1 + s2 + z) − d1 − d2] mod Λ
= [x1 + x2 + α(s1 + s2) − (1 − α)(x1 + x2) + αz − d1 − d2] mod Λ
= [t1 + t2 − (1 − α)(x1 + x2) + αz] mod Λ
Select α = 2P/(2P + N) to obtain R1 + R2 ≤ 1/2 log(1/2 + P/N).
SLIDE 45 Computation over Fading Channels
Transmitters do not know the channel realization. Encoders use the same nested lattice codebook. Transmit dithered codewords: xℓ = [tℓ + dℓ] mod Λ.
[Block diagram: t1, …, tK → E1, …, EK → x1, …, xK with channel gains h1, …, hK; y = Σℓ hℓxℓ + z; D outputs v̂.]
- The decoder removes the dithers and recovers the integer combination v = [Σℓ aℓtℓ] mod Λ, where the sum runs over ℓ = 1, …, K.
- The receiver can use its knowledge of the channel gains to match the equation coefficients aℓ to the channel coefficients hℓ.
SLIDE 46 Distributive Law
- The Distributive Law also holds for integer combinations. Let a, b ∈ Z.
[a[x1] mod Λ + b[x2] mod Λ] mod Λ
= [a(x1 − QΛ(x1)) + b(x2 − QΛ(x2))] mod Λ
= [ax1 + bx2 − aQΛ(x1) − bQΛ(x2)] mod Λ
= [ax1 + bx2] mod Λ
- The last step follows since aQΛ(x1) and bQΛ(x2) are elements of the lattice Λ.
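The identity is easy to verify numerically with the mod_lattice helper from the earlier sketch (the integers and test points are arbitrary):

```python
a, b = 2, -3
u = 3.0 * rng.standard_normal(n)
v = 3.0 * rng.standard_normal(n)

lhs = mod_lattice(a * mod_lattice(u, B) + b * mod_lattice(v, B), B)
rhs = mod_lattice(a * u + b * v, B)
# Equal because a·QΛ(u) and b·QΛ(v) are lattice points, hence vanish mod Λ.
assert np.allclose(lhs, rhs)
```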
SLIDE 47 Computation over Fading Channels
- Transmit dithered codewords xℓ = [tℓ + dℓ] mod Λ.
- The decoder removes the dithers and recovers an integer combination:
[y − Σℓ aℓdℓ] mod Λ = [Σℓ hℓxℓ + z − Σℓ aℓdℓ] mod Λ
= [Σℓ aℓ(xℓ − dℓ) + Σℓ (hℓ − aℓ)xℓ + z] mod Λ
= [Σℓ aℓtℓ + Σℓ (hℓ − aℓ)xℓ + z] mod Λ (Distributive Law)
with effective noise Σℓ (hℓ − aℓ)xℓ + z.
SLIDES 48–49 Computation over Fading Channels – Effective Noise
- Effective noise due to mismatch between the channel coefficients h = [h1 · · · hK]ᵀ and the equation coefficients a = [a1 · · · aK]ᵀ:
NEFFEC = N + P‖h − a‖²
R = 1/2 log(P/(N + P‖h − a‖²))
- Can do better with MMSE scaling:
NEFFEC = α²N + P‖αh − a‖²
R = max over α of 1/2 log(P/(α²N + P‖αh − a‖²)) = 1/2 log((N + P‖h‖²)/(N‖a‖² + P(‖h‖²‖a‖² − (hᵀa)²)))
- See Nazer-Gastpar ’11 for more details.
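The closed-form rate is straightforward to evaluate. A sketch, with an illustrative channel vector chosen so that h is exactly proportional to an integer vector:

```python
import numpy as np

def comp_rate(h, a, P, N):
    """Computation rate with MMSE scaling:
    R(h, a) = 1/2 log+((N + P·‖h‖²) / (N·‖a‖² + P·(‖h‖²‖a‖² − (hᵀa)²)))."""
    num = N + P * (h @ h)
    den = N * (a @ a) + P * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.0, 0.5 * np.log2(num / den))

h = np.array([1.4, 2.1])               # h = 0.7·[2, 3], so a = [2, 3] aligns perfectly
print(comp_rate(h, np.array([2, 3]), P=100.0, N=1.0))   # ≈ 2.81 bits
print(comp_rate(h, np.array([1, 0]), P=100.0, N=1.0))   # decode w1 alone: ≈ 0.26 bits
```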
SLIDE 50 Computation over Fading Channels – Special Cases
- The rate expression simplifies in some special cases.
R = 1/2 log((N + P‖h‖²)/(N‖a‖² + P(‖h‖²‖a‖² − (hᵀa)²)))
- Integer-valued channel: Set a = h ∈ Z^K.
R = 1/2 log(1/‖a‖² + P/N)
- Recovering a single message: Set a = δm, the mth unit vector.
R = 1/2 log(1 + hm²P/(N + P Σℓ≠m hℓ²))
SLIDE 51 Finite Field Computation over Fading Channels
Transmitters do not know the channel realization. Encoders use the same nested lattice codebook. Transmit dithered codewords: xℓ = [tℓ + dℓ] mod Λ.
[Block diagram: w1, …, wK → E1, …, EK → x1, …, xK with channel gains h1, …, hK; y = Σℓ hℓxℓ + z; D outputs û, where u = ⊕ℓ aℓwℓ.]
- Recall that the mapping tℓ = φ(wℓ) between messages and lattice points preserves linearity:
φ⁻¹([Σℓ aℓtℓ] mod Λ) = ⊕ℓ aℓwℓ (with the integer coefficients aℓ reduced modulo q)
- Digital interface that fits well with network coding.
SLIDES 52–65 Computation Coding
An illustrated walkthrough with running example h = [1.4 2.1], a = [2 3]:
- All users pick the same nested lattice code.
- Choose messages over the field: wℓ ∈ Fq^k.
- Map wℓ to the lattice point tℓ = φ(wℓ).
- Transmit the lattice points over the channel.
- The lattice codewords are scaled by the channel coefficients, then added together plus noise.
- There is an extra noise penalty for non-integer channel coefficients. Effective noise: N + P‖h − a‖².
- Scale the output by α to reduce the non-integer noise penalty: αh = [1.4α 2.1α]. Effective noise: α²N + P‖αh − a‖².
- Decode to the closest lattice point.
- Compute the sum of the lattice points modulo the coarse lattice.
- Map back to an equation of message symbols over the field: u = ⊕ℓ aℓwℓ.
SLIDE 66 Computation over Fading Channels – Multiple Receivers
[Block diagram: w1, …, wK → E1, …, EK → x1, …, xK; channel matrix H; receiver k observes yk = Σℓ hkℓxℓ + zk and decoder Dk outputs ûk.]
- Equal rates R. No channel state information (CSI) at the transmitters.
- Receivers use their CSI to select coefficients and decode a linear equation uk = ⊕ℓ akℓwℓ.
- Reliable decoding is possible if
R < min over k with akℓ ≠ 0 of 1/2 log((N + P‖hk‖²)/(N‖ak‖² + P(‖hk‖²‖ak‖² − (hkᵀak)²)))
SLIDES 67–68 Case Study – Hadamard Relay Network
[Block diagram: w1, …, wK → E1, …, EK → x1, …, xK; channel matrix H; relay k observes yk (noise zk) and re-encodes xkR over an AWGN link (noise zkR) to the final decoder D, which outputs ŵ1, …, ŵK.]
- Equal rates R. H is a Hadamard matrix with ±1 entries, HHᵀ = KI.
Upper Bound: 1/2 log(1 + P/N)
Compute-and-Forward: 1/2 log(1/K + P/N)
Decode-and-Forward: max{ 1/2 log(1 + P/(N + KP)), 1/(2K) log(1 + KP/N) }
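A sketch tabulating rate per user from the table above (for decode-and-forward only the joint-decoding term is evaluated; all parameter values are illustrative):

```python
import numpy as np

def hadamard_rates(P, N, K):
    """Rate per user for the Hadamard relay network (table above)."""
    upper = 0.5 * np.log2(1 + P / N)
    compute = 0.5 * np.log2(1 / K + P / N)        # a_k = k-th Hadamard row, ‖a_k‖² = K
    decode = 0.5 / K * np.log2(1 + K * P / N)     # relays jointly decode all K messages
    return upper, compute, decode

for snr_db in (10, 20, 30):
    P = 10 ** (snr_db / 10)
    print(snr_db, hadamard_rates(P, N=1.0, K=4))
```

The formulas make the scaling visible: compute-and-forward stays within a constant gap of the upper bound for every K, while the joint-decoding term of decode-and-forward carries a 1/K prefactor.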
SLIDE 69 Computation over Fading Channels – No CSIT
[Block diagram: w1, w2, w3 → E1, E2, E3 → x1, x2, x3 with channel gains h1, h2, h3; y = Σℓ hℓxℓ + z; D outputs û = ⊕ℓ aℓwℓ.]
- Transmitters do not know the fading coefficients; i.i.d. Gaussian fading.
- The relay either decodes a linear equation of the messages, decodes a single message, or treats interference as noise.
[Plot: average rate (bits per channel use) vs. transmitter power (dB) for Decode an Equation, Decode a Message, and Interference as Noise.]
SLIDES 70–74 Computation over Fading Channels – No CSIT
- Receiver observes y = x1 + hx2 + z.
- Recovers aw1 ⊕ bw2 for a, b ≠ 0.
[Plots at 10, 20, 30, 40, and 50 dB: message rate R vs. channel coefficient h ∈ [0.1, 1], comparing Upper Bound, Compute, and Decode Both.]
SLIDE 75 Diophantine Approximation
- Choose the equation coefficients to maximize the rate:
RCOMP = max over a ∈ Z^K of max over α of 1/2 log(P/(α²N + P‖αh − a‖²))
- Equivalently, minimize the effective noise over a ∈ Z^K and α: min α²N + P‖αh − a‖².
- Closely connected to Diophantine approximation, i.e., approximating irrationals with rationals.
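For small K the maximization over a can simply be brute-forced over a box of integers, reusing comp_rate from the sketch above (the search radius is arbitrary):

```python
from itertools import product

def best_equation(h, P, N, amax=5):
    """Exhaustively search integer vectors a ∈ {−amax,…,amax}^K, a ≠ 0."""
    cands = (np.array(a) for a in product(range(-amax, amax + 1), repeat=len(h))
             if any(a))
    return max(cands, key=lambda a: comp_rate(h, a, P, N))

h = np.array([1.0, np.sqrt(2)])        # irrational ratio: the hard case
a_star = best_equation(h, P=100.0, N=1.0)
print(a_star, comp_rate(h, a_star, P=100.0, N=1.0))
```

In higher dimensions this exhaustive search is usually replaced by lattice-reduction methods.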
SLIDE 77 Diophantine Approximation
- For each receiver, choose the equation coefficients to maximize the rate:
RCOMP = max over a ∈ Z^K of max over α of 1/2 log(P/(α²N + P‖αh − a‖²))
- Equivalently, minimize the effective noise over a ∈ Z^K and α: min α²N + P‖αh − a‖².
- Closely connected to Diophantine approximation, i.e., approximating irrationals with rationals.
- When the K linear combinations decoded by the K receivers are required to be linearly independent, Niesen-Whiting ’11 shows that
DoF = lim as P → ∞ of RCOMP/((1/2) log(1 + P)) ≤ 2
- It also shows that combining compute-and-forward with interference alignment can bring the DoF up to K (but this requires channel state information at the transmitters).
SLIDES 78–79 Static Linear Pre-processing: Integer-forcing Receiver
[Block diagram: w1, …, wK → E1, …, EK → x1, …, xK; channel matrix H; the receiver first applies a linear matrix B to the observations y1, …, yK, and decoders D1, …, DK then output the equation estimates û1, …, ûK.]
SLIDE 80 Integer-forcing: Achievable rate
Theorem
Consider the MIMO channel with channel matrix H ∈ R^(2N×2M). Under the integer-forcing architecture, the rate R < min over m of 2M·R(H, am, bm) is achievable, where
R(H, am, bm) = 1/2 log⁺(SNR/(‖bm‖² + SNR·‖Hᵀbm − am‖²))
for any full-rank integer matrix A ∈ Z^(2M×2M) with rows am and any matrix B ∈ R^(2M×2N) with rows bm.
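A sketch of evaluating this rate, with bm chosen as the MMSE projection for each integer row am; the closed-form choice of bm below is a standard least-squares step and an assumption of this sketch, not a formula from the slide:

```python
import numpy as np

def integer_forcing_rate(H, A, snr):
    """Per-stream integer-forcing rate min_m R(H, a_m, b_m) with MMSE b_m."""
    # b_m = (I/snr + H·Hᵀ)⁻¹·H·a_m minimizes ‖b‖² + snr·‖Hᵀb − a‖².
    K = np.linalg.inv(np.eye(H.shape[0]) / snr + H @ H.T) @ H
    rates = []
    for a in A:                                    # rows of the integer matrix A
        b = K @ a
        neff = b @ b + snr * np.sum((H.T @ b - a) ** 2)
        rates.append(max(0.0, 0.5 * np.log2(snr / neff)))
    return min(rates)

H = np.array([[0.7, 1.3], [0.8, 1.5]])             # the near-singular example below
A = np.array([[1, 2], [1, 1]])                     # arbitrary full-rank integer matrix
print(integer_forcing_rate(H, A, snr=100.0))
```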
SLIDE 81 Integer-forcing: Diophantine Approximation Problem
Example: H = [0.7 1.3; 0.8 1.5]
The eigenvalues and eigenvectors are λ1 ≈ 2.1954, v1 ≈ (−0.6561, −0.7547)ᵀ and λ2 ≈ 0.0046, v2 ≈ (−0.8818, 0.4717)ᵀ.
SLIDE 82 Integer-forcing: Diophantine Approximation Problem
Now there is more wiggle room in the Diophantine approximation...
[Two lattice diagrams: candidate integer vectors a1 and a2 plotted against the directions vMAX (scaled by 1/λMAX) and vMIN (scaled by 1/λMIN).]
SLIDE 83
Integer-forcing: Diophantine Approximation Problem
[Plot: achievable rate (bits per real symbol) vs. SNR (dB) for Joint ML, Integer, MMSE-SIC, MMSE, and Decorrelator receivers.]
SLIDE 84
High-SNR (Ex: DMT for 4 × 4 with Rayleigh fading)
[Plot: diversity gain d vs. multiplexing gain r for Joint ML, Integer, V-BLAST III, V-BLAST II, V-BLAST I, and Decorrelator.]
SLIDE 85 Concluding Remarks
- Algebraic structure appears to play a role in Network Information
Theory (good news or bad news?)
- In particular, codes with algebraic structure lead to the highest
known achievable rates for some communication scenarios of great interest.
- We have only considered linear/lattice codes in the
“compute-and-forward” channel coding perspective.
- Similar insights apply to source coding and to joint source-channel
coding.
- Another (though related) form of algebraic structure appears in
interference alignment.
SLIDE 86 Resources
For an extended list of references (as well as proofs), see
- B. Nazer and M. Gastpar, “Computation over multiple-access channels,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3498–3516, Oct. 2007.
- B. Nazer and M. Gastpar, “Reliable physical layer network coding,” Proceedings of the IEEE, vol. 99, no. 3, pp. 438–460, Mar. 2011.
- B. Nazer and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.