Towards an Algebraic Network Information Theory
Bobak Nazer (BU)
Joint work with Sung Hoon Lim (EPFL), Chen Feng (UBC), and Michael Gastpar (EPFL).
DIMACS Workshop on Network Coding: The Next 15 Years, December 17th, 2015
Network Information Theory
Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Classical Approach:
- Generate codewords elementwise i.i.d.
- Powerful generalizations including superposition coding, dirty paper coding, block Markov coding, and many more...
- Rate regions described in terms of (single-letter) information measures optimized over pmfs.
- Many important successes: multiple-access channels, (degraded) broadcast channels, Slepian-Wolf compression, network coding, and many more...
- State-of-the-art elegantly captured in the recent textbook of El Gamal and Kim.
- Codes with algebraic structure are sought after to mimic the performance of random i.i.d. codes.
Network Information Theory
Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Algebraic Approach:
- Utilize linear or lattice codebooks.
- Compelling examples starting from the work of Körner and Marton on distributed compression and, more recently, many papers on physical-layer network coding, distributed dirty paper coding, and interference alignment.
- Coding schemes exhibit behavior not found via i.i.d. ensembles.
- However, some classical coding techniques are still unavailable.
- Most of the initial efforts have focused on Gaussian networks and have employed nested lattice codebooks.
- Are these just a collection of intriguing examples or elements of a more general theory?
This Talk: We build on previous work and propose a joint typicality approach to algebraic network information theory.
Compute-and-Forward
Goal: Send linear combinations of the messages to the receivers.
- Compute-and-forward can serve as a framework for communicating messages across a network (e.g., relaying, MIMO uplink/downlink, interference alignment).
- Much of the recent work has focused on Gaussian networks.

[Block diagram: encoders E_1, ..., E_K map messages m_1, ..., m_K to X_1^n, ..., X_K^n ∈ R^n; the channel applies the matrix H and adds noise Z_1^n, ..., Z_K^n; decoder D_ℓ estimates t̂_ℓ from Y_ℓ^n, where ν(·) denotes the q-ary expansion into F_q^κ and

ν(t_ℓ) = ⊕_{k=1}^K a_{ℓ,k} ν(m_k).]
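The map ν(·) used throughout is just the base-q expansion of an integer message into a vector over F_q. A minimal sketch (the function name and least-significant-digit-first ordering are our illustrative choices, not from the slides):

```python
def q_ary_expansion(m, q, kappa):
    """Expand integer message m into kappa base-q digits
    (least-significant digit first), i.e., a vector in F_q^kappa."""
    digits = []
    for _ in range(kappa):
        digits.append(m % q)  # next base-q digit
        m //= q
    assert m == 0, "message does not fit in kappa q-ary digits"
    return digits
```

For example, `q_ary_expansion(11, 3, 3)` returns `[2, 0, 1]` since 11 = 2 + 0·3 + 1·9.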
The Usual Approach
Computation over Gaussian MACs
- Symmetric Gaussian MAC.
- Equal power constraints: E‖x_ℓ‖² ≤ nP.
- Use nested lattice codes.

[Block diagram: encoders E_1, ..., E_K, additive noise Z^n, decoder D recovers t̂ with ν(t) = ⊕_{k=1}^K ν(m_k).]

- Wilson-Narayanan-Pfister-Sprintson '10, Nazer-Gastpar '11: Decoding is successful if the rates satisfy R_k < (1/2) log⁺(1/2 + P).
- Cut-set upper bound is (1/2) log(1 + P).
- What about the "1+"? Still open! (Ice wine problem.)
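To see how close the achievable computation rate comes to the cut-set bound, both expressions can be evaluated numerically. A small sketch (the function names are ours):

```python
import math

def computation_rate(P):
    """Achievable symmetric computation rate over the symmetric Gaussian
    MAC: (1/2) log+( 1/2 + P ), in bits per channel use."""
    return max(0.0, 0.5 * math.log2(0.5 + P))

def cutset_bound(P):
    """Cut-set upper bound (1/2) log(1 + P)."""
    return 0.5 * math.log2(1.0 + P)

# The gap (1/2) log((1 + P) / (1/2 + P)) is below half a bit and
# vanishes as P grows -- whether the "1+" itself is achievable is
# the open "ice wine" problem.
for P in (1.0, 10.0, 100.0):
    print(P, cutset_bound(P) - computation_rate(P))
```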
Computation over Gaussian MACs
- How about general Gaussian MACs?
- Model using unequal power constraints: E‖x_ℓ‖² ≤ nP_ℓ.

[Block diagram as before, with ν(t) = ⊕_{k=1}^K ν(m_k).]

- Nam-Chung-Lee '11: At each transmitter, use the same fine lattice and a different coarse lattice, chosen to meet the power constraint.
- Decoding is successful if the rates satisfy R_ℓ < (1/2) log⁺( P_ℓ / (∑_{i=1}^L P_i) + P_ℓ ).
- Nazer-Cadambe-Ntranos-Caire '15: Expanded compute-and-forward framework to link unequal power setting to finite fields.
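The unequal-power rate expression is easy to evaluate directly; a sketch (the function name and 0-based indexing are ours):

```python
import math

def unequal_power_rate(powers, ell):
    """Nam-Chung-Lee '11 achievable rate for transmitter ell when the
    receiver decodes the sum of codewords:
        R_ell < (1/2) log+( P_ell / sum_i P_i + P_ell ).
    """
    p_ell = powers[ell]
    return max(0.0, 0.5 * math.log2(p_ell / sum(powers) + p_ell))
```

With equal powers P_ℓ = P and two users, the argument reduces to 1/2 + P, recovering the symmetric-MAC expression.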
Point-to-Point Channels

[Block diagram: M → Encoder → X^n → p_{Y|X} → Y^n → Decoder → M̂.]

- Messages: m ∈ [2^{nR}] ≜ {0, ..., 2^{nR} − 1}
- Encoder: a mapping x^n(m) ∈ X^n for each m ∈ [2^{nR}]
- Decoder: a mapping m̂(y^n) ∈ [2^{nR}] for each y^n ∈ Y^n

Theorem (Shannon '48)
C = max_{p_X(x)} I(X; Y)

- Proof relies on random i.i.d. codebooks combined with joint typicality decoding.
Random i.i.d. Codebooks

[Figure: codewords scattered over the typical set T_ε^(n)(X) inside X^n.]

Random i.i.d. Codes
- Codewords are independent of one another.
- Can directly target an input distribution p_X(x).
Point-to-Point Channels: Linear Codes

[Block diagram: M → Linear Code → U^n → x(u) → X^n → p_{Y|X} → Y^n → Decoder → M̂.]

Code Construction:
- Pick a finite field F_q and a symbol mapping x : F_q → X.
- Set κ = nR / log(q).
- Draw a random generator matrix G ∈ F_q^{κ×n} elementwise i.i.d. Unif(F_q). Let G be a realization.
- Draw a random shift (or "dither") D^n elementwise i.i.d. Unif(F_q). Let d^n be a realization.
- Take the q-ary expansion of message m into the vector ν(m) ∈ F_q^κ.
- Linear codeword for message m is u^n(m) = ν(m)G ⊕ d^n.
- Channel input at time i is x_i(m) = x(u_i(m)).
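The construction above can be sketched directly for prime q, so that F_q arithmetic is just integer arithmetic mod q (the helper names are ours):

```python
import random

def make_linear_codebook(q, kappa, n, seed=0):
    """Random linear code with dither: draw G in F_q^{kappa x n} and
    d^n elementwise i.i.d. Unif(F_q), and return the encoder
    m -> u^n(m) = nu(m) G + d^n (arithmetic mod q, q prime)."""
    rng = random.Random(seed)
    G = [[rng.randrange(q) for _ in range(n)] for _ in range(kappa)]
    d = [rng.randrange(q) for _ in range(n)]

    def nu(m):  # q-ary expansion of m into kappa digits
        return [(m // q ** j) % q for j in range(kappa)]

    def encode(m):
        v = nu(m)
        return [(sum(v[j] * G[j][i] for j in range(kappa)) + d[i]) % q
                for i in range(n)]

    return encode
```

Up to the common dither the map is linear: u^n(m) ⊖ d^n lies in the row space of G, which is what later makes linear combinations of codewords decodable.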
Random i.i.d. Codebooks vs. Random Linear Codes

[Figure: codewords of the linear ensemble spread over the typical set T_ε^(n)(X) inside X^n.]

- Codewords are pairwise independent of one another.
- Codewords are uniformly distributed over F_q^n.
Point-to-Point Channels: Linear Codes
- Well known that a direct application of linear coding is not sufficient to reach the point-to-point capacity, Ahlswede '71.
- Gallager '68: Pick F_q with q ≫ |X| and choose the symbol mapping x(u) to reach the capacity-achieving input distribution from Unif(F_q). This can attain the capacity.
- This will not work for us. Roughly speaking, if each encoder has a different input distribution, the symbol mappings may be quite different, which will disrupt the linear structure of the codebook.
- Padakandla-Pradhan '13: It is possible to shape the input distribution using nested linear codes.
- Basic idea: Generate many codewords to represent one message. Search in this "bin" to find a codeword with the desired type, i.e., multicoding.
Point-to-Point Channels: Linear Codes + Multicoding

[Block diagram: M → Linear Code → Multicoding → U^n → x(u) → X^n → p_{Y|X} → Y^n → Decoder → M̂.]

Code Construction:
- Messages m ∈ [2^{nR}] and auxiliary indices l ∈ [2^{nR̂}].
- Set κ = n(R + R̂) / log(q).
- Pick generator matrix G and dither d^n as before.
- Take q-ary expansions [ν(m) ν(l)] ∈ F_q^κ.
- Linear codewords: u^n(m, l) = [ν(m) ν(l)] G ⊕ d^n.
Point-to-Point Channels: Linear Codes + Multicoding

Encoding:
- Fix p(u) and x(u).
- Multicoding: For each m, find an index l such that u^n(m, l) ∈ T_{ε'}^(n)(U).
- Succeeds w.h.p. if R̂ > D(p_U ‖ p_q) (where p_q is uniform over F_q).
- Transmit x_i = x(u_i(m, l)).

Decoding:
- Joint Typicality Decoding: Find the unique index m̂ such that (u^n(m̂, l̂), y^n) ∈ T_ε^(n)(U, Y) for some index l̂.
- Succeeds w.h.p. if R + R̂ < I(U; Y) + D(p_U ‖ p_q).
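The multicoding step is simply a search over the bin {u^n(m, l) : l ∈ [2^{nR̂}]} for a codeword with the desired type. A toy sketch using a robust-typicality test (the `codeword` generator and tolerance parameter are hypothetical stand-ins):

```python
from collections import Counter

def is_typical(seq, p, eps):
    """Robust typicality: every symbol's empirical frequency lies within
    a (1 +/- eps) factor of its target probability p(u)."""
    n = len(seq)
    counts = Counter(seq)
    if any(u not in p for u in counts):  # symbol outside the support
        return False
    return all(abs(counts.get(u, 0) / n - pu) <= eps * pu
               for u, pu in p.items())

def multicode(m, num_aux, codeword, p, eps):
    """Scan auxiliary indices l and return the first (l, u^n(m, l))
    whose codeword has the desired type; returns None if the bin holds
    no typical sequence (the encoding-error event)."""
    for l in range(num_aux):
        u = codeword(m, l)
        if is_typical(u, p, eps):
            return l, u
    return None
```

The slide's condition R̂ > D(p_U ‖ p_q) is precisely what guarantees that, with high probability, at least one index l in the bin passes this test.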
Point-to-Point Channels: Linear Codes + Multicoding

Theorem (Padakandla-Pradhan '13)
Any rate R satisfying R < max_{p(u), x(u)} I(U; Y) is achievable. This is equal to the capacity if q ≥ |X|.

- This is the basic coding framework that we will use for each transmitter.
- Next, let's examine a two-transmitter, one-receiver "compute-and-forward" network.
Nested Linear Coding Architecture

[Block diagram: two encoders, each a linear code plus multicoding with symbol mappings x_1(u_1) and x_2(u_2), feed the channel p_{Y|X1X2}; the decoder outputs T̂.]

Code Construction:
- Messages m_k ∈ [2^{nR_k}] and auxiliary indices l_k ∈ [2^{nR̂_k}], k = 1, 2.
- Set κ = n(max{R_1 + R̂_1, R_2 + R̂_2}) / log(q).
- Pick generator matrix G and dithers d_1^n, d_2^n as before.
- Take q-ary expansions η(m_1, l_1) ∈ F_q^κ and η(m_2, l_2) ∈ F_q^κ (the shorter expansion is zero-padded to length κ).
- Linear codewords: u_1^n(m_1, l_1) = η(m_1, l_1) G ⊕ d_1^n and u_2^n(m_2, l_2) = η(m_2, l_2) G ⊕ d_2^n.
Nested Linear Coding Architecture

Encoding:
- Fix p(u_1), p(u_2), x_1(u_1), and x_2(u_2).
- Multicoding: For each m_k, find an index l_k such that u_k^n(m_k, l_k) ∈ T_{ε'}^(n)(U_k).
- Succeeds w.h.p. if R̂_k > D(p_{U_k} ‖ p_q).
- Transmit x_{ki} = x_k(u_{ki}(m_k, l_k)).
Nested Linear Coding Architecture

Computation Problem:
- Consider the coefficients a ∈ F_q², a = [a_1, a_2].
- For m_k ∈ [2^{nR_k}], l_k ∈ [2^{nR̂_k}], the linear combination of codewords with coefficient vector a is

a_1 u_1^n(m_1, l_1) ⊕ a_2 u_2^n(m_2, l_2)
  = [a_1 η(m_1, l_1) ⊕ a_2 η(m_2, l_2)] G ⊕ a_1 d_1^n ⊕ a_2 d_2^n
  = ν(t) G ⊕ d_w^n
  = w^n(t),   t ∈ [2^{n max{R_1 + R̂_1, R_2 + R̂_2}}].
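Because both users build their codewords from the same generator matrix G, the identity above holds symbolwise. It can be checked numerically for prime q (everything in this sketch, including the coefficients and dimensions, is illustrative):

```python
import random

def demo_closure(q=5, kappa=3, n=8, seed=1):
    """Verify the key identity for two users sharing one generator
    matrix G with dithers d1, d2:
      a1*u1(eta1) + a2*u2(eta2)
        = (a1*eta1 + a2*eta2) G + (a1*d1 + a2*d2)   (all mod q)."""
    rng = random.Random(seed)
    G = [[rng.randrange(q) for _ in range(n)] for _ in range(kappa)]
    d1 = [rng.randrange(q) for _ in range(n)]
    d2 = [rng.randrange(q) for _ in range(n)]

    def encode(eta, d):
        return [(sum(eta[j] * G[j][i] for j in range(kappa)) + d[i]) % q
                for i in range(n)]

    eta1 = [rng.randrange(q) for _ in range(kappa)]
    eta2 = [rng.randrange(q) for _ in range(kappa)]
    a1, a2 = 2, 3
    lhs = [(a1 * x + a2 * y) % q
           for x, y in zip(encode(eta1, d1), encode(eta2, d2))]
    eta_w = [(a1 * x + a2 * y) % q for x, y in zip(eta1, eta2)]
    d_w = [(a1 * x + a2 * y) % q for x, y in zip(d1, d2)]
    return lhs == encode(eta_w, d_w)
```

The combination is thus itself a dithered codeword w^n(t) of the same linear code, which is exactly why the decoder can target it directly.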
Nested Linear Coding Architecture

Computation Problem:
- Let M_k be the chosen message and L_k the chosen index from the multicoding step.
- Decoder wants a linear combination of the codewords: W^n(T) = a_1 U_1^n(M_1, L_1) ⊕ a_2 U_2^n(M_2, L_2).
- Decoder: t̂(y^n) ∈ [2^{n max{R_1 + R̂_1, R_2 + R̂_2}}], y^n ∈ Y^n.
- Probability of Error: P_e^(n) = P{T ≠ T̂}.
- A rate pair is achievable if there exists a sequence of codes such that P_e^(n) → 0 as n → ∞.
Nested Linear Coding Architecture

Decoding:
- Joint Typicality Decoding: Find an index t ∈ [2^{n max{R_1 + R̂_1, R_2 + R̂_2}}] such that (w^n(t), y^n) ∈ T_ε^(n).
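The decoder is a scan over all candidate indices t, declaring success only when exactly one candidate survives. A toy sketch (the `w` codeword generator and `typical` predicate are stand-ins for the real joint-typicality test):

```python
def decode(y, num_t, w, typical):
    """Joint typicality decoder for the computation problem: keep every
    index t whose codeword w^n(t) is jointly typical with y^n, and
    declare an estimate only when the survivor is unique."""
    hits = [t for t in range(num_t) if typical(w(t), y)]
    return hits[0] if len(hits) == 1 else None
```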
Nested Linear Coding Architecture

[Block diagram: Mk → Linear Code + Multicoding → Uk^n → xk(uk) → Xk^n for k = 1, 2; channel pY|X1X2 → Y^n → Decoder → T̂]

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)
A rate pair (R1, R2) is achievable if
  R1 < I(W; Y) − I(W; U2),
  R2 < I(W; Y) − I(W; U1),
for some p(u1)p(u2) and functions x1(u1), x2(u2), where Uk takes values in Fq, k = 1, 2, and W = a1U1 ⊕ a2U2.
- Padakandla-Pradhan ’13: Special case where R1 = R2.
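To make the theorem's rate expression concrete, here is a small numerical sketch (not from the talk): it evaluates I(W; Y) − I(W; U2) for a hypothetical binary setting, taking U1, U2 uniform on F2, xk(uk) = uk, W = U1 ⊕ U2, and Y equal to W flipped with an assumed crossover probability eps.

```python
import itertools
import math

def H(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

eps = 0.1  # hypothetical crossover probability (assumption, not from the talk)

# Joint pmf over (u1, u2, y): U1, U2 uniform on F2, W = u1 XOR u2,
# and Y = W flipped with probability eps.
joint = {}
for u1, u2, y in itertools.product((0, 1), repeat=3):
    w = u1 ^ u2
    joint[(u1, u2, y)] = 0.25 * ((1 - eps) if y == w else eps)

def marginal(f):
    """Marginalize the joint pmf onto the statistic f(u1, u2, y)."""
    out = {}
    for k, p in joint.items():
        out[f(k)] = out.get(f(k), 0.0) + p
    return out

pW   = marginal(lambda k: k[0] ^ k[1])
pY   = marginal(lambda k: k[2])
pWY  = marginal(lambda k: (k[0] ^ k[1], k[2]))
pWU2 = marginal(lambda k: (k[0] ^ k[1], k[1]))

I_WY  = H(pW.values()) + H(pY.values()) - H(pWY.values())
I_WU2 = H(pW.values()) + 1.0 - H(pWU2.values())  # H(U2) = 1 bit (uniform)

print(f"I(W;Y) - I(W;U2) = {I_WY - I_WU2:.4f} bits")
# → I(W;Y) - I(W;U2) = 0.5310 bits
```

Since W is uniform and independent of U2 here, I(W; U2) = 0 and the bound reduces to 1 − H2(eps), the capacity of a BSC(eps) from W to Y.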
Proof Sketch
- WLOG assume M = {M1 = 0, M2 = 0, L1 = 0, L2 = 0}.
- Union bound:
  Pǫ^(n) ≤ Σ_{t ≠ 0} P{(W^n(t), Y^n) ∈ Tǫ^(n) | M}.
- Notice that the Lk depend on the codebook, so Y^n and W^n(t) are not independent.
- To get around this issue, we analyze
  P(E) = Σ_{t ≠ 0} P{(W^n(t), Y^n) ∈ Tǫ^(n), U1^n(0, 0) ∈ Tǫ^(n), U2^n(0, 0) ∈ Tǫ^(n) | M}
- Conditioned on M, Y^n → (U1^n(0, 0), U2^n(0, 0)) → W^n(t) forms a Markov chain.
- P(E) tends to zero as n → ∞ if
  Rk + R̂k + R̂1 + R̂2 < I(W; Y) + D(pW ‖ pq) + D(pU1 ‖ pq) + D(pU2 ‖ pq)
Compute-and-Forward over a Gaussian MAC
- Consider a Gaussian MAC with real-valued channel output
  Y = h1X1 + h2X2 + Z
- Want to recover a1X1^n + a2X2^n for some integers a1, a2.
- Gaussian noise: Z ∼ N(0, 1)
- Usual power constraint: E[Xk²] ≤ P
- Via Gaussian quantization arguments, we can recover the following theorem.

Theorem (Nazer-Gastpar ’11)
For any channel vector h and integer coefficient vector a, any rate tuple satisfying Rk < Rcomp(h, a) for k s.t. ak ≠ 0 is achievable, where
  Rcomp(h, a) = (1/2) log⁺( P / (aᵀ (P⁻¹I + hhᵀ)⁻¹ a) )
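The computation rate can be evaluated numerically. The following sketch (the power and channel values are illustrative assumptions, not from the talk) computes Rcomp(h, a) for a two-user channel using the closed-form 2×2 matrix inverse.

```python
import math

# Hypothetical parameters (illustrative, not from the talk).
P = 10.0              # per-user power constraint
h = (1.0, 1.41)       # real channel vector (h1, h2)
a = (1, 1)            # integer coefficient vector (a1, a2)

# Build the 2x2 matrix G = P^{-1} I + h h^T.
g11 = 1.0 / P + h[0] * h[0]
g12 = h[0] * h[1]
g22 = 1.0 / P + h[1] * h[1]
det = g11 * g22 - g12 * g12

# Quadratic form a^T G^{-1} a via the closed-form 2x2 inverse.
quad = (a[0] * a[0] * g22 - 2 * a[0] * a[1] * g12 + a[1] * a[1] * g11) / det

# R_comp(h, a) = (1/2) log+ ( P / (a^T (P^{-1} I + h h^T)^{-1} a) )
R_comp = 0.5 * max(0.0, math.log2(P / quad))
print(f"R_comp = {R_comp:.4f} bits per channel use")
```

Choosing a close to the direction of h keeps the quadratic form small and the rate high, which is why the integer coefficients are matched to the channel gains in compute-and-forward.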
Beyond One Linear Combination
- In some scenarios, it is of interest to decode two or more linear combinations at each receiver.
- For example, Ordentlich-Erez-Nazer ’14 approximates the sum capacity of the symmetric Gaussian interference channel via decoding two linear combinations.
- Ordentlich-Erez-Nazer ’13 improves upon compute-and-forward for two or more linear combinations via successive cancellation.
- What about jointly decoding the linear combinations?
- Ordentlich-Erez ’13 derived bounds for lattice-based codes.
- This talk: We can analyze this via joint typicality decoding to get an achievable rate region.
Jointly Decoding Two Linear Combinations of K Codewords
- At node k ∈ [1 : K], the message Mk is encoded using the nested linear coding architecture.
- Let Lk be the chosen index from the multicoding step.
- The objective of the receiver is to compute two linear combinations of the codewords,
  W1^n(T1) = ⊕_{k=1}^K a1k uk^n(Mk, Lk)
  W2^n(T2) = ⊕_{k=1}^K a2k uk^n(Mk, Lk),
  with vanishing probability of error.
- Key Technical Issue: Random linear codewords are pairwise independent, but not 4-wise independent!
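The pairwise-but-not-4-wise independence of random linear codewords can be checked numerically. In this sketch (parameters n, k and the message choice are illustrative, not from the talk), codewords u(m) = mG over F2 with a uniformly random generator matrix G are pairwise uniform, yet u(m1 ⊕ m2) = u(m1) ⊕ u(m2) always, so any such triple is linearly dependent.

```python
import random

random.seed(0)
n, k = 4, 2  # hypothetical small blocklength and message length over F2

def encode(m, G):
    """Codeword of message m (a bit-tuple) under generator matrix G: u = mG over F2."""
    return tuple(sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n))

# Draw many uniformly random generator matrices and check two facts:
# 1) For fixed distinct nonzero messages m1, m2, the pair (u(m1), u(m2))
#    ranges over all 2^n x 2^n pairs (pairwise independence).
# 2) u(m1 xor m2) = u(m1) xor u(m2) always, so the triple is dependent.
m1, m2 = (0, 1), (1, 0)
m3 = tuple(x ^ y for x, y in zip(m1, m2))
pair_counts = {}
for _ in range(50000):
    G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
    u1, u2, u3 = encode(m1, G), encode(m2, G), encode(m3, G)
    assert u3 == tuple(x ^ y for x, y in zip(u1, u2))  # always linearly dependent
    pair_counts[(u1, u2)] = pair_counts.get((u1, u2), 0) + 1

print(f"distinct (u1, u2) pairs seen: {len(pair_counts)} of {4 ** n}")
# → distinct (u1, u2) pairs seen: 256 of 256
```

This dependence among competing codeword tuples is why the error analysis below introduces the auxiliary linear combination V to classify dependent competing pairs.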
Jointly Decoding Two Linear Combinations of K Codewords

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)
A rate tuple (R1, . . . , RK) is achievable for computing two linear combinations if
  Rk < min{H(Uk) − H(V|Y), H(Uk) − H(W1, W2|Y, V)}, k ∈ K1,
  Rj < I(W2; Y, W1) − H(W2) + H(Uj), j ∈ K2,
  Rk + Rj < I(W1, W2; Y) − H(W1, W2) + H(Uk) + H(Uj), k ∈ K1, j ∈ K2,
or
  Rk < I(W1; Y, W2) − H(W1) + H(Uk), k ∈ K1,
  Rj < min{H(Uj) − H(V|Y), H(Uj) − H(W1, W2|Y, V)}, j ∈ K2,
  Rk + Rj < I(W1, W2; Y) − H(W1, W2) + H(Uk) + H(Uj), k ∈ K1, j ∈ K2,
for some ∏_{k=1}^K p(uk) and xk(uk) and non-zero vector b ∈ Fq², where Kj = {k ∈ [1 : K] : ajk ≠ 0}, j = 1, 2, and V = b1W1 ⊕ b2W2.
- The auxiliary linear combination V plays a key role in classifying dependent competing pairs in the error analysis.
Multiple-Access via Nested Linear Codes

Theorem (Lim-Chen-Nazer-Gastpar Allerton ’15)
A rate pair (R1, R2) is achievable for the discrete memoryless multiple-access channel if
  R1 < max_{a ≠ 0} min{H(U1) − H(W|Y), H(U1) − H(U1, U2|Y, W)},
  R2 < I(X2; Y|X1),
  R1 + R2 < I(X1, X2; Y),
or
  R1 < I(X1; Y|X2),
  R2 < max_{a ≠ 0} min{H(U2) − H(W|Y), H(U2) − H(U1, U2|Y, W)},
  R1 + R2 < I(X1, X2; Y),
for some p(u1)p(u2) and x1(u1), x2(u2), where W = a1U1 ⊕ a2U2.
Multiple-Access Rate Region

[Plot of region ℛ1 in the (R1, R2) plane]
  R1 < I1, R2 < I(X2; Y|X1), R1 + R2 < I(X1, X2; Y),
  where I1 = max_{a ≠ 0} min{H(U1) − H(W|Y), H(U1) − H(U1, U2|Y, W)}

Multiple-Access Rate Region

[Plot of region ℛ2 in the (R1, R2) plane]
  R1 < I(X1; Y|X2), R2 < I2, R1 + R2 < I(X1, X2; Y),
  where I2 = max_{a ≠ 0} min{H(U2) − H(W|Y), H(U2) − H(U1, U2|Y, W)}

Multiple-Access Rate Region

[Plot of the union in the (R1, R2) plane]
- Multiple-access rate region via nested linear codes: ℛ1 ∪ ℛ2
“Two Help One”

[Plots in the (R1, R2) plane: the MAC capacity region; the compute-and-forward rates CF1, CF2 from decoding one linear combination; the multiple-access region via nested linear codes; and their union]
- Even if the receiver is only interested in recovering one linear combination, it can sometimes help to decode two!
Case Study: Two-Sender, Two-Receiver Network

[Network diagram: encoders E1, E2 map M1, M2 to X1^n, X2^n; Gaussian links with gains 1, √2, 1, 1 and noises Z1^n, Z2^n produce Y1^n, Y2^n; decoder D1 recovers (M̂1, M̂2), decoder D2 recovers X1^n + X2^n]
Case Study: Two-Sender, Two-Receiver Network

[Rate-region plots in the (R1, R2) plane, axes 0 to 2.5: MAC capacity regions 1 and 2; nested linear codes regions 1 and 2; lattice-based SC-CF; union of MACs]
Concluding Remarks
- First steps towards bringing algebraic network information theory back into the realm of joint typicality.
- Joint decoding rate region for compute-and-forward that outperforms parallel and successive decoding.