
Towards an Algebraic Network Information Theory
Bobak Nazer (BU), joint work with Sung Hoon Lim (EPFL), Chen Feng (UBC), and Michael Gastpar (EPFL)
DIMACS Workshop on Network Coding: The Next 15 Years, December 17th, 2015


  1-2. Compute-and-Forward
Goal: Send linear combinations of the messages to the receivers. With ν(·) denoting q-ary expansion into F_q^κ, receiver ℓ wants
    ν(t_ℓ) = ⊕_{k=1}^{K} a_{ℓ,k} ν(m_k).
[Diagram: encoders E_1, ..., E_K map messages m_1, ..., m_K to inputs X_1^n, ..., X_K^n; the channel (in the Gaussian case, with channel matrix H and noises Z_ℓ^n) produces outputs Y_1^n, ..., Y_K^n; decoders D_1, ..., D_K output estimates t̂_1, ..., t̂_K.]
• Compute-and-forward can serve as a framework for communicating messages across a network (e.g., relaying, MIMO uplink/downlink, interference alignment). (A small sketch of the target operation follows.)
• Much of the recent work has focused on Gaussian networks.
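
To make the target concrete, here is a minimal Python sketch of the q-ary expansion ν(·) and the linear combination a receiver aims to decode. The field size, expansion length, message values, and coefficients are illustrative choices, not taken from the talk.

```python
# Minimal sketch: q-ary expansion and a linear combination of messages over F_q.
# q, kappa, the messages, and the coefficients below are illustrative choices.

q = 5        # field size (prime, so arithmetic mod q is the field F_q)
kappa = 4    # expansion length: messages live in [0, q^kappa)

def nu(m, q=q, kappa=kappa):
    """q-ary expansion of the integer message m into a length-kappa vector over F_q."""
    digits = []
    for _ in range(kappa):
        digits.append(m % q)
        m //= q
    return digits

messages = [17, 42, 99]          # m_1, m_2, m_3
coeffs = [1, 3, 2]               # a_{l,1}, a_{l,2}, a_{l,3} in F_q

# The receiver's target: nu(t_l) = sum_k a_{l,k} * nu(m_k), elementwise mod q.
target = [sum(a * d for a, d in zip(coeffs, col)) % q
          for col in zip(*(nu(m) for m in messages))]
print(target)
```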

  3-4. The Usual Approach
[Figure-only slides.]

  5-10. Computation over Gaussian MACs
• Symmetric Gaussian MAC: K encoders E_k map m_k to X_k^n, the receiver sees Y^n = Σ_k X_k^n + Z^n, and its decoder D outputs an estimate t̂ of the sum, ν(t) = ⊕_{k=1}^{K} ν(m_k).
• Equal power constraints: E‖x_ℓ‖² ≤ nP.
• Use nested lattice codes.
• Wilson-Narayanan-Pfister-Sprintson '10, Nazer-Gastpar '11: Decoding is successful if the rates satisfy
    R_k < ½ log⁺(1/K + P).
• Cut-set upper bound is ½ log(1 + P). (A numeric comparison of the two follows.)
• What about the "1+"? Still open! (Ice wine problem.)
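
As a quick numeric illustration of the gap in question, the sketch below evaluates the achievable rate against the cut-set bound, assuming the rate expression above reads ½ log⁺(1/K + P); the power and user values are arbitrary.

```python
import math

def computation_rate(P, K):
    """Achievable symmetric computation rate 0.5*log2(1/K + P), clipped at 0 (log+)."""
    return max(0.0, 0.5 * math.log2(1.0 / K + P))

def cutset_bound(P):
    """Cut-set upper bound 0.5*log2(1 + P)."""
    return 0.5 * math.log2(1.0 + P)

# The gap is largest at low SNR and vanishes as P grows.
for P in [1.0, 10.0, 100.0]:
    for K in [2, 4]:
        print(f"P={P:>5}, K={K}: {computation_rate(P, K):.3f} vs {cutset_bound(P):.3f}")
```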

  11-15. Computation over Gaussian MACs
• How about general Gaussian MACs?
• Model using unequal power constraints: E‖x_ℓ‖² ≤ nP_ℓ. (Same K-user setup; the decoder still wants ν(t) = ⊕_{k=1}^{K} ν(m_k).)
• Nam-Chung-Lee '11: At each transmitter, use the same fine lattice and a different coarse lattice, chosen to meet the power constraint.
• Decoding is successful if the rates satisfy
    R_ℓ < ½ log⁺( P_ℓ / Σ_{i=1}^{L} P_i + P_ℓ ).
• Nazer-Cadambe-Ntranos-Caire '15: Expanded the compute-and-forward framework to link the unequal power setting to finite fields.
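
A quick numeric sketch of this rate, assuming the expression reads ½ log⁺(P_ℓ / Σ_{i=1}^{L} P_i + P_ℓ); the power values below are illustrative.

```python
import math

def ncl_rate(powers, ell):
    """Unequal-power rate 0.5*log2(P_ell / sum_i P_i + P_ell), clipped at 0 (log+)."""
    return max(0.0, 0.5 * math.log2(powers[ell] / sum(powers) + powers[ell]))

powers = [1.0, 4.0, 16.0]   # illustrative unequal power constraints P_1, P_2, P_3
for ell in range(len(powers)):
    print(f"R_{ell + 1} < {ncl_rate(powers, ell):.3f}")
```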

  16-18. Point-to-Point Channels
[Diagram: M → Encoder → X^n → p_{Y|X} → Y^n → Decoder → M̂.]
• Messages: m ∈ [2^{nR}] ≜ {0, ..., 2^{nR} − 1}
• Encoder: a mapping x^n(m) ∈ X^n for each m ∈ [2^{nR}]
• Decoder: a mapping m̂(y^n) ∈ [2^{nR}] for each y^n ∈ Y^n
Theorem (Shannon '48): C = max_{p_X(x)} I(X; Y). (A numeric evaluation follows.)
• Proof relies on random i.i.d. codebooks combined with joint typicality decoding.
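
Shannon's capacity expression is a maximization over input distributions and, for a discrete memoryless channel, can be evaluated with the classical Blahut-Arimoto algorithm. A minimal sketch; the BSC transition matrix at the bottom is an illustrative choice.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Compute C = max_{p_X} I(X;Y) for a channel matrix W[x, y] = p(y|x)."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)          # start from the uniform input distribution
    for _ in range(iters):
        q = p @ W                      # induced output distribution p(y)
        # d[x] = D( W(.|x) || q ) in bits
        d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
        p = p * np.exp2(d)             # multiplicative update
        p /= p.sum()                   # re-normalize
    q = p @ W
    d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
    return float(p @ d)                # I(X;Y) at the (near-)optimal input

# Binary symmetric channel with crossover 0.1: capacity is 1 - h2(0.1) ~ 0.531 bits.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(W))
```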

  19. Random i.i.d. Codebooks
[Figure: i.i.d. codewords scattered over X^n, concentrating in the typical set T_ε^{(n)}(X).]
• Codewords are independent of one another.
• Can directly target an input distribution p_X(x).

  20-27. Point-to-Point Channels: Linear Codes
[Diagram: M → Linear Code → U^n → symbol mapping x(u) → X^n → p_{Y|X} → Y^n → Decoder → M̂.]
Code Construction:
• Pick a finite field F_q and a symbol mapping x: F_q → X.
• Set κ = nR / log(q).
• Draw a random generator matrix G ∈ F_q^{κ×n} elementwise i.i.d. Unif(F_q). Let G denote a realization.
• Draw a random shift (or "dither") D^n elementwise i.i.d. Unif(F_q). Let d^n denote a realization.
• Take the q-ary expansion of message m into the vector ν(m) ∈ F_q^κ.
• The linear codeword for message m is u^n(m) = ν(m) G ⊕ d^n.
• The channel input at time i is x_i(m) = x(u_i(m)). (A runnable sketch of this construction follows.)
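
The construction above is easy to instantiate. A minimal sketch with illustrative parameters (q, n, R are arbitrary small values); the assert checks the coset property that makes this codebook useful for computation, namely that any F_q-sum of codewords is again a dithered codeword.

```python
import numpy as np

# Illustrative parameters (not from the slides): a small prime field and block length.
q, n, R = 5, 12, 1.5
kappa = int(np.ceil(n * R / np.log2(q)))     # kappa = nR / log(q), rounded up here

rng = np.random.default_rng(0)
G = rng.integers(0, q, size=(kappa, n))      # generator matrix, i.i.d. Unif(F_q)
d = rng.integers(0, q, size=n)               # dither, i.i.d. Unif(F_q)

def nu(m, q=q, kappa=kappa):
    """q-ary expansion of message m into a length-kappa row vector over F_q."""
    return np.array([(m // q**i) % q for i in range(kappa)])

def codeword(m):
    """Linear codeword u^n(m) = nu(m) G (+) d^n, all arithmetic mod q."""
    return (nu(m) @ G + d) % q

# Coset check: the sum of two codewords, after removing one copy of the dither,
# equals the dithered codeword of the summed expansions.
m1, m2 = 17, 42
lhs = (codeword(m1) + codeword(m2) - d) % q
rhs = (((nu(m1) + nu(m2)) % q) @ G + d) % q
assert np.array_equal(lhs, rhs)
print(codeword(m1))
```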

  28. Random Linear Codes
[Figure: codewords of a random linear code spread uniformly over F_q^n, with the typical set T_ε^{(n)}(X) highlighted.]
• Codewords are pairwise independent of one another.
• Codewords are uniformly distributed over F_q^n.

  29-33. Point-to-Point Channels: Linear Codes
• It is well known that a direct application of linear coding is not sufficient to reach the point-to-point capacity (Ahlswede '71).
• Gallager '68: Pick F_q with q ≫ |X| and choose the symbol mapping x(u) so that Unif(F_q) induces the capacity-achieving input distribution (c.a.i.d.). This can attain the capacity. (A sketch of such a mapping follows this list.)
• This will not work for us. Roughly speaking, if each encoder has a different input distribution, the symbol mappings may be quite different, which will disrupt the linear structure of the codebook.
• Padakandla-Pradhan '13: It is possible to shape the input distribution using nested linear codes.
• Basic idea: Generate many codewords to represent one message, and search this "bin" for a codeword with the desired type, i.e., multicoding.
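
Gallager's fix, as described above, uses a many-to-one symbol mapping from a large field to the channel alphabet so that a uniform field symbol induces (approximately) the desired input distribution. A minimal sketch; the target distribution and field size are illustrative choices.

```python
from collections import Counter

# Sketch of a Gallager-style many-to-one symbol mapping x: F_q -> X.
# A uniform symbol U in F_q is mapped so that x(U) approximates p_X.
q = 64                                  # field size, q >> |X|
p_X = {0: 0.6, 1: 0.25, 2: 0.15}        # illustrative target distribution on X = {0,1,2}

# Assign floor(q * p_X(x)) field symbols to each channel input x; leftovers go
# to the last input. With q >> |X|, the rounding error is at most |X|/q.
mapping = []
for x, px in p_X.items():
    mapping += [x] * int(q * px)
mapping += [list(p_X)[-1]] * (q - len(mapping))

induced = {x: c / q for x, c in sorted(Counter(mapping).items())}
print(induced)   # close to p_X: roughly {0: 0.594, 1: 0.25, 2: 0.156}
```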

  34-39. Point-to-Point Channels: Linear Codes + Multicoding
[Diagram: M → Multicoding → Linear Code → U^n → x(u) → X^n → p_{Y|X} → Y^n → Decoder → M̂.]
Code Construction:
• Messages m ∈ [2^{nR}] and auxiliary indices l ∈ [2^{nR̂}].
• Set κ = n(R + R̂) / log(q).
• Pick the generator matrix G and dither d^n as before.
• Take the q-ary expansion [ν(m) ν(l)] ∈ F_q^κ.
• Linear codewords: u^n(m, l) = [ν(m) ν(l)] G ⊕ d^n.

  40-47. Point-to-Point Channels: Linear Codes + Multicoding
Encoding:
• Fix p(u) and x(u).
• Multicoding: For each m, find an index l such that u^n(m, l) ∈ T_{ε'}^{(n)}(U). (See the sketch after this list.)
• Succeeds w.h.p. if R̂ > D(p_U ‖ p_q), where p_q is uniform over F_q.
• Transmit x_i = x(u_i(m, l)).
Decoding:
• Joint typicality decoding: Find the unique index m̂ such that (u^n(m̂, l̂), y^n) ∈ T_ε^{(n)}(U, Y) for some index l̂.
• Succeeds w.h.p. if R + R̂ < I(U; Y) + D(p_U ‖ p_q).
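
The multicoding step can be made concrete: enumerate the 2^{nR̂} auxiliary indices in a message's bin and keep one whose codeword looks typical for p_U. A minimal sketch, reusing the construction above; the parameters and target p(u) are illustrative, and closeness of the empirical distribution in total variation stands in for strict typicality.

```python
import numpy as np

# Illustrative parameters (not from the slides): prime field, short block, low rates.
q, n, R, R_hat = 5, 25, 0.8, 0.48
kappa = int(np.ceil(n * (R + R_hat) / np.log2(q)))

rng = np.random.default_rng(1)
G = rng.integers(0, q, size=(kappa, n))          # generator matrix
d = rng.integers(0, q, size=n)                   # dither

p_U = np.array([0.28, 0.24, 0.20, 0.16, 0.12])   # illustrative target p(u) on F_5
# Here D(p_U || Unif(F_5)) ~ 0.06 bits < R_hat, consistent with the condition above.

def expand(m, l):
    """Concatenated q-ary expansion [nu(m) nu(l)] of (message, auxiliary index)."""
    v = m + (l << int(np.ceil(n * R)))           # pack (m, l) into one integer
    return np.array([(v // q**i) % q for i in range(kappa)])

def near_typical(u, tol=0.1):
    """Proxy for u^n in T_eps(U): empirical distribution close to p_U in TV."""
    emp = np.bincount(u, minlength=q) / n
    return 0.5 * np.abs(emp - p_U).sum() <= tol

def multicode(m):
    """Search message m's bin for an index l with a 'typical' codeword.
    Succeeds w.h.p. (for large n) when R_hat > D(p_U || Unif(F_q))."""
    for l in range(2 ** int(np.ceil(n * R_hat))):
        u = (expand(m, l) @ G + d) % q
        if near_typical(u):
            return l, u
    return None, None

l, u = multicode(m=123)
print(l, np.bincount(u, minlength=q) / n)        # empirical type close to p_U
```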

  48-50. Point-to-Point Channels: Linear Codes + Multicoding
Theorem (Padakandla-Pradhan '13): Any rate R satisfying
    R < max_{p(u), x(u)} I(U; Y)
is achievable. This is equal to the capacity if q ≥ |X|.
• This is the basic coding framework that we will use for each transmitter.
• Next, let's examine a two-transmitter, one-receiver "compute-and-forward" network.

  51-56. Nested Linear Coding Architecture
[Diagram: M_1 → Multicoding → Linear Code → U_1^n → x_1(u_1) → X_1^n and M_2 → Multicoding → Linear Code → U_2^n → x_2(u_2) → X_2^n; both feed p_{Y|X_1,X_2} → Y^n → Decoder → T̂.]
Code Construction:
• Messages m_k ∈ [2^{nR_k}] and auxiliary indices l_k ∈ [2^{nR̂_k}], k = 1, 2.
• Set κ = n max{R_1 + R̂_1, R_2 + R̂_2} / log(q).
• Pick the generator matrix G and dithers d_1^n, d_2^n as before.
• Take the q-ary expansions
    [ν(m_1) ν(l_1)] ∈ F_q^κ,
    [ν(m_2) ν(l_2) 0] ∈ F_q^κ   (zero-padded so both vectors have length κ),
and abbreviate them as η(m_1, l_1), η(m_2, l_2) ∈ F_q^κ.
• Linear codewords:
    u_1^n(m_1, l_1) = η(m_1, l_1) G ⊕ d_1^n,
    u_2^n(m_2, l_2) = η(m_2, l_2) G ⊕ d_2^n.

  57-62. Nested Linear Coding Architecture
Encoding:
• Fix p(u_1), p(u_2), x_1(u_1), and x_2(u_2).
• Multicoding: For each m_k, find an index l_k such that u_k^n(m_k, l_k) ∈ T_{ε'}^{(n)}(U_k).
• Succeeds w.h.p. if R̂_k > D(p_{U_k} ‖ p_q).
• Transmit x_{ki} = x_k(u_{ki}(m_k, l_k)).

  63-65. Nested Linear Coding Architecture
Computation Problem:
• Consider the coefficient vector a = [a_1, a_2] ∈ F_q².
• For m_k ∈ [2^{nR_k}], l_k ∈ [2^{nR̂_k}], the linear combination of codewords with coefficient vector a is
    a_1 u_1^n(m_1, l_1) ⊕ a_2 u_2^n(m_2, l_2)
      = [a_1 η(m_1, l_1) ⊕ a_2 η(m_2, l_2)] G ⊕ a_1 d_1^n ⊕ a_2 d_2^n
      = ν(t) G ⊕ d_w^n
      = w^n(t),
where d_w^n ≜ a_1 d_1^n ⊕ a_2 d_2^n and t ∈ [2^{n max{R_1+R̂_1, R_2+R̂_2}}]. (This identity is verified numerically below.)
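
The key algebraic fact on this slide, that any F_q-linear combination of two dithered codewords is itself a dithered codeword of the same nested codebook, is easy to check numerically. A minimal sketch with illustrative field size, dimensions, and coefficients:

```python
import numpy as np

q, kappa, n = 5, 8, 20                    # illustrative field size and dimensions
rng = np.random.default_rng(7)
G = rng.integers(0, q, size=(kappa, n))   # shared generator matrix
d1 = rng.integers(0, q, size=n)           # per-user dithers
d2 = rng.integers(0, q, size=n)

# Zero-padded expansions eta(m_k, l_k); user 2's shorter index vector is padded.
eta1 = rng.integers(0, q, size=kappa)
eta2 = np.concatenate([rng.integers(0, q, size=kappa - 3), np.zeros(3, dtype=int)])

u1 = (eta1 @ G + d1) % q                  # u_1^n(m_1, l_1)
u2 = (eta2 @ G + d2) % q                  # u_2^n(m_2, l_2)

a1, a2 = 2, 3                             # coefficient vector a = [a_1, a_2] in F_q^2
lhs = (a1 * u1 + a2 * u2) % q             # a_1 u_1^n (+) a_2 u_2^n

# Same vector written as nu(t) G (+) d_w^n with d_w^n = a_1 d_1^n (+) a_2 d_2^n.
nu_t = (a1 * eta1 + a2 * eta2) % q
dw = (a1 * d1 + a2 * d2) % q
rhs = (nu_t @ G + dw) % q

assert np.array_equal(lhs, rhs)
print(lhs)
```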

  66-69. Nested Linear Coding Architecture
Computation Problem:
• Let M_k be the chosen message and L_k the chosen auxiliary index from the multicoding step.
• The decoder wants a linear combination of the codewords:
    W^n(T) = a_1 U_1^n(M_1, L_1) ⊕ a_2 U_2^n(M_2, L_2)
• Decoder: a mapping t̂(y^n) ∈ [2^{n max{R_1+R̂_1, R_2+R̂_2}}] for each y^n ∈ Y^n
• Probability of error: P_e^{(n)} = P{T̂ ≠ T}
• A rate pair is achievable if there exists a sequence of codes such that P_e^{(n)} → 0 as n → ∞.

  70. Nested Linear Coding Architecture
Decoding:
• Joint typicality decoding: Find an index t ∈ [2^{n max{R_1+R̂_1, R_2+R̂_2}}] such that (w^n(t), y^n) ∈ T_ε^{(n)}.

  71-72. Nested Linear Coding Architecture
Theorem (Lim-Chen-Nazer-Gastpar, Allerton '15): A rate pair (R_1, R_2) is achievable if
    R_1 < I(W; Y) − I(W; U_2),
    R_2 < I(W; Y) − I(W; U_1),
for some p(u_1) p(u_2) and functions x_1(u_1), x_2(u_2), where U_k takes values in F_q, k = 1, 2, and W = a_1 U_1 ⊕ a_2 U_2.
• Padakandla-Pradhan '13: Special case where R_1 = R_2. (A numeric evaluation of this region follows.)
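
For intuition, the region can be evaluated numerically. A minimal sketch for an illustrative binary case (q = 2, X_k = U_k, Y = U_1 ⊕ U_2 ⊕ N with N ~ Bern(0.1), and a = [1, 1] so W = U_1 ⊕ U_2); none of these choices come from the talk. With uniform inputs, both penalty terms I(W; U_k) vanish and each bound equals 1 − h_2(0.1) ≈ 0.531.

```python
import numpy as np

def H(p):
    """Entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

eps = 0.1   # channel noise: Y = U1 xor U2 xor N, N ~ Bern(eps)

def rate_bounds(p1, p2):
    """Evaluate (I(W;Y) - I(W;U2), I(W;Y) - I(W;U1)) for W = U1 xor U2."""
    pw = p1 * (1 - p2) + (1 - p1) * p2            # P{W = 1}
    py = pw * (1 - eps) + (1 - pw) * eps          # P{Y = 1}, since Y = W xor N
    I_WY = H([1 - py, py]) - H([1 - eps, eps])
    I_WU2 = H([1 - pw, pw]) - H([1 - p1, p1])     # H(W) - H(W|U2), and H(W|U2) = H(U1)
    I_WU1 = H([1 - pw, pw]) - H([1 - p2, p2])
    return I_WY - I_WU2, I_WY - I_WU1

for p1, p2 in [(0.5, 0.5), (0.5, 0.3), (0.4, 0.2)]:
    R1, R2 = rate_bounds(p1, p2)
    print(p1, p2, round(R1, 3), round(R2, 3))
# With uniform inputs (0.5, 0.5), both bounds equal 1 - h2(0.1) ~ 0.531 bits.
```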

  73-78. Proof Sketch
• WLOG assume M = {M_1 = 0, M_2 = 0, L_1 = 0, L_2 = 0}.
• Union bound:
    P_e^{(n)} ≤ Σ_{t≠0} P{ (W^n(t), Y^n) ∈ T_ε^{(n)} | M }.
• Notice that the L_k depend on the codebook, so Y^n and W^n(t) are not independent.
• To get around this issue, we analyze
    P(E) = Σ_{t≠0} P{ (W^n(t), Y^n) ∈ T_ε^{(n)}, U_1^n(0,0) ∈ T_ε^{(n)}, U_2^n(0,0) ∈ T_ε^{(n)} | M }.
• Conditioned on M, the Markov chain Y^n → (U_1^n(0,0), U_2^n(0,0)) → W^n(t) holds.
• P(E) tends to zero as n → ∞ if
    R_k + R̂_k + R̂_1 + R̂_2 < I(W; Y) + D(p_W ‖ p_q) + D(p_{U_1} ‖ p_q) + D(p_{U_2} ‖ p_q).
(The step from this condition to the theorem is spelled out below.)
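
The slides leave the last step implicit: how this condition, together with the multicoding condition R̂_k > D(p_{U_k} ‖ p_q), yields the theorem on slides 71-72. A short derivation for k = 1 (the case k = 2 is symmetric), assuming a_1, a_2 ≠ 0 and independent U_1, U_2 as in the theorem:

```latex
% Take \hat{R}_j \to D(p_{U_j} \| p_q) = \log q - H(U_j) and k = 1 in the bound:
\begin{align*}
R_1 &< I(W;Y) + D(p_W \| p_q) + D(p_{U_1} \| p_q) + D(p_{U_2} \| p_q)
       - 2\hat{R}_1 - \hat{R}_2 \\
    &\to I(W;Y) + \bigl(\log q - H(W)\bigr) - \bigl(\log q - H(U_1)\bigr) \\
    &= I(W;Y) - \bigl(H(W) - H(U_1)\bigr) \\
    &= I(W;Y) - I(W;U_2),
\end{align*}
% where the last step uses H(W \mid U_2) = H(a_1 U_1) = H(U_1): multiplication by
% a nonzero a_1 \in \mathbb{F}_q is a bijection, and U_1 is independent of U_2,
% so I(W;U_2) = H(W) - H(W \mid U_2) = H(W) - H(U_1).
```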

  79-80. Compute-and-Forward over a Gaussian MAC
• Consider a Gaussian MAC with real-valued channel output
    Y = h_1 X_1 + h_2 X_2 + Z.
• Want to recover a_1 X_1^n + a_2 X_2^n for some integers a_1, a_2.
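
The transcript breaks off here, but the standard achievable rate for this problem is the Nazer-Gastpar '11 computation rate, obtained by maximizing over an MMSE scaling of the channel output. A small numeric sketch; the formula is the standard one for unit noise variance, while the specific h, a, and P values are illustrative.

```python
import numpy as np

def computation_rate(h, a, P):
    """Nazer-Gastpar computation rate (unit noise variance):
    R(h, a) = 0.5*log2( 1 / (||a||^2 - P (h.a)^2 / (1 + P ||h||^2)) ), clipped at 0.
    Equals max over scalings alpha of 0.5*log2( P / (alpha^2 + P ||alpha h - a||^2) )."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    denom = a @ a - P * (h @ a) ** 2 / (1 + P * (h @ h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))

P = 10.0
h = np.array([1.0, 1.3])       # illustrative real-valued channel gains
for a in ([1, 1], [1, 2], [2, 1], [0, 1]):
    print(a, round(computation_rate(h, np.array(a), P), 3))
```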
