SLIDE 1

Basic Bounds and Gaussian MAC General Discrete Memoryless MAC Summary

Lecture 7 Multiple Access Channel

I-Hsiang Wang

Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw

December 10, 2014

1 / 49 I-Hsiang Wang NIT Lecture 7

SLIDE 2

Noiseless Graphical Network → Noisy Multi-User Network

We have shown that Shannon's paradigm extends readily to noiseless graphical networks. In particular, for single multicast:
- Zero-error decoding at finite blocklength is feasible.
- Low-complexity explicit construction of network codes is possible.

Caveat: a single noiseless graphical multicast problem possesses a couple of distinct features that make its solution so elegant:
- Noiseless: links are modeled as noiseless finite-capacity edges.
- Orthogonal: links carry independent information without interfering with one another.
In other words, it models well the overlay network above the PHY layer.

Beyond wireline: in many scenarios the communication medium is shared among multiple users, and hence the two features above may be far from valid. As a first step, we investigate several kinds of simple single-hop multi-user noisy channels.

SLIDE 3

Comparison

            Lecture 6                       Lectures 7, 8, 9
            Noiseless graphical multicast   Single-hop multi-user noisy channel
Topology    General                         Single-hop
Linkage     Noiseless, orthogonal           General (Gaussian as main example)
Traffic     Single unicast/multicast        Special cases of multiple unicast:
                                            multiple access channel, broadcast
                                            channel, interference channel

SLIDE 4

Key Features Missing in Noiseless Graphical Multicast

1. Superposition of signals at receiving terminals. The simplest one-hop model is the multiple access channel (Lecture 7).

[Figure: Sources 1, ..., K feed Encoders 1, ..., K with inputs X1, ..., XK into the multiple access channel p(y | x1, ..., xK); a single decoder observes Y at the destination.]

2. Broadcast of signals from transmitting terminals. The simplest one-hop model is the broadcast channel (Lecture 8).

[Figure: a single encoder sends X over the broadcast channel p(y1, ..., yK | x); Decoders 1, ..., K observe Y1, ..., YK at Destinations 1, ..., K.]

SLIDE 5

Key Features Missing in Noiseless Graphical Multicast

3. Interference among independent information flows. The simplest one-hop model is the interference channel (Lecture 9).

[Figure: Encoders 1, ..., K send X1, ..., XK over the interference channel p(y[1:K] | x[1:K]); Decoders 1, ..., K observe Y1, ..., YK.]

We shall start with the multiple access channel (MAC), which partially answers the following two kinds of questions:

1. How do multiple transmitters trade off their rates when accessing a single receiver? (Traffic pattern)
2. How does a single receiver decode multiple data streams when they are superimposed? (Superposition)

SLIDE 6

Multiple Access Channel: Problem Formulation

[Figure: Encoders 1, ..., K map W1, ..., WK to X1, ..., XK; the channel pY|X1,...,XK outputs Y; the decoder outputs (Ŵ1, ..., ŴK).]

1. K independent messages W1, ..., WK, each accessible by only one encoder: Wk ~ Unif[1 : 2^{N Rk}] for all k ∈ [1 : K].
2. Channel: (X1, ..., XK, pY|X1,...,XK, Y).
3. Rate tuple: (R1, ..., RK).

SLIDE 7

Multiple Access Channel: Problem Formulation

4. A (2^{N R1}, 2^{N R2}, ..., 2^{N RK}, N) MAC channel code consists of
   - for each k ∈ [1 : K], an encoding function enc_{k,N} : [1 : 2^{N Rk}] → X_k^N that maps message wk to a length-N codeword x_k^N;
   - a decoding function dec_N : Y^N → ×_{k=1}^K [1 : 2^{N Rk}] that maps a channel output y^N to a reconstructed message tuple (ŵ1, ..., ŵK).
SLIDE 8

Multiple Access Channel: Problem Formulation

5. Error probability: P_e^(N) := Pr{(W1, ..., WK) ≠ (Ŵ1, ..., ŴK)}.
6. A rate tuple R := (R1, ..., RK) is said to be achievable if there exists a sequence of (2^{N R1}, ..., 2^{N RK}, N) MAC channel codes such that P_e^(N) → 0 as N → ∞.
7. The capacity region C := cl{R ∈ [0, ∞)^K : R is achievable}.

SLIDE 9

Lecture Overview

We mainly focus on the two-user case (K = 2) in this lecture. For the MAC, the results in the two-user case extend to the K-user case in a straightforward manner.

1. First we extend the achievability of point-to-point channels to the two-user MAC and establish a capacity inner bound.
2. Second we characterize the capacity region of the Gaussian MAC by proving that the inner bound above is tight.
3. We then use the Gaussian MAC as an example to introduce various schemes, including successive interference cancellation (SIC) and time sharing.
4. Finally we characterize the capacity region of the general MAC by providing an enhanced achievability and a general converse proof.

SLIDE 10

1. Basic Bounds and Gaussian MAC
2. General Discrete Memoryless MAC
3. Summary

SLIDE 11

Capacity Inner Bound

Let us begin by extending the achievability of the point-to-point channel. Now the decoder has to decode two independent messages, and the key lies in analyzing the error event in the appropriate way.

Lemma 1 (Achievability). If (R1, R2) ≥ 0 satisfies the following for some (X1, X2) ~ pX1 · pX2, then (R1, R2) is achievable:

  R1 < I(X1; Y | X2)   (1)
  R2 < I(X2; Y | X1)   (2)
  R1 + R2 < I(X1, X2; Y)   (3)

Remark: The input distribution is chosen such that X1 ⊥ X2, which is reasonable since the two encoders do not cooperate.
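The bounds (1)–(3) can be evaluated numerically for any discrete memoryless MAC. Below is a minimal sketch (not from the lecture; the function names are my own) that computes the three mutual-information bounds for the binary adder MAC Y = X1 + X2 with independent uniform inputs:

```python
import itertools
import math

def mac_bounds(p_x1, p_x2, channel):
    """Return (I(X1;Y|X2), I(X2;Y|X1), I(X1,X2;Y)) in bits for a DM-MAC.

    p_x1, p_x2 : dicts mapping input symbols to probabilities (X1 indep. of X2)
    channel    : dict mapping (x1, x2) to a dict {y: p(y | x1, x2)}
    """
    # Joint distribution p(x1, x2, y) = p(x1) p(x2) p(y | x1, x2).
    joint = {}
    for x1, x2 in itertools.product(p_x1, p_x2):
        for y, p in channel[(x1, x2)].items():
            joint[(x1, x2, y)] = joint.get((x1, x2, y), 0.0) + p_x1[x1] * p_x2[x2] * p

    def H(coords):
        # Entropy of the marginal on the given coordinates (0=X1, 1=X2, 2=Y).
        m = {}
        for k, p in joint.items():
            key = tuple(k[i] for i in coords)
            m[key] = m.get(key, 0.0) + p
        return -sum(p * math.log2(p) for p in m.values() if p > 0)

    # With X1 indep. of X2: I(X1;Y|X2) = H(X1) + H(X2,Y) - H(X1,X2,Y), etc.
    i1 = H((0,)) + H((1, 2)) - H((0, 1, 2))
    i2 = H((1,)) + H((0, 2)) - H((0, 1, 2))
    i12 = H((0, 1)) + H((2,)) - H((0, 1, 2))
    return i1, i2, i12

# Binary adder MAC: Y = X1 + X2 (deterministic), independent uniform inputs.
unif = {0: 0.5, 1: 0.5}
adder = {(a, b): {a + b: 1.0} for a in (0, 1) for b in (0, 1)}
i1, i2, i12 = mac_bounds(unif, unif, adder)
print(i1, i2, i12)  # 1.0 1.0 1.5
```

For the adder MAC this gives R1, R2 < 1 and R1 + R2 < 1.5 bits, so the sum-rate constraint is the active one.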

SLIDE 12

Proof of Achievability

pf: As in the point-to-point case, we use a random coding argument to prove the existence of a sequence of (2^{N R1}, 2^{N R2}, N) codes such that lim_{N→∞} P_e^(N) = 0 as long as (1)–(3) hold. The proof is divided into three parts (as in the point-to-point case): (1) random codebook generation, (2) encoding and decoding, and (3) error probability analysis.

Random codebook generation: for k = 1, 2, randomly and independently generate 2^{N Rk} sequences x_k^N(wk), wk ∈ [1 : 2^{N Rk}], each i.i.d. over time according to pXk (that is, X_k^N ~ ∏_{i=1}^N pXk(xk[i])).

Encoding: for k = 1, 2, to send message wk, Encoder k transmits x_k^N(wk).

Decoding: to facilitate the error probability analysis, use the typicality decoder: (ŵ1, ŵ2) = the unique (w1, w2) ∈ [1 : 2^{N R1}] × [1 : 2^{N R2}] such that (x_1^N(w1), x_2^N(w2), y^N) ∈ T_ϵ^(N)(X1, X2, Y).

SLIDE 13

Error Probability Analysis: By the symmetry of codebook generation, we can assume WLOG that the actual message tuple is (W1, W2) = (1, 1) and focus on analyzing the "averaged-over-codebooks" error probability given (W1, W2) = (1, 1). Let E denote the error event (Ŵ1, Ŵ2) ≠ (W1, W2), and define

  P_{(1,1)}{E} := Pr{E | (W1, W2) = (1, 1)}.

The key is to split the error event E into the following four cases E_a, E_t^(1), E_t^(2), and E_t^(1,2), such that E = E_a ∪ E_t^(1) ∪ E_t^(2) ∪ E_t^(1,2), where

  E_a       := {(X_1^N(1), X_2^N(1), Y^N) ∉ T_ϵ^(N)}
  E_t^(1)   := {(X_1^N(w1), X_2^N(1), Y^N) ∈ T_ϵ^(N) for some w1 ≠ 1}
  E_t^(2)   := {(X_1^N(1), X_2^N(w2), Y^N) ∈ T_ϵ^(N) for some w2 ≠ 1}
  E_t^(1,2) := {(X_1^N(w1), X_2^N(w2), Y^N) ∈ T_ϵ^(N) for some w1 ≠ 1, w2 ≠ 1}

SLIDE 14

Next, we would like to find a set of sufficient conditions under which the above error events have vanishing probability as N → ∞. Following the point-to-point proof, define the event

  A_(w1,w2) := {(X_1^N(w1), X_2^N(w2), Y^N) ∈ T_ϵ^(N)},

and rewrite

  E_a = A_(1,1)^c
  E_t^(1) = ∪_{w1≠1} A_(w1,1)
  E_t^(2) = ∪_{w2≠1} A_(1,w2)
  E_t^(1,2) = ∪_{w1≠1, w2≠1} A_(w1,w2).

Hence, E = A_(1,1)^c ∪ (∪_{w1≠1} A_(w1,1)) ∪ (∪_{w2≠1} A_(1,w2)) ∪ (∪_{w1≠1, w2≠1} A_(w1,w2)). Next, we present a key lemma bounding the probability of these events.

SLIDE 15

Lemma 2. P_{(1,1)}{A_(1,1)} ≥ 1 − ϵ for N large enough, and

  P_{(1,1)}{A_(w1,1)} ≤ 2^{−N(I(X1;Y|X2) − δ1(ϵ))} for all w1 ≠ 1,
  P_{(1,1)}{A_(1,w2)} ≤ 2^{−N(I(X2;Y|X1) − δ2(ϵ))} for all w2 ≠ 1,
  P_{(1,1)}{A_(w1,w2)} ≤ 2^{−N(I(X1,X2;Y) − δ1,2(ϵ))} for all w1 ≠ 1, w2 ≠ 1,

where δ1(ϵ), δ2(ϵ), δ1,2(ϵ) → 0 as ϵ → 0.

With the above lemma and the union of events bound, we see that for N sufficiently large, P_{(1,1)}{E_a} ≤ ϵ, and

  P_{(1,1)}{E_t^(1)} ≤ 2^{N R1} · 2^{−N(I(X1;Y|X2) − δ1(ϵ))}
  P_{(1,1)}{E_t^(2)} ≤ 2^{N R2} · 2^{−N(I(X2;Y|X1) − δ2(ϵ))}
  P_{(1,1)}{E_t^(1,2)} ≤ 2^{N(R1+R2)} · 2^{−N(I(X1,X2;Y) − δ1,2(ϵ))}

∴ As long as (R1, R2) satisfies (1)–(3), lim_{N→∞} P_{(1,1)}{E} = 0.

SLIDE 16

Proof of Lemma 2

Proof of P_{(1,1)}{A_(1,1)} ≥ 1 − ϵ for N large enough: Given (W1, W2) = (1, 1), (X_1^N(1), X_2^N(1), Y^N) is distributed i.i.d. over time according to pX1,X2,Y = pX1 · pX2 · pY|X1,X2. The claim then follows from the LLN and the fact that the typicality decoder is based on pX1,X2,Y.

Proof of P_{(1,1)}{A_(w1,1)} ≤ 2^{−N(I(X1;Y|X2) − δ1(ϵ))} for all w1 ≠ 1: Due to the memoryless channel assumption and the fact that codewords are generated i.i.d. at random, X_1^N(w1) ⊥ (X_2^N(1), Y^N), and

  P_{(1,1)}{A_(w1,1)} = Σ_{(x_1^N, x_2^N, y^N) ∈ T_ϵ^(N)} p(x_1^N) · p(x_2^N, y^N)
    ≤ 2^{N(1+ϵ)H(X1,X2,Y)} · 2^{−N(1−ϵ)H(X1)} · 2^{−N(1−ϵ)H(X2,Y)}
    = 2^{−N(I(X1;Y|X2) − δ1(ϵ))},

where δ1(ϵ) = ϵ(H(X1,X2,Y) + H(X1) + H(X2,Y)) → 0 as ϵ → 0.

The proofs of the other two statements follow similarly.

SLIDE 17

Gaussian MAC: Model

[Figure: Encoders 1 and 2 send X1, X2 through gains g1, g2; the decoder observes Y and outputs (Ŵ1, Ŵ2).]

1. Channel law: Y = g1 X1 + g2 X2 + Z, with Z ~ N(0, σ²) ⊥ (X1, X2).
2. White Gaussian: {Z[t]} is an i.i.d. (white) Gaussian random process.
3. Memoryless: Z[t] ⊥ (W1, W2, X_1^{t−1}, X_2^{t−1}, Z^{t−1}).
4. Average power constraint: (1/N) Σ_{t=1}^N |xk[t]|² ≤ Pk, k = 1, 2.
5. Signal-to-noise ratio: SNRk := |gk|² Pk / σ², k = 1, 2.

SLIDE 18

Characterization of Gaussian MAC Capacity

Theorem 1 (Capacity of Gaussian MAC). If (R1, R2) ≥ 0 satisfies the following, then (R1, R2) is achievable:

  Rk < (1/2) log(1 + SNRk), k = 1, 2   (4)
  R1 + R2 < (1/2) log(1 + SNR1 + SNR2)   (5)

Conversely, if (R1, R2) ≥ 0 is achievable, then it must satisfy (4)–(5) with "<" replaced by "≤".

pf: Achievability is proved by extending the inner bound in Lemma 1 to the continuous setting with an input cost constraint (omitted) and picking Xk ~ N(0, Pk), k = 1, 2. (Evaluation of the mutual information is left as an exercise.) To prove the converse, we make use of Fano's inequality and the data processing inequality to obtain, for k = 1, 2 (with ϵk,N → 0 as N → ∞ below):

SLIDE 19

  N Rk = H(Wk) = I(Wk; Ŵk) + H(Wk | Ŵk) ≤ I(Wk; Y^N) + N ϵk,N.

Bound (4) on individual rate: let k = 1 (k = 2 can be similarly proved), and continue from the above inequality:

  N(R1 − ϵ1,N) ≤ I(W1; Y^N)
    (a)≤ I(W1; Y^N, W2)
    (b)= I(W1; Y^N | W2)
    = Σ_{t=1}^N I(W1; Y[t] | Y^{t−1}, W2)
    ≤ Σ_{t=1}^N I(W1, Y^{t−1}; Y[t] | W2).

(a) is due to I(W1; W2 | Y^N) ≥ 0; (b) is due to I(W1; W2) = 0.

Next we upper bound I(W1, Y^{t−1}; Y[t] | W2) by I(X1[t]; Y[t] | X2[t]):

  I(W1, Y^{t−1}; Y[t] | W2)
    (c)= I(W1, Y^{t−1}, X1[t]; Y[t] | W2, X2[t])
    ≤ I(W1, W2, Y^{t−1}, X1[t]; Y[t] | X2[t])
    (d)= I(X1[t]; Y[t] | X2[t]).

(c) is due to the fact that Xk[t] is a function of Wk for k = 1, 2. (d) is due to the memorylessness of the channel: (W1, W2, Y^{t−1}) − (X1[t], X2[t]) − Y[t] forms a Markov chain.

SLIDE 20

For the Gaussian MAC, note that

  I(X1[t]; Y[t] | X2[t]) = h(Y[t] | X2[t]) − h(Y[t] | X1[t], X2[t])
    = h(g1 X1[t] + Z[t] | X2[t]) − h(Z[t] | X1[t], X2[t])
    (e)≤ h(g1 X1[t] + Z[t]) − h(Z[t])
    (f)≤ (1/2) log(1 + |g1|² P1,t / σ²),   P1,t := E[|X1[t]|²].

(e) is due to "conditioning reduces entropy" and Z[t] ⊥ (X1[t], X2[t]). (f) is due to "Gaussian maximizes differential entropy".

∴ N(R1 − ϵ1,N) ≤ Σ_{t=1}^N (1/2) log(1 + |g1|² P1,t / σ²)
    ≤ N · (1/2) log(1 + |g1|² ((1/N) Σ_{t=1}^N P1,t) / σ²)   (Jensen's inequality)
    ≤ N · (1/2) log(1 + |g1|² P1 / σ²) = N · (1/2) log(1 + SNR1).

SLIDE 21

Bound (5) on sum rate: set ϵN := ϵ1,N + ϵ2,N. Then

  N(R1 + R2 − ϵN) ≤ I(W1; Y^N | W2) + I(W2; Y^N) = I(W1, W2; Y^N)
    = Σ_{t=1}^N I(W1, W2; Y[t] | Y^{t−1})
    (a)≤ Σ_{t=1}^N I(X1[t], X2[t]; Y[t])
    (b)≤ N · (1/2) log(1 + SNR1 + SNR2).

(a) is similar to the previous proof (exercise). (b) is due to the following and Jensen's inequality (exercise):

  I(X1[t], X2[t]; Y[t]) = h(Y[t]) − h(Y[t] | X1[t], X2[t])
    = h(g1 X1[t] + g2 X2[t] + Z[t]) − h(Z[t])
    (c)≤ (1/2) log(1 + Var[g1 X1[t] + g2 X2[t]] / σ²)
    (d)= (1/2) log(1 + (|g1|² P1,t + |g2|² P2,t) / σ²).

(c) is due to "Gaussian maximizes differential entropy". (d) is due to the fact that X1[t] ⊥ X2[t] since W1 ⊥ W2.

SLIDE 22

Capacity Region of Gaussian MAC

[Figure: the pentagon C_GMAC in the (R1, R2) plane. Axis intercepts: (1/2) log(1 + SNR1) and (1/2) log(1 + SNR2); corner coordinates: (1/2) log(1 + SNR1/(1 + SNR2)) and (1/2) log(1 + SNR2/(1 + SNR1)).]

  C_GMAC = { (R1, R2) ≥ 0 : Rk ≤ (1/2) log(1 + SNRk), k = 1, 2, and R1 + R2 ≤ (1/2) log(1 + SNR1 + SNR2) }
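A quick numerical companion to C_GMAC (a sketch; the SNR values and helper names are my own, not from the lecture):

```python
import math

def gaussian_mac_region(snr1, snr2):
    """Return the three rate constraints (bits/channel use) of C_GMAC."""
    c = lambda x: 0.5 * math.log2(1 + x)
    return c(snr1), c(snr2), c(snr1 + snr2)

def in_region(r1, r2, snr1, snr2):
    """Membership test for (R1, R2) in the Gaussian MAC capacity region."""
    b1, b2, bsum = gaussian_mac_region(snr1, snr2)
    return 0 <= r1 <= b1 and 0 <= r2 <= b2 and r1 + r2 <= bsum

# With SNR1 = SNR2 = 1: individual caps = 0.5 bit, sum cap = log2(3)/2 ≈ 0.79.
print(in_region(0.5, 0.5, 1, 1))   # False: sum rate 1.0 exceeds the sum cap
print(in_region(0.5, 0.29, 1, 1))  # True
```

Note how the sum-rate constraint, not the individual ones, rules out the symmetric pair (0.5, 0.5).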

SLIDE 23

Successive Interference Cancellation

A natural scheme for the receiver to resolve multiple data streams is successive interference cancellation (SIC):

1. First decode one user's data, treating the other user's signal as noise.
2. Remove the decoded data, and then decode the next user's data.

Proposition 1 (Successive Decoding Achievability). SIC with decoding order W1 → W2 achieves all (R1, R2) ≥ 0 satisfying (for some (X1, X2) ~ pX1 · pX2 in the first equalities below):

  R1 < I(X1; Y) = (1/2) log(1 + SNR1/(1 + SNR2))
  R2 < I(X2; Y | X1) = (1/2) log(1 + SNR2)

The proof is quite straightforward: decoding W1 first succeeds as long as R1 < I(X1; Y); decoding W2 with x_1^N(W1) known succeeds as long as R2 < I(X2; Y | X1).
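A useful sanity check on Proposition 1: the two SIC rates sum exactly to the sum capacity, since (1/2) log(1 + SNR1/(1 + SNR2)) + (1/2) log(1 + SNR2) = (1/2) log(1 + SNR1 + SNR2). A sketch (the SNR values are arbitrary):

```python
import math

def sic_rates(snr1, snr2):
    """Rates achieved by SIC with decoding order W1 -> W2 (Proposition 1)."""
    r1 = 0.5 * math.log2(1 + snr1 / (1 + snr2))  # user 2's signal treated as noise
    r2 = 0.5 * math.log2(1 + snr2)               # user 1's codeword removed first
    return r1, r2

snr1, snr2 = 4.0, 9.0
r1, r2 = sic_rates(snr1, snr2)
sum_cap = 0.5 * math.log2(1 + snr1 + snr2)
print(abs((r1 + r2) - sum_cap) < 1e-12)  # True: SIC lands on a sum-capacity corner
```

This is why each SIC decoding order hits one corner of the pentagon.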

SLIDE 24

Time Sharing: Capacity Region is Convex

Proposition 2 (Time Sharing Achievability). If rate tuples R(1) ∈ [0, ∞)^K and R(2) ∈ [0, ∞)^K are both achievable, then R(λ) := λ R(1) + λ̄ R(2) is also achievable for all λ ∈ [0, 1], where λ̄ := 1 − λ.

pf: Since R(1) and R(2) are both achievable, there exist a sequence of (2^{N R(1)}, N) codes and a sequence of (2^{N R(2)}, N) codes, both with vanishing error probability. The main idea for achieving R(λ) is to split the blocklength N into two parts of lengths λN and λ̄N, and split the data W into two parts W(1) ∈ [1 : 2^{N λ R(1)}] and W(2) ∈ [1 : 2^{N λ̄ R(2)}]. The proof is completed by using a (2^{λN R(1)}, λN) code to send W(1) in the first part (of length λN) and a (2^{λ̄N R(2)}, λ̄N) code to send W(2) in the second part (of length λ̄N).
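The rate splitting in the proof amounts to a convex combination of rate tuples; a toy sketch (the corner values below are made up for illustration, and the helper name is my own):

```python
# Time sharing (Proposition 2): if R(1) and R(2) are achievable, so is
# lam * R(1) + (1 - lam) * R(2) for any lam in [0, 1].
def time_share(rate1, rate2, lam):
    """Convex combination of two rate tuples."""
    return tuple(lam * a + (1 - lam) * b for a, b in zip(rate1, rate2))

corner_A = (0.2, 1.5)   # example achievable pair (e.g., one SIC corner)
corner_B = (1.0, 0.7)   # example achievable pair (the other SIC corner)
print(time_share(corner_A, corner_B, 0.25))  # ≈ (0.8, 0.9)
```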

SLIDE 25

[Figure: the inner-bound pentagon with corner points A and B. Axis intercepts: I(X1; Y | X2) and I(X2; Y | X1); corner coordinates: I(X1; Y) and I(X2; Y).]

SIC with decoding order W1 → W2 achieves corner A (the green region); SIC with decoding order W2 → W1 achieves corner B (the blue region).

SLIDE 26

[Figure: the same pentagon with corner points A and B.]

With time sharing, all other rate pairs inside the inner-bound region can be achieved.

SLIDE 27

1. Basic Bounds and Gaussian MAC
2. General Discrete Memoryless MAC
3. Summary

SLIDE 28

Enlarged Achievable Region by Time Sharing

By Lemma 1, for (X1, X2) ~ pX1 · pX2, an achievable rate region (inner bound on the capacity region) Rinner(X1, X2) is defined by

  Rinner(X1, X2) := { (R1, R2) ≥ 0 : R1 ≤ I(X1; Y | X2), R2 ≤ I(X2; Y | X1), R1 + R2 ≤ I(X1, X2; Y) }.

By time sharing we obtain an enlarged region (conv: convex hull operation):

  Rinner := conv( ∪_{(X1,X2) ~ pX1 · pX2} Rinner(X1, X2) ).   (6)

Remark: For the (scalar) Gaussian MAC, since a single (Gaussian) distribution simultaneously maximizes all three boundaries, time sharing does not enlarge the inner bound, and hence the capacity region is characterized without using time sharing.

SLIDES 29-32

[Figures: the achievable regions R1, R2, R3 of Lemma 1 for three different input distributions, shown first separately and then together with their convex hull conv(R1 ∪ R2 ∪ R3), illustrating the time-sharing inner bound (6).]

SLIDE 33

It turns out that Rinner in (6) is equal to C. To prove this result, we first develop an outer bound region Router, and then show that the outer and inner bounds match: Router = Rinner = C.

SLIDE 34

Outer Bound Region of General MAC

Lemma 3 (Outer Bound). If (R1, R2) ≥ 0 is achievable, then it must satisfy the following rate constraints for some (Q, X1, X2) ~ pQ · pX1|Q · pX2|Q:

  R1 ≤ I(X1; Y | X2, Q)   (7)
  R2 ≤ I(X2; Y | X1, Q)   (8)
  R1 + R2 ≤ I(X1, X2; Y | Q)   (9)

Let Router(Q, X1, X2) := {(R1, R2) ≥ 0 satisfying (7)–(9)}. Hence, an outer bound region on C can be defined as

  Router := ∪_{(Q,X1,X2) ~ pQ · pX1|Q · pX2|Q} Router(Q, X1, X2).   (10)

SLIDE 35

Single Letterization

pf: Recall that in the converse proof for the Gaussian MAC, before invoking the Gaussian assumption, we arrived at: if (R1, R2) ≥ 0 is achievable (i.e., there exists a sequence of (2^{N R1}, 2^{N R2}, N) codes with lim_{N→∞} P_e^(N) = 0), then (R1, R2) must satisfy the following three inequalities for some (X_1^N, X_2^N) with X_1^N ⊥ X_2^N:

  R1 ≤ (1/N) Σ_{t=1}^N I(X1[t]; Y[t] | X2[t]) + ϵ1,N   (11)
  R2 ≤ (1/N) Σ_{t=1}^N I(X2[t]; Y[t] | X1[t]) + ϵ2,N   (12)
  R1 + R2 ≤ (1/N) Σ_{t=1}^N I(X1[t], X2[t]; Y[t]) + ϵN   (13)

Observe that the right-hand sides of (11)–(13) are averages of mutual information terms over time. In the point-to-point case, since there is only one rate constraint, one can simply find a maximizing distribution to upper bound the mutual information terms at all time slots.

SLIDE 36

Auxiliary Random Variable Q

Recall (11)–(13). Here, however, we cannot do this, simply because such a simultaneously maximizing distribution may not exist for all rate constraints (except in special cases such as the Gaussian MAC). Instead, we introduce an auxiliary random variable Q ~ Unif[1 : N] and (X1, X2, Y) such that (X1, X2, Y) | {Q = t} =d (X1[t], X2[t], Y[t]). Then

  I(X1; Y | X2, Q) = Σ_{t=1}^N Pr{Q = t} I(X1; Y | X2, Q = t)
    = (1/N) Σ_{t=1}^N I(X1[t]; Y[t] | X2[t]).

SLIDE 37

Since X_1^N ⊥ X_2^N, the newly introduced (Q, X1, X2) ~ pQ · pX1|Q · pX2|Q.

Finally, rewriting the right-hand sides of inequalities (11)–(13) as I(X1; Y | X2, Q) + ϵ1,N, I(X2; Y | X1, Q) + ϵ2,N, and I(X1, X2; Y | Q) + ϵN respectively, we obtain

  R1 ≤ I(X1; Y | X2, Q) + ϵ1,N
  R2 ≤ I(X2; Y | X1, Q) + ϵ2,N
  R1 + R2 ≤ I(X1, X2; Y | Q) + ϵN.

Hence, if (R1, R2) ≥ 0 is achievable, then (7)–(9) must hold for some (Q, X1, X2) ~ pQ · pX1|Q · pX2|Q. Q is usually called the "time-sharing" random variable, for reasons that will become clear later.

Remark: In multi-user information theory, introducing auxiliary random variables is often inevitable for obtaining single-letter capacity bounds, except in some special cases such as Gaussian networks.

SLIDE 38

So far, by Lemma 1, time sharing, and Lemma 3, we have shown Rinner ⊆ C ⊆ Router. Next, we shall prove Router ⊆ Rinner to complete the proof that Router = Rinner = C.

SLIDE 39

Capacity Region for General MAC

Theorem 2 (Capacity of General MAC). For a general multiple access channel pY|X1,X2, the capacity region C can be characterized in the following two equivalent forms: C = Rinner as in (6) = Router as in (10).

pf: Note that for any (Q, X1, X2) ~ pQ · pX1|Q · pX2|Q, the bounds (7)–(9) define a "pentagon" with a 45° side and two corner points:

  A: R1 = I(X1; Y | Q),      R2 = I(X2; Y | X1, Q)
  B: R1 = I(X1; Y | X2, Q),  R2 = I(X2; Y | Q).

As long as both A and B lie in Rinner, we have Router ⊆ Rinner. This is simple to prove: A is a convex combination over q of the corner points (I(X1; Y | Q = q), I(X2; Y | X1, Q = q)), each of which is a corner point achieved in Lemma 1 under the input distribution pX1|Q(·|q) · pX2|Q(·|q), and Rinner is convex. Similarly, B ∈ Rinner.

SLIDE 40

Coded Time Sharing

We established the capacity region of the general MAC in a rather convoluted way. The main reason is that the inner bound from Lemma 1 and (6) has a different form from the outer bound from Lemma 3 and (10).

Question: Can we directly prove an inner bound that has the same form as the outer bound from Lemma 3 and (10)?

The answer is YES, via a new coding technique called coded time sharing.

Lemma 4 (Inner Bound by Coded Time Sharing). If (R1, R2) ≥ 0 satisfies the following rate constraints for some (Q, X1, X2) ~ pQ · pX1|Q · pX2|Q, then it is achievable:

  R1 ≤ I(X1; Y | X2, Q)
  R2 ≤ I(X2; Y | X1, Q)
  R1 + R2 ≤ I(X1, X2; Y | Q)

SLIDE 41

Proof Sketch of Coded Time Sharing Inner Bound

The key idea is to generate a time-sharing sequence q^N that controls the "configuration" of the coding schemes at the two distributed encoders. Hence, q^N should be thought of as part of the codebook, and it is revealed to ALL terminals (all encoders and the decoder).

[Figure: q^N ~ ∏_{t=1}^N pQ(q[t]) is shared with both encoders and the decoder; Encoder k generates x_k^N(wk) with x_k[t] ~ pXk|Q(xk[t] | q[t]); the decoder looks for (ŵ1, ŵ2) such that (q^N, x_1^N(ŵ1), x_2^N(ŵ2), y^N) ∈ T_ϵ^(N)(Q, X1, X2, Y).]
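The codebook generation in the sketch above can be written out directly. A minimal sketch, assuming binary input alphabets; all names, distributions, and parameter values are hypothetical illustrations, not the lecture's construction:

```python
import random

def coded_time_sharing_codebooks(N, R1, R2, p_q, p_x_given_q, seed=0):
    """Generate the time-sharing sequence q^N and conditional codebooks.

    p_q          : dict q -> probability
    p_x_given_q  : dict (k, q) -> dict x -> probability, for encoders k = 1, 2
    """
    rng = random.Random(seed)
    draw = lambda dist: rng.choices(list(dist), weights=dist.values())[0]

    # q^N is part of the codebook, revealed to all terminals.
    qN = [draw(p_q) for _ in range(N)]

    # Encoder k's codebook: 2^{N Rk} codewords, with x_k[t] ~ p_{Xk|Q}(. | q[t]).
    books = {}
    for k, R in ((1, R1), (2, R2)):
        books[k] = [[draw(p_x_given_q[(k, qN[t])]) for t in range(N)]
                    for _ in range(2 ** int(N * R))]
    return qN, books

# Illustrative config: Q in {1, 2}, binary inputs, N = 8, R1 = R2 = 0.25.
p_q = {1: 0.75, 2: 0.25}
p_x = {(1, 1): {0: 0.5, 1: 0.5}, (1, 2): {0: 0.9, 1: 0.1},
       (2, 1): {0: 0.5, 1: 0.5}, (2, 2): {0: 0.1, 1: 0.9}}
qN, books = coded_time_sharing_codebooks(8, 0.25, 0.25, p_q, p_x)
print(len(qN), len(books[1]))  # 8 symbols of q^N, 2^{8*0.25} = 4 codewords
```

Note that both codebooks are drawn conditionally on the same q^N, which is exactly what couples the two distributed encoders' "configurations" over time.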

SLIDES 42-46

[Figures: with Q = {1, 2}, |Q| = 2, the regions for the input distributions ∏_{k=1}^2 pXk|Q(·|Q = 1) and ∏_{k=1}^2 pXk|Q(·|Q = 2), together with the coded-time-sharing regions obtained for pQ(1) = 3/4, 1/2, and 1/4. As pQ varies, these regions sweep out the whole convex hull: no need to take the convex hull explicitly!]

SLIDE 47

Remarks

1. For the capacity region in (10) to be computable, the time-sharing random variable Q has to take values in a finite set Q. For a K-user MAC, it suffices to evaluate (10) with |Q| ≤ K. This is called the cardinality bound on the auxiliary random variable.
2. The capacity region in (10), without any convex hull operation, is already convex and closed.
3. For the MAC, it turns out that coded time sharing does not improve the inner bound. This is in general not true for other multi-user channels, such as the interference channel.
4. The single-letterization technique presented in the converse proof will be useful in other multi-user channel coding problems.

SLIDE 48

1. Basic Bounds and Gaussian MAC
2. General Discrete Memoryless MAC
3. Summary

SLIDE 49

Comparison between point-to-point and multi-user channels:

1. Capacity vs. capacity region; supremum vs. closure.
2. Single-letterization in converse proofs: a single maximizing distribution vs. auxiliary random variables.
3. Achievability: both use random coding arguments, which extend straightforwardly; the new ingredients are
   - time sharing,
   - successive decoding (successive interference cancellation, SIC),
   - coded time sharing.