

SLIDE 1

Superposition Coding and Degraded BC Marton’s Coding Scheme and Semi-Deterministic BC Summary

Lecture 8 Broadcast Channel

I-Hsiang Wang

Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw

December 29, 2014

1 / 41 I-Hsiang Wang NIT Lecture 8

SLIDE 2

Broadcast Channel: Overview

Broadcast: the simplest one-hop channel model that captures the broadcast nature of the shared wireless medium.

One-to-many: a single transmitter and multiple receivers; hence a one-to-many, one-hop topology.

Unlike the multiple access channel, characterizing the capacity region of a broadcast channel is in general an open problem. The capacity region is known for specific classes of channel laws, including:
- K-receiver degraded broadcast channel
- K-receiver Gaussian vector broadcast channel
- K-receiver deterministic broadcast channel
- 2-receiver semi-deterministic broadcast channel
- 3-receiver less noisy broadcast channel
- 2-receiver more capable broadcast channel

SLIDE 3

Broadcast Channel: Problem Formulation

[Figure: an encoder maps $(W_1,\dots,W_K)$ to $X$; the channel $p_{Y_1,\dots,Y_K|X}$ produces outputs $Y_1,\dots,Y_K$, decoded by DEC 1 through DEC K into $\hat W_1,\dots,\hat W_K$.]

1. $K$ independent messages $\{W_1,\dots,W_K\}$, all accessible by the encoder: $W_k \sim \mathrm{Unif}\,[1:2^{NR_k}]$, $\forall\, k \in [1:K]$.
2. Channel: $(\mathcal{X},\, p_{Y_1,\dots,Y_K|X},\, \mathcal{Y}_1,\dots,\mathcal{Y}_K)$.
3. Rate tuple: $(R_1,\dots,R_K)$.

SLIDE 4

Broadcast Channel: Problem Formulation

[Figure: same block diagram as before.]

4. A $(2^{NR_1}, 2^{NR_2}, \dots, 2^{NR_K}, N)$ BC channel code consists of
   - an encoding function $\mathrm{enc}_N : \times_{k=1}^{K} [1:2^{NR_k}] \to \mathcal{X}^N$ that maps a message tuple $(w_1,\dots,w_K)$ to a length-$N$ codeword $x^N$;
   - for each $k \in [1:K]$, a decoding function $\mathrm{dec}_{k,N} : \mathcal{Y}_k^N \to [1:2^{NR_k}]$ that maps a channel output $y_k^N$ to a reconstructed message $\hat w_k$.

SLIDE 5

Broadcast Channel: Problem Formulation

[Figure: same block diagram as before.]

5. Error probability: $P_e^{(N)} := \Pr\{(W_1,\dots,W_K) \neq (\hat W_1,\dots,\hat W_K)\}$.
6. A rate tuple $\mathbf{R} := (R_1,\dots,R_K)$ is said to be achievable if there exists a sequence of $(2^{NR_1},\dots,2^{NR_K}, N)$ BC channel codes such that $P_e^{(N)} \to 0$ as $N \to \infty$.
7. The capacity region $\mathcal{C} := \mathrm{cl}\{\mathbf{R} \in [0,\infty)^K : \mathbf{R} \text{ is achievable}\}$.

SLIDE 6

Capacity Region Depends Only on Conditional Marginals

Proposition 1
When there is no feedback, the capacity region $\mathcal{C}$ depends only on the conditional marginal distributions $\{p_{Y_k|X} : k = 1,\dots,K\}$, not on the conditional joint distribution $p_{Y_1,\dots,Y_K|X}$.

pf: Define $P_{e,k}^{(N)} := \Pr\{\hat W_k \neq W_k\}$ for $k = 1,\dots,K$. Observe that $P_{e,k}^{(N)}$ depends only on the conditional marginal $p_{Y_k|X}$, and that
$$\max_{k \in [1:K]} P_{e,k}^{(N)} \;\le\; P_e^{(N)} \;\le\; \sum_{k=1}^{K} P_{e,k}^{(N)}.$$
Hence $\lim_{N\to\infty} P_{e,k}^{(N)} = 0$ for all $k \in [1:K]$ $\iff$ $\lim_{N\to\infty} P_e^{(N)} = 0$.

Remark: In fact, the above proposition holds for any non-feedback multi-user system with individual non-cooperating receivers.

SLIDE 7

(Scalar) Gaussian Broadcast Channel: Model

[Figure: $X$ is scaled by $g_1$ and $g_2$; noises $Z_1, Z_2$ are added to produce $Y_1$ at DEC 1 and $Y_2$ at DEC 2, which decode $\hat W_1$ and $\hat W_2$.]

1. Channel law: $Y_k = g_k X + Z_k$, with $Z_k \sim \mathcal{N}(0, \sigma^2) \perp\!\!\!\perp X$, $k = 1, 2$.
2. White Gaussian: $\{Z_k[t]\}$ is an i.i.d. Gaussian random process for $k = 1, 2$.
3. Memoryless: $Z_k[t] \perp\!\!\!\perp (W_1, W_2, X^{t-1}, Z_k^{t-1})$, $k = 1, 2$.
4. Average power constraint: $\frac{1}{N}\sum_{t=1}^{N} |x[t]|^2 \le P$.
5. Signal-to-noise ratios: $\mathsf{SNR}_k := \frac{|g_k|^2 P}{\sigma^2}$, $k = 1, 2$.

SLIDE 8

Covered in this Lecture

Research on the broadcast channel spans a broad area in network information theory, because the capacity characterization for the general case remains open. In this lecture we therefore focus on 2-receiver broadcast channels and investigate a few important topics, including:
- the superposition coding scheme
- the capacity region of the scalar Gaussian broadcast channel
- the capacity region of the degraded broadcast channel
- Marton's coding scheme
- the capacity region of the semi-deterministic broadcast channel

Remark: Advanced topics such as the Gaussian vector broadcast channel will be discussed if time allows.

SLIDE 9

Encode Two Independent Messages into One Codeword

[Figure: ENC maps $(W_1, W_2)$ to $X^N$; the channel $p_{Y_1,Y_2|X}$ outputs $Y_1^N$ to DEC 1 and $Y_2^N$ to DEC 2, which produce $\hat W_1$ and $\hat W_2$.]

The key challenge lies in encoding: how to embed two independent messages (data) $W_1$ and $W_2$ into a single codeword $X^N(W_1, W_2)$?

Straightforward thinking: independently encode $W_k$ into $U_k^N$, $k = 1, 2$, and combine them into $X^N$ via a deterministic map $x(u_1, u_2)$. Example: $X^N = U_1^N + U_2^N$.

[Figure: ENC 1 maps $W_1$ to $U_1^N(W_1)$, ENC 2 maps $W_2$ to $U_2^N(W_2)$; the map $x(u_1, u_2)$ produces $X^N$.]
SLIDE 10

The naïve coding approach gives the following inner bound: for some $(U_1, U_2) \sim p_{U_1} \cdot p_{U_2}$ and deterministic map $x(u_1, u_2)$, if $(R_1, R_2)$ satisfies
$$R_1 \le I(U_1; Y_1), \quad R_2 \le I(U_2; Y_2),$$
then it is achievable.

Does this scheme work well? Let us use the scalar Gaussian broadcast channel as an example: $U_1 \sim \mathcal{N}(0, \alpha P)$, $U_2 \sim \mathcal{N}(0, (1-\alpha)P)$, $\alpha \in [0,1]$, and $X = U_1 + U_2$.

Capacity inner bound: for $\alpha \in [0,1]$,
$$R_1 \le \tfrac{1}{2}\log\Big(1 + \tfrac{\alpha\,\mathsf{SNR}_1}{1 + (1-\alpha)\mathsf{SNR}_1}\Big), \quad R_2 \le \tfrac{1}{2}\log\Big(1 + \tfrac{(1-\alpha)\mathsf{SNR}_2}{1 + \alpha\,\mathsf{SNR}_2}\Big).$$

Unfortunately, the union of these rate pairs is not even convex, and hence no better than simple time sharing.
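As a quick numerical illustration (mine, not from the slides), the sketch below evaluates the two rate expressions over $\alpha$ for assumed values $\mathsf{SNR}_1 = 20$, $\mathsf{SNR}_2 = 5$, and shows that the $\alpha = 1/2$ point is dominated componentwise by the time-sharing midpoint of the two single-user corners, so the union over $\alpha$ cannot be convex.

```python
import numpy as np

def naive_rates(alpha, snr1, snr2):
    # Rates of the naive scheme: each receiver treats the other signal as noise.
    r1 = 0.5 * np.log2(1 + alpha * snr1 / (1 + (1 - alpha) * snr1))
    r2 = 0.5 * np.log2(1 + (1 - alpha) * snr2 / (1 + alpha * snr2))
    return np.array([r1, r2])

snr1, snr2 = 20.0, 5.0                  # assumed example values
corner1 = naive_rates(1.0, snr1, snr2)  # all power to user 1
corner2 = naive_rates(0.0, snr1, snr2)  # all power to user 2
chord_mid = 0.5 * (corner1 + corner2)   # time sharing between the corners
mid = naive_rates(0.5, snr1, snr2)      # naive scheme at alpha = 1/2

# mid is dominated in both components by chord_mid: the region is not convex.
print(mid, chord_mid)
```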

SLIDE 11

[Figure: rate region plot with axes $R_1$ up to $\frac{1}{2}\log(1+\mathsf{SNR}_1)$ and $R_2$ up to $\frac{1}{2}\log(1+\mathsf{SNR}_2)$, comparing the capacity region with the naïve superposition region.]

The naïve coding scheme turns out to be quite suboptimal.

SLIDE 12

We shall present two methods to improve the naïve coding scheme:

1. Superposition coding: the improvement comes from the decoding part.
2. Marton's coding: the improvement comes from the encoding part.

Before entering the main part, let us introduce two lemmas regarding typicality decoding, which extend the achievability proofs for the point-to-point channel and the multiple access channel. These lemmas will save time when deriving achievability based on typicality arguments.

SLIDE 13

Joint Typicality Lemma

Lemma 1 (Joint Typicality Lemma)
Let $(U, Y, X) \sim p_{U,Y,X}$. Let $(u^n, y^n)$ be a pair of arbitrary sequences, and let $X^n \sim \prod_{i=1}^{n} p_{X|U}(x_i \mid u_i)$. Then
$$\Pr\big\{(u^n, y^n, X^n) \in \mathcal{T}_\epsilon^{(n)}(U, Y, X)\big\} \le 2^{-n(I(X;Y|U) - \delta(\epsilon))}.$$

pf:
$$\begin{aligned}
\Pr\big\{(u^n, y^n, X^n) \in \mathcal{T}_\epsilon^{(n)}\big\}
&= \sum_{x^n \in \mathcal{T}_\epsilon^{(n)}(X|u^n, y^n)} p(x^n \mid u^n)
\le \big|\mathcal{T}_\epsilon^{(n)}(X|u^n, y^n)\big| \cdot 2^{-n(1-\epsilon)H(X|U)} \\
&\le 2^{n(1+\epsilon)H(X|U,Y)} \cdot 2^{-n(1-\epsilon)H(X|U)}
= 2^{-n(I(X;Y|U) - \delta(\epsilon))},
\end{aligned}$$
where $\delta(\epsilon) = \epsilon\,\big(H(X|U) + H(X|U,Y)\big)$.
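A small Monte Carlo sketch (my own illustration, not from the slides) of the special case where $U$ is trivial: over a BSC(0.1) with uniform input, a fresh codeword drawn independently of $y^n$ essentially never passes the joint typicality test, consistent with the $2^{-n(I(X;Y)-\delta(\epsilon))}$ bound. The typicality test and all parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                      # BSC crossover probability
n, eps, trials = 100, 0.2, 2000
pXY = np.array([[0.5 * (1 - p), 0.5 * p],
                [0.5 * p, 0.5 * (1 - p)]])   # joint pmf p(x, y)

def jointly_typical(xn, yn):
    # epsilon-typicality: empirical pmf within a relative eps of p(x, y)
    emp = np.array([[np.mean((xn == a) & (yn == b)) for b in range(2)]
                    for a in range(2)])
    return bool(np.all(np.abs(emp - pXY) <= eps * pXY))

x_true = rng.integers(0, 2, n)
y = x_true ^ (rng.random(n) < p)     # one channel output sequence y^n

hits = 0
for _ in range(trials):
    x_fresh = rng.integers(0, 2, n)  # fresh codeword, independent of y^n
    hits += jointly_typical(x_fresh, y)

print(hits / trials)                 # essentially zero, as the lemma predicts
```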

SLIDE 14

Packing Lemma

Lemma 2 (Packing Lemma)
Let $(U, Y, X) \sim p_{U,Y,X}$. Let $(\tilde U^n, \tilde Y^n)$ be a pair of arbitrarily distributed random sequences. Consider random sequences $X^n(m)$, indexed by $m \in \mathcal{M}$ with $|\mathcal{M}| \le 2^{nR}$, each pairwise conditionally independent of $\tilde Y^n$ given $\tilde U^n$, with
$$\big(X^n(m), \tilde U^n\big) \sim p(\tilde u^n) \cdot \prod_{i=1}^{n} p_{X|U}(x_i \mid \tilde u_i).$$
Define the event $\mathcal{A}_m := \big\{(\tilde U^n, \tilde Y^n, X^n(m)) \in \mathcal{T}_\epsilon^{(n)}(U, Y, X)\big\}$. Then there exists $\delta(\epsilon) \to 0$ as $\epsilon \to 0$ such that
$$\lim_{n\to\infty} \Pr\Big\{\bigcup_{m \in \mathcal{M}} \mathcal{A}_m\Big\} = 0 \quad \text{if } R < I(X; Y|U) - \delta(\epsilon).$$

SLIDE 15

pf: By the joint typicality lemma and the Markov chain $X^n(m) - \tilde U^n - \tilde Y^n$, we bound $\Pr\{\mathcal{A}_m\} \le 2^{-n(I(X;Y|U) - \delta(\epsilon))}$ as follows:
$$\begin{aligned}
\Pr\{\mathcal{A}_m\}
&= \Pr\big\{(\tilde U^n, \tilde Y^n, X^n(m)) \in \mathcal{T}_\epsilon^{(n)}(U, Y, X)\big\} \\
&= \sum_{(\tilde u^n, \tilde y^n) \in \mathcal{T}_\epsilon^{(n)}} p(\tilde u^n, \tilde y^n) \cdot \Pr\big\{(\tilde u^n, \tilde y^n, X^n(m)) \in \mathcal{T}_\epsilon^{(n)} \,\big|\, \tilde U^n = \tilde u^n, \tilde Y^n = \tilde y^n\big\} \\
&\overset{(a)}{=} \sum_{(\tilde u^n, \tilde y^n) \in \mathcal{T}_\epsilon^{(n)}} p(\tilde u^n, \tilde y^n) \cdot \Pr\big\{(\tilde u^n, \tilde y^n, X^n(m)) \in \mathcal{T}_\epsilon^{(n)} \,\big|\, \tilde U^n = \tilde u^n\big\} \\
&\overset{(b)}{\le} \sum_{(\tilde u^n, \tilde y^n) \in \mathcal{T}_\epsilon^{(n)}} p(\tilde u^n, \tilde y^n) \cdot 2^{-n(I(X;Y|U) - \delta(\epsilon))}
\le 2^{-n(I(X;Y|U) - \delta(\epsilon))}.
\end{aligned}$$
(a) is due to $X^n(m) - \tilde U^n - \tilde Y^n$; (b) is due to the joint typicality lemma.

The proof is completed by the union of events bound.

SLIDE 16

1. Superposition Coding and Degraded BC
2. Marton's Coding Scheme and Semi-Deterministic BC
3. Summary

SLIDE 17

Degraded Broadcast Channel

Definition 1 (Physically Degraded BC and Stochastically Degraded BC)
For a two-user broadcast channel $(\mathcal{X}, p_{Y_1,Y_2|X}, \mathcal{Y}_1, \mathcal{Y}_2)$:

1. If $p_{Y_1,Y_2|X} = p_{Y_1|X} \cdot p_{Y_2|Y_1}$, i.e., $X - Y_1 - Y_2$ forms a Markov chain, then it is called a physically degraded broadcast channel.
2. If there exists $\tilde Y_1$ such that $\tilde Y_1 \mid X \overset{d}{=} Y_1 \mid X$ and $X - \tilde Y_1 - Y_2$ forms a Markov chain, then it is called a (stochastically) degraded BC.

Remark: Since the non-feedback capacity region of a broadcast channel depends only on the conditional marginals, in the following we WLOG focus on the physically degraded BC.

Remark: In the above degraded BC, it is obvious that DEC 1 has a stronger reception than DEC 2. This yields a natural way of information combining. Furthermore, for decoding, DEC 1 shall first decode $W_2$ and then decode $W_1$ (similar to successive interference cancellation).

SLIDE 18

Capacity Region of Degraded Broadcast Channel

Theorem 1 (Capacity Region of Degraded BC)
For a two-user degraded broadcast channel $(\mathcal{X}, p_{Y_1,Y_2|X}, \mathcal{Y}_1, \mathcal{Y}_2)$ with $X - Y_1 - Y_2$, the capacity region $\mathcal{C}$ consists of all $(R_1, R_2) \ge 0$ satisfying
$$R_1 \le I(X; Y_1|U), \quad R_2 \le I(U; Y_2)$$
for some $(U, X) \sim p_U \cdot p_{X|U}$.

Note: $U$ is an auxiliary random variable. As we will see soon, however, it does not serve the purpose of time sharing. Instead, it carries the data $W_2$.

In the following, we first give the achievability proof via superposition coding, which yields an inner bound for the general broadcast channel. Then we prove the converse by identifying the auxiliary random variable cleverly to match the inner bound.

SLIDE 19

Superposition Coding: Achievable Rate Region

Lemma 3 (Superposition Coding Inner Bound)
For a general broadcast channel, $(R_1, R_2) \ge 0$ is achievable if it satisfies the following for some $(U, X) \sim p_U \cdot p_{X|U}$:
$$R_1 < I(X; Y_1|U), \quad (1)$$
$$R_2 < I(U; Y_2), \quad (2)$$
$$R_1 + R_2 < I(X; Y_1). \quad (3)$$

Achievability in Theorem 1 follows since (3) is inactive for the degraded BC:
$$\text{RHS of (3)} = I(X; Y_1) = I(U, X; Y_1) = I(X; Y_1|U) + I(U; Y_1) \ge I(X; Y_1|U) + I(U; Y_2) = \text{RHS of (1)} + \text{RHS of (2)},$$
where the inequality is due to the Markov chain $U - X - Y_1 - Y_2$.
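To sanity-check the claim that (3) is inactive, here is a small numerical sketch (mine, not from the slides) on an assumed binary degraded example: $U \sim \mathrm{Bern}(1/2)$, $X = U \oplus \mathrm{Bern}(q)$, $Y_1$ the output of a BSC($p_1$) with input $X$, and $Y_2$ the output of a BSC($p_2$) with input $Y_1$, verifying $I(X;Y_1) \ge I(X;Y_1|U) + I(U;Y_2)$.

```python
import numpy as np

def H(p):
    # entropy in bits of a pmf given as an array
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def I(pxy):
    # mutual information of a 2-D joint pmf
    return H(pxy.sum(1)) + H(pxy.sum(0)) - H(pxy.ravel())

def bsc(p):
    return np.array([[1 - p, p], [p, 1 - p]])   # rows: input, cols: output

# Assumed degraded chain: U - X - Y1 - Y2
q, p1, p2 = 0.11, 0.05, 0.15
pU = np.array([0.5, 0.5])
p_uxyy = np.einsum('u,ux,xa,ab->uxab', pU, bsc(q), bsc(p1), bsc(p2))

I_X_Y1 = I(p_uxyy.sum(axis=(0, 3)))     # I(X; Y1)
I_U_Y2 = I(p_uxyy.sum(axis=(1, 2)))     # I(U; Y2)
p_uxy1 = p_uxyy.sum(axis=3)
I_X_Y1_given_U = sum(pU[u] * I(p_uxy1[u] / pU[u]) for u in range(2))

print(I_X_Y1, I_X_Y1_given_U + I_U_Y2)  # first >= second
```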

SLIDE 20

Superposition Coding: Simplified Encoding

Building upon the intuition for the degraded BC, the previous naïve superposition scheme can be simplified to the following one:

[Figure: ENC 2 maps $W_2$ to $U_2^N(W_2) \sim p_U$; ENC 1 then maps $(W_1, W_2)$ to $X^N(W_1, W_2) \sim p_{X|U}$.]

Random codebook generation:
- Randomly and independently generate $2^{NR_2}$ sequences $u^N(w_2)$, $w_2 \in [1:2^{NR_2}]$, each $\sim \prod_{i=1}^{N} p_U(u_i)$.
- For each $w_2 \in [1:2^{NR_2}]$, randomly and conditionally independently generate $2^{NR_1}$ sequences $x^N(w_1, w_2)$, $w_1 \in [1:2^{NR_1}]$, each $\sim \prod_{i=1}^{N} p_{X|U}(x_i \mid u_i(w_2))$.

Encoding: To send $(w_1, w_2)$, transmit $x^N(w_1, w_2)$.
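The two-step codebook generation above can be sketched as follows. This is my own toy instantiation with assumed binary alphabets, $p_U = \mathrm{Bern}(1/2)$, and a BSC-like $p_{X|U}$ (flip probability 0.1); it is not a construction given on the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes and distributions (all assumed)
N, R1, R2 = 8, 0.25, 0.25
M1, M2 = 2 ** int(N * R1), 2 ** int(N * R2)

# Step 1: cloud centers u^N(w2), i.i.d. ~ pU
U = rng.integers(0, 2, size=(M2, N))

# Step 2: satellites x^N(w1, w2), conditionally i.i.d. ~ pX|U around each center
flip = (rng.random(size=(M1, M2, N)) < 0.1).astype(int)
X = U[None, :, :] ^ flip

def encode(w1, w2):
    # to send (w1, w2), transmit x^N(w1, w2)
    return X[w1, w2]

print(U.shape, X.shape)
```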

SLIDE 21

[Figure: cloud centers $u^N(w_2)$ in the space $\mathcal{U}^N$.]

SLIDE 22

[Figure: cloud centers $u^N(w_2)$ in $\mathcal{U}^N$ and satellite codewords $x^N(w_1, w_2)$ in $\mathcal{X}^N$.]

SLIDE 23

Superposition Coding: Refined Decoding

Decoding:
- DEC 2 finds a unique $w_2 \in [1:2^{NR_2}]$ such that $(u^N(w_2), y_2^N) \in \mathcal{T}_\epsilon^{(N)}(U, Y_2)$. DEC 2 decodes $w_2$ without making use of the structure of $X^N$ (it treats the interference $w_1$ as noise).
- DEC 1 finds a unique $w_1 \in [1:2^{NR_1}]$ such that for some $w_2 \in [1:2^{NR_2}]$, $(u^N(w_2), x^N(w_1, w_2), y_1^N) \in \mathcal{T}_\epsilon^{(N)}(U, X, Y_1)$. DEC 1, on the other hand, decodes $w_1$ by recognizing the structure of $U^N$. However, since decoding $w_2$ correctly is not its goal, in the typicality test it only requires some $w_2$.

SLIDE 24

Superposition Coding: Error Probability Analysis (1)

Error probability analysis: As mentioned in the proof of Proposition 1, we can equivalently focus on the individual error probabilities $P_{e,1}^{(N)}$ and $P_{e,2}^{(N)}$.

By the symmetry of codebook generation, we can WLOG assume the actual message tuple is $(W_1, W_2) = (1, 1)$ and focus on analyzing the averaged-over-codebooks error probability given $(W_1, W_2) = (1, 1)$.

DEC 2: We focus on $P^{(1,1)}\{\mathcal{E}_2\} := \Pr\{\mathcal{E}_2 \mid (W_1, W_2) = (1, 1)\}$, where $\mathcal{E}_2$ denotes the error event $\hat W_2 \neq W_2$. Following the achievability proof arguments for the point-to-point case, split the error event $\mathcal{E}_2$ into
$$\mathcal{E}_{2,a} := \big\{(U^N(1), Y_2^N) \notin \mathcal{T}_\epsilon^{(N)}\big\}, \quad \mathcal{E}_{2,t} := \big\{(U^N(w_2), Y_2^N) \in \mathcal{T}_\epsilon^{(N)} \text{ for some } w_2 \neq 1\big\}.$$
$P^{(1,1)}\{\mathcal{E}_{2,a}\}$ vanishes as $N \to \infty$ due to the LLN. By the Packing Lemma, $P^{(1,1)}\{\mathcal{E}_{2,t}\}$ vanishes as $N \to \infty$ if $R_2 < I(U; Y_2)$.

SLIDE 25

Superposition Coding: Error Probability Analysis (2)

DEC 1: We focus on $P^{(1,1)}\{\mathcal{E}_1\} := \Pr\{\mathcal{E}_1 \mid (W_1, W_2) = (1, 1)\}$, where $\mathcal{E}_1$ denotes the error event $\hat W_1 \neq W_1$. Here the typicality test involves two messages: recall that DEC 1 finds a unique $w_1 \in [1:2^{NR_1}]$ such that for some $w_2 \in [1:2^{NR_2}]$, $(u^N(w_2), x^N(w_1, w_2), y_1^N) \in \mathcal{T}_\epsilon^{(N)}(U, X, Y_1)$. Hence, the error event can be split into three cases, $\mathcal{E}_1 = \mathcal{E}_{1,a} \cup \mathcal{E}_{1,t}^{(1)} \cup \mathcal{E}_{1,t}^{(1,2)}$, where
$$\begin{aligned}
\mathcal{E}_{1,a} &:= \big\{(U^N(1), X^N(1,1), Y_1^N) \notin \mathcal{T}_\epsilon^{(N)}\big\}, \\
\mathcal{E}_{1,t}^{(1)} &:= \big\{(U^N(1), X^N(w_1, 1), Y_1^N) \in \mathcal{T}_\epsilon^{(N)} \text{ for some } w_1 \neq 1\big\}, \\
\mathcal{E}_{1,t}^{(1,2)} &:= \big\{(U^N(w_2), X^N(w_1, w_2), Y_1^N) \in \mathcal{T}_\epsilon^{(N)} \text{ for some } w_1 \neq 1, w_2 \neq 1\big\}.
\end{aligned}$$
$P^{(1,1)}\{\mathcal{E}_{1,a}\}$ vanishes as $N \to \infty$ due to the LLN. To bound the probabilities of $\mathcal{E}_{1,t}^{(1)}$ and $\mathcal{E}_{1,t}^{(1,2)}$, caution is needed in applying the Packing Lemma (next slide).

SLIDE 26

The key to applying the Packing Lemma is to identify the distribution of the random sequences involved in the typicality test under various conditions.

Error event $\mathcal{E}_{1,t}^{(1)}$: Here the involved sequences are $U^N(1)$, $X^N(w_1, 1)$, and $Y_1^N$, where $w_1 \neq 1$. Due to the conditionally i.i.d. generation of $X^N$:
1. $X^N(w_1, 1) \perp\!\!\!\perp Y_1^N$ given $U^N(1)$;
2. $X^N(w_1, 1) \mid U^N(1) \sim \prod_{i=1}^{N} p_{X|U}(x_i \mid u_i)$.
Hence, by the Packing Lemma, $P^{(1,1)}\{\mathcal{E}_{1,t}^{(1)}\}$ vanishes as $N \to \infty$ if $R_1 < I(X; Y_1|U)$.

Error event $\mathcal{E}_{1,t}^{(1,2)}$: The involved sequences are $U^N(w_2)$, $X^N(w_1, w_2)$, and $Y_1^N$, where $w_1, w_2 \neq 1$. Due to the i.i.d. generation of $X^N$ and $U^N$:
1. $(U^N(w_2), X^N(w_1, w_2)) \perp\!\!\!\perp Y_1^N$;
2. $(U^N(w_2), X^N(w_1, w_2)) \sim \prod_{i=1}^{N} p_{U,X}(u_i, x_i)$.
Hence, by the Packing Lemma, $P^{(1,1)}\{\mathcal{E}_{1,t}^{(1,2)}\}$ vanishes as $N \to \infty$ if $R_1 + R_2 < I(U, X; Y_1) = I(X; Y_1)$.

SLIDE 27

Converse Proof for the Degraded Broadcast Channel

The goal is to show that if $(R_1, R_2)$ is achievable, then $R_1 \le I(X; Y_1|U)$ and $R_2 \le I(U; Y_2)$ for some $(U, X) \sim p_U \cdot p_{X|U}$.

pf: As usual, by Fano's inequality and the data processing inequality we get
$$R_1 \le \tfrac{1}{N} I(W_1; Y_1^N \mid W_2) + \epsilon_{1,N}, \quad R_2 \le \tfrac{1}{N} I(W_2; Y_2^N) + \epsilon_{2,N},$$
where $\epsilon_{1,N}, \epsilon_{2,N} \to 0$ as $N \to \infty$.

Note: Giving $W_2$ to DEC 1 will not increase the rate much, since in the degraded BC, DEC 1 can "simulate" $Y_2^N$ and hence can decode $W_2$.

Towards single-letterization, we next expand the above using the chain rule:
$$I(W_1; Y_1^N \mid W_2) = \sum_{t=1}^{N} I(W_1; Y_1[t] \mid Y_1^{t-1}, W_2), \quad I(W_2; Y_2^N) = \sum_{t=1}^{N} I(W_2; Y_2[t] \mid Y_2^{t-1}).$$

SLIDE 28

Identifying Auxiliary Random Variable U

We'd like to identify the auxiliary random variable $U$ so as to relate the multi-letter expressions on the LHS to the single-letter expressions on the RHS:
$$I(W_1; Y_1[t] \mid Y_1^{t-1}, W_2) \longleftrightarrow I(X; Y_1|U), \quad (4)$$
$$I(W_2; Y_2[t] \mid Y_2^{t-1}) \longleftrightarrow I(U; Y_2). \quad (5)$$
Furthermore, $U$ has to satisfy the Markov chain $U - X - (Y_1, Y_2)$.

To "guess" auxiliary random variables in converse proofs, we often get inspired by the achievability. In superposition coding, recall that the auxiliary random variable $U$ carries the message $W_2$. Hence, the auxiliary r.v. $U$ here should contain $W_2$. Does it suffice to choose $U[t] := W_2$?

Well, if you look at the LHS of (5), you see that the conditioning on $Y_2^{t-1}$ cannot be removed by choosing $U[t] := W_2$.

SLIDE 29

How about choosing $U[t] := (W_2, Y_1^{t-1})$?
$$\begin{aligned}
I(W_1; Y_1[t] \mid Y_1^{t-1}, W_2)
&\le I(W_1, X[t]; Y_1[t] \mid Y_1^{t-1}, W_2) \\
&= I(X[t]; Y_1[t] \mid Y_1^{t-1}, W_2) + \underbrace{I(W_1; Y_1[t] \mid Y_1^{t-1}, W_2, X[t])}_{=\,0,\ \because\ \text{memoryless: } (W_1, W_2, Y_1^{t-1}) - X[t] - Y_1[t]} \\
&= I(X[t]; Y_1[t] \mid U[t]).
\end{aligned}$$
$$\begin{aligned}
I(W_2; Y_2[t] \mid Y_2^{t-1})
&\le I(W_2, Y_2^{t-1}; Y_2[t])
\le I(W_2, Y_2^{t-1}, Y_1^{t-1}; Y_2[t]) \\
&= I(W_2, Y_1^{t-1}; Y_2[t]) + \underbrace{I(Y_2^{t-1}; Y_2[t] \mid W_2, Y_1^{t-1})}_{=\,0,\ \because\ \text{degraded: } Y_2^{t-1} - Y_1^{t-1} - (W_2, Y_2[t])} \\
&= I(U[t]; Y_2[t]).
\end{aligned}$$

SLIDE 30

Single Letterization

By identifying $U[t] := (W_2, Y_1^{t-1})$, we obtain
$$R_1 - \epsilon_{1,N} \le \frac{1}{N}\sum_{t=1}^{N} I(X[t]; Y_1[t] \mid U[t]), \quad R_2 - \epsilon_{2,N} \le \frac{1}{N}\sum_{t=1}^{N} I(U[t]; Y_2[t]).$$

To finish up, we use the trick from the MAC converse proof: introduce a time-sharing random variable $Q \sim \mathrm{Unif}\,[1:N]$ with
$$(\tilde U, X, Y_1, Y_2) \,\big|\, \{Q = t\} \overset{d}{=} (U[t], X[t], Y_1[t], Y_2[t]).$$
Hence, we conclude that $R_1 \le I(X; Y_1 \mid \tilde U, Q)$ and $R_2 \le I(\tilde U; Y_2 \mid Q)$. Taking $U := (\tilde U, Q)$ and observing that $I(\tilde U; Y_2 \mid Q) \le I(\tilde U, Q; Y_2)$, the proof is complete.

SLIDE 31

Scalar Gaussian Broadcast Channel Capacity

[Figure: the scalar Gaussian BC block diagram as before.]

Without loss of generality, assume $\mathsf{SNR}_1 \ge \mathsf{SNR}_2$.

Theorem 2 (Capacity Region of Scalar Gaussian BC)
The scalar Gaussian BC is degraded, and the capacity region $\mathcal{C}_{\mathrm{GBC}}$ consists of all $(R_1, R_2) \ge 0$ satisfying the following for some $\alpha \in [0,1]$:
$$R_1 \le \tfrac{1}{2}\log(1 + \alpha\,\mathsf{SNR}_1), \quad R_2 \le \tfrac{1}{2}\log\Big(1 + \tfrac{(1-\alpha)\mathsf{SNR}_2}{1 + \alpha\,\mathsf{SNR}_2}\Big).$$
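The boundary of this region can be traced by sweeping $\alpha$. Below is a minimal sketch (mine, not from the slides) with assumed values $\mathsf{SNR}_1 = 20 \ge \mathsf{SNR}_2 = 5$; the function name is illustrative.

```python
import numpy as np

def gbc_boundary(snr1, snr2, num=101):
    # Sweep the power split alpha to trace the boundary (requires snr1 >= snr2).
    alpha = np.linspace(0.0, 1.0, num)
    r1 = 0.5 * np.log2(1 + alpha * snr1)
    r2 = 0.5 * np.log2(1 + (1 - alpha) * snr2 / (1 + alpha * snr2))
    return r1, r2

r1, r2 = gbc_boundary(20.0, 5.0)
# single-user corner points: 0.5*log2(1+SNR1) and 0.5*log2(1+SNR2)
print(r1[-1], r2[0])
```

Note that $R_1$ increases and $R_2$ decreases monotonically along the sweep, since $1 + \frac{(1-\alpha)\mathsf{SNR}_2}{1+\alpha\,\mathsf{SNR}_2} = \frac{1+\mathsf{SNR}_2}{1+\alpha\,\mathsf{SNR}_2}$.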

SLIDE 32

pf: We first prove that the scalar Gaussian BC is degraded. Then we invoke Theorem 1 and show that jointly Gaussian $(U, X)$ suffices to evaluate the capacity region.

Degradedness: Recall that when defining a scalar Gaussian BC, we do not specify the joint distribution of $(Z_1, Z_2)$, since the capacity region of a broadcast channel without feedback depends only on the conditional marginals $p_{Y_1|X}$ and $p_{Y_2|X}$ (Proposition 1). Hence, it suffices to show that for some joint distribution of $(Z_1, Z_2)$, we have a physically degraded broadcast channel $X - Y_1 - Y_2$. We do so by observing
$$Y_2 = g_2 X + Z_2 = \frac{g_2}{g_1}(g_1 X + Z_1) + \Big(Z_2 - \frac{g_2}{g_1} Z_1\Big) = \frac{g_2}{g_1} Y_1 + \Big(Z_2 - \frac{g_2}{g_1} Z_1\Big).$$
Setting $Z_2 := \frac{g_2}{g_1} Z_1 + Z$ where $Z \sim \mathcal{N}\big(0, \big(1 - \frac{|g_2|^2}{|g_1|^2}\big)\sigma^2\big) \perp\!\!\!\perp (X, Z_1)$, we get the desired Markov chain $X - Y_1 - Y_2$.

SLIDE 33

We have proved that the scalar Gaussian BC is degraded, and hence by Theorem 1, its capacity region $\mathcal{C}$ consists of all $(R_1, R_2) \ge 0$ satisfying
$$R_1 \le I(X; Y_1|U), \quad R_2 \le I(U; Y_2),$$
for some $(U, X)$. However, this characterization is not explicit, and we need to show that taking the union over jointly Gaussian $(U, X)$ suffices to evaluate $\mathcal{C}$.

SLIDE 34

Jointly Gaussian $(U, X)$ suffices: First, observe that for jointly Gaussian $(U, X)$ with covariance matrix
$$K_{U,X} = \begin{bmatrix} K_U & \rho\,(K_U P)^{1/2} \\ \rho\,(K_U P)^{1/2} & P \end{bmatrix}, \quad (6)$$
the two mutual information terms evaluate to (exercise)
$$I(X; Y_1|U) = \tfrac{1}{2}\log\big(1 + (1 - |\rho|^2)\,\mathsf{SNR}_1\big), \quad I(U; Y_2) = \tfrac{1}{2}\log\Big(1 + \tfrac{|\rho|^2\,\mathsf{SNR}_2}{1 + (1 - |\rho|^2)\mathsf{SNR}_2}\Big),$$
which match the desired form in Theorem 2 with $\alpha := 1 - |\rho|^2$.

Next we show that for arbitrarily distributed $(U, X)$, there exists $\alpha \in [0,1]$ so that
$$I(X; Y_1|U) \le \tfrac{1}{2}\log(1 + \alpha\,\mathsf{SNR}_1), \quad I(U; Y_2) \le \tfrac{1}{2}\log\Big(1 + \tfrac{(1-\alpha)\mathsf{SNR}_2}{1 + \alpha\,\mathsf{SNR}_2}\Big).$$

SLIDE 35

Consider a jointly distributed $(U, X)$ with covariance matrix as in (6). First, observe that $I(X; Y_1|U) = h(Y_1|U) - h(Z_1)$, and
$$h(Y_1|U) = h(g_1 X + Z_1 \mid U) \le \tfrac{1}{2}\log(2\pi e)\big(\sigma^2 + (1 - |\rho|^2)|g_1|^2 P\big),$$
since the jointly Gaussian distribution maximizes conditional differential entropy. Therefore, there exists $\alpha \in [0,1]$ so that $h(g_1 X + Z_1 \mid U) = \tfrac{1}{2}\log(2\pi e)(\sigma^2 + \alpha |g_1|^2 P)$, and hence $I(X; Y_1|U) = \tfrac{1}{2}\log(1 + \alpha\,\mathsf{SNR}_1)$.

Second, observe that $I(U; Y_2) = h(Y_2) - h(Y_2|U)$. Since $h(Y_2) \le \tfrac{1}{2}\log(2\pi e)(\sigma^2 + |g_2|^2 P)$, we only need to show that $h(Y_2|U) \ge \tfrac{1}{2}\log(2\pi e)(\sigma^2 + \alpha |g_2|^2 P)$. One can prove this (exercise) by the conditional entropy power inequality: if $X - U - Y$, then $2^{2h(X+Y|U)} \ge 2^{2h(X|U)} + 2^{2h(Y|U)}$.

Key: $Y_2 = \frac{g_2}{g_1} Y_1 + Z$ where $Z \sim \mathcal{N}\big(0, \big(1 - \frac{|g_2|^2}{|g_1|^2}\big)\sigma^2\big) \perp\!\!\!\perp (X, Z_1)$.

SLIDE 36

1. Superposition Coding and Degraded BC
2. Marton's Coding Scheme and Semi-Deterministic BC
3. Summary

SLIDE 37

Recap: Naïve Superposition

For some $(U_1, U_2) \sim p_{U_1} \cdot p_{U_2}$ and a deterministic map $x(u_1, u_2)$, $(R_1, R_2)$ is achievable if it satisfies
$$R_1 \le I(U_1; Y_1), \quad R_2 \le I(U_2; Y_2).$$

[Figure: ENC 1 maps $W_1$ to $U_1^N(W_1)$, ENC 2 maps $W_2$ to $U_2^N(W_2)$; the map $x(u_1, u_2)$ produces $X^N$.]

For decoding, both receivers decode as if there were only a single user, treating interference as noise.

How to improve it from ENC's point of view? Correlate $U_1$ and $U_2$! Main difficulty: since $U_1^N$ is a function of $W_1$ and $U_2^N$ is a function of $W_2$, we always have $U_1^N \perp\!\!\!\perp U_2^N$!

SLIDE 38

Question: Suppose we insist on randomly generating $U_1^N$ and $U_2^N$ based on the marginals induced by a joint distribution $p_{U_1,U_2}$. What is the probability that the randomly picked pair $(U_1^N(W_1), U_2^N(W_2))$ looks as if it were generated from the joint distribution $p_{U_1,U_2}$?

Well, we know that the probability is not 1, because $U_1^N \perp\!\!\!\perp U_2^N$ always. Moreover, the probability is exponentially vanishing by the joint typicality lemma: it is roughly $2^{-N I(U_1; U_2)}$.

Hence, if the target rate pair is $(R_1, R_2)$, instead of generating exactly $2^{NR_1}$ sequences $U_1^N$ and $2^{NR_2}$ sequences $U_2^N$ for users 1 and 2 respectively, we shall generate more. How much more? To compensate for the $2^{-N I(U_1; U_2)}$ probability, we shall generate $2^{N I(U_1; U_2)}$ times more sequences in total, and this incurs a rate loss of $I(U_1; U_2)$ in the sum rate constraint.
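A quick Monte Carlo sketch (my own illustration) of how rare this event is: draw $U_1^n$ and $U_2^n$ independently from the marginals of a doubly symmetric binary source (under the joint, $U_2 = U_1 \oplus \mathrm{Bern}(q)$) and estimate the probability that the pair passes an $\epsilon$-joint-typicality test; the benchmark $2^{-n I(U_1;U_2)}$ is printed alongside. All parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
q, n, eps, trials = 0.1, 60, 0.25, 5000

def h2(p):
    # binary entropy in bits
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

I12 = 1 - h2(q)                      # I(U1; U2) for the DSBS(q)
pJ = np.array([[0.5 * (1 - q), 0.5 * q],
               [0.5 * q, 0.5 * (1 - q)]])   # target joint pmf

def typical(u1, u2):
    emp = np.array([[np.mean((u1 == a) & (u2 == b)) for b in range(2)]
                    for a in range(2)])
    return bool(np.all(np.abs(emp - pJ) <= eps * pJ))

hits = 0
for _ in range(trials):
    u1 = rng.integers(0, 2, n)       # independent draws from the marginals
    u2 = rng.integers(0, 2, n)
    hits += typical(u1, u2)

print(hits / trials, 2.0 ** (-n * I12))   # both essentially zero here
```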

SLIDE 39

Marton’s Inner Bound

Based on the idea mentioned above, K. Marton developed the following inner bound on the capacity region of general broadcast channels.

Lemma 4 (Marton's Inner Bound)
For a general BC, $(R_1, R_2) \ge 0$ is achievable if it satisfies the following for some $(U_1, U_2) \sim p_{U_1,U_2}$ and deterministic map $x(u_1, u_2)$:
$$R_1 < I(U_1; Y_1), \quad R_2 < I(U_2; Y_2), \quad (7)$$
$$R_1 + R_2 < I(U_1; Y_1) + I(U_2; Y_2) - I(U_1; U_2). \quad (8)$$

Note two key differences compared to the naïve superposition scheme:
- Instead of a product distribution $p_{U_1} \cdot p_{U_2}$, here $(U_1, U_2)$ can be arbitrarily jointly distributed.
- A rate penalty of $I(U_1; U_2)$ appears in the sum rate constraint.

SLIDE 40

Marton's encoder improves upon the naïve superposition scheme; the diagram is shown on the right. The key element is the joint encoder that generates $(U_1^N(W_1), U_2^N(W_2))$.

[Figure: a joint encoder maps $(W_1, W_2)$ to $(U_1^N(W_1), U_2^N(W_2))$; the map $x(u_1, u_2)$ produces $X^N$.]

The idea is that each user first generates more sequences than the target data rate requires, and then applies binning so that each bin represents a message.

SLIDE 41

1. Superposition Coding and Degraded BC
2. Marton's Coding Scheme and Semi-Deterministic BC
3. Summary
