
The ε-Capacity Region of AWGN Multiple Access Channels with Feedback

Vincent Y. F. Tan (Joint work with Lan V. Truong and Silas L. Fong)

National University of Singapore (NUS)

SPCOM 2016, Bangalore


Information Transmission

Shannon Centenary:

[Figure: Shannon's schematic of a general communication system: information source, message, transmitter, signal, noise source, received signal, receiver, destination.]


For a channel {p(y|x) : x ∈ X, y ∈ Y}, we can transmit information at rates up to the capacity [Shannon (1948)]

C = max_{P ∈ P(X)} I(X; Y).

“Feedback doesn’t increase capacity” [Shannon (1956)]
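As a numerical aside (not part of the original slides), the sketch below uses the standard Blahut-Arimoto iteration to evaluate C = max_{P ∈ P(X)} I(X; Y) for a discrete memoryless channel; the function name and the binary symmetric channel example are my own choices for illustration.

```python
import numpy as np

def blahut_arimoto(W, iters=2000):
    """Compute C = max_P I(X;Y) for a DMC with transition matrix W[x, y] (rows sum to 1)."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)                  # start from the uniform input distribution
    for _ in range(iters):
        q = p @ W                              # output distribution induced by p
        # D(W(.|x) || q) for each input x, in nats
        d = np.sum(W * np.log(np.where(W > 0, W / q, 1.0)), axis=1)
        p = p * np.exp(d)
        p /= p.sum()
    q = p @ W
    return float(np.sum(p[:, None] * W * np.log(np.where(W > 0, W / q, 1.0))))

# Example: binary symmetric channel with crossover probability 0.1
delta = 0.1
W_bsc = np.array([[1 - delta, delta], [delta, 1 - delta]])
C_ba = blahut_arimoto(W_bsc)
C_exact = np.log(2) + delta * np.log(delta) + (1 - delta) * np.log(1 - delta)
print(f"Blahut-Arimoto: {C_ba:.6f} nats/use, closed form: {C_exact:.6f} nats/use")
```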


AWGN Channel

At time i = 1, 2, . . . , n, the channel input and output are related by

Y_i = g X_i + Z_i,   Z_i ∼ N(0, 1).

Send M messages encoded as codewords {X^n(m) : m = 1, . . . , M}.

Peak power constraint:

(1/n) Σ_{i=1}^n X_i^2(m) ≤ P,   ∀ m ∈ {1, . . . , M}.

Expected or long-term power constraint:

(1/M) Σ_{m=1}^M (1/n) Σ_{i=1}^n X_i^2(m) ≤ P.
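A small illustrative check (my own, with arbitrary parameters) of the difference between the two constraints: an i.i.d. Gaussian codebook of per-symbol power P meets the long-term constraint on average, while enforcing the peak constraint requires rescaling individual codewords.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, P = 100, 1000, 1.0
X = rng.standard_normal((M, n)) * np.sqrt(P)          # codewords X[m, i]

per_codeword_power = np.mean(X**2, axis=1)            # (1/n) sum_i X_i(m)^2
print("long-term power:", per_codeword_power.mean())                    # ~ P
print("fraction violating the peak constraint:", np.mean(per_codeword_power > P))

# rescale only the offending codewords so that every codeword has power <= P
X_pp = X * np.sqrt(P / np.maximum(per_codeword_power, P))[:, None]
print("max per-codeword power after rescaling:", np.mean(X_pp**2, axis=1).max())
```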

AWGN Channel: Non-Asymptotic Fundamental Limits

Let the channel gain g = 1 wlog.

The average probability of error is P_e^(n) := Pr(M̂ ≠ M).

Define

M∗_PP(n, P, ε) := max{ M ∈ N : ∃ length-n code with M codewords and P_e^(n) ≤ ε under the PP constraint }.

Define

M∗_LT(n, P, ε) := max{ M ∈ N : ∃ length-n code with M codewords and P_e^(n) ≤ ε under the LT constraint }.


First-Order Results

Let C(x) := (1/2) log(1 + x) nats per ch. use.

If we demand that the avg. error prob. vanishes [Shannon (1948)],

lim_{ε↓0} lim_{n→∞} (1/n) log M∗_PP(n, P, ε) = C(P),

lim_{ε↓0} lim_{n→∞} (1/n) log M∗_LT(n, P, ε) = C(P).

In n channel uses, we can send up to nC(P) nats over p(y|x) reliably.


First-Order Results

If we do not demand that the avg. error prob. vanishes [Yoshihara (1964), Polyanskiy-Poor-Verdú (2010)],

lim_{n→∞} (1/n) log M∗_PP(n, P, ε) = C(P),

lim_{n→∞} (1/n) log M∗_LT(n, P, ε) = C(P/(1 − ε)),   ∀ ε ∈ (0, 1).

The above limits are known as the ε-capacities.

Since the peak-power ε-capacity does not depend on ε, the strong converse holds.

Since the long-term ε-capacity depends on ε, the strong converse does not hold.
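A quick numerical illustration (not from the talk) of why only the long-term case fails the strong converse: C(P) is constant in ε, while C(P/(1 − ε)) grows with ε.

```python
import math

def C(x):
    return 0.5 * math.log(1 + x)   # nats per channel use

P = 1.0
print(f"peak power: C(P) = {C(P):.4f} nats/use for every eps in (0,1)")
for eps in (0.01, 0.1, 0.5):
    print(f"long term:  eps = {eps:4.2f}  ->  C(P/(1-eps)) = {C(P / (1 - eps)):.4f} nats/use")
```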


Strong Converse?

ε = lim_{n→∞} P_e^(n),   R = lim_{n→∞} (1/n) log M

[Figure: asymptotic error ε versus rate R. Under the peak-power constraint, ε jumps from 0 to 1 at R = C(P); under the long-term constraint, ε increases gradually with R, following R = C(P/(1 − ε)).]


Higher-Order Results

More refined asymptotic expansions.

Third-order [Polyanskiy-Poor-Verdú (2010), T.-Tomamichel (2015)]:

log M∗_PP(n, P, ε) = nC(P) + √(nV(P)) Φ^{−1}(ε) + (1/2) log n + O(1),

where the channel dispersion is

V(x) := x(x + 2) / (2(x + 1)^2)   squared nats per ch. use

and

Φ(a) := ∫_{−∞}^{a} (1/√(2π)) e^{−t²/2} dt.

Second-order [Yang-Caire-Durisi-Polyanskiy (2015)]:

log M∗_LT(n, P, ε) = nC(P/(1 − ε)) − √(V(P/(1 − ε)) n log n) + o(√n).
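For concreteness, here is a rough numerical sketch of the third-order expansion above, normalized by n and with the O(1) term dropped (an assumption made purely for illustration).

```python
import math
from statistics import NormalDist

def C(x):  return 0.5 * math.log(1 + x)                 # nats per channel use
def V(x):  return x * (x + 2) / (2 * (x + 1) ** 2)      # squared nats per channel use

P, eps = 1.0, 1e-3
Phi_inv = NormalDist().inv_cdf(eps)                     # Phi^{-1}(eps)
for n in (100, 1000, 10000):
    rate = C(P) + math.sqrt(V(P) / n) * Phi_inv + 0.5 * math.log(n) / n
    print(f"n = {n:6d}: approx (1/n) log M*_PP = {rate:.4f} nats/use (capacity {C(P):.4f})")
```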


Feedback

Feedback helps to simplify coding schemes.

Long-term power constraint under feedback:

(1/M) Σ_{m=1}^M (1/n) Σ_{i=1}^n E[ X_i^2(m, Y^{i−1}) ] ≤ P.

Non-asymptotic fundamental limit:

M∗_FB(n, P, ε) := max{ M ∈ N : ∃ length-n code with M codewords and P_e^(n) ≤ ε under the LT-FB constraint }.


Feedback: Existing Results

First-order [Shannon (1956)]:

lim_{ε↓0} lim_{n→∞} (1/n) log M∗_FB(n, P, ε) = C(P).

Schalkwijk and Kailath (1966) demonstrated a simple coding scheme based on estimation-theoretic ideas to show that

P_e^(n)(R) ≤ 2 exp( −2^{2n(C(P)−R)} / 2 ),   for R = (1/n) log M < C(P).

The error exponent is infinite.

This suggests that the fixed-error results can also be drastically improved.
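To see the doubly-exponential decay in action, here is a Monte-Carlo sketch of a Schalkwijk-Kailath-style scheme; this is my own simplified rendering (message grid, scaling, and MMSE update chosen for illustration), not the authors' exact construction.

```python
import numpy as np

def sk_scheme(n, P, M, num_trials=2000, rng=np.random.default_rng(0)):
    """Monte-Carlo sketch of a Schalkwijk-Kailath-style feedback scheme.

    Messages map to M equally spaced points theta in (-1/2, 1/2).  After an
    initial transmission, each symbol sends the receiver's current estimation
    error scaled to power P; the receiver refines its estimate by linear MMSE,
    so the error variance shrinks by a factor (1 + P) per channel use.
    """
    errors = 0
    for _ in range(num_trials):
        m = rng.integers(M)
        theta = (m + 0.5) / M - 0.5
        c1 = np.sqrt(12 * P)                    # E[theta^2] ~ 1/12, so E[X1^2] ~ P
        y = c1 * theta + rng.standard_normal()
        theta_hat = y / c1
        alpha = 1.0 / (12 * P)                  # variance of the estimation error
        for _ in range(n - 1):
            eps = theta_hat - theta             # known to the transmitter via feedback
            c = np.sqrt(P / alpha)              # scale the error to power P
            y = c * eps + rng.standard_normal()
            theta_hat -= (c * alpha / (P + 1)) * y   # linear MMSE update
            alpha /= (1 + P)
        m_hat = int(np.clip(np.floor((theta_hat + 0.5) * M), 0, M - 1))
        errors += (m_hat != m)
    return errors / num_trials

P, n = 1.0, 20
C = 0.5 * np.log(1 + P)                         # capacity in nats per channel use
for R in (0.25 * C, 0.5 * C, 0.75 * C):
    M = max(2, int(np.exp(n * R)))
    print(f"R = {R:.3f} nats/use, M = {M}, estimated P_e = {sk_scheme(n, P, M):.4f}")
```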


AWGN Channels with Feedback: New Results

Theorem (Truong-Fong-T. (ISIT 2016)). For the direct part,

log M∗_FB(n, P, ε) ≥ nC(P/(1 − ε)) − log log n + O(1).

For the converse part,

log M∗_FB(n, P, ε) ≤ nC(P/(1 − ε)) + √(V(P/(1 − ε)) n log n) + O(√n).

From these results, the ε-capacity is

lim_{n→∞} (1/n) log M∗_FB(n, P, ε) = C(P/(1 − ε)).


AWGN Channels with Feedback: Remarks

lim_{n→∞} (1/n) log M∗_FB(n, P, ε) = C(P/(1 − ε)).

Feedback doesn't improve the first-order term, since

lim_{n→∞} (1/n) log M∗_LT(n, P, ε) = C(P/(1 − ε)).

With feedback, the second-order term is at least −log log n + O(1). This is a great improvement over the case without feedback, where the second-order term is [Yang-Caire-Durisi-Polyanskiy (2015)]

−√(V(P/(1 − ε)) n log n) + o(√n).
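A rough numerical comparison of the two second-order terms (constants and lower-order terms ignored; the parameter choices are mine) shows how large the gain from feedback is.

```python
import math

def V(x):  return x * (x + 2) / (2 * (x + 1) ** 2)

P, eps = 1.0, 0.1
for n in (10**3, 10**4, 10**5):
    with_fb = -math.log(math.log(n))
    without_fb = -math.sqrt(V(P / (1 - eps)) * n * math.log(n))
    print(f"n = {n:6d}: -log log n = {with_fb:7.2f} nats   vs   -sqrt(V n log n) = {without_fb:9.1f} nats")
```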


Proof Idea for the Direct Part

Partition the message set {1, . . . , M} into A1 ⊔ A2.

A1: send (0, 0, . . . , 0) ∈ R^n.

A2: Schalkwijk-Kailath (1966) scheme on M′ = |A2| ≈ (1 − ε)M messages, with

P_e^(n)(R′_n | A2) ≤ 1/n,   where R′_n := (1/n) log M′.

Since the A1 messages are sent with zero power, the long-term budget lets the A2 codewords use power ≈ P/(1 − ε).

Choose

log M′ = nC(P/(1 − ε)) − log log n + O_ε(1),

where the −log log n term comes from the double-exponential decay of P_e^(n)(R).

Hence,

P_e^(n) = Pr(A1) P_e^(n)(A1) + Pr(A2) P_e^(n)(A2) ≤ ε · 1 + (1 − ε) · (1/n) ≈ ε.


Proof Idea for the Converse Part

Convert an expected long-term power code into a peak-power code.

Key observation:

∃ LT-FB code {X_i(·, ·)}_{i=1}^n with M messages and P_e^(n) ≤ ε
⟹ ∃ PP-FB code {X′_i(·, ·)}_{i=1}^n with M messages and P_e^(n) ≤ 1 − 1/√n

with

(1/n) Σ_{i=1}^n [ X′_i(M, Y^{i−1}) ]^2 ≤ P / (1 − ε − 1/√n)   a.s.

Exploit the connection between binary hypothesis testing and channel coding with feedback under the peak-power constraint [Polyanskiy-Poor-Verdú (2011)], [Fong-T. (2015)].


MACs and Gaussian MACs

The multiple access channel (MAC).

[Figure: the two-user MAC and its Gaussian version Y_i = g1 X_{1i} + g2 X_{2i} + Z_i.]

Again assume g1 = g2 = 1.


Capacity Region for the Gaussian MAC

[Figure: the Cover-Wyner pentagon RCW in the (R1, R2)-plane, with corner points determined by C(P1), C(P2), C(P1/(1 + P2)) and C(P2/(1 + P1)).]

Cover (1975), Wyner (1974):

R1 ≤ C(P1),
R2 ≤ C(P2),
R1 + R2 ≤ C(P1 + P2).
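A small helper (my own, for illustration) that evaluates the corner points and tests membership in the Cover-Wyner region:

```python
import math

def C(x):  return 0.5 * math.log(1 + x)   # nats per channel use

def in_cover_wyner(R1, R2, P1, P2):
    """Is the rate pair (R1, R2) inside R_CW?"""
    return R1 <= C(P1) and R2 <= C(P2) and R1 + R2 <= C(P1 + P2)

P1 = P2 = 1.0
print("corner rate  C(P1/(1+P2)) =", round(C(P1 / (1 + P2)), 4), "nats/use")
print("single-user  C(P1)        =", round(C(P1), 4), "nats/use")
print("(0.2, 0.2) in R_CW?", in_cover_wyner(0.2, 0.2, P1, P2))
print("(0.3, 0.3) in R_CW?", in_cover_wyner(0.3, 0.3, P1, P2))
```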


Gaussian MAC with Feedback

Consider the Gaussian version with expected long-term power constraints

(1/n) Σ_{i=1}^n E[ X_{1i}^2(M1, Y^{i−1}) ] ≤ P1,

(1/n) Σ_{i=1}^n E[ X_{2i}^2(M2, Y^{i−1}) ] ≤ P2.


Capacity Region of the G-MAC with Feedback

Ozarow (1984) showed that the capacity region is

ROzarow(P1, P2) := ∪_{0 ≤ ρ ≤ 1} { (R1, R2) :
R1 ≤ C( (1 − ρ²) P1 ),
R2 ≤ C( (1 − ρ²) P2 ),
R1 + R2 ≤ C( P1 + P2 + 2ρ√(P1P2) ) }.

With feedback, the capacity region is enlarged!

It appears that the transmitters can cooperate!

The direct part is an extension of the Schalkwijk-Kailath coding scheme.
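As an illustration of the enlargement (not part of the talk), the sketch below grid-searches over ρ to find the largest sum rate in ROzarow and compares it with the no-feedback sum-rate bound C(P1 + P2).

```python
import math

def C(x):  return 0.5 * math.log(1 + x)

def ozarow_max_sum_rate(P1, P2, grid=10001):
    """Largest R1 + R2 in R_Ozarow, by a grid search over rho.

    For each rho, the sum rate is limited both by the sum-rate bound and by
    the two single-user bounds; the Ozarow region is the union over rho.
    """
    best = 0.0
    for k in range(grid):
        rho = k / (grid - 1)
        sum_bound = C(P1 + P2 + 2 * rho * math.sqrt(P1 * P2))
        indiv_bound = C((1 - rho**2) * P1) + C((1 - rho**2) * P2)
        best = max(best, min(sum_bound, indiv_bound))
    return best

P1 = P2 = 1.0
print("no feedback, sum-rate bound C(P1 + P2)    :", round(C(P1 + P2), 4), "nats/use")
print("with feedback, max sum rate over R_Ozarow :", round(ozarow_max_sum_rate(P1, P2), 4), "nats/use")
```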


CR of the G-MAC with Feedback, P1 = P2 = 1

[Figure: rate regions in the (R1, R2)-plane for P1 = P2 = 1: the Cover-Wyner region RCW (no feedback) together with the feedback regions for ρ = 0, 0.1, 0.2, . . . , 0.6 and ρ = 1.0; their union is the Ozarow region ROzarow.]


ε-Capacity Region of the G-MAC with Feedback

Similarly to the single-user case, extend to non-vanishing errors.

(R1, R2) is ε-achievable ⟺ ∃ a sequence of codes with (M1, M2) messages such that

liminf_{n→∞} (1/n) log M1 ≥ R1,   liminf_{n→∞} (1/n) log M2 ≥ R2,

and the average probability of error satisfies

limsup_{n→∞} P_e^(n) ≤ ε.

Cε(P1, P2) is the set of all ε-achievable (R1, R2).


ε-Capacity Region of the G-MAC with Feedback

Theorem (Truong-Fong-T. (arXiv 2015)). The ε-capacity region is

Cε(P1, P2) = ROzarow( P1/(1 − ε), P2/(1 − ε) ),   for all ε ∈ [0, 1).

If we can tolerate an error of ≤ ε, we can operate at (R1, R2) satisfying

R1 ≤ C( (1 − ρ²) P1 / (1 − ε) ),
R2 ≤ C( (1 − ρ²) P2 / (1 − ε) ),
R1 + R2 ≤ C( (P1 + P2 + 2ρ√(P1P2)) / (1 − ε) ),

for any 0 ≤ ρ ≤ 1. This is optimal.
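A sketch (my own helper, under the theorem above) that tests whether a rate pair lies in Cε(P1, P2) by searching over ρ in the scaled Ozarow region:

```python
import math

def C(x):  return 0.5 * math.log(1 + x)

def in_eps_region(R1, R2, P1, P2, eps, grid=2001):
    """Test whether (R1, R2) is in C_eps(P1, P2) = R_Ozarow(P1/(1-eps), P2/(1-eps))."""
    Q1, Q2 = P1 / (1 - eps), P2 / (1 - eps)
    for k in range(grid):
        rho = k / (grid - 1)
        if (R1 <= C((1 - rho**2) * Q1)
                and R2 <= C((1 - rho**2) * Q2)
                and R1 + R2 <= C(Q1 + Q2 + 2 * rho * math.sqrt(Q1 * Q2))):
            return True
    return False

P1 = P2 = 1.0
R = 0.33   # symmetric rate per user, nats/use
for eps in (0.0, 0.1, 0.3):
    print(f"eps = {eps:3.1f}: (R, R) = ({R}, {R}) achievable? {in_eps_region(R, R, P1, P2, eps)}")
```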


ε-Capacity of the G-MAC with Feedback: Remarks

ε = 0 recovers Ozarow's result: C(P1, P2) = C0(P1, P2) = ROzarow(P1, P2).

Again, Cε depends on ε:

Cε(P1, P2) = ROzarow( P1/(1 − ε), P2/(1 − ε) ),   for all ε ∈ [0, 1).

The strong converse doesn't hold.

We have bounds on the “second-order” terms, but they are quite loose.

The direct part follows similarly to the single-user case.


Proof Idea for the Converse: Step 1

Start with an information-spectrum bound somewhat similar to Chen-Alajaji (1995) and Han (1998).

Lemma (Information-Spectrum Bounds). Fix a MAC W^n(y^n | x_1^n, x_2^n) with feedback and error prob. ≤ ε. For any γ1, γ2, γ3 > 0 and any {(Q_{Yi|X1i}, Q_{Yi|X2i}, Q_{Yi})}_{i=1}^n,

log M1 ≤ γ1 − log^+( 1 − ε − Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi|X2i}(Yi|X2i) ) ≥ γ1 ] ),

log M2 ≤ γ2 − log^+( 1 − ε − Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi|X1i}(Yi|X1i) ) ≥ γ2 ] ),

log(M1M2) ≤ γ3 − log^+( 1 − ε − Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi}(Yi) ) ≥ γ3 ] ).

AWGN MACs with Feedback SPCOM 2016 23 / 27

slide-81
SLIDE 81

Proof Idea for the Converse Part : Step 2

Given a code generating symbols {(X1i(M1, Yi−1), X2i(M2, Yi−1))}n

i=1, let

Vincent Tan (NUS) AWGN MACs with Feedback SPCOM 2016 24 / 27

slide-82
SLIDE 82

Proof Idea for the Converse Part : Step 2

Given a code generating symbols {(X1i(M1, Yi−1), X2i(M2, Yi−1))}n

i=1, let

P1i := E[X2

1i],

P2i := E[X2

2i],

ρi := E[X1iX2i] √P1iP2i . Define ρ := n

i=1 ρi

√P1iP2i n√P1P2

Vincent Tan (NUS) AWGN MACs with Feedback SPCOM 2016 24 / 27

slide-83
SLIDE 83

Proof Idea for the Converse Part: Step 2

Given a code generating symbols {(X_{1i}(M1, Y^{i−1}), X_{2i}(M2, Y^{i−1}))}_{i=1}^n, let

P_{1i} := E[X_{1i}^2],   P_{2i} := E[X_{2i}^2],   ρ_i := E[X_{1i} X_{2i}] / √(P_{1i} P_{2i}).

Define

ρ := Σ_{i=1}^n ρ_i √(P_{1i} P_{2i}) / ( n √(P1 P2) ).

Lemma (“Single-Letterization”).

|ρ| ≤ 1,   Σ_{i=1}^n P_{1i}(1 − ρ_i^2) ≤ n P1 (1 − ρ^2),

and

Σ_{i=1}^n ( P_{1i} + P_{2i} + 2ρ_i √(P_{1i} P_{2i}) ) ≤ n ( P1 + P2 + 2ρ √(P1 P2) ).
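A Monte-Carlo sanity check (illustration only, with arbitrary per-symbol powers and correlations) of the single-letterization inequalities:

```python
import numpy as np

rng = np.random.default_rng(1)
n, P1, P2 = 50, 1.0, 2.0
for trial in range(1000):
    P1i = rng.uniform(0, 2, n);  P1i *= n * P1 / P1i.sum()     # average power P1
    P2i = rng.uniform(0, 2, n);  P2i *= n * P2 / P2i.sum()     # average power P2
    rho_i = rng.uniform(-1, 1, n)                               # per-symbol correlations
    rho = np.sum(rho_i * np.sqrt(P1i * P2i)) / (n * np.sqrt(P1 * P2))
    assert abs(rho) <= 1 + 1e-12
    assert np.sum(P1i * (1 - rho_i**2)) <= n * P1 * (1 - rho**2) + 1e-9
    assert (np.sum(P1i + P2i + 2 * rho_i * np.sqrt(P1i * P2i))
            <= n * (P1 + P2 + 2 * rho * np.sqrt(P1 * P2)) + 1e-9)
print("all single-letterization inequalities verified on 1000 random profiles")
```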


Proof Idea for the Converse Part: Step 3

Finally, we need to bound the probabilities. We do so using Chebyshev.

Lemma. For any T > 1, choose

γ1 := nC( P1(1 − ρ^2) T ) + n^{2/3},

γ3 := nC( (P1 + P2 + 2ρ√(P1P2)) T ) + n^{2/3}.

Then, with a good choice of the Q's,

Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi|X2i}(Yi|X2i) ) ≥ γ1 ] ≤ 1/T + O(n^{−1/3}),

Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi}(Yi) ) ≥ γ3 ] ≤ 1/T + O(n^{−1/3}).


Proof Idea for the Converse Part: Completion

Recall that

log M1 ≤ γ1 − log^+( 1 − ε − Pr[ Σ_{i=1}^n log( W(Yi|X1i, X2i) / Q_{Yi|X2i}(Yi|X2i) ) ≥ γ1 ] ).

The probability term satisfies

Pr(· · · ) ≤ 1/T + O(n^{−1/3}).

Choose 1/T = 1 − ε − O(n^{−1/3}), so that

γ1 = nC( P1(1 − ρ^2) / (1 − ε) ) + O(n^{2/3}).

Conclusion:

log M1 ≤ nC( P1(1 − ρ^2) / (1 − ε) ) + O(n^{2/3}).

By-product: the second-order term is upper bounded by O(n^{2/3}).


Wrap Up

Generalized a result by Ozarow (1984) to non-vanishing ε ∈ [0, 1).

Established the ε-capacity region of the AWGN-MAC with feedback:

Cε(P1, P2) = ROzarow( P1/(1 − ε), P2/(1 − ε) ).

First step towards obtaining higher-order terms in the asymptotic expansion.

Current second-order bounds are loose.

Paper: http://arxiv.org/abs/1512.05088 (with Lan V. Truong and Silas L. Fong).
