SLIDE 1

A Systematic Approach to Incremental Redundancy over Erasure Channels

Anoosheh Heidarzadeh (Texas A&M University)

Joint work with Jean-Francois Chamberland (Texas A&M University), Parimal Parag (Indian Institute of Science, Bengaluru), and Richard D. Wesel (University of California, Los Angeles). ISIT, June 19, 2018.

SLIDES 2-4

Random Coding + Hybrid ARQ

Consider the problem of communicating a k-bit message over a memoryless binary erasure channel (BEC) with erasure probability 0 ≤ ε < 1, using random coding + hybrid ARQ*:

  • Consider a random binary parity-check matrix H of size (n − k) × n
  • Consider an arbitrary mapping from k-bit messages to n-bit codewords in the null space of the matrix H
  • The source maps the message x = (x1, . . . , xk) to a codeword c = (c1, . . . , cn)
  • The source divides the codeword c into m sub-blocks c1, . . . , cm for a given 2 ≤ m ≤ n, where sub-block ci consists of the codeword bits with indices n_{i−1}+1, . . . , n_i for i ∈ [m] = {1, . . . , m}, and n1, . . . , nm are given integers such that k ≤ n1 < n2 < · · · < nm = n, with n0 = 0

*ARQ: Automatic Repeat Request

SLIDES 5-7

Random Coding + Hybrid ARQ (Cont.)

  • The source sends the first sub-block, c1
  • The destination receives c1, or a proper subset thereof
  • The destination performs ML decoding to recover the message x and, depending on the outcome of decoding, sends an ACK or a NACK to the source over a perfect feedback channel
  • If the source receives a NACK, it sends the next sub-block, c2, and waits for an ACK or NACK again
  • This action repeats until (i) the source receives an ACK, or (ii) it exhausts all the sub-blocks without receiving an ACK

In case (i), the communication round succeeds, and the source starts a new communication round for the next message. In case (ii), the communication round fails, and the source starts a new communication round for the same message x.
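The round described above can also be simulated directly. The sketch below is my own, not from the paper: it draws a random (n − k) × n binary parity-check matrix, erases each transmitted bit independently with probability ε, and declares the codeword (and hence the message, assuming the message-to-codeword mapping is injective) decodable once the columns of H at the still-unknown positions are linearly independent over GF(2) — the standard ML-decoding criterion for erasures. All function names and parameter values are illustrative.

```python
import random

def gf2_independent(cols):
    """True iff the GF(2) vectors (ints used as bitmasks) are linearly independent."""
    basis = {}                      # pivot bit position -> basis vector
    for v in cols:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
        else:                       # v reduced to zero: dependent
            return False
    return True

def simulate_round(n, k, eps, splits, rng):
    """One hybrid-ARQ round; splits = [n1, ..., nm] with nm = n.
    Returns (number of bits sent, decoding success)."""
    H = [rng.getrandbits(n - k) for _ in range(n)]   # columns of H
    unknown = set(range(n))         # codeword positions not yet received
    sent = 0
    for ni in splits:
        for pos in range(sent, ni):
            if rng.random() >= eps:         # bit survives the BEC
                unknown.discard(pos)
        sent = ni
        # ML decoding over erasures succeeds iff the unknown columns of H
        # are linearly independent (which requires at most n - k unknowns)
        if len(unknown) <= n - k and gf2_independent([H[p] for p in unknown]):
            return sent, True               # destination sends ACK
    return sent, False                      # sub-blocks exhausted, round fails

if __name__ == "__main__":
    rng = random.Random(1)
    trials = [simulate_round(88, 32, 0.5, [70, 88], rng) for _ in range(2000)]
    avg = sum(s for s, _ in trials) / len(trials)
    acks = sum(ok for _, ok in trials) / len(trials)
    print(f"avg bits sent ~ {avg:.1f}, ACK rate ~ {acks:.3f}")
```

With ε = 0 every transmitted bit arrives, so a single full-length sub-block always decodes; with all bits erased, the round always fails after nm = n transmissions — two useful sanity checks on the simulator.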

SLIDE 8

Problem

Expected Effective Blocklength: the expected number of bits sent by the source within a communication round (the randomness comes from both the channel and the code).

Problem: identify the aggregate sub-block sizes n1, . . . , nm−1 that minimize the expected effective blocklength when at most m sub-blocks (i.e., at most m bits of feedback) are available in a communication round.

SLIDE 9

Previous Works vs. This Work

Previous works (for channels other than the BEC):

[1] Vakilinia, Williamson, Ranganathan, Divsalar, and Wesel '14, "Feedback systems using non-binary LDPC codes with a limited number of transmissions," ITW
[2] Williamson, Chen, and Wesel '15, "Variable-length convolutional coding for short blocklengths with decision feedback," TCOM
[3] Vakilinia, Ranganathan, Divsalar, and Wesel '16, "Optimizing transmission lengths for limited feedback with non-binary LDPC examples," TCOM

In this work, we propose a solution by extending the sequential differential optimization (SDO) framework of [3] to the BEC.

SLIDES 10-11

Expected Effective Blocklength

  • Rt: the number of bits observed by the destination at time t; Rt ∼ B(t, 1 − ε)
  • PRt: the discrete probability measure associated with the random variable (r.v.) Rt, i.e.,

    $P_{R_t}(r) = \binom{t}{r} \epsilon^{t-r} (1-\epsilon)^{r}$

  • Ps(r): the probability of decoding success given that the number of bits observed by the destination is r, i.e.,

    $P_s(r) = \begin{cases} 0, & 0 \le r < k \\ \prod_{l=0}^{n-r-1} \bigl(1 - 2^{l-(n-k)}\bigr), & k \le r < n \\ 1, & r \ge n \end{cases}$

SLIDES 12-13

Expected Effective Blocklength (Cont.)

  • PACK(t): the probability that the destination sends an ACK to the source at time t or earlier, i.e.,

    $P_{\mathrm{ACK}}(t) = \begin{cases} 1 - \sum_{e=0}^{t} \bigl(1 - P_s(t-e)\bigr) P_{R_t}(t-e), & k \le t \le n \\ 0, & 0 \le t < k \end{cases}$

  • S: the index of the last sub-block sent by the source within a communication round
  • E[nS]: the expected effective blocklength, i.e.,

    $\mathbb{E}[n_S] = n_m + \sum_{i=1}^{m-1} (n_i - n_{i+1}) P_{\mathrm{ACK}}(n_i)$

Problem: identify n1, . . . , nm−1 such that E[nS] is minimized.
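Under these definitions, E[nS] can be evaluated numerically for any candidate schedule. The sketch below is my own (function names and parameter values are illustrative, not from the paper); it computes Ps, PRt, PACK, and the expected effective blocklength for a schedule n1 < · · · < nm = n:

```python
import math

def Ps(r, n, k):
    """P(decoding succeeds | destination has observed r bits)."""
    if r < k:
        return 0.0
    if r >= n:
        return 1.0
    p = 1.0
    for l in range(n - r):
        p *= 1.0 - 2.0 ** (l - (n - k))
    return p

def P_Rt(r, t, eps):
    """P(Rt = r) with Rt ~ Binomial(t, 1 - eps)."""
    return math.comb(t, r) * (1 - eps) ** r * eps ** (t - r)

def P_ACK(t, n, k, eps):
    """P(destination has sent an ACK by time t)."""
    if t < k:
        return 0.0
    return 1.0 - sum((1.0 - Ps(t - e, n, k)) * P_Rt(t - e, t, eps)
                     for e in range(t + 1))

def expected_effective_blocklength(ns, n, k, eps):
    """E[n_S] for a schedule ns = [n1, ..., nm] with nm = n."""
    return ns[-1] + sum((ns[i] - ns[i + 1]) * P_ACK(ns[i], n, k, eps)
                        for i in range(len(ns) - 1))

if __name__ == "__main__":
    # illustrative parameters (eps = 0.5 matches the numerical-results slides)
    print(expected_effective_blocklength([70, 88], 88, 32, 0.5))
```

Minimizing this quantity over all integer schedules by exhaustive search is exactly the multi-dimensional problem the next slides reduce to one dimension.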

SLIDES 14-15

Multi-Dimensional vs. One-Dimensional Optimization

Challenge: the problem of minimizing E[nS] is a multi-dimensional optimization problem with integer variables n1, . . . , nm−1.

Idea: sequential differential optimization (SDO) reduces the problem to a one-dimensional optimization with the single integer variable n1.

Recall

    $\mathbb{E}[n_S] = n_m + \sum_{i=1}^{m-1} (n_i - n_{i+1}) P_{\mathrm{ACK}}(n_i)$

Suppose that a smooth approximation F(t) of PACK(t) is given, and define

    $\tilde{\mathbb{E}}[n_S] = n_m + \sum_{i=1}^{m-1} (n_i - n_{i+1}) F(n_i)$

SLIDES 16-17

Sequential Differential Optimization (SDO)

Recall

    $\tilde{\mathbb{E}}[n_S] = n_m + \sum_{i=1}^{m-1} (n_i - n_{i+1}) F(n_i)$

SDO: given ñ1, . . . , ñi−1, an approximation ñi of the optimal value of ni for 2 ≤ i ≤ m − 1 can be computed by setting the partial derivative of Ẽ[nS] with respect to ni−1 to zero and solving for ni.

⇒ Given ñ1 (and ñ0 = −∞), an approximation ñi of the optimal value of ni for all 2 ≤ i ≤ m − 1 can be obtained sequentially by

    $\tilde{n}_i = \tilde{n}_{i-1} + \bigl(F(\tilde{n}_{i-1}) - F(\tilde{n}_{i-2})\bigr) \left(\left.\frac{dF(t)}{dt}\right|_{t=\tilde{n}_{i-1}}\right)^{-1}$

⇒ a one-dimensional optimization problem with variable n1.

Challenge: find a smooth approximation F(t) of PACK(t).
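The sequential update above is straightforward to implement once F is chosen. In the sketch below (my own; the deck later compares normal and log-normal choices of F), a normal CDF with placeholder mean and standard deviation stands in for the smooth approximation — the specific values are illustrative, not computed from the paper's analysis:

```python
import math

def sdo_schedule(n1, m, F, dF):
    """Generate [n~_1, ..., n~_{m-1}] via the SDO recursion
    n~_i = n~_{i-1} + (F(n~_{i-1}) - F(n~_{i-2})) / F'(n~_{i-1}),
    with n~_0 = -inf so that F(n~_0) = 0."""
    ns = [float(n1)]
    prev_F = 0.0                          # F(-inf) for a CDF
    for _ in range(2, m):
        cur = ns[-1]
        ns.append(cur + (F(cur) - prev_F) / dF(cur))
        prev_F = F(cur)
    return ns

# Illustrative smooth F: a normal CDF with hypothetical mean/std
mu, sigma = 40.0, 5.0
F  = lambda t: 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))
dF = lambda t: math.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(sdo_schedule(35.0, 5, F, dF))       # n~_1, ..., n~_4 (n_5 = n stays fixed)
```

Because F is increasing and dF > 0, each step is positive, so the generated schedule is strictly increasing, as the problem requires; the resulting real values would then be rounded to integers.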

SLIDES 18-19

Main Idea and Contributions

Fact: PACK(t) for t < n matches the CDF of the r.v. Nn that represents the length of a communication round.

Idea:

  • study the asymptotic behavior of the mean and variance of the r.v. Nn as n grows large, and
  • approximate PACK(t) by the CDF of a continuous r.v. whose mean and variance match those of Nn as n grows large

In this work, we show that

    $\lim_{n\to\infty} \mathbb{E}[N_n] = \frac{k + c_0}{1 - \epsilon}$ and $\lim_{n\to\infty} \mathrm{Var}(N_n) = \frac{(k + c_0)\epsilon + c_0 + c_1}{(1 - \epsilon)^2}$

where c0 = 1.60669... is the Erdős-Borwein constant and c1 = 1.13733... is the digital search tree constant.

SLIDE 20

Numerical Results

[Figure: Throughput T = k PACK(n) / E[nS] vs. message size k, for ε = 0.5, 16 ≤ k ≤ 64, n ∈ {88, 104}, and m ∈ {2, 4}, comparing ES, SDO-NA, and SDO-LNA.]

  • ES: Optimization by Exhaustive Search
  • SDO-NA: SDO based on Normal Approximation
  • SDO-LNA: SDO based on Log-Normal Approximation

SLIDE 21

Numerical Results (Cont.)

[Figure: Throughput T = k PACK(n) / E[nS] vs. blocklength n, for ε = 0.5, k = 32, 64 ≤ n ≤ 128, and m ∈ {2, 3, 4, 5, 16}.]

  • the benefit in terms of throughput for m ≥ 5 becomes relatively small
  • a small number of sub-blocks (i.e., a few bits of feedback) suffices to achieve a throughput close to that obtained with unlimited feedback

SLIDE 22

Proof Steps

In this work, we show that

    $\lim_{n\to\infty} \mathbb{E}[N_n] = \frac{k + c_0}{1 - \epsilon}$ and $\lim_{n\to\infty} \mathrm{Var}(N_n) = \frac{(k + c_0)\epsilon + c_0 + c_1}{(1 - \epsilon)^2}$

where c0 = 1.60669... is the Erdős-Borwein constant and c1 = 1.13733... is the digital search tree constant.

Proof steps:

  • Analysis of the length of a communication round in the asymptotic regime over a lossless channel (using closed-form formulas for several sums of products)
  • Extension of the previous analysis to lossy channels (by showing matching lower and upper bounds)

SLIDES 23-24

Asymptotic Analysis over a Lossless Channel

Assume

  • ε = 0, i.e., the channel is lossless
  • m = n, i.e., each sub-block is one bit

Define

  • Mn: the number of bits needed for the message to become decodable, following the prescribed order in the codeword
  • PMn: the discrete probability measure for the r.v. Mn, i.e., PMn(r) = Ps(r) − Ps(r − 1)

⇒   $P_{M_n}(r) = \begin{cases} 2^{k-r} \prod_{l=0}^{n-r-1} \bigl(1 - 2^{l-(n-k)}\bigr), & k \le r \le n \\ 0, & \text{otherwise} \end{cases}$

Goal: study limn→∞ E[Mn] and limn→∞ Var(Mn).

SLIDES 25-26

limn→∞ E[Mn] and limn→∞ Var(Mn)

For any n,

    $\mathbb{E}[M_n] = \sum_{r=k}^{n} r\, P_{M_n}(r) = \sum_{i=0}^{n-k} (k+i)\, 2^{-i} \prod_{j=i+1}^{n-k} \bigl(1 - 2^{-j}\bigr)$

and

    $\mathbb{E}[M_n^2] = \sum_{r=k}^{n} r^2\, P_{M_n}(r) = \sum_{i=0}^{n-k} (k+i)^2\, 2^{-i} \prod_{j=i+1}^{n-k} \bigl(1 - 2^{-j}\bigr)$

Theorem. For any k,

    $\lim_{n\to\infty} \mathbb{E}[M_n] = k + c_0$ and $\lim_{n\to\infty} \mathrm{Var}(M_n) = c_0 + c_1$

where $c_0 = \sum_{i=1}^{\infty} \frac{1}{2^i - 1} = 1.60669...$ is the Erdős-Borwein constant, and $c_1 = \sum_{i=1}^{\infty} \frac{1}{(2^i - 1)^2} = 1.13733...$ is the digital search tree constant.

Proof: by using closed-form formulas for several infinite sums of infinite products.
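These limits are easy to check numerically. The sketch below is my own (names are illustrative): it evaluates the finite-n moments of Mn directly from PMn and compares them against k + c0 and c0 + c1:

```python
def P_Mn(r, n, k):
    """P(Mn = r) = 2^(k-r) * prod_{l=0}^{n-r-1} (1 - 2^(l-(n-k))), for k <= r <= n."""
    if r < k or r > n:
        return 0.0
    p = 2.0 ** (k - r)
    for l in range(n - r):
        p *= 1.0 - 2.0 ** (l - (n - k))
    return p

def moments_Mn(n, k):
    """Mean and variance of Mn from its probability measure."""
    mean = sum(r * P_Mn(r, n, k) for r in range(k, n + 1))
    second = sum(r * r * P_Mn(r, n, k) for r in range(k, n + 1))
    return mean, second - mean ** 2

# Both series converge geometrically, so 60 terms is plenty in double precision
c0 = sum(1.0 / (2.0 ** i - 1.0) for i in range(1, 60))        # Erdos-Borwein constant
c1 = sum(1.0 / (2.0 ** i - 1.0) ** 2 for i in range(1, 60))   # digital search tree constant

mean, var = moments_Mn(200, 16)
print(mean - (16 + c0), var - (c0 + c1))   # both differences are negligible at n = 200
```

Since PMn(r) decays like 2^{k−r}, the finite-n moments converge to the limits geometrically fast, which is why a moderate n already matches the constants to floating-point precision.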

SLIDES 27-29

Asymptotic Analysis over a Lossy Channel

Assume

  • ε > 0, i.e., the channel is lossy
  • m = n, i.e., each sub-block is one bit

Define

  • Er: the number of bits erased before r bits are observed by the destination; Er ∼ NB(r, ε)
  • PEr: the discrete probability measure for the r.v. Er, i.e.,

    $P_{E_r}(e) = \binom{r+e-1}{e} \epsilon^{e} (1-\epsilon)^{r}$

  • Nn: the length of a communication round
  • PNn: the discrete probability measure for the r.v. Nn, i.e.,

    $P_{N_n}(t) = \begin{cases} \sum_{r=k}^{t} P_{E_r}(t-r)\, P_{M_n}(r), & k \le t < n \\ \sum_{u=n}^{\infty} \sum_{r=k}^{u} P_{E_r}(u-r)\, P_{M_n}(r), & t = n \end{cases}$

Goal: study limn→∞ E[Nn] and limn→∞ Var(Nn).

SLIDES 30-31

limn→∞ E[Nn] and limn→∞ Var(Nn)

⇒ Nn < n: the destination can recover the message before all the codeword bits are sent by the source
⇒ Nn = n: all the codeword bits are exhausted by the source, and the destination may or may not recover the message

For any n,

    $\mathbb{E}[N_n] = \sum_{r=k}^{n} \sum_{e=0}^{\infty} \min(r+e,\, n)\, P_{E_r}(e)\, P_{M_n}(r)$

and

    $\mathbb{E}[N_n^2] = \sum_{r=k}^{n} \sum_{e=0}^{\infty} \min\bigl((r+e)^2,\, n^2\bigr)\, P_{E_r}(e)\, P_{M_n}(r)$

Theorem. For any k and ε,

    $\lim_{n\to\infty} \mathbb{E}[N_n] = \mu(k, \epsilon) \triangleq \frac{k + c_0}{1 - \epsilon}$ and $\lim_{n\to\infty} \mathrm{Var}(N_n) = \sigma^2(k, \epsilon) \triangleq \frac{(k + c_0)\epsilon + c_0 + c_1}{(1 - \epsilon)^2}$

Proof: by showing matching lower and upper bounds.
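The double sums above are finite in practice: for e ≥ n − r, min(r + e, n) caps at n (and min((r + e)², n²) at n²), so the infinite tail collapses to the remaining negative-binomial mass. The sketch below (my own; names illustrative) evaluates E[Nn] and Var(Nn) this way and compares them with μ(k, ε) and σ²(k, ε):

```python
import math

def P_Mn(r, n, k):
    """P(Mn = r), as in the lossless analysis."""
    if r < k or r > n:
        return 0.0
    p = 2.0 ** (k - r)
    for l in range(n - r):
        p *= 1.0 - 2.0 ** (l - (n - k))
    return p

def P_Er(e, r, eps):
    """P(Er = e) with Er ~ NegBin(r, eps): e erasures before r receptions."""
    return math.comb(r + e - 1, e) * eps ** e * (1 - eps) ** r

def moments_Nn(n, k, eps):
    """Mean and variance of the round length Nn via the capped double sum."""
    m1 = m2 = 0.0
    for r in range(k, n + 1):
        pr = P_Mn(r, n, k)
        s1 = s2 = mass = 0.0
        for e in range(n - r):                # terms with r + e < n
            pe = P_Er(e, r, eps)
            mass += pe
            s1 += (r + e) * pe
            s2 += (r + e) ** 2 * pe
        s1 += n * (1.0 - mass)                # e >= n - r: min(r+e, n) = n
        s2 += n * n * (1.0 - mass)            # likewise min((r+e)^2, n^2) = n^2
        m1 += pr * s1
        m2 += pr * s2
    return m1, m2 - m1 ** 2

c0 = sum(1.0 / (2.0 ** i - 1.0) for i in range(1, 60))
c1 = sum(1.0 / (2.0 ** i - 1.0) ** 2 for i in range(1, 60))

k, eps, n = 8, 0.5, 200
mean, var = moments_Nn(n, k, eps)
mu = (k + c0) / (1 - eps)
sigma2 = ((k + c0) * eps + c0 + c1) / (1 - eps) ** 2
print(mean, mu, var, sigma2)
```

At n = 200 the probability of exhausting the codeword is vanishingly small, so the finite-n moments already sit essentially on the asymptotic values.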

SLIDES 32-35

An Upper Bound on limn→∞ E[Nn]

Since min(r + e, n) ≤ r + e,

    $\mathbb{E}[N_n] = \sum_{r=k}^{n} P_{M_n}(r) \sum_{e=0}^{\infty} \min(r+e,\, n)\, P_{E_r}(e) \le \sum_{r=k}^{n} P_{M_n}(r) \sum_{e=0}^{\infty} (r+e)\, P_{E_r}(e)$

for all n. Since Er ∼ NB(r, ε),

    $\sum_{e=0}^{\infty} (r+e)\, P_{E_r}(e) = r \sum_{e=0}^{\infty} P_{E_r}(e) + \sum_{e=0}^{\infty} e\, P_{E_r}(e) = r + \mathbb{E}[E_r] = \frac{r}{1-\epsilon}$

Thus,

    $\mathbb{E}[N_n] \le \sum_{r=k}^{n} \frac{r\, P_{M_n}(r)}{1-\epsilon} = \frac{\mathbb{E}[M_n]}{1-\epsilon}$

for all n. Since limn→∞ E[Mn] = k + c0 (by the result of the lossless case),

    $\lim_{n\to\infty} \mathbb{E}[N_n] \le \frac{k + c_0}{1 - \epsilon}$

SLIDES 36-39

A Lower Bound on limn→∞ E[Nn]

Since min(r + e, n) = r + e for 0 ≤ e ≤ n − r,

    $\mathbb{E}[N_n] = \sum_{r=k}^{n} \sum_{e=0}^{\infty} \min(r+e,\, n)\, P_{E_r}(e)\, P_{M_n}(r) \ge \sum_{r=k}^{n} \sum_{e=0}^{n-r} (r+e)\, P_{E_r}(e)\, P_{M_n}(r)$

for all n. Since PMn(r) is monotone decreasing in n for all k ≤ r ≤ n,

    $P_{M_n}(r) \ge \lim_{n\to\infty} P_{M_n}(r) = 2^{k-r} \prod_{j=r-k+1}^{\infty} \bigl(1 - 2^{-j}\bigr)$

Thus,

    $\mathbb{E}[N_n] \ge \sum_{r=k}^{n} 2^{k-r} \sum_{e=0}^{n-r} (r+e)\, P_{E_r}(e) \prod_{j=r-k+1}^{\infty} \bigl(1 - 2^{-j}\bigr)$

for all n. Since, by the closed-form formulas for several sums of products,

    $\sum_{r=k}^{\infty} 2^{k-r} \sum_{e=0}^{\infty} (r+e)\, P_{E_r}(e) \prod_{j=r-k+1}^{\infty} \bigl(1 - 2^{-j}\bigr) = \frac{k + c_0}{1 - \epsilon}$

it follows that

    $\lim_{n\to\infty} \mathbb{E}[N_n] \ge \frac{k + c_0}{1 - \epsilon}$

SLIDE 40

Summary and Ongoing Work

In this work:

  • Considered the problem of communicating a message over a BEC, using random coding + hybrid ARQ
  • Proposed a framework based on sequential differential optimization (SDO) to optimize the parameters of the system such that the average throughput is maximized

Ongoing work: extending the proposed SDO-based framework

  • to scenarios with constrained feedback rate
  • to channels with memory