

SLIDE 1

Error Rate Analysis for Random Linear Streaming Codes in the Finite Memory Length Regime

Sponsored by NSF CCF-1422997, CCF-1618475 and CCF-1816013, and by MOST Taiwan 107-2628-E-011-003-MY3.

Joint work of Yu-Chih Huang, National Chiao Tung University, Shih-Chun Lin, National Taiwan University of Science and Technology, I-Hsiang Wang, National Taiwan University, and Chih-Chun Wang, Pin-Wen Su, Purdue University 6/7/2020

2020 IEEE International Symposium on Information Theory

SLIDE 2

Outline

 Motivation and Related Work
 System Model
 Main Contributions
   Information-Debt Under Finite Memory
   Exact Error Rate Analysis
   A Provably-Tight Closed-Form Error Rate Approximation
 Numerical Verification and Conclusion

SLIDE 3

5G Communication Systems

 IMT-2020
   Enhanced mobile broadband (eMBB)
   Massive machine-type communications (mMTC)
   Ultra-reliable and low-latency communications (URLLC)
 Low Latency Requirements
   End-to-end delay ≤ 1 ms
   Streaming codes may be a possible solution.

The figure is copied from International Telecommunication Union, “Setting the Scene for 5G: Opportunities and Challenges.” (2018). https://www.itu.int/dms_pub/itu-d/opb/pref/D-PREF-BB.5G_01-2018-PDF-E.pdf

SLIDE 4

Streaming Codes

 Very small queueing delay: good for URLLC & mMTC
 Streaming codes:
   Align the convolutional code structure with the actual transmission/encoding schedule.
   There are other definitions of streaming codes, but we use this basic one.
 What we are interested in: the error rate of streaming codes with a given finite memory length.

SLIDE 5

Existing Results for Streaming Codes

 Adversarial channel model: optimal streaming code rate and code constructions for a deterministic set of possible channel error patterns
   Burst channel model [Martinian and Sundberg 2004] [Khisti and Singh 2009]
   Burst and arbitrary erasure channel model [Fong et al. 2019] [Krishnan et al. 2018] [Badr et al. 2017]
   Variable-size arrivals [Rudow and Rashmi 2018]
 Stochastic channel model: error exponent analysis and finite-memory code construction [Draper and Khisti 2011]

                           Draper and Khisti 2011    Our Work
  Deadline constraint (Δ)  Finite                    Infinite
  Memory length (β)        β ≥ Δ                     Arbitrary
  Error probability        Error exponent analysis   Exact error rate analysis

SLIDE 6

Error Exponent Analysis for Convolutional-Based Codes

 Exponentially tight is not tight enough.

                   Draper et al. 2011; Viterbi 1967   Our Work
  Error analysis   Exponential                        Asymptotic analysis, in addition to exact error rate analysis

 Continued from the previous slide:

                           Draper and Khisti 2011    Our Work
  Deadline constraint (Δ)  Finite                    Infinite
  Memory length (β)        β ≥ Δ                     Arbitrary
  Error probability        Error exponent analysis   Exact error rate analysis

SLIDE 7

Outline

 Motivation and Related Work
 System Model
 Main Contributions
   Information-Debt Under Finite Memory
   Exact Error Rate Analysis
   A Provably-Tight Closed-Form Error Rate Approximation
 Numerical Verification and Conclusion

SLIDE 8

Slotted Coding System

In every time slot u ≥ 1,
 Encoder:
   Receives L packets t_l(u) in GF(2^r), l ∈ [1, L].
   Stores the βL packets from the previous β slots; β is the memory length.
   Encodes the (β + 1)L packets and outputs O coded packets.
   Linear encoder: defining H_u as the O-by-(min(β + 1, u) · L) generator matrix, the O coded packets are H_u times the stacked message packets.
   Random linear streaming codes (RLSCs): each entry of H_u is chosen uniformly at random from GF(2^r), excluding 0.
   Cumulative generator matrix.

(Figure: example with β = 2.)
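The per-slot RLSC encoding above can be sketched as follows. As a simplification this sketch works over the prime field GF(257) as a stand-in for the slides' GF(2^r) (the field choice does not affect the structure), and the names `rlsc_encode`, `beta`, and `msgs` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 257                  # prime-field stand-in for the slides' GF(2^r)
L, O, beta = 2, 3, 2     # message pkts/slot, coded pkts/slot, memory length

def rlsc_encode(messages):
    """At slot u, linearly combine the last min(beta + 1, u)*L message
    packets with a fresh random generator matrix H_u whose entries are
    uniform over the nonzero field elements, as the slide specifies."""
    coded = []
    for u in range(1, len(messages) + 1):
        window = messages[max(0, u - 1 - beta):u]      # last <= beta+1 slots
        x = np.concatenate(window)                     # min(beta+1, u)*L pkts
        H_u = rng.integers(1, P, size=(O, x.size))     # nonzero entries only
        coded.append(H_u @ x % P)                      # O coded packets
    return coded

msgs = [rng.integers(0, P, size=L) for _ in range(5)]
out = rlsc_encode(msgs)
```

Note how the combining window grows until it covers β + 1 slots and then slides, which is exactly the finite-memory structure the talk studies.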

SLIDE 9

Comparison to [Martinian 2004] Coding System

 Our work: finite memory.
 Martinian setting: infinite memory (β = ∞).

(Figure: cumulative generator matrices for finite β, shown for β = 2, versus β = ∞.)

SLIDE 10

Slotted Coding System

In every time slot u ≥ 1,
 Packet erasure channel:
   Only a random subset of the O coded packets arrives at the decoder perfectly.
   The erasure pattern is i.i.d. across u. Let D_u denote the number of packets received in slot u, and define Q_j as the probability of receiving j packets successfully.

(Figure: example with β = 2.)

SLIDE 11

Slotted Coding System

In every time slot u ≥ 1,
 Packet erasure channel:
   Only a random subset of the O coded packets arrives at the decoder perfectly.
   The erasure pattern is i.i.d. across u. Define Q_j as the probability of receiving j packets successfully.
 Received signal:
   The received packets are the non-erased coded packets.
   Denote by I_u the projection of H_u onto the random received subset.
   Cumulative receiver matrix.

(Figure: example with β = 2.)

SLIDE 12

Slotted Coding System

 Objective: given any finite O, L, β, and Q_j,

Definition 1. The vector t(u) is decodable by time u + Δ if all t_l(u), l ∈ [1, L], are decodable by time u + Δ.

(Decodability is determined by a random matrix that depends on the channel realization.)

 With an optimal decoder applied to the received packets z_1^{u+Δ}, we aim to solve for the average error rate and the infinite-deadline slot error rate.

SLIDE 13

Technical Assumptions

 Less-than-capacity (LC) condition: assume the message rate is strictly below capacity in each slot, i.e., L < E[D_u].
 Generalized MDS condition:
   H^(u) is as full rank as possible (details in the paper).
   MDS holds when r → ∞; see the Schwartz–Zippel theorem in [Ho et al. 2006, Theorems 3 and 4].
   These conditions avoid corner cases in the analysis.

(Figure: per-slot flow: the encoder takes in L packets and outputs O packets, of which D_u arrive through the channel.)

SLIDE 14

Outline

 Motivation and Related Work
 System Model
 Main Contributions
   Information-Debt Under Finite Memory
   Exact Error Rate Analysis
   A Provably-Tight Closed-Form Error Rate Approximation
 Numerical Verification and Conclusion

SLIDE 15

Information-Debt Under Infinite Memory

 Mutual information debt under infinite memory (β = ∞) [Martinian 2004]

Definition. Initialize J_e(0) = 0. For any u ≥ 1, iteratively compute
  J_e(u) = max( J_e(u−1) + L − D_u, 0 ).
The debt is nonnegative.

SLIDE 16

Information-Debt Under Infinite Memory

 Mutual information debt under infinite memory (β = ∞) [Martinian 2004]

Definition. Initialize J_e(0) = 0. For any u ≥ 1, iteratively compute
  J_e(u) = max( J_e(u−1) + L − D_u, 0 ).

Observation: whenever J_e(u) hits 0, we can decode t(u) backwards.

(Figure: sample path of J_e(u); the debt decreases when L < D_u and increases when L > D_u.)
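The debt recursion J_e(u) = max(J_e(u−1) + L − D_u, 0), as reconstructed from the slide's annotations, is easy to simulate; the sketch below (the function name `info_debt` is mine) tracks the infinite-memory debt for a given sequence of received-packet counts D_u.

```python
def info_debt(D, L):
    """Infinite-memory information debt: J_e(0) = 0 and
    J_e(u) = max(J_e(u-1) + L - D_u, 0).  Returns [J_e(1), ..., J_e(n)]."""
    J, path = 0, []
    for d in D:
        J = max(J + L - d, 0)   # grows when L > D_u, shrinks when L < D_u
        path.append(J)
    return path

# Whenever the path hits 0, every slot back to the previous zero-hit
# becomes decodable (decoded backwards).
path = info_debt([0, 2, 1, 3, 0, 1], L=1)   # -> [1, 0, 0, 0, 1, 1]
```

The zero-hits of this path are exactly the renewal instants used later in the exact error rate analysis.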

SLIDE 17

Information-Debt Under Infinite Memory

 Mutual information debt under infinite memory (β = ∞) [Martinian 2004]

Definition. Initialize J_e(0) = 0. For any u ≥ 1, iteratively compute
  J_e(u) = max( J_e(u−1) + L − D_u, 0 ).

Observation: whenever J_e(u) hits 0, we can decode t(u) backwards.

Q: What if the cumulative receiver matrix I^(u+Δ) is NOT full triangular, e.g., a 4-by-4 matrix that is not full rank?

SLIDE 18

Information-Debt Under Finite Memory

 New information-debt definition under finite memory β < ∞

Definition 2. Define the ceiling constant βL and initialize J_e(0) = 0. For any u ≥ 1, iteratively compute the same debt recursion, capped at βL.
   Because the memory length is β, the maximum allowable debt one can carry forward is at most βL.
   The cap βL is an absolute "ceiling"; exceeding it is "bankruptcy".

(Figure: sample path showing the maximum allowable debt βL and a bankruptcy event.)
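A sketch of the finite-memory variant as I read Definition 2: the same recursion, but the debt carried forward is capped at βL, and crossing the cap is recorded as a bankruptcy. Whether the paper clamps or resets the debt at bankruptcy is a detail I have not verified; this sketch clamps.

```python
def finite_memory_debt(D, L, beta):
    """Finite-memory information debt (sketch): same recursion as the
    infinite-memory debt, clamped at the ceiling beta*L.  Returns the
    debt path and the slots at which bankruptcy occurred."""
    cap = beta * L
    J, path, bankruptcies = 0, [], []
    for u, d in enumerate(D, start=1):
        J = max(J + L - d, 0)
        if J > cap:                 # debt exceeds what beta slots can carry
            J = cap                 # clamp (assumption; see lead-in)
            bankruptcies.append(u)
        path.append(J)
    return path, bankruptcies

path, bk = finite_memory_debt([0, 0, 0, 0, 3], L=1, beta=2)
```

In this toy run the cap is βL = 2: the debt saturates at slots 3 and 4 (two bankruptcies) before a burst of receptions pulls it back to 0.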

SLIDE 19

Define u_j and υ_k as the j-th time that J_e(u) hits 0 and the k-th time that it goes bankrupt (hits the ceiling βL), respectively.

Decodability Events

Proposition 1. For any fixed j0 ≥ 0:
a) If no υ_k ∈ (u_{j0}, u_{j0+1}), then t(u) is decodable by time u_{j0+1} for all u ∈ (u_{j0}, u_{j0+1}].
b) If there exists υ_k ∈ (u_{j0}, u_{j0+1}), define υ_{k*} as the one with the largest k. Then t(u) is decodable by time u_{j0+1} for all u ∈ (υ_{k*} − β, u_{j0+1}].

(Figure: sample path with L = 1, β = 3, marking the slots decodable by time u_{j0+1}.)
SLIDE 20

Define u_j and υ_k as the j-th time that J_e(u) hits 0 and the k-th time that it goes bankrupt (hits the ceiling βL), respectively.

Error Events

Proposition 2. None of t(u), u ∈ (u_{j0}, υ_{k*} − β], is decodable by time υ_{k*} − β + Δ, regardless of how large we set the deadline Δ.

(Figure: the same sample path with L = 1, β = 3; the slots before υ_{k*} − β are marked as errors.)
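Propositions 1 and 2 classify slots from the zero-hit times u_j and bankruptcy times υ_k alone. The helper below (my naming, a sketch under the finite-memory debt semantics above) applies that classification, marking each slot in a renewal interval decodable or not.

```python
def classify_slots(zero_times, bankrupt_times, beta):
    """Sketch of Propositions 1-2.  For each interval (u_{j0}, u_{j0+1}]
    between consecutive zero-hits (with u_0 = 0): if no bankruptcy falls
    strictly inside, every slot is decodable (Prop 1a); otherwise, with
    v* the last bankruptcy inside, slots in (u_{j0}, v* - beta] are never
    decodable (Prop 2) and slots in (v* - beta, u_{j0+1}] are (Prop 1b)."""
    status = {}
    prev = 0
    for nxt in zero_times:
        inside = [v for v in bankrupt_times if prev < v < nxt]
        if not inside:                       # Prop 1a
            for u in range(prev + 1, nxt + 1):
                status[u] = True
        else:                                # Prop 1b / Prop 2
            v_star = max(inside)
            for u in range(prev + 1, nxt + 1):
                status[u] = u > v_star - beta
        prev = nxt
    return status

# One bankruptcy at slot 3 inside the first renewal interval (0, 5]:
s = classify_slots(zero_times=[5], bankrupt_times=[3], beta=1)
```

With β = 1 the cutoff is υ_{k*} − β = 2, so slots 1 and 2 are lost forever while slots 3 to 5 are decodable by the next zero-hit.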

SLIDE 21

Define u_j and υ_k as the j-th time that J_e(u) hits 0 and the k-th time that it goes bankrupt (hits the ceiling βL), respectively.

Intuition Behind

(Figure: the same sample path with L = 1, β = 3, marking decodable and error slots.)

 Enough linear equations accumulate by time u_{j0+1}.
 Start decoding from t(u_{j0+1}), t(u_{j0+1} − 1), ⋯, in a backward fashion.

SLIDE 22

Define u_j and υ_k as the j-th time that J_e(u) hits 0 and the k-th time that it goes bankrupt (hits the ceiling βL), respectively.

Intuition Behind

(Figure: the same sample path with L = 1, β = 3, marking decodable and error slots.)

 The coupling between {t(u) : u ≤ υ_{k*} − β} and {t(u) : u > υ_{k*} − β} is severed once J_e(u) hits the ceiling βL (bankruptcy).

SLIDE 23

Exact Error Rate Analysis

 D_u is i.i.d. ⟹ J_e(u) is a renewal Markov process.
 The state space: the debt levels {0, 1, …, βL} together with the bankruptcy state.

Lemma 2. Assuming the LC and MDS conditions, for any fixed j0 the long-term error rate q_f equals a renewal-reward ratio: the expected number of undecodable slots in one renewal cycle (written with the indicator function 1{·}) divided by the expected cycle length.

 A return of J_e(u) from state 0 to state 0 is a stopping time.
 The decodability events are not determined by a stopping time, which requires a more involved analysis.
 With J_e(u) being a renewal Markov process, the long-term average error rate can be computed cycle by cycle.

SLIDE 24

Exact Error Rate Analysis

 Denote by Γ the transition matrix of the information-debt chain.
 The formulas g_1(Γ), g_2(Γ), and g_3(Γ) can be found in the paper.
 The complexity may be too high for large β and L.

Proposition 3. Assume the LC and MDS conditions. The error rate q_f in Lemma 2 equals an explicit expression in g_1(Γ), g_2(Γ), and g_3(Γ), where g_1(·), g_2(·), and g_3(·) are matrix-based functions involving multiplication, summation, and inversion, with complexity O((βL)^3); the dimension of Γ is (βL + 2)-by-(βL + 2).
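A sketch of how Γ can be assembled from the Q_j. The state space here is the debt levels 0, …, βL plus one bankruptcy state, matching the (βL + 2)-by-(βL + 2) dimension in Proposition 3. How the chain leaves the bankruptcy state is a paper detail; this sketch simply lets it behave like the clamped debt βL so that the matrix stays stochastic (an assumption, flagged in the code).

```python
import numpy as np
from math import comb

def transition_matrix(Q, L, beta):
    """Build the (beta*L + 2) x (beta*L + 2) transition matrix of the
    finite-memory debt chain.  From debt s, receiving j packets (prob
    Q[j]) moves the debt to max(s + L - j, 0), or to the bankruptcy
    state if that value exceeds beta*L."""
    n = beta * L + 2
    B = n - 1                                  # bankruptcy state index
    G = np.zeros((n, n))
    for s in range(n):
        debt = beta * L if s == B else s       # assumption: B acts like beta*L
        for j, qj in enumerate(Q):
            nxt = max(debt + L - j, 0)
            G[s, B if nxt > beta * L else nxt] += qj
    return G

# Binomial Q_j as in the deck's numerical example (O = 5, q = 0.41).
q, O, L, beta = 0.41, 5, 2, 2
G = transition_matrix([comb(O, j) * q**j * (1 - q)**(O - j)
                       for j in range(O + 1)], L, beta)
```

Because the next state depends on s only through the increment L − j, all interior rows are shifted copies of one another, which is exactly the near-Toeplitz structure exploited on the next slides.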

SLIDE 25

Provably-Tight Closed-Form Error Rate Approximation

 Observation 1: Γ is almost a Toeplitz matrix.
 Observation 2: The three functions g_1(Γ), g_2(Γ), and g_3(Γ) can be solved via difference equations.

Goal: derive an approximation formula whose constants C_1, C_2, and C_3 depend only on L, O, and Q_j, not on β.

 Example: recall that Γ is the transition matrix. Since the transition probability depends only on L − D_u, Γ is near-Toeplitz.

SLIDE 26

Provably-Tight Closed-Form Error Rate Approximation

 Observation 1: Γ is almost a Toeplitz matrix.
 Observation 2: The three functions g_1(Γ), g_2(Γ), and g_3(Γ) can be solved via difference equations (with boundary conditions).

Goal: derive an approximation formula whose constants C_1, C_2, and C_3 depend only on L, O, and Q_j, not on β.

 Example: recall that Γ is the transition matrix. Since the transition probability depends only on L − D_u, Γ is near-Toeplitz.

SLIDE 27

Provably-Tight Closed-Form Error Rate Approximation

 Observation 1: Γ is almost a Toeplitz matrix.
 Observation 2: The three functions g_1(Γ), g_2(Γ), and g_3(Γ) can be solved via difference equations.
 The closed-form expressions for C_1 and C_3 are in the paper; the C_2 expression is in our journal version under preparation.

Goal: derive an approximation formula whose constants C_1, C_2, and C_3 depend only on L, O, and Q_j, not on β.

 Example: recall that Γ is the transition matrix. Since the transition probability depends only on L − D_u, Γ is near-Toeplitz.

SLIDE 28

Outline

 Motivation and Related Work
 System Model
 Main Contributions
   Information-Debt Under Finite Memory
   Exact Error Rate Analysis
   A Provably-Tight Closed-Form Error Rate Approximation
 Numerical Verification and Conclusion

SLIDE 29

Numerical Verification

 Let L = 2, O = 5.
 D_u follows a binomial distribution with q = L/O + 0.01 = 0.41, i.e., Q_j = C(5, j) q^j (1 − q)^{5−j}.
 Using our formulas in the paper, we compare:
   Monte Carlo simulation (ground truth)
   Exact error rate
   Approximation 1
   Approximation 2

(Figure: the four error-rate curves.)
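The Q_j in this example can be checked directly. In the sketch below I read q as the per-packet success probability, chosen just above L/O so that the less-than-capacity condition E[D_u] > L holds; that reading of the deck's setup is an assumption, flagged in the comments.

```python
from math import comb

L, O = 2, 5
q = L / O + 0.01                    # = 0.41; read as per-packet success prob
Q = [comb(O, j) * q**j * (1 - q)**(O - j) for j in range(O + 1)]

mean_D = sum(j * Qj for j, Qj in enumerate(Q))   # E[D_u] = O*q
assert abs(sum(Q) - 1) < 1e-12      # Q_j is a probability distribution
assert mean_D > L                   # the LC condition holds (2.05 > 2)
```

The same check with q = L/O + 0.05 = 0.45 reproduces the setup of the next slide, where the LC margin is larger.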

SLIDE 30

Numerical Verification

 Let L = 2, O = 5.
 D_u follows a binomial distribution with q = L/O + 0.05 = 0.45, i.e., Q_j = C(5, j) q^j (1 − q)^{5−j}.
 Using our formulas in the paper, we compare:
   Monte Carlo simulation (ground truth)
   Exact error rate
   Approximation 1
   Approximation 2

(Figure: the four error-rate curves.)

SLIDE 31

Conclusion

 A new information-debt definition that describes the decodability events of RLSCs with finite memory length.
 A transition-matrix-based procedure that computes the exact error rate.
 A closed-form approximation of the error rate that is provably tight for large memory lengths.

SLIDE 32

Questions are welcome. Pin-Wen Su su173@purdue.edu

SLIDE 33

References I

[1] H. Ji, S. Park, J. Yeo, Y. Kim, J. Lee, and B. Shim, "Ultra-reliable and low-latency communications in 5G downlink: Physical layer aspects," IEEE Wireless Communications, vol. 25, no. 3, pp. 124–130, Jun. 2018.
[2] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Transactions on Information Theory, vol. 13, no. 2, pp. 260–269, Apr. 1967.
[3] R. G. Gallager, Information Theory and Reliable Communication. Springer, 1968, vol. 2.
[4] N. Merhav, "List decoding—random coding exponents and expurgated exponents," IEEE Transactions on Information Theory, vol. 60, no. 11, pp. 6749–6759, Nov. 2014.
[5] P. S. Rybin and F. I. Ivanov, "On estimation of the error exponent for finite length regular graph-based LDPC codes," Journal of Communications Technology and Electronics, vol. 63, no. 12, pp. 1518–1523, 2018.
[6] Y. Polyanskiy, H. V. Poor, and S. Verdu, "Channel coding rate in the finite blocklength regime," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, May 2010.
[7] E. Martinian, "Dynamic information and constraints in source and channel coding," Ph.D. dissertation, Massachusetts Institute of Technology, 2004.
[8] E. Martinian and C.-E. W. Sundberg, "Burst erasure correction codes with low decoding delay," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2494–2502, Oct. 2004.
[9] A. Khisti and J. P. Singh, "On multicasting with streaming burst-erasure codes," in 2009 IEEE International Symposium on Information Theory, Jun. 2009, pp. 2887–2891.
[10] A. Badr, A. Khisti, and E. Martinian, "Diversity embedded streaming erasure codes (DE-SCo): Constructions and optimality," IEEE Journal on Selected Areas in Communications, vol. 29, no. 5, pp. 1042–1054, May 2011.
SLIDE 34

References II

[11] A. Badr, A. Khisti, W. Tan, and J. Apostolopoulos, "Streaming codes with partial recovery over channels with burst and isolated erasures," IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 3, pp. 501–516, Apr. 2015.
[12] A. Badr, P. Patil, A. Khisti, W. Tan, and J. Apostolopoulos, "Layered constructions for low-delay streaming codes," IEEE Transactions on Information Theory, vol. 63, no. 1, pp. 111–141, Jan. 2017.
[13] M. Rudow and K. V. Rashmi, "Streaming codes for variable-size arrivals," in 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Oct. 2018, pp. 733–740.
[14] M. N. Krishnan and P. V. Kumar, "Rate-optimal streaming codes for channels with burst and isolated erasures," in 2018 IEEE International Symposium on Information Theory (ISIT), Jun. 2018, pp. 1809–1813.
[15] S. L. Fong, A. Khisti, B. Li, W. Tan, X. Zhu, and J. Apostolopoulos, "Optimal streaming codes for channels with burst and arbitrary erasures," IEEE Transactions on Information Theory, vol. 65, no. 7, pp. 4274–4292, Jul. 2019.
[16] S. C. Draper and A. Khisti, "Truncated tree codes for streaming data: Infinite-memory reliability using finite memory," in 2011 8th International Symposium on Wireless Communication Systems, Nov. 2011, pp. 136–140.
[17] T. Ho, M. Medard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4413–4430, Oct. 2006.
[18] R. Durrett, Essentials of Stochastic Processes. Springer, 1999.