Lecture 04: Reliable Communication


  1. Principle of Communications, Fall 2017. Lecture 04: Reliable Communication. I-Hsiang Wang (ihwang@ntu.edu.tw), National Taiwan University, 2017/10/25,26.

  2. Channel Coding
  [Block diagram: information bits {b_i} → ECC Encoder → coded bits {c_i} → Symbol Mapper → discrete sequence {u_m} → Pulse Shaper → baseband waveform x_b(t) → Up Converter → passband waveform x(t) → Noisy Channel → y(t) → Down Converter → y_b(t) → Filter + Sampler + Detection → {û_m} → Symbol Demapper → {ĉ_i} → ECC Decoder → {b̂_i}. The binary interface separates the ECC encoder/decoder from the modulation chain.]
  Previous lectures: focusing on digital modulation, we can ensure that the coded bits {c_i} are reconstructed optimally (i.e., with minimum average probability of error) at the receiver.

  3. The average symbol probability of error decays exponentially with SNR: P_e ≐ exp(−c·SNR). For each symbol, P_e = 10⁻³ is already pretty good! However, this is not good enough:
  • Consider a file mapped and converted into n = 250 symbols. The file cannot be reconstructed if even one symbol is wrong.
  • The "file" probability of error is 1 − (1 − P_e)^n ≈ n·P_e = 250/1000 = 0.25. Pretty bad!
  • But we cannot do much here, because noise is inevitable, and modulation only operates at the symbol level, not at the file level.
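  As a quick sanity check on the arithmetic above, the short Python snippet below (my addition, not part of the slides) compares the exact file error probability 1 − (1 − P_e)^n with the first-order approximation n·P_e quoted on the slide; the variable names are illustrative.

  ```python
  # Sketch: file-level error probability for n symbols, each independently
  # wrong with probability Pe. Numbers chosen to match the slide.
  Pe = 1e-3          # per-symbol error probability
  n = 250            # symbols per file

  exact = 1 - (1 - Pe) ** n   # exact file error probability (~0.221)
  approx = n * Pe             # first-order approximation used on the slide (0.25)

  print(f"exact  = {exact:.4f}")
  print(f"approx = {approx:.4f}")
  ```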

  4. Channel Coding
  [Same block diagram as slide 2: the ECC encoder/decoder wraps around the digital modulation chain through the binary interface.]
  This lecture: Reliable Communication! We introduce error-correction coding to add redundancy to the original file. ⟹ We are able to make the overall "file" probability of error arbitrarily small! Prices to pay: data rate and energy.

  5. Two receiver architectures. In both, b ≜ [b_1 b_2 ... b_k] → ECC Encoder (rate R = k/n) → c ≜ [c_1 c_2 ... c_n] → Digital Modulator → u ≜ [u_1 u_2 ... u_ñ], with ñ = n/ℓ and ℓ bits per symbol, sent over the equivalent discrete-time complex baseband channel V = u + Z.
  • Soft decision: jointly consider detection and decoding; the ECC decoder works directly on the demodulated symbols V.
  • Hard decision: consider decoding only; detection/demapping first produces detected bit sequences, and the ECC decoder works on those bits.
  We focus on soft decision first!
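  To make the soft/hard distinction concrete, here is a small illustrative sketch (my own, not from the slides), using BPSK with an N-fold repetition code (introduced in Part I below) for a single information bit: the soft-decision receiver sums the demodulated samples, while the hard-decision receiver detects each sample first and takes a majority vote. The amplitude d, noise level, and variable names are assumptions for illustration only.

  ```python
  import numpy as np

  # Sketch: soft vs hard decision for one bit protected by N-fold repetition,
  # BPSK amplitude d, real Gaussian noise (assumed parameters).
  rng = np.random.default_rng(0)
  N, d, sigma = 5, 1.0, 1.2

  b = 1                                    # information bit
  u = d * (2 * b - 1) * np.ones(N)         # BPSK symbols of the repeated bit
  V = u + sigma * rng.standard_normal(N)   # demodulated (soft) observations

  # Soft decision: the decoder sees V directly and sums the evidence.
  b_soft = int(V.sum() > 0)

  # Hard decision: each symbol is detected first, then a majority vote decodes.
  bits_hard = (V > 0).astype(int)
  b_hard = int(bits_hard.sum() > N / 2)

  print(b_soft, b_hard)
  ```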

  6. Outline
  • Prelude: repetition coding
  • Energy-efficient reliable communication: orthogonal code
  • Rate-efficient reliable communication: linear block code
  • Convolutional code

  7. Part I. Prelude: Repetition Coding. Repetition code; rate and energy efficiency.

  8. Repetition: a simple way to enhance reliability
  • Idea: repeat each bit N times ⟹ data rate R = 1/N.
  • There are many ways to order the repetitions of an original bit sequence b_1 b_2 b_3 b_4 b_5 (here N = 2):
    coded bit seq. 1: b_1 b_1 b_2 b_2 b_3 b_3 b_4 b_4 b_5 b_5 (each bit repeated N times in a row)
    coded bit seq. 2: b_1 b_2 b_3 b_4 b_5 b_1 b_2 b_3 b_4 b_5 (the whole block repeated N times)
  • We focus on the following architecture: each block of ℓ bits (ℓ = number of bits per modulation symbol) is repeated N times, i.e., c = [b_1∼b_ℓ, b_1∼b_ℓ, …, b_1∼b_ℓ, b_{ℓ+1}∼b_{2ℓ}, …], so the coded length is n = kN; the digital modulator sends ñ = n/ℓ symbols over the equivalent discrete-time complex baseband channel, and the detection + decoder block works on V = u + Z.
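  A minimal sketch of the two repetition orderings shown above (the function names `repeat_per_bit` and `repeat_per_block` are hypothetical helpers, not from the lecture):

  ```python
  import numpy as np

  def repeat_per_bit(bits, N):
      # coded bit seq. 1: b1 b1 ... b2 b2 ... (each bit repeated N times in a row)
      return np.repeat(bits, N)

  def repeat_per_block(bits, N):
      # coded bit seq. 2: b1 b2 ... bk b1 b2 ... bk (whole block repeated N times)
      return np.tile(bits, N)

  bits = np.array([1, 0, 1, 1, 0])
  print(repeat_per_bit(bits, 2))    # [1 1 0 0 1 1 1 1 0 0]
  print(repeat_per_block(bits, 2))  # [1 0 1 1 0 1 0 1 1 0]
  ```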

  9. Repeating N times gives c = [b_1∼b_ℓ, b_1∼b_ℓ, …, b_1∼b_ℓ, b_{ℓ+1}∼b_{2ℓ}, …]. Each repeated block of ℓ bits is modulated into one symbol, so the N copies of b_1∼b_ℓ produce the equivalent vector symbol u ≜ [u_1 u_2 ⋯ u_N] ∈ C^N. Since the noises are i.i.d., it suffices to use the N-dimensional demodulated vector V = u + Z to optimally decode b_1∼b_ℓ.

  10. BPSK + repetition coding
  Equivalent channel model: V = u + Z ∈ C^N, with Z_1, …, Z_N i.i.d. ∼ CN(0, N_0).
  Equivalent constellation set: u ∈ {a_0, a_1}, where a_1 = +[d d ⋯ d] and a_0 = −[d d ⋯ d].
  Performance analysis, with SNR ≜ d²/N_0 (average energy per uncoded symbol over total noise variance per symbol):
  P_e^(N) = Q( ‖a_1 − a_0‖ / (2·√(N_0/2)) ) = Q( √( 4N·d² / (2·N_0) ) ) = Q( √( 2N·d² / N_0 ) ) = Q( √(2N·SNR) ) ≐ exp(−N·SNR)
  Repetition effectively increases the SNR by a factor of N!
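  The following short computation (my addition) evaluates P_e^(N) = Q(√(2N·SNR)) for a few values of N, illustrating the N-fold effective SNR gain claimed above; the chosen SNR value is arbitrary.

  ```python
  from math import erfc, sqrt

  # Q-function via the complementary error function: Q(x) = 0.5*erfc(x/sqrt(2)).
  def Q(x):
      return 0.5 * erfc(x / sqrt(2))

  snr = 1.0   # SNR = d^2 / N0 (arbitrary illustrative value)
  for N in (1, 2, 4, 8):
      print(N, Q(sqrt(2 * N * snr)))   # error probability shrinks rapidly with N
  ```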

  11. Rate and energy efficiency
  Rate: R = 1/N → 0 as N → ∞.
  Energy per bit: E_b = N·d² → ∞ as N → ∞.
  So repetition achieves arbitrarily small probability of error at the price of zero rate and infinite energy per bit.
  Question: can we resolve the issue with more general constellation sets?

  12. General modulation + repetition coding
  Equivalent channel model: V = u + Z ∈ C^N, with Z_1, …, Z_N i.i.d. ∼ CN(0, N_0).
  Equivalent constellation set: u ∈ {a_1, …, a_M}, M = 2^ℓ.
  Rate: R = ℓ/N → 0 as N → ∞. Energy per bit: E_b/N_0 = N·SNR/ℓ = SNR/R → ∞ as N → ∞.
  Probability of error (taking M-ary PAM as an example):
  P_e^(N) = 2·(1 − 2^(−ℓ))·Q( √( 6N·SNR / (4^ℓ − 1) ) ) = 2·(1 − 2^(−NR))·Q( √( 6N·SNR / (4^(NR) − 1) ) )
  lim_{N→∞} P_e^(N) = 0 ⟺ lim_{N→∞} (4^(NR) − 1)/N = 0, so it is necessary that lim_{N→∞} R = 0.
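  To see numerically why a fixed positive rate fails, the sketch below (not from the slides) evaluates the M-ary PAM expression above with R held constant: since 4^(NR) grows much faster than N, the Q-function argument collapses and P_e^(N) does not vanish (it grows toward 1). The chosen SNR and R are arbitrary.

  ```python
  from math import erfc, sqrt

  def Q(x):
      return 0.5 * erfc(x / sqrt(2))

  snr, R = 10.0, 0.5          # fixed rate R > 0 (illustrative values)
  for N in (2, 4, 8, 16):     # N chosen so that ell = N*R is an integer
      ell = N * R
      Pe = 2 * (1 - 2 ** (-ell)) * Q(sqrt(6 * N * snr / (4 ** ell - 1)))
      print(N, Pe)            # error probability increases with N
  ```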

  13. Why repetition coding is not very good
  • Repetition coding: high reliability at the price of asymptotically zero rate and infinite energy per bit.
  • Repetition is too naive and does not utilize the available degrees of freedom in the N-dimensional space efficiently.
  • Is it possible to design better coding schemes with the following?
  ‣ Vanishing probability of error
  ‣ Positive rate
  ‣ Finite energy per bit
  YES!

  14. Part II. Energy-Efficient Reliable Communication. Orthogonal code; optimal energy efficiency.

  15. Orthogonal coding (here we jointly consider coding and modulation: b ≜ [b_1 b_2 ... b_k] → Encoder + Modulation → u ≜ [u_1 u_2 ... u_ñ] → equivalent discrete-time complex baseband channel → V = u + Z → Detection + Decoder → b̂)
  • With N dimensions (N time slots), use N equal-norm orthogonal vectors to encode log_2 N bits.
  • Since the noises are i.i.d. circularly symmetric complex Gaussian, we can WLOG assume that these N vectors are simply scaled standard unit vectors: {d·e_i | i = 1, …, N}, where e_i(j) = 1{i = j}.

  16. Example: N = 8
  info. bits | symbol vector
  000 | [d 0 0 0 0 0 0 0]
  001 | [0 d 0 0 0 0 0 0]
  010 | [0 0 d 0 0 0 0 0]
  011 | [0 0 0 d 0 0 0 0]
  100 | [0 0 0 0 d 0 0 0]
  101 | [0 0 0 0 0 d 0 0]
  110 | [0 0 0 0 0 0 d 0]
  111 | [0 0 0 0 0 0 0 d]
  This is equivalent to encoding messages using the location of a pulse ⟹ Pulse Position Modulation (PPM).
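  The mapping in the table is easy to state in code. The sketch below (my own illustration, with hypothetical helper names) encodes log₂N information bits into the pulse position and decodes by picking the largest received coordinate, a natural detector under the i.i.d. Gaussian model of the next slide.

  ```python
  import numpy as np

  def ppm_encode(bits, d=1.0):
      # The log2(N) information bits select which of the N positions carries the pulse.
      N = 2 ** len(bits)
      position = int("".join(map(str, bits)), 2)   # e.g. (0,1,1) -> 3
      u = np.zeros(N)
      u[position] = d
      return u

  def ppm_decode(V):
      # Pick the position with the largest (real part of the) received sample.
      return int(np.argmax(V.real))

  print(ppm_encode([0, 1, 1]))   # [0. 0. 0. 1. 0. 0. 0. 0.]
  ```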

  17. Performance analysis of orthogonal coding
  Equivalent channel model: V = u + Z ∈ C^N, with Z_1, …, Z_N i.i.d. ∼ CN(0, N_0).
  Equivalent constellation set: u ∈ {d·e_i | i = 1, …, N}.
  Rate: R = (log_2 N)/N → 0 as N → ∞. Energy per bit: E_b = d²/log_2 N. Finite energy per bit suffices: E_b > (2 ln 2)·N_0.
  Probability of error (union bound, with d_min = √2·d):
  P_e^(N) ≤ (N − 1)·Q( d_min / (2·√(N_0/2)) ) = (N − 1)·Q( √(d²/N_0) ) ≤ N·Q( √( E_b·log_2 N / N_0 ) ) ≤ (1/2)·exp( −ln N·( E_b/((2 ln 2)·N_0) − 1 ) ) → 0 as N → ∞, as long as E_b/N_0 > 2 ln 2.
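  As a numeric illustration (my addition) of the union bound above: for any E_b/N_0 above 2·ln 2 ≈ 1.386, the bound N·Q(√(E_b·log₂N/N_0)) shrinks as N grows. The value E_b/N_0 = 2 below is an arbitrary choice.

  ```python
  from math import erfc, sqrt, log2

  def Q(x):
      return 0.5 * erfc(x / sqrt(2))

  EbN0 = 2.0   # > 2*ln(2) ~ 1.386 (arbitrary illustrative value)
  for N in (8, 64, 512, 4096):
      bound = N * Q(sqrt(EbN0 * log2(N)))   # union bound on Pe^(N)
      print(N, bound)                       # decays as N grows
  ```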

  18. Minimum energy per bit
  • Does orthogonal coding achieve the minimum E_b/N_0?
  • Let us use Shannon's capacity formula to derive the minimum E_b/N_0 over all possible coding + modulation schemes:
  ‣ For the additive Gaussian noise channel with energy per channel use P, any achievable rate satisfies R < C ≜ log_2(1 + P/N_0) (bits per channel use).
  ‣ Energy per bit is E_b = P/R, hence R < log_2(1 + R·E_b/N_0).
  ‣ The minimum energy per bit at rate R follows: E_b/N_0 > E_b*(R)/N_0 ≜ (2^R − 1)/R.
  ‣ Taking the infimum over all R > 0: E_b*/N_0 ≜ inf_{R>0} E_b*(R)/N_0 = lim_{R↓0} (2^R − 1)/R = ln 2.
  • In fact, orthogonal codes can achieve any E_b/N_0 > ln 2!
  ‣ But the union bound fails; new techniques are required (see Gallager Ch. 8.5.3 for more details).
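  The limit in the derivation above is easy to check numerically. This sketch (my addition) evaluates (2^R − 1)/R for decreasing R and compares it with ln 2 ≈ 0.693.

  ```python
  from math import log

  # Minimum Eb/N0 at rate R: (2^R - 1)/R, approaching ln(2) as R -> 0.
  for R in (1.0, 0.5, 0.1, 0.01, 0.001):
      print(R, (2 ** R - 1) / R)
  print("ln 2 =", log(2))
  ```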

  19. Part III. Rate-Efficient Reliable Communication. Linear block code; existence of rate-efficient codes with vanishing error probability.

  20. Linear block code + BPSK modulation
  • Orthogonal codes achieve vanishing probability of error with zero rate but finite energy per bit (energy-efficient reliable communication).
  • Is it possible to achieve vanishing probability of error with positive rate and finite energy per bit (rate-efficient reliable communication)?
  • We focus on the following architecture: linear block code + BPSK modulation.
  ‣ It turns out this simple architecture can achieve rate-efficient reliable communication!
  [Chain: b ≜ [b_1 b_2 ... b_k] → Linear Block Code (ECC encoder, R = k/n) → c ≜ [c_1 c_2 ... c_n] → Binary PAM modulator (ℓ = 1, so ñ = n) → u ≜ [u_1 u_2 ... u_ñ] → equivalent discrete-time complex baseband channel → V = u + Z → Detection + Decoder → b̂. A small sketch of this chain follows below.]
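  To make the architecture concrete, here is a small sketch (my own; the slides do not specify a particular code) that encodes with the (7,4) Hamming code, a standard example of a linear block code with R = k/n = 4/7, and maps the coded bits to binary PAM with amplitude d. The generator matrix and the bit-to-symbol convention 0 → −d, 1 → +d are illustrative assumptions.

  ```python
  import numpy as np

  # (7,4) Hamming generator matrix in systematic form [I_4 | P] (illustrative choice).
  G = np.array([[1, 0, 0, 0, 1, 1, 0],
                [0, 1, 0, 0, 1, 0, 1],
                [0, 0, 1, 0, 0, 1, 1],
                [0, 0, 0, 1, 1, 1, 1]])

  def encode_and_modulate(b, d=1.0):
      c = (b @ G) % 2              # codeword c = b*G over GF(2), rate R = 4/7
      u = d * (2 * c - 1)          # binary PAM: bit 0 -> -d, bit 1 -> +d
      return c, u

  b = np.array([1, 0, 1, 1])
  c, u = encode_and_modulate(b)
  print(c)   # [1 0 1 1 0 1 0]
  print(u)
  ```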
