

1. Lecture 4: Channel Coding
I-Hsiang Wang, Department of Electrical Engineering, National Taiwan University
ihwang@ntu.edu.tw, October 15, 2014

2. Meta Description: The Channel Coding Problem
[Block diagram: w → Encoder → x^N → Noisy Channel → y^N → Channel Decoder → ŵ]
1 Message: a random message W ∼ Unif[1 : 2^K].
2 Channel: consists of an input alphabet X, an output alphabet Y, and a family of conditional distributions {p(y_k | x^k, y^{k−1})}_{k∈ℕ} determining the stochastic relationship between the output symbol y_k and the input symbol x_k, along with all past signals (x^{k−1}, y^{k−1}).
3 Encoder: encode the message w by a length-N codeword x^N ∈ X^N.
4 Decoder: reconstruct the message ŵ from the channel output y^N.
5 Efficiency: maximize the code rate R := K/N bits/channel use, given a certain decoding criterion.
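The five components above map directly onto code. Below is a minimal end-to-end sketch of the pipeline; the (3, 1) repetition code, the BSC(0.1) channel, and the Monte Carlo error estimate are illustrative choices of mine, not part of the slide.

```python
import random

K, N = 1, 3        # K message bits, N channel uses, so rate R = K/N = 1/3
p = 0.1            # BSC crossover probability (illustrative)

def enc(w):
    """Encoder: map the message w in {0, 1} to a length-N codeword."""
    return [w] * N

def channel(x):
    """Memoryless BSC: flip each input symbol independently w.p. p."""
    return [xi ^ (random.random() < p) for xi in x]

def dec(y):
    """Decoder: reconstruct the message by majority vote."""
    return int(sum(y) > N / 2)

# Monte Carlo estimate of the error probability P_e^(N)
trials, errors = 100_000, 0
for _ in range(trials):
    w = random.randint(0, 1)
    if dec(channel(enc(w))) != w:
        errors += 1
print(f"R = {K/N:.3f} bits/channel use, estimated P_e = {errors/trials:.4f}")
```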

3. Decoding Criterion: Vanishing Error Probability
[Block diagram: w → Encoder → x^N → Noisy Channel → y^N → Channel Decoder → ŵ]
A key performance measure: the error probability P_e^(N) := Pr{W ≠ Ŵ}.
Question: Is it possible to get zero error probability?
Ans: Probably not, unless the channel noise has some special structure.
Following the development of lossless source coding, Shannon turned his attention to answering the following question: Is it possible to have a sequence of encoder/decoder pairs such that P_e^(N) → 0 as N → ∞? If so, what is the largest possible rate R where vanishing error probability is possible?
Note: In lossless source coding, we saw that the infimum of compression rates where vanishing error probability is possible is H({S_i}).
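As a concrete illustration of the question (my own example, not from the slide): over a BSC(0.1), the length-N repetition code drives P_e^(N) → 0, but only at rate R = 1/N → 0. The sketch below computes its exact error probability from the binomial tail; this is why Shannon's question asks for the largest rate with vanishing error, not merely whether vanishing error is possible.

```python
from math import comb

p = 0.1  # BSC crossover probability (illustrative choice)

def pe_repetition(N):
    """Exact P_e^(N) of the length-N repetition code (N odd) under
    majority decoding: at least (N+1)/2 of the N flips must be errors."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N // 2 + 1, N + 1))

for N in [1, 3, 7, 15, 31]:
    print(f"N={N:2d}  R={1/N:.3f}  P_e={pe_repetition(N):.2e}")
# P_e -> 0, but the rate R = 1/N -> 0 as well; the channel coding
# theorem characterizes the largest rate that still allows P_e -> 0.
```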

4. Rate, Block Length, and Error Probability
[Figure: the trade-off among rate R, block length N, and probability of error P_e^(N), with regimes labeled capacity, error exponent, and finite length]
- Capacity: take N → ∞ and require P_e^(N) → 0; then sup R = C.
- Error exponent: take N → ∞ at a fixed rate R; then min P_e^(N) ≈ 2^{−N E(R)}.
- Finite length: fix N and require P_e^(N) ≤ ϵ; then sup R ≈ C − √(V/N) Q^{−1}(ϵ).
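To make the finite-length regime concrete, here is a hedged numerical sketch for a BSC(p). The dispersion formula V = p(1−p) (log₂((1−p)/p))² is quoted from the finite-blocklength literature (Polyanskiy-Poor-Verdú) as background, not from the slide; the parameter values are my own illustrative choices.

```python
from math import log2, sqrt
from statistics import NormalDist

p, eps = 0.11, 1e-3
C = 1 + p * log2(p) + (1 - p) * log2(1 - p)   # capacity C = 1 - H_b(p)
V = p * (1 - p) * log2((1 - p) / p) ** 2      # channel dispersion (bits^2)
Qinv = NormalDist().inv_cdf(1 - eps)          # Q^{-1}(eps)

# Normal approximation: the best rate at blocklength N and error eps
for N in [100, 1000, 10000]:
    print(f"N={N:5d}: sup R ~ {C - sqrt(V / N) * Qinv:.4f}  "
          f"(capacity C = {C:.4f})")
```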

5. In this course we only focus on capacity. In other words, we ignore the issue of delay and do not pursue finer analysis of the error probability via large deviation techniques.

6. Discrete Memoryless Channel (DMC)
In order to demonstrate the key ideas in channel coding, in this lecture we shall focus on discrete memoryless channels (DMC), defined below.
Definition 1 (Discrete Memoryless Channel). A discrete channel (X, {p(y_k | x^k, y^{k−1})}_{k∈ℕ}, Y) is memoryless if ∀ k ∈ ℕ, p(y_k | x^k, y^{k−1}) = p_{Y|X}(y_k | x_k). In other words, Y_k − X_k − (X^{k−1}, Y^{k−1}) forms a Markov chain. Here the conditional p.m.f. p_{Y|X} is called the channel law or channel transition function.
Question: Is our definition of a channel sufficient to specify p(y^N | x^N), the stochastic relationship between the channel input (codeword) x^N and the channel output y^N?
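Concretely, the channel law p_{Y|X} of a DMC can be stored as a row-stochastic |X| × |Y| matrix. A small sketch with two textbook channels, the BSC and the BEC (the specific parameter values are illustrative):

```python
import numpy as np

p, e = 0.1, 0.2

# BSC(p): X = {0,1}, Y = {0,1}; each symbol flipped w.p. p
BSC = np.array([[1 - p, p],
                [p, 1 - p]])

# BEC(e): X = {0,1}, Y = {0,'e',1}; each symbol erased w.p. e
BEC = np.array([[1 - e, e, 0.0],
                [0.0, e, 1 - e]])

for name, W in [("BSC", BSC), ("BEC", BEC)]:
    assert np.allclose(W.sum(axis=1), 1.0)  # each row is a pmf over Y
    print(name, "channel law is a valid row-stochastic matrix")
```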

7. By the chain rule,
p(x^N, y^N) = p(x^N) p(y^N | x^N), and
p(x^N, y^N) = ∏_{k=1}^N p(x_k, y_k | x^{k−1}, y^{k−1}) = ∏_{k=1}^N p(y_k | x^k, y^{k−1}) p(x_k | x^{k−1}, y^{k−1}).
Hence, we need to further specify {p(x_k | x^{k−1}, y^{k−1})}_{k∈ℕ}, which cannot be obtained from p(x^N).
Interpretation: {p(x_k | x^{k−1}, y^{k−1})}_{k∈ℕ} is induced by the encoding function, which implies that the encoder can potentially make use of the past channel output, i.e., feedback.
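To see the interpretation in action, here is a small enumeration of my own construction: over a BSC(0.1), a feedback encoder that sets x_2 = y_1 induces a conditional p(y^2 | x^2) that does not factor as the product of single-letter channel laws, confirming that the channel law alone does not determine p(y^N | x^N).

```python
from itertools import product

p = 0.1
W = {(x, y): (1 - p if x == y else p) for x, y in product([0, 1], repeat=2)}

# Feedback encoder (illustrative): x1 = 0 always, x2 = y1.
# Joint p(x^2, y^2) = W(y1|x1) * W(y2|x2) along the feasible trajectories.
joint = {}
for y1, y2 in product([0, 1], repeat=2):
    x1, x2 = 0, y1                      # x2 copies the observed output y1
    joint[(x1, x2, y1, y2)] = W[(x1, y1)] * W[(x2, y2)]

# Conditional p(y^2 | x^2 = (0,1)) vs. the memoryless product formula
px = sum(v for (x1, x2, *_), v in joint.items() if (x1, x2) == (0, 1))
for y1, y2 in product([0, 1], repeat=2):
    cond = joint.get((0, 1, y1, y2), 0.0) / px
    prod = W[(0, y1)] * W[(1, y2)]
    print(f"y^2=({y1},{y2}): p(y^2|x^2)={cond:.3f}  product={prod:.3f}")
# The two columns disagree: with feedback, observing x2 = 1 reveals y1 = 1.
```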

8. DMC without Feedback
[Figure: (a) no feedback, the encoder sees only w; (b) with feedback, the encoder also sees y^{k−1} through a delay element D]
Suppose that in the system the encoder has no knowledge about the realization of the channel output. Then p(x_k | x^{k−1}, y^{k−1}) = p(x_k | x^{k−1}) for all k ∈ ℕ, and the channel is said to have no feedback. In this case, specifying {p(y_k | x^k, y^{k−1})}_{k∈ℕ} suffices to specify p(y^N | x^N).
Proposition 1 (DMC without Feedback). For a DMC (X, p_{Y|X}, Y) without feedback,
p(y^N | x^N) = ∏_{k=1}^N p_{Y|X}(y_k | x_k).
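A complementary numerical sanity check of Proposition 1 (again my own sketch, with arbitrarily chosen input probabilities): when the input law never looks at past outputs, the conditional p(y^2 | x^2) computed from the full joint reduces to the memoryless product.

```python
from itertools import product

p = 0.1
W = {(x, y): (1 - p if x == y else p) for x, y in product([0, 1], repeat=2)}

# No feedback: the input law p(x1), p(x2|x1) never depends on y1.
# (The numbers are an arbitrary illustrative choice.)
px1 = {0: 0.7, 1: 0.3}
px2_given_x1 = {(0, 0): 0.6, (0, 1): 0.4, (1, 0): 0.2, (1, 1): 0.8}

# Build the full joint via the chain rule, then check Proposition 1.
for x1, x2 in product([0, 1], repeat=2):
    px = px1[x1] * px2_given_x1[(x1, x2)]
    for y1, y2 in product([0, 1], repeat=2):
        joint = px1[x1] * W[(x1, y1)] * px2_given_x1[(x1, x2)] * W[(x2, y2)]
        assert abs(joint / px - W[(x1, y1)] * W[(x2, y2)]) < 1e-12
print("Without feedback, p(y^2|x^2) = W(y1|x1) W(y2|x2) for all inputs")
```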

9. Overview
In this lecture, we would like to establish the following (informally described) noisy channel coding theorem due to Shannon:
For a DMC (X, p_{Y|X}, Y), the maximum code rate with vanishing error probability is the channel capacity C := max_{p_X(·)} I(X; Y).
The above holds regardless of the availability of feedback.
To demonstrate this beautiful result, we organize this lecture as follows:
1 Give the problem formulation, state the main theorem, and visit a couple of examples to show how to compute channel capacity.
2 Prove the converse part: an achievable rate cannot exceed C.
3 Prove the achievability part with a random coding argument.
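Before the formal development, the capacity formula can already be evaluated numerically. The sketch below (my own illustration) grid-searches max_{p(x)} I(X; Y) over Bernoulli(q) inputs for a BSC(0.1) and compares against the closed form C = 1 − H_b(p).

```python
import numpy as np

p = 0.1
W = np.array([[1 - p, p], [p, 1 - p]])    # channel law p_{Y|X}

def mutual_information(q):
    """I(X;Y) in bits for input X ~ Bernoulli(q) through channel W."""
    px = np.array([1 - q, q])
    py = px @ W                            # output marginal p(y)
    pxy = px[:, None] * W                  # joint p(x, y)
    return np.sum(pxy * np.log2(pxy / (px[:, None] * py)))

qs = np.linspace(0.001, 0.999, 999)
C = max(mutual_information(q) for q in qs)
Hb = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
print(f"grid-search C = {C:.4f}, closed form 1 - H_b(p) = {1 - Hb:.4f}")
# The maximum is attained at q = 1/2, the uniform input distribution.
```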

10. Outline
1 Channel Capacity and the Weak Converse
2 Achievability Proof
3 Summary

11. Channel Coding: Problem Setup
[Block diagram: w → Encoder → x^N → Noisy Channel → y^N → Channel Decoder → ŵ]
1 A (2^{NR}, N) channel code consists of
  - an encoding function (encoder) enc_N : [1 : 2^K] → X^N that maps each message w to a length-N codeword x^N, where K := ⌈NR⌉;
  - a decoding function (decoder) dec_N : Y^N → [1 : 2^K] ∪ {∗} that maps a channel output sequence y^N to a reconstructed message ŵ or an error message ∗.
2 The error probability is defined as P_e^(N) := Pr{W ≠ Ŵ}.
3 A rate R is said to be achievable if there exists a sequence of (2^{NR}, N) codes such that P_e^(N) → 0 as N → ∞.
The channel capacity is defined as C := sup{R : R achievable}.
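A toy instance of this definition (my own, not from the slide): K = 1, N = 4, R = 1/4, with the decoder returning the error message ∗ whenever the received word is not within Hamming distance 1 of a codeword.

```python
# A concrete (2^{NR}, N) code: two codewords at Hamming distance 4,
# decoded within radius 1, with '*' declared otherwise (illustrative).
codebook = {0: (0, 0, 0, 0), 1: (1, 1, 1, 1)}

def enc(w):
    """enc_N : [1 : 2^K] -> X^N"""
    return codebook[w]

def dec(y):
    """dec_N : Y^N -> [1 : 2^K] union {*}: codeword within distance 1."""
    for w, c in codebook.items():
        if sum(a != b for a, b in zip(y, c)) <= 1:
            return w
    return "*"                            # error message

for y in [(0, 0, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0)]:
    print(y, "->", dec(y))                # the last word decodes to '*'
```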

12. Channel Coding Theorem for Discrete Memoryless Channels
[Block diagram: w → Encoder → x^N → Noisy Channel → y^N → Channel Decoder → ŵ]
Theorem 1 (Channel Coding Theorem for DMC). The capacity of the DMC p(y|x) is given by C = max_{p(x)} I(X; Y), regardless of the availability of feedback.

13. Proof of the (Weak) Converse (1)
We would like to show that for every sequence of (2^{NR}, N) codes such that P_e^(N) → 0 as N → ∞, the rate satisfies R ≤ max_{p(x)} I(X; Y).
pf: Note that W ∼ Unif[1 : 2^K] and hence K = H(W).
NR ≤ H(W) = I(W; Ŵ) + H(W | Ŵ)      (1)
   ≤ I(W; Y^N) + 1 + P_e^(N) log(2^K + 1)      (2)
   ≤ ∑_{k=1}^N I(W; Y_k | Y^{k−1}) + 1 + P_e^(N) (NR + 2)      (3)
(1) is due to K = ⌈NR⌉ ≥ NR and the chain rule.
(2) is due to the Markov chain W − Y^N − Ŵ and Fano's inequality.
(3) is due to the chain rule and 2^K + 1 ≤ 2^{NR+1} + 1 ≤ 2 × 2^{NR+1}, so that log(2^K + 1) ≤ NR + 2.
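Step (2) invokes Fano's inequality in the form H(W | Ŵ) ≤ 1 + P_e^(N) log(2^K + 1), where 2^K + 1 is the size of the reconstruction alphabet including ∗. A quick numerical sanity check on a randomly generated joint pmf of (W, Ŵ) (my own example):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 2
# Joint pmf p(w, w_hat): W takes 2^K values, W_hat takes 2^K + 1
# (the last column plays the role of the error symbol '*').
P = rng.random((2**K, 2**K + 1))
P /= P.sum()

Pe = 1 - sum(P[w, w] for w in range(2**K))   # Pr{W != W_hat}
cond = P / P.sum(axis=0)                      # p(w | w_hat), columnwise
H_cond = -np.sum(P * np.log2(cond))           # H(W | W_hat)
bound = 1 + Pe * np.log2(2**K + 1)
print(f"H(W|W_hat) = {H_cond:.3f} <= {bound:.3f}  (Fano holds)")
```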

14. Proof of the (Weak) Converse (2)
Set ε_N := (1/N)(1 + P_e^(N)(NR + 2)); we see that ε_N → 0 as N → ∞ because lim_{N→∞} P_e^(N) = 0.
The next step is to relate ∑_{k=1}^N I(W; Y_k | Y^{k−1}) to I(X; Y), by the following manipulation:
I(W; Y_k | Y^{k−1}) ≤ I(W, Y^{k−1}; Y_k)      (4)
                    ≤ I(W, Y^{k−1}, X_k; Y_k) = I(X_k; Y_k)      (5)
                    ≤ max_{p(x)} I(X; Y)
(4) is due to the fact that conditioning reduces entropy. (5) is due to the DMC property: Y_k − X_k − (X^{k−1}, Y^{k−1}).
Hence, we have R ≤ max_{p(x)} I(X; Y) + ε_N for all N. Taking N → ∞, we conclude that R ≤ max_{p(x)} I(X; Y) if R is achievable.
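A tiny numeric illustration of why ε_N vanishes (the exponential decay profile for P_e^(N) is an assumption of mine, purely for illustration):

```python
# If P_e^(N) decays, say P_e^(N) = 2^{-N/100}, then
# eps_N = (1 + P_e^(N) (N R + 2)) / N -> 0, so the bound
# R <= max I(X;Y) + eps_N tightens to R <= max I(X;Y).
R = 0.5
for N in [10**2, 10**3, 10**4, 10**5]:
    Pe = 2 ** (-N / 100)
    eps = (1 + Pe * (N * R + 2)) / N
    print(f"N={N:6d}  eps_N={eps:.2e}")
```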

15. Outline
1 Channel Capacity and the Weak Converse
2 Achievability Proof
3 Summary

16. Outline
1 Channel Capacity and the Weak Converse
2 Achievability Proof
3 Summary
