

  1. Lecture 6: Channel Coding over Continuous Channels. I-Hsiang Wang, Department of Electrical Engineering, National Taiwan University (ihwang@ntu.edu.tw). November 2, 2015.

  2. We have investigated the measures of information for continuous r.v.'s: the amount of uncertainty (entropy) is mostly infinite; mutual information and KL divergence are well defined; and differential entropy is a useful quantity for computing and manipulating measures of information for continuous r.v.'s. Question: how about coding theorems? Is there a general way or framework to extend coding theorems from discrete (memoryless) sources/channels to continuous (memoryless) ones?

  3. Discrete memoryless channel ($w \to$ Encoder $\to x^N \to$ channel $p_{Y|X} \to y^N \to$ Decoder $\to \hat{w}$): $C(B) = \max_{X:\, \mathbb{E}[b(X)] \le B} I(X;Y)$. Continuous memoryless channel ($w \to$ Encoder $\to x^N \to$ channel $f_{Y|X} \to y^N \to$ Decoder $\to \hat{w}$): $C(B) \overset{?}{=} \sup_{X:\, \mathbb{E}[b(X)] \le B} I(X;Y)$.
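As a side note (not part of the slides), the maximization in the DMC formula can be computed numerically with the Blahut-Arimoto algorithm. Below is a minimal Python sketch, assuming an unconstrained (cost-free) DMC and numpy; it is an illustration under those assumptions, not the lecture's method:

```python
import numpy as np

def blahut_arimoto(W, iters=1000):
    """Capacity (bits/use) of a DMC with transition matrix W[x, y] = p(y|x),
    via Blahut-Arimoto alternating maximization of I(X; Y)."""
    def div(q):
        # D( W(.|x) || q ) in bits, for every input letter x
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        return np.sum(W * np.log2(ratio), axis=1)
    p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from the uniform input
    for _ in range(iters):
        d = div(p @ W)                          # p @ W is the output law p(y)
        p = p * np.exp2(d)                      # multiplicative BA update
        p /= p.sum()
    return float(p @ div(p @ W))                # I(X;Y) at the final input law

# Sanity check on a BSC(0.1): closed-form capacity is 1 - H_b(0.1)
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
Hb = -(eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps))
print(blahut_arimoto(W), 1 - Hb)                # both ≈ 0.5310 bits/use
```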

  4. Coding Theorems: from Discrete to Continuous (1). Two main techniques for extending the achievability part of coding theorems from the discrete world to the continuous world:
  - Discretization: discretize the source and channel input/output to create a discrete system, and then make the discretization finer and finer to prove the achievability.
  - New typicality: extend weak typicality to continuous r.v.'s and repeat the arguments in a similar way. In particular, replace the entropy terms in the definitions of weakly typical sequences by differential entropy terms.
  Using discretization to derive the achievability of the Gaussian channel capacity follows Gallager [2] and El Gamal & Kim [6]. Cover & Thomas [1] and Yeung [5] use weak typicality for continuous r.v.'s. Moser [4] uses a threshold decoder, similar to weak typicality.

  5. Coding Theorems: from Discrete to Continuous (2). In this lecture, we use discretization for the achievability proof. Pros: no need for new tools (e.g., typicality) for continuous r.v.'s, and the approach extends naturally to multi-terminal settings, so one can focus on discrete memoryless networks. Cons: technical, and it offers little insight into how to achieve capacity. Hence, we also use a geometric argument to provide insight into how to achieve capacity. Disclaimer: we will not be 100% rigorous in deriving the results in this lecture; rigorous treatments can be found in the references.

  6. Outline. First, we formulate the channel coding problem over continuous memoryless channels (CMC), state the coding theorem, and sketch the converse and achievability proofs. Second, we introduce the additive Gaussian noise (AGN) channel, derive the Gaussian channel capacity, and provide insights based on geometric arguments. Finally, we explore extensions, including parallel Gaussian channels, correlated Gaussian channels, and continuous-time bandlimited Gaussian channels.

  7. Section outline: Channel Coding over Continuous Memoryless Channels, covering two subtopics: the Continuous Memoryless Channel and the Gaussian Channel Capacity.


  9. Continuous Memoryless Channel. Block diagram: $w \to$ Encoder $\to x^N \to$ channel $f_{Y|X} \to y^N \to$ Decoder $\to \hat{w}$.
  - Input/output alphabet: $\mathcal{X} = \mathcal{Y} = \mathbb{R}$.
  - Continuous memoryless channel (CMC): the channel law is governed by the conditional density (p.d.f.) $f_{Y|X}$. Memoryless: $Y_k - X_k - (X^{k-1}, Y^{k-1})$ forms a Markov chain.
  - Average input cost constraint $B$: $\frac{1}{N}\sum_{k=1}^{N} b(x_k) \le B$, where $b: \mathbb{R} \to [0, \infty)$ is the (single-letter) cost function.
  The definitions of error probability, achievable rate, and capacity are the same as those in channel coding over a DMC.

  10. Channel Coding Theorem. Theorem 1 (Continuous Memoryless Channel Capacity): the capacity of the CMC $(\mathbb{R}, f_{Y|X}, \mathbb{R})$ with input cost constraint $B$ is
  $$C = \sup_{X:\, \mathbb{E}[b(X)] \le B} I(X;Y). \qquad (1)$$
  Note: the input distribution of the r.v. $X$ need not have a density; in other words, it could also be discrete. Converse proof: exactly the same as that in the DMC case. How to compute $h(Y|X)$ when $X$ has no density? Recall
  $$h(Y|X) = \mathbb{E}_X\left[ -\int_{\operatorname{supp} Y} f(y|X) \log f(y|X)\, dy \right],$$
  where $f(y|x)$ is the conditional density of $Y$ given $X$.
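To make this formula concrete, here is a hypothetical numeric check (Python/numpy assumed, anticipating the AWGN channel of the later slides): for a discrete BPSK input into additive Gaussian noise, $f(y|x)$ is a $\mathcal{N}(x, \sigma^2)$ density, so the inner integral equals $h(Z)$ for every $x$, and $h(Y|X) = h(Z)$ even though $X$ has no density:

```python
import numpy as np

# Hypothetical check of the h(Y|X) formula when X has no density: BPSK input
# X in {-1, +1} over additive Gaussian noise, where f(y|x) is a N(x, sigma^2)
# density. For each x the inner integral equals h(Z) = 0.5*ln(2*pi*e*sigma^2),
# so h(Y|X) = h(Z) even though X is discrete.
sigma = 0.7
xs, px = [-1.0, 1.0], [0.5, 0.5]              # discrete input distribution

y = np.linspace(-10.0, 10.0, 200_001)         # quadrature grid for the integral
dy = y[1] - y[0]

def inner(x):
    f = np.exp(-(y - x)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return -np.sum(f * np.log(f)) * dy        # -∫ f(y|x) log f(y|x) dy (nats)

h_Y_given_X = sum(p * inner(x) for x, p in zip(xs, px))
print(h_Y_given_X)                                 # ≈ 1.0622 nats
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))   # h(Z), the same value
```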

  11. Sketch of the Achievability (1): Discretization. The proof of achievability makes use of discretization, so that one can apply the result for the DMC with input cost. Block diagram: $w \to$ ENC $\to x^N \to f_{Y|X} \to y^N \to$ DEC $\to \hat{w}$.

  12. Sketch of the Achievability (1): Discretization (continued). Insert discretizers around the channel: $w \to$ ENC $\to Q_{\mathrm{in}} \to f_{Y|X} \to Q_{\mathrm{out}} \to$ DEC $\to \hat{w}$.
  - $Q_{\mathrm{in}}$: (single-letter) discretization that maps $X \in \mathbb{R}$ to $X_d \in \mathcal{X}_d$.
  - $Q_{\mathrm{out}}$: (single-letter) discretization that maps $Y \in \mathbb{R}$ to $Y_d \in \mathcal{Y}_d$.
  Note that both $\mathcal{X}_d$ and $\mathcal{Y}_d$ are discrete (countable) alphabets.

  13. Sketch of the Achievability (1): Discretization (continued). Idea: with the two discretization blocks $Q_{\mathrm{in}}$ and $Q_{\mathrm{out}}$ in place, one can build an equivalent DMC $(\mathcal{X}_d, p_{Y_d | X_d}, \mathcal{Y}_d)$ as shown above: the new encoder is the original ENC followed by $Q_{\mathrm{in}}$, producing discrete inputs $X_d$, and the cascade from $X_d$ through the channel $f_{Y|X}$ and $Q_{\mathrm{out}}$ defines the transition law $p_{Y_d | X_d}$.

  14. Sketch of the Achievability (2): Arguments. Block diagram: $w \to$ new ENC $\to x_d^N \to$ equivalent DMC $p_{Y_d | X_d} \to y_d^N \to$ DEC $\to \hat{w}$.
  1) Random codebook generation: generate the codebook randomly based on the original (continuous) r.v. $X$, satisfying $\mathbb{E}[b(X)] \le B$.
  2) Choice of discretization: choose $Q_{\mathrm{in}}$ such that the cost constraint is not violated after discretization; specifically, $\mathbb{E}[b(X_d)] \le B$.
  3) Achievability in the equivalent DMC: by the achievability part of the channel coding theorem for the DMC with input constraint, any rate $R < I(X_d; Y_d)$ is achievable.
  4) Achievability in the original CMC: prove that as the discretization in $Q_{\mathrm{in}}$ and $Q_{\mathrm{out}}$ gets finer and finer, $I(X_d; Y_d) \to I(X; Y)$; see the numerical sketch below.
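A hypothetical numerical sketch of step 4 (Python with numpy/scipy assumed; the channel is the AWGN channel with Gaussian input from the slides that follow, and representing each input bin by its center is a simplification of the exact induced DMC):

```python
import numpy as np
from scipy.stats import norm

# Illustration of step 4 for the AWGN channel Y = X + Z with X ~ N(0, P)
# and Z ~ N(0, sigma2): quantize X and Y into uniform bins, treat the
# result as a DMC, and compute I(X_d; Y_d). As the bin width shrinks, it
# approaches I(X; Y) = 0.5*log2(1 + P/sigma2).
P, sigma2 = 1.0, 0.25

def quantized_mi(width, span=8.0):
    edges = np.arange(-span, span + width, width)     # uniform quantizer Q
    centers = (edges[:-1] + edges[1:]) / 2
    px = np.diff(norm.cdf(edges, scale=np.sqrt(P)))   # p(x_d): input bin mass
    # p(y_d | x_d): output bin mass, with each input bin represented by its
    # center (a simplification of the exact induced DMC)
    W = np.diff(norm.cdf(edges[None, :], loc=centers[:, None],
                         scale=np.sqrt(sigma2)), axis=1)
    pxy = px[:, None] * W                             # joint law of (X_d, Y_d)
    px_m, py_m = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / np.outer(px_m, py_m)[mask]))

for width in [2.0, 1.0, 0.5, 0.25, 0.1]:
    print(width, quantized_mi(width))                 # approaches the limit
print("I(X;Y) =", 0.5 * np.log2(1 + P / sigma2))      # limit ≈ 1.1610 bits
```

The printed values climb toward the closed-form limit as the bins shrink, matching the claim that the discretized mutual information converges to $I(X;Y)$.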

  15. Section outline (revisited): Channel Coding over Continuous Memoryless Channels. Having treated the continuous memoryless channel, we now turn to the Gaussian channel capacity.

  16. Additive White Gaussian Noise (AWGN) Channel. Block diagram: $w \to$ Encoder $\to x^N$; the channel adds noise $z^N$ to produce $y^N \to$ Decoder $\to \hat{w}$.
  - Input/output alphabet: $\mathcal{X} = \mathcal{Y} = \mathbb{R}$.
  - AWGN channel: the conditional p.d.f. $f_{Y|X}$ is given by $Y = X + Z$, where $Z \sim \mathcal{N}(0, \sigma^2)$ is independent of $X$. $\{Z_k\}$ form an i.i.d. (white) Gaussian random process with $Z_k \sim \mathcal{N}(0, \sigma^2)$ for all $k$. Memoryless: $Z_k \perp (W, X^{k-1}, Z^{k-1})$. Without feedback: $Z^N \perp X^N$.
  - Average input power constraint $P$: $\frac{1}{N}\sum_{k=1}^{N} |x_k|^2 \le P$.
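A quick simulation sketch of this model (hypothetical, Python/numpy; the sphere-scaling step is one simple way to meet the power constraint exactly, not something prescribed by the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, sigma2 = 10_000, 1.0, 0.25

# A codeword drawn i.i.d. N(0, P) satisfies (1/N) * sum |x_k|^2 <= P only on
# average; scaling onto the power sphere enforces the constraint exactly.
x = rng.normal(0.0, np.sqrt(P), size=N)
x *= np.sqrt(N * P) / np.linalg.norm(x)        # project onto the P-sphere

z = rng.normal(0.0, np.sqrt(sigma2), size=N)   # white Gaussian noise, indep. of x
y = x + z                                      # N memoryless channel uses

print(np.mean(x**2))                           # = P exactly after scaling
print(np.mean((y - x)**2), sigma2)             # empirical vs true noise variance
```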

  17. Channel Coding Theorem for the Gaussian Channel. Theorem 2 (Gaussian Channel Capacity): the capacity of the AWGN channel with input power constraint $P$ and noise variance $\sigma^2$ is given by
  $$C = \sup_{f(x):\, \mathbb{E}[|X|^2] \le P} I(X;Y) = \frac{1}{2}\log\left(1 + \frac{P}{\sigma^2}\right). \qquad (2)$$
  Note: for the AWGN channel, the supremum is actually attained by the Gaussian input distribution $f(x) = \frac{1}{\sqrt{2\pi P}}\, e^{-\frac{x^2}{2P}}$, i.e., $X \sim \mathcal{N}(0, P)$, as shown on the next slide.
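As a hypothetical sanity check of Theorem 2 (Python/numpy): with Gaussian input, $I(X;Y) = h(Y) - h(Z)$ gives the closed form directly, while a non-Gaussian input such as BPSK, evaluated by quadrature, stays strictly below it:

```python
import numpy as np

# Check of Theorem 2 at P = 1, sigma^2 = 0.25 (SNR = 4). With X ~ N(0, P),
# Y ~ N(0, P + sigma^2), so I(X;Y) = h(Y) - h(Z) = 0.5*log2(1 + P/sigma^2).
# A non-Gaussian input must do no better; here we evaluate BPSK (X = ±sqrt(P))
# by numerical quadrature.
P, sigma2 = 1.0, 0.25
C = 0.5 * np.log2(1 + P / sigma2)

y = np.linspace(-12.0, 12.0, 400_001)
dy = y[1] - y[0]
g = lambda m: np.exp(-(y - m)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

fY = 0.5 * g(np.sqrt(P)) + 0.5 * g(-np.sqrt(P))   # output density under BPSK
hY = -np.sum(fY * np.log2(fY)) * dy               # h(Y) in bits, by quadrature
hZ = 0.5 * np.log2(2 * np.pi * np.e * sigma2)     # h(Z) in bits

print("BPSK     I(X;Y):", hY - hZ)                # strictly below C
print("Gaussian I(X;Y):", C)                      # ≈ 1.1610 bits (capacity)
```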
