Chapter 9 Gaussian Channel


  1. Chapter 9 Gaussian Channel Peng-Hua Wang Graduate Inst. of Comm. Engineering National Taipei University

  2. Chapter Outline
Chap. 9 Gaussian Channel
9.1 Gaussian Channel: Definitions
9.2 Converse to the Coding Theorem for Gaussian Channels
9.3 Bandlimited Channels
9.4 Parallel Gaussian Channels
9.5 Channels with Colored Gaussian Noise
9.6 Gaussian Channels with Feedback

  3. 9.1 Gaussian Channel: Definitions

  4. Introduction
$$Y_i = X_i + Z_i, \qquad Z_i \sim \mathcal{N}(0, N)$$
■ $X_i$: input, $Y_i$: output, $Z_i$: noise. $Z_i$ is independent of $X_i$.
■ Without further constraint, the capacity of this channel may be infinite.
◆ If the noise variance $N$ is zero, the channel can transmit an arbitrary real number with no error.
◆ If the noise variance $N$ is nonzero, we can choose an infinite subset of inputs arbitrarily far apart, so that they are distinguishable at the output with arbitrarily small probability of error.

  5. Introduction
■ The most common limitation on the input is an energy or power constraint.
■ We assume an average power constraint. For any codeword $(x_1, x_2, \ldots, x_n)$ transmitted over the channel, we require that
$$\frac{1}{n}\sum_{i=1}^{n} x_i^2 \le P.$$

  6. Information Capacity
Definition 1 (Capacity) The information capacity of the Gaussian channel with power constraint $P$ is
$$C = \max_{f(x):\, E[X^2] \le P} I(X;Y).$$
We can calculate the information capacity as follows.
$$\begin{aligned} I(X;Y) &= h(Y) - h(Y \mid X) = h(Y) - h(X + Z \mid X) \\ &= h(Y) - h(Z \mid X) = h(Y) - h(Z) \\ &\le \tfrac{1}{2}\log 2\pi e (P + N) - \tfrac{1}{2}\log 2\pi e N = \tfrac{1}{2}\log\left(1 + \tfrac{P}{N}\right). \end{aligned}$$
Note that $E[Y^2] = E[(X+Z)^2] = E[X^2] + N \le P + N$, and the differential entropy of a Gaussian with variance $\sigma^2$ is $\tfrac{1}{2}\log 2\pi e \sigma^2$.

  7. Information Capacity
Therefore, the information capacity of the Gaussian channel is
$$C = \max_{E[X^2] \le P} I(X;Y) = \frac{1}{2}\log\left(1 + \frac{P}{N}\right),$$
and equality holds when $X \sim \mathcal{N}(0, P)$.
■ Next, we will show that this capacity is achievable.
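As a quick numerical check (not part of the original slides), the capacity formula can be evaluated directly. A minimal Python sketch; the function name and the example values P = 10, N = 1 are assumptions for illustration.

```python
import math

def gaussian_capacity(P, N):
    """Information capacity of the discrete-time Gaussian channel,
    C = (1/2) * log2(1 + P/N), in bits per transmission."""
    return 0.5 * math.log2(1 + P / N)

# Example: power constraint P = 10, noise variance N = 1
print(gaussian_capacity(10.0, 1.0))  # approximately 1.73 bits per transmission
```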

  8. Code for Gaussian Channel
Definition 2 ((M, n) code for Gaussian Channel) An $(M, n)$ code for the Gaussian channel with power constraint $P$ consists of the following:
1. An index set $\{1, 2, \ldots, M\}$.
2. An encoding function $x : \{1, 2, \ldots, M\} \to \mathcal{X}^n$, yielding codewords $x^n(1), x^n(2), \ldots, x^n(M)$, satisfying the power constraint
$$\frac{1}{n}\sum_{i=1}^{n} x_i^2(w) \le P, \qquad w = 1, 2, \ldots, M.$$
3. A decoding function $g : \mathcal{Y}^n \to \{1, 2, \ldots, M\}$.
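A minimal sketch of how a codebook and the power-constraint check from Definition 2 might be represented; the NumPy layout (one codeword per row) and the function name are assumptions made for illustration.

```python
import numpy as np

def satisfies_power_constraint(codebook, P):
    """codebook: M x n array, row w holds codeword x^n(w).
    Checks (1/n) * sum_i x_i(w)^2 <= P for every codeword w."""
    n = codebook.shape[1]
    per_codeword_power = np.sum(codebook ** 2, axis=1) / n
    return bool(np.all(per_codeword_power <= P))

# Example: M = 4 codewords of blocklength n = 8 under power constraint P = 1
rng = np.random.default_rng(0)
codebook = rng.normal(0.0, np.sqrt(0.8), size=(4, 8))
print(satisfies_power_constraint(codebook, 1.0))
```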

  9. Definitions
Definition 3 (Conditional probability of error)
$$\lambda_i = \Pr(g(Y^n) \ne i \mid X^n = x^n(i)) = \sum_{y^n:\, g(y^n) \ne i} p(y^n \mid x^n(i)) = \sum_{y^n} p(y^n \mid x^n(i))\, I(g(y^n) \ne i)$$
■ $I(\cdot)$ is the indicator function.

  10. Definitions
Definition 4 (Maximal probability of error)
$$\lambda^{(n)} = \max_{i \in \{1, 2, \ldots, M\}} \lambda_i$$
Definition 5 (Average probability of error)
$$P_e^{(n)} = \frac{1}{M}\sum_{i=1}^{M} \lambda_i$$
■ The decoding error is
$$\Pr(g(Y^n) \ne W) = \sum_{i=1}^{M} \Pr(W = i)\, \Pr(g(Y^n) \ne i \mid W = i).$$
If the index $W$ is chosen uniformly from $\{1, 2, \ldots, M\}$, then $P_e^{(n)} = \Pr(g(Y^n) \ne W)$.

  11. Definitions
Definition 6 (Rate) The rate $R$ of an $(M, n)$ code is
$$R = \frac{\log M}{n} \text{ bits per transmission.}$$
Definition 7 (Achievable rate) A rate $R$ is said to be achievable for a Gaussian channel with a power constraint $P$ if there exists a sequence of $(\lceil 2^{nR} \rceil, n)$ codes with codewords satisfying the power constraint such that the maximal probability of error $\lambda^{(n)}$ tends to 0 as $n \to \infty$.
Definition 8 (Channel capacity) The capacity of a channel is the supremum of all achievable rates.

  12. Capacity of a Gaussian Channel
Theorem 1 (Capacity of a Gaussian Channel) The capacity of a Gaussian channel with power constraint $P$ and noise variance $N$ is
$$C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right) \text{ bits per transmission.}$$

  13. Sphere Packing Argument

  14. Sphere Packing Argument
For each sent codeword, the received vector is contained in a sphere of radius $\sqrt{nN}$ around the codeword. The received vectors have energy no greater than $n(P+N)$, so they lie in a sphere of radius $\sqrt{n(P+N)}$. How many codewords can we use without intersection of the decoding spheres?
$$M = \frac{A_n \left(\sqrt{n(P+N)}\right)^n}{A_n \left(\sqrt{nN}\right)^n} = \left(1 + \frac{P}{N}\right)^{n/2},$$
where $A_n$ is the constant for calculating the volume of an $n$-dimensional sphere. For example, $A_2 = \pi$, $A_3 = \frac{4}{3}\pi$. Therefore, the capacity is
$$\frac{1}{n}\log M = \frac{1}{2}\log\left(1 + \frac{P}{N}\right).$$
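A small numerical sketch of the sphere-packing count above; the function names and the values P = 10, N = 1 are assumptions for illustration. Note that the implied rate $(1/n)\log_2 M$ does not depend on $n$.

```python
import math

def sphere_packing_count(P, N, n):
    """Number of non-intersecting decoding spheres, M = (1 + P/N)^(n/2)."""
    return (1 + P / N) ** (n / 2)

def sphere_packing_rate(P, N, n):
    """Rate implied by the sphere-packing count, (1/n) * log2 M."""
    return math.log2(sphere_packing_count(P, N, n)) / n

for n in (2, 10, 20):
    print(n, sphere_packing_count(10.0, 1.0, n), sphere_packing_rate(10.0, 1.0, n))
# The rate is always 0.5 * log2(11), approximately 1.73 bits per transmission.
```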

  15. R < C → Achievable
■ Codebook. Let $X_i(w)$, $i = 1, 2, \ldots, n$, $w = 1, 2, \ldots, 2^{nR}$, be i.i.d. $\sim \mathcal{N}(0, P - \epsilon)$. For large $n$,
$$\frac{1}{n}\sum X_i^2 \to P - \epsilon.$$
■ Encoding. The codebook is revealed to both the sender and the receiver. To send the message index $w$, the transmitter sends the $w$th codeword $X^n(w)$ in the codebook.
■ Decoding. The receiver searches the codebook for a codeword that is jointly typical with the received vector. If there is one and only one such codeword $X^n(w)$, the receiver declares $\hat{W} = w$. Otherwise, the receiver declares an error. If the power constraint is not satisfied, the receiver also declares an error. (A toy simulation of this scheme is sketched below.)
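The random-coding scheme above can be illustrated with a toy Monte Carlo experiment. The sketch below is an assumption-laden simplification: it uses minimum-distance (nearest-neighbor) decoding in place of the joint-typicality decoder, a short blocklength, and arbitrary parameter values, so it only loosely illustrates the behavior the theorem describes for $n \to \infty$.

```python
import numpy as np

def simulate_random_code(n=20, R=0.5, P=10.0, N=1.0, trials=200, seed=1):
    """Toy random-coding experiment for the Gaussian channel.
    Codewords are drawn i.i.d. ~ N(0, P - eps); decoding is nearest-neighbor,
    used here as a stand-in for joint-typicality decoding."""
    rng = np.random.default_rng(seed)
    M = int(np.ceil(2 ** (n * R)))               # number of messages, ceil(2^{nR})
    eps = 0.05 * P
    errors = 0
    for _ in range(trials):
        codebook = rng.normal(0.0, np.sqrt(P - eps), size=(M, n))
        w = rng.integers(M)                       # message index to send
        y = codebook[w] + rng.normal(0.0, np.sqrt(N), size=n)
        w_hat = int(np.argmin(np.sum((codebook - y) ** 2, axis=1)))
        errors += (w_hat != w)
    return errors / trials

# R = 0.5 is well below C = 0.5 * log2(1 + 10) ≈ 1.73, so errors should be rare,
# though at n = 20 the estimate is only a rough indication of the asymptotic behavior.
print(simulate_random_code())
```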

  16. R < C → Achievable
■ Probability of error. Assume that codeword 1 was sent, so $Y^n = X^n(1) + Z^n$. Define the events
$$E_0 = \left\{ \frac{1}{n}\sum_{j=1}^{n} X_j^2(1) > P \right\}$$
and
$$E_i = \left\{ (X^n(i), Y^n) \in A_\epsilon^{(n)} \right\}.$$
Then an error occurs if
◆ The power constraint is violated ⇒ $E_0$ occurs.
◆ The transmitted codeword and the received sequence are not jointly typical ⇒ $E_1^c$ occurs.
◆ A wrong codeword is jointly typical with the received sequence ⇒ $E_2 \cup E_3 \cup \cdots \cup E_{2^{nR}}$ occurs.

  17. R < C → Achievable
Let $W$ be uniformly distributed. We have
$$\begin{aligned} P_e^{(n)} &= \frac{1}{2^{nR}}\sum \lambda_i = P(E) = \Pr(E \mid W = 1) \\ &= P(E_0 \cup E_1^c \cup E_2 \cup E_3 \cup \cdots \cup E_{2^{nR}}) \\ &\le P(E_0) + P(E_1^c) + \sum_{i=2}^{2^{nR}} P(E_i) \\ &\le \epsilon + \epsilon + \sum_{i=2}^{2^{nR}} 2^{-n(I(X;Y) - 3\epsilon)} \\ &\le 2\epsilon + 2^{-n(I(X;Y) - R - 3\epsilon)} \le 3\epsilon \end{aligned}$$
for $n$ sufficiently large and $R < I(X;Y) - 3\epsilon$.
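A quick numerical illustration of the union bound above, showing the exponential term vanishing once $R < I(X;Y) - 3\epsilon$; the parameter values below are assumptions chosen for illustration.

```python
import math

def error_bound(n, R, I, eps):
    """Evaluates the bound 2*eps + 2^(-n*(I - R - 3*eps)) from the slide."""
    return 2 * eps + 2 ** (-n * (I - R - 3 * eps))

I = 0.5 * math.log2(1 + 10.0)   # I(X;Y) ≈ 1.73 bits for P/N = 10
for n in (10, 100, 1000):
    print(n, error_bound(n, R=1.0, I=I, eps=0.01))
# The bound approaches 2*eps = 0.02 as n grows, since I - R - 3*eps > 0 here.
```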

  18. R < C → Achievable, final part
■ Since the average probability of error over codebooks is less than $3\epsilon$, there exists at least one codebook $\mathcal{C}^*$ such that $\Pr(E \mid \mathcal{C}^*) < 3\epsilon$.
◆ $\mathcal{C}^*$ can be found by an exhaustive search over all codes.
■ Deleting the worst half of the codewords in $\mathcal{C}^*$, we obtain a code with low maximal probability of error. The codewords that violate the power constraint are definitely deleted. (Why?) Hence, we have constructed a code that achieves a rate arbitrarily close to $C$.

  19. 9.2 Converse to the Coding Theorem for Gaussian Channels

  20. Achievable → R < C
We will prove that if $P_e^{(n)} \to 0$, then $R \le C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$. Let $W$ be distributed uniformly. We have $W \to X^n \to Y^n \to \hat{W}$. By Fano's inequality,
$$H(W \mid \hat{W}) \le 1 + nRP_e^{(n)} = n\epsilon_n, \qquad \text{where } \epsilon_n = \frac{1}{n} + RP_e^{(n)} \to 0 \text{ as } P_e^{(n)} \to 0.$$
Now,
$$\begin{aligned} nR &= H(W) = I(W; \hat{W}) + H(W \mid \hat{W}) \le I(W; \hat{W}) + n\epsilon_n \\ &\le I(X^n; Y^n) + n\epsilon_n \qquad (\text{data processing ineq.}) \\ &= h(Y^n) - h(Y^n \mid X^n) + n\epsilon_n = h(Y^n) - h(Z^n) + n\epsilon_n \\ &\le \sum_{i=1}^{n} h(Y_i) - h(Z^n) + n\epsilon_n = \sum_{i=1}^{n} h(Y_i) - \sum_{i=1}^{n} h(Z_i) + n\epsilon_n. \end{aligned}$$

  21. Achievable → R < C
$$\begin{aligned} nR &\le \sum_{i=1}^{n} \left( h(Y_i) - h(Z_i) \right) + n\epsilon_n \\ &\le \sum_{i=1}^{n} \left( \frac{1}{2}\log\left(2\pi e (P_i + N)\right) - \frac{1}{2}\log 2\pi e N \right) + n\epsilon_n \\ &= \sum_{i=1}^{n} \frac{1}{2}\log\left(1 + \frac{P_i}{N}\right) + n\epsilon_n \\ &\le \frac{n}{2}\log\left(1 + \frac{P}{N}\right) + n\epsilon_n, \end{aligned}$$
where $P_i$ is the average power of the $i$th symbol over the codebook; the last inequality follows from the concavity of the logarithm (Jensen's inequality) together with $\frac{1}{n}\sum_i P_i \le P$, since every codeword satisfies the power constraint. Thus,
$$R \le \frac{1}{2}\log\left(1 + \frac{P}{N}\right) + \epsilon_n.$$

  22. 9.3 Bandlimited Channels

  23. Capacity of Bandlimited Channels
■ Suppose the output of a bandlimited channel can be represented by
$$Y(t) = (X(t) + Z(t)) * h(t),$$
where $X(t)$ is the input signal, $Z(t)$ is white Gaussian noise, and $h(t)$ is the impulse response of the channel, which has bandwidth $W$.
■ The sampling frequency is $2W$. If the channel is used over the time interval $[0, T]$, then there are $2WT$ samples transmitted.
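Anticipating where this argument leads in the chapter (the standard bandlimited white-noise result, $C = W \log_2(1 + P/(N_0 W))$ bits per second, obtained by applying the per-sample capacity to the $2WT$ samples), a small sketch; the example numbers are assumptions chosen only for illustration.

```python
import math

def bandlimited_capacity(P, N0, W):
    """Standard bandlimited Gaussian channel capacity, W * log2(1 + P / (N0 * W)),
    in bits per second (P: signal power, N0: noise power spectral density, W: bandwidth)."""
    return W * math.log2(1 + P / (N0 * W))

W = 3300.0     # bandwidth in Hz
T = 1.0        # observation interval in seconds
P = 1e-3       # signal power in watts
N0 = 1e-9      # noise power spectral density in W/Hz
print(2 * W * T)                       # number of samples, 2WT = 6600
print(bandlimited_capacity(P, N0, W))  # roughly 2.7e4 bits per second
```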
