
Information Theoretic Security, Lecture 2
Matthieu Bloch
Revised December 5, 2019

1 Channel coding problem

As illustrated in Figure 1, the problem of channel coding consists in designing an encoder and a decoder to reliably transmit messages over a noisy channel. The noisy channel is characterized by a triplet $(\mathcal{X}, \{W_{Y^n|X^n}\}_{n\geqslant 1}, \mathcal{Y})$; $\mathcal{X}$ and $\mathcal{Y}$ represent the input and output alphabets of transmitted and received symbols, respectively; $\{W_{Y^n|X^n}\}_{n\geqslant 1}$ is the set of transition probabilities characterizing the channel noise affecting a sequence of $n$ input symbols for every $n \in \mathbb{N}^*$. Messages are represented by the random variable $W$, which the encoder maps to codewords of $n$ symbols $X$ and transmits over the channel; the decoder estimates $\widehat{W}$ from the noisy received sequence $Y$. We assume that messages are uniformly distributed and that the statistics of the channel are known ahead of time.

Figure 1: Channel coding over a noisy channel ($W \to$ ENCODER $\to X \to W_{Y^n|X^n} \to Y \to$ DECODER $\to \widehat{W}$).

Definition 1.1 (Channel code). An $(M, n)$ channel code consists of two stochastic maps: an encoder $f : \llbracket 1, M\rrbracket \to \mathcal{X}^n$, which encodes messages into codewords, and a decoder $g : \mathcal{Y}^n \to \llbracket 1, M\rrbracket \cup \{?\}$, which outputs an estimate of the transmitted message or an error symbol "?". The parameter $n$ is called the blocklength and $\frac{1}{n}\log_2 M$ is the rate of the code, which measures the number of bits transmitted per symbol. The set $\{f(m) : m \in \llbracket 1, M\rrbracket\}$ forms the codebook and its components are the codewords. With a slight abuse of notation, we use $\mathcal{C}$ to denote both an $(M, n)$ channel code, with its associated encoder and decoder, and its codebook.

Remark 1.1. Our definition allows the encoder and the decoder to be stochastic, even though we will often show the existence of or construct channel codes with deterministic encoders and decoders. Note that a stochastic encoder allows possibly several codewords to represent the same message.
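To make the definition concrete, here is a minimal Python sketch of a deterministic $(M, n) = (2, 3)$ repetition code over a binary symmetric channel. The repetition code, the BSC, and the crossover probability $p = 0.1$ are illustrative assumptions, not examples taken from the notes.

```python
import random

def encode(m):
    """Deterministic encoder f: {0, 1} -> {0, 1}^3 (repetition code)."""
    return [m] * 3

def bsc(x, p=0.1):
    """Memoryless binary symmetric channel with crossover probability p."""
    return [bit ^ (random.random() < p) for bit in x]

def decode(y):
    """Majority-vote decoder g: {0, 1}^3 -> {0, 1} (never outputs '?')."""
    return int(sum(y) >= 2)

# Rate of this (M, n) = (2, 3) code: (1/n) * log2(M) = 1/3 bit per symbol.
random.seed(0)
trials = 100_000
errors = 0
for _ in range(trials):
    m = random.randint(0, 1)           # uniformly distributed message W
    errors += decode(bsc(encode(m))) != m
print(f"empirical P_e ~ {errors / trials:.4f}")
```

For this toy code the error probability can also be computed in closed form, $3p^2(1-p) + p^3 = 0.028$ for $p = 0.1$, and the Monte Carlo estimate should land close to that value.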
The performance of a code $\mathcal{C}$ is measured in terms of the average probability of error
$$P_e(\mathcal{C}) \triangleq \mathbb{P}\big(\widehat{W} \neq W \,\big|\, \mathcal{C}\big) = \frac{1}{M}\sum_{m=1}^{M} \mathbb{P}(g(Y) \neq m \mid W = m),$$
or in terms of the maximal probability of error
$$P_e^{\max}(\mathcal{C}) \triangleq \max_{m \in \llbracket 1, M\rrbracket} \mathbb{P}(g(Y) \neq m \mid W = m).$$

Definition 1.2 (Achievable channel coding rate and channel capacity). A rate $R$ is an achievable channel coding rate if there exists a sequence $\{\mathcal{C}_n\}_{n\geqslant 1}$ of $(M_n, n)$ channel codes with increasing blocklength such that
$$\liminf_{n\to\infty} \frac{1}{n}\log M_n \geqslant R \quad\text{and}\quad \limsup_{n\to\infty} P_e(\mathcal{C}_n) = 0.$$
The supremum of all achievable channel coding rates is called the channel capacity, denoted by $C(\{W_{Y^n|X^n}\}_{n\geqslant 1})$.
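For a small code both error criteria can be computed exactly by enumerating every channel output. The sketch below does this for a hypothetical $(2, 2)$ code over a BSC with $p = 0.1$, using minimum-distance decoding with ties mapped to "?"; the code, channel, and decoder are illustrative assumptions, not taken from the notes.

```python
from itertools import product

p = 0.1                            # BSC crossover probability (illustrative)
codebook = {0: (0, 0), 1: (1, 1)}  # f(0) = 00, f(1) = 11

def W(y, x):
    """Transition probability of n independent BSC uses, W(y|x)."""
    out = 1.0
    for yi, xi in zip(y, x):
        out *= p if yi != xi else 1 - p
    return out

def g(y):
    """Minimum-distance decoder; a tie produces the error symbol '?'."""
    dist = {m: sum(a != b for a, b in zip(y, c)) for m, c in codebook.items()}
    best = min(dist.values())
    winners = [m for m, d in dist.items() if d == best]
    return winners[0] if len(winners) == 1 else "?"

# P(g(Y) != m | W = m) for each message, by exact enumeration of Y^2.
per_msg = {m: sum(W(y, c) for y in product((0, 1), repeat=2) if g(y) != m)
           for m, c in codebook.items()}
P_avg = sum(per_msg.values()) / len(codebook)  # average probability of error
P_max = max(per_msg.values())                  # maximal probability of error
print(P_avg, P_max)
```

Here both messages see the same conditional error probability by symmetry, $2p(1-p) + p^2 = 0.19$, so the average and maximal criteria coincide; for asymmetric codes $P_e^{\max}$ can strictly exceed $P_e$.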

2 Random coding for channel reliability

The notion of achievable rate is asymptotic in the blocklength $n$ and disregards any complexity constraints at the encoder and decoder; the notion of capacity is therefore a fundamental limit independent of any technological constraint. In general, one does not worry about identifying explicit low-complexity channel codes approaching the limits when characterizing channel capacity. The design of low-complexity encoders and decoders for codes operating at rates approaching the channel capacity is a challenging problem in itself, and several families of channel codes will be discussed later.

The proof of existence of codes with rates achieving capacity that we develop here relies on a technique known as random coding. Intuitively, the idea is to randomly generate a codebook by independently sampling its codewords according to a prescribed probability distribution, and to analyze the probability of error averaged over the set of all possible codebooks. Averaging over a set of codes instead of studying a specific code considerably simplifies the analysis by making the exact structure of the codebook disappear from the analysis.

Formally, we consider a generic channel $(\mathcal{U}, W_{V|U}, \mathcal{V})$, in which the alphabets $\mathcal{U}$ and $\mathcal{V}$ are arbitrary, and we construct an $(M, 1)$ code. Let $\mathcal{C} = \{u_i : i \in \llbracket 1, M\rrbracket\}$ be a codebook of $M$ codewords obtained by independently sampling the same distribution $p_U \in \mathcal{P}(\mathcal{U})$. The distribution of the random variable $\mathsf{C}$ representing the random codebook is then
$$\forall u \in \mathcal{U}^M,\quad p_{\mathsf{C}}(u) = \prod_{i=1}^{M} p_U(u_i),$$
and for any function $\phi : \mathcal{U}^M \to \mathbb{R} : (u_1, \dots, u_M) \mapsto \phi(u_1, \dots, u_M)$, we have
$$\mathbb{E}_{\mathsf{C}}(\phi(\mathsf{C})) = \sum_{u_1} \cdots \sum_{u_i} \cdots \sum_{u_M} p_U(u_1) \cdots p_U(u_i) \cdots p_U(u_M)\, \phi(u_1, \dots, u_i, \dots, u_M).$$

Let $p_V(v) \triangleq \sum_u W_{V|U}(v|u)\, p_U(u)$ and, for $\gamma > 0$,
$$\mathcal{A}_\gamma \triangleq \left\{ (u, v) \in \mathcal{U}\times\mathcal{V} : \log \frac{W_{V|U}(v|u)}{p_V(v)} \geqslant \gamma \right\}.$$
Define the encoder as the mapping $f : \llbracket 1, M\rrbracket \to \mathcal{U} : i \mapsto u_i$, and the decoder as $g : \mathcal{V} \to \llbracket 1, M\rrbracket \cup \{?\} : v \mapsto i^*$, where $i^* = j$ if $u_j$ is the unique codeword such that $(u_j, v) \in \mathcal{A}_\gamma$; otherwise, an error $i^* = ?$ is declared. This decoding operation is called threshold decoding. Although threshold decoding is suboptimal, it turns out to be sufficient to obtain optimal asymptotic results.

Lemma 2.1 (Random coding for channel reliability). The probability of decoding error $P_e(\mathcal{C})$ under threshold decoding averaged over the randomly generated codebook $\mathsf{C}$ satisfies
$$\mathbb{E}_{\mathsf{C}}(P_e(\mathsf{C})) \leqslant \mathbb{P}_{p_U W_{V|U}}\big((U, V) \notin \mathcal{A}_\gamma\big) + M 2^{-\gamma}.$$

Proof. Let us first make $P_e(\mathcal{C})$ explicit for any code $\mathcal{C} = \{u_i : i \in \llbracket 1, M\rrbracket\}$. When transmitting message $i$, the channel output is distributed according to $W_{V|U}(\cdot|u_i)$. Consequently, using the definition of the threshold decoder, we obtain
$$\begin{aligned}
P_e(\mathcal{C}) &= \frac{1}{M}\sum_{i=1}^{M} \mathbb{P}(g(V) \neq i \mid W = i)\\
&= \frac{1}{M}\sum_{i=1}^{M} \sum_v W_{V|U}(v|u_i)\, \mathbb{1}\{g(v) \neq i\}\\
&= \frac{1}{M}\sum_{i=1}^{M} \sum_v W_{V|U}(v|u_i)\, \mathbb{1}\{(u_i, v) \notin \mathcal{A}_\gamma \text{ or } \exists j \neq i \text{ such that } (u_j, v) \in \mathcal{A}_\gamma\}.
\end{aligned}$$
It is convenient to split the two predicates in the indicator function and bound $P_e(\mathcal{C})$ as
$$P_e(\mathcal{C}) \leqslant \frac{1}{M}\sum_{i=1}^{M}\sum_v W_{V|U}(v|u_i)\, \mathbb{1}\{(u_i, v)\notin\mathcal{A}_\gamma\} + \frac{1}{M}\sum_{i=1}^{M}\sum_v \sum_{j\in\llbracket 1,M\rrbracket, j\neq i} W_{V|U}(v|u_i)\, \mathbb{1}\{(u_j, v)\in\mathcal{A}_\gamma\}. \tag{1}$$
Let us study separately the expected value over $\mathsf{C}$ of the two terms on the right-hand side of (1). Denote by $\{U_i\}_{i\in\llbracket 1,M\rrbracket}$ the random variables representing the randomly generated codewords in the random codebook $\mathsf{C}$. First,
$$\begin{aligned}
\mathbb{E}_{\mathsf{C}}\!\left(\frac{1}{M}\sum_{i=1}^{M}\sum_v W_{V|U}(v|U_i)\,\mathbb{1}\{(U_i, v)\notin\mathcal{A}_\gamma\}\right) &= \frac{1}{M}\sum_{i=1}^{M} \mathbb{E}_{U_i}\!\left(\sum_v W_{V|U}(v|U_i)\,\mathbb{1}\{(U_i, v)\notin\mathcal{A}_\gamma\}\right)\\
&= \frac{1}{M}\sum_{i=1}^{M}\sum_v\sum_{u_i} p_U(u_i)\, W_{V|U}(v|u_i)\,\mathbb{1}\{(u_i, v)\notin\mathcal{A}_\gamma\}\\
&= \sum_v\sum_u p_U(u)\, W_{V|U}(v|u)\,\mathbb{1}\{(u, v)\notin\mathcal{A}_\gamma\}\\
&= \mathbb{P}_{p_U W_{V|U}}\big((U, V)\notin\mathcal{A}_\gamma\big), \tag{2}
\end{aligned}$$
where we have remarked that $u_i$ is merely a dummy index that does not depend on $i$, which we can replace by a generic index $u$. Next,
$$\begin{aligned}
\mathbb{E}_{\mathsf{C}}\!\left(\frac{1}{M}\sum_{i=1}^{M}\sum_v \sum_{j\in\llbracket 1,M\rrbracket\setminus\{i\}} W_{V|U}(v|U_i)\,\mathbb{1}\{(U_j, v)\in\mathcal{A}_\gamma\}\right) &= \frac{1}{M}\sum_{i=1}^{M}\sum_v \sum_{j\in\llbracket 1,M\rrbracket\setminus\{i\}} \mathbb{E}_{U_i U_j}\!\left(W_{V|U}(v|U_i)\,\mathbb{1}\{(U_j, v)\in\mathcal{A}_\gamma\}\right)\\
&= \frac{1}{M}\sum_{i=1}^{M}\sum_{j\in\llbracket 1,M\rrbracket\setminus\{i\}} \sum_v\sum_{u_i}\sum_{u_j} p_U(u_i)\, p_U(u_j)\, W_{V|U}(v|u_i)\,\mathbb{1}\{(u_j, v)\in\mathcal{A}_\gamma\}\\
&= (M-1)\sum_v\sum_u\sum_{u'} W_{V|U}(v|u)\, p_U(u)\, p_U(u')\,\mathbb{1}\{(u', v)\in\mathcal{A}_\gamma\}, \tag{3}
\end{aligned}$$
where we have replaced the dummy indices $u_i$ and $u_j$ by generic indices $u$ and $u'$, respectively. Note that $M - 1 \leqslant M$ and $\sum_u W_{V|U}(v|u)\, p_U(u) = p_V(v)$. In addition, for $(u', v) \in \mathcal{A}_\gamma$, we have $p_V(v) \leqslant W_{V|U}(v|u')\, 2^{-\gamma}$. Using these facts with (3), we obtain
$$\begin{aligned}
\mathbb{E}_{\mathsf{C}}\!\left(\frac{1}{M}\sum_{i=1}^{M}\sum_v \sum_{j\in\llbracket 1,M\rrbracket\setminus\{i\}} W_{V|U}(v|U_i)\,\mathbb{1}\{(U_j, v)\in\mathcal{A}_\gamma\}\right) &\leqslant M \sum_v\sum_{u'} p_V(v)\, p_U(u')\,\mathbb{1}\{(u', v)\in\mathcal{A}_\gamma\}\\
&\leqslant M \sum_v\sum_{u'} W_{V|U}(v|u')\, 2^{-\gamma}\, p_U(u')\,\mathbb{1}\{(u', v)\in\mathcal{A}_\gamma\}\\
&\leqslant M 2^{-\gamma}, \tag{4}
\end{aligned}$$
since $\mathbb{1}\{(u', v)\in\mathcal{A}_\gamma\} \leqslant 1$ and $\sum_v\sum_{u'} W_{V|U}(v|u')\, p_U(u') = 1$. Finally, we obtain the desired result by combining the bounds (2) and (4) with (1). ∎

The upper bound given in Lemma 2.1 is by no means the best one, since the decoding procedure is suboptimal; there exist alternative techniques to develop bounds, which are explored as exercises. Nevertheless, Lemma 2.1 is "good enough" to recover the first-order fundamental limits, and we will therefore content ourselves with the result. The form of Lemma 2.1 is often referred to as a "one-shot" result, since it considers codes with blocklength $1$. With a direct application of Markov's inequality, we obtain the following result.

Proposition 2.2. Let $(\mathcal{U}, W_{V|U}, \mathcal{V})$ be a channel and let $p_U \in \mathcal{P}(\mathcal{U})$. For any $M \in \mathbb{N}^*$ and $\gamma > 0$, there exists an $(M, 1)$ channel code $\mathcal{C}$ with deterministic encoder and decoder such that
$$P_e(\mathcal{C}) \leqslant \mathbb{P}_{p_U W_{V|U}}\big((U, V)\notin\mathcal{A}_\gamma\big) + M 2^{-\gamma}.$$
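On a small alphabet, Lemma 2.1 can be checked exactly by implementing threshold decoding and averaging $P_e$ over every possible codebook, weighted by $p_{\mathsf{C}}$. The sketch below does so for a binary channel with $M = 2$ and $\gamma = 0.5$, with base-2 logarithms to match the $2^{-\gamma}$ term; the channel $W_{V|U}$, the input distribution $p_U$, and $\gamma$ are illustrative assumptions, not values from the notes.

```python
import math
from itertools import product

# Toy channel (U, W_{V|U}, V) with binary alphabets; all values illustrative.
U_set, V_set, M, gamma = (0, 1), (0, 1), 2, 0.5
W = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.8}  # W[(v, u)] = W(v|u)
p_U = {0: 0.5, 1: 0.5}
p_V = {v: sum(W[(v, u)] * p_U[u] for u in U_set) for v in V_set}

def in_A(u, v):
    """(u, v) in A_gamma  <=>  log2(W(v|u) / p_V(v)) >= gamma."""
    return math.log2(W[(v, u)] / p_V[v]) >= gamma

def P_e(codebook):
    """Average error probability of threshold decoding for one codebook."""
    err = 0.0
    for i, u_i in enumerate(codebook):
        for v in V_set:
            hits = [j for j, u in enumerate(codebook) if in_A(u, v)]
            if hits != [i]:            # v not decoded uniquely to message i
                err += W[(v, u_i)]
    return err / M

# E_C[P_e(C)]: weight each of the |U|^M codebooks by p_C(u) = prod p_U(u_i).
avg = sum(math.prod(p_U[u] for u in cb) * P_e(cb)
          for cb in product(U_set, repeat=M))
# Right-hand side of Lemma 2.1: P((U, V) not in A_gamma) + M * 2^{-gamma}.
miss = sum(p_U[u] * W[(v, u)] for u in U_set for v in V_set if not in_A(u, v))
bound = miss + M * 2 ** -gamma
print(avg, bound)
```

For these particular values the bound holds but is loose (the $M 2^{-\gamma}$ term alone exceeds $1$ because $\gamma$ is small), which is consistent with Lemma 2.1 being a one-shot result intended for use at large blocklengths, where $\gamma$ can grow with $n$. The averaging also illustrates the selection argument behind Proposition 2.2: since the average over codebooks is below the bound, at least one deterministic codebook achieves it.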
