some channel models




  1. Example: binary symmetric channel (BSC)

     Input X, output Y, transition probabilities P(y|x):
     0 → 0 and 1 → 1 with probability 1-p; 0 → 1 and 1 → 0 with probability p.

     Y = X ⊕ E (the error source adds the error sequence E to the input), where
     - X is the binary information sequence,
     - E is the binary error sequence s.t. P(1) = 1 - P(0) = p,
     - Y is the binary output sequence,
     - memoryless: the output at time i depends only on the input at time i,
     - input and output alphabet are finite.

     From AWGN to BSC.
     Homework: calculate the capacity as a function of A and σ².

     Other models
     - Z-channel (optical): 0 (light on) is always received correctly;
       1 (light off) is received as 0 with probability p and as 1 with probability 1-p; P(X=0) = P0.
     - Erasure channel (MAC): each input is erased (output E) with probability e
       and received correctly with probability 1-e; P(X=0) = P0.
     - Erasure with errors: 0 → 0 and 1 → 1 with probability 1-p-e,
       erasure (output E) with probability e, crossover with probability p.

     Burst error model (Gilbert-Elliot)
     - Random error channel: outputs independent, P(0) = 1 - P(1).
     - Burst error channel: outputs dependent,
       P(0 | state = bad) = P(1 | state = bad) = 1/2,
       P(0 | state = good) = 1 - P(1 | state = good) = 0.999.
     - State info: good or bad, with state transition probabilities P_gb, P_bb, P_gg, P_bg.
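     A short simulation can make the memoryless models above concrete. The sketch below is illustrative only (the function names and the parameter values p = 0.1 and e = 0.2 are not from the slides); it passes one binary information sequence X through a BSC, a Z-channel and an erasure channel.

```python
import random

def bsc(x, p):
    """Binary symmetric channel: flip each bit with probability p (Y = X xor E)."""
    return [xi ^ (random.random() < p) for xi in x]

def z_channel(x, p):
    """Z-channel (optical): a transmitted 1 (light off) is received as 0 with
    probability p; a transmitted 0 (light on) is always received correctly."""
    return [0 if (xi == 1 and random.random() < p) else xi for xi in x]

def erasure(x, e):
    """Erasure channel: each bit is erased (output 'E') with probability e."""
    return ['E' if random.random() < e else xi for xi in x]

random.seed(1)
x = [random.randint(0, 1) for _ in range(20)]   # binary information sequence X
print("X      :", x)
print("BSC    :", bsc(x, p=0.1))
print("Z      :", z_channel(x, p=0.1))
print("Erasure:", erasure(x, e=0.2))
```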

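     The Gilbert-Elliot burst model is a two-state Markov chain rather than a memoryless channel. The sketch below is a minimal illustration: the error rates in the good and bad state (0.001 and 1/2) are taken from the slide, while the state transition probabilities P_gb and P_bg are arbitrary example values.

```python
import random

def gilbert_elliot(n, p_gb=0.05, p_bg=0.3, p_err_good=0.001, p_err_bad=0.5):
    """Generate a burst error sequence E of length n from a two-state
    (good/bad) Markov chain. p_gb = P(good -> bad), p_bg = P(bad -> good)."""
    state, errors = 'good', []
    for _ in range(n):
        p_err = p_err_bad if state == 'bad' else p_err_good
        errors.append(1 if random.random() < p_err else 0)
        if state == 'good':
            state = 'bad' if random.random() < p_gb else 'good'
        else:
            state = 'good' if random.random() < p_bg else 'bad'
    return errors

random.seed(2)
print(''.join(map(str, gilbert_elliot(80))))   # the errors cluster in bursts
```
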
  2. Channel capacity

     I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)   (Shannon 1948)
     H is the entropy, H = Σ_i p(i)·I(i), the expected information content.

     Channel capacity: C = max over the input probabilities of I(X;Y).
     Note: the capacity depends on the input probabilities, because the transition probabilities are fixed.

     Practical communication system design
     - The code book contains 2^k code words of length n.
     - The message (one of 2^k) selects a code word, which is sent over the channel;
       the decoder maps the received word (possibly with errors) to an estimate of the message.
     - k is the number of information bits transmitted in n channel uses.
     - The rate R of a code is the ratio k/n.

     Encoding and decoding according to Shannon
     - Code: 2^k binary code words of length n, where P(0) = P(1) = ½.
     - Channel errors: P(0 → 1) = P(1 → 0) = p, i.e. the number of typical error sequences ≈ 2^(n·h(p)).
     - Decoder: search around the received sequence (in the space of 2^n binary sequences)
       for a code word with ≈ np differences.
     - Shannon showed that for R ≤ C encoding methods exist with decoding error probability → 0.

     Decoding error probability
     1. For t errors: P(|t/n - p| > Є) → 0 for n → ∞ (law of large numbers).
     2. More than one code word in the decoding region (code words random):
        P(>1) ≈ (2^k - 1)·2^(n·h(p)) / 2^n ≈ 2^(-n(C-R)) = 2^(-n(1-h(p)-R)) → 0
        for R = k/n < 1 - h(p) and n → ∞.

     Channel capacity: the BSC
     - I(X;Y) = H(Y) - H(Y|X).
     - The maximum of H(Y) is 1, since Y is binary.
     - H(Y|X) = P(X=0)·h(p) + P(X=1)·h(p) = h(p).
     - Conclusion: the capacity of the BSC is C_BSC = 1 - h(p).
     - Homework: draw C_BSC; what happens for p > ½?
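     The result C_BSC = 1 - h(p) is easy to tabulate, which also helps with the homework question about p > ½. A minimal sketch (the helper names h and c_bsc are ours, not from the slides):

```python
from math import log2

def h(p):
    """Binary entropy function h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def c_bsc(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1 - h(p)

for p in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"p = {p:4.2f}  h(p) = {h(p):.3f}  C_BSC = {c_bsc(p):.3f}")
# C_BSC is symmetric around p = 1/2 and is zero only at p = 1/2;
# for p > 1/2 the outputs can simply be inverted before decoding.
```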

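     Returning to Shannon's random-coding argument above: the probability that another code word lands in the decoding region is at most (2^k - 1)·2^(n·h(p)) / 2^n ≈ 2^(-n(1-h(p)-R)). The sketch below just evaluates this exponent for a few block lengths (p = 0.1 and R = 0.4 are arbitrary illustrative choices):

```python
from math import log2

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, R = 0.1, 0.4                       # crossover probability and code rate (illustrative)
for n in (100, 500, 1000, 5000):
    exponent = n * (1 - h(p) - R)     # P(>1 codeword in region) <= 2**(-exponent)
    print(f"n = {n:5d}  bound = 2^-{exponent:.0f}")
# since R = 0.4 < C_BSC = 1 - h(0.1) ≈ 0.531, the bound vanishes as n grows
```
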
  3. Channel capacity: the BSC and the Z-channel

     [Figure: channel capacity C_BSC = 1 - h(p) versus the bit error probability p,
      equal to 1.0 at p = 0 and p = 1 and equal to 0 at p = 0.5.] Explain the behaviour!

     Channel capacity: the Z-channel (application in optical communications)
     - 0 (light on) is always received correctly; 1 (light off) is received as 0 with
       probability p and as 1 with probability 1-p; P(X=0) = P0.
     - H(Y) = h(P0 + p(1 - P0))
     - H(Y|X) = (1 - P0)·h(p)
     - For the capacity, maximize I(X;Y) over P0.

     Channel capacity: the erasure channel (application: CDMA detection)
     - Transitions: 0 → 0 and 1 → 1 with probability 1-e, 0 → E and 1 → E with probability e; P(X=0) = P0.
     - I(X;Y) = H(X) - H(X|Y), with H(X) = h(P0) and H(X|Y) = e·h(P0).
     - Thus C_erasure = 1 - e. (Check! Draw and compare with the BSC and the Z-channel.)

     Erasure with errors: calculate the capacity!
     - Transitions: 0 → 0 and 1 → 1 with probability 1-p-e, erasure (output E) with
       probability e, crossover with probability p.

     Channel models: general diagram
     - Input alphabet X = {x1, x2, ..., xn}, output alphabet Y = {y1, y2, ..., ym}.
     - Transition probabilities P_j|i = P_Y|X(y_j | x_i).
     - The statistical behaviour of the channel is completely defined by the channel
       transition probabilities P_j|i = P_Y|X(y_j | x_i).
     - In general, calculating the capacity needs more theory.

     Example
     - Ternary channel: inputs 0 and 2 are received without error; input 1 is received
       as 0, 1 or 2, each with probability 1/3.
     - For P(0) = P(2) = p and P(1) = 1 - 2p:
       H(Y) = h((1-2p)/3) + (2 + 2p)/3 and H(Y|X) = (1-2p)·log2(3).
     - Q: maximize H(Y) - H(Y|X) as a function of p.
     - Q: is this the capacity?
     - Hint: use log2(x) = ln(x) / ln(2) and d ln(x)/dx = 1/x.
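     For the Z-channel the slides leave the maximization of I(X;Y) over P0 open; a simple numerical approach is a grid search over P0 using the expressions H(Y) = h(P0 + p(1-P0)) and H(Y|X) = (1-P0)·h(p). A sketch (p = 0.1 is an arbitrary example value):

```python
from math import log2

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def i_z_channel(p0, p):
    """I(X;Y) = H(Y) - H(Y|X) for the Z-channel, using the slide's expressions."""
    return h(p0 + p * (1 - p0)) - (1 - p0) * h(p)

p = 0.1                                               # P(1 -> 0), illustrative value
best = max((i_z_channel(p0 / 1000, p), p0 / 1000) for p0 in range(1001))
print(f"C_Z ≈ {best[0]:.4f} bits at P0 ≈ {best[1]:.3f}")
# for comparison: C_BSC = 1 - h(0.1) ≈ 0.531 bits at the same error probability
```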

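     The slides note that calculating the capacity of a general channel P_j|i needs more theory. One standard numerical tool, not covered in the slides, is the Blahut-Arimoto algorithm; below is a minimal sketch applied to the ternary example, under the reading that inputs 0 and 2 are noiseless and input 1 produces each output with probability 1/3.

```python
from math import log2

def blahut_arimoto(W, iters=200):
    """Blahut-Arimoto: W[x][y] = P(y|x); returns (capacity in bits, input distribution)."""
    nx, ny = len(W), len(W[0])
    r = [1.0 / nx] * nx                                  # input distribution, start uniform
    for _ in range(iters):
        # backward channel q[x][y] = P(x|y) under the current input distribution
        qy = [sum(r[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        q = [[(r[x] * W[x][y] / qy[y]) if qy[y] > 0 else 0.0 for y in range(ny)]
             for x in range(nx)]
        # update r[x] proportional to prod_y q[x][y] ** W[x][y]
        r_new = []
        for x in range(nx):
            val = 1.0
            for y in range(ny):
                if W[x][y] > 0:
                    val *= q[x][y] ** W[x][y]
            r_new.append(val)
        s = sum(r_new)
        r = [v / s for v in r_new]
    # mutual information I(X;Y) for the final input distribution
    qy = [sum(r[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    C = sum(r[x] * W[x][y] * log2(W[x][y] / qy[y])
            for x in range(nx) for y in range(ny) if W[x][y] > 0 and r[x] > 0)
    return C, r

# ternary example from the slide: inputs 0 and 2 noiseless, input 1 uniform over the outputs
W = [[1.0, 0.0, 0.0],
     [1/3, 1/3, 1/3],
     [0.0, 0.0, 1.0]]
C, r = blahut_arimoto(W)
print(f"C ≈ {C:.4f} bits, input distribution ≈ {[round(v, 3) for v in r]}")
```
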
  4. Channel capacity: the converse

     For R > C the decoding error probability Pe > 0.
     [Figure: Pe versus R = k/n; Pe ≥ 1 - C/R for rates above the capacity C.]

     Clue: I(X;Y) is convex ∩ in the input probabilities, i.e. finding the maximum is simple.

     Converse: for a discrete memoryless channel
     I(X^n; Y^n) = H(Y^n) - H(Y^n|X^n) ≤ Σ_{i=1..n} H(Y_i) - Σ_{i=1..n} H(Y_i|X_i) = Σ_{i=1..n} I(X_i; Y_i) ≤ nC

     The source generates one out of 2^k equiprobable messages m; the encoder maps m to X^n,
     the channel outputs Y^n, and the decoder produces the estimate m'.
     Let Pe = probability that m' ≠ m. Then

     k = H(M) = I(M; Y^n) + H(M|Y^n)        (X^n is a function of M; Fano)
       ≤ I(X^n; Y^n) + 1 + k·Pe
       ≤ nC + 1 + k·Pe

     so Pe ≥ 1 - Cn/k - 1/k = 1 - C/R - 1/(nR), with R := k/n.
     Hence: for large n and R > C, the probability of error Pe > 0.

     We used the data processing theorem: cascading of channels X → Y → Z.
     The overall transmission rate I(X;Z) for the cascade cannot be larger than I(Y;Z), that is:
     I(X;Z) ≤ I(Y;Z)   (and likewise I(X;Z) ≤ I(X;Y)).
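     The data processing step used in the converse can be checked numerically for a cascade of two BSCs: the cascade X → Y → Z is itself a BSC, and I(X;Z) stays below both I(X;Y) and I(Y;Z). A sketch (the crossover probabilities 0.1 and 0.15 and the uniform input are arbitrary example values):

```python
from math import log2

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def i_bsc(p_in0, p):
    """I(input; output) of a BSC with crossover p and input distribution (p_in0, 1-p_in0)."""
    p_out0 = p_in0 * (1 - p) + (1 - p_in0) * p
    return h(p_out0) - h(p)

p1, p2, px0 = 0.1, 0.15, 0.5            # two crossover probabilities and P(X=0)
py0 = px0 * (1 - p1) + (1 - px0) * p1   # distribution of Y after the first BSC
p12 = p1 * (1 - p2) + (1 - p1) * p2     # cascade X -> Z is a BSC with this crossover

print(f"I(X;Y) = {i_bsc(px0, p1):.4f}")
print(f"I(Y;Z) = {i_bsc(py0, p2):.4f}")
print(f"I(X;Z) = {i_bsc(px0, p12):.4f}")
# data processing theorem: I(X;Z) <= min(I(X;Y), I(Y;Z))
```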
