Information Transmission, Chapter 5: Convolutional codes


1. Information Transmission, Chapter 5: Convolutional codes. Ove Edfors, Electrical and Information Technology.

2. Learning outcomes
● After this lecture, the student should
– understand what a convolutional code is
– understand how convolutional encoding is done using a state machine or a trellis representation
– know how to set up a state transition diagram and draw a state machine and the corresponding trellis
– be able to execute the (optimal) Viterbi algorithm to decode a received sequence encoded with a convolutional code

3. Where are we in the BIG PICTURE?
Convolutional codes. This lecture relates to pages 195-199 in the textbook.

4. Convolutional codes
When we study convolutional codes we regard the information sequence and the code sequence as semi-infinite; they start at time t = 0 and go on forever. Consider the convolutional encoder below:
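
The encoder figure itself is not reproduced here, but the lecture works with the (7,5), R = 1/2 code, so a minimal Python sketch of that encoder follows, assuming the standard generators g1 = 1 + D + D^2 (octal 7) and g2 = 1 + D^2 (octal 5):

```python
def conv_encode(u):
    """Encode a bit list u with the rate-1/2, memory-2 (7,5) convolutional code."""
    s1 = s2 = 0                      # memory elements holding u_{t-1} and u_{t-2}
    v = []
    for bit in u:
        v.append(bit ^ s1 ^ s2)      # g1 = 1 + D + D^2  (octal 7)
        v.append(bit ^ s2)           # g2 = 1     + D^2  (octal 5)
        s1, s2 = bit, s1             # shift the register
    return v

print(conv_encode([1, 0, 1, 1]))     # -> [1, 1, 1, 0, 0, 0, 0, 1]
```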

5. The encoder state
The state of a system is a compact description of its past history such that it, together with the present input, suffices to determine the present output and the next state. For our convolutional encoder we can simply choose the state to be the contents of its memory elements; that is, at time t we have the state σ_t = (u_{t-1}, u_{t-2}). How many states does our encoder have?

6. The state transition diagram
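
Since the diagram itself is not reproduced, here is a small sketch that prints the same information as a table: for each of the four states (u_{t-1}, u_{t-2}) of the (7,5) encoder and each input bit, the output pair and the next state. These eight edges are exactly what the state transition diagram depicts.

```python
# Enumerate all edges of the (7,5) encoder's state machine.
for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            v1, v2 = u ^ s1 ^ s2, u ^ s2   # branch output bits
            print(f"state {s1}{s2} --u={u} / v={v1}{v2}--> state {u}{s1}")
```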

7. The trellis description

8. Optimal decoder – the Viterbi algorithm
At each state we compare the subpaths leading to it and discard the one that is not closest (measured in Hamming distance) to the received sequence. The discarded path cannot possibly be the initial part of the path that minimizes the Hamming distance between the received sequence r and the codeword v. This is the principle of nonoptimality. Let's consider the previous (7,5), R = 1/2 code and a received sequence r = 00 01 01 10 01 10.
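
A minimal Viterbi sketch for this example, assuming the (7,5) encoder described above and the received sequence r = 00 01 01 10 01 10 from the slide. At every time step and state it keeps only the surviving sub-path with the smallest Hamming distance to r; the other one is discarded by the principle of nonoptimality.

```python
r = [(0, 0), (0, 1), (0, 1), (1, 0), (0, 1), (1, 0)]   # received pairs

INF = float("inf")
# metric[state] = Hamming distance of the best sub-path ending in that state;
# states are (s1, s2) and the encoder starts in (0, 0)
metric = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
paths = {s: [] for s in metric}

for (r1, r2) in r:
    new_metric = {s: INF for s in metric}
    new_paths = {}
    for (s1, s2), m in metric.items():
        if m == INF:
            continue                                    # state not yet reachable
        for u in (0, 1):
            v1, v2 = u ^ s1 ^ s2, u ^ s2                # branch output
            nxt = (u, s1)                               # next state
            cand = m + (v1 != r1) + (v2 != r2)          # extended path metric
            if cand < new_metric[nxt]:                  # keep the survivor only
                new_metric[nxt] = cand
                new_paths[nxt] = paths[(s1, s2)] + [u]
    metric, paths = new_metric, new_paths

best = min(metric, key=metric.get)
print("decoded u:", paths[best], "at Hamming distance", metric[best])
```

Running this yields a best path at Hamming distance 2 from r, matching the two-error pattern the following slides refer to.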

9. An example

10. Evolution of sub-paths

11. Error correction capabilities
We could correct a certain pattern of two errors, namely ê = r ⊕ v̂, where ê is the estimated error pattern. How many errors can we correct in general? The answer is related to the minimum Hamming distance between any two codewords in the trellis. Since a convolutional code is linear, this value is equal to the least number of ones in any nonzero codeword; this is called the free distance, d_free. For the (7,5) code, d_free = 5.
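
A rough sketch that finds d_free numerically, reusing conv_encode from the encoder sketch above: since the code is linear, d_free is the smallest Hamming weight of a nonzero codeword, and searching all short information sequences (flushed with two zeros) is enough to expose the minimum-weight path of this small code.

```python
from itertools import product

d_free = min(
    sum(conv_encode(list(u) + [0, 0]))   # Hamming weight of the codeword
    for n in range(1, 8)                 # information lengths 1..7 suffice here
    for u in product((0, 1), repeat=n)
    if u[0] == 1                         # nonzero path diverging at t = 0
)
print("free distance:", d_free)          # -> 5
```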

12. Error correction capabilities
Clearly we can correct all error patterns with ⌊(d_free − 1)/2⌋ or fewer errors; with d_free = 5 this means any two errors. What about error patterns with more errors? The answer is that it depends: if the errors are sparse enough, we can correct many more! That is why convolutional codes are so powerful.

13. Concluding remarks, Chapter 5

14. Information theory
In this chapter we first gave a brief introduction to Claude E. Shannon's information theory, which is the basis for modern communication technology. It provides guidelines for the design of digital communication systems. We then looked at some practical methods of source and channel coding.

15. Source coding
Shannon modeled sources as discrete stochastic processes and showed that a source is characterized by the uncertainty of its output, H(U), in the sense that the source output sequence can be compressed arbitrarily close to H(U) binary digits per source symbol, but not further. The uncertainty or entropy of a discrete random variable U is defined by the quantity H(U) = −Σ_u P(U = u) log₂ P(U = u). The unit of uncertainty is called the bit.
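
A small sketch of this entropy formula in Python; the distribution is an assumed example.

```python
from math import log2

def entropy(p):
    """H(U) in bits for a discrete distribution given as a list of probabilities."""
    return -sum(x * log2(x) for x in p if x > 0)   # 0 log 0 is taken as 0

print(entropy([0.5, 0.25, 0.25]))                  # -> 1.5 bits per source symbol
```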

16. Channel capacity
Shannon's most remarkable result concerns transmission over a noisy channel. It is possible to encode the source at the channel input and decode the possibly corrupted signal at the receiver side such that, if the source uncertainty is less than the channel capacity, H(U) < C, the source sequence can be reconstructed with arbitrary accuracy. This is impossible if the source uncertainty exceeds the channel capacity. It can be shown that the capacity of a Gaussian channel with energy constraint E and noise variance σ² is C = (1/2) log₂(1 + E/σ²) bits per channel use.
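
The same capacity expression as a one-line sketch; the numbers E = 15 and σ² = 1 are assumed purely for illustration.

```python
from math import log2

def gaussian_capacity(E, sigma2):
    """C = (1/2) log2(1 + E/sigma2), in bits per channel use."""
    return 0.5 * log2(1 + E / sigma2)

print(gaussian_capacity(E=15, sigma2=1))   # -> 2.0 bits per channel use
```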

17. Huffman coding
Huffman coding is an optimal fixed-to-variable-length source coding procedure. It uses the probabilities of the source symbols to encode single source symbols (or blocks of them) into codewords consisting of variable-length strings of binary digits, such that the average codeword length is minimal.
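
A minimal Huffman sketch using a heap: repeatedly merge the two least probable subtrees, prefixing their codewords with 0 and 1. The symbol probabilities are an assumed example.

```python
import heapq

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> codeword."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)                               # unique tiebreaker for the heap
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)           # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}, average length 1.75 bits
```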

18. The LZW algorithm
The LZW algorithm is a procedure that does not need to know the source statistics beforehand. It parses the source output sequence, recognizes fragments that have appeared before, and refers to the addresses of these fragments in a dynamic dictionary. The algorithm is asymptotically optimal, easy to implement, and widely used in practice.
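
A compact LZW compression sketch: the dictionary starts with the single-character alphabet and grows with every newly parsed fragment, so no source statistics are needed beforehand. The input string is an assumed example.

```python
def lzw_compress(text):
    """Return the list of dictionary addresses LZW emits for `text`."""
    dictionary = {chr(i): i for i in range(256)}   # initial one-char entries
    w, out = "", []
    for c in text:
        if w + c in dictionary:
            w += c                                 # extend to the longest known fragment
        else:
            out.append(dictionary[w])              # emit the fragment's address
            dictionary[w + c] = len(dictionary)    # remember the new fragment
            w = c
    if w:
        out.append(dictionary[w])
    return out

print(lzw_compress("abababab"))   # -> [97, 98, 256, 258, 98]
```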

19. Coding methods
Hamming codes constitute the most celebrated class of block codes. Their minimum distance is d_min = 3 and thus they correct all single-error patterns. Convolutional codes are more powerful and often used in practice, either by themselves or as components of concatenated and parallel concatenated codes.
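
A short sketch of single-error correction with a (7,4) Hamming code, assuming the common convention with parity bits in positions 1, 2 and 4: the syndrome, formed by XOR-ing the (1-based) positions of all ones, directly names the erroneous bit position.

```python
def hamming_correct(bits):
    """bits: list of 7 bits (positions 1..7). Repairs at most one flipped bit."""
    syndrome = 0
    for pos in range(1, 8):
        if bits[pos - 1]:
            syndrome ^= pos              # XOR of positions holding a 1
    if syndrome:                         # nonzero syndrome = error position
        bits[syndrome - 1] ^= 1
    return bits

codeword = [1, 0, 1, 1, 0, 1, 0]         # a valid codeword under this convention
received = codeword.copy()
received[4] ^= 1                         # flip bit 5 (a single channel error)
print(hamming_correct(received) == codeword)   # -> True
```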

20. The Viterbi decoder
Viterbi decoding is both an optimal and a practical method for decoding convolutional codes. It is widely used in mobile telephony and high-speed modems.
