Information Transmission - PowerPoint PPT Presentation


SLIDE 1

Information Transmission
Chapter 5, Convolutional codes

OVE EDFORS, Electrical and information technology

SLIDE 2

Ove Edfors, EITA30, Chapter 5 (Part 5)

Learning outcomes

After this lecture, the student should:
- Understand what a convolutional code is
- Understand how convolutional encoding is done using a state machine or a trellis representation
- Know how to set up a state transition diagram, and how to draw a state machine and a corresponding trellis
- Be able to execute the (optimal) Viterbi algorithm to decode a received sequence encoded using a convolutional code

SLIDE 3

Where are we in the BIG PICTURE?

Convolutional codes. This lecture relates to pages 195-199 in the textbook.

SLIDE 4

Convolutional codes

When we study convolutional codes we regard the information sequence and the code sequence as semi-infinite; they start at time t = 0 and go on forever. Consider the convolutional encoder below:
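The encoder figure itself is not reproduced here, but later slides identify it as the (7,5), R = 1/2 code. Its shift-register encoding can be sketched as follows; this is a minimal illustration, and the function and variable names are my own, not from the slides.

```python
# Shift-register sketch of the rate-1/2 convolutional encoder; later slides
# identify it as the (7,5) code, i.e. generator polynomials 111 and 101.

def conv_encode(bits, G=(0b111, 0b101)):
    """Encode an information sequence; two output bits per input bit."""
    state = 0                      # two memory elements, initially zero
    out = []
    for u in bits:
        reg = (u << 2) | state     # register contents: u, u[t-1], u[t-2]
        out.extend(bin(reg & g).count("1") % 2 for g in G)
        state = (u << 1) | (state >> 1)   # shift: new state is (u, u[t-1])
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two code bits, so the rate is 1/2, and the semi-infinite information sequence maps to a semi-infinite code sequence.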

SLIDE 5

The encoder state

The state of a system is a compact description of its past history such that it, together with the present input, suffices to determine the present output and the next state. For our convolutional encoder we can simply choose the state at time t to be the contents of its memory elements. How many states does our encoder have?

SLIDE 6

The state transition diagram

SLIDE 7

The trellis description
SLIDE 8

Optimal decoder – the Viterbi algorithm

At each state we compare subpaths leading to it and discard the one that is not closest (measured in Hamming distance) to the received sequence. The discarded path cannot possibly be the initial part of the path that minimizes the Hamming distance between the r sequence and the codeword v. This is the principle of nonoptimality. Let’s consider the previous (7,5) R=1/2 code and a received sequence r=00 01 01 10 01 10.
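The compare-and-discard principle above can be sketched in code. The following is a minimal hard-decision Viterbi decoder for the (7,5), R = 1/2 code, applied to the received sequence from the slide; the function and variable names are my own, not from the lecture.

```python
# Hard-decision Viterbi decoding of the (7,5), R = 1/2 convolutional code.
G = (0b111, 0b101)   # generator polynomials, octal 7 and 5
N_STATES = 4         # two memory elements -> four states

def branch_output(state, u):
    """Two output bits for input bit u when the encoder is in `state`.
    `state` holds (u[t-1], u[t-2]) as a 2-bit number."""
    reg = (u << 2) | state
    return [bin(reg & g).count("1") % 2 for g in G]

def viterbi_decode(received):
    """`received` is a list of 2-bit branch words, e.g. [[0,0],[0,1],...]."""
    INF = float("inf")
    metric = [0] + [INF] * (N_STATES - 1)   # encoder starts in the zero state
    history = []                            # survivor (prev_state, input) per step
    for r in received:
        new_metric = [INF] * N_STATES
        survivors = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                d = sum(a != b for a, b in zip(branch_output(s, u), r))
                nxt = (u << 1) | (s >> 1)   # next state is (u, u[t-1])
                if metric[s] + d < new_metric[nxt]:   # keep the closest subpath
                    new_metric[nxt] = metric[s] + d
                    survivors[nxt] = (s, u)
        metric = new_metric
        history.append(survivors)
    state = metric.index(min(metric))       # best end state
    bits = []
    for survivors in reversed(history):     # trace the surviving path back
        state, u = survivors[state]
        bits.append(u)
    return bits[::-1], min(metric)

r = [[0,0],[0,1],[0,1],[1,0],[0,1],[1,0]]
print(viterbi_decode(r))   # -> ([0, 1, 1, 1, 0, 0], 2)
```

The surviving path lies at Hamming distance 2 from the received sequence, i.e. the decoder has corrected a pattern of two channel errors.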

SLIDE 9

An example

SLIDE 10

Evolution of subpaths

SLIDE 11

Error correction capabilities

We could correct a certain pattern of two errors. How many errors can we correct in general? The answer is related to the minimum Hamming distance between any two codewords in the trellis. Since a convolutional code is linear, this value is equal to the least number of ones in any nonzero codeword; this is called the free distance, d_free.

SLIDE 12

Error correction capabilities

Clearly we can correct all error patterns with ⌊(d_free - 1)/2⌋ or fewer errors.

What about error patterns with more errors? The answer is that it depends; if the errors are sparse enough we can correct many more! That is why convolutional codes are so powerful.
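The free distance itself can be found by a shortest-path search over the state diagram, with each branch weighted by its number of output ones. Here is a small sketch for the (7,5) code, using the standard definition (minimum-weight path that leaves the zero state and returns to it); the names are my own.

```python
# Free distance of the (7,5) convolutional code via Dijkstra's algorithm
# on the 4-state diagram (branch weight = number of ones in its output).
import heapq

G = (0b111, 0b101)

def branch(state, u):
    reg = (u << 2) | state
    weight = sum(bin(reg & g).count("1") % 2 for g in G)  # ones on this branch
    nxt = (u << 1) | (state >> 1)
    return nxt, weight

def free_distance():
    # Leave the zero state with input 1, then find the minimum-weight
    # path that merges back into the zero state.
    start, w0 = branch(0, 1)
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w
        for u in (0, 1):
            nxt, bw = branch(s, u)
            if w + bw < best.get(nxt, float("inf")):
                best[nxt] = w + bw
                heapq.heappush(heap, (w + bw, nxt))

d_free = free_distance()
print(d_free, (d_free - 1) // 2)   # -> 5 2
```

With d_free = 5, the code is guaranteed to correct any pattern of ⌊(5 - 1)/2⌋ = 2 errors, which matches the two-error example above.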

SLIDE 13

Concluding remarks, Chapter 5

SLIDE 14

Information theory

In this chapter we first gave a brief introduction to Claude E. Shannon's information theory which is the basis for modern communication technology. It provides guidelines for the design of digital communication systems. We then looked at some practical methods of source and channel coding.

SLIDE 15

Source coding

Shannon modeled sources as discrete stochastic processes and showed that a source is characterized by the uncertainty of its output, H(U), in the sense that the source output sequence can be compressed arbitrarily close to H(U) binary digits per source symbol, but not further. The uncertainty or entropy of a discrete random variable U is defined by the quantity H(U) = -Σ P(U = u) log2 P(U = u), where the sum runs over all possible outcomes u. The unit of uncertainty is called the bit.
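The entropy definition above is easy to compute directly; a minimal sketch, with an example distribution of my own choosing:

```python
# Entropy H(U) = -sum p * log2(p) of a discrete random variable, in bits.
from math import log2

def entropy(probs):
    """probs: list of symbol probabilities; zero-probability terms contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # -> 1.5 bits
print(entropy([0.5, 0.5]))          # -> 1.0 bit (a fair coin flip)
```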

SLIDE 16

Channel capacity

Shannon's most remarkable result concerns transmission over a noisy channel. It is possible to encode the source at the channel input and decode the possibly corrupted signal at the receiver side such that, if the source uncertainty is less than the channel capacity, H(U) < C, the source sequence can be reconstructed with arbitrary accuracy. This is impossible if the source uncertainty exceeds the channel capacity. It can be shown that the capacity of a Gaussian channel with energy constraint E and noise variance σ² is C = (1/2) log2(1 + E/σ²) bits per channel use.
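The Gaussian-channel capacity formula is a one-liner to evaluate; a small sketch with illustrative numbers of my own:

```python
# Capacity of the Gaussian channel, C = (1/2) * log2(1 + E / sigma^2).
from math import log2

def gaussian_capacity(E, sigma2):
    """Capacity in bits per channel use for signal energy E, noise variance sigma2."""
    return 0.5 * log2(1 + E / sigma2)

print(gaussian_capacity(3, 1))    # -> 1.0 bit per channel use
print(gaussian_capacity(15, 1))   # -> 2.0 bits per channel use
```

Note how quadrupling the signal-to-noise ratio beyond 3 only adds one bit per channel use: capacity grows logarithmically in E/σ².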

SLIDE 17

Huffman coding

Huffman coding is an optimal fixed-to-variable-length source coding procedure. It uses the probabilities of the source symbols to encode single source symbols (or blocks of them) into codewords consisting of variable-length strings of binary digits, such that the average codeword length is minimized.

SLIDE 18

The LZW algorithm

The LZW algorithm is a procedure that does not need to know the source statistics beforehand. It parses the source output sequence, recognizes fragments that have appeared before, and refers to the addresses of these fragments in a dynamic dictionary. This algorithm is asymptotically optimum, easy to implement, and widely used in practice.
SLIDE 19

Coding methods

Hamming codes constitute the most celebrated class of block codes. Their minimum distance is dmin = 3 and, thus, they correct all single-error patterns. Convolutional codes are more powerful and often used in practice, either by themselves or as concatenated or parallel codes.

SLIDE 20

The Viterbi decoder

Viterbi decoding is both an optimum and a practical method for decoding convolutional codes. It is widely used in mobile telephony and high-speed modems.

SLIDE 21