Coding and decoding with convolutional codes. The Viterbi Algorithm.


SLIDE 1

Outline: Convolutional encoding, Finite State Machine, Channel models, The Viterbi algorithm.

Coding and decoding with convolutional codes. The Viterbi Algorithm.

J.-M. Brossier 2008

J.-M. Brossier Coding and decoding with convolutional codes. The Viterbi Algorithm.

SLIDE 2


Block codes: main ideas

Repetition code. TX: CODING THEORY, RX: CPDING TOEORY. There is no way to recover from transmission errors unless we add some redundancy at the transmitter side. Repetition of the transmitted symbols makes detection and correction possible:

TX: CCC OOO DDD III NNN GGG TTT HHH EEE OOO RRR YYY
RX: CCC OPO DDD III NNN GGD TTT OHO EEE OOO RRR YYY

Majority decoding gives C O D I N G T O E O R Y: 2 corrections, 1 detection. Beyond repetition: better codes exist.
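A minimal sketch of the triple-repetition scheme above, with majority voting per block (`encode` and `decode` are illustrative names, not from the slides):

```python
from collections import Counter

def encode(msg, r=3):
    # repeat each transmitted symbol r times
    return "".join(ch * r for ch in msg)

def decode(rx, r=3):
    # majority vote inside each block of r received symbols
    out = []
    for i in range(0, len(rx), r):
        out.append(Counter(rx[i:i + r]).most_common(1)[0][0])
    return "".join(out)

tx = encode("CODING")          # "CCCOOODDDIIINNNGGG"
rx = tx[:4] + "P" + tx[5:]     # one error in the second block: OOO -> OPO
print(decode(rx))              # single errors per block are corrected -> "CODING"
```

Note that two errors in one block (like the received OHO above) defeat majority voting: the decoder outputs the wrong symbol, which is why better codes than repetition are needed.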



SLIDE 9


Block codes: main ideas

Geometric view

[Figure: the eight vertices of the 3-bit cube, with the codewords (000) and (111) at opposite corners.]

k = 1 bit of information, 2 codewords of length n = 3: (000) and (111). RX: how can the receiver decide which word was transmitted? (001), (010), (100): detection + correction to (000). (110), (101), (011): detection + correction to (111). (000) or (111): (probably) right.
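The geometric decision rule above, picking the nearest of the two codewords in Hamming distance, can be sketched as:

```python
def hamming_distance(a, b):
    # number of positions where the two words differ
    return sum(x != y for x, y in zip(a, b))

CODEWORDS = ["000", "111"]

def decode(rx):
    # minimum-distance decoding: nearest vertex of the cube among the codewords
    return min(CODEWORDS, key=lambda c: hamming_distance(rx, c))

print(decode("001"))   # -> "000"
print(decode("110"))   # -> "111"
```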



SLIDE 14


Block codes: main ideas

Linear block codes, e.g. Hamming codes. A binary linear block code takes k information bits at its input and computes n bits. If the 2^k codewords are sufficiently well separated in the n-dimensional space, it is possible to detect or even correct errors. In 1950, Hamming introduced the (7,4) Hamming code: it encodes 4 data bits into 7 bits by adding three parity bits. It can detect and correct single-bit errors but can only detect double-bit errors. In one common convention (column j is the binary expansion of j), the parity-check matrix is:

H = [ 1 0 1 0 1 0 1 ]
    [ 0 1 1 0 0 1 1 ]
    [ 0 0 0 1 1 1 1 ]
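With that column ordering, the syndrome directly reads off the position of a single-bit error, which syndrome decoding exploits. A pure-Python sketch (`syndrome` and `correct` are illustrative names):

```python
# Parity-check matrix H with column j (1-indexed) equal to the binary
# expansion of j: one common convention for the (7,4) Hamming code.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(r):
    # s_i = <row i of H, r> mod 2
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

def correct(r):
    s = syndrome(r)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # 0 means "no error detected"
    r = list(r)
    if pos:
        r[pos - 1] ^= 1                # flip the single erroneous bit
    return r

r = [0, 0, 0, 0, 1, 0, 0]              # all-zero codeword, error at position 5
print(correct(r))                      # -> [0, 0, 0, 0, 0, 0, 0]
```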



SLIDE 19


Convolutional encoding: main ideas

In convolutional codes, each block of k bits is mapped into a block of n bits, BUT these n bits are determined not only by the present k information bits but also by the previous information bits. This dependence can be captured by a finite state machine, and it is achieved using several linear filtering operations:

Each convolution imposes a constraint between bits. Several convolutions introduce the redundancy.



SLIDE 25


Infinite generator matrix

A convolutional code can be described by an "infinite matrix", banded with the sub-matrices shifted by one block position per row:

G = [ G_0      G_1      ...  G_M      0_{k×n}  0_{k×n}  ... ]
    [ 0_{k×n}  G_0      ...  G_{M-1}  G_M      0_{k×n}  ... ]
    [ 0_{k×n}  0_{k×n}  G_0  G_1      ...               ... ]
    [ ...                                                   ]

This matrix depends on K = M + 1 sub-matrices {G_i}_{i=0..M}, each of size k × n. K is known as the constraint length of the code.


SLIDE 26


Infinite generator matrix

A convolutional code can be described by an "infinite matrix":

(C_0, C_1, ...) = (I_0, I_1, ...) G

where G is the banded infinite matrix built from G_0, ..., G_M. It looks like a block coding: C = IG.


SLIDE 27


Infinite generator matrix

Denote by I_j = (I_{j1}, ..., I_{jk}) the j-th block of k information bits, and by C_j = (C_{j1}, ..., C_{jn}) the corresponding block of n coded bits at the output.

Coding an infinite sequence of blocks (length k) I = (I0I1 · · · ) produces an infinite sequence C = (C0C1 · · · ) of coded blocks (length n).

Block form of the coding scheme: it looks like a block coding: C = IG

C_0 = I_0 G_0
C_1 = I_0 G_1 + I_1 G_0
...
C_M = I_0 G_M + I_1 G_{M-1} + ... + I_M G_0
...
C_j = I_{j-M} G_M + ... + I_j G_0   for j >= M
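The recurrence above can be turned into a short sketch (k = 1 for simplicity; the sub-matrices are those of the rate-1/2 example on a later slide of this deck):

```python
# Sub-matrices of the deck's rate-1/2 example (k = 1, n = 2, M = 3)
G_sub = [[1, 1], [1, 1], [1, 0], [1, 1]]   # G_0 ... G_3, each 1 x 2

def encode_blocks(I, G):
    """C_j = sum_{l=0}^{M} I_{j-l} G_l over GF(2) for j = 0 .. L-1+M,
    with I_i = 0 outside 0 <= i < L (here k = 1, so I is a bit list)."""
    L, M, n = len(I), len(G) - 1, len(G[0])
    C = []
    for j in range(L + M):
        block = [0] * n
        for l in range(M + 1):
            if 0 <= j - l < L and I[j - l]:
                block = [(b + g) % 2 for b, g in zip(block, G[l])]
        C.append(block)
    return C

# An impulse at the input reproduces the sub-matrices G_0 ... G_M:
print(encode_blocks([1], G_sub))   # -> [[1, 1], [1, 1], [1, 0], [1, 1]]
```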



SLIDE 30


Infinite generator matrix performs a convolution

Using the convention I_i = 0 for i < 0, the encoding structure C = IG is clearly a convolution:

C_j = Σ_{l=0}^{M} I_{j-l} G_l

For an information bit sequence I of finite length, only L < +∞ blocks of k bits are different from zero at the input of the coder: I = (I_0 ... I_{L-1}). The sequence C = (C_0 ... C_{L-1+M}) at the coder output is finite too. This truncated coded sequence is generated by a linear block code whose generator matrix is a kL × n(L+M) sub-matrix of G.
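The truncated block-code view can be sketched by assembling that kL × n(L+M) sub-matrix explicitly (k = 1; sub-matrices taken from the deck's rate-1/2 example):

```python
# Truncated generator matrix for k = 1, n = 2, M = 3:
# block row j carries G_0 ... G_M starting at block column j, zeros elsewhere.
G_sub = [[1, 1], [1, 1], [1, 0], [1, 1]]

def truncated_G(G_sub, L):
    M, n = len(G_sub) - 1, len(G_sub[0])
    G = [[0] * (n * (L + M)) for _ in range(L)]
    for j in range(L):
        for l, Gl in enumerate(G_sub):
            for c, bit in enumerate(Gl):
                G[j][(j + l) * n + c] = bit
    return G

def block_encode(I, G):
    # block coding C = I G over GF(2)
    return [sum(i * g for i, g in zip(I, col)) % 2 for col in zip(*G)]

# An impulse picks out the first row: G_0 G_1 G_2 G_3 side by side, then zeros.
C = block_encode([1, 0, 0, 0], truncated_G(G_sub, 4))
print(C)   # -> [1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
```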



SLIDE 32


Shift registers based realization

Let g^{(l)}_{αβ} denote the elements of matrix G_l. Expanding the convolution C_j = Σ_{l=0}^{M} I_{j-l} G_l makes explicit the n components C_{j1}, ..., C_{jn} of each output block C_j:

C_j = [C_{j1}, ..., C_{jn}] = [ Σ_{l=0}^{M} Σ_{α=1}^{k} I_{j-l,α} g^{(l)}_{α1}, ..., Σ_{l=0}^{M} Σ_{α=1}^{k} I_{j-l,α} g^{(l)}_{αn} ]

If the total length of the (binary) shift registers is L, there are 2^L different internal configurations: the behavior of the convolutional coder can be captured by a machine with 2^L states.


SLIDE 33


Shift registers based realization

C_{jβ} = Σ_{α=1}^{k} Σ_{l=0}^{M} I_{j-l,α} g^{(l)}_{αβ}

depends on the present input I_j and on the M previous input blocks I_{j-1}, ..., I_{j-M}. C_{jβ} can therefore be calculated by memorizing M input values in shift registers, with one shift register α ∈ {1, ..., k} for each of the k bits of the input. For register α, only the memory cells for which g^{(l)}_{αβ} = 1 are connected to adder β ∈ {1, ..., n}.



SLIDE 37


Rate of a convolutional code

Asymptotic rate. For each block of k bits at the input, a block of n bits is generated at the output. At the coder output, the ratio [number of information bits] / [total number of bits] is R = k/n. This quantity is called the rate of the code.


SLIDE 38


Rate of a convolutional code

Finite-length rate. For a finite-length input sequence, truncation reduces the rate. The exact finite-length rate is R' = R · L / (L + M). For L ≫ M, this rate is almost equal to the asymptotic rate.


SLIDE 39


Shift registers based realization

Rate 1/2 encoder. 1 input (k = 1), 2 outputs (n = 2).

[Encoder diagram: one input, a 3-cell shift register (D D D), and two modulo-2 adders producing C_{j1} and C_{j2}; the tap patterns correspond to G_0 ... G_3, i.e. 1 + D + D^2 + D^3 and 1 + D + D^3.]

Impulse responses are P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D + D^3.

Two convolutions are evaluated in parallel. The output of each convolution depends on the input and on the 3 values memorized in the shift register. At each step, the 2 output values depend on the input and the internal state.
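A sketch of this encoder, wiring the two adders according to the taps of P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D + D^3:

```python
def encode_r12(bits):
    """Rate-1/2 convolutional encoder: shift register of 3 memory cells,
    two modulo-2 adders with taps 1+D+D^2+D^3 and 1+D+D^3."""
    s = [0, 0, 0]                      # s[0]=delay 1, s[1]=delay 2, s[2]=delay 3
    out = []
    for b in bits:
        c1 = b ^ s[0] ^ s[1] ^ s[2]    # P(D) = 1 + D + D^2 + D^3
        c2 = b ^ s[0] ^ s[2]           # Q(D) = 1 + D + D^3
        out.append((c1, c2))
        s = [b, s[0], s[1]]            # shift the register
    return out

# The impulse response reproduces the sub-matrices G_0 ... G_3 of this code:
print(encode_r12([1, 0, 0, 0]))        # -> [(1, 1), (1, 1), (1, 0), (1, 1)]
```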



SLIDE 43


Shift registers based realization

Rate 1/2 encoder. 1 input (k = 1), 2 outputs (n = 2).

[Same encoder diagram: 3-cell shift register with two adders implementing 1 + D + D^2 + D^3 and 1 + D + D^3.]

Impulse responses are P(D) = 1 + D + D^2 + D^3 and Q(D) = 1 + D + D^3.

Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 4. Sub-matrices: G_0 = [11], G_1 = [11], G_2 = [10], G_3 = [11].



slide-47
SLIDE 47


Shift-register-based realization

Rate 2/3 encoder

[Encoder diagram: the input pair Ij = [Ij1, Ij2] feeds two shift registers; three modulo-2 adders form the outputs Cj1, Cj2, Cj3 according to the sub-matrices G0 and G1.]

3 convolutions are evaluated in parallel. The output of each convolution depends on two inputs and on 4 values memorized in the shift registers. At each step, the 3 values at the output depend on the inputs and the internal state.


slide-51
SLIDE 51


Shift-register-based realization

Rate 2/3 encoder

[Encoder diagram: the input pair Ij = [Ij1, Ij2] feeds two shift registers; three modulo-2 adders form the outputs Cj1, Cj2, Cj3 according to the sub-matrices G0 and G1.]

Rate 2/3 (k = 2, n = 3). Constraint length K = 2. Sub-matrices: the 2×3 matrices G0 (first row [1 1 1]) and G1 (first row [0 1 1]) shown in the diagram.

slide-55
SLIDE 55


Shift-register-based realization

Rate 1/2 encoder

[Encoder diagram: the input Ij,1 feeds a 2-stage shift register holding Ij−1,1 and Ij−2,1; two modulo-2 adders form the outputs Cj1 and Cj2.]

2 convolutions are evaluated in parallel. The output of each convolution depends on one input Ij,1 and on 2 values memorized in the shift register, Ij−1,1 and Ij−2,1. At each step, the 2 values at the output depend on the input and the internal state.


slide-59
SLIDE 59


Shift-register-based realization

Rate 1/2 encoder

[Encoder diagram: the input Ij,1 feeds a 2-stage shift register holding Ij−1,1 and Ij−2,1; two modulo-2 adders form the outputs Cj1 and Cj2.]

Rate 1/2 (k = 1, n = 2). Constraint length K = M + 1 = 3. Sub-matrices: G0 = [11], G1 = [01] and G2 = [11].


slide-63
SLIDE 63


Shift-register-based realization

Rate 1/2 encoder

[Encoder diagram: the input Ij,1 feeds a 2-stage shift register holding Ij−1,1 and Ij−2,1; two modulo-2 adders form the outputs Cj1 and Cj2.]

This rate 1/2 (k = 1, n = 2) code is used in the sequel to explain the Viterbi algorithm.


slide-64
SLIDE 64

Convolutional encoding Finite State Machine Channel models The Viterbi algorithm Transition diagram Lattice diagram A convolutional encoder is a FSM Coding a sequence using the coder FSM

Finite State Machine

A state space. An input that activates the transition from one state to another. An output is generated during the transition. Usual representations:

Transition diagram Lattice diagram


slide-69
SLIDE 69


Finite State Machine: transition diagram

A simple FSM

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

The state space is composed of 4 elements: 00, 01, 11, 10. Each state is represented by a node. The input is binary valued: 2 arrows start at each node. Arrows are labeled by a pair (input, output): the input that activates the transition and the output that is generated by this transition.
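Transcribed into code, the diagram becomes a small lookup table (a sketch; the state labels are the bit pairs above, and the tuple order (next_state, output) is a choice made here, not fixed by the slides):

```python
# (state, input bit) -> (next state, output bits), one entry per arrow
TRANSITIONS = {
    ("00", 0): ("00", "00"), ("00", 1): ("10", "11"),
    ("10", 0): ("01", "01"), ("10", 1): ("11", "10"),
    ("01", 0): ("00", "11"), ("01", 1): ("10", "00"),
    ("11", 0): ("01", "10"), ("11", 1): ("11", "01"),
}

# 2 arrows start at each of the 4 nodes: 8 transitions in total
print(len(TRANSITIONS))  # 8
```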


slide-73
SLIDE 73


Finite State Machine: lattice diagram

Finite State Machine

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

One slice of a lattice diagram

[One slice of the lattice diagram: the four states 00, 01, 10, 11 at time j linked to the same states at time j+1 by the labeled transitions.]

Lattice diagram A new input triggers a transition from the present state to a one-step-ahead future state. A lattice diagram unwraps the behavior of the FSM as a function of time.


slide-76
SLIDE 76


FSM of a Convolutional encoder

A simple encoder ...

[Encoder diagram: the input Ij,1 feeds a 2-stage shift register holding Ij−1,1 and Ij−2,1; two modulo-2 adders form the outputs Cj1 and Cj2.]

... and the FSM of this encoder

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]


slide-77
SLIDE 77


Coding a sequence using the coder FSM

Coding the bits 001 with a known automaton. Initial coder state: 00. Informative sequence: 001.

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

Information bearing bits enter the coder The first value at the coder input is: I0 = 0. According to the transition diagram, this input activates the transition indexed by (0, 00): this means input 0 generates output C0 = (00). The transition is from state 00 to state 00.


slide-78
SLIDE 78


Coding a sequence using the coder FSM

Coding the bits 001 with a known automaton. Initial coder state: 00. Informative sequence: 001.

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

Information bearing bits enter the coder The second value at the coder input is: I1 = 0. According to the transition diagram, this input activates the transition indexed by (0, 00): this means input 0 generates output C1 = (00). The transition is from state 00 to state 00.


slide-79
SLIDE 79


Coding a sequence using the coder FSM

Coding the bits 001 with a known automaton. Initial coder state: 00. Informative sequence: 001.

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

Information bearing bits enter the coder The last informative bit to enter the coder is 1. According to the transition diagram, this input activates the transition indexed by (1, 11): this means input 1 generates output C2 = (11). The transition is from state 00 to state 10.


slide-80
SLIDE 80


Coding a sequence using the coder FSM

Coding the bits 001 with a known automaton. Initial coder state: 00. Informative sequence: 001.

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

Lattice closure: 2 zeros at the coder input to reset its state. The first 0 activates the transition indexed by (0, 01), generates output C3 = (01) and sets the state to 01. The second 0 activates the transition indexed by (0, 11), generates output C4 = (11) and resets the state to 00.


slide-81
SLIDE 81


Coding a sequence using the coder FSM

Coding the bits 001 with a known automaton. Initial coder state: 00. Informative sequence: 001.

[Transition diagram: four state nodes 00, 10, 11, 01; arrows labeled (0,00), (1,11), (1,10), (0,11), (1,00), (1,01), (0,10), (0,01).]

Received sequence Finally, the informative sequence 001 is encoded as: [C0, C1, C2, C3, C4] = [00, 00, 11, 01, 11]
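The whole walkthrough can be replayed by driving the FSM over the input 001 plus the 2 closure zeros (a sketch; the transition table is transcribed from the diagram, and the function name is illustrative):

```python
# (state, input bit) -> (next state, output bits), from the diagram
TRANSITIONS = {
    ("00", 0): ("00", "00"), ("00", 1): ("10", "11"),
    ("10", 0): ("01", "01"), ("10", 1): ("11", "10"),
    ("01", 0): ("00", "11"), ("01", 1): ("10", "00"),
    ("11", 0): ("01", "10"), ("11", 1): ("11", "01"),
}

def fsm_encode(bits, state="00"):
    """Drive the coder FSM from `state`; return the output pairs."""
    out = []
    for b in bits:
        state, c = TRANSITIONS[(state, b)]
        out.append(c)
    return out

# 001 followed by the two closure zeros
print(fsm_encode([0, 0, 1, 0, 0]))  # ['00', '00', '11', '01', '11']
```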


slide-82
SLIDE 82


Coding a sequence using the coder FSM

Noisy received coded sequence The coded sequence [C0, C1, C2, C3, C4] = [00, 00, 11, 01, 11] is transmitted over a Binary Symmetric Channel. Let us assume that two errors occur and that the received sequence is: [y0, y1, y2, y3, y4] = [10, 01, 11, 01, 11]


slide-83
SLIDE 83

Convolutional encoding Finite State Machine Channel models The Viterbi algorithm Binary Symmetric Channel Additive White Gaussian Noise Channel

Binary Symmetric Channel

Diagram of the BSC

[Diagram of the BSC: each input bit (0 or 1) is delivered unchanged with probability 1 − p and flipped with probability p.]

Characteristics of a Binary Symmetric Channel Memoryless channel: the output only depends on the present input (no internal state). Two possible inputs (0, 1), two possible outputs (0, 1); 0 and 1 are equally affected by errors (error probability p).


slide-87
SLIDE 87


Branch Metric of a Binary Symmetric Channel

Calculation of the Branch Metric The transition probabilities are:

p(1|0) = p(0|1) = p and p(1|1) = p(0|0) = 1 − p

The Hamming distance between the received value y and the coder output Crs (generated by the transition from state r to state s) is the number of bits that differ between the two vectors.
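As a sketch, this branch metric is just a bit count (the function name is illustrative):

```python
def hamming(y, crs):
    """Hamming branch metric: number of differing bit positions."""
    assert len(y) == len(crs)
    return sum(a != b for a, b in zip(y, crs))

print(hamming("10", "00"))  # 1: one bit flipped by the channel
```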


slide-89
SLIDE 89


Branch Metric of a Binary Symmetric Channel

Calculation of the Branch Metric The likelihood is written:

p(y|Crs) = (1 − p)^n · (p/(1 − p))^dH(y, Crs)

log p(y|Crs) = dH(y, Crs) · log(p/(1 − p)) + n · log(1 − p)

n log(1 − p) is a constant and log(p/(1 − p)) < 0 (for p < 1/2): maximizing the likelihood is equivalent to minimizing dH(y, Crs). The Hamming branch metric dH(y, Crs) between the observation and the output of the FSM is adapted to the Binary Symmetric Channel.
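A quick numerical check of this equivalence (a sketch; p = 0.1 is an arbitrary value below 1/2, and the function name is illustrative): branches at smaller Hamming distance get a larger log-likelihood.

```python
import math

def bsc_loglik(y, crs, p=0.1):
    """log p(y | Crs) for a BSC with error probability p."""
    d = sum(a != b for a, b in zip(y, crs))  # Hamming distance
    return d * math.log(p / (1 - p)) + len(y) * math.log(1 - p)

# ordering by likelihood is the reverse of ordering by Hamming distance
print(bsc_loglik("11", "11") > bsc_loglik("11", "10") > bsc_loglik("11", "00"))  # True
```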


slide-92
SLIDE 92

Convolutional encoding Finite State Machine Channel models The Viterbi algorithm Binary Symmetric Channel Additive White Gaussian Noise Channel

Additive White Gaussian Noise Channel

Diagram of the AWGN channel

[Diagram of the AWGN channel: the input ak plus the noise nk gives the output ak + nk.]

Characteristics of an AWGN channel Memoryless channel: the output only depends on the present input (no internal state). Two possible inputs (−1, +1); a real- (or even complex-) valued output. The output is the superposition of the input and a Gaussian noise. 0 and 1 are equally affected by errors.


slide-96
SLIDE 96


Branch Metric of an AWGN Channel

Calculation of the Branch Metric The probability density function of a Gaussian noise is given by:

p(x) = (1/(σ√2π)) · exp(−x²/(2σ²))

The Euclidean distance between the analog received value y and the coder output Crs (generated by the transition from state r to state s) is the sum of squared errors between the two vectors.

J.-M. Brossier Coding and decoding with convolutional codes. The Viterbi Algorithm.
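As an illustrative sketch (function and variable names are made up, not from the slides), this sum of squared errors can be computed as:

```python
# Sketch of the AWGN branch metric: the squared Euclidean distance
# between the received analog vector y and a candidate coder output C_rs.
def branch_metric(y, c_rs):
    """Sum of squared errors between received vector y and coder output C_rs."""
    return sum((yi - ci) ** 2 for yi, ci in zip(y, c_rs))

# Received noisy pair vs. the two BPSK-mapped coder outputs (+1/-1 per bit):
y = [0.8, -1.1]
print(branch_metric(y, [+1.0, -1.0]))  # small: close to the transmitted pair
print(branch_metric(y, [-1.0, +1.0]))  # large: far from it
```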


slide-98
SLIDE 98

Branch Metric of an AWGN Channel

Calculation of the Branch Metric
The likelihood is written:

p(y|Crs) = (1 / (σ√(2π)))^n exp(−‖y − Crs‖² / (2σ²)) = (1 / (σ√(2π)))^n exp(−d²E(y, Crs) / (2σ²))

Since log p(y|Crs) ∝ −d²E(y, Crs) / (2σ²), maximizing the likelihood is equivalent to minimizing the Euclidean distance.
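A quick numerical check of this equivalence (a sketch; σ, the received pair and the candidate outputs below are made-up values):

```python
import math

# Check that ranking candidates by Gaussian log-likelihood gives the same
# winner as ranking them by squared Euclidean distance.
sigma = 0.7
y = [0.9, -0.8]                                     # received analog pair
candidates = [[1, 1], [1, -1], [-1, 1], [-1, -1]]   # possible coder outputs

def sq_distance(y, c):
    return sum((yi - ci) ** 2 for yi, ci in zip(y, c))

def log_likelihood(y, c):
    n = len(y)
    # log of (1/(sigma*sqrt(2*pi)))^n * exp(-d^2/(2*sigma^2))
    return -n * math.log(sigma * math.sqrt(2 * math.pi)) \
           - sq_distance(y, c) / (2 * sigma ** 2)

best_ml = max(candidates, key=lambda c: log_likelihood(y, c))
best_d = min(candidates, key=lambda c: sq_distance(y, c))
assert best_ml == best_d == [1, -1]
```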


slide-100
SLIDE 100

Branch Metric of an AWGN Channel

Binary Symmetric Channel as an approximation of the Gaussian channel
Note that the BS channel is a coarse approximation of the AWGN channel, with an error probability p given by:

p = (1 / (σ√(2π))) ∫ from 1 to +∞ exp(−x² / (2σ²)) dx
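This tail integral has a closed form via the complementary error function; a small sketch (the function name is illustrative):

```python
import math

# Error probability of the equivalent BSC: the tail integral from 1 to
# infinity of the N(0, sigma^2) density, i.e. the probability that the
# Gaussian noise pushes a -1/+1 symbol across the decision threshold.
def bsc_error_probability(sigma):
    # p = (1/(sigma*sqrt(2*pi))) * integral_1^inf exp(-x^2/(2*sigma^2)) dx
    #   = Q(1/sigma) = 0.5 * erfc(1 / (sigma * sqrt(2)))
    return 0.5 * math.erfc(1.0 / (sigma * math.sqrt(2.0)))

print(bsc_error_probability(1.0))  # Q(1) ≈ 0.1587
```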

slide-101
SLIDE 101

Maximum Likelihood Sequence Estimation (MLSE)

Problem statement
A finite-length sequence (length L) drawn from a finite alphabet (size 2) enters a FSM. The FSM output goes through a channel that corrupts the signal with some kind of noise. The noisy channel output is observed.

ML estimation of the transmitted sequence
How can we find the input sequence that maximizes the likelihood of the observations? Testing all 2^L input sequences is out of reach! The Viterbi algorithm determines this sequence with minimal complexity.
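The brute-force approach can be sketched as follows, for contrast with Viterbi. The FSM here is a toy rate-1 differential encoder (output = input XOR previous input), standing in for the convolutional encoder, whose details this slide does not fix; all names are illustrative.

```python
from itertools import product

# Toy FSM: out_j = b_j XOR b_{j-1}, with initial state 0.
def fsm_output(bits):
    state, out = 0, []
    for b in bits:
        out.append(b ^ state)
        state = b
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def mlse_bruteforce(received, L):
    # Test every one of the 2**L candidate input sequences and keep the
    # one whose FSM output is closest to the received sequence.
    return min(product([0, 1], repeat=L),
               key=lambda bits: hamming(fsm_output(list(bits)), received))

rx = [1, 1, 0, 1]            # noisy BSC output
print(mlse_bruteforce(rx, 4))  # → (1, 0, 0, 1)
```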


slide-109
SLIDE 109

Maximum Likelihood Sequence Estimation (MLSE)

In search of the optimal complexity
Parameterize the optimization using the state of the FSM rather than the input sequence. Remove sub-optimal sequences on the fly. Minimizing a distance is simpler than maximizing a likelihood.


slide-113
SLIDE 113

Maximum Likelihood Sequence Estimation (MLSE)

Likelihood
The informative sequence is made of L blocks Ij of k bits each: I = [I0, · · · , IL−1]. This sequence is completed by M blocks of k zeros to close the lattice. The coded sequence is:

C = [C0, · · · , CL−1 | CL, · · · , CL+M−1]

For a memoryless channel, the received sequence is:

y = [y0, · · · , yL−1 | yL, · · · , yL+M−1]

slide-114
SLIDE 114

Maximum Likelihood Sequence Estimation (MLSE)

Likelihood
Notations: sj denotes the state of the machine at time j; the initial state is zero. sjsj+1 denotes the edge between states sj and sj+1. There is a one-to-one relationship between a path s0, · · · , sL+M in the lattice and the sequence C0, · · · , CL+M−1.

slide-115
SLIDE 115

Maximum Likelihood Sequence Estimation (MLSE)

Likelihood
Since observation yj depends on sj and Cj, or equivalently on the edge sjsj+1, the log-likelihood function is given by:

log P(y|C) = Σ (j = 0 to L+M−1) log P(yj | sj, Cj) = Σ (j = 0 to L+M−1) log P(yj | sjsj+1)

lj(sj, sj+1) = log P(yj | sjsj+1) is called a branch metric. Sums of branch metrics Σ (j = 0 to k) lj(sj, sj+1) are called path (or cumulative) metrics (from the initial node to sk+1). Searching for the optimal input sequence is equivalent to finding the path in the lattice whose final cumulative metric is minimal.
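This minimization over paths can be sketched generically. The trellis below is a toy placeholder (two states, three steps), not the slides' encoder; branch metrics are given directly rather than computed from a channel.

```python
# Generic Viterbi search for the path of minimal cumulative metric.
# branches[j] maps (state_from, state_to) -> branch metric l_j.
def viterbi(branches, initial_state=0):
    # state -> (cumulative metric, best path reaching that state)
    paths = {initial_state: (0.0, [initial_state])}
    for step in branches:
        new_paths = {}
        for (r, s), metric in step.items():
            if r not in paths:
                continue
            cand = paths[r][0] + metric
            # Keep only the minimal-metric path into each arrival state s.
            if s not in new_paths or cand < new_paths[s][0]:
                new_paths[s] = (cand, paths[r][1] + [s])
        paths = new_paths
    return min(paths.values(), key=lambda t: t[0])

# Two-state toy trellis, three steps; the last step closes back to state 0:
branches = [
    {(0, 0): 1.0, (0, 1): 0.0},
    {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0},
    {(0, 0): 0.0, (1, 0): 1.0},
]
print(viterbi(branches))
```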


slide-118
SLIDE 118

Notations

Notations for transitions on the lattice diagram

Legend of the transition diagram:
(e, s), (e′, s′): edges indexed by the couple (input, output).
b, b′: branch metrics.
q, q′: weights of the optimal paths reaching the previous states.
q + b, q′ + b′: cumulative (path) metrics.
p = min(q + b, q′ + b′): weight (and state s) of the optimal path at this node; the competing branch is removed when q′ + b′ > q + b.
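The per-node rule p = min(q + b, q′ + b′) is the add-compare-select step; a minimal sketch (names illustrative):

```python
# Add-compare-select: compute both candidate cumulative metrics,
# keep the smaller one, and report which branch survives.
def add_compare_select(q, b, q_prime, b_prime):
    first, second = q + b, q_prime + b_prime
    if second > first:
        return first, "branch (e, s) survives"    # competitor removed
    return second, "branch (e', s') survives"

print(add_compare_select(q=1, b=1, q_prime=1, b_prime=2))
```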

slide-119
SLIDE 119

Viterbi for decoding the output of a BSC

Viterbi in progress ... open the lattice (1/2)

The FSM memorises two past values: it takes two steps to reach the steady-state behavior. During these first two steps, only a fraction of the impulse response is used; the lattice is being opened during this transient period. The initial state is 00. Since the input is binary-valued, two edges start from state 00, and only two states are reachable: 00 and 10.

[Lattice diagram: from state 00, edge (0, 00) leads to state 00 and edge (1, 11) leads to state 10.]

slide-120
SLIDE 120

Viterbi for decoding the output of a BSC

Viterbi in progress ... open the lattice (1/2)

Computation of branch metrics: Branch 00 → 00: the FSM generates the output 00; the Hamming distance between this FSM output and the channel output 10 is 1. Branch 00 → 10: the FSM generates the output 11; the Hamming distance between this FSM output and the channel output 10 is 1. Each branch metric is surrounded by a circle on the figure.

[Lattice diagram: both branches leaving state 00 carry a branch metric of 1.]
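This Hamming-distance branch metric is straightforward to compute (function name illustrative):

```python
# Branch metric for a BSC: Hamming distance between the FSM output on a
# branch and the received channel output.
def hamming_branch_metric(fsm_output, channel_output):
    return sum(a != b for a, b in zip(fsm_output, channel_output))

# The two branches leaving state 00, with received channel output "10":
print(hamming_branch_metric("00", "10"))  # branch 00 -> 00: metric 1
print(hamming_branch_metric("11", "10"))  # branch 00 -> 10: metric 1
```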

slide-121
SLIDE 121

Viterbi for decoding the output of a BSC

Viterbi in progress ... open the lattice (1/2)

Computation of cumulative (or path) metrics: A cumulative metric is the sum of the branch metrics along the path that reaches a given node. Its value is printed on the edge just before the node. The path metric of the best path that reaches a node is printed in the black square representing this node. The initial path metric is set to 0.

[Lattice diagram: path metrics after the first step.]

slide-122
SLIDE 122

Viterbi for decoding the output of a BSC

Viterbi in progress ... open the lattice (1/2)

Computation of cumulative (or path) metrics: Branch 00 → 00: the path metric to reach state 00 equals the previous path metric (0, the initial value) plus the branch metric 1: path metric = 1. Branch 00 → 10: the path metric to reach state 10 equals the previous path metric (0, the initial value) plus the branch metric 1: path metric = 1.

[Lattice diagram: both reachable states now have path metric 1.]

slide-123
SLIDE 123

Viterbi for decoding the output of a BSC

Viterbi in progress ... open the lattice (2/2)

[Lattice diagram: second transient step; from state 00 edges (0, 00) and (1, 11), from state 10 edges (0, 01) and (1, 10); all four states 00, 01, 10, 11 are now reachable, with their branch and path metrics.]


slide-126
SLIDE 126

Viterbi for decoding the output of a BSC

Viterbi in progress ... steady-state behavior

The transient is finished: two edges start from each node and two edges reach each node.

[Lattice diagram: steady-state section with edges (0,00), (1,11), (0,01), (1,10), (0,11), (1,00), (1,01), (0,10).]

slide-127
SLIDE 127

Viterbi for decoding the output of a BSC

Viterbi in progress ... steady-state behavior

Computation of the branch metrics, as usual.

[Lattice diagram: branch metrics added on every edge of the steady-state section.]

slide-128
SLIDE 128

Viterbi for decoding the output of a BSC

Viterbi in progress ... steady-state behavior

For each arrival node, the algorithm selects the path whose metric is minimal.

[Lattice diagram: surviving path metrics after the compare-select step.]

slide-129
SLIDE 129

Viterbi for decoding the output of a BSC

Viterbi in progress ... close the lattice (1/2)

The informative sequence is finished: two zero inputs reset the state to its initial value.

[Lattice diagram: only the zero-input edges remain while the lattice closes.]

slide-130
SLIDE 130

Viterbi for decoding the output of a BSC

Viterbi in progress ... close the lattice (1/2)

Computation of the branch metrics, as usual.

[Lattice diagram: branch metrics on the closing edges.]


slide-132
SLIDE 132

Viterbi for decoding the output of a BSC

Viterbi in progress ... close the lattice (2/2)

[Lattice diagram: last closing step; the remaining edges lead back to the initial state 00.]

slide-135
SLIDE 135

Viterbi for decoding the output of a BSC

Viterbi done. Backtracking

[Lattice diagram: the completed lattice; backtracking from the final node 00 along the surviving branches yields the maximum-likelihood path.]
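The backtracking step can be sketched as follows: once all path metrics are computed, walk back from the final state through the stored survivor (best predecessor) of each node. The survivor memory below is a toy placeholder, not the worked example's actual values.

```python
# survivors[j][s] is the state at time j on the best path into state s
# at time j+1; backtracking reads this memory from the end to the start.
def backtrack(survivors, final_state):
    path = [final_state]
    for step in reversed(survivors):
        path.append(step[path[-1]])
    path.reverse()
    return path

# Toy survivor memory for a 4-step, two-state decoder (placeholder values):
survivors = [
    {0: 0, 1: 0},   # time 1: both states came from state 0
    {0: 0, 1: 0},
    {0: 1, 1: 0},
    {0: 0},         # time 4: lattice closed back to state 0
]
print(backtrack(survivors, final_state=0))  # → [0, 0, 1, 0, 0]
```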