ECEN 5682 Theory and Practice of Error Control Codes: Convolutional Code Performance (PowerPoint PPT Presentation)


SLIDE 1

Convolutional Code Performance

ECEN 5682 Theory and Practice of Error Control Codes

Peter Mathys

University of Colorado

Spring 2007

SLIDE 2

Convolutional Code Performance Performance Measures

Performance Measures

Definition: A convolutional encoder which maps one or more data sequences of infinite weight into code sequences of finite weight is called a catastrophic encoder.

Example: Encoder #5. The binary R = 1/2, K = 3 convolutional encoder with transfer function matrix

$$G(D) = \begin{pmatrix} 1 + D & 1 + D^2 \end{pmatrix}$$

has the encoder state diagram shown in Figure 15, with states S0 = 00, S1 = 10, S2 = 01, and S3 = 11.

SLIDE 3

Convolutional Code Performance Performance Measures

[Fig. 15: Encoder state diagram for the catastrophic R = 1/2, K = 3 encoder, with states S0, S1, S2, S3 and branch labels of the form data bit/code bits (e.g., 1/11, 0/00).]

SLIDE 4

Convolutional Code Performance Performance Measures

[Fig. 16: A detour of weight w = 7 and data weight i = 3, starting at time t = 0, shown on the trellis over states S0, S1, S2, S3 (branch labels are the code bit pairs 00, 01, 10, 11).]

SLIDE 5

Convolutional Code Performance Performance Measures

Definition: The complete weight distribution {A(w, i, ℓ)} of a convolutional code is defined as the number of detours (or codewords), beginning at time 0 in the all-zero state S0 of the encoder, returning again for the first time to S0 after ℓ time units, and having code (Hamming) weight w and data (Hamming) weight i.

Definition: The extended weight distribution {A(w, i)} of a convolutional code is defined by

$$A(w, i) = \sum_{\ell=1}^{\infty} A(w, i, \ell).$$

That is, {A(w, i)} is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w and corresponding data sequence (Hamming) weight i.
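As a concrete illustration of these definitions (not part of the original slides), the following Python sketch enumerates detours on the encoder state diagram to tabulate A(w, i, ℓ) up to a maximum detour length and then sums over ℓ to obtain A(w, i). It uses the non-catastrophic G(D) = [1 + D², 1 + D + D²] encoder that appears later in these slides; the tap representation, the detour-length cap, and all function names are choices made for the sketch.

```python
from collections import defaultdict

# Rate 1/2, K = 3 feedforward encoder with G(D) = [1 + D^2, 1 + D + D^2]
# (octal 5, 7); taps ordered (current input, D, D^2).
G = [(1, 0, 1), (1, 1, 1)]

def step(state, bit):
    """One encoder step; state = (previous input, input before that)."""
    reg = (bit,) + state
    out = [sum(g * r for g, r in zip(taps, reg)) % 2 for taps in G]
    return (bit, state[0]), out          # (next state, n = 2 code bits)

def weight_distribution(max_len=12):
    """Tabulate A(w, i, l): detours that leave S0 at time 0 and first return
    to S0 after l steps, with code weight w and data weight i."""
    A = defaultdict(int)
    zero = (0, 0)
    s, out = step(zero, 1)               # a detour must leave S0 (input 1)
    frontier = [(s, sum(out), 1, 1)]     # (state, w, i, l)
    for _ in range(max_len):
        nxt = []
        for state, w, i, l in frontier:
            for bit in (0, 1):
                s2, out = step(state, bit)
                w2, i2, l2 = w + sum(out), i + bit, l + 1
                if s2 == zero:
                    A[(w2, i2, l2)] += 1     # first return to S0: a detour
                else:
                    nxt.append((s2, w2, i2, l2))
        frontier = nxt
    return A

A = weight_distribution()
Awi = defaultdict(int)                   # extended distribution A(w, i)
for (w, i, l), count in A.items():
    Awi[(w, i)] += count
print(sorted(Awi.items())[:4])   # expect A(5,1)=1, A(6,2)=2, A(7,3)=4, A(8,4)=8
```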

SLIDE 6

Convolutional Code Performance Performance Measures

Definition: The weight distribution {Aw} of a convolutional code is defined by

$$A_w = \sum_{i=1}^{\infty} A(w, i).$$

That is, {Aw} is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w.

Theorem: The probability of an error event (or decoding error) PE for a convolutional code with weight distribution {Aw}, decoded by an ML decoder, at any given time t (measured in frames) is upper bounded by

$$P_E \le \sum_{w=d_{free}}^{\infty} A_w\, P_w(E),$$

where Pw(E) = P{ML decoder makes detour with weight w}.

SLIDE 7

Convolutional Code Performance Performance Measures

Theorem: On a memoryless BSC with transition probability ǫ < 0.5, the probability of error Pd(E) between two detours or codewords distance d apart is given by

$$P_d(E) = \begin{cases}
\displaystyle\sum_{e=(d+1)/2}^{d} \binom{d}{e}\,\epsilon^{e}\,(1-\epsilon)^{d-e}, & d \text{ odd}, \\[2ex]
\dfrac{1}{2}\dbinom{d}{d/2}\,\epsilon^{d/2}(1-\epsilon)^{d/2} + \displaystyle\sum_{e=d/2+1}^{d} \binom{d}{e}\,\epsilon^{e}\,(1-\epsilon)^{d-e}, & d \text{ even}.
\end{cases}$$

Proof: Under the Hamming distance measure, an error between two binary codewords distance d apart is made if more than d/2 of the bits in which the codewords differ are in error. If d is even and exactly d/2 bits are in error, then an error is made with probability 1/2. QED
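A small Python helper (an illustration, not from the slides; the function name is made up) that evaluates Pd(E) exactly as stated in the theorem:

```python
from math import comb

def Pd_error(d, eps):
    """Exact pairwise error probability between two codewords at Hamming
    distance d on a memoryless BSC with crossover probability eps < 0.5."""
    tail = sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(d // 2 + 1, d + 1))
    if d % 2 == 0:   # a tie (exactly d/2 errors) causes an error half the time
        tail += 0.5 * comb(d, d // 2) * (eps * (1 - eps))**(d // 2)
    return tail

# Example: d = dfree = 5 at eps = 0.01; dominant term C(5,3) 0.01^3 0.99^2 ≈ 9.8e-6
print(Pd_error(5, 0.01))
```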

SLIDE 8

Convolutional Code Performance Performance Measures

Note: A somewhat simpler but less tight bound is obtained by dropping the factor of 1/2 in the first term for d even, as follows:

$$P_d(E) \le \sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e}\,\epsilon^{e}\,(1-\epsilon)^{d-e}.$$

A much simpler, but often also much looser, bound is the Bhattacharyya bound

$$P_d(E) \le \frac{1}{2}\,\bigl[\,4\,\epsilon\,(1-\epsilon)\,\bigr]^{d/2}.$$
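For quick numerical comparisons, the two bounds can be coded directly; this is a sketch with made-up names, not part of the slides:

```python
from math import ceil, comb

def Pd_upper(d, eps):
    """Simpler bound: the d even tie term is kept with full weight."""
    return sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(ceil(d / 2), d + 1))

def Pd_bhattacharyya(d, eps):
    """Bhattacharyya bound as stated above: (1/2) [4 eps (1-eps)]^(d/2)."""
    return 0.5 * (4 * eps * (1 - eps)) ** (d / 2)

for d in (5, 6, 7):
    print(d, Pd_upper(d, 0.01), Pd_bhattacharyya(d, 0.01))
```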

SLIDE 9

Convolutional Code Performance Performance Measures

Probability of Symbol Error. Suppose now that Aw = Σ_{i=1}^{∞} A(w, i) is substituted in the bound for PE. Then

$$P_E \le \sum_{w=d_{free}}^{\infty}\;\sum_{i=1}^{\infty} A(w, i)\, P_w(E).$$

Multiplying A(w, i) by i and summing over all i then yields the total number of data symbol errors that result from all detours of weight w as Σ_{i=1}^{∞} i A(w, i). Dividing by k, the number of data symbols per frame, thus leads to the following theorem.

Theorem: The probability of a symbol error Ps(E) at any given time t (measured in frames) for a convolutional code with rate R = k/n and extended weight distribution {A(w, i)}, when decoded by an ML decoder, is upper bounded by

$$P_s(E) \le \frac{1}{k} \sum_{w=d_{free}}^{\infty}\;\sum_{i=1}^{\infty} i\, A(w, i)\, P_w(E),$$

where Pw(E) is the probability of error between the all-zero path and a detour of weight w.
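Putting the pieces together, a minimal sketch of this union bound (assuming the Awi table and the Pd_error helper from the earlier sketches, and truncating the infinite sums to the weights actually enumerated):

```python
def Ps_union_bound(Awi, Pw, k=1):
    """Union bound Ps(E) <= (1/k) * sum_w sum_i i * A(w, i) * Pw(w),
    truncated to the (w, i) pairs present in the table Awi.
    `Pw` is any function returning the pairwise error probability for weight w,
    e.g. the exact BSC expression Pd_error(w, eps) sketched earlier."""
    return sum(i * count * Pw(w) for (w, i), count in Awi.items()) / k

# Hard decisions on a BSC with eps = 0.01, using the helpers sketched above
print(Ps_union_bound(Awi, lambda w: Pd_error(w, 0.01), k=1))
```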

SLIDE 10

Convolutional Code Performance Performance Measures

The graph on the next slide shows different bounds for the probability of a bit error on a BSC for a binary rate R = 1/2, K = 3 convolutional encoder with transfer function matrix

$$G(D) = \begin{pmatrix} 1 + D^2 & 1 + D + D^2 \end{pmatrix}.$$

SLIDE 11

Convolutional Code Performance Performance Measures

[Figure: Binary R = 1/2, K = 3, dfree = 5 convolutional code, bit error probability. Pb(E) versus log10(ε); curves: Pb(E) BSC, Pb(E) BSC Bhattacharyya, Pb(E) AWGN soft.]

SLIDE 12

Convolutional Code Performance Performance Measures

[Figure: Upper bounds on Pb(E) for convolutional codes on the BSC (hard decisions). Pb(E) versus log10(ε); curves: R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]

SLIDE 13

Convolutional Code Performance Performance Measures

SLIDE 14

Convolutional Code Performance Performance Measures

Transmission Over AWGN Channel

The following figure shows a “one-shot” model for transmitting a data symbol with value a0 over an additive Gaussian noise (AGN) waveform channel using pulse amplitude modulation (PAM) of a pulse p(t) and a matched filter (MF) receiver. The main reason for using a “one-shot” model for performance evaluation with respect to channel noise is that it avoids intersymbol interference (ISI).

[Block diagram: the transmitted signal s(t) = a0 p(t) passes through the channel, where noise n(t) with PSD Sn(f) is added to give r(t); the receiver filters r(t) with hR(t) to produce b(t), which is sampled at t = 0 to obtain b0.]

SLIDE 15

Convolutional Code Performance Performance Measures

If the noise is white with power spectral density (PSD) Sn(f) = N0/2 for all f, the channel model is called the additive white Gaussian noise (AWGN) model. In this case the matched filter (which maximizes the SNR at its output at t = 0) is

$$h_R(t) = \frac{p^*(-t)}{\displaystyle\int_{-\infty}^{\infty} |p(\mu)|^2\, d\mu} \quad\Longleftrightarrow\quad H_R(f) = \frac{P^*(f)}{\displaystyle\int_{-\infty}^{\infty} |P(\nu)|^2\, d\nu},$$

where * denotes complex conjugation. If the PAM pulse p(t) is normalized so that Ep = ∫_{−∞}^{∞} |p(µ)|² dµ = 1, then the symbol energy at the input of the MF is

$$E_s = \mathrm{E}\!\left[\int_{-\infty}^{\infty} |s(\mu)|^2\, d\mu\right] = \mathrm{E}\bigl[|a_0|^2\bigr],$$

where the expectation is necessary since a0 is a random variable.

SLIDE 16

Convolutional Code Performance Performance Measures

When the AWGN model with Sn(f) = N0/2 is used and a0 = α is transmitted, the received symbol b0 at the sampler after the output of the MF is a Gaussian random variable with mean α and variance σb² = N0/2. For antipodal binary signaling (e.g., using BPSK) a0 ∈ {−√Es, +√Es}, where Es is the (average) energy per symbol. Thus, b0 is characterized by the conditional pdf's

$$f_{b_0}(\beta\,|\,a_0=-\sqrt{E_s}) = \frac{e^{-(\beta+\sqrt{E_s})^2/N_0}}{\sqrt{\pi N_0}}, \qquad f_{b_0}(\beta\,|\,a_0=+\sqrt{E_s}) = \frac{e^{-(\beta-\sqrt{E_s})^2/N_0}}{\sqrt{\pi N_0}}.$$

These pdf's are shown graphically on the following slide.

SLIDE 17

Convolutional Code Performance Performance Measures

[Figure: the two conditional pdfs fb0(β|a0=−√Es) and fb0(β|a0=+√Es), centered at −√Es and +√Es (separation 2√Es), with decision regions â0 = −√Es for β < 0 and â0 = +√Es for β > 0.]

If the two values of a0 are equally likely, or if an ML decoding rule is used, then the (hard) per-symbol decision rule is to decide â0 = +√Es if β > 0 and â0 = −√Es otherwise (threshold at β = 0).

SLIDE 18

Convolutional Code Performance Performance Measures

The probability of a symbol error when hard decisions are used is

$$P(E\,|\,a_0=-\sqrt{E_s}) = \frac{1}{\sqrt{\pi N_0}} \int_{0}^{\infty} e^{-(\beta+\sqrt{E_s})^2/N_0}\, d\beta = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_s}{N_0}}\right),$$

where

$$\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-\gamma^2}\, d\gamma \approx e^{-x^2}.$$

Because of the symmetry of antipodal signaling, the same result is obtained for P(E|a0=+√Es), and thus a BSC derived from an AWGN channel used with antipodal signaling has transition probability

$$\epsilon = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_s}{N_0}}\right),$$

where Es is the energy received per transmitted symbol.
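As a numerical sanity check, a small sketch (names are made up, not from the slides) of this BSC crossover probability as a function of Es/N0 in dB:

```python
from math import erfc, sqrt

def bsc_crossover(es_over_n0_db):
    """eps = (1/2) erfc(sqrt(Es/N0)) for antipodal signaling over AWGN,
    with Es/N0 given in dB."""
    return 0.5 * erfc(sqrt(10 ** (es_over_n0_db / 10)))

for snr_db in (0, 3, 6, 9):
    print(snr_db, bsc_crossover(snr_db))   # e.g. 6 dB -> eps ≈ 2.4e-3
```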

SLIDE 19

Convolutional Code Performance Performance Measures

To make a fair comparison in terms of signal-to-noise ratio (SNR) of the transmitted information symbols between coded and uncoded systems, the energy per code symbol of the coded system needs to be scaled by the rate R of the code. Thus, when hard decisions and coding are used in a binary system, the transition probability of the BSC model becomes

$$\epsilon_c = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{R\, E_s}{N_0}}\right),$$

where R = k/n is the rate of the code. The figure on the next slide compares Pb(E) versus Eb/N0 for an uncoded and a coded binary system. The coded system uses an R = 1/2, K = 3 convolutional encoder with

$$G(D) = \begin{pmatrix} 1 + D^2 & 1 + D + D^2 \end{pmatrix}.$$
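Combining this rate scaling with the hard-decision union bound sketched earlier gives curves of the kind plotted on the next slide. A rough sketch, assuming the bsc_crossover, Pd_error, Ps_union_bound and Awi helpers from the earlier sketches, and interpreting Es/N0 here as the SNR per information bit (an interpretation, not something stated on this slide):

```python
from math import log10

def pb_coded_hard(eb_over_n0_db, Awi, R=0.5, k=1):
    """Hard-decision union bound at a given Eb/N0 (dB): the energy per code
    symbol is R*Eb, so the BSC crossover is evaluated at R*Eb/N0."""
    eps_c = bsc_crossover(eb_over_n0_db + 10 * log10(R))
    return Ps_union_bound(Awi, lambda w: Pd_error(w, eps_c), k)

for snr_db in (4, 6, 8):
    uncoded = bsc_crossover(snr_db)   # uncoded Pb(E) = (1/2) erfc(sqrt(Eb/N0))
    print(snr_db, uncoded, pb_coded_hard(snr_db, Awi))
```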

SLIDE 20

Convolutional Code Performance Performance Measures

[Figure: Binary R = 1/2, K = 3, dfree = 5 convolutional code, hard decisions, AWGN channel. Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: Pb(E) uncoded, Pb(E) union bound, Pb(E) Bhattacharyya.]

SLIDE 21

Convolutional Code Performance Performance Measures

Definition: Coding Gain. Coding gain is defined as the reduction in Es/N0 permissible for a coded communication system to obtain the same probability of error (Ps(E) or Pb(E)) as an uncoded system, both using the same average energy per transmitted information symbol.

Definition: Coding Threshold. The value of Es/N0 (where Es is the energy per transmitted information symbol) for which the coding gain becomes zero is called the coding threshold.

The graphs on the following slide show Pb(E) (computed using the union bound) versus Eb/N0 for a number of different binary convolutional encoders.

SLIDE 22

Convolutional Code Performance Performance Measures

[Figure: Upper bounds on Pb(E) for convolutional codes on the AWGN channel, hard decisions. Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: Uncoded; R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]

SLIDE 23

Convolutional Code Performance Performance Measures

Soft Decisions and AWGN Channel

Assuming a memoryless channel model used without feedback, the ML decoding rule after the MF and the sampler is: output code sequence estimate ĉ = ci iff i maximizes

$$f_{\mathbf b}(\boldsymbol\beta\,|\,\mathbf a = \mathbf c_i) = \prod_{j=0}^{N-1} f_{b_j}(\beta_j\,|\,a_j = c_{ij}),$$

over all code sequences ci = (ci0, ci1, ci2, . . .) for i = 0, 1, 2, . . .. If the mapping 0 → −1 and 1 → +1 is used so that cij ∈ {−1, +1}, then fbj(βj|aj=cij) can be written as

$$f_{b_j}(\beta_j\,|\,a_j = c_{ij}) = \frac{e^{-(\beta_j - c_{ij}\sqrt{E_s})^2/N_0}}{\sqrt{\pi N_0}}.$$

SLIDE 24

Convolutional Code Performance Performance Measures

Taking (natural) logarithms and defining vj = βj/√Es yields

$$\begin{aligned}
\ln f_{\mathbf b}(\boldsymbol\beta\,|\,\mathbf a=\mathbf c_i)
&= \ln \prod_{j=0}^{N-1} f_{b_j}(\beta_j\,|\,a_j=c_{ij})
 = \sum_{j=0}^{N-1} \ln f_{b_j}(\beta_j\,|\,a_j=c_{ij}) \\
&= -\sum_{j=0}^{N-1} \frac{(\beta_j - c_{ij}\sqrt{E_s})^2}{N_0} - \frac{N}{2}\ln(\pi N_0) \\
&= -\frac{E_s}{N_0}\sum_{j=0}^{N-1} \bigl(v_j^2 - 2\,v_j c_{ij} + c_{ij}^2\bigr) - \frac{N}{2}\ln(\pi N_0) \\
&= \frac{2E_s}{N_0}\sum_{j=0}^{N-1} v_j c_{ij} - \left[\frac{|\boldsymbol\beta|^2 + N E_s}{N_0} + \frac{N}{2}\ln(\pi N_0)\right]
 = K_1 \sum_{j=0}^{N-1} v_j c_{ij} - K_2,
\end{aligned}$$

where the last step uses c²ij = 1 and Es Σj v²j = |β|², and where K1 and K2 are constants independent of the codeword ci and thus irrelevant for ML decoding.
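In other words, ML decoding reduces to maximizing the correlation Σj vj cij between the scaled received samples and the ±1 code sequence. A minimal sketch of this metric (an illustration, not from the slides):

```python
def correlation_metric(v, code_bits):
    """Soft-decision ML path metric: sum of v_j * c_ij with code bits
    mapped 0 -> -1, 1 -> +1 (larger is better)."""
    return sum(vj * (2 * c - 1) for vj, c in zip(v, code_bits))

v = [-0.4, -1.7, 0.1, 0.3]
print(correlation_metric(v, [0, 0, 0, 0]))   # all-zero path:  0.4 + 1.7 - 0.1 - 0.3 = 1.7
print(correlation_metric(v, [1, 1, 0, 1]))   # competing path: -0.4 - 1.7 - 0.1 + 0.3 = -1.9
```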

SLIDE 25

Convolutional Code Performance Performance Measures

Example: Suppose the convolutional encoder with

$$G(D) = \begin{pmatrix} 1 & 1 + D \end{pmatrix}$$

is used and the received data is

v = −0.4, −1.7, 0.1, 0.3, −1.1, 1.2, 1.2, 0.0, 0.3, 0.2, −0.2, 0.7, . . .
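The slides do not carry out the decoding here, but a compact soft-decision Viterbi sketch for this two-state R = 1/2 encoder, maximizing the correlation metric above, could look as follows (all names, the framing of the samples, and the absence of termination bits are choices made for the sketch):

```python
def viterbi_soft(v, n=2):
    """Soft-decision Viterbi decoding for the 2-state encoder G(D) = [1, 1+D],
    maximizing the correlation metric sum_j v_j * (2*c_j - 1)."""
    def encode(state, u):                  # c1 = u, c2 = u XOR previous input
        return (u, u ^ state), u           # (code bits, next state)

    frames = [v[i:i + n] for i in range(0, len(v) - len(v) % n, n)]
    metric = {0: 0.0, 1: float("-inf")}    # encoder starts in state 0
    paths = {0: [], 1: []}
    for frame in frames:
        new_metric = {0: float("-inf"), 1: float("-inf")}
        new_paths = {}
        for s in (0, 1):                   # previous state
            for u in (0, 1):               # hypothesized data bit
                code, s_next = encode(s, u)
                m = metric[s] + sum(vj * (2 * c - 1) for vj, c in zip(frame, code))
                if m > new_metric[s_next]:
                    new_metric[s_next] = m
                    new_paths[s_next] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[max(metric, key=metric.get)]

v = [-0.4, -1.7, 0.1, 0.3, -1.1, 1.2, 1.2, 0.0, 0.3, 0.2, -0.2, 0.7]
print(viterbi_soft(v))   # ML data-bit estimates for these six received frames
```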

SLIDE 26

Convolutional Code Performance Performance Measures

Soft Decisions versus Hard Decisions

To compare the performance of coded binary systems on an AWGN channel when the decoder performs either hard or soft decisions, the energy Ec per coded bit is fixed and Pb(E) is plotted versus the crossover probability ǫ of the hard-decision BSC model, where ǫ = (1/2) erfc(√(Ec/N0)) as before. For soft decisions the expression

$$P_w(E) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{w\, E_c}{N_0}}\right)$$

is used for the probability that the ML decoder makes a detour with weight w from the correct path. Thus, for soft decisions with fixed SNR per code symbol

$$P_b(E) \le \frac{1}{2k} \sum_{w=d_{free}}^{\infty} D_w\, \mathrm{erfc}\!\left(\sqrt{\frac{w\, E_c}{N_0}}\right).$$
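A minimal sketch of this soft-decision bound (assuming that Dw denotes the total information weight Σi i·A(w, i) of all weight-w detours, and reusing the truncated Awi table from the earlier sketch):

```python
from math import erfc, sqrt
from collections import defaultdict

def pb_soft_bound(Awi, ec_over_n0, k=1):
    """Soft-decision union bound Pb(E) <= (1/(2k)) sum_w Dw erfc(sqrt(w Ec/N0)),
    with Dw = sum_i i*A(w, i) taken from the (truncated) table Awi."""
    Dw = defaultdict(float)
    for (w, i), count in Awi.items():
        Dw[w] += i * count
    return sum(d * erfc(sqrt(w * ec_over_n0)) for w, d in Dw.items()) / (2 * k)

print(pb_soft_bound(Awi, ec_over_n0=1.0))    # Ec/N0 = 0 dB
```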

Examples are shown on the next slide.

SLIDE 27

Convolutional Code Performance Performance Measures

[Figure: Upper bounds on Pb(E) for convolutional codes with soft decisions (dashed: hard decisions). Pb(E) versus log10(ε) for the BSC; curves: R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]

SLIDE 28

Convolutional Code Performance Performance Measures

Coding Gain for Soft Decisions

To compare the performance of uncoded and coded binary systems with soft decisions on an AWGN channel, the energy Eb per information bit is fixed and Pb(E) is plotted versus the signal-to-noise ratio (SNR) Eb/N0. For an uncoded system

$$P_b(E) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right) \qquad \text{(uncoded)}.$$

For a coded system with soft-decision ML decoding on an AWGN channel

$$P_b(E) \le \frac{1}{2k} \sum_{w=d_{free}}^{\infty} D_w\, \mathrm{erfc}\!\left(\sqrt{\frac{w\, R\, E_b}{N_0}}\right),$$

where R = k/n is the rate of the code. Examples are shown in the graph on the next slide.
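Curves of this kind can be reproduced numerically; a brief sketch assuming the pb_soft_bound helper and Awi table from the earlier sketches (the dB grid and R = 1/2 are arbitrary choices):

```python
from math import erfc, sqrt

for eb_over_n0_db in (2, 4, 6, 8):
    eb_over_n0 = 10 ** (eb_over_n0_db / 10)
    uncoded = 0.5 * erfc(sqrt(eb_over_n0))
    coded = pb_soft_bound(Awi, ec_over_n0=0.5 * eb_over_n0)   # Ec = R*Eb with R = 1/2
    print(eb_over_n0_db, uncoded, coded)
```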

SLIDE 29

Convolutional Code Performance Performance Measures

[Figure: Upper bounds on Pb(E) for convolutional codes on the AWGN channel, soft decisions. Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: Uncoded; R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]