SLIDE 1

Principle of Communications, Fall 2017

Lecture 04 Reliable Communication

I-Hsiang Wang

ihwang@ntu.edu.tw
National Taiwan University
2017/10/25, 26

SLIDE 2

Previous lectures:

[Block diagram: information bits {b_i} → ECC Encoder → coded bits {c_i} → Symbol Mapper → discrete sequence {u_m} → Pulse Shaper → baseband waveform x_b(t) → Up Converter → passband waveform x(t) → Noisy Channel → y(t) → Down Converter → y_b(t) → Filter + Sampler + Detection → {û_m} → Symbol Demapper → {ĉ_i} → ECC Decoder → {b̂_i}. The binary interface separates channel coding (ECC encoder/decoder) from the rest of the modulation chain.]

Focusing on digital modulation, we ensured that the coded bits {c_i} can be reconstructed optimally (i.e., with minimum average probability of error) at the receiver.

SLIDE 3

However, this is not good enough …

  • The average symbol probability of error decays exponentially with SNR: Pe ≐ exp(−c·SNR).
  • For each symbol, Pe = 10⁻³ is already pretty good!
  • But consider a file mapped and converted into n = 250 symbols: the file cannot be reconstructed if even one symbol is wrong.
  • The "file" probability of error is 1 − (1 − Pe)^n ≈ n·Pe = 250/1000 = 0.25. Pretty bad …
  • Yet we cannot do much at this level, because noise is inevitable and modulation works only at the symbol level, not at the file level.
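To see the gap between the exact expression and the linear approximation, here is a minimal numeric check (plain Python; Pe = 10⁻³ and n = 250 are the values from this slide):

```python
# File-level error probability when a file spans n symbols,
# each independently wrong with probability Pe.
Pe = 1e-3   # per-symbol error probability (from the slide)
n = 250     # symbols per file (from the slide)

exact = 1 - (1 - Pe) ** n   # P(at least one symbol is wrong)
approx = n * Pe             # first-order approximation, valid for small n*Pe

print(f"exact  = {exact:.4f}")   # ~0.2212
print(f"approx = {approx:.4f}")  # 0.2500
```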

SLIDE 4

This lecture:

[Same block diagram as Slide 2, now highlighting the channel coding layer: the ECC encoder and decoder around the binary interface.]

Introduce error-correction coding to add redundancy to the original file. ⟹ We can make the overall "file" probability of error arbitrarily small! Prices to pay: data rate and energy. This is reliable communication!

SLIDE 5

Equivalent Discrete-time Complex Baseband Channel

[Soft-decision architecture: message bits b = [b₁ b₂ … b_k] → ECC Encoder → codeword c = [c₁ c₂ … c_n] → Digital Modulator → symbols u = [u₁ u₂ … u_ñ], ñ = n/ℓ (ℓ coded bits per symbol) → equivalent discrete-time complex baseband channel V = u + Z → joint Detection + Decoder → b̂.]

  • Soft decision: jointly consider detection and decoding; work directly on the demodulated symbols V.
  • Hard decision: consider decoding only; work on the detected bit sequence d.

[Hard-decision architecture: the same chain, but a Demod. + Detection block first produces a detected bit sequence d, which the ECC Decoder then maps to b̂.]

We focus on soft decision first!

Rate: R = k/n

SLIDE 6

Outline

  • Prelude: repetition coding
  • Energy-efficient reliable communication: orthogonal code
  • Rate-efficient reliable communication: linear block code
  • Convolutional code


SLIDE 7

Part I. Prelude: Repetition Coding

Repetition code, Rate and Energy efficiency

SLIDE 8

Repetition: a simple way to enhance reliability

  • Idea: repeat each bit N times ⟹ data rate R = 1/N.
  • We focus on the architecture below:

[Figure: an original bit sequence b₁ b₂ b₃ b₄ b₅ and two coded bit sequences, illustrating that there are many ways to repeat: e.g., repeat each bit N times in place, or repeat the whole block N times.]

[Architecture: b = [b₁ … b_k] → Repetition → c = [c₁ … c_n] → Digital Modulator → u = [u₁ … u_ñ], ñ = n/ℓ → channel V = u + Z → Detection + Decoder → b̂.]

The codeword repeats each group of ℓ bits N times:
  c = [ b₁∼b_ℓ | b₁∼b_ℓ | ⋯ | b₁∼b_ℓ | b_{ℓ+1}∼b_{2ℓ} | ⋯ ]  (each group repeated N times)
so n = kN, where ℓ = # of bits in a symbol.
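A minimal sketch of this repetition encoder (plain Python with numpy; the grouping into ℓ-bit symbol blocks follows the layout above):

```python
import numpy as np

def repetition_encode(b: np.ndarray, ell: int, N: int) -> np.ndarray:
    # Split the k info bits into groups of ell bits (one symbol's worth),
    # then repeat each group N times in a row: k info bits -> n = k*N coded bits.
    groups = b.reshape(-1, ell)
    return np.repeat(groups, N, axis=0).ravel()

b = np.array([1, 0, 1, 1])               # k = 4 info bits
print(repetition_encode(b, ell=2, N=3))  # n = 12 coded bits
# [1 0 1 0 1 0 1 1 1 1 1 1]
```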

SLIDE 9

The N repetitions of the same ℓ bits b₁∼b_ℓ are modulated into N symbols:
  u = [u₁ u₂ ⋯ u_N] ∈ ℂᴺ  (equivalent vector symbol)
where each u_i is the modulator output for the same ℓ bits b₁∼b_ℓ.

  • Since the noises are i.i.d., it suffices to use the N-dimensional demodulated vector V = u + Z to optimally decode b₁∼b_ℓ.

SLIDE 10

BPSK + repetition coding

Equivalent channel model: V = u + Z ∈ ℂᴺ, with Z₁, …, Z_N i.i.d. ∼ CN(0, N₀)
Equivalent constellation set: u ∈ {a₀, a₁}, where a₀ = −[d d ⋯ d] and a₁ = +[d d ⋯ d]
Performance analysis:
  P_e^(N) = Q( ‖a₁ − a₀‖ / (2√(N₀/2)) ) = Q( √( N·4d² / (2N₀) ) ) = Q( √( N · 2d²/N₀ ) ) = Q( √(2N·SNR) ) ≐ exp(−N·SNR)
where SNR ≜ d²/N₀ (average energy per uncoded symbol over the noise variance per symbol).
⟹ Repetition effectively increases the SNR by a factor of N!
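As a sanity check on the last expression, a small sketch (plain Python, using the identity Q(x) = erfc(x/√2)/2; the values of d and N₀ are arbitrary examples):

```python
import math

def Q(x: float) -> float:
    # Gaussian tail function: Q(x) = P(N(0,1) > x) = erfc(x / sqrt(2)) / 2
    return math.erfc(x / math.sqrt(2)) / 2

d, N0 = 1.0, 1.0            # example values: SNR = d^2 / N0 = 1 (0 dB)
snr = d ** 2 / N0

for N in (1, 2, 4, 8):      # repetition factor
    Pe = Q(math.sqrt(2 * N * snr))
    print(f"N = {N}: Pe = {Pe:.3e}")   # decays roughly like exp(-N * SNR)
```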

SLIDE 11

Rate and energy efficiency

Rate: R = 1/N → 0 as N → ∞
Energy per bit: E_b = Nd² → ∞ as N → ∞
⟹ We achieve arbitrarily small probability of error at the price of zero rate and infinite energy per bit.
Question: can we resolve the issue with more general constellation sets?

SLIDE 12

General modulation + repetition coding

Equivalent channel model: V = u + Z ∈ ℂᴺ, Z₁, …, Z_N i.i.d. ∼ CN(0, N₀)
Equivalent constellation set: u ∈ {a₁, …, a_M}, M = 2^ℓ
Probability of error (take M-ary PAM as an example):
  P_e^(N) = 2(1 − 2^−ℓ) Q( √( N · 6/(4^ℓ − 1) · SNR ) )
Rate: R = ℓ/N, i.e., ℓ = NR, so
  P_e^(N) = 2(1 − 2^−NR) Q( √( 6N/(4^{NR} − 1) · SNR ) )
  lim_{N→∞} P_e^(N) = 0 ⟺ lim_{N→∞} (4^{NR} − 1)/N = 0 ⟹ it is necessary that lim_{N→∞} R = 0
Energy per bit:
  E_b/N₀ = (N/ℓ)·SNR = SNR/R → ∞ as N → ∞ (since R → 0 as N → ∞)

SLIDE 13

Why repetition coding is not very good

  • Repetition coding: high reliability at the price of asymptotically zero rate and infinite energy per bit.
  • Repetition is too naive and does not utilize the available degrees of freedom in the N-dimensional space efficiently.
  • Is it possible to design better coding schemes with the following?
  • Vanishing probability of error
  • Positive rate
  • Finite energy per bit

YES!

SLIDE 14

Part II. Energy-Efficient Reliable Communication

Orthogonal code, Optimal energy efficiency

SLIDE 15

Orthogonal coding

  • With N dimensions (N time slots), use N equal-norm orthogonal vectors to encode log₂ N bits.
  • Since the noises are i.i.d. circularly symmetric complex Gaussian, we can WLOG assume that these N vectors are simply scaled standard unit vectors:
  {d·e_i | i = 1, …, N}, where e_i(j) = 𝟙{i = j}

[Architecture: message bits b → joint Encoder + Modulation → u → channel V = u + Z → Detection + Decoder → b̂; here we jointly consider coding and modulation.]

SLIDE 16

Example: N = 8

info. bits → symbol vector
000 → [d 0 0 0 0 0 0 0]
001 → [0 d 0 0 0 0 0 0]
010 → [0 0 d 0 0 0 0 0]
011 → [0 0 0 d 0 0 0 0]
100 → [0 0 0 0 d 0 0 0]
101 → [0 0 0 0 0 d 0 0]
110 → [0 0 0 0 0 0 d 0]
111 → [0 0 0 0 0 0 0 d]

Equivalent to encoding messages using the location of a pulse ⟹ Pulse Position Modulation (PPM)
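A minimal sketch of this PPM-style orthogonal encoder (plain Python with numpy; the function name and the choice d = 1.0 are illustrative):

```python
import numpy as np

def ppm_encode(bits: str, d: float = 1.0) -> np.ndarray:
    # Map log2(N) info bits to a length-N vector with a single pulse of
    # amplitude d at the position given by the bits' integer value.
    N = 2 ** len(bits)
    u = np.zeros(N)
    u[int(bits, 2)] = d
    return u

print(ppm_encode("011"))   # [0. 0. 0. 1. 0. 0. 0. 0.]  (row 011 of the N = 8 table)
```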

SLIDE 17

Performance analysis of orthogonal coding

Equivalent channel model: V = u + Z ∈ ℂᴺ, Z₁, …, Z_N i.i.d. ∼ CN(0, N₀)
Equivalent constellation set: u ∈ {d·e_i | i = 1, …, N}
Rate: R = log₂N / N → 0 as N → ∞
Energy per bit: E_b = d² / log₂N
Probability of error (union bound, with d_min = √2·d):
  P_e^(N) ≤ (N − 1) Q( √( d²_min / (2N₀) ) ) = (N − 1) Q( √( d² / N₀ ) )
       ≤ N·Q( √( log₂N · E_b/N₀ ) )
       ≤ ½ exp( −ln N · ( E_b / ((2 ln 2)N₀) − 1 ) )
  → 0 as N → ∞, as long as E_b/N₀ > 2 ln 2, i.e., E_b > (2 ln 2)·N₀
⟹ Finite energy per bit suffices!
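A quick numeric look at the union bound above (plain Python; E_b/N₀ = 2 is an example value above the 2 ln 2 ≈ 1.386 threshold):

```python
import math

def Q(x: float) -> float:
    return math.erfc(x / math.sqrt(2)) / 2   # Gaussian tail function

EbN0 = 2.0                                   # > 2 ln 2, so the bound should vanish
for N in (2 ** 4, 2 ** 8, 2 ** 12, 2 ** 16):
    bound = N * Q(math.sqrt(math.log2(N) * EbN0))
    print(f"N = {N:>6}: union bound = {bound:.3e}")   # decreasing in N
```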

SLIDE 18

Minimum energy per bit

  • Does orthogonal coding achieve the minimum E_b/N₀?
  • Let us use Shannon's capacity formula to derive the minimum E_b/N₀ over all possible coding + modulation schemes.
  • For the additive Gaussian noise channel with energy per channel use P, the best achievable rate satisfies (in bits per channel use)
  R < C = log₂(1 + P/N₀)
  • Energy per bit is hence E_b = P/R ⟹ R < log₂(1 + R·E_b/N₀)
  • The minimum energy per bit when the rate is R can be found:
  E_b/N₀ > E*_b(R)/N₀ = (2^R − 1)/R
  • Taking the infimum over all R, we see:
  E*_b/N₀ = inf_{R>0} E*_b(R)/N₀ = lim_{R↓0} (2^R − 1)/R = ln 2
  • In fact, orthogonal codes can achieve any E_b/N₀ > ln 2!
  • but the union bound fails; new techniques are required (see Gallager Ch. 8.5.3 for more details)
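A quick numerical check of this infimum (plain Python; the sample rates are purely illustrative):

```python
import math

def Eb_star(R: float) -> float:
    # Minimum Eb/N0 (linear scale) achievable at rate R bits per channel use.
    return (2 ** R - 1) / R

for R in (1.0, 0.1, 0.01, 0.001):
    print(f"R = {R:>5}: Eb*/N0 = {Eb_star(R):.6f}")
print(f"ln 2 = {math.log(2):.6f}")   # the R -> 0 limit, about -1.59 dB
```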

SLIDE 19

Part III. Rate-Efficient Reliable Communication

Linear block code, Existence of rate-efficient codes with vanishing error probability

SLIDE 20

Linear block code + BPSK modulation

[Architecture: b = [b₁ … b_k] → ECC Encoder (here: Linear Block Code) → c = [c₁ … c_n] → Digital Modulator (here: Binary PAM / BPSK, so ℓ = 1 and ñ = n) → channel V = u + Z → Detection + Decoder → b̂. Rate: R = k/n.]

  • Orthogonal coding achieves vanishing probability of error with zero rate but finite energy per bit (energy-efficient reliable communication).
  • Is it possible to achieve vanishing probability of error with positive rate and finite energy per bit (rate-efficient reliable communication)?
  • We focus on the following architecture: linear block code + BPSK modulation.
  • It turns out this simple architecture can achieve rate-efficient reliable communication!

SLIDE 21

Linear block code

Encoding: matrix multiplication (under binary field arithmetic)

  g ∈ {0,1}^{k×n},  b = [b₁ b₂ … b_k] ∈ {0,1}^k,  c = [c₁ c₂ … c_n] ∈ {0,1}^n
  c = b·g  (message b times generator matrix g gives the codeword c)

Coded bit c_i is the XOR of the info bits selected by the i-th column of g.
Codebook: the collection of all possible codewords
  C_g ≜ { c = b·g | b ∈ {0,1}^k }
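A minimal sketch of the encoder (plain Python with numpy; the (7,4) Hamming generator matrix is an illustrative choice, not from the slides):

```python
import numpy as np

def encode(b: np.ndarray, g: np.ndarray) -> np.ndarray:
    # Linear block encoding c = b g over the binary field:
    # ordinary matrix multiplication followed by reduction mod 2.
    return (b @ g) % 2

# Illustrative (7,4) Hamming code generator matrix (k = 4, n = 7).
g = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
b = np.array([1, 0, 1, 1])
print(encode(b, g))   # a length-7 codeword
```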
SLIDE 22

Receiver: ML decoding

  V = u + Z,  Z ∼ N(0, (N₀/2)·I_n)

The ML decoder works on u ∈ A ≜ { a_{b₁,b₂,…,b_k} | b ∈ {0,1}^k }, with a 1-to-1 correspondence between the constellation set A and the codebook C_g:
  (a_b)_i = +√E_s if (bg)_i = 1,  −√E_s if (bg)_i = 0
i.e., the i-th symbol is the BPSK-modulated outcome of the i-th coded bit (constellation point ↔ codeword).

This is just a 2^k-ary vector detection problem, and we know how to bound its probability of error:
  B̂ = φ_ML(V),  P_e(φ_ML; b, g) ≤ Σ_{b̃≠b} P_{b,g}[ E_{b→b̃} ]
Note: the distribution of V depends on the message b and the generator matrix g!
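A brute-force sketch of this soft-decision ML decoder (plain Python with numpy; as the next slide notes, exhaustive search has exponential complexity, so this is only for small codes):

```python
import numpy as np
from itertools import product

def ml_decode(v: np.ndarray, g: np.ndarray, Es: float = 1.0) -> np.ndarray:
    # Search all 2^k messages and return the one whose BPSK-modulated
    # codeword is closest to v in Euclidean distance (= ML for Gaussian noise).
    k = g.shape[0]
    best_b, best_dist = None, np.inf
    for bits in product((0, 1), repeat=k):
        b = np.array(bits)
        c = (b @ g) % 2                    # codeword over the binary field
        a = np.sqrt(Es) * (2 * c - 1)      # BPSK: 0 -> -sqrt(Es), 1 -> +sqrt(Es)
        dist = np.sum((v - a) ** 2)
        if dist < best_dist:
            best_b, best_dist = b, dist
    return best_b
```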

SLIDE 23

Existence of a good generator matrix

  • The performance obviously depends on the generator matrix g.
  • Generator matrix g determines the decoding algorithm.
  • Generator matrix g determines the codebook too.
  • ML decoding can be realized by exhaustive search
  • Complexity is exponential in the codeword length n.
  • Reduction of complexity relies on the structure of the codebook.
  • Our goal:
  • NOT to explicitly design a good generator matrix
  • NOT to worry about decoding complexity at this point
  • Instead, we want to show that some generator matrices are good and can yield vanishing probability of error!
  • We only need to prove the existence of good codes!

SLIDE 24

Random generator matrix

Make the generator matrix random:
  G = [G_{i,j}],  G_{i,j} i.i.d. ∼ Ber(1/2), ∀ i = 1, …, k, j = 1, …, n
  ⟹ P{G = g} = 1/2^{nk}, ∀ g ∈ {0,1}^{k×n}

Idea for proving existence (fix the rate R = k/n, drive n to infinity):
(1) Compute the average-over-random-codebook probability of error P̄_e^(n)(R) ≜ E_{G,B}[P_e(φ_ML; B, G)]
(2) Show that P̄_e^(n)(R) converges to zero as n → ∞
(3) Conclude that at least one generator matrix is good!

Why is this easier to upper bound?
SLIDE 25

Upper bounding of P̄_e^(n)(R) ≜ E_{G,B}[P_e(φ_ML; B, G)]

Bounding the pairwise error probability: for codewords c = bg and c̃ = b̃g,
  P_{b,g}[ E_{b→b̃} ] = Q( ‖u − ũ‖ / (2√(N₀/2)) ), where ‖u − ũ‖ = 2d·√(# of 1's in c ⊕ c̃)
            = Q( √( 2·SNR · w(c ⊕ c̃) ) )   (w(·): weight, i.e., # of 1's, of a 0-1 vector)
            ≤ ½ exp( −SNR · w(c ⊕ c̃) )

Bounding P_e(φ_ML; b, g):
  P_e(φ_ML; b, g) ≤ Σ_{b̃≠b} P_{b,g}[ E_{b→b̃} ]

Averaging over the random matrix G and the random bits B:
  P̄_e^(n)(R) ≤ Σ_g Σ_b Σ_{b̃≠b} (1/2^{nk}) (1/2^k) · ½ exp( −SNR · w(c ⊕ c̃) ),  with c = bg, c̃ = b̃g

SLIDE 26

Swap the order of summation:
  P̄_e^(n)(R) ≤ Σ_b Σ_{b̃≠b} (1/2^k) · ½ · Σ_g (1/2^{nk}) exp( −SNR · w(c ⊕ c̃) )

For a fixed pair (b, b̃), scanning through all possible matrices g, let us find the fraction of g such that w( (b ⊕ b̃)g ) = ℓ (note c ⊕ c̃ = (b ⊕ b̃)g by linearity), and denote it by f_ℓ(b ⊕ b̃):
  f_ℓ(x) ≜ P{ w(xG) = ℓ }
Then
  P̄_e^(n)(R) ≤ Σ_b Σ_{b̃≠b} (1/2^k) · ½ · Σ_{ℓ=0}^{n} f_ℓ(b ⊕ b̃) exp( −SNR·ℓ )

SLIDE 27

Key observation: for y = xG with x ≠ 0, the bits y₁, …, y_n are i.i.d. ∼ Ber(1/2), so w(y) ∼ Binom(n, 1/2):
  f_ℓ(x) = P{ w(y) = ℓ } = C(n, ℓ) / 2^n
Therefore
  P̄_e^(n)(R) ≤ Σ_b Σ_{b̃≠b} (1/2^k) · ½ · Σ_{ℓ=0}^{n} f_ℓ(b ⊕ b̃) exp( −SNR·ℓ )
       = Σ_b Σ_{b̃≠b} Σ_{ℓ=0}^{n} (1/2^k) · ½ · C(n, ℓ)/2^n · exp( −SNR·ℓ )
SLIDE 28

Continuing the computation:
  P̄_e^(n)(R) ≤ Σ_b Σ_{b̃≠b} Σ_{ℓ=0}^{n} (1/2^k) · ½ · C(n, ℓ)/2^n · exp( −SNR·ℓ )
       = (2^k − 1)/2 · Σ_{ℓ=0}^{n} C(n, ℓ)/2^n · exp( −SNR·ℓ )
       = (2^k − 1)/2^{n+1} · ( 1 + exp(−SNR) )^n      [binomial theorem]
       ≤ 2^{ n·( R − 1 − 1/n + log₂(1 + exp(−SNR)) ) }

SLIDE 29

In particular, if we choose the rate R slightly smaller than R* as follows:
  R = R* − δ,  where R* ≜ 1 − log₂(1 + exp(−SNR)),
then we can guarantee that
  P̄_e^(n)(R) ≜ E_{G,B}[P_e(φ_ML; B, G)] ≤ 2^{−nδ} → 0 as n → ∞.
Hence, when R < R*, there exists at least one sequence of generator matrices with strictly positive rate R and vanishing probability of error! Meanwhile, the energy per bit is finite, too:
  E_b/N₀ = n·d² / (k·N₀) = SNR / R
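A quick look at the rate threshold R* as a function of SNR (plain Python; the dB values are arbitrary examples):

```python
import math

def R_star(snr: float) -> float:
    # Largest rate for which the random-coding bound 2^{-n*delta} vanishes.
    return 1 - math.log2(1 + math.exp(-snr))

for snr_db in (-5, 0, 5, 10):
    snr = 10 ** (snr_db / 10)
    print(f"SNR = {snr_db:>3} dB: R* = {R_star(snr):.4f} bits per coded symbol")
```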

SLIDE 30

Part IV. Convolutional Code

Encoding Architecture, Trellis Representation, Maximum Likelihood Sequence Detection, Viterbi Algorithm

SLIDE 31

Convolutional Code

Introduced by Peter Elias in 1955. An efficient ML decoding algorithm was given by Andrew Viterbi in 1967. Used in NASA space exploration projects from Voyager (1977) onwards, and widely applied in digital video, radio, satellite communications, etc.

SLIDE 32

Encoding architecture

Message bits {b_m, m ∈ ℕ} pass through causal LTI filters (over the binary field) to generate the coded bits {c_m, m ∈ ℕ}.

Multiple filters are used to introduce redundancy. Example with 2 filters: the message bit sequence b₁ b₂ b₃ … drives two filters {h_ℓ^(1)} and {h_ℓ^(2)}; their outputs {c_m^(1)} and {c_m^(2)} are merged into the coded bit sequence c₁^(1) c₁^(2) c₂^(1) c₂^(2) c₃^(1) c₃^(2) …

SLIDE 33

Encoding = FIR filtering

Each filter is causal and has a finite impulse response (FIR):
  h^(j) = [ h₀^(j) h₁^(j) ⋯ h_{L−1}^(j) ]
The output bit sequence of one branch is the input convolved with the impulse response (binary field arithmetic; the filter-tap coefficients are 0 or 1):
  j-th branch:  c_m^(j) = (h^(j) ∗ b)_m ≜ Σ_{ℓ=−∞}^{∞} h_ℓ^(j) b_{m−ℓ} = Σ_{ℓ=0}^{L−1} h_ℓ^(j) b_{m−ℓ}
More generally, there can be K input sequences and N output sequences, and the code rate is R = K/N:
  j-th branch:  c_m^(j) ≜ Σ_{i=1}^{K} Σ_{ℓ=0}^{L−1} h_ℓ^(i,j) b_{m−ℓ}^(i),  j = 1, …, N

SLIDE 34

Implementation with shift registers

The encoder (FIR filtering) can be implemented with L − 1 shift registers.

Example (K = 1, N = 2, L = 3): h^(1) = [1 0 1], h^(2) = [1 1 1], i.e.,
  c_m^(1) = b_m ⊕ b_{m−2}
  c_m^(2) = b_m ⊕ b_{m−1} ⊕ b_{m−2}

[Figure: the input bit b_m and two delay elements (D) holding b_{m−1} (register 1) and b_{m−2} (register 2); XOR taps produce c_m^(1) and c_m^(2).]

Example trace (state = register contents (b_{m−1}, b_{m−2})):

input | state (before) | state (after) | output c^(1) c^(2)
  1   |      00        |      10       |        1 1
  0   |      10        |      01       |        0 1
  0   |      01        |      00       |        1 1
  1   |      00        |      10       |        1 1

Call the content of the two registers the "state" of the encoder ⟹ the encoder is a finite state machine!
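A minimal sketch of this shift-register encoder (plain Python; output pairs are emitted as (c^(1), c^(2)) per input bit):

```python
def conv_encode(bits):
    # Rate-1/2 convolutional encoder from this slide (L = 3):
    #   c1_m = b_m XOR b_{m-2},  c2_m = b_m XOR b_{m-1} XOR b_{m-2}
    s1 = s2 = 0                       # registers holding b_{m-1}, b_{m-2}
    out = []
    for b in bits:
        out += [b ^ s2, b ^ s1 ^ s2]  # output pair (c1, c2) for this input
        s1, s2 = b, s1                # shift the registers
    return out

print(conv_encode([1, 0, 0, 1]))      # [1, 1, 0, 1, 1, 1, 1, 1]
```

The trace in the table above corresponds exactly to this input sequence: states 00 → 10 → 01 → 00 → 10 with outputs 11, 01, 11, 11.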

SLIDE 35

State transition diagram

Same encoder as before: h^(1) = [1 0 1], h^(2) = [1 1 1],
  c_m^(1) = b_m ⊕ b_{m−2},  c_m^(2) = b_m ⊕ b_{m−1} ⊕ b_{m−2}

For this finite state machine, the state transition diagram can be drawn (edge label = input/output):

[Figure: four states 00, 10, 01, 11. From 00: 0/00 → 00, 1/11 → 10. From 10: 0/01 → 01, 1/10 → 11. From 01: 0/11 → 00, 1/00 → 10. From 11: 0/10 → 01, 1/01 → 11. Two line styles distinguish input-0 and input-1 transitions.]

SLIDE 36

Trellis representation of codewords

[Figure: the state transition diagram unrolled over time. Each trellis column lists the states 00, 10, 01, 11; the edges between consecutive columns carry the same input/output labels as the state transition diagram (0/00, 1/11, 0/01, 1/10, 0/11, 1/00, 0/10, 1/01), with one line style per input value.]
SLIDE 37

[Figure: the trellis with one path highlighted.]
  message: 1 1
  codeword: 11 01 11 11

Each path represents a message and its corresponding codeword!

SLIDE 38

38

Equivalent Discrete-time Complex Baseband Channel

ECC Encoder Digital Modulator

b [b1 b2 ... bk] c [c1 c2 ... cn] c b V = u + Z u [u1 u2 ... u˜

n]

u

˜ n = n/

V

ECC Decoder

ˆ b

  • Demod. +

Detection

d

Equivalent Binary-Input Binary-Output Channel Binary-input, binary-output: for the each integer i, Each input bit is flipped with certain probability p : For hard decision (bit-level detection), assume that the flips are i.i.d. p : bit error probability of the modulation scheme ci, di ∈ {0, 1}, i = 1, ..., n Di = ci ⊕ Ei, Ei ∼ Ber(p)

SLIDE 39

Hard decision vs. soft decision

[Figure: the soft-decision and hard-decision architectures of Slide 5, side by side.]

  • Soft decision: decode b from V, where V_i = u_i + Z_i, Z_i i.i.d. ∼ CN(0, N₀)
  • Hard decision: decode b from D, where D_i = c_i ⊕ E_i, E_i i.i.d. ∼ Ber(p) (the equivalent binary-input binary-output channel)

We focus on hard decision next.
SLIDE 40

Maximum likelihood sequence detection

Channel: D = c ⊕ E. The ML decoder computes B̂ = φ_ML(D). Equivalently: find a length-k path on the trellis diagram that maximizes the likelihood.

Likelihood (conditional pmf of D given c):
  P_{D|C}(d|c) = (1 − p)^{n − w(d⊕c)} · p^{w(d⊕c)} = (1 − p)^n · ( p/(1−p) )^{w(d⊕c)}

WLOG p < 1/2, so maximum likelihood is equivalent to minimum Hamming distance:
  φ_ML(d) = argmax_{c∈C} P_{D|C}(d|c) = argmin_{c∈C} w(d ⊕ c) ≕ d_H(d, c)
where d_H(d, c) is the # of locations where d and c disagree. Again, ML ≡ MD!

SLIDE 41

Decompose the target function (distance)

The Hamming distance decomposes into the stages of the encoding finite state machine:
  d_H(d, c) = Σ_{i=1}^{n} d_H(d_i, c_i) = Σ_{m=1}^{k} d_H(d_m, c_m)
where d_m and c_m collect the received and coded bits of stage m.

[Figure: trellis with the path for message 1 1 (codeword 11 01 11 11) highlighted across stages m = 1, 2, 3, 4.]
SLIDE 42

Decoding: finding the minimum-cost path

[Figure: trellis with the codeword path and per-stage costs marked.]
  received: 10 00 11 00
  codeword: 11 01 11 11
  cost (distance) per stage: 1, 1, 0, 2 ⟹ d_H(d, c) = 4

SLIDE 43

Viterbi algorithm

  • How can we efficiently find a minimum-cost path on a trellis?
  • For a directed acyclic graph, one can use dynamic programming to find the min-cost path, with computational complexity polynomial in the size of the graph.
  • The Viterbi algorithm is the special case of finding the shortest path on a trellis (a sketch follows below).
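A compact sketch of hard-decision Viterbi decoding for the rate-1/2 encoder of Slide 34 (plain Python; survivor paths are stored explicitly for clarity rather than via traceback pointers):

```python
def viterbi_decode(d, n_stages):
    # Hard-decision Viterbi decoding; state = (b_{m-1}, b_{m-2}).
    # d: received bit list (2 bits per stage); returns the ML message bits.
    def branch(state, b):              # one encoder transition
        s1, s2 = state
        return (b, s1), (b ^ s2, b ^ s1 ^ s2)

    survivors = {(0, 0): (0, [])}      # state -> (cost so far, input bits)
    for m in range(n_stages):
        r = d[2 * m: 2 * m + 2]        # received pair at stage m
        new = {}
        for state, (cost, path) in survivors.items():
            for b in (0, 1):
                nxt, (c1, c2) = branch(state, b)
                nc = cost + (c1 != r[0]) + (c2 != r[1])   # Hamming metric
                if nxt not in new or nc < new[nxt][0]:
                    new[nxt] = (nc, path + [b])
        survivors = new
    return survivors[(0, 0)][1]        # terminate in state 00, as in the slides

d = [1, 0, 0, 0, 1, 1, 0, 0]           # received 10 00 11 00 (Slide 42)
print(viterbi_decode(d, n_stages=4))   # -> [1, 0, 0, 0], total cost 2
```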

SLIDE 44

[Figure: empty trellis; states 00, 10, 01, 11 per column; two line styles for input-0 and input-1 transitions.]

SLIDE 45

Initialization: start with State 00.

[Figure: only branches leaving state 00 are drawn at the first stage.]

SLIDE 46

Termination: end with State 00 by inserting 0, 0 as the last two input bits.

[Figure: the last two stages contain only input-0 branches, merging into state 00.]

SLIDE 47

[Figure: trellis annotated with the received sequence d = 01 10 11 01 10 00, one pair per stage.]

SLIDE 48

[Stage m = 1, received pair 01: both branches from state 00 incur cost 1 ⟹ state 00: V = 1, p = 00; state 10: V = 1, p = 00 (V: survivor cost, p: predecessor state).]

SLIDE 49

[Stage m = 2, received pair 10: state 00: V = 2, p = 00; state 10: V = 2, p = 00; state 01: V = 3, p = 10; state 11: V = 1, p = 10.]

SLIDE 50

[Stage m = 3, received pair 11, upper states: state 00: V = 3, p = 01; state 10: V = 2, p = 00.]

SLIDE 51

[Stage m = 3, lower states: state 01: V = 2, p = 11; state 11: V = 2, p = 11.]

SLIDE 52

[Stage m = 4, received pair 01, upper states: state 00: V = 3, p = 01; state 10: V = 3, p = 01.]

SLIDE 53

[Stage m = 4, lower states: state 01: V = 2, p = 10; state 11: V = 2, p = 11.]

SLIDE 54

[Stage m = 5, received pair 10 (termination begins, input 0 only): state 00: V = 3, p = 01; state 01: V = 2, p = 11.]

SLIDE 55

[Stage m = 6, received pair 00: final state 00: V = 3, p = 00 ⟹ the minimum total cost is 3.]

SLIDE 56

[Traceback: follow the predecessor labels p from the final state 00 back to the start, recovering the state path 00 → 00 → 00 → 10 → 01 → 00 → 00.]

SLIDE 57

[Figure: the surviving path highlighted.] Reading off the inputs along the traced path gives
  b = 001000, c = 000011011100

SLIDE 58

[Figure: a second worked decoding example on the same trellis, with its survivor costs filled in.]

SLIDE 59

[Figure: tracing back the survivors in this second example yields]
  b = 111100, c = 111001011011

SLIDE 60

Other channel models

  • Soft decision: the additive cost function becomes the squared Euclidean distance from the estimated signal to the received signal.
  • Remark: things can be a bit trickier when the modulation size (# of bits in one symbol) is larger than the # of output streams. Think about how to draw the state transition diagram and the trellis!
  • Erasure channel: each bit is either obtained without any error, or it is erased.
  • This can be realized by a detector which reports a decoded bit only if its likelihood is significantly larger than that of the other candidates.
  • For an erasure channel, decoding is simple: find the codeword that matches the received sequence at the non-erased locations.
  • Aside: derive the pairwise error probability!
  • Can you derive the Viterbi decoder for a convolutional code in the erasure channel?