Achieving channel capacity ... Shannon says this is possible ... how?



SLIDE 1

Outline: Introduction, Turbo Principle, SISO (Soft Input Soft Output), Example of a product code

Achieving channel capacity ...

Shannon says this is possible ... how? Make use of the gap between the source rate and the channel capacity: a coding scheme.

Claude Shannon, 1953: "A scheme of coding and decoding can be found allowing correction of all transmission errors, if the information rate is less than or equal to the channel capacity."

J.-M. Brossier Turbo codes.

SLIDE 2


Turbo coding

Serial encoding

[Diagram: the data u feeds Code 1 directly, and Code 2 through the Interleaver]

Two short systematic codes are used to build a large code:
• Data: u
• Redundancy of the first coder: p
• Redundancy of the second coder: q
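A minimal sketch of this structure, assuming toy accumulator (running-parity) encoders and a fixed pseudo-random interleaver as stand-ins for the real constituent codes:

```python
import random

def accumulator(bits):
    """Toy systematic encoder: redundancy bit i is the running XOR
    (parity) of the inputs up to i. A stand-in for a real short code."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def turbo_encode(u, perm):
    """Build a large code from two short systematic codes: transmit the
    data u, redundancy p of encoder 1 on u, and redundancy q of
    encoder 2 on the interleaved data."""
    p = accumulator(u)
    q = accumulator([u[i] for i in perm])
    return u, p, q

u = [1, 0, 1, 1, 0, 0, 1, 0]
perm = list(range(len(u)))
random.Random(0).shuffle(perm)   # the interleaver: a fixed pseudo-random permutation
u_out, p, q = turbo_encode(u, perm)
print(u_out, p, q)
```

The code rate here is 1/3 (one data bit plus two redundancy bits); real turbo codes often puncture p and q to raise the rate.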

SLIDE 3


Turbo decoding

Iterative Decoding Scheme

[Diagram: Decoder 1 and Decoder 2 work on (u, p) and (u, q); they exchange extrinsic information E and produce the decision D]

First iteration: the two decoders provide a first estimate of the transmitted symbols. Each decoder transmits its output to the input of the other one for the second iteration.

SLIDE 4


Turbo decoding

Iterative Decoding Scheme

[Diagram: same iterative decoding scheme as above]

Second iteration: using the outputs computed at the first iteration, the two decoders provide a second estimate of the transmitted symbols. The same sequence of operations is applied for all iterations ...
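The iteration loop can be sketched as follows; `siso_decode` is a hypothetical stub (a real implementation would run a SISO algorithm such as SOVA or BCJR on each code's redundancy), so only the structure of the exchange is meaningful:

```python
def siso_decode(channel_llr, extrinsic_in):
    """Hypothetical SISO decoder stub: a real decoder would also use its
    parity bits p or q. This stub just passes on a damped version of its
    total input as new extrinsic information."""
    return [0.5 * (c + e) for c, e in zip(channel_llr, extrinsic_in)]

def turbo_decode(llr_u, n_iter=4):
    """Iterative scheme: each decoder feeds its extrinsic output E to the
    input of the other one, for n_iter iterations."""
    e12 = [0.0] * len(llr_u)   # extrinsic information, decoder 1 -> decoder 2
    e21 = [0.0] * len(llr_u)   # extrinsic information, decoder 2 -> decoder 1
    for _ in range(n_iter):
        e12 = siso_decode(llr_u, e21)   # decoder 1 works on (u, p)
        e21 = siso_decode(llr_u, e12)   # decoder 2 works on (u, q)
    # final decision D: sign of the accumulated information
    return [1 if llr + e > 0 else -1 for llr, e in zip(llr_u, e21)]

print(turbo_decode([2.1, -0.3, 1.4, -1.8]))
```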

SLIDE 5


Soft information

What is soft information? A log-likelihood ratio.

Example: the Additive White Gaussian Noise channel. Its output is given by r = x + b, with x = ±1 and b a zero-mean Gaussian random variable of variance σ².

LLR (Log-Likelihood Ratio):
\[
\mathrm{LLR} = \log \frac{p(r \mid +1)}{p(r \mid -1)}
= \log \frac{\frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(r-1)^2}{2\sigma^2}\right)}{\frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(r+1)^2}{2\sigma^2}\right)}
= \frac{2}{\sigma^2}\, r
\]

Interpretation:
• The sign of the LLR is a hard decision.
• Its magnitude indicates the reliability of this decision.
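A quick numerical check of the closed form, under the slide's AWGN model:

```python
import math

def llr_awgn(r, sigma):
    """Log-likelihood ratio log p(r|+1)/p(r|-1) for x = +/-1 over AWGN.
    The common factor 1/(sigma*sqrt(2*pi)) cancels in the ratio."""
    p_plus = math.exp(-(r - 1) ** 2 / (2 * sigma ** 2))
    p_minus = math.exp(-(r + 1) ** 2 / (2 * sigma ** 2))
    return math.log(p_plus / p_minus)

r, sigma = 0.7, 0.8
# the explicit ratio and the simplified form 2*r/sigma^2 agree
print(llr_awgn(r, sigma), 2 * r / sigma ** 2)
```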

SLIDE 6


Modeling of the decoder input

Emitted codeword: X = (X_1, ..., X_n). Soft received sequence: R = (LLR_1, ..., LLR_n), where the LLR_i are log-likelihood ratios. At iteration 1 the information may be hard or soft; at the following iterations, only soft information is exchanged.

Decomposition of the information carried by the LLRs:
• Hard decision: the LLR sign, Y_i = sgn[LLR_i]
• Reliability: the LLR magnitude, α_i = |LLR_i|
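The decomposition can be illustrated directly on a vector of LLRs:

```python
llrs = [2.3, -0.4, 0.1, -1.7]
hard = [1 if l >= 0 else -1 for l in llrs]   # hard decisions Y_i = sgn(LLR_i)
reliab = [abs(l) for l in llrs]              # reliabilities alpha_i = |LLR_i|
print(hard)    # [1, -1, 1, -1]
print(reliab)  # [2.3, 0.4, 0.1, 1.7]
```

Note that the third decision (LLR = 0.1) is the least reliable: this is exactly the information a soft-input decoder exploits.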

SLIDE 7


Several kinds of decoders

Notations
• Vector of errors: Z^m = Y ⊕ X^m
• Weight of errors: W(Z^m) = Σ_{i=1}^n Z_i^m
• Analog weight (soft): W_α(Z^m) = Σ_{i=1}^n α_i Z_i^m

Incomplete decoder (hard input)
It only uses the hard information. It gives the word X^m = (X_1^m, ..., X_n^m) whose Hamming distance to Y = (Y_1, ..., Y_n) is minimum:
• a word is found if W(Z^m) ≤ (d_min − 1)/2,
• else no word is found.
The decision is right if the number of errors is at most (d_min − 1)/2.
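A minimal sketch of a bounded-distance (incomplete) decoder, assuming a toy length-4 repetition codebook:

```python
def hamming(a, b):
    """Hamming distance between two binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def incomplete_decode(y, codebook, dmin):
    """Hard-input (incomplete) decoder: return the codeword within
    t = (dmin - 1) // 2 errors of y, or None (decoding failure)."""
    t = (dmin - 1) // 2
    best = min(codebook, key=lambda c: hamming(y, c))
    return best if hamming(y, best) <= t else None

codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]   # length-4 repetition code, dmin = 4
print(incomplete_decode((1, 0, 0, 0), codebook, 4))  # 1 error  -> (0, 0, 0, 0)
print(incomplete_decode((1, 1, 0, 0), codebook, 4))  # 2 errors -> None
```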

SLIDE 8


Several kinds of decoders

Notations (as on the previous slide)
• Z^m = Y ⊕ X^m, W(Z^m) = Σ_{i=1}^n Z_i^m, W_α(Z^m) = Σ_{i=1}^n α_i Z_i^m

Complete decoder (soft input)
It uses the whole information.
• Complete decoder: min_m W(Y ⊕ X^m); it can provide a codeword even if the number of errors is greater than (d_min − 1)/2.
• Complete soft decoder: min_m W_α(Y ⊕ X^m), with the analog weight W_α(Z^m).
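A sketch of the complete soft decoder on the same kind of toy codebook; the reliabilities α_i let it overrule two unreliable hard errors that a bounded-distance decoder could not correct:

```python
def soft_decode(y, alpha, codebook):
    """Complete soft decoder: minimise the analog weight
    W_alpha = sum_i alpha_i * (y_i XOR x_i) over all codewords."""
    def w_alpha(c):
        return sum(a * (yi ^ ci) for a, yi, ci in zip(alpha, y, c))
    return min(codebook, key=w_alpha)

codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]   # length-4 repetition code, dmin = 4
y = (1, 1, 0, 0)              # 2 hard errors: bounded-distance decoding fails here
alpha = (0.2, 0.3, 2.5, 1.9)  # ... but the two '1' decisions are the unreliable ones
print(soft_decode(y, alpha, codebook))  # -> (0, 0, 0, 0)
```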

SLIDE 9


SISO Convolutional

Soft input
The Viterbi algorithm is able to use soft inputs directly: it only needs a Euclidean metric.

Soft output
The Viterbi algorithm must be modified: the Soft Output Viterbi Algorithm (SOVA). Idea: keep more than a single path; the difference between the metrics of the two best paths is an indication of the reliability of the decision.
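A brute-force illustration of the SOVA idea, assuming an exhaustive search over a toy codebook rather than a trellis (a real SOVA tracks competing paths inside the Viterbi recursion):

```python
def soft_metric(r, c):
    """Euclidean metric between the received r and the BPSK image of
    codeword c (bit 0 -> +1, bit 1 -> -1)."""
    return sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, c))

def decide_with_reliability(r, codebook):
    """Rank the candidates by soft metric; the gap between the two best
    metrics measures the reliability of the decision."""
    ranked = sorted(codebook, key=lambda c: soft_metric(r, c))
    gap = soft_metric(r, ranked[1]) - soft_metric(r, ranked[0])
    return ranked[0], gap

codebook = [(0, 0, 0), (1, 1, 1)]         # toy repetition code
best, gap = decide_with_reliability((0.9, 0.8, -0.1), codebook)
print(best, gap)
```

A large gap means the runner-up path was far worse, so the decision is reliable; a gap near zero flags a fragile decision.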

SLIDE 10


Block turbo codes

Soft input - Chase algorithm
If only hard decisions are known: vector R is received; a hard version Y of R is usable by a usual algebraic decoder, which provides a codeword X_A. For an incomplete decoder, the procedure stops here. But if reliabilities are known, it is possible to improve the estimate:
• Find the weak positions (weak LLRs).
• Modify Y at these positions and produce a small set of decoded codewords using this set of decisions about R.
• Select the codeword whose analog distance to R is minimum.
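A sketch of these three steps, assuming a toy codebook and using nearest-codeword search as a stand-in for the algebraic decoder:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def alg_decode(y, codebook):
    """Stand-in for the algebraic decoder: nearest codeword in Hamming distance."""
    return min(codebook, key=lambda c: hamming(y, c))

def chase_decode(r, codebook, n_weak=2):
    """Chase sketch: flip the least reliable hard decisions, decode each
    test pattern algebraically, keep the candidate closest to r."""
    y = [0 if ri >= 0 else 1 for ri in r]            # hard decisions (bit 0 -> +1)
    weak = sorted(range(len(r)), key=lambda i: abs(r[i]))[:n_weak]
    candidates = set()
    for flips in product([0, 1], repeat=n_weak):
        y_test = list(y)
        for idx, f in zip(weak, flips):
            y_test[idx] ^= f
        candidates.add(alg_decode(y_test, codebook))
    # analog (Euclidean) distance of each candidate to r, BPSK map 0 -> +1, 1 -> -1
    dist = lambda c: sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, c))
    return min(candidates, key=dist)

codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1), (1, 1, 0, 0)]
print(chase_decode((0.9, 0.2, -0.1, -0.8), codebook))
```

With 2 weak positions, only 4 test patterns are decoded instead of searching the whole codebook: this is the complexity saving that makes Chase decoding practical.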

SLIDE 12


Block turbo codes

Soft outputs - Pyndiah algorithm
Aim: computation of the reliabilities
\[
\Lambda(d_j) = \log \frac{P(a_j = +1 \mid R)}{P(a_j = -1 \mid R)}
\]
With S_j^{±1} the set of codewords C^i whose bit c_j^i = ±1,
\[
P(a_j = \pm 1 \mid R) = \sum_{C^i \in S_j^{\pm 1}} P\left(E = C^i \mid R\right)
\]
SLIDE 13


Block turbo codes

Soft outputs - Pyndiah algorithm
Aim: computation of the reliabilities
\[
\Lambda(d_j) = \log \frac{P(a_j = +1 \mid R)}{P(a_j = -1 \mid R)}
= \log \frac{\sum_{C^i \in S_j^{+1}} P\left(R \mid E = C^i\right)}{\sum_{C^i \in S_j^{-1}} P\left(R \mid E = C^i\right)}
\]
with
\[
P\left(R \mid E = C^i\right) = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{n} \exp\left(\frac{-\lVert R - C^i \rVert^2}{2\sigma^2}\right)
\]
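The exact computation can be sketched for a tiny code whose codewords are written directly in ±1 form (`exact_llr` and the repetition codebook are illustrative):

```python
import math

def exact_llr(r, codebook, j, sigma):
    """Exact soft output for bit j: log of the ratio of summed Gaussian
    likelihoods over codewords with c_j = +1 versus c_j = -1.
    Codewords are given in +/-1 form; the common Gaussian normalisation
    constant cancels in the ratio."""
    def lik(c):
        return math.exp(-sum((ri - ci) ** 2 for ri, ci in zip(r, c)) / (2 * sigma ** 2))
    num = sum(lik(c) for c in codebook if c[j] == +1)
    den = sum(lik(c) for c in codebook if c[j] == -1)
    return math.log(num / den)

codebook = [(+1, +1, +1), (-1, -1, -1)]   # +/-1 form of a length-3 repetition code
print(exact_llr((0.4, -0.2, 0.9), codebook, 0, sigma=1.0))
```

The sums over S_j^{±1} grow with the codebook, which is why the approximation on the next slide matters.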

SLIDE 14


Block turbo codes

Soft outputs - Pyndiah algorithm
Aim: computation of the reliabilities Λ(d_j) = log [P(a_j = +1 | R) / P(a_j = −1 | R)]. A good approximation of the reliabilities is given by
\[
\Lambda(d_j) \approx \frac{1}{2\sigma^2} \left( \lVert R - C^{-1}(j) \rVert^2 - \lVert R - C^{+1}(j) \rVert^2 \right)
\]
where C^{±1}(j) is the word of S_j^{±1} whose Euclidean distance to R is minimum, and S_j^{±1} is the set of codewords with c_j^i = ±1.
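A sketch of this approximation (`approx_llr` is illustrative; for the toy repetition code below each set S_j^{±1} contains a single word, so the approximation coincides with the exact value):

```python
def approx_llr(r, codebook, j, sigma):
    """Pyndiah approximation: keep only the closest codeword on each side,
    Lambda(d_j) ~ (|R - C^{-1}(j)|^2 - |R - C^{+1}(j)|^2) / (2 sigma^2).
    Codewords are given in +/-1 form."""
    def d2(c):
        return sum((ri - ci) ** 2 for ri, ci in zip(r, c))
    c_plus = min((c for c in codebook if c[j] == +1), key=d2)
    c_minus = min((c for c in codebook if c[j] == -1), key=d2)
    return (d2(c_minus) - d2(c_plus)) / (2 * sigma ** 2)

codebook = [(+1, +1, +1), (-1, -1, -1)]   # +/-1 form of a length-3 repetition code
print(approx_llr((0.4, -0.2, 0.9), codebook, 0, sigma=1.0))
```

Replacing the sums by their dominant terms turns the exponential-cost exact formula into two nearest-codeword searches, typically fed by the candidate list of the Chase decoder.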
