

SLIDE 1

Shuffled Belief Propagation Decoding

Juntan Zhang and Marc Fossorier

Department of Electrical Engineering University of Hawaii at Manoa Honolulu, HI 96816

SLIDE 2

Outline

Review of LDPC Codes Standard Belief Propagation Algorithm Shuffled Belief Propagation Algorithm Optimality and Convergence Parallel Shuffled Belief Propagation A Small Example Simulation Results Conclusion

SLIDE 3

Low-Density Parity Check (LDPC) Codes

  • First proposed by R. G. Gallager in the 1960s, and resurrected recently [Gallager-IRE62, MacKay-IT99].
  • Can achieve near-Shannon-limit performance with belief propagation (BP), or sum-product, decoding [Richardson-Urbanke-IT01].
  • Advantages over turbo codes: better distance properties; parallel decoding structure for high-speed decoders.
  • Disadvantages: encoding complexity is high; decoder complexity is high for a fully parallel implementation.

SLIDE 4

Representations of LDPC Codes

[Figure: an M x N sparse parity-check matrix H (a few 1s per row and column) and the equivalent bipartite (Tanner) graph, with N bit (variable) nodes on one side and M check nodes on the other.]

SLIDE 5
Regular and Irregular LDPC Codes

  • An LDPC code is regular if H has constant row weight and constant column weight, or equivalently, if the check nodes have constant degree dc and the variable nodes have constant degree dv.
  • An LDPC code is irregular if the row and column weights are not constant.
  • Irregular LDPC codes are defined by degree distributions.
  • Long irregular LDPC codes perform better than regular LDPC codes, and can beat turbo codes [Richardson-Urbanke-IT01].

SLIDE 6

Geometric LDPC Codes

  • Originally studied for majority-logic decoding decades ago, and constructed from finite geometries (Euclidean and projective geometries) [Weldon-Bell66, Rudolph-IT67].
  • The BP algorithm can be applied to the decoding of this family of codes [Lucas-Fossorier-Kou-Lin-COM00, Kou-Lin-Fossorier-IT01].
  • Encoding is easily implemented with shift registers, since the codes are cyclic.
  • They have very good minimum distance properties.
  • Decoding complexity is high.
SLIDE 7

An example: the (7, 3) DSC code:

$$H = \begin{pmatrix}
1&1&0&1&0&0&0\\
0&1&1&0&1&0&0\\
0&0&1&1&0&1&0\\
0&0&0&1&1&0&1\\
1&0&0&0&1&1&0\\
0&1&0&0&0&1&1\\
1&0&1&0&0&0&1
\end{pmatrix}$$

(the circulant whose rows are the cyclic shifts of the incidence vector of the perfect difference set {0, 1, 3} mod 7)

The parity-check matrix is square, and not full rank. There are equal numbers of bit nodes and check nodes, and the node degrees are larger.

SLIDE 8

Some one-step majority-logic decodable codes:

PG-LDPC codes (DSC):

  (N, K)         dmin   rate
  (7, 3)            4   0.429
  (21, 11)          6   0.524
  (73, 45)         10   0.616
  (273, 191)       18   0.700
  (1057, 813)      34   0.769
  (4161, 3431)     66   0.825

EG-LDPC codes:

  (N, K)         dmin   rate
  (15, 7)           5   0.467
  (63, 37)          9   0.587
  (255, 175)       17   0.686
  (1023, 781)      33   0.763
  (4095, 3367)     65   0.822

SLIDE 9

Processing in check nodes: Principles:

incoming messages + constraints ⇒ outgoing messages

$$z_{mn} = 2\tanh^{-1}\Big(\prod_{n' \in N(m)\setminus n} \tanh\big(L_{mn'}/2\big)\Big)$$

[Figure: check node m receives messages L_{mn'} from the bit nodes in N(m) and returns messages z_{mn}; each outgoing message excludes the incoming message on the same edge.]
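The check-node rule can be sketched in Python (an illustrative helper, not the authors' code; the clipping constant is an implementation choice to keep the inverse tanh finite):

```python
import math

def check_node_update(L_in):
    """Check-to-bit message z_mn = 2 * atanh( prod tanh(L/2) ),
    where L_in holds the incoming messages L_mn' from all bits in
    N(m) except the destination bit n."""
    prod = 1.0
    for L in L_in:
        prod *= math.tanh(L / 2.0)
    # clip so atanh stays finite when messages saturate
    prod = max(min(prod, 1.0 - 1e-12), -(1.0 - 1e-12))
    return 2.0 * math.atanh(prod)
```

A single uncertain input keeps the outgoing magnitude small even when the other inputs are confident, matching the intuition that a check is only as reliable as its least reliable participant.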

SLIDE 10

Processing in bit nodes:

$$z_{mn} = F_n + \sum_{m' \in M(n)\setminus m} L_{m'n}$$

$$z_{n} = F_n + \sum_{m \in M(n)} L_{mn} \quad \text{(for the hard decision)}$$

[Figure: bit node n receives messages L_{mn} from the check nodes in M(n), combines them with the channel value F_n, and returns messages z_{mn}.]
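The bit-node rule is a plain sum; a minimal sketch with illustrative names (`incoming` maps each check index in M(n) to its message):

```python
def bit_node_update(F_n, incoming, exclude=None):
    """Bit-to-check message z_mn = F_n plus the messages from all
    checks in M(n) except the destination check; with exclude=None
    this is the a-posteriori value z_n used for the hard decision."""
    return F_n + sum(L for m, L in incoming.items() if m != exclude)

def hard_decision(F_n, incoming):
    """Decide 0 when the a-posteriori LLR is non-negative, else 1."""
    return 0 if bit_node_update(F_n, incoming) >= 0 else 1
```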

SLIDE 11

Standard IDBP

  • Initialization
  • Step 1: update the belief matrix
  • Horizontal step: update the whole check-to-bit matrix
  • Vertical step: update the whole bit-to-check matrix
  • Step 2: hard decision and stopping test
  • Step 3: output the decoded codeword
SLIDE 12

Standard BP Algorithm; Step 1:

( i ) Horizontal step:

$$\varepsilon_{mn}^{(i)} = \log\frac{1 + \prod_{n' \in N(m)\setminus n}\tanh\big(z_{mn'}^{(i-1)}/2\big)}{1 - \prod_{n' \in N(m)\setminus n}\tanh\big(z_{mn'}^{(i-1)}/2\big)}$$

( ii ) Vertical step:

$$z_{mn}^{(i)} = F_n + \sum_{m' \in M(n)\setminus m}\varepsilon_{m'n}^{(i)}, \qquad z_{n}^{(i)} = F_n + \sum_{m \in M(n)}\varepsilon_{mn}^{(i)}$$
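Steps 1-3 can be combined into a small flooding-schedule decoder. This is a sketch, assuming H is given as a list of rows (each row a list of the column indices holding a 1), F is the vector of channel LLRs with positive values favouring bit 0, and all names are illustrative:

```python
import math

def standard_bp(H, F, max_iter=50):
    """Standard (flooding) BP: each iteration updates the whole
    check-to-bit matrix eps, then the whole bit-to-check matrix z,
    then makes a hard decision and tests the syndrome."""
    M, N = len(H), len(F)
    cols = [[m for m in range(M) if n in H[m]] for n in range(N)]
    z = {(m, n): F[n] for m in range(M) for n in H[m]}   # initialization
    eps = {}
    x = [0 if F[n] >= 0 else 1 for n in range(N)]
    for _ in range(max_iter):
        # horizontal step: whole check-to-bit matrix
        for m in range(M):
            for n in H[m]:
                p = 1.0
                for n2 in H[m]:
                    if n2 != n:
                        p *= math.tanh(z[(m, n2)] / 2.0)
                p = max(min(p, 1.0 - 1e-12), -(1.0 - 1e-12))
                eps[(m, n)] = 2.0 * math.atanh(p)
        # vertical step: whole bit-to-check matrix
        for n in range(N):
            for m in cols[n]:
                z[(m, n)] = F[n] + sum(eps[(m2, n)] for m2 in cols[n] if m2 != m)
        # hard decision and stopping test
        x = [0 if F[n] + sum(eps[(m, n)] for m in cols[n]) >= 0 else 1
             for n in range(N)]
        if all(sum(x[n] for n in H[m]) % 2 == 0 for m in range(M)):
            break
    return x
```

On a tiny test code this corrects a single unreliable position within a few iterations.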

SLIDE 13

Update the belief matrix in the i-th iteration with standard belief propagation decoding

[Figure: the belief matrices z^(i) and ε^(i) of the (7, 3) DSC example. In standard BP, every check-to-bit entry ε_mn^(i) is computed from the previous iteration's bit-to-check messages z_mn^(i-1), and then every bit-to-check entry z_mn^(i) is updated.]

SLIDE 14

Shuffled Belief Propagation

Initialization

Step 1: update the belief matrix, for n = 0, ..., N-1:

  Horizontal step: update the n-th column of the check-to-bit matrix

  Vertical step: update the n-th column of the bit-to-check matrix

Step 2: hard decision and stopping test

Step 3: output the decoded codeword

SLIDE 15

Shuffled Belief Propagation; Step 1:

( i ) Horizontal step:

$$\varepsilon_{mn}^{(i)} = \log\frac{1 + \prod_{\substack{n' \in N(m)\setminus n\\ n' < n}}\tanh\big(z_{mn'}^{(i)}/2\big)\prod_{\substack{n' \in N(m)\setminus n\\ n' > n}}\tanh\big(z_{mn'}^{(i-1)}/2\big)}{1 - \prod_{\substack{n' \in N(m)\setminus n\\ n' < n}}\tanh\big(z_{mn'}^{(i)}/2\big)\prod_{\substack{n' \in N(m)\setminus n\\ n' > n}}\tanh\big(z_{mn'}^{(i-1)}/2\big)}$$

(bits n' < n, already updated in the current iteration, contribute z^(i); bits n' > n still contribute z^(i-1))

( ii ) Vertical step:

$$z_{mn}^{(i)} = F_n + \sum_{m' \in M(n)\setminus m}\varepsilon_{m'n}^{(i)}, \qquad z_{n}^{(i)} = F_n + \sum_{m \in M(n)}\varepsilon_{mn}^{(i)}$$
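The serial schedule differs only in interleaving the two steps bit by bit; a sketch under the assumption that H is a list of rows of column indices and F holds the channel LLRs (names illustrative, not the authors' code):

```python
import math

def shuffled_bp(H, F, max_iter=50):
    """Shuffled BP: bits are processed in order n = 0..N-1; the
    check-to-bit messages of column n automatically see the already
    updated z of columns n' < n and the old z of columns n' > n."""
    M, N = len(H), len(F)
    cols = [[m for m in range(M) if n in H[m]] for n in range(N)]
    z = {(m, n): F[n] for m in range(M) for n in H[m]}
    eps = {(m, n): 0.0 for m in range(M) for n in H[m]}
    x = [0 if F[n] >= 0 else 1 for n in range(N)]
    for _ in range(max_iter):
        for n in range(N):
            # horizontal step, n-th column only
            for m in cols[n]:
                p = 1.0
                for n2 in H[m]:
                    if n2 != n:
                        p *= math.tanh(z[(m, n2)] / 2.0)  # mix of new and old z
                p = max(min(p, 1.0 - 1e-12), -(1.0 - 1e-12))
                eps[(m, n)] = 2.0 * math.atanh(p)
            # vertical step, n-th column only
            for m in cols[n]:
                z[(m, n)] = F[n] + sum(eps[(m2, n)] for m2 in cols[n] if m2 != m)
        x = [0 if F[n] + sum(eps[(m, n)] for m in cols[n]) >= 0 else 1
             for n in range(N)]
        if all(sum(x[n] for n in H[m]) % 2 == 0 for m in range(M)):
            break
    return x
```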

SLIDE 16

Update the belief matrix in the i-th iteration with shuffled belief propagation decoding

[Figure: the same belief matrices, now updated column by column; the ε^(i) entries of column n are computed from the already-updated z^(i) of columns n' < n and the old z^(i-1) of columns n' > n.]

SLIDE 17

Implementation of shuffled BP

  • Backward-forward implementation
  • Computation complexity
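The backward-forward implementation refers to the standard trick of computing, for each check of degree d, all d leave-one-out products of the tanh terms with one forward (prefix) and one backward (suffix) pass, i.e. O(d) instead of O(d^2) multiplications. A generic sketch (the function name is illustrative):

```python
def leave_one_out_products(t):
    """For t = [t_1, ..., t_d], return the d products of all entries
    except t_k, without division, via prefix and suffix products."""
    d = len(t)
    fwd = [1.0] * (d + 1)            # fwd[k] = t_0 * ... * t_{k-1}
    bwd = [1.0] * (d + 1)            # bwd[k] = t_k * ... * t_{d-1}
    for k in range(d):
        fwd[k + 1] = fwd[k] * t[k]
    for k in range(d - 1, -1, -1):
        bwd[k] = bwd[k + 1] * t[k]
    return [fwd[k] * bwd[k + 1] for k in range(d)]
```

In a check-node update, t would hold tanh(z_{mn'}/2) for all n' in N(m); entry k then feeds the message sent back to the k-th neighbouring bit.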

SLIDE 18

Optimality and Convergence Property of Shuffled BP

Assuming the Tanner graph of the code is connected and acyclic:

  • Shuffled BP is optimal in the MAP sense.
  • Shuffled BP converges faster than (or at least as fast as) standard BP.

SLIDE 19

Parallel Shuffled BP

Divide the N bits into G groups; each group contains N/G bits (a regular partition). Within each group the updates are processed in parallel, and the G groups are processed sequentially.

SLIDE 20

Parallel Shuffled Belief Propagation

( i ) Horizontal step (for bit n in group g, with N_G = N/G and g = ⌊n/N_G⌋):

$$\varepsilon_{mn}^{(i)} = \log\frac{1 + \prod_{\substack{n' \in N(m)\setminus n\\ n' < gN_G}}\tanh\big(z_{mn'}^{(i)}/2\big)\prod_{\substack{n' \in N(m)\setminus n\\ n' \ge gN_G}}\tanh\big(z_{mn'}^{(i-1)}/2\big)}{1 - \prod_{\substack{n' \in N(m)\setminus n\\ n' < gN_G}}\tanh\big(z_{mn'}^{(i)}/2\big)\prod_{\substack{n' \in N(m)\setminus n\\ n' \ge gN_G}}\tanh\big(z_{mn'}^{(i-1)}/2\big)}$$

(bits in groups already processed contribute the current iteration's z^(i); bits in the current and later groups contribute z^(i-1))

( ii ) Vertical step:

$$z_{mn}^{(i)} = F_n + \sum_{m' \in M(n)\setminus m}\varepsilon_{m'n}^{(i)}, \qquad z_{n}^{(i)} = F_n + \sum_{m \in M(n)}\varepsilon_{mn}^{(i)}$$
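A sketch of the group-shuffled schedule under the same assumed layout (H as rows of column indices, F as channel LLRs; names illustrative); with G = 1 it reduces to standard BP and with G = N to shuffled BP:

```python
import math

def group_shuffled_bp(H, F, G, max_iter=50):
    """Group shuffled BP: the N bits are split into G contiguous
    groups; columns inside a group are updated in parallel (from the
    same message snapshot), and the groups are processed sequentially."""
    M, N = len(H), len(F)
    NG = N // G
    cols = [[m for m in range(M) if n in H[m]] for n in range(N)]
    z = {(m, n): F[n] for m in range(M) for n in H[m]}
    eps = {(m, n): 0.0 for m in range(M) for n in H[m]}
    x = [0 if F[n] >= 0 else 1 for n in range(N)]
    for _ in range(max_iter):
        for g in range(G):
            group = range(g * NG, N if g == G - 1 else (g + 1) * NG)
            new_eps = {}
            for n in group:          # "parallel": all use the same snapshot of z
                for m in cols[n]:
                    p = 1.0
                    for n2 in H[m]:
                        if n2 != n:
                            p *= math.tanh(z[(m, n2)] / 2.0)
                    p = max(min(p, 1.0 - 1e-12), -(1.0 - 1e-12))
                    new_eps[(m, n)] = 2.0 * math.atanh(p)
            eps.update(new_eps)
            for n in group:
                for m in cols[n]:
                    z[(m, n)] = F[n] + sum(eps[(m2, n)] for m2 in cols[n] if m2 != m)
        x = [0 if F[n] + sum(eps[(m, n)] for m in cols[n]) >= 0 else 1
             for n in range(N)]
        if all(sum(x[n] for n in H[m]) % 2 == 0 for m in range(M)):
            break
    return x
```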

SLIDE 21

Update the belief matrix in the i-th iteration with group shuffled BP decoding

[Figure: the same belief matrices, updated one group of columns at a time; the columns within a group are updated in parallel, and the groups are processed sequentially.]

SLIDE 22

A Small Example: LDPC (6,2) code

Parity Check Matrix Tanner Graph

            = 1 1 1 1 1 1 1 1 1 H

SLIDE 23

Decoding with Standard BP

[Table: for each bit node y1, ..., y6 and each check node s1, ..., s4, the set of channel observations contributing to its outgoing message, at initialization and after each iteration. With standard BP, decoding converges after 4 iterations.]

SLIDE 24

Decoding with Shuffled BP

[Table: the same message-set trace for shuffled BP; decoding converges after 2 iterations.]

SLIDE 25

Decoding with Group Shuffled BP

[Table: the same message-set trace for group shuffled BP with G = 2; decoding converges after 3 iterations.]

SLIDE 26

Comparison of Speed of Convergence

Number of iterations I needed by each of the six bits of the small example:

  Standard BP (G = 1):         I = [2 3 3 4 3 4]
  Group shuffled BP (G = 2):   I = [2 3 3 3 2 3]
  Shuffled BP (G = 6):         I = [2 2 2 2 2 2]

SLIDE 27

Pe of LDPC (8000,4000)(3,6) code with shuffled and standard BP

[Figure: probability of error vs Eb/No (1.0 to 1.8 dB), log scale from 10^-6 to 10^-1. Curves: standard BP with 20 iterations, shuffled BP with 20 iterations, standard BP with 2000 iterations, shuffled BP with 2000 iterations.]

SLIDE 28

Average Number of Iterations

[Figure: average number of iterations (8 to 20) vs Eb/No (1.0 to 1.6 dB) for the standard BP and shuffled BP algorithms.]

SLIDE 29

Pe of LDPC(8000,4000)(3,6) with Group Shuffled BP decoding

[Figure: probability of block error vs Eb/No (1.0 to 1.8 dB), log scale from 10^-4 to 10^-1, for G = 1 (standard BP), G = 2, G = 8, G = 100, and G = 8000 (shuffled BP).]

SLIDE 30

Average Number of Iterations

[Figure: average number of iterations (6 to 20) vs Eb/No (1.0 to 1.8 dB) for G = 1 (standard BP), G = 2, G = 8, G = 100, and G = 8000 (shuffled BP).]

SLIDE 31

Conclusion

Shuffled BP achieves a good trade-off between performance and complexity.

Group shuffled BP can decrease decoding delay and is suitable for hardware implementation.