Linear-Programming Decoding of Tanner Codes with Local-Optimality Certificates



SLIDE 1

Linear-Programming Decoding of Tanner Codes with Local-Optimality Certificates

Nissim Halabi Guy Even

School of Electrical Engineering, Tel-Aviv University

July 6, 2012

1/16

SLIDE 2

Communication Over a Noisy Channel

[Block diagram: message u ∈ {0, 1}k → Channel Encoder → codeword c ∈ C ⊂ {0, 1}N → Noisy Channel → noisy codeword λ(y) ∈ RN → Channel Decoder → ĉ ∈ {0, 1}N, û ∈ {0, 1}k]

MBIOS channel: memoryless, binary-input, output-symmetric.
Log-Likelihood Ratio (LLR): λi(yi) ≜ ln ( Pr(yi | ci = 0) / Pr(yi | ci = 1) )

  • Linear Code: C ⊆ {0, 1}N is a subspace of F2^N of dimension k.

Optimal decoding: Maximum-Likelihood (ML) decoding.
Input: y. Output: ml(y).

ml(y) ≜ arg max x∈C Pr{y | c = x} = arg min x∈C ⟨λ(y), x⟩
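The two equivalent forms above can be made concrete with a brute-force sketch (illustrative only; the helper names `bsc_llr` and `ml_decode` are not from the slides):

```python
import math

def bsc_llr(y, p):
    """LLR for a BSC with crossover probability p:
    lambda_i = ln(Pr(y_i | c_i = 0) / Pr(y_i | c_i = 1))."""
    L = math.log((1 - p) / p)
    return [L if yi == 0 else -L for yi in y]

def ml_decode(code, lam):
    """Brute-force ML decoding: arg min over x in C of <lambda, x>."""
    return min(code, key=lambda x: sum(li * xi for li, xi in zip(lam, x)))
```

For the length-3 repetition code {(0,0,0), (1,1,1)} and y = (1, 0, 1) over BSC(0.1), the minimizer is (1, 1, 1). Exhaustive search is of course exponential in k; the point is only to make the objective ⟨λ(y), x⟩ concrete.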

2/16

SLIDE 3

Tanner Graphs and Tanner Codes

[Figure: Tanner graph G = (V ∪ J, E) — variable nodes V = {v1, …, v10} (with bits x1, …, x10) on one side, local-code nodes J = {C1, …, C5} on the other]

Tanner code C(G, CJ) represented by a bipartite graph:
  • x ∈ C(G, CJ) iff, for every j ∈ {1, …, J}, the projection of x onto the neighborhood of local-code node j lies in Cj
  • degrees: can be regular, irregular, bounded, or arbitrary
  • arbitrary linear local codes are allowed
  • minimum local distance d∗ ≜ minj dj
  • Examples: LDPC codes [Gallager’63], expander codes [Sipser-Spielman’96]
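The membership condition can be sketched in a few lines (a toy illustration, not the paper's code; `tanner_member` and the length-4 example are assumptions for demonstration):

```python
import itertools

def tanner_member(x, local_codes):
    """x is in C(G, C_J) iff, for every local-code node j, the projection of x
    onto j's neighborhood lies in the local code C_j.
    local_codes: list of (neighborhood, set-of-local-codewords) pairs."""
    return all(tuple(x[i] for i in nbhd) in words for nbhd, words in local_codes)

# Toy example: length-4 code whose local codes are two overlapping
# single parity checks on coordinates {0,1,2} and {1,2,3}.
spc3 = {t for t in itertools.product((0, 1), repeat=3) if sum(t) % 2 == 0}
local = [((0, 1, 2), spc3), ((1, 2, 3), spc3)]
```

With LDPC codes every local code is a single parity check; expander codes use stronger local codes on the same interface.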

3/16

SLIDE 4

Linear Programming (LP) Decoding

conv(X) ⊆ RN — the convex hull of a set of points X ⊆ RN.

ML-decoding can be rephrased: ml(y) ≜ arg min x∈conv(C) ⟨λ(y), x⟩

Generalized fundamental polytope of a Tanner code C(G, CJ):
  • relaxation of conv(C) [following Feldman-Wainwright-Karger’05]
  • P(G, CJ) ≜ ∩ Cj∈CJ conv(Cj)

LP-decoding: lp(y) ≜ arg min x∈P(G,CJ) ⟨λ(y), x⟩
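For single parity-check local codes (the [Feldman-Wainwright-Karger’05] setting), conv(Cj) has a well-known explicit description by "forbidden-set" inequalities. A sketch of generating them (helper names are illustrative, not from the slides):

```python
import itertools

def spc_polytope_constraints(nbhd):
    """Inequalities describing conv(C_j) for a single parity-check local code
    over the variable indices in nbhd: for every odd-size S subset of nbhd,
        sum_{i in S} x_i - sum_{i in nbhd \\ S} x_i <= |S| - 1.
    Returns (coeffs, bound) pairs, with coeffs as {index: +1 or -1}."""
    cons = []
    for r in range(1, len(nbhd) + 1, 2):          # odd subset sizes
        for S in itertools.combinations(nbhd, r):
            coeffs = {i: (1 if i in S else -1) for i in nbhd}
            cons.append((coeffs, r - 1))
    return cons

def satisfies(x, cons):
    """Check a (possibly fractional) point against the inequalities."""
    return all(sum(c * x[i] for i, c in coeffs.items()) <= b
               for coeffs, b in cons)
```

Intersecting these constraint sets over all local codes, together with 0 ≤ xi ≤ 1, yields P(G, CJ); handing them to any LP solver with objective ⟨λ, x⟩ gives lp(y). The number of inequalities is exponential in the check degree, which is acceptable for bounded degrees.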

4/16

SLIDE 5

LP Decoding with ML Certificate

LP-decode(λ):
  solve LP: x̂lp ← arg min x∈P(G,CJ) ⟨λ, x⟩
  if x̂lp ∈ {0, 1}N then
    return x̂lp (an ML codeword)
  else
    return fail
  end if

Polynomial-time algorithm. Applies to any MBIOS channel!
Integral solution ⇒ ML-certificate

5/16

SLIDE 6

Goal: Analysis of Finite Length Codes

Problem (Finite-Length Analysis)
Design: a constant-rate code C(G, CJ) and an efficient decoding algorithm dec.
Analyze: if SNR > t, then Pr(dec(λ) ≠ x | c = x) ≤ exp(−N^α) for some α > 0.
Goal: minimize t (a lower bound on the SNR).
Remarks:
  • Not an asymptotic problem
  • The code is not chosen randomly from an ensemble
  • Successful decoding = ML decoding

6/16

SLIDE 7

Certificate for ML-Optimality / LP-Optimality

Problem (Optimality Certificate)
Input: channel observation λ and a codeword x ∈ C.
Question 1: Is x ML-optimal with respect to λ? Is it unique? (NP-hard)
Question 2: Is x LP-optimal with respect to λ? Is it unique?
Relax: a one-sided-error test
  • A positive answer = a certificate for the optimality of x w.r.t. λ
  • A negative answer = unknown whether optimal or not (one-sided error allowed)
Prefer: an efficient test via local computations ⇒ “Local-Optimality” criterion

7/16

SLIDE 8

Definition of Local-Optimality

[Feldman’03] For x ∈ {0, 1}N and f ∈ [0, 1]N ⊆ RN, define the relative point x ⊕ f by (x ⊕ f)i ≜ |xi − fi|.
Consider a finite set of “deviations” B ⊂ [0, 1]N.

Definition (following [Arora-Daskalakis-Steurer’09])
A codeword x ∈ C is locally optimal w.r.t. λ ∈ RN if for all vectors β ∈ B: ⟨λ, x ⊕ β⟩ > ⟨λ, x⟩

Goal: find a set B such that:
  1. x ∈ lo(λ) ⇒ x = ml(λ) and unique
  2. x ∈ lo(λ) ⇒ x = lp(λ) and unique
  3. Prλ{x ∈ lo(λ) | c = x} = 1 − o(1)

[Diagram: the sets ML(λ), LP(λ), LO(λ) for a given λ]
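Given an explicit finite deviation set B, the definition translates directly into code (a naive enumeration sketch; for the structured sets B used in the following slides, the test is performed by dynamic programming rather than enumeration):

```python
def relative_point(x, f):
    """(x ⊕ f)_i = |x_i - f_i|  [Feldman'03]."""
    return [abs(xi - fi) for xi, fi in zip(x, f)]

def locally_optimal(x, lam, deviations):
    """x is locally optimal w.r.t. lam iff
    <lam, x ⊕ β> > <lam, x> for every deviation β in B."""
    def cost(v):
        return sum(li * vi for li, vi in zip(lam, v))
    base = cost(x)
    return all(cost(relative_point(x, b)) > base for b in deviations)
```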

8/16

SLIDE 9

Set B: Projections of Normalized Weighted Subtrees in Computation Trees of the Tanner Graph

Construction of a deviation:
  1. Tanner graph G.
  2. Computation tree of G: height 2h, rooted at a variable node r.
  3. d-tree: a subtree T of the computation tree in which every local-code node has degree d (e.g., a 3-tree for d = 3).
  4. Weight function wT : V̂(T) → R.
  5. Projection of the weighted d-tree to the Tanner graph: the deviation β ∈ RN+ is the projected assignment βv to the variable nodes.

9/16

SLIDE 10

Local-Optimality Based on the Deviation Set B(w)d

Set of deviations B(w)d ≜ projections of w-weighted d-trees:
B(w)d ≜ { β ∈ RN | ∃ w-weighted d-tree T : β = π(T) }

Definition
A codeword x ∈ C is (h, w, d)-locally optimal w.r.t. λ ∈ RN if for all vectors β ∈ B(w)d: ⟨λ, x ⊕ β⟩ > ⟨λ, x⟩

  • 2h: tree height (h levels)
  • w ∈ Rh+: tree level weights
  • d ≥ 2: local-code node degree

10/16

SLIDE 11

d-Trees Get “Fatter” As d Increases

[Figure: a 2-tree (skinny tree [ADS’09]), a 3-tree, and a 4-tree]

Over an MBIOS channel, the probability of a local-optimality certificate increases as deviations become denser.

11/16

SLIDE 12

Local Optimality ⇒ unique ML-codeword

Theorem
Let d ≥ 2. If x is (h, w, d)-locally optimal w.r.t. λ, then x is the unique ML-codeword w.r.t. λ.
  • hard: is x the unique ML-codeword?
  • easy: is x locally optimal? (dynamic programming)
Proof method:
Lemma (Decomposition Lemma)
Every codeword is a conic combination of projections of weighted d-trees in computation trees of G:
x = α · Eβ∼ρ[β], for some distribution ρ over B(w)d and α > 0

Following [ADS’09]: decomposition lemma ⇒ unique ML
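The "easy" direction can be illustrated with a toy dynamic program. Since ⟨λ, x ⊕ β⟩ − ⟨λ, x⟩ = ⟨λ′, β⟩ with λ′i = (−1)^{xi}·λi, local optimality amounts to the minimum deviation cost being strictly positive at every root. The explicit-tree encoding, the level-weight convention, and the omitted degree normalizations below are simplifying assumptions, not the paper's exact definitions:

```python
def min_deviation_cost(node, lam_signed, w, d, depth=0):
    """Minimum of <lam_signed, β> over d-tree deviations in an explicitly
    given computation tree. Variable node: contributes its level-weighted
    signed LLR and recurses on all check children. Check node: keeps only
    its d-1 cheapest variable children (the adversary's best choice)."""
    if node['type'] == 'var':
        own = w[depth // 2] * lam_signed[node['index']]
        return own + sum(min_deviation_cost(c, lam_signed, w, d, depth + 1)
                         for c in node['children'])
    # check node: choose the d-1 children of minimum total cost
    costs = sorted(min_deviation_cost(c, lam_signed, w, d, depth + 1)
                   for c in node['children'])
    return sum(costs[:d - 1])
```

Running the recursion from every variable node r and checking strict positivity yields the certificate; with memoization it runs in time linear in the computation-tree size.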

12/16

SLIDE 13

Local Optimality ⇒ unique LP optimality

Theorem
If x is an (h, w, d)-locally optimal codeword w.r.t. λ, then x is also the unique optimal LP solution given λ.

Proof method:
  1. Use graph-cover decoding [Koetter-Vontobel’05]: in graph covers, realizations of the LP optimum and of the ML codeword coincide.
  2. Lemma: local optimality is invariant under lifting to covering graphs.
  3. The lift of a locally optimal codeword is the unique ML-codeword in the graph cover.

13/16

SLIDE 14

Local-Optimality for LP-decoding - Comparison

Previous works [KV’06] [ADS’09] [HE’11] vs. the current work:
  • Deviation height: previously h < (1/4)·girth(G), via local isomorphism; now h is unbounded, via a characterization using computation trees.
  • Regularity: previously regular Tanner graphs; now irregular Tanner graphs, with normalization factors added according to node degrees [Von’10].
  • Constraints: previously single parity-check codes; now linear codes, with a tighter relaxation for the generalized fundamental polytope (also in [Von’10]).
  • Deviations: previously “skinny”, locally satisfying the parity checks; now “fat”, not necessarily satisfying the local codes.
  • LP solution analysis: previously dual/primal LP and polyhedral analysis; now a reduction to ML via the characterization of graph-cover decoding.

14/16

SLIDE 15

Probabilistic Analysis for Regular Tanner Codes - Examples

Form of finite-length bounds: ∃c > 1. ∃t. ∀ noise < t: Pr{LP decoder fails} ≤ exp(−c^girth)
If girth = Θ(log N), then Pr{LP decoder fails} ≤ exp(−N^α) for some 0 < α < 1.
As N → ∞: t is a lower bound on the threshold of LP-decoding with a LO-certificate.

  • [Skachek-Roth’03]: iterative decoder; bit-flipping channel (worst case); expansion technique.
  • [Feldman-Stein’05]: LP decoder; bit-flipping channel (worst case); expansion technique.
  • Current work: LP decoder; MBIOS channels (average case); density evolution of a sum-min-sum random process.

Example: BSC(p) thresholds for (2, dR)-regular Tanner codes:
  • [Skachek-Roth’03]: dR ≫ 2, d∗ ≫ 2: piterat. > 0.0016
  • [Feldman-Stein’05]: dR ≫ 2, d∗ ≫ 2: plp > 0.0008
  • Current work: dR = 16, d∗ = 4, rate = 0.375: plp > 0.044

15/16

SLIDE 16

Summary

Conclusions
Follows a line of works based on combinatorial characterizations of local optimality [KV’06] [ADS’09] [Von’10] [HE’11]:
  1. A new combinatorial characterization of local optimality for irregular Tanner codes
  2. Local-opt. ⇒ ML-opt.
  3. Local-opt. ⇒ LP-opt.
  4. Efficiently computed certificate (dynamic programming)
  5. Upper bounds on the word error probability of LP-decoding

Open questions
  • Prove bounds on noise thresholds for LP-decoding that are better than plp ≈ 0.05 for rate-1/2 codes [ADS’09]
  • Probabilistic analysis for irregular Tanner codes
  • Probabilistic analysis beyond the girth

16/16