

SLIDE 1

Linear time list recovery via expander codes

Brett Hemenway and Mary Wootters June 17 2016

SLIDE 2

Outline

Introduction List recovery Expander codes List recovery of expander codes Conclusion

SLIDE 3

Our Results

One slide version

◮ Inner code + expander graph ⇒ Expander code
◮ [SS96, Spi96, Zem01, BZ02, BZ05, BZ06]: Inner code has decent distance ⇒ So does the expander code, and it's decodable in linear time!
◮ [HOW13]: Inner code has decent locality ⇒ So does the expander code, and it's locally decodable in sub-linear time!
◮ Moral? Inner code has a decent property P ⇒ So does the expander code, and there's an efficient algorithm!
◮ This work: Inner code has decent list-recoverability ⇒ So does the expander code, and it's list-recoverable in linear time!

SLIDE 8

Our Results

One Two slide version

Inner code has decent list-recoverability ⇒ So does the expander code, and it's list-recoverable in linear time!

◮ List-recoverable: decodable from uncertainty (instead of errors).
◮ Known linear-time list-recoverable codes have rate < 1/2.
◮ Expander codes can have rate 1 − ε!
◮ We can plug our codes into [Meir'14] and get the optimal∗ rate/error trade-off, for any rate.

SLIDE 12

Outline

Introduction List recovery Expander codes List recovery of expander codes Conclusion

SLIDE 13

List Decoding

SLIDE 14

List Decoding

G O O D T I M E F O R P I E

SLIDE 15

List Decoding

G O O D T I M E F O R P I E X A C U O V R T

SLIDE 16

List Decoding

GOODTIMEFORPIE GOATSATEALLPIE

SLIDE 17

List Recovery

G O O D T I M E F O R P I E
M Y W H A T B I G T E E T H
L I V E A N D L E T L I V E

GOODTIMEFORPIE MYWHATBIGTEETH LIVEANDLETLIVE

SLIDE 20

List Recovery From Erasures

G O O D T I M E F O R P I E
M Y W H A T B I G T E E T H
L I V E A N D L E T L I V E

? ? ? ? (some positions erased)

GOODTIMEFORPIE MYWHATBIGTEETH LIVEANDLETLIVE

SLIDE 21

List Recovery

Definition

C ⊆ Σ^N is (α, ℓ, L)-list-recoverable (from erasures) if:

◮ for any set of lists S_1, …, S_N such that at least αN of them have size ≤ ℓ,
◮ there are at most L codewords c ∈ C so that c_i ∈ S_i for all i.

◮ α: fraction of the codeword not erased
◮ ℓ: number of symbols in each slot
◮ L: number of codewords that match

Note: L ≥ ℓ.
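The definition can be exercised by brute force on a toy code. A minimal sketch (the [4, 3] even-weight code and the lists below are made-up for illustration, not from the talk):

```python
from itertools import product

# Toy code: all even-weight words of length 4 over {0, 1}.
code = [c for c in product([0, 1], repeat=4) if sum(c) % 2 == 0]

def matching_codewords(code, lists):
    """Codewords c with c[i] in lists[i] at every non-erased position i.
    An empty set marks an erased position and matches anything."""
    return [c for c in code
            if all(not S or c[i] in S for i, S in enumerate(lists))]

# Positions 0-2 carry lists of size <= 2 (so alpha = 3/4, ell = 2);
# position 3 is erased.
lists = [{0}, {0, 1}, {1}, set()]
L_out = matching_codewords(code, lists)
print(L_out)  # the codewords consistent with the lists
```

Here the output list has size 2, so for this input L = 2 suffices; checking (α, ℓ, L)-list-recoverability in general means running this over all admissible choices of lists.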

SLIDE 23

List decoding concatenated codes

An application of list recovery

[Figure: a message is encoded with the outer code Cout; each outer symbol is then encoded with the inner code Cin. To decode: list decode each Cin block to get a small list per position, then list recover Cout from those lists, obtaining a list of messages.]

The concatenated code is list decodable.
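The pipeline can be sketched end to end with toy codes (both codes below are made-up stand-ins chosen for brevity, not the talk's constructions):

```python
# Toy inner code: 4 messages -> 5-bit codewords, pairwise distance >= 3.
INNER = {0: "00000", 1: "11100", 2: "00111", 3: "11011"}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def inner_list_decode(block, radius):
    """All inner messages whose codeword is within `radius` of the block."""
    return {m for m, cw in INNER.items() if hamming(block, cw) <= radius}

# Toy outer code: a handful of length-3 words over {0, 1, 2, 3}.
OUTER = [(0, 1, 2), (0, 3, 2), (3, 1, 0)]

def list_recover(lists):
    """Outer codewords consistent with the per-position lists."""
    return [c for c in OUTER if all(c[i] in lists[i] for i in range(len(c)))]

# A corrupted concatenated word: each 5-bit block has up to 2 flips.
received = ["00010", "11111", "00110"]
lists = [inner_list_decode(b, radius=2) for b in received]
decoded = list_recover(lists)
```

Each inner block list-decodes to a small list of outer symbols, and list recovery of the outer code then returns the (small) list of consistent messages.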

SLIDE 31

List Recovery

Applications

◮ Related to list decoding [GI02, GI03, GI04]
◮ Compressed sensing [NPR12, GNP+13]
◮ Group testing [INR10]
◮ Erasure model is weaker than the error model (the erasure model was studied before in [GI04])

SLIDE 32

Outline

Introduction List recovery Expander codes List recovery of expander codes Conclusion

SLIDE 33

Tanner Codes [Tanner’81]

Given:

◮ A d-regular graph G with n vertices and N = nd/2 edges
◮ An inner code C0 with block length d over Σ

We get a Tanner code C.

◮ C has block length N and alphabet Σ.
◮ Codewords are labelings of the edges of G.
◮ A labeling is in C if the labels at each vertex form a codeword of C0.
◮ (We fix an arbitrary ordering of the edges at each vertex.)
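The membership condition is just the local check at every vertex. A minimal sketch with made-up ingredients (G = K4 and C0 = the even-weight code of length 3, chosen for brevity, not the talk's example):

```python
from itertools import combinations, product

# Toy setup: G = K4 (4 vertices, 6 edges, 3-regular);
# inner code C0 = even-weight words of length 3 over {0, 1}.
vertices = range(4)
edges = list(combinations(vertices, 2))  # 6 edges
C0 = {w for w in product((0, 1), repeat=3) if sum(w) % 2 == 0}

def in_tanner_code(labeling):
    """labeling: dict edge -> bit. A labeling is a codeword iff the labels
    on the edges at every vertex form a codeword of C0 (under a fixed
    ordering of each vertex's edges)."""
    for v in vertices:
        local = tuple(labeling[e] for e in edges if v in e)
        if local not in C0:
            return False
    return True

# Brute-force the code over all 2^6 labelings.
tanner = [dict(zip(edges, bits)) for bits in product((0, 1), repeat=6)
          if in_tanner_code(dict(zip(edges, bits)))]
```

In this toy case the Tanner code is the cycle space of K4 (8 codewords, so k = 3, N = 6), consistent with the rate bound 2r0 − 1 = 1/3 ≤ 1/2 derived later in the talk.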

SLIDE 34

Example [Tanner’81]

G is K8, and C0 is the [7, 4, 3]-Hamming code.

N = (8 choose 2) = 28 and Σ = {0, 1}
SLIDE 35

Example [Tanner’81]

G is K8, and C0 is the [7, 4, 3]-Hamming code.

A codeword of C is a labeling of edges of G. red → 0 blue → 1

(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1) ∈ C ⊂ {0, 1}28

SLIDE 36

Example [Tanner’81]

G is K8, and C0 is the [7, 4, 3]-Hamming code.

These edges form a codeword in the Hamming code red → 0 blue → 1

(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1) ∈ C ⊂ {0, 1}28

SLIDE 37

Encoding Tanner Codes

Encoding is Easy!

1. Generate the parity-check matrix. Requires:
   ◮ Edge-vertex incidence matrix of the graph
   ◮ Parity-check matrix of the inner code
2. Calculate a basis for the kernel of the parity-check matrix.
3. This basis defines a generator matrix for the linear Tanner code.
4. Encoding is just multiplication by this generator matrix.
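The four steps can be sketched over GF(2) on a toy instance (K4 with the even-weight inner code, whose parity-check matrix is the all-ones row; these are illustrative choices, not the talk's parameters):

```python
import itertools

# Toy instance: G = K4; inner code = even-weight length-3 code with H0 = [1 1 1].
vertices = range(4)
edges = list(itertools.combinations(vertices, 2))
H0 = [[1, 1, 1]]

# Step 1: one copy of H0 per vertex, acting on that vertex's edge slots.
H = []
for v in vertices:
    incident = [j for j, e in enumerate(edges) if v in e]
    for row0 in H0:
        row = [0] * len(edges)
        for k, j in enumerate(incident):
            row[j] = row0[k]
        H.append(row)

# Steps 2-3: basis for the kernel of H over GF(2) -> generator matrix.
def kernel_basis(H, n):
    rows = [r[:] for r in H]
    pivots, r = [], 0
    for c in range(n):                     # Gaussian elimination mod 2
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                         # one basis vector per free column
        vec = [0] * n
        vec[f] = 1
        for i, c in enumerate(pivots):
            vec[c] = rows[i][f]
        basis.append(vec)
    return basis

Gmat = kernel_basis(H, len(edges))

# Step 4: encoding = message times the generator matrix (mod 2).
def encode(msg):
    return [sum(m * g[j] for m, g in zip(msg, Gmat)) % 2 for j in range(len(edges))]
```

For this instance the kernel has dimension 3 (rank of the K4 incidence matrix over GF(2) is 3), so the generator matrix has 3 rows.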
SLIDE 38

Linearity

If the inner code C0 is linear, so is the Tanner code C.

◮ C0 = Ker(H0) for some parity-check matrix H0:

x ∈ C0 ⇔ H0 · x = 0

◮ So codewords of the Tanner code C are also defined by linear constraints:

y ∈ C ⇔ for all v ∈ G, H0 · y|Γ(v) = 0

SLIDE 40

Example: vertex edge incidence matrix of K8

[Figure: the 8 × 28 edge-vertex incidence matrix of K8; 1 row for each vertex, 1 column for each edge]

◮ Columns have weight 2 (each edge hits two vertices)
◮ Rows have weight 7 (each vertex has degree seven)

SLIDE 41

Example: parity-check matrix of a Tanner code

K8 and the [7, 4, 3]-Hamming code

[Figure: the parity-check matrix of the Tanner code, built by applying the parity-check matrix of the Hamming code along the edge-vertex incidence matrix of K8]


slide-47
SLIDE 47

If the inner code has good rate, so does the outer code

Say that C0 is linear.

◮ If C0 has rate r0, it satisfies (1 − r0)d linear constraints.
◮ Each of the n vertices of G must satisfy these constraints.
◮ C is defined by at most n · (1 − r0)d constraints.
◮ Length of C: N = #edges = nd/2.
◮ The rate of C is R = k/N ≥ (N − n · (1 − r0)d)/N = 2r0 − 1.
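Written out, the bound follows by counting constraints, using N = nd/2:

```latex
R = \frac{k}{N}
  \ge \frac{N - n(1-r_0)d}{N}
  = 1 - \frac{n(1-r_0)d}{nd/2}
  = 1 - 2(1-r_0)
  = 2r_0 - 1 .
```

For example, with the [7, 4, 3] Hamming code as the inner code, r0 = 4/7 and the bound gives R ≥ 1/7.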

SLIDE 51

Better rate bounds?

◮ The lower bound R ≥ 2r0 − 1 is independent of the ordering of edges around a vertex.
◮ Tanner already noticed that order matters.

Let G be the complete bipartite graph with 7 vertices per side, and let C0 be the [7, 4, 3] Hamming code. Then different "natural" orderings achieve Tanner codes with parameters:

◮ [49, 16, 9] (rate 16/49 ≈ .327)
◮ [49, 12, 16] (rate 12/49 ≈ .245)
◮ [49, 7, 17] (rate 7/49 ≈ .142), which meets the lower bound 2 · (4/7) − 1 = 1/7

SLIDE 52

Expander codes

When the underlying graph is an expander graph, the Tanner code is an expander code.

◮ Expander codes admit very fast decoding algorithms [Sipser and Spielman 1996]
◮ Further improvements in [Spielman '96, Zemor '01, Barg and Zemor '02, '05, '06]

SLIDE 53

Outline

Introduction List recovery Expander codes List recovery of expander codes Conclusion

SLIDE 54

Linear-time list-recoverable codes

Theorem (Main Theorem)

◮ Suppose that C0 is (α0, ℓ, L0)-list recoverable from erasures.
◮ Suppose G is an expander graph of degree d (d needs to be big enough compared to ℓ, L0).
◮ Then the expander code C is (α, ℓ, L)-list recoverable from erasures.
◮ It can be list-recovered in linear time.
◮ Above, α and L are constants (independent of n).

SLIDE 55

Comparison with Previous Work

Code                                     | Rate      | Decoding time
Random code                              | 1 − ε     | lots
Folded RS (+friends) [GR08, Gur11, DL12] | 1 − ε     | poly(n)
[GI03, GI04]                             | 1/poly(ℓ) | O(n)
This work (random inner code)            | 1 − ε     | O(n)

◮ All list sizes L are constants (but some are huge constants)

SLIDE 56

Improving the rate/error tradeoff

Theorem (“Dream theorem” of Meir’14)

Constructing codes of rate → 1 with property P and decent distance ⇒ Constructing codes with P that approach the Singleton bound.

SLIDE 57

Improving the rate/error tradeoff

Theorem

For any R > 0, ℓ > 0, and ε > 0, and for any large enough L, d, q (depending only on ℓ, ε), there is a family of codes with:

◮ rate at least R ◮ (R + ε, ℓ, L)-list-recoverable in linear time.

SLIDE 58

Improving the rate/error tradeoff

The Or Meir Construction [Mei14]

[Figure: a message x ∈ F_q^{R1·m} is encoded by the outer code C1 to y ∈ F_q^m; y is split into blocks y1, …, yn, each in F_q^{R0·d}; each block yi is encoded by the inner code C0 to ci ∈ F_q^d; finally the symbols of c = (c1, …, cn) are redistributed according to the expander graph G.]

SLIDE 59

The algorithm

Suppose that C0 is list-recoverable.

[Figure: one vertex of the expander, with input lists S1, S2, …, Sd = {a, b} on its d incident edges, and the local list of candidate codewords c1, …, cL.]

1. List recover locally at this vertex: get {c1, …, cL}.
2. What if we know this symbol is "a"?
3. Then we know a bunch of other symbols too. There are ℓ^L possible "columns"; each choice fixes about d/ℓ^L others.

SLIDE 66

The algorithm

Suppose that C0 is list-recoverable.

◮ Make one decision.
◮ Determine a bunch of other edges.
◮ Make another decision.
◮ Determine some more edges.
◮ ...
◮ Correct the rest using the regular decoding algorithm.

Number of possibilities = ℓ^(number of decision edges) = ℓ^O(1).
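The decision-then-propagate loop can be sketched abstractly. A toy stand-in (the equivalence classes below are hard-coded and hypothetical; in the real algorithm propagation comes from the expander structure and the local list recovery):

```python
from itertools import product

# Toy: d = 8 positions; each "equivalence class" is a set of positions
# forced to carry the same symbol, a stand-in for the talk's propagation.
classes = [[0, 4], [1, 5], [2, 6], [3, 7]]
ell_symbols = "ab"  # ell = 2 possible symbols per decision

def enumerate_candidates():
    """One decision per class; propagation fills in the rest of the class."""
    for choice in product(ell_symbols, repeat=len(classes)):
        word = [None] * 8
        for cls, s in zip(classes, choice):
            for i in cls:        # propagate the decision across the class
                word[i] = s
        yield "".join(word)

cands = list(enumerate_candidates())  # ell ** (number of decisions) words
```

With a constant number of decisions, the number of candidate words is ℓ^O(1), matching the count on the slide.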

SLIDE 76

Why does propagation work?

Equivalence classes of edges

◮ L > ℓ, so choosing a single symbol won't determine the codeword.
◮ There are L codewords and ℓ choices per index, so there are at most ℓ^L possible "columns" (for each index i, the column is the map from codewords to symbols; there are at most ℓ^L maps [L] → [ℓ]).
◮ Edges are in the same equivalence class (with respect to a vertex) if knowing one determines all the others.
◮ Each vertex has d edges, so there are at most ℓ^L equivalence classes, and the average equivalence class has size d/ℓ^L.
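The counting can be seen on a toy local list. The sketch below uses a simplified version of the equivalence (positions with identical columns; knowing the true codeword's symbol at one such position determines it at the others), with made-up candidate codewords:

```python
# Toy local list: L = 3 candidate codewords of length d = 8,
# with lists of size ell = 2 at each position.
cands = ["abababab",
         "abbaabba",
         "aabbaabb"]
L, d, ell = len(cands), len(cands[0]), 2

# Column at position i: the tuple (c1[i], ..., cL[i]).
columns = [tuple(c[i] for c in cands) for i in range(d)]

# Group positions with the same column: learning the true symbol at one
# position in a class determines it at every other position in the class.
classes = {}
for i, col in enumerate(columns):
    classes.setdefault(col, []).append(i)

num_classes = len(classes)   # at most ell ** L
avg_size = d / num_classes   # at least d / ell ** L
```

Here there are 4 distinct columns among the 8 positions, so at most 2^3 = 8 classes, and the average class size is 2.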

SLIDE 77

Why does propagation work?

Large Equivalence Classes

◮ The average size of an equivalence class is d/ℓ^L.
◮ What if we get unlucky and pick an edge in a small equivalence class?
◮ By the expansion property of the graph, there is a large (constant-fraction) subset of edges that all have large equivalence classes.
◮ The probability that a uniformly chosen edge covers an ε(ε − λ) fraction of the graph is at least 1 − 3εℓ^L.

SLIDE 78

Results

◮ (R + η, ℓ, L)-list recoverable codes
  ◮ any R > 0
  ◮ any ℓ > 0
  ◮ any η > 0
  ◮ L depends only on ℓ, η
◮ Linear-time recovery algorithm

SLIDE 79

Outline

Introduction List recovery Expander codes List recovery of expander codes Conclusion

SLIDE 80

Moral of the story

Inner code has a decent property P ⇒ So does the expander code, and there's an efficient algorithm!

SLIDE 81

Moral of the story

Inner code has decent list-recoverability ⇒ So does the expander code, and it's list-recoverable in linear time!

Open Questions:

◮ What other properties can expander codes improve?
◮ Handle errors?
◮ Other applications with erasures?
◮ Better inner code? Better constants?

SLIDE 87

The end

Thanks!

SLIDE 88

References I

Alexander Barg and Gilles Zemor. Error exponents of expander codes. IEEE Transactions on Information Theory, 48(6):1725–1729, 2002.

Alexander Barg and Gilles Zemor. Concatenated codes: serial and parallel. IEEE Transactions on Information Theory, 51(5):1625–1634, 2005.

Alexander Barg and Gilles Zemor. Distance properties of expander codes. IEEE Transactions on Information Theory, 52(1):78–90, 2006.

Zeev Dvir and Shachar Lovett. Subspace evasive sets. In STOC '12, pages 351–358, 2012.

SLIDE 89

References II

Venkatesan Guruswami and Piotr Indyk. Near-optimal linear-time codes for unique decoding and new list-decodable codes over smaller alphabets. In STOC '02, pages 812–821. ACM, 2002.

Venkatesan Guruswami and Piotr Indyk. Linear time encodable and list decodable codes. In STOC '03, pages 126–135. ACM, 2003.

SLIDE 90

References III

Venkatesan Guruswami and Piotr Indyk. Efficiently decodable codes meeting Gilbert-Varshamov bound for low rates. In SODA '04, pages 756–757. SIAM, 2004.

Anna C. Gilbert, Hung Q. Ngo, Ely Porat, Atri Rudra, and Martin J. Strauss. ℓ2/ℓ2-foreach sparse recovery with low risk. In Automata, Languages, and Programming (ICALP), LNCS volume 7965, pages 461–472. Springer, 2013.

SLIDE 91

References IV

Venkatesan Guruswami and Atri Rudra. Concatenated codes can achieve list-decoding capacity. In SODA '08, pages 258–267. SIAM, 2008.

Venkatesan Guruswami. Linear-algebraic list decoding of folded Reed-Solomon codes. In IEEE Conference on Computational Complexity, pages 77–85, 2011.

Brett Hemenway, Rafail Ostrovsky, and Mary Wootters. Local correctability of expander codes. In ICALP, LNCS. Springer, 2013.

SLIDE 92

References V

Piotr Indyk, Hung Q. Ngo, and Atri Rudra. Efficiently decodable non-adaptive group testing. In SODA '10, pages 1126–1142. SIAM, 2010.

Or Meir. Locally correctable and testable codes approaching the Singleton bound. ECCC Report 2014-107, 2014.

Hung Q. Ngo, Ely Porat, and Atri Rudra. Efficiently decodable compressed sensing by list-recoverable codes and recursion. In STACS 2012, LIPIcs volume 14, pages 230–241. Schloss Dagstuhl, 2012.

SLIDE 93

References VI

Daniel A. Spielman. Linear-time encodable and decodable error-correcting codes. IEEE Transactions on Information Theory, 42(6):1723–1731, 1996.

Michael Sipser and Daniel A. Spielman. Expander codes. IEEE Transactions on Information Theory, 42(6):1710–1722, 1996.

Gilles Zemor. On expander codes. IEEE Transactions on Information Theory, 47(2):835–837, 2001.