A New Approach for Constructing Low-Error Two-Source Extractors - PowerPoint PPT Presentation




SLIDE 1

A New Approach for Constructing Low-Error Two-Source Extractors

Dean Doron (Tel-Aviv University)
Joint work with Avraham Ben-Aroya, Eshan Chattopadhyay, Xin Li, and Amnon Ta-Shma

SLIDE 2

Today’s talk

1. Two-source extractors and the low-error challenge.
2. Seeded and non-malleable extractors.
3. Current constructions of two-source extractors via non-malleable extractors, and where they fail in achieving small error.
4. Constructing low-error two-source extractors given “good” non-malleable extractors.


SLIDE 4

Two-source extractors


SLIDE 7

Two-source extractors

We say that a source X over {0,1}^n has min-entropy k if for every x, Pr[X = x] ≤ 2^{-k}. This is how we model weak sources of imperfect randomness. Alternatively, we can think of a weak source X as (a flat distribution on) a subset of {0,1}^n of cardinality 2^k.

Given two independent weak sources X1 and X2, we want to extract almost-uniform bits (potentially, almost all the entropy). We want to do this for small min-entropies and low error.

SLIDE 8

Two-source extractors

[Diagram: E : {0,1}^n × {0,1}^n → {0,1}^m takes X1 with H∞(X1) ≥ k1 and X2 with H∞(X2) ≥ k2, and outputs E(X1, X2) ≈ε Um.]

SLIDE 9

Two-source extractors

Known results for constant error:

  construction                 min-entropy
  Non-explicit                 log n + O(1)
  Chor-Goldreich 88            (½+δ)n
  Raz 05                       (½+δ)n, O(log n)
  Bourgain 05                  0.499n
  Chattopadhyay-Zuckerman 16   polylog(n)
  Ben-Aroya-Doron-Ta-Shma 17   log^{1+o(1)} n
  Cohen 17                     log n · poly(log log n)
  Li 17                        log n · log log n
  Li 18                        log n · o(log log n)


SLIDE 13

A closer look at the error

Non-explicitly, we can hope for ε = 2^{-Ω(k)}. We want the construction to run in time polylog(1/ε) and not poly(1/ε). Only the constructions of Chor-Goldreich, Raz, and Bourgain achieve this. We will soon see where recent constructions fall short.

  construction    min-entropy
  Non-explicit    log n + O(1)
  [CG88]          (½+δ)n
  [Raz05]         (½+δ)n, O(log n)
  [Bourgain05]    0.499n
  [CZ16]          polylog(n)
  [BDT17]         log^{1+o(1)} n
  [Cohen17]       log n · poly(log log n)
  [Li17]          log n · log log n
  [Li18]          log n · o(log log n)

SLIDE 14

Our goal: Low-error two-source extractors, even for δn min-entropy, for all constant δ.

(Preferably outputting many bits as well, but that often comes along with it…)

SLIDE 15

Today’s talk

1. Two-source extractors and the low-error challenge.
2. Seeded and non-malleable extractors.
3. Current constructions of two-source extractors via non-malleable extractors, and where they fail in achieving small error.
4. Constructing low-error two-source extractors given “good” non-malleable extractors.

SLIDE 16

Seeded extractors

A special case of unbalanced two-source extractors, in which one source is completely uniform (the seed). The seed length can be as small as d = 2log(n/ε).

[Diagram: E takes a source X over {0,1}^n and a uniform seed Ud, and outputs E(X, Ud) ≈ Um.]


SLIDE 19

Seeded extractors

We say a seeded extractor is strong if the output is uniform even given the seed: (E(X,Y), Y) ≈ε (U, Y). Equivalently, for every source X with min-entropy at least k there exists a set of good seeds of density at least 1-ε such that for every good seed y ∈ {0,1}^d, E(X, y) ≈ε U. We have good strong seeded extractors [LRVW03, GUV07, …].
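The "good seeds" view can be checked by brute force on a toy example (an illustration, not from the talk): take the inner-product function as a one-bit stand-in for E, and a structured flat source chosen so that the counts come out exactly. All parameters here are assumptions for the example.

```python
from itertools import product

n = 8
# Toy flat source: uniform on S = {x : first two bits are 0}, so |S| = 2^6.
S = [x for x in product([0, 1], repeat=n) if x[0] == 0 and x[1] == 0]

def ip(x, y):
    """Inner product mod 2 -- a toy one-bit 'extractor' E(x, y)."""
    return sum(a * b for a, b in zip(x, y)) % 2

eps = 0.1
good = 0
for y in product([0, 1], repeat=n):
    # bias of E(X, y) when X is uniform on S, measured as |2*Pr[output = 1] - 1|
    ones = sum(ip(x, y) for x in S)
    bias = abs(2 * ones / len(S) - 1)
    if bias <= eps:
        good += 1

# Seeds supported only on the two fixed coordinates give a constant output
# (bias 1); the remaining 252 of the 256 seeds are perfectly unbiased here.
print(good)  # 252
```

So for this source the good seeds have density 252/256 > 1 - ε, matching the equivalent definition.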

SLIDE 20

Seeded extractors

The δ < ½ barrier for constructing low-error two-source extractors can be morally explained by the following fact: an optimal seeded extractor, with seed length 2log(n/ε), already gives a two-source extractor for δ > ½ with exponentially small error. Our goal: low-error two-source extractors, even for δn min-entropy.


SLIDE 23

Non-malleable extractors [Dodis-Wichs 09]

A generalization of strong seeded extractors. An adversary cannot distinguish between the output nmE(X,Y) and a uniform string, even given the seed Y and the outputs of nmE on t correlated seeds:

(nmE(X,Y), nmE(X,f1(Y)), …, nmE(X,ft(Y)), Y) is ε-close to (U, nmE(X,f1(Y)), …, nmE(X,ft(Y)), Y).


SLIDE 26

Non-malleable extractors

[Diagram: with X over {0,1}^n and Y = Ud, the tuple (nmE(X, Y), nmE(X, f(Y)), Y) is compared to (Um, nmE(X, f(Y)), Y).]

SLIDE 27

Non-malleable extractors

Known explicit constructions for t = 1 (a partial list). A reduction by [Cohen16] allows us to go to an arbitrary t by roughly paying a factor of t in the entropy and t^2 in the seed length.

  construction       seed length                            min-entropy
  [CRS12, DLWZ11]    log(n/ε)                               (½+δ)n
  [Li12]             log(n/ε)                               0.499n
  [CGL15]            log^2(n/ε)                             Ω(d)
  [Cohen16]          log(n/ε)·log(log(n)/ε)                 Ω(d)
  [CL16]             log^{1+o(1)}(n/ε)                      Ω(d)
  [Cohen17]          log(n) + log(1/ε)·poly(loglog(1/ε))    Ω(d)
  [Li17]             log(n) + log(1/ε)·loglog(1/ε)          Ω(d)


SLIDE 30

Non-malleable extractors

We will use an equivalent definition (up to some loss in the error) [CZ16, Cohen16]: nmE is a non-malleable extractor if every k-source induces a set of good seeds of high density, such that the output of the extractor on a good seed is close to uniform even conditioned on its outputs on t other distinct seeds. For every X there exists a set G of density at least 1-ε such that for every y ∈ G and any y1, …, yt ∈ {0,1}^d \ {y} it holds that:
 (nmE(X,y), nmE(X,y1), …, nmE(X,yt)) ≈ε (U, nmE(X,y1), …, nmE(X,yt)).

SLIDE 31

Today’s talk

1. Two-source extractors and the low-error challenge.
2. Seeded and non-malleable extractors.
3. Current constructions of two-source extractors via non-malleable extractors, and where they fail in achieving small error.
4. Constructing low-error two-source extractors given “good” non-malleable extractors.


SLIDE 34

Current constructions of two-source extractors

All recent constructions of two-source extractors use non-malleable extractors as a central ingredient (the [CZ16] scheme). A bird's-eye view of these constructions: given two inputs x1 and x2,
1. Generate a table of nmE(x1, i) for all seeds i ∈ {0,1}^d.
2. Using x2, sample a subset of the rows.
3. Apply a resilient function to the reduced table.
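As a sketch only (not the actual construction), the steps above can be mocked up with toy stand-ins: a hash in place of a real non-malleable extractor, a hash-based sampler in place of the real sampler, and majority in place of the actual resilient function ([CZ16] use a derandomized Ajtai-Linial-style function). Every name and constant here is an assumption for illustration.

```python
import hashlib

D_BITS = 8  # toy seed length d; the table has D = 2^d rows

def nmE(x1: bytes, i: int) -> int:
    """Toy one-bit stand-in for a non-malleable extractor.
    (A real nmE has strong correlation guarantees; a hash does not.)"""
    h = hashlib.sha256(x1 + i.to_bytes(2, "big")).digest()
    return h[0] & 1

def sample_rows(x2: bytes, num: int) -> list:
    """Toy sampler: derive `num` row indices from x2."""
    h = hashlib.sha256(x2).digest()
    return [h[j] % (2 ** D_BITS) for j in range(num)]

def majority(bits) -> int:
    return int(sum(bits) * 2 > len(bits))

def cz_style_extractor(x1: bytes, x2: bytes) -> int:
    table = [nmE(x1, i) for i in range(2 ** D_BITS)]  # step 1: the table
    rows = sample_rows(x2, 31)                        # step 2: sample rows
    return majority([table[i] for i in rows])         # step 3: resilient-function stand-in

bit = cz_style_extractor(b"weak source one", b"weak source two")
print(bit)
```

The point of the scheme is that, for good x2, the sampled rows are mostly "good" and behave like t-wise independent bits, which is exactly what a resilient function needs.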


SLIDE 44

Current constructions of two-source extractors

[Diagram: x1 drawn from X1 generates the table nmE(x1, 1), nmE(x1, 2), …, nmE(x1, D); x2 drawn from X2 samples D' of the rows (e.g., nmE(x1, 3), nmE(x1, 7), …); a resilient function f maps the sampled rows to an output ≈ U1.]


SLIDE 49

Resilient functions

The sampled table is close to being uniform and t-wise independent in the good rows. We need f to be resilient: say we have D' players; an ε-fraction of them are malicious, and the rest are t-wise independent and uniform. The honest players draw their random bits first, and later the malicious players draw theirs as they wish. With high probability, the outcome has small bias: the malicious players cannot substantially bias the outcome.
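A concrete instance of this tension (an illustration, not from the slides): for the majority function, the influence of a single player can be computed exactly, and it indeed sits above the [KKL88] lower bound of order log(D')/D' that the next slide invokes.

```python
from math import comb, log2

# Influence of one player on MAJORITY over D' = 101 players: the probability
# that the other 100 honest bits split exactly 50/50, in which case a single
# malicious player alone decides the outcome.
D_prime = 101
pivotal = comb(D_prime - 1, (D_prime - 1) // 2) / 2 ** (D_prime - 1)

# [KKL88]: every balanced function on D' bits has some player with
# influence at least on the order of log(D') / D'.
kkl_order = log2(D_prime) / D_prime

print(f"{pivotal:.4f} >= {kkl_order:.4f}: {pivotal >= kkl_order}")
```

So even for majority, a single bad player flips the output with probability roughly 1/sqrt(D'), and KKL says no balanced function can push any player's influence below about log(D')/D'.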


SLIDE 53

The bottleneck

A corollary of [KKL88]: even one malicious player can bias the output with probability at least log D'/D'. Hence we cannot hope for an error smaller than 1/D', where D' is the size of our sampled table. Thus, the running time is at least 1/ε.
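Spelled out, the bottleneck is a two-line calculation using only the quantities on the slide:

```latex
\varepsilon \;\ge\; \frac{\log D'}{D'} \;\ge\; \frac{1}{D'}
\qquad\Longrightarrow\qquad
D' \;\ge\; \frac{1}{\varepsilon},
```

and any algorithm that even writes down the D'-row sampled table runs in time at least D' ≥ 1/ε, which rules out time polylog(1/ε) in this framework.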

SLIDE 54

Today’s talk

1. Two-source extractors and the low-error challenge.
2. Seeded and non-malleable extractors.
3. Current constructions of two-source extractors via non-malleable extractors, and where they fail in achieving small error.
4. Constructing low-error two-source extractors given “good” non-malleable extractors.


SLIDE 59

Getting a small error

We should abandon resilient functions if we want to get a small error. In current constructions, we need the sampled set to contain many good rows. Instead of trying to sample and then employ t-wise independence in the good rows, let us just try to hit a good row: a weaker sampling guarantee. We hit with a disperser.


SLIDE 62

Dispersers

Γ : {0,1}^n × [D] → {0,1}^m is a (K, K')-disperser if for every set A of cardinality at least K, Γ maps A to a set of cardinality greater than K': |A| ≥ K implies |Γ(A, [D])| > K'. We are interested in the case where K' is small compared to 2^m; that is, we want to avoid small bad sets.

SLIDE 63

Dispersers

[RT]: When K’ is not too large, say K’=εM, the lower bound on the degree is

A

|A| ≥ K

B

|Γ(A, [D])| > K0

D = Ω log N

K

log 1

ε

!

{0, 1}n = [N] {0, 1}m = [M]


SLIDE 67

Explicit disperser

Quite amazingly, when K = N^𝜀 for a constant 𝜀 < 1 (alternatively, for entropy k = 𝜀n), there exist explicit constructions that achieve this bound [BKSSW05, Raz05, Zuckerman06]. The key ingredient in Zuckerman's beautiful construction is a points-lines incidence graph. It also gives sub-optimal results for lower k's, where 𝜀 is sub-constant.


SLIDE 71

Our reduction

We are given a source X1 over {0,1}^{n1} with min-entropy k1 and a source X2 over {0,1}^{n2} with min-entropy k2. Ingredients:
- nmE : {0,1}^{n1} × [D] → {0,1}^m, a t-non-malleable extractor with error ε.
- Γ : {0,1}^{n2} × [t+1] → [D], a (εK2, εD)-disperser.
On input x1, x2, output ⊕_{i∈[t+1]} nmE(x1, Γ(x2, i)).
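The reduction is short enough to state as code. This is a hedged sketch in which nmE and Γ are toy hash-based stand-ins (constructing real instantiations is the whole point of the talk), and all parameters are hypothetical:

```python
import hashlib

T = 4          # t = 4, so the disperser has t + 1 = 5 neighbors (toy choice)
D = 2 ** 10    # seed space of the toy nmE
M_BITS = 8     # output length of the toy nmE, in bits

def nmE(x1: bytes, seed: int) -> int:
    """Toy stand-in for a t-non-malleable extractor (a real one has
    strong correlation guarantees; a hash is only an illustration)."""
    h = hashlib.sha256(x1 + seed.to_bytes(2, "big")).digest()
    return h[0]  # M_BITS = 8 output bits

def Gamma(x2: bytes, i: int) -> int:
    """Toy stand-in for the disperser: neighbor i of x2 in [D]."""
    h = hashlib.sha256(bytes([i]) + x2).digest()
    return int.from_bytes(h[:2], "big") % D

def two_source_ext(x1: bytes, x2: bytes) -> int:
    """The reduction: XOR the nmE outputs on the t+1 disperser neighbors of x2."""
    out = 0
    for i in range(T + 1):
        out ^= nmE(x1, Gamma(x2, i))
    return out

out = two_source_ext(b"first weak source", b"second weak source")
print(out)
```

Note there is no table and no resilient function: the output reads only t+1 rows, so the running time no longer scales with 1/ε.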


SLIDE 82

Our reduction

[Diagram: x1 from X1 over [N1] generates the table nmE(x1, 1), …, nmE(x1, D); x2 from X2 over [N2] picks t+1 rows via Γ (e.g., nmE(x1, 3), nmE(x1, 7), …), and their XOR is ≈ U1. No resilient functions here!]


SLIDE 86

Correctness overview

The source X1 defines a set of good and bad seeds for the non-malleable extractor. Let G be the set of good seeds, of density at least 1-ε. Γ is a (εK2, εD)-disperser, so the number of elements x2 for which Γ(x2, [t+1]) contains only bad seeds is at most εK2 (otherwise, this set of size εK2 would be mapped into the set of bad seeds, which has size at most εD, contradicting the disperser property). Thus, with probability at least 1 - εK2/K2 = 1-ε, the input x2 samples t+1 seeds of nmE, one of which, y, is good.


SLIDE 91

Correctness overview

In such a case, nmE(X, y) is ε-close to uniform, even conditioned on t arbitrary outputs! This is since, for every y ∈ G and any y1, …, yt ∈ {0,1}^d \ {y}, it holds that (nmE(X,y), nmE(X,y1), …, nmE(X,yt)) is ε-close to (U, nmE(X,y1), …, nmE(X,yt)). Hence, the parity of the sampled random variables is also close to uniform, and the overall error is 2ε.

(Recall: on input x1, x2, we output ⊕_{i∈[t+1]} nmE(x1, Γ(x2, i)).)

SLIDE 92

Our reduction

So, if the n.m. extractor can support small error (and existing constructions can), we get a construction with a small error.


SLIDE 96

Our reduction

The parity is not resilient… so what happened here? We proposed a different approach: instead of sampling D' rows from the table and applying a resilient function, we pick a drastically smaller sample set, of size t+1. Instead of requiring that the number of malicious players is small, we have the weaker requirement that not all of the players in our sample set are malicious.


SLIDE 99

But does it work?

Or rather, when does it work? We have no option but to look closer at the parameters. A potential circularity hazard:
- The degree of Γ should be at most t+1, but
- the degree of Γ also depends on the seed length of the non-malleable extractor, which in turn depends on t…
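To see why this need not be circular, here is purely illustrative arithmetic (not from the talk): the constants, parameter choices, and the decision to take the [RT]-style degree bound with constant 1 are all assumptions for the example.

```python
from math import log2, ceil

n1 = n2 = 1000          # input lengths (hypothetical)
k2 = 600                # min-entropy of X2 (k2 = 0.6 * n2)
eps = 2.0 ** -60        # target (low!) error

# [RT]-style degree requirement for a (eps*K2, eps*D)-disperser:
# degree = Omega(log(N2 / (eps*K2)) / log(1/eps)); take the constant to be 1.
required_degree = ceil((n2 - (k2 + log2(eps))) / log2(1 / eps))

# Choose t so that the t+1 disperser neighbors meet the requirement.
t = required_degree

# Seed length of an assumed "good" nmE: d = c * t * log(n1/eps), c small.
c = 0.1
d = c * t * log2(n1 / eps)

print(t, round(d, 1))  # t stays small, so the seed length stays modest
```

With these numbers t comes out as a small constant and d stays far below the entropy, so the dependence of d on t does not feed back into the degree requirement.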


SLIDE 103

Our result

We see that the seed length of the non-malleable extractor plays a crucial role. Say there exists an explicit n.m. extractor with seed length d that supports entropy k1. Our results: if d = c·t·log(n1/ε) for a small enough constant c, there exists an explicit two-source extractor with small error for entropies k1 and k2 = 𝜀n2 (for every constant 𝜀).

SLIDE 104

Our result

We see that the seed length of the n.m. extractor plays a crucial role. Say there exists an explicit n.m. extractor with seed length d that supports entropy k1. Our results: if d = t^ɣ·log(n1/ε) for a small enough constant ɣ, there exists an explicit two-source extractor with small error for entropies k1 and k2 = n2^β for some constant β.


SLIDE 107

Good n.m. extractors

Non-explicitly, our constraints on d are easily satisfied: the seed length of a probabilistic construction is d = 2log(n/ε) + O(log t). Taking a closer look at recent constructions of non-malleable extractors, we see that d = Ω(k) and k = Õ(t^2·log(n/ε)).


SLIDE 111

Good n.m. extractors

To summarize… Constructing low-error two-source extractors is a big challenge. Previous works: n.m. extractors with short seed length supporting small entropies give rise to good two-source extractors with constant error.

SLIDE 112

Good n.m. extractors

This work: n.m. extractors also give rise to two-source extractors with small error supporting polynomially-small min-entropy, as long as the seed length's dependence on t is good. The moral: keep constructing non-malleable extractors, using new techniques.

SLIDE 113

Thanks for listening.