

SLIDE 1

Secure Computation using Leaky Correlations (Asymptotically Optimal Constructions)

Alexander R. Block¹, Divya Gupta², Hemanta K. Maji¹, Hai H. Nguyen¹

¹Purdue University, {block9,hmaji,nguye245}@purdue.edu
²Microsoft Research, Bangalore, India, divya.gupta@microsoft.com

SLIDES 2–7

Correlated Private Randomness (Correlation)

- Preprocessing Phase: Alice receives rA and Bob receives rB, where (rA, rB) ∼ (RA, RB).
- Online Phase: the parties exchange messages mAlice and mBob to securely compute a functionality.

Example: Parties can use (rA, rB) to generate multiple samples of Oblivious Transfer in an online protocol, which can then be used to securely compute any circuit.

Notes:
- The preprocessing phase is independent of the functionality and of the inputs the parties feed to it.
- The secret shares (rA, rB) are vulnerable to arbitrary leakage attacks: a corrupt Alice may learn LAlice(rB), and a corrupt Bob may learn LBob(rA).

Question: Given such leakage attacks, how can we securely use the initial preprocessing?

SLIDE 8

Correlation Extractors (CorrExt)

- Introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai at FOCS 2009 [IKOS09] to address leakage attacks.
- Take leaky correlations as input and produce secure, independent copies of oblivious transfer (OT) (or randomized OTs).

SLIDES 9–13

(n, m, t, ε)-Correlation Extractor for (RA, RB)

- Preprocessing Phase: Alice receives rA and Bob receives rB, where (rA, rB) ∼ (RA, RB); each share is n bits.
- Leakage Phase: the adversary corrupts one party (sender corruption or receiver corruption) and obtains t bits of arbitrary leakage on the honest party's share.
- ε-Secure Online Phase: the parties exchange messages mAlice and mBob.
- Output Phase: the parties output fresh, independent samples ROT_1, ROT_2, …, ROT_m.

SLIDES 14–16

Correlation Extractors (CorrExt): which (RA, RB)?

Random Oblivious Transfer (ROT^{n/2}): for each i ∈ {1, …, n/2}, sample m_0^(i), m_1^(i), c^(i) ←$ {0,1}. Alice receives (m_0^(i), m_1^(i)) and Bob receives (c^(i), m_{c^(i)}^(i)); each party's overall share lies in {0,1}^n.

Random Oblivious Linear-function Evaluation (ROLE(F)^{n/2}): for each i ∈ {1, …, n/2}, sample a^(i), b^(i), x^(i) ←$ F and set z^(i) := a^(i)·x^(i) + b^(i). Alice receives (a^(i), b^(i)) and Bob receives (x^(i), z^(i)); each party's overall share lies in F^n.

Note: ROT ≡ ROLE(GF(2)), since m_c = (m_1 − m_0)·c + m_0.
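The ROT ≡ ROLE(GF(2)) note can be checked mechanically. A minimal sketch (ours, not from the slides), where subtraction over GF(2) is XOR:

```python
import random

def sample_rot():
    # One ROT sample: Alice gets random bits (m0, m1); Bob gets a random
    # choice bit c together with the chosen message m_c.
    m0, m1, c = (random.randint(0, 1) for _ in range(3))
    return (m0, m1), (c, (m0, m1)[c])

def role_gf2(a, b, x):
    # ROLE over GF(2): z = a*x + b, with + and - both being XOR.
    return (a & x) ^ b

# View each ROT sample as a ROLE(GF(2)) sample: a = m1 - m0, b = m0, x = c.
for _ in range(100):
    (m0, m1), (c, mc) = sample_rot()
    assert role_gf2(m0 ^ m1, m0, c) == mc
```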

SLIDES 17–23

Prior Work and Our Contribution

Result      Correlation               m              t             ε            # messages
[IKOS09]    ROT^{n/2}                 Θ(n)           Θ(n)          2^{-Θ(n)}    4
[GIMS15]    ROT^{n/2}                 n/poly log n   (1/4 - g)n    2^{-gn/m}    2
[GIMS15]    IP(GF(2)^n)*              1              (1/2 - g)n    2^{-gn}      2
[BMN17]     IP(K^{n/lg |K|})*         n^{1-o(1)}     (1/2 - g)n    2^{-gn}      2
Our Work    ROT^{n/2}                 Θ(n)           Θ(n)          2^{-Θ(n)}    2
Our Work    ROLE(F)^{n/(2 lg |F|)}    Θ(n)           Θ(n)          2^{-Θ(n)}    2
[BMN18]     IP(K^{n/lg |K|})*         Θ(n)           (1/2 - g)n    2^{-gn}      2

*The inner-product correlation IP(K^{n/lg |K|}) is a correlation in which each party gets a vector in K^{n/lg |K|} such that their vectors are orthogonal.

Notes: In an ongoing work, we reduce the communication complexity of our extractors from Θ(n log n) to Θ(n).

SLIDES 24–26

Main Results

Theorem (Asymptotically Optimal Correlation Extractor for ROT).
There exists a 2-message (n, m, t, ε)-correlation extractor for ROT^{n/2} such that m = Θ(n), t = Θ(n), and ε = 2^{-Θ(n)}.

The technical heart of this theorem is another correlation extractor, for ROLE(F).

Theorem (Asymptotically Optimal Correlation Extractor for ROLE(F)).
For all large enough constant-size fields F (e.g., |F| = 64), there exists a 2-message (n, m, t, ε)-correlation extractor for ROLE(F)^{n/(2 lg |F|)} such that m = Θ(n), t = Θ(n), and ε = 2^{-Θ(n)}.

SLIDES 27–29

Comparison of Concrete Efficiency I

We compare our CorrExt for ROLE(F) with the [BMN17] CorrExt for IP(K^{n/lg |K|}). The [BMN17] CorrExt achieves its highest production rate when using IP(GF(2^{n/44})), and achieves leakage rate t/n = (1/4 − g). We use ROLE(F) for F = GF(2^16) as the comparison point.

Production for various n:

n       [BMN17] CorrExt     Our CorrExt    Our CorrExt
        t/n = (1/4 − g)     t/n = 1%       t/n = 20%
10^3    66                  163            30
10^6    5,223               163,200        30,000
10^9    413,913             163,200,000    30,000,000

SLIDES 30–33

Comparison of Concrete Efficiency II

We compare our CorrExt for ROT^{n/2} with the [GIMS15] CorrExt for ROT^{n/2}.

- [GIMS15] trades off simulation error to achieve higher production by sampling the ROTs. Thus, to achieve negligible simulation error, the production is m = n/(4 log² n) with leakage rate t/n = 1%.
- Our CorrExt trades off leakage resilience to achieve higher production. This tradeoff is inevitable due to information-theoretic results.

Production for various n:

n       [GIMS15] CorrExt    Our CorrExt
        t/n = 1%            t/n = 1%
10^3    3                   42
10^6    625                 42,000
10^9    277,777             42,000,000

SLIDES 34–37

Construction Overview

Goal: Given leaky correlation ROT^{n/2}, Alice and Bob want to securely compute m/2 ROT samples.

Pipeline: ROT^{n/2} with t bits leaked → (Bilinear Multiplication) → n′ copies of ROLE(F) with t bits leaked → (EXT) → m′ copies of ROLE(F) → (BMN Embedding) → ROT^{m/2}.

- We use the well-known bilinear multiplication algorithms [CC87, TVZ82] to implement multiplications over F using multiplications over GF(2). Note that the n′ copies of ROLE(F) retain the same t-bit leakage!
- We use the [BMN17] embedding protocol to embed multiple samples of ROT into a single ROLE(F).
- The heart of our construction is the ROLE(F)-to-ROLE(F) correlation extractor (EXT).
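The bilinear-multiplication step can be illustrated on the smallest nontrivial case: one GF(4) product from three GF(2) (bit) products, Karatsuba-style. This toy sketch only illustrates the [CC87, TVZ82] idea; it is not the paper's algorithm or parameters:

```python
def gf4_mul_via_gf2(a, b):
    # One GF(4) product from three GF(2) multiplications.
    # Elements are coefficient pairs (a0, a1) meaning a0 + a1*X, with X^2 = X + 1.
    a0, a1 = a
    b0, b1 = b
    m1 = a0 & b0                   # the three GF(2) multiplications
    m2 = a1 & b1
    m3 = (a0 ^ a1) & (b0 ^ b1)
    return (m1 ^ m2, m1 ^ m3)      # everything else is XOR (GF(2) addition)

def gf4_mul_schoolbook(a, b):
    # Direct expansion of (a0 + a1*X)(b0 + b1*X), reducing X^2 to X + 1.
    a0, a1 = a
    b0, b1 = b
    return ((a0 & b0) ^ (a1 & b1),
            (a0 & b1) ^ (a1 & b0) ^ (a1 & b1))

elems = [(i, j) for i in (0, 1) for j in (0, 1)]
assert all(gf4_mul_via_gf2(a, b) == gf4_mul_schoolbook(a, b)
           for a in elems for b in elems)
```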

SLIDE 38

(n′, m′, t, ε)-ROLE(F)-to-ROLE(F) CorrExt

Given a finite field F:
- Preprocessing Phase: (rA, rB) ∼ ROLE(F)^{n′/2}; each share consists of n′ elements of F.
- Leakage Phase: the adversary corrupts one party (sender corruption or receiver corruption) and obtains t bits of leakage on the honest party's share.
- ε-Secure Online Phase: the parties exchange messages mAlice and mBob.
- Output Phase: the parties output fresh, independent samples ROLE_1, ROLE_2, …, ROLE_{m′} of ROLE(F).

SLIDES 39–44

Our ROLE(F)-to-ROLE(F) CorrExt Construction

Let {C_j}_{j∈J} be some appropriate family of linear codes over F^{m′+n′}.

- Setup: from ROLE(F)^{n′}, Alice holds (a[n′], b[n′]) and Bob holds (x[n′], z[n′]), where z_i = a_i·x_i + b_i.
- Bob samples j ←$ J and r[−m′, n′] ∼ C_j, and sends j together with m_i = r_i + x_i for all i ∈ {1, …, n′}.
- Alice samples u[−m′, n′] ∼ C_j and v[−m′, n′] ∼ C_j ∗ C_j, and sends α_i = u_i − a_i and β_i = a_i·m_i + b_i + v_i for all i ∈ {1, …, n′}.
- Bob computes t_i = α_i·r_i + β_i − z_i for all i ∈ {1, …, n′}. (Indeed, t_i = (u_i − a_i)·r_i + a_i·(r_i + x_i) + b_i + v_i − (a_i·x_i + b_i) = u_i·r_i + v_i, so t lies on a codeword of C_j ∗ C_j.)
- Performing erasure recovery of C_j ∗ C_j on t[n′], Bob obtains t_k = u_k·r_k + v_k for all k ∈ {−m′, …, −1}.
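The protocol steps above can be exercised end to end in a toy simulation. As an assumption for illustration only, we instantiate the family {C_j} with a single Reed–Solomon code over the prime field F_101 (the paper uses a family of AG codes over constant-size fields, and we ignore the leakage phase entirely); erasure recovery of C ∗ C then becomes Lagrange interpolation:

```python
import random

P = 101                                   # toy prime field F_101
NP, MP, K = 5, 1, 2                       # n', m', dim of C (polynomials of deg < K)
POINTS = list(range(1, NP + MP + 1))      # evaluation points
HIDDEN, KNOWN = POINTS[:MP], POINTS[MP:]  # stand-ins for positions -m'..-1 and 1..n'

def rand_codeword(deg):
    """Sample a random RS codeword (deg < `deg`), as {position: value}."""
    cs = [random.randrange(P) for _ in range(deg)]
    return {e: sum(cf * pow(e, i, P) for i, cf in enumerate(cs)) % P for e in POINTS}

def interpolate(known, at):
    """Lagrange-interpolate through `known` (position -> value), evaluated at `at`."""
    total = 0
    for xi, yi in known.items():
        num = den = 1
        for xj in known:
            if xj != xi:
                num = num * (at - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Preprocessing: n' ROLE(F) samples with z = a*x + b.
a, b, x = ({e: random.randrange(P) for e in KNOWN} for _ in range(3))
z = {e: (a[e] * x[e] + b[e]) % P for e in KNOWN}

r = rand_codeword(K)                      # Bob:   r ~ C_j
u = rand_codeword(K)                      # Alice: u ~ C_j
v = rand_codeword(2 * K - 1)              # Alice: v ~ C_j * C_j
msg = {e: (r[e] + x[e]) % P for e in KNOWN}                   # Bob -> Alice: m_i
alpha = {e: (u[e] - a[e]) % P for e in KNOWN}                 # Alice -> Bob
beta = {e: (a[e] * msg[e] + b[e] + v[e]) % P for e in KNOWN}

# t_i = alpha_i*r_i + beta_i - z_i = u_i*r_i + v_i lies on a codeword of
# C_j * C_j, so Bob fills in the erased positions by interpolation.
t = {e: (alpha[e] * r[e] + beta[e] - z[e]) % P for e in KNOWN}
for k in HIDDEN:
    assert interpolate(t, k) == (u[k] * r[k] + v[k]) % P  # fresh ROLE sample
```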

SLIDES 45–48

Our Suitable Family of Codes: the Key

Let {C_j}_{j∈J} be a family of linear codes of block length s ∈ N over a constant-size field F. For our ROLE-to-ROLE extractor to work, this family {C_j} needs the following properties:

1. Each code C_j is a multiplication-friendly good code: the rate and distance of C_j, of C_j^⊥, and of the Schur product code C_j ∗ C_j := span{c ∗ c′ : c, c′ ∈ C_j} are all Θ(s).
2. {C_j} is a small-bias family of distributions.

Key Technical Contribution: the construction of this family {C_j}_{j∈J}!
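In miniature, property 1 can be checked by brute force. The sketch below (an illustration of the property, not the paper's construction) verifies that a toy [6,2] Reed–Solomon code over F_7, its dual, and its Schur product code all have good distance:

```python
from itertools import product

P, S, K = 7, 6, 2                     # field F_7, block length, dim of C
PTS = list(range(1, S + 1))           # evaluation points

def rs_words(k):
    """All codewords {(f(e_1), ..., f(e_S)) : deg f < k} of an RS code."""
    return {tuple(sum(cf * pow(e, i, P) for i, cf in enumerate(cs)) % P
                  for e in PTS)
            for cs in product(range(P), repeat=k)}

def min_dist(words):
    return min(sum(1 for x in w if x) for w in words if any(w))

C = rs_words(K)
# Coordinate-wise (Schur) products of codewords land in the RS code of
# degree < 2K-1, so the Schur product code (their span) is that RS code.
assert {tuple(p * q % P for p, q in zip(c1, c2))
        for c1 in C for c2 in C} <= rs_words(2 * K - 1)

gens = [tuple(pow(e, i, P) for e in PTS) for i in range(K)]  # generator rows of C
dual = [w for w in product(range(P), repeat=S)
        if all(sum(p * q for p, q in zip(w, g)) % P == 0 for g in gens)]

# All three codes are MDS here, so distance = s - dim + 1: 5, 4, and 3.
assert (min_dist(C), min_dist(rs_words(2 * K - 1)), min_dist(dual)) == (5, 4, 3)
```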

SLIDES 49–51

Small-Bias Family of Distributions

Our goal is for {C_j} to be a family of pseudorandom distributions against linear tests. Any S ∈ F^s defines the linear test L_S(x) := x_1·S_1 + ⋯ + x_s·S_s for x ∈ F^s.

Consider the distribution D_S: sample j ←$ J and c ∼ C_j, and output L_S(c).

If {C_j} is ρ-biased, then SD(D_S, U_F) ≤ ρ, and we say that {C_j} ρ-fools L_S. In fact, {C_j} ρ-fools all linear tests.

SLIDES 52–55

Small-Bias Family of Distributions

We emphasize that a single linear code cannot fool all linear tests. For any linear code C ⊆ F^s and linear test L_S, if we sample c ←$ C, then

L_S(c) = 0 if S ∈ C^⊥, and L_S(c) ∼ U_F if S ∉ C^⊥.

Key insight: a single code cannot fool every linear test, but an appropriate family of linear codes can fool every linear test.

Intuition: given this family, a fixed S is unlikely to be in the dual of a randomly chosen code.
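The dichotomy above is easy to verify exhaustively for a toy code over GF(2). The [4,2] code and the two test vectors below are our own illustrative choices:

```python
from itertools import product

G = [(1, 0, 1, 1), (0, 1, 1, 0)]   # generator of a toy [4,2] code over GF(2)

def codewords(gen):
    """Enumerate all codewords spanned by the generator rows."""
    words = set()
    for bits in product((0, 1), repeat=len(gen)):
        words.add(tuple(sum(b * g[i] for b, g in zip(bits, gen)) % 2
                        for i in range(len(gen[0]))))
    return sorted(words)

def linear_test(S, c):
    return sum(si * ci for si, ci in zip(S, c)) % 2

C = codewords(G)
S_dual = (1, 1, 1, 0)     # orthogonal to both generator rows, i.e. in C-dual
S_other = (1, 0, 0, 0)    # not in C-dual

assert all(linear_test(S_dual, c) == 0 for c in C)    # S in C-dual: constantly 0
vals = [linear_test(S_other, c) for c in C]
assert vals.count(0) == vals.count(1) == len(C) // 2  # otherwise: perfectly uniform
```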

SLIDES 56–58

Code Construction: Multiplication Friendly

First, we demonstrate how to construct a single code C∗ such that C∗, (C∗)^⊥, and C∗ ∗ C∗ have distance and rate Θ(s). There are explicit constructions of such multiplication-friendly codes: Algebraic Geometric (AG) codes [Gop81, GS96, CC06]. We carefully choose the parameters of the AG code C∗ in our construction using Garcia–Stichtenoth curves [GS96] over constant-size finite fields F.

SLIDES 59–62

Code Construction: Small-bias Family ("Twist-then-Permute")

Fix our multiplication-friendly AG code C∗. Let λ ∈ (F^×)^s. We define the λ-twist of the code C∗ as

C∗ ∋ (c_1, …, c_s) ↦ (λ_1·c_1, …, λ_s·c_s) ∈ C∗_λ.

Since λ has no 0 entries, the rate and distance of C∗_λ are the same as those of C∗.

Let π: {1, …, s} → {1, …, s} be any permutation. We define the π-permutation of the code C∗_λ as

C∗_λ ∋ (λ_1·c_1, …, λ_s·c_s) ↦ (λ_{π(1)}·c_{π(1)}, …, λ_{π(s)}·c_{π(s)}) ∈ C∗_{π,λ}.

Permuting the coordinates of C∗_λ does not change its rate or distance.
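A quick sanity check of the claims above, with a toy [4,2] code over F_5 standing in for the AG code C∗: twisting by nonzero scalars and permuting coordinates preserve Hamming weights, hence distance (and trivially rate):

```python
import itertools
import random

P = 5
G = [(1, 0, 1, 2), (0, 1, 3, 1)]   # toy [4,2] generator matrix over F_5

def codewords(gen):
    words = []
    for coeffs in itertools.product(range(P), repeat=len(gen)):
        words.append(tuple(sum(a * g[i] for a, g in zip(coeffs, gen)) % P
                           for i in range(len(gen[0]))))
    return words

def twist_then_permute(word, lam, pi):
    # lambda-twist (scale coordinate i by lam[i]), then pi-permutation.
    twisted = [l * c % P for l, c in zip(lam, word)]
    return tuple(twisted[pi[i]] for i in range(len(word)))

def weight(w):
    return sum(1 for c in w if c != 0)

s = 4
lam = [random.randrange(1, P) for _ in range(s)]  # entries in F_5^x (nonzero)
pi = random.sample(range(s), s)                   # a random permutation

for c in codewords(G):
    assert weight(twist_then_permute(c, lam, pi)) == weight(c)
```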

SLIDES 63–65

Code Construction: Small-bias Family

Let J = {(π, λ)} for all permutations π: {1, …, s} → {1, …, s} and all λ ∈ (F^×)^s.

Theorem (Our Code Construction).
The family of linear codes {C∗_j}_{j∈J} over F^s, where |F| = q is constant, is a family of multiplication-friendly good codes, and is a 2^{−δ}-bias family of distributions for δ = Θ(s).

Notes: The parameter δ depends on the dual distance d^⊥; a better d^⊥ yields a smaller bias!

SLIDES 66–73

Showing Small-bias: High Level Idea

We give our key observation towards demonstrating that the family {C∗_j} is a family of small-bias distributions.

Fix S ∈ F^s with S ≠ 0^s, and draw (π, λ) ←$ J. Draw x ∼ C∗_{π,λ} and consider L_S(x):

L_S(x) = Σ_{i=1}^{s} x_i·S_i
       = Σ_{i=1}^{s} (c_{π(i)}·λ_{π(i)})·S_i
       = Σ_{i=1}^{s} (c_i·λ_i)·S_{π^{-1}(i)}
       = Σ_{i=1}^{s} c_i·(S_{π^{-1}(i)}·λ_i)
       = Σ_{i=1}^{s} c_i·T_i = L_T(c).

Here T ←$ F^s such that wt(T) = wt(S), and c ∼ C∗; that is, T_i := S_{π^{-1}(i)}·λ_i.
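The chain of equalities can be checked numerically. In the sketch below (toy parameters over F_7, with an arbitrary random vector standing in for the codeword c ∼ C∗), T is computed as T_i = S_{π^{-1}(i)}·λ_i:

```python
import random

P, s = 7, 6
c = [random.randrange(P) for _ in range(s)]       # stand-in for a codeword of C*
lam = [random.randrange(1, P) for _ in range(s)]  # lambda in (F^x)^s
pi = random.sample(range(s), s)                   # random permutation
S = [random.randrange(P) for _ in range(s)]       # linear test vector

x = [lam[pi[i]] * c[pi[i]] % P for i in range(s)] # x ~ C*_{pi,lambda}

pi_inv = [0] * s                                  # invert the permutation
for i, p in enumerate(pi):
    pi_inv[p] = i
T = [S[pi_inv[i]] * lam[i] % P for i in range(s)] # induced test T

L_S_x = sum(xi * si for xi, si in zip(x, S)) % P
L_T_c = sum(ci * ti for ci, ti in zip(c, T)) % P
assert L_S_x == L_T_c                             # L_S(x) = L_T(c)
```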

SLIDES 74–76

Conclusions

Contribution I: There exists a correlation extractor that
- uses n/2 independent samples of ROT,
- produces Θ(n) secure, independent OTs,
- is resilient to Θ(n) bits of leakage,
- has 2^{−Θ(n)} security, and
- uses only 2 messages.

Contribution II: There exists a family of linear codes such that
- each code in the family is a multiplication-friendly good code,
- the Schur product code of each code in the family is a multiplication-friendly good code, and
- the family is a small-bias family of distributions.

Thank You!