SLIDE 1

IMPROVING THE LINEAR PROGRAMMING TECHNIQUE IN THE SEARCH FOR LOWER BOUNDS IN SECRET SHARING

ORIOL FARRÀS1 TARIK KACED2 SEBASTIÀ MARTÍN3 CARLES PADRÓ3

1UNIVERSITAT ROVIRA I VIRGILI, TARRAGONA, SPAIN 2SORBONNE UNIVERSITÉ, LIP6, PARIS, FRANCE 3UNIVERSITAT POLITÈCNICA DE CATALUNYA, BARCELONA, SPAIN

EUROCRYPT 2018

SLIDE 2

Program

1

Introduction

2

Definition of Secret Sharing

3

Lower Bounds on the Information Ratio

4

Improving the LP technique

SLIDE 3

1

Introduction

2

Definition of Secret Sharing

3

Lower Bounds on the Information Ratio

4

Improving the LP technique

SLIDE 4

Secret Sharing Scheme: a method to protect a secret

SLIDE 5

Secret Sharing Scheme: a method to protect a secret

[Figure: a secret and five participants P1, . . . , P5]

SLIDE 6

Secret Sharing Scheme: a method to protect a secret

[Figure: the dealer splits the secret into shares s1, . . . , s5, one for each participant]

SLIDE 7

Secret Sharing Scheme: a method to protect a secret

[Figure: each participant P1, . . . , P5 holds its share]

SLIDE 8

Secret Sharing Scheme: a method to protect a secret

[Figure: an authorized subset of participants pools its shares and recovers the secret]

SLIDE 9

Secret Sharing Scheme: a method to protect a secret

[Figure: a forbidden subset of participants obtains no information about the secret]
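The threshold schemes cited on the next slide realize exactly this picture. A minimal Python sketch of a 3-out-of-5 Shamir scheme (an illustration, not material from the talk; the prime and parameters are arbitrary choices):

```python
# Illustrative sketch of Shamir's (3,5)-threshold scheme: any 3 of the 5
# shares reconstruct the secret; any 2 reveal nothing about it.
import random

P = 2**61 - 1  # an arbitrary prime; all arithmetic is in GF(P)

def share(secret, t=3, n=5):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # The share of participant i is the random polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345)
assert reconstruct(shares[:3]) == 12345           # an authorized subset
assert reconstruct(random.sample(shares, 3)) == 12345
```

Any 3 points determine the degree-2 polynomial; any 2 points are consistent with every possible secret, which is the "no information" property pictured above.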

SLIDE 10

Secret Sharing Schemes: Overview

Shamir’79, Blakley’79, Ito Saito Nishizeki’87.

SLIDE 11

Secret Sharing Schemes: Overview

Shamir’79, Blakley’79, Ito Saito Nishizeki’87. Unconditionally secure.

SLIDE 12

Secret Sharing Schemes: Overview

Shamir’79, Blakley’79, Ito Saito Nishizeki’87. Unconditionally secure. A cryptographic primitive with many applications: secure multiparty computation, threshold cryptography, access control, attribute-based encryption, oblivious transfer, ...

SLIDE 13

Secret Sharing Schemes: Overview

Shamir’79, Blakley’79, Ito Saito Nishizeki’87. Unconditionally secure. A cryptographic primitive with many applications: secure multiparty computation, threshold cryptography, access control, attribute-based encryption, oblivious transfer, ...

Need for efficient schemes: shares have to be small.

SLIDE 14

1

Introduction

2

Definition of Secret Sharing

3

Lower Bounds on the Information Ratio

4

Improving the LP technique

SLIDE 15

Shannon Entropy

Unconditionally secure schemes: security is based on information theory. We see the secret and the shares as random variables, and security is defined in terms of Shannon entropy:

SLIDE 16

Shannon Entropy

Unconditionally secure schemes: security is based on information theory. We see the secret and the shares as random variables, and security is defined in terms of Shannon entropy:

The Shannon entropy of a discrete random variable X on E is H(X) = −∑_{x∈E} p(x) log2 p(x). If X1, . . . , Xn are discrete random variables and A = {i1, . . . , ir} ⊆ [n], we write H(XA) = H(Xi1, . . . , Xir) for the entropy of the joint variable.

SLIDE 17

Shannon Entropy

Unconditionally secure schemes: security is based on information theory. We see the secret and the shares as random variables, and security is defined in terms of Shannon entropy:

The Shannon entropy of a discrete random variable X on E is H(X) = −∑_{x∈E} p(x) log2 p(x). If X1, . . . , Xn are discrete random variables and A = {i1, . . . , ir} ⊆ [n], we write H(XA) = H(Xi1, . . . , Xir) for the entropy of the joint variable. Also, H(X) approximates the minimum average length of a binary code for X.
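The entropy definition on this slide transcribes directly into code; a minimal sketch:

```python
# Shannon entropy H(X), in bits, of a discrete random variable given as a
# probability mass function {outcome: probability}.
from math import log2

def H(pmf):
    # Terms with p(x) = 0 contribute nothing (0 * log 0 := 0 by convention).
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

# A uniform variable on 4 outcomes has entropy log2(4) = 2 bits.
print(H({a: 0.25 for a in "abcd"}))  # → 2.0
```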

SLIDE 18

Definition of Secret Sharing

Definition. A secret sharing scheme on the set P = {1, . . . , n} is a collection of discrete random variables Σ = (S0, S1, . . . , Sn) such that H(S0) > 0 and H(S0|SP) = 0.

SLIDE 19

Definition of Secret Sharing

Definition. A secret sharing scheme on the set P = {1, . . . , n} is a collection of discrete random variables Σ = (S0, S1, . . . , Sn) such that H(S0) > 0 and H(S0|SP) = 0.

A ⊆ P is authorized if H(S0|SA) = 0. A ⊆ P is forbidden if H(S0|SA) = H(S0). We only consider perfect schemes, in which every subset is either authorized or forbidden.
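A toy check of these definitions (the one-time-pad style scheme S2 = S0 xor S1 is our own illustration, not material from the talk):

```python
# Classify subsets A as authorized (H(S0|SA) = 0) or forbidden
# (H(S0|SA) = H(S0)) for the scheme S1 uniform bit, S2 = S0 xor S1.
from itertools import product
from math import log2

def H(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

# Joint distribution of (S0, S1, S2): S0, S1 uniform independent bits.
joint = {(s0, s1, s0 ^ s1): 0.25 for s0, s1 in product((0, 1), repeat=2)}

def marginal(indices):
    """Entropy of the joint variable (S_i)_{i in indices}."""
    pmf = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in indices)
        pmf[key] = pmf.get(key, 0) + p
    return H(pmf)

def classify(A):
    cond = marginal([0] + list(A)) - marginal(list(A))  # H(S0 | S_A)
    # Exact float comparisons are safe here: all probabilities are dyadic.
    if cond == 0:
        return "authorized"
    if cond == marginal([0]):
        return "forbidden"
    return "neither (non-perfect)"

assert classify([1, 2]) == "authorized"
assert classify([1]) == classify([2]) == "forbidden"
```

Both shares together determine S0, while each share alone is a uniform bit independent of S0, so the scheme is perfect with access structure Γ = {{1, 2}}.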

SLIDE 20

Access Structures and Linear Schemes

The access structure Γ of Σ is the family of authorized subsets. It is monotone increasing: if A ∈ Γ and A ⊆ B, then B ∈ Γ. Every monotone increasing family of subsets admits a secret sharing scheme.

SLIDE 21

Access Structures and Linear Schemes

The access structure Γ of Σ is the family of authorized subsets. It is monotone increasing: if A ∈ Γ and A ⊆ B, then B ∈ Γ. Every monotone increasing family of subsets admits a secret sharing scheme.

Definition. A scheme Σ = (S0, S1, . . . , Sn) is F-linear if it is determined by an F-linear mapping Π : F^ℓ → F^{ℓ0} × · · · × F^{ℓn}, taking the uniform probability distribution on F^ℓ.

SLIDE 22

Information Ratio

Measures of efficiency: the size of the shares (information ratio, average information ratio, length of the shares, ...) and the cost of sharing and reconstructing the secret.

SLIDE 23

Information Ratio

Measures of efficiency: the size of the shares (information ratio, average information ratio, length of the shares, ...) and the cost of sharing and reconstructing the secret.

Definition. The information ratio of Σ = (S0, S1, . . . , Sn) is σ(Σ) = max_i H(Si) / H(S0).

SLIDE 24

Information Ratio

Measures of efficiency: the size of the shares (information ratio, average information ratio, length of the shares, ...) and the cost of sharing and reconstructing the secret.

Definition. The information ratio of Σ = (S0, S1, . . . , Sn) is σ(Σ) = max_i H(Si) / H(S0).

Definition. For every access structure Γ: σ(Γ) is the infimum of the information ratios of the secret sharing schemes for Γ (the optimal information ratio); λ(Γ) is the infimum of the information ratios of the linear secret sharing schemes for Γ.

SLIDE 25

General Results on the Information Ratio

σ(Γ) ≤ λ(Γ). σ(Γ) ≥ 1. σ(Γ) = 2^{O(n)} (Ito Saito Nishizeki’87, Benaloh Leichter’88, Liu Vaikuntanathan’18).

There exists a family of access structures {Γn}n≥1 with σ(Γn) = Ω(n / log n) (Csirmaz’97).

SLIDE 26

Open Problem: improve the techniques for finding lower bounds on σ(Γ).

SLIDE 27

Open Problem: improve the techniques for finding lower bounds on σ(Γ).

Also lower bounds on λ(Γ), σ̃(Γ) (the average optimal information ratio), and λ̃(Γ) (the average optimal information ratio of linear secret sharing schemes).

SLIDE 28

1

Introduction

2

Definition of Secret Sharing

3

Lower Bounds on the Information Ratio

4

Improving the LP technique

SLIDE 29

Towards an LP Problem

Let Q = P ∪ {0} = {0, . . . , n}. If Σ = (S0, . . . , Sn) is a scheme with access structure Γ, then the map f : P(Q) → R, X ↦ H(SX)/H(S0), satisfies the following properties:

(P1) f(∅) = 0
(P2) f(X) ≤ f(Y) for every X ⊆ Y ⊆ Q
(P3) f(X ∪ Y) + f(X ∩ Y) ≤ f(X) + f(Y) for every X, Y ⊆ Q
(N) f({0}) = 1
(Γ1) f(X ∪ {0}) = f(X) if X ∈ Γ
(Γ2) f(X ∪ {0}) = f(X) + 1 if X ∉ Γ

We will consider the vector (f(X))X⊆Q ∈ R^{P(Q)}.
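These six properties can be checked numerically on a concrete scheme; a sketch using the toy one-time-pad style scheme (S1 a uniform bit, S2 = S0 xor S1, Γ = {{1, 2}} — our own example, not one from the talk):

```python
# Verify that f(X) = H(S_X)/H(S_0) satisfies (P1)-(P3), (N), (Γ1), (Γ2)
# for the scheme S2 = S0 xor S1 with access structure Γ = {{1, 2}}.
from itertools import product, chain, combinations
from math import log2

joint = {(s0, s1, s0 ^ s1): 0.25 for s0, s1 in product((0, 1), repeat=2)}

def Hjoint(X):
    """Entropy of the joint variable S_X, X a frozenset of indices in Q."""
    pmf = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in sorted(X))
        pmf[key] = pmf.get(key, 0) + p
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

Q = (0, 1, 2)
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(Q, r) for r in range(4))]
f = {X: Hjoint(X) / Hjoint(frozenset({0})) for X in subsets}
Gamma = [frozenset({1, 2})]  # authorized subsets of P = {1, 2}

assert f[frozenset()] == 0                                            # (P1)
assert all(f[X] <= f[Y] for X in subsets for Y in subsets if X <= Y)  # (P2)
assert all(f[X | Y] + f[X & Y] <= f[X] + f[Y] + 1e-9
           for X in subsets for Y in subsets)                         # (P3)
assert f[frozenset({0})] == 1                                         # (N)
for A in [X for X in subsets if 0 not in X]:
    expected = f[A] if A in Gamma else f[A] + 1              # (Γ1) / (Γ2)
    assert f[A | {0}] == expected
```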

SLIDE 30

LP Technique (I)

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3).

SLIDE 31

LP Technique (I)

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3).

Specifically, we consider the following LP problem: minimize v subject to v ≥ f(x) for every x ∈ P, where (f(X))X⊆Q ∈ R^{P(Q)} is the vector defined by a function f satisfying (N), (Γ1), (Γ2), (P1), (P2), (P3).
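The LP problem above can be instantiated and solved directly; a sketch, assuming scipy is available, for the (2,3)-threshold access structure on P = {1, 2, 3} (our own example):

```python
# Solve the κ(Γ) LP for the (2,3)-threshold structure. Subsets of
# Q = {0,1,2,3} are encoded as 4-bit masks (bit 0 = the dealer); the
# 17th variable is v = max_x f(x).
import numpy as np
from scipy.optimize import linprog

NV = 17                       # f(X) for the 16 subsets, plus v
c = np.zeros(NV); c[16] = 1   # minimize v

A_eq, b_eq, A_ub, b_ub = [], [], [], []
def row(coeffs):
    r = np.zeros(NV)
    for idx, co in coeffs: r[idx] += co
    return r
def eq(coeffs, rhs): A_eq.append(row(coeffs)); b_eq.append(rhs)
def ub(coeffs, rhs): A_ub.append(row(coeffs)); b_ub.append(rhs)  # sum <= rhs

authorized = lambda A: bin(A).count("1") >= 2  # A is a mask over bits 1..3

eq([(0b0000, 1)], 0)                 # (P1) f(∅) = 0
eq([(0b0001, 1)], 1)                 # (N)  f({0}) = 1
for A in range(0, 16, 2):            # subsets of P (bit 0 clear)
    delta = 0 if authorized(A) else 1
    eq([(A | 1, 1), (A, -1)], delta)              # (Γ1) / (Γ2)
for X in range(16):
    for y in range(4):
        if not X & (1 << y):
            ub([(X, 1), (X | (1 << y), -1)], 0)   # (P2) monotone
for X in range(16):
    for Y in range(16):
        ub([(X | Y, 1), (X & Y, 1), (X, -1), (Y, -1)], 0)  # (P3)
for x in range(1, 4):
    ub([(1 << x, 1), (16, -1)], 0)                # v >= f({x})

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * NV)
print(round(res.fun, 6))  # → 1.0, matching κ = σ = 1 for this ideal structure
```

The optimum 1 is consistent with Shamir's scheme: thresholds are ideal, so κ(Γ) = σ(Γ) = 1 here.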

SLIDE 32

LP Technique (II)

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3).

The optimal value of this LP problem is, by definition, κ(Γ).

SLIDE 33

LP Technique (II)

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3).

The optimal value of this LP problem is, by definition, κ(Γ). κ(Γ) was introduced by Martí-Farré Padró’10. The function f defined from a scheme Σ is a feasible solution of the LP problem, so κ(Γ) ≤ σ(Γ). It is the best lower bound on σ(Γ) that can be obtained from the Shannon information inequalities on H(SX):

H(SX) ≤ H(SY) for every X ⊆ Y ⊆ Q
H(SX∩Y) + H(SX∪Y) ≤ H(SX) + H(SY) for every X, Y ⊆ Q

SLIDE 34

Applications of the LP Technique

If Q is small, the LP problem can be solved (F. et al.’12, Martí-Farré Padró Vázquez’11, Padró Vázquez Yang’13).
SLIDE 35

Applications of the LP Technique

If Q is small, the LP problem can be solved (F. et al.’12, Martí-Farré Padró Vázquez’11, Padró Vázquez Yang’13).

If Q is big, it is still possible to find useful lower bounds on κ(Γ) by selecting some constraints from (N), (Γ1), (Γ2), (P1), (P2), (P3). Every feasible solution of the dual LP problem provides a lower bound on κ(Γ).

Capocelli et al.’93, van Dijk’95, Jackson Martin’96, Blundo et al.’97...

Csirmaz’97: For every n, there exists an access structure Γn on n participants such that κ(Γn) = Ω(n / log n).
SLIDE 36

Limitations of the LP Technique (I)

In general, κ is not tight: κ(Γ) ≤ n (Csirmaz’97).

SLIDE 37

Limitations of the LP Technique (I)

In general, κ is not tight: κ(Γ) ≤ n (Csirmaz’97). The method was improved by adding non-Shannon information inequalities to the LP problem: inequalities on H that are not derived from the Shannon ones.

Zhang-Yeung’98: 2I(C; D) ≤ I(A; B) + I(A; C, D) + 3I(C; D|A) + I(C; D|B), where I(X; Y|Z) = H(XZ) + H(YZ) − H(XYZ) − H(Z).
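Being an information inequality, Zhang-Yeung’98 holds for every joint distribution of four random variables; a sketch that checks it on one concrete distribution (the distribution is our own choice for illustration):

```python
# Numerically check the Zhang-Yeung inequality
#   2I(C;D) <= I(A;B) + I(A;C,D) + 3I(C;D|A) + I(C;D|B)
# on the distribution: A, B uniform independent bits, C = A & B, D = A ^ B.
from itertools import product
from math import log2

joint = {(a, b, a & b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)}

def H(indices):
    """Entropy of the joint variable indexed by `indices` (tuple over 0..3)."""
    pmf = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in indices)
        pmf[key] = pmf.get(key, 0) + p
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def I(X, Y, Z=()):
    """Conditional mutual information I(X;Y|Z) = H(XZ)+H(YZ)-H(XYZ)-H(Z)."""
    return H(X + Z) + H(Y + Z) - H(X + Y + Z) - H(Z)

A, B, C, D = (0,), (1,), (2,), (3,)
lhs = 2 * I(C, D)
rhs = I(A, B) + I(A, C + D) + 3 * I(C, D, A) + I(C, D, B)
assert lhs <= rhs + 1e-9  # the inequality holds, as it must
```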

SLIDE 38

Limitations of the LP Technique (II)

By adding non-Shannon inequalities, it was possible to obtain better lower bounds

(Beimel Livne Padró’08, Metcalf-Burton’11, Padró Vázquez Yang’13)

SLIDE 39

Limitations of the LP Technique (II)

By adding non-Shannon inequalities, it was possible to obtain better lower bounds (Beimel Livne Padró’08, Metcalf-Burton’11, Padró Vázquez Yang’13).

But the method still has limitations: there are infinitely many information inequalities in 4 random variables (Matúš’07), and there are negative results about the power of non-Shannon information inequalities (Beimel Orlov’10, Martín Padró Yang’13).

SLIDE 40

Limitations of the LP Technique (II)

By adding non-Shannon inequalities, it was possible to obtain better lower bounds (Beimel Livne Padró’08, Metcalf-Burton’11, Padró Vázquez Yang’13).

But the method still has limitations: there are infinitely many information inequalities in 4 random variables (Matúš’07), and there are negative results about the power of non-Shannon information inequalities (Beimel Orlov’10, Martín Padró Yang’13).

Our goal: overcome these limitations with a new approach.

SLIDE 41

1

Introduction

2

Definition of Secret Sharing

3

Lower Bounds on the Information Ratio

4

Improving the LP technique

SLIDE 42

Non-Shannon Information Inequalities

All known non-Shannon-type information inequalities are obtained in this way (Dougherty Freiling Zeger’11):

1. Start with a set of arbitrary random variables S0, . . . , Sn.
2. Add auxiliary random variables Sn+1, . . . , Sm with special properties, using the Copy lemma or the Ahlswede-Körner (AK) lemma.
3. Apply known information inequalities to the enlarged set of random variables S0, . . . , Sm.

SLIDE 43

Our Approach for Bounding σ(Γ)

Let Γ be an access structure on P.

1. We consider the conditions (N), (Γ1), (Γ2) on subsets of P ∪ {0} = {0, 1, . . . , n}.
2. We add new participants {n + 1, . . . , m} whose random variables are guaranteed by the Ahlswede-Körner (AK) lemma. Their role in the access structure is not relevant (Γ is still on P).
3. We apply the Shannon information inequalities (P1), (P2), (P3) on the extended set {0, 1, . . . , n, . . . , m}. Implicitly, we get (new) information inequalities adapted to Γ.

SLIDE 44

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3), (AK).

The optimal value of this LP problem is a lower bound on σ(Γ). If n is small, we can solve the LP problem, obtaining better bounds on σ(Γ). For every n, feasible solutions of the dual LP problem provide bounds on σ(Γ).

SLIDE 45

Bounds on λ

For λ(Γ), we use the Common Information in a similar way. The Common Information property is satisfied by linear random variables, and it implies the Ingleton rank inequality.

Linear Programming Problem

Minimize max_{x∈P} f(x) subject to: f satisfies (N), (Γ1), (Γ2), (P1), (P2), (P3), (CI).

The optimal value of this LP problem is a lower bound on λ(Γ). Similar LP problems are used for σ̃(Γ) and λ̃(Γ).

SLIDE 46

Computing σ and λ

We tested the power of the new technique by trying to compute σ and λ for access structures with small n.

n = 4: solved by Stinson’92.
n = 5: 8/180 cases unsolved (Jackson Martin’96, ...).
n = 6, graphs: 9/112 cases unsolved (van Dijk’97, ...).
n = 7, some matroid ports (Beimel Livne Padró’08, ...).

For the unsolved cases: we computed λ (tight bounds and optimal constructions), obtained better bounds for σ, and found the smallest structures with κ < σ.

SLIDE 47

Conclusions and Open Problems

Main contributions: a new technique for computing bounds on σ and λ; accurate values for small n; also valid for large n.

Open problems and future directions: improve lower asymptotic bounds on σ(Γ); compute σ(Γ) for the unsolved cases with n = 5, 6, 7; apply the technique to similar optimization problems, like network coding.

SLIDE 48

Thank You