Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions (Crypto 2011), Daniele Micciancio and Petros Mol - PowerPoint PPT Presentation

SLIDE 1

Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions

Daniele Micciancio and Petros Mol
Crypto 2011, August 17, 2011

SLIDE 2

Learning With Errors (LWE)

Public: integers n (dimension of the secret), q (modulus), m (number of samples).
Secret: a vector s in Z_q^n.
Noise: a small error vector e, with entries drawn from a known distribution.

Given a random m x n matrix A over Z_q together with

    b = A s + e  (mod q)

Goal: find s.
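The setup above can be sketched in a few lines. This is an illustrative toy instance with parameters chosen by me, not from the paper, and for simplicity the error is drawn uniformly from a small interval rather than from the discrete Gaussian typically used in the LWE literature.

```python
import numpy as np

def lwe_samples(n, m, q, noise_bound, rng):
    """Generate m LWE samples (A, b = A s + e mod q).

    Sketch only: the error is uniform on {-noise_bound, ..., noise_bound}
    instead of a discrete Gaussian.
    """
    A = rng.integers(0, q, size=(m, n))                       # public random matrix
    s = rng.integers(0, q, size=n)                            # secret vector
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)   # small error
    b = (A @ s + e) % q
    return A, b, s, e

rng = np.random.default_rng(0)
n, m, q, B = 8, 16, 97, 2
A, b, s, e = lwe_samples(n, m, q, noise_bound=B, rng=rng)
# search LWE: recover s from (A, b); decision LWE: tell (A, b) from uniform
```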

SLIDE 3

LWE Background

  • Introduced by Regev [R05]
  • q = 2, Bernoulli noise -> Learning Parity with Noise (LPN)
  • Extremely successful in Cryptography:
  • IND-CPA Public Key Encryption [Regev05]
  • Injective Trapdoor Functions / IND-CCA Encryption [PW08]
  • Strongly Unforgeable Signatures [GPV08, CHKP10]
  • (Hierarchical) Identity-Based Encryption [GPV08, CHKP10, ABB10]
  • Circular-Secure Encryption [ACPS09]
  • Leakage-Resilient Cryptography [AGV09, DGK+10, GKPV10]
  • (Fully) Homomorphic Encryption [GHV10, BV11b]
SLIDE 4

LWE: Search & Decision

Public parameters: n (size of the secret), m (number of samples), q (modulus), χ (error distribution).

Find (Search). Given: (A, b = As + e mod q). Goal: find s (or e).

Distinguish (Decision). Given: (A, b). Goal: decide whether b = As + e mod q or b is uniformly random.
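To make the search problem concrete, here is a brute-force solver for toy parameters. It is exponential in n and purely illustrative (real attacks use lattice reduction or combinatorial algorithms); the function name and parameters are mine, not the paper's.

```python
import itertools
import numpy as np

def brute_force_search_lwe(A, b, q, noise_bound):
    """Solve search LWE by exhausting all q^n candidate secrets.

    Returns any candidate s whose residual b - A s (mod q, centered)
    stays within the noise bound. Only viable for tiny parameters.
    """
    m, n = A.shape
    for cand in itertools.product(range(q), repeat=n):
        s = np.array(cand)
        r = (b - A @ s) % q
        r = np.where(r > q // 2, r - q, r)    # centered residue in (-q/2, q/2]
        if np.all(np.abs(r) <= noise_bound):
            return s                          # consistent with small error
    return None

rng = np.random.default_rng(1)
q, n, m, B = 17, 3, 12, 1
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-B, B + 1, size=m)
b = (A @ s + e) % q
recovered = brute_force_search_lwe(A, b, q, B)
```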

SLIDE 6

Search-to-Decision reductions (S-to-D)

Why do we care?

  • all LWE-based constructions rely on decisional LWE
  • strong indistinguishability flavor of security definitions
  • S-to-D reductions give: "Primitive Π is ABC-secure assuming search problem P is hard"
  • the hardness of search problems is better understood

SLIDE 7

Our results

  • Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions
  • Use known techniques from Fourier analysis in a new context; ideas potentially useful elsewhere
  • Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise
  • Subsume and extend previously known reductions
  • Reductions are in addition sample-preserving
SLIDE 9

Bounded knapsack functions over groups

Parameters
  • integer m
  • finite abelian group G
  • set S = {0, …, s - 1} of integers, s: poly(m)

(Random) Knapsack family
Sampling: pick g = (g1, …, gm) with each gi uniform in G.
Evaluation: f_g(x) = x1·g1 + … + xm·gm in G, for x in S^m.

Example: (random) modular subset sum: G = Z_q, S = {0, 1}, f_g(x) = Σ xi·gi (mod q)
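A minimal sketch of this family for the modular subset-sum example (G = Z_q, S = {0, 1}); the helper names and parameters are mine, for illustration only.

```python
import numpy as np

def sample_knapsack(m, q, rng):
    """Sample a random knapsack over G = Z_q: m uniform group elements."""
    return rng.integers(0, q, size=m)

def eval_knapsack(g, x, q):
    """f_g(x) = sum_i x_i * g_i in Z_q, for an input x in S^m."""
    return int(np.dot(g, x) % q)

rng = np.random.default_rng(2)
m, q, s = 10, 101, 2                 # s = 2 gives S = {0, 1}: modular subset sum
g = sample_knapsack(m, q, rng)
x = rng.integers(0, s, size=m)
y = eval_knapsack(g, x, q)
# search: given (g, y), find x; decision: tell (g, f_g(x)) from (g, uniform)
```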

SLIDE 10

Knapsack functions: Computational problems

Notation: a family of knapsacks over G with a distribution over the inputs; the description g is public.

Invert (search)
Input: (g, f_g(x)). Goal: find x.

Distinguish (decision)
Input: samples from either (g, f_g(x)) or (g, u) with u uniform over G. Goal: label the samples.

Glossary: if the decision problem is hard, the function is pseudorandom (a PRG); if the search problem is hard, the function is One-Way.

SLIDE 11

Search-to-Decision: Known results

Decision is as hard as search when…

[Fischer, Stern 96]: syndrome decoding; vector group, input uniform over all m-bit vectors with Hamming weight w.
[Impagliazzo, Naor 89]: (random) modular subset sum; cyclic group, input uniform over {0,1}^m.

SLIDE 14

Our contribution: S-to-D for general knapsack

Setting: a knapsack family with range G and input distribution over S^m, s: poly(m).

Main Theorem: One-Way + [auxiliary PRG condition] => PRG

The extra condition is much less restrictive than it seems: in most interesting cases it holds in a strong information-theoretic sense.

SLIDE 15

S-to-D for general knapsack: Examples

One-Way => PRG for:
  • any group G and any distribution over S^m
  • any group G with prime exponent and any distribution
  • and many more…

Subsumes [IN89, FS96] and more, using known information-theoretic tools (LHL, entropy bounds, etc.)

SLIDE 16

Proof Sketch

Reminder

Distinguisher. Input: samples. Goal: distinguish (g, f_g(x)) from uniform.

Inverter. Input: (g, g·x). Goal: find x.

SLIDE 17

Proof Sketch: follows the outline of [IN89]

Inverter  <=  Predictor  <=  Distinguisher
      (step 1)       (step 2)

Inverter. Input: (g, g·x). Goal: find x.
Predictor. Input: (g, g·x, r). Goal: find x·r (mod t).
Distinguisher. Input: samples. Goal: distinguish.

Step 1: Goldreich–Levin replaced by general conditions for inverting given noisy predictions of x·r (mod t), for possibly composite t.
  • Tool: learning heavy Fourier coefficients of general functions [AGS03]

Step 2: Given a distinguisher, we get a predictor satisfying the general conditions of step 1. Proof significantly more involved than [IN89].

SLIDE 18

Our results

  • Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions
  • Use known techniques from Fourier analysis in a new context; ideas potentially useful elsewhere
  • Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise
  • Subsume and extend previously known reductions
  • Reductions are in addition sample-preserving
SLIDE 19

What about LWE?

Start from an LWE instance (A, b = As + e mod q), with A an m x n matrix, and let G be the parity-check matrix of the code generated by A, i.e. G·A = 0 (mod q). Then

    G·b = G·(As + e) = G·e  (mod q)

  • G is the parity-check matrix for the code generated by A
  • if A is "random", G is also "random"
  • the error e from LWE becomes the unknown input of the knapsack (g1, …, gm) given by the columns of G
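The parity-check step can be sketched as a left-null-space computation mod q. This sketch assumes q is prime (so every nonzero pivot is invertible); the helper name and the Gaussian-elimination approach are mine, not the paper's.

```python
import numpy as np

def nullspace_mod_q(B, q):
    """Rows spanning {v : B v = 0 (mod q)}, for prime q.

    Plain Gaussian elimination over Z_q; illustrative, not optimized.
    """
    B = B.copy() % q
    rows, cols = B.shape
    pivots, r = [], 0
    for c in range(cols):
        pivot_row = next((i for i in range(r, rows) if B[i, c]), None)
        if pivot_row is None:
            continue                              # free column
        B[[r, pivot_row]] = B[[pivot_row, r]]     # move pivot up
        inv = pow(int(B[r, c]), q - 2, q)         # Fermat inverse, q prime
        B[r] = (B[r] * inv) % q
        for i in range(rows):
            if i != r and B[i, c]:
                B[i] = (B[i] - B[i, c] * B[r]) % q
        pivots.append(c)
        r += 1
        if r == rows:
            break
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:                                # one basis vector per free column
        v = np.zeros(cols, dtype=np.int64)
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = (-B[i, f]) % q
        basis.append(v)
    return np.array(basis)

# Turn an LWE instance (A, b = As + e) into a knapsack in e alone.
rng = np.random.default_rng(3)
q, n, m = 97, 4, 10
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)
b = (A @ s + e) % q
G = nullspace_mod_q(A.T, q)    # parity check: G @ A == 0 (mod q)
# G @ b == G @ (A s + e) == G @ e (mod q): the secret s drops out.
```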

SLIDE 20

What about LWE?

The transformation works in the other direction as well. Putting all the pieces together:

    (A, As + e)  <=  (G, Ge)  <=  (G', G'e)  <=  (A', A's' + e)
       Search        Search       Decision        Decision

where the middle step is the S-to-D reduction for knapsacks.

SLIDE 21

LWE Implications

  • LWE S-to-D reductions follow from the corresponding knapsack reductions
  • All known Search-to-Decision results for LWE/LPN with bounded error [BFKL93, R05, ACPS09, KSS10] follow as a direct corollary
  • Search-to-Decision for new instantiations of LWE

SLIDE 22

LWE: Sample-Preserving S-to-D

If we can solve decision LWE given m samples, we can solve search LWE given m samples.

Previous reductions: search with m samples <= decision with poly(m) samples.
Ours: sample-preserving, search with m samples <= decision with m samples.

Caveat: the inverting probability goes down (this seems unavoidable).

SLIDE 23

Why care about #samples?

  • LWE-based schemes often expose a certain number of samples, say m
  • With a sample-preserving S-to-D we can base their security on the hardness of search LWE with m samples
  • Concrete algorithmic attacks against LWE [MR09, AG11] are sensitive to the number of exposed samples
  • For some parameters, LWE is completely broken by [AG11] if the number of given samples is above a certain threshold

SLIDE 24

Open problems

Sample-preserving reductions for:

  • 1. LWE with unbounded noise
  • used in various settings [Pei09, GKPV10, BV11b, BPR11]
  • some reductions known [Pei09] but not sample-preserving
  • 2. Ring-LWE
  • samples (a, a*s + e) where a, s, e are drawn from R = Z_q[x]/<f(x)>
  • non-sample-preserving reductions known [LPR10]
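To make the Ring-LWE sample shape concrete, here is a toy ring multiplication. It fixes f(x) = x^n + 1 (one common choice; the slide allows a general f) and uses small uniform noise; all parameter choices and names are mine, for illustration only.

```python
import numpy as np

def ring_mul(a, b, q):
    """Multiply a, b in R_q = Z_q[x] / <x^n + 1> (negacyclic convolution).

    Sketch only: assumes f(x) = x^n + 1, so x^n reduces to -1.
    """
    n = len(a)
    full = np.convolve(a, b)       # plain product, degree up to 2n - 2
    res = full[:n].copy()
    res[:n - 1] -= full[n:]        # fold back: coefficient of x^(n+i) -> -x^i
    return res % q

rng = np.random.default_rng(4)
n, q = 8, 97
a = rng.integers(0, q, size=n)     # ring element a
s = rng.integers(0, q, size=n)     # secret s
e = rng.integers(-2, 3, size=n)    # small error e
b = (ring_mul(a, s, q) + e) % q    # one Ring-LWE sample (a, b)
```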