Pseudorandom functions in almost constant depth from low-noise LPN - PowerPoint PPT Presentation

SLIDE 1

Pseudorandom functions in almost constant depth from low-noise LPN

Yu Yu, Cryptologic Research Center. Joint work with John Steinberger.

SLIDE 2

Outline

  • Introduction to LPN
      Decisional and computational LPN
      Asymptotic hardness of LPN
      Related work
      (Randomized) PRFs and PRGs
  • The road map
      Overview of the LPN-based randomized PRG in AC0(mod 2)
      Bernoulli noise extractor in AC0(mod 2)
      Bernoulli-like noise sampler in AC0
      Randomized PRG → randomized PRF
  • Conclusion and open problems

SLIDE 3

Learning Parity with Noise (LPN)

Challenger: A ←$ {0,1}^{q×n}, s ←$ {0,1}^n, e ← Ber_μ^q, y := A·s + e

  • Search LPN: given A and y, find s
  • Decisional LPN: distinguish (A, y) from (A, U_q)
  • [Blum et al. 94, Katz & Shin 06]: the two versions are (poly) equivalent
  • In fact: can use s ← Ber_μ^n instead of s ←$ {0,1}^n

[Figure: a q×n binary matrix A times the secret s, plus the noise vector e, equals y (mod 2)]

Ber_μ: Bernoulli distribution with noise rate 0 < μ < 1/2
  Pr[Ber_μ = 1] = μ, Pr[Ber_μ = 0] = 1 − μ
Ber_μ^q: q-fold repetition of Ber_μ
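The LPN experiment above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's construction; the parameters (n = 16, q = 32, μ = 0.125) are chosen arbitrarily for the example.

```python
import random

def sample_lpn(n, q, mu, rng):
    """Sample an LPN instance: matrix A, secret s, noise e, and y = A*s + e (mod 2)."""
    s = [rng.randint(0, 1) for _ in range(n)]               # secret s <- {0,1}^n
    A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(q)]
    e = [1 if rng.random() < mu else 0 for _ in range(q)]   # noise e <- Ber_mu^q
    y = [(sum(a * si for a, si in zip(row, s)) + ei) % 2    # noisy parities of s
         for row, ei in zip(A, e)]
    return A, s, e, y

rng = random.Random(0)
A, s, e, y = sample_lpn(n=16, q=32, mu=0.125, rng=rng)
# Without the noise bits e, each y_i would be the exact parity <a_i, s>,
# recoverable by Gaussian elimination; the noise is what makes LPN hard.
```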

SLIDE 4

Hardness of LPN

  • Worst-case hardness
      LPN (decoding a random linear code) is NP-hard.
  • Average-case hardness
      Constant noise μ = O(1):
        BKW (Blum, Kalai, Wasserman): t = q = 2^{O(n/log n)}
        Lyubashevsky's tradeoff: t = 2^{O(n/loglog n)}, q = n^{1+ε}
      Low noise μ = n^{−c} (for constant 0 < c < 1):
        t = 2^{O(n^{1−c})}, q = n + O(1)
  • Quantum resistance
SLIDE 5

Related Work

  • Public-key cryptography from LPN
      CPA PKE from low-noise LPN [Alekhnovich 03]
      CCA PKE from low-noise LPN [Döttling et al. 12, Kiltz et al. 14]
      CCA PKE from constant-noise LPN [Yu & Zhang 16]
  • Symmetric cryptography from LPN
      Pseudorandom generators [Blum et al. 93, Applebaum et al. 09]
      Authentication schemes [Hopper & Blum 01, Juels et al. 05, …]
        [Kiltz et al. 11, Dodis et al. 12, Lyubashevsky & Masny 13, Cash et al. 16]
      Perfectly binding string commitment scheme [Jain et al. 12]
      Pseudorandom functions from (low-noise) LPN? ← This work

SLIDE 6

Main results

  • Low-noise LPN implies
      Polynomial-stretch pseudorandom generators (PRGs) in AC0(mod 2)
      Pseudorandom functions (PRFs) in AC0(mod 2)

      AC0(mod 2): polynomial-size, O(1)-depth circuits with unbounded fan-in ∧, ∨, ⊕.

  • More about the PRGs/PRFs:
      weak seed/key of sublinear entropy & security ≈ LPN on a linear-size secret
      uniform seed/key of size λ & security up to 2^{O(λ/log λ)}
  • Technical tools:
      Bernoulli noise extractor in AC0(mod 2): Rényi entropy source → Bernoulli distribution
      Bernoulli-like noise sampler in AC0: uniform randomness → Bernoulli-like distribution
      Security-preserving and depth-preserving domain extender for PRFs

[Razborov & Rudich 94]: good PRFs do NOT exist in AC0(mod 2)
SLIDE 7

(Randomized) PRGs, PRFs and LPN

  • H_a: {0,1}^n × {0,1}^* → {0,1}^m (n < m) is a randomized PRG if
      (H_a(U_n), a) ~c (U_m, a)
  • F_{k,a}: {0,1}^n × {0,1}^* × {0,1}^l → {0,1}^m is a randomized PRF if for every PPT D
      | Pr[D^{F_{k,a}}(a) = 1] − Pr[D^R(a) = 1] | = negl(n)
      where R: {0,1}^l → {0,1}^m is a random function.
  • Can we obtain (randomized) PRGs and weak PRFs from LPN?
      Try eliminating the noise (like LWR from LWE):
        D(⟨a_1, s⟩, ⋯, ⟨a_j, s⟩), ⋯, D(⟨a_{q−j+1}, s⟩, ⋯, ⟨a_q, s⟩)
      where D(·) is deterministic, H_a(s) = D(A·s), F_s(a) = D(A·s)
      [Akavia et al. 14]: may not work!
  • Our approach: convert an entropy source x into Bernoulli noise
SLIDE 8

Overview: LPN-based randomized PRG

  • Input: (possibly weak) seed x and public coin A
  • Noise sampling: convert (almost all entropy of) x into Bernoulli-like noise (s, e)
  • Output: H_A(x) = A·s + e
  • Theorem: Assume that decisional LPN is (q = (1 + Ω(1))·n, t, ε)-hard
    on a secret of size n and noise rate μ = n^{−c} (0 < c < 1).
    Then H_A is a (t − poly(n), O(ε))-hard randomized PRG in AC0(mod 2) on
      a weak seed x of entropy O(n^{1−c} · log n), or
      a uniform seed x of size O(n^{1−c} · log n)

  x → [Bernoulli noise extractor/sampler] → s = (s_1, ⋯, s_n), e = (e_1, e_2, ⋯, e_q) → A·s + e
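The pipeline above can be sketched end to end in Python. A naive AND-of-blocks noise derivation (noise rate 2^{−j}) stands in for the extractor/sampler of the following slides, and the parameters (n = 8, q = 16, j = 3) are illustrative only; note the toy seed is longer than the output, so it exhibits no stretch, unlike the actual construction whose seed entropy is only O(n^{1−c}·log n).

```python
import random

def bits_to_noise(x, n, q, j):
    """Naive stand-in noise sampler: the first n seed bits form s; each noise
    bit e_i is the AND of j fresh seed bits, giving noise rate 2^-j."""
    s = x[:n]
    e = []
    pos = n
    for _ in range(q):
        e.append(1 if all(x[pos:pos + j]) else 0)
        pos += j
    return s, e

def H(A, x, n, q, j):
    """Randomized PRG H_A(x) = A*s + e (mod 2); A is the reusable public coin."""
    s, e = bits_to_noise(x, n, q, j)
    return [(sum(a * b for a, b in zip(row, s)) + ei) % 2
            for row, ei in zip(A, e)]

rng = random.Random(1)
n, q, j = 8, 16, 3
A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(q)]  # public coin
x = [rng.randint(0, 1) for _ in range(n + q * j)]              # seed
out = H(A, x, n, q, j)
```

Unlike the LPN sample on slide 3, the secret and noise here are a deterministic function of the seed x, so H_A can be re-evaluated on the same seed.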

SLIDE 9

Bernoulli Noise Extractor

  • Sample Ber_μ (μ = 2^{−j}): output AND(x_1, ⋯, x_j) = x_1·x_2 ⋯ x_j
  • For μ = n^{−c} (j = c·log n), Shannon entropy H(Ber_μ) ≈ μ·log(1/μ)
      λ random bits → λ/j = O(λ/log n) Bernoulli bits
      in theory: λ random bits → λ/H(Ber_μ) ≈ O(λ·n^c/log n) Bernoulli bits
      [Applebaum et al. 09]: x retains a lot of entropy given the sampled noise
  • The proposal: feed x through pairwise-independent hash functions
    h_1, h_2, ⋯, h_q (randomized by the public coin) and take the AND of each
    hash output: e_i = AND(h_i(x))
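The proposal can be sketched as follows. A GF(2) affine family h(x) = M·x + c serves as the pairwise-independent hashes (one standard choice; the slides do not fix the family), and the parameters (λ = 64, q = 200, j = 3) are illustrative only.

```python
import random

def pairwise_hash(rng, lam, j):
    """Draw a pairwise-independent hash {0,1}^lam -> {0,1}^j of the form
    h(x) = M*x + c over GF(2); (M, c) play the role of the public coin."""
    M = [[rng.randint(0, 1) for _ in range(lam)] for _ in range(j)]
    c = [rng.randint(0, 1) for _ in range(j)]
    def h(x):
        return [(sum(m * b for m, b in zip(row, x)) + ci) % 2
                for row, ci in zip(M, c)]
    return h

def extract_noise(rng, x, q, j):
    """e_i = AND of the j output bits of h_i(x): close to Ber_{2^-j} when x
    has enough Renyi entropy."""
    hs = [pairwise_hash(rng, len(x), j) for _ in range(q)]
    return [1 if all(h(x)) else 0 for h in hs]

rng = random.Random(2)
x = [rng.randint(0, 1) for _ in range(64)]  # the (possibly weak) source
e = extract_noise(rng, x, q=200, j=3)       # expected noise rate about 2^-3
```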

SLIDE 10

Bernoulli Noise Extractor (cont'd)

  • The extractor is in AC0(mod 2)
  • Theorem (informal): Let h_1, h_2, ⋯, h_q be pairwise-independent hash
    functions. For any source x of Rényi entropy λ and any constant 0 < Δ ≤ 1,
      Stat-Dist( (A, (e_1, ⋯, e_q)), (A, Ber_μ^q) ) < 2^{((1+Δ)·H(Ber_μ^q) − λ)/2} + 2^{−Δ²·μ·q/3}
  • Parameters: μ = n^{−c}, set q = Ω(n), λ = (1 + 2Δ)·H(Ber_μ^q) = Ω(n^{1−c} · log n)
  • PRG's stretch:
      output length / input length = (q − n)/λ = n^{Ω(1)}
  • Proof: Cauchy-Schwarz + pairwise independence + flattening Shannon entropy
      like the crooked LHL [Dodis & Smith 05]; Chernoff bound [HILL 99]

SLIDE 11

An alternative: Bernoulli noise sampler

  • Use uniform randomness (instead of a weak source), and do it in AC0 (instead of AC0(mod 2))
  • The idea: take the bitwise OR of 2μq copies of random Hamming-weight-1 distributions
  • The resulting distribution (denoted ψ_μ^q) needs 2μq·log q uniform random bits
  • Asymptotically optimal: for μ = n^{−c}, q = poly(n), 2μq·log q = O(H(Ber_μ^q))
  • PRG: H_A(x) = A·s + e by sampling (s, e) ← ψ_μ^{n+q} from uniform x
  • Theorem: H_A is a randomized PRG of seed length O(n^{1−c}·log n) with
    comparable security to the underlying standard LPN of secret size n.
  • Proof. (1) computational LPN → computational ψ_μ^{n+q}-LPN
           (2) computational ψ_μ^{n+q}-LPN → decisional ψ_μ^{n+q}-LPN
               sample-preserving reduction by [Applebaum et al. 07]

[Figure: 2μq strings of length q, each with a single 1, e.g.
  0001000000000000000
  0000000100000000000
  …
  0000000000000001000
  their bitwise OR gives one Bernoulli-like sample: 0001000100000001000]
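The OR-of-weight-1-strings idea can be sketched as follows; the parameters (q = 64, μ = 1/16) are illustrative only.

```python
import random

def psi_sample(rng, q, mu):
    """Bernoulli-like sampler psi_mu^q: bitwise OR of 2*mu*q random
    Hamming-weight-1 strings of length q. Each string consumes only log(q)
    uniform bits (one position), so the whole sample needs 2*mu*q*log(q) bits."""
    out = [0] * q
    for _ in range(int(2 * mu * q)):
        out[rng.randrange(q)] = 1   # the single 1 of one weight-1 string
    return out

rng = random.Random(3)
f = psi_sample(rng, q=64, mu=1 / 16)  # OR of 8 weight-1 strings of length 64
# Hamming weight of f is at most 2*mu*q = 8 (collisions can only lower it),
# which is why the distribution is Bernoulli-like rather than exactly Ber_mu^q.
```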

SLIDE 12

Randomized PRGs to PRFs

  • Given a randomized PRG H_A: {0,1}^n × {0,1}^* → {0,1}^{n²} in AC0(mod 2),
    how to construct a PRF in AC0(mod 2)?
    ① A PRF of input size ω(1)·log n: n-ary GGM tree of depth d = ω(1)
        H_A(k) ≝ H_A^{0⋯00}(k) ∥ H_A^{0⋯01}(k) ∥ ⋯ ∥ H_A^{1⋯11}(k)
        F_{k,A}(x_1 ⋯ x_{d·log n}) ≝ H_A^{x_{(d−1)log n+1} ⋯ x_{d·log n}}(⋯ H_A^{x_{log n+1} ⋯ x_{2log n}}(H_A^{x_1 ⋯ x_{log n}}(k)) ⋯)
    ② Domain extension from {0,1}^{ω(1)·log n} to {0,1}^n (security & depth preserved)
       Generalized Levin's trick:
        F′_{k,A}(x) ≝ F_{k_1,A}(h_1(x)) ⊕ F_{k_2,A}(h_2(x)) ⊕ ⋯ ⊕ F_{k_m,A}(h_m(x))
       with universal hash functions h_1, ⋯, h_m: {0,1}^n → {0,1}^{ω(1)·log n}
       and key k ≝ (k_1, h_1, ⋯, k_m, h_m)
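Step ① can be sketched as follows. SHA-256 is a stand-in for the LPN-based PRG (so this only illustrates the tree structure, not the slides' instantiation), and the names `prg_blocks`/`ggm_prf` and the toy parameters (4-ary tree, depth 2) are hypothetical.

```python
import hashlib

def prg_blocks(a, k, n_blocks, block_len):
    """Stand-in PRG: expands key k into n_blocks output blocks, all salted by
    the public coin a. SHA-256 replaces the LPN-based PRG for illustration."""
    return [hashlib.sha256(f"{a}|{i}|{k}".encode()).hexdigest()[:block_len]
            for i in range(n_blocks)]

def ggm_prf(a, k, x, base_bits, depth):
    """n-ary GGM tree: each base_bits-bit chunk of the input x selects which
    PRG output block becomes the key for the next level."""
    assert len(x) == base_bits * depth
    state = k
    for level in range(depth):
        chunk = x[level * base_bits:(level + 1) * base_bits]
        idx = int(chunk, 2)  # which of the 2^base_bits children to descend into
        state = prg_blocks(a, state, 2 ** base_bits, 16)[idx]
    return state

y = ggm_prf(a="coinA", k="rootkey", x="0110", base_bits=2, depth=2)
```

The depth of the resulting circuit is that of the PRG times the tree depth d, which is why the slides keep d = ω(1) as small as possible.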

SLIDE 13

Randomized PRGs to PRFs (cont’d)

Theorem [Generalized Levin's trick]: For random functions R_1, ⋯, R_m: {0,1}^{ω(log n)} → {0,1}^n and universal hash functions h_1, ⋯, h_m: {0,1}^n → {0,1}^{ω(log n)}, let
    R′(x) ≝ R_1(h_1(x)) ⊕ R_2(h_2(x)) ⊕ ⋯ ⊕ R_m(h_m(x))
Then R′ is q·(q/n^{ω(1)})^m-indistinguishable from a random function {0,1}^n → {0,1}^n for any (computationally unbounded) adversary making up to q oracle queries.

  • See [Bellare et al. 99], [Maurer 02], [Döttling & Schröder 15], [Gaži & Tessaro 15]
  • Our proof: using Patarin's H-coefficient technique
  • Security is preserved for q = n^{ω(1)} and m = O(n/log n)

Theorem [The PRF]: Assume decisional LPN is (q = (1 + Ω(1))·n, t, ε)-hard on a secret of size n and noise rate μ = n^{−c} (0 < c < 1). Then for any ω(1) there exists a (q = n^{ω(1)}, t − poly(q, n), O(d·q·ε))-hard randomized PRF F′_{k,A} in AC0(mod 2) of depth ω(1), on any weak key k of entropy O(n^{1−c} · log n).
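The XOR-of-hashed-copies combiner in the theorem above can be sketched as follows. The small-domain functions are stand-in lazily-sampled random tables (playing the role of the R_i), the universal hashes are a GF(2) linear family, and all parameters (m = 3, 32-bit inputs, 8-bit hash outputs) are illustrative only.

```python
import random

def universal_hash(rng, in_bits, out_bits):
    """Universal hash {0,1}^in_bits -> {0,1}^out_bits: h(x) = M*x over GF(2)."""
    M = [[rng.randint(0, 1) for _ in range(in_bits)] for _ in range(out_bits)]
    def h(x):
        return tuple(sum(m * b for m, b in zip(row, x)) % 2 for row in M)
    return h

def extended_prf(fs, hs, x):
    """R'(x) = R_1(h_1(x)) xor ... xor R_m(h_m(x)): each component function
    only ever sees a short hashed input."""
    out = None
    for f, h in zip(fs, hs):
        block = f(h(x))
        out = block if out is None else [a ^ b for a, b in zip(out, block)]
    return out

rng = random.Random(4)
m, n_bits, short = 3, 32, 8
hs = [universal_hash(rng, n_bits, short) for _ in range(m)]

def make_f(table):
    """Stand-in small-domain random function: a lazily sampled lookup table."""
    def f(u):
        if u not in table:
            table[u] = [rng.randint(0, 1) for _ in range(n_bits)]
        return table[u]
    return f

fs = [make_f({}) for _ in range(m)]
x = tuple(rng.randint(0, 1) for _ in range(n_bits))
y = extended_prf(fs, hs, x)
```

Intuitively, a distinguisher wins only if its queries collide on all m hashes simultaneously, which is what drives the (q/n^{ω(1)})^m term in the bound.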

SLIDE 14

Conclusion and open problems

From low-noise LPN we construct:
    Polynomial-stretch pseudorandom generators (PRGs) in AC0(mod 2)
    Pseudorandom functions (PRFs) in AC0(mod 2)

  • Same (actually better) t/ε security as the underlying LPN
      seed/key of entropy λ = n^{1−c}·log n with t/ε security up to 2^{O(n^{1−c})} = 2^{O(λ/log λ)}
  • Query complexity q = n^{ω(1)}
      ω(1): depth of the circuit.
  • Open problems
      LPN-based PRFs in constant depth
        weak PRFs in AC0(mod 2)
        PRFs in TC0
      More cryptomania objects from LPN?
        Collision-Resistant Hash Functions (CRHF)
        Fully Homomorphic Encryption (FHE)
        etc.
SLIDE 15

Thank you!