Pseudo-Entropy and Pseudorandom Generators

Application of Information Theory, Lecture 11
Iftach Haitner, Tel Aviv University
January 6, 2015

Part I


4. Encryption schemes

Definition 1
A pair of algorithms $(E, D)$ is a (perfectly correct) encryption scheme if for every $k \in \{0,1\}^n$ and $m \in \{0,1\}^\ell$ it holds that $D(k, E(k, m)) = m$.

◮ What security should we ask of such a scheme?
◮ Perfect secrecy: $E_K(m) \equiv E_K(m')$ for any $m, m' \in \{0,1\}^\ell$ and $K \sim \{0,1\}^n$, letting $E_k(x) := E(k, x)$.
◮ Theorem (Shannon): perfect secrecy implies $n \geq \ell$. Is it bad? Is it optimal?
◮ Proof: Let $M \sim \{0,1\}^\ell$ be uniform and independent of $K$.
◮ Perfect secrecy $\implies (M, E_K(M)) \equiv (M, E_K(0^\ell))$, so $H(M \mid E_K(M)) = H(M, E_K(M)) - H(E_K(M)) = H(M \mid E_K(0^\ell)) = H(M) = \ell$ (since $M$ is independent of $E_K(0^\ell)$).
◮ Perfect correctness $\implies H(M \mid E_K(M), K) = 0$, hence $\ell = H(M \mid E_K(M)) \leq H(M, K \mid E_K(M)) = H(K \mid E_K(M)) + H(M \mid E_K(M), K) \leq H(K) + 0 \leq n$.
◮ $\implies \ell \leq n$, i.e., $n \geq \ell$.
◮ Statistical security? HW. Computational security?
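
Shannon's bound is matched by the one-time pad, which achieves perfect secrecy with $n = \ell$. A minimal sketch in Python (my own illustration, not from the slides):

```python
import secrets

def keygen(ell: int) -> bytes:
    # The key is as long as the message: n = ell, matching Shannon's bound.
    return secrets.token_bytes(ell)

def E(k: bytes, m: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(ki ^ mi for ki, mi in zip(k, m))

def D(k: bytes, c: bytes) -> bytes:
    # XOR is an involution, so decryption equals encryption.
    return E(k, c)

m = b"attack at dawn"
k = keygen(len(m))
assert D(k, E(k, m)) == m  # perfect correctness
# Perfect secrecy: for any fixed m, E_K(m) is uniform over byte strings,
# so E_K(m) and E_K(m') are identically distributed for all m, m'.
```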

5. Part II: Statistical vs. Computational distance

8. Distributions and statistical distance

Let $P$ and $Q$ be two distributions over a finite set $\mathcal{U}$. Their statistical distance (also known as variation distance) is defined as
$$\mathrm{SD}(P, Q) := \frac{1}{2} \sum_{x \in \mathcal{U}} |P(x) - Q(x)| = \max_{S \subseteq \mathcal{U}} \big(P(S) - Q(S)\big).$$
We will only consider finite distributions.

Claim 2
For any pair of (finite) distributions $P$ and $Q$, it holds that
$$\mathrm{SD}(P, Q) = \max_{D} \Big\{ \Delta_D(P, Q) := \Pr_{x \leftarrow P}[D(x) = 1] - \Pr_{x \leftarrow Q}[D(x) = 1] \Big\},$$
where $D$ ranges over all algorithms.

Let $P, Q, R$ be finite distributions. Then:
◮ Triangle inequality: $\mathrm{SD}(P, R) \leq \mathrm{SD}(P, Q) + \mathrm{SD}(Q, R)$
◮ Repeated sampling: $\mathrm{SD}(P^2 = (P, P),\ Q^2 = (Q, Q)) \leq 2 \cdot \mathrm{SD}(P, Q)$
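
The two forms of the definition are easy to check numerically; a small sketch for distributions given as probability dictionaries (helper names are my own):

```python
def statistical_distance(P: dict, Q: dict) -> float:
    support = set(P) | set(Q)
    # Half the L1 distance ...
    half_l1 = sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support) / 2
    # ... equals max over events S of P(S) - Q(S),
    # attained at S = {x : P(x) > Q(x)}.
    best_event = sum(max(P.get(x, 0.0) - Q.get(x, 0.0), 0.0) for x in support)
    assert abs(half_l1 - best_event) < 1e-12
    return half_l1

P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.25, "b": 0.25, "c": 0.5}
print(statistical_distance(P, Q))  # 0.5
```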

9. Section 1: Computational Indistinguishability

15. Computational indistinguishability

Definition 3 (computational indistinguishability)
$P$ and $Q$ are $(s, \varepsilon)$-indistinguishable if $\Delta_D(P, Q) \leq \varepsilon$ for every $s$-size $D$.

◮ Adversaries are circuits (possibly randomized)
◮ $(\infty, \varepsilon)$-indistinguishability is equivalent to statistical distance at most $\varepsilon$
◮ We sometimes think of $s = n^{\omega(1)}$ and $\varepsilon = 1/s$, where $n$ is the "security parameter"
◮ Can it be different from the statistical case?
◮ Unless said otherwise, distributions are over $\{0,1\}^n$
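
The quantity $\Delta_D(P, Q)$ for a concrete distinguisher can be estimated by sampling. A Monte Carlo sketch with a toy distinguisher (all names here are my own illustration):

```python
import random

def advantage(D, sample_P, sample_Q, trials=100_000):
    # Empirical estimate of Delta_D(P, Q) = Pr[D(P)=1] - Pr[D(Q)=1].
    hits_P = sum(D(sample_P()) for _ in range(trials))
    hits_Q = sum(D(sample_Q()) for _ in range(trials))
    return (hits_P - hits_Q) / trials

def sample_P():  # uniform over {0,1}^8
    return [random.randint(0, 1) for _ in range(8)]

def sample_Q():  # 7 uniform bits followed by an even-parity bit
    x = [random.randint(0, 1) for _ in range(7)]
    return x + [sum(x) % 2]

def D(x):  # outputs 1 iff the string has odd parity
    return sum(x) % 2

print(advantage(D, sample_P, sample_Q))  # about 0.5 - 0.0 = 0.5
```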

22. Repeated sampling

Question 4
Assume $P$ and $Q$ are $(s, \varepsilon)$-indistinguishable. What about $P^2$ and $Q^2$?

◮ Let $D$ be an $s'$-size algorithm with $\Delta_D(P^2, Q^2) = \varepsilon'$. Then
$$\varepsilon' = \Pr_{x \leftarrow P^2}[D(x) = 1] - \Pr_{x \leftarrow Q^2}[D(x) = 1] = \Big(\Pr_{x \leftarrow P^2}[D(x) = 1] - \Pr_{x \leftarrow (P,Q)}[D(x) = 1]\Big) + \Big(\Pr_{x \leftarrow (P,Q)}[D(x) = 1] - \Pr_{x \leftarrow Q^2}[D(x) = 1]\Big) = \Delta_D(P^2, (P, Q)) + \Delta_D((P, Q), Q^2).$$
◮ So either $\Delta_D(P^2, (P, Q)) \geq \varepsilon'/2$ or $\Delta_D((P, Q), Q^2) \geq \varepsilon'/2$.
◮ In either case, hardwiring the best value of the coordinate on which the two hybrids agree yields an at most $(s' + n)$-size distinguisher between $P$ and $Q$ with advantage $\geq \varepsilon'/2$.
◮ Hence, $\varepsilon' > 2\varepsilon$ implies $s' > s - n$; i.e., $P^2$ and $Q^2$ are $(s - n, 2\varepsilon)$-indistinguishable.

34. Repeated sampling cont.

What about $P^k$ and $Q^k$?

Claim 5
Assume $P$ and $Q$ are $(s, \varepsilon)$-indistinguishable. Then $P^k$ and $Q^k$ are $(s - kn, k\varepsilon)$-indistinguishable.

Proof:
◮ For $i \in \{0, \dots, k\}$, let $H^i = (P_1, \dots, P_i, Q_{i+1}, \dots, Q_k)$, where the $P_i$'s are i.i.d. $\sim P$ and the $Q_i$'s are i.i.d. $\sim Q$ (the hybrids). Note that $H^0 = Q^k$ and $H^k = P^k$.
◮ Let $D$ be an $s'$-size algorithm with $\Delta_D(P^k, Q^k) = \varepsilon'$.
◮ $\varepsilon' = \Pr[D(H^k) = 1] - \Pr[D(H^0) = 1] = \sum_{i \in [k]} \big(\Pr[D(H^i) = 1] - \Pr[D(H^{i-1}) = 1]\big) = \sum_{i \in [k]} \Delta_D(H^i, H^{i-1})$ (a telescoping sum).
◮ $\implies \exists i \in [k]$ with $\Delta_D(H^i, H^{i-1}) \geq \varepsilon'/k$. Hardwiring the other $k - 1$ coordinates yields an at most $(s' + kn)$-size distinguisher between $P$ and $Q$ with advantage $\geq \varepsilon'/k$ (see the sketch below).
◮ Thus, $\varepsilon' > k\varepsilon$ implies $s' > s - kn$.
◮ When considering bounded-time algorithms, things behave very differently!
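
The step from a hybrid gap to a single-sample distinguisher is constructive. A sketch of the reduction in oracle style (names are illustrative):

```python
def embed_at(D, sample_P, sample_Q, k, i):
    """Turn a k-tuple distinguisher D into a single-sample distinguisher.

    If the challenge is drawn from P, D sees the hybrid H^i;
    if it is drawn from Q, D sees H^{i-1}. Hence the advantage
    of the returned D_i equals Delta_D(H^i, H^{i-1}).
    """
    def D_i(challenge):
        left = [sample_P() for _ in range(i - 1)]   # coordinates 1..i-1 ~ P
        right = [sample_Q() for _ in range(k - i)]  # coordinates i+1..k ~ Q
        return D(left + [challenge] + right)
    return D_i
```

In the circuit model, the padding samples are hardwired to their best values by an averaging argument rather than drawn at run time, which is where the $s' + kn$ size bound comes from.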

35. Part III: Pseudorandom Generators

45. Pseudorandom generator

Definition 6 (pseudorandom distributions)
A distribution $P$ over $\{0,1\}^n$ is $(s, \varepsilon)$-pseudorandom if it is $(s, \varepsilon)$-indistinguishable from $U_n$.

◮ Do such distributions exist for interesting $(s, \varepsilon)$?

Definition 7 (pseudorandom generators (PRGs))
A poly-time computable function $g: \{0,1\}^n \mapsto \{0,1\}^{\ell(n)}$ is an $(s, \varepsilon)$-pseudorandom generator if for every $n \in \mathbb{N}$:
◮ $g$ is length extending (i.e., $\ell(n) > n$)
◮ $g(U_n)$ is $(s(n), \varepsilon(n))$-pseudorandom

◮ We omit the "security parameter", i.e., $n$, when its value is clear from the context.
◮ Do such generators exist?
◮ Applications? (One is sketched below.)
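
One answer to "Applications?": a PRG circumvents Shannon's bound computationally, stretching a short key into a long one-time pad. A sketch, with a SHA-256-based stand-in for $g$ just to make it runnable (not a proven PRG; purely illustrative):

```python
import hashlib

def g(seed: bytes, out_len: int) -> bytes:
    # Stand-in expander: SHA-256 in counter mode. Used only so the
    # sketch runs; nothing here is a proof of pseudorandomness.
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

def E(k: bytes, m: bytes) -> bytes:
    # "Computational one-time pad": pad = g(k), with |k| much
    # smaller than |m| -- impossible under perfect secrecy.
    pad = g(k, len(m))
    return bytes(p ^ b for p, b in zip(pad, m))

D = E  # XOR is an involution

k = b"0123456789abcdef"  # 16-byte key
m = b"a message far longer than the key it is encrypted under"
assert D(k, E(k, m)) == m
```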

46. Section 2: Pseudorandom generators (PRGs) from One-Way Permutations (OWPs)

56. OWP to PRG

Claim 8
Let $f: \{0,1\}^n \mapsto \{0,1\}^n$ be a poly-time permutation and let $b: \{0,1\}^n \mapsto \{0,1\}$ be a poly-time $(s, \varepsilon)$-hardcore predicate of $f$. Then $g(x) = (f(x), b(x))$ is an $(s - O(n), \varepsilon)$-PRG.

◮ Hence, OWP $\implies$ PRG.
◮ Proof: Let $D$ be an $s'$-size algorithm with $\Delta_D(g(U_n), U_{n+1}) = \varepsilon'$. We will show there exists an $(s' + O(n))$-size $P$ with $\Pr[P(f(U_n)) = b(U_n)] = \frac{1}{2} + \varepsilon'$.
◮ Let $\delta = \Pr[D(U_{n+1}) = 1]$ (hence, $\Pr[D(g(U_n)) = 1] = \delta + \varepsilon'$).
◮ Compute (using that $f$ is a permutation, so $(f(U_n), U_1) \equiv U_{n+1}$):
$$\delta = \Pr[D(f(U_n), U_1) = 1] = \Pr[U_1 = b(U_n)] \cdot \Pr[D(f(U_n), U_1) = 1 \mid U_1 = b(U_n)] + \Pr[U_1 = \overline{b(U_n)}] \cdot \Pr[D(f(U_n), U_1) = 1 \mid U_1 = \overline{b(U_n)}] = \frac{1}{2}(\delta + \varepsilon') + \frac{1}{2} \Pr[D(f(U_n), \overline{b(U_n)}) = 1].$$
◮ Hence, $\Pr[D(f(U_n), \overline{b(U_n)}) = 1] = \delta - \varepsilon'$.

60. OWP to PRG cont.

◮ $\Pr[D(f(U_n), b(U_n)) = 1] = \delta + \varepsilon'$
◮ $\Pr[D(f(U_n), \overline{b(U_n)}) = 1] = \delta - \varepsilon'$

Algorithm 9 ($P$) (code sketch below)
Input: $y \in \{0,1\}^n$
1. Flip a random coin $c \leftarrow \{0,1\}$.
2. If $D(y, c) = 1$ output $c$; otherwise, output $\overline{c}$.

◮ It follows that
$$\Pr[P(f(U_n)) = b(U_n)] = \Pr[c = b(U_n)] \cdot \Pr[D(f(U_n), c) = 1 \mid c = b(U_n)] + \Pr[c = \overline{b(U_n)}] \cdot \Pr[D(f(U_n), c) = 0 \mid c = \overline{b(U_n)}] = \frac{1}{2}(\delta + \varepsilon') + \frac{1}{2}\big(1 - (\delta - \varepsilon')\big) = \frac{1}{2} + \varepsilon'.$$
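
Algorithm 9 in code, treating the distinguisher $D$ as a given oracle (a sketch; the oracle interface is my own framing):

```python
import random

def make_predictor(D):
    # Algorithm 9: predict the hardcore bit b(x) from y = f(x), given a
    # distinguisher D for (f(x), b(x)) versus (f(x), uniform bit).
    def P(y):
        c = random.randint(0, 1)  # flip a random coin
        # If D accepts (y, c), the pair "looks like" a PRG output,
        # so keep the guess c; otherwise output the complement.
        return c if D(y, c) == 1 else 1 - c
    return P
```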

61. Part IV: PRG from Regular OWF

67. Computational notions of entropy

Definition 10
$X$ has $(s, \varepsilon)$-pseudoentropy at least $k$ if there exists a r.v. $Y$ with $H(Y) \geq k$ and $\Delta_D(X, Y) \leq \varepsilon$ for every $s$-size $D$. $(s, \varepsilon)$-pseudo-min-/Rényi-entropy are defined analogously.

◮ Example (e.g., the output of a length-extending PRG on a uniform seed has pseudoentropy exceeding its real entropy)
◮ Repeated sampling
◮ Non-monotonicity
◮ Ensembles
◮ In the following we will simply write $(s, \varepsilon)$-entropy, etc.

68. High entropy OWF from regular OWF

Claim 11
Let $f: \{0,1\}^n \mapsto \{0,1\}^n$ be a $2^k$-regular $(s, \varepsilon)$-one-way function, let $\mathcal{H} = \{h: \{0,1\}^n \mapsto \{0,1\}^{k+2}\}$ be a 2-universal family, and let $g(h, x) = (f(x), h, h(x))$ (a code sketch follows). Then
1. $H_2(g(U_n, H)) \geq 2n - \frac{1}{2}$, for $H \leftarrow \mathcal{H}$.
2. $g$ is $(\Theta(s\varepsilon^2), 2\varepsilon)$-one-way.

◮ $k$, $m$, and $\mathcal{H}$ are parameterized by $n$.
◮ We assume $\log|\mathcal{H}| = n$ and $s \geq n$.
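
A sketch of the construction $g(h, x) = (f(x), h, h(x))$. For concreteness it uses random GF(2)-linear maps as the 2-universal family (so $\log|\mathcal{H}|$ here is $n(k+2)$ rather than the $n$ assumed on the slide) and a toy regular $f$; both are placeholders:

```python
import random

n, k = 16, 4  # toy parameters: f is 2^k-regular on n-bit inputs

def f(x: int) -> int:
    # Toy 2^k-regular function: drop the low k bits, so every image
    # has exactly 2^k preimages. Not one-way -- illustration only.
    return x >> k

def sample_h():
    # A random GF(2)-linear map {0,1}^n -> {0,1}^(k+2): one n-bit row
    # per output bit. Such maps form a 2-universal family.
    return tuple(random.getrandbits(n) for _ in range(k + 2))

def apply_h(h, x: int) -> int:
    # Output bit i is the inner product <h[i], x> over GF(2).
    return sum((bin(row & x).count("1") & 1) << i for i, row in enumerate(h))

def g(h, x: int):
    return (f(x), h, apply_h(h, x))
```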

75. g has high Rényi entropy

$$\mathrm{CP}(g(U_n, H)) := \Pr_{w, w' \leftarrow \{0,1\}^n \times \mathcal{H}}[g(w) = g(w')]$$
$$= \Pr_{h, h' \leftarrow \mathcal{H}}[h = h'] \cdot \Pr_{(x, x') \leftarrow (\{0,1\}^n)^2}[f(x) = f(x')] \cdot \Pr_{h \leftarrow \mathcal{H};\, (x, x') \leftarrow (\{0,1\}^n)^2}[h(x) = h(x') \mid f(x) = f(x')]$$
$$= \mathrm{CP}(\mathcal{H}) \cdot \mathrm{CP}(f(U_n)) \cdot \big(2^{-k} + (1 - 2^{-k}) \cdot 2^{-k-2}\big) \leq \mathrm{CP}(\mathcal{H}) \cdot \mathrm{CP}(f(U_n)) \cdot 2^{-k} \cdot \frac{4}{3} = 2^{-n} \cdot 2^{-n} \cdot \frac{4}{3}.$$

(For the third line: given $f(x) = f(x')$, we have $x = x'$ with probability $2^{-k}$, and otherwise $h(x) = h(x')$ with probability $2^{-k-2}$ by 2-universality; also $\mathrm{CP}(\mathcal{H}) = 2^{-n}$ and, by $2^k$-regularity, $\mathrm{CP}(f(U_n)) = 2^{k-n}$.)

Hence, $H_2(g(U_n, H)) \geq 2n + \log\frac{3}{4} \geq 2n - \frac{1}{2}$.
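
The conditional hash-collision term $2^{-k} + (1 - 2^{-k}) \cdot 2^{-k-2}$ can be sanity-checked by simulation with the placeholder family from the construction sketch (toy parameters; illustration only):

```python
import random

n, k = 16, 4  # as in the construction sketch above

def sample_h():
    return tuple(random.getrandbits(n) for _ in range(k + 2))

def apply_h(h, x):
    return sum((bin(row & x).count("1") & 1) << i for i, row in enumerate(h))

def conditional_collision(trials=200_000):
    # Estimate Pr[h(x) = h(x') | f(x) = f(x')], where f drops the low
    # k bits, so conditioning just resamples the low k bits of x.
    hits = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        x2 = (x >> k << k) | random.getrandbits(k)  # same f-image as x
        h = sample_h()
        hits += apply_h(h, x) == apply_h(h, x2)
    return hits / trials

print(conditional_collision())            # ~ 0.0771
print(2**-k + (1 - 2**-k) * 2**-(k + 2))  # = 0.0771484375
```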

80. g is one-way

Let $A$ be an $s'$-size algorithm that inverts $g$ w.p. $\varepsilon'$, and let $\ell = k - 2\log\frac{1}{\varepsilon'}$. Consider the following inverter for $f$ (code sketch below):

Algorithm 12 ($B$)
Input: $y \in \{0,1\}^n$. Return $D(y, h, z)$, for $h \leftarrow \mathcal{H}$ and $z \leftarrow \{0,1\}^\ell$.

Algorithm 13 ($D$)
Input: $y \in \{0,1\}^n$, $h \in \mathcal{H}$, and $z_1 \in \{0,1\}^\ell$.
For all $z_2 \in \{0,1\}^{k+2-\ell}$:
1. Let $(x, h) = A(y, h, z_1 \circ z_2)$.
2. If $f(x) = y$, return $x$.

◮ $B$'s size is $(s' + O(n)) \cdot 2^{2\log\frac{1}{\varepsilon'} + 2} = \Theta(s'/\varepsilon'^2)$.
◮ $\Pr_{x \leftarrow \{0,1\}^n;\, h \leftarrow \mathcal{H}}\big[D(f(x), h, h(x)_{1,\dots,\ell}) \in f^{-1}(f(x))\big] \geq \varepsilon'$, since $D$ tries every completion $z_2$ of the correct hash value.
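
Algorithms 12 and 13 as a code skeleton, with $A$, $f$, and the hash family treated as supplied oracles (a sketch under those assumptions):

```python
import itertools
import random

def make_inverter(A, f, sample_h, k, ell):
    # Algorithm 13 (D): given y, h and an ell-bit hash prefix z1, try
    # every (k + 2 - ell)-bit completion z2 and run the g-inverter A.
    def D(y, h, z1):
        for z2 in itertools.product((0, 1), repeat=k + 2 - ell):
            x, _ = A(y, h, tuple(z1) + z2)
            if f(x) == y:
                return x
        return None  # A failed on every completion

    # Algorithm 12 (B): guess h and the ell-bit prefix z1 uniformly.
    def B(y):
        h = sample_h()
        z1 = tuple(random.randint(0, 1) for _ in range(ell))
        return D(y, h, z1)

    return B, D
```

The $2^{k+2-\ell} = 4/\varepsilon'^2$ iterations of the enumeration loop account for the $\Theta(s'/\varepsilon'^2)$ size of $B$.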

83. g is one-way, cont.

We saw that
$$\Pr_{x \leftarrow \{0,1\}^n;\, h \leftarrow \mathcal{H}}\big[D(f(x), h, h(x)_{1,\dots,\ell}) \in f^{-1}(f(x))\big] \geq \varepsilon' \quad (1)$$
By the leftover hash lemma (conditioned on $f(x)$, $x$ is uniform over a fiber of size $2^k$, and $\ell = k - 2\log\frac{1}{\varepsilon'}$),
$$\mathrm{SD}\big((f(x), h, h(x)_{1,\dots,\ell})_{x \leftarrow \{0,1\}^n, h \leftarrow \mathcal{H}},\ (f(x), h, U_\ell)_{x \leftarrow \{0,1\}^n, h \leftarrow \mathcal{H}}\big) \leq \varepsilon'/2 \quad (2)$$
Hence,
$$\Pr_{x \leftarrow \{0,1\}^n}\big[B(f(x)) \in f^{-1}(f(x))\big] \geq \varepsilon' - \varepsilon'/2 = \varepsilon'/2.$$
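
Inequality (2) is the leftover hash lemma's arithmetic with these parameters; spelled out (standard LHL form, my own rephrasing):
$$\mathrm{SD}\big((h, h(x)_{1,\dots,\ell}),\ (h, U_\ell)\big) \leq \frac{1}{2}\sqrt{\frac{2^\ell}{2^k}} = \frac{1}{2}\sqrt{2^{-2\log(1/\varepsilon')}} = \frac{\varepsilon'}{2},$$
where the min-entropy $k$ comes from $x$ being uniform over the $2^k$-size fiber $f^{-1}(f(x))$; averaging over $f(x)$ preserves the bound, giving (2).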
