
Application of Information Theory, Lecture 11

Pseudo-Entropy and Pseudorandom Generators

Iftach Haitner

Tel Aviv University

January 6, 2015

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 1 / 23


Part I Motivation



Encryption schemes

Definition 1
A pair of algorithms (E, D) is a (perfectly correct) encryption scheme if for any k ∈ {0, 1}n and m ∈ {0, 1}ℓ, it holds that D(k, E(k, m)) = m.

◮ What security should we ask of such a scheme?
◮ Perfect secrecy: EK(m) ≡ EK(m′) for any m, m′ ∈ {0, 1}ℓ and K ∼ {0, 1}n, letting Ek(x) := E(k, x).
◮ Theorem (Shannon): Perfect secrecy implies n ≥ ℓ.
◮ Is it bad? Is it optimal?
◮ Proof: Let M ∼ {0, 1}ℓ.
◮ Perfect secrecy ⇒ H(M, EK(M)) = H(M, EK(0ℓ))
⇒ H(M|EK(M)) = H(M, EK(M)) − H(EK(M)) = H(M|EK(0ℓ)) = ℓ
◮ Perfect correctness ⇒ H(M|EK(M), K) = 0
⇒ H(M|EK(M)) ≤ H(M, K|EK(M)) = H(K|EK(M)) + H(M|EK(M), K) = H(K|EK(M)) + 0 ≤ H(K) ≤ n
⇒ ℓ ≤ n.
◮ Statistical security? HW. Computational security?
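Shannon's bound is tight: the one-time pad achieves perfect secrecy and perfect correctness with n = ℓ. A minimal Python sketch (the byte-level E and D below are illustrative, not part of the lecture):

```python
import secrets

def E(k: bytes, m: bytes) -> bytes:
    # One-time pad: XOR the message with a uniform key of the same length.
    assert len(k) == len(m)
    return bytes(ki ^ mi for ki, mi in zip(k, m))

def D(k: bytes, c: bytes) -> bytes:
    # XOR is an involution, so decryption is the same operation.
    return E(k, c)

m = b"attack at dawn"
k = secrets.token_bytes(len(m))  # key length n equals message length l
c = E(k, m)
assert D(k, c) == m  # perfect correctness: D(k, E(k, m)) = m
```

For a uniform key K, EK(m) is uniform over all strings of length ℓ for every m, which is exactly the perfect-secrecy condition EK(m) ≡ EK(m′).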


Part II Statistical vs. Computational Distance


Distributions and statistical distance

Let P and Q be two distributions over a finite set U. Their statistical distance (also known as variation distance) is defined as

SD(P, Q) := 1/2 Σx∈U |P(x) − Q(x)| = maxS⊆U (P(S) − Q(S))

We will only consider finite distributions.

Claim 2
For any pair of (finite) distributions P and Q, it holds that

SD(P, Q) = maxD {∆D(P, Q) := Prx←P [D(x) = 1] − Prx←Q [D(x) = 1]},

where the maximum is over all algorithms D.

Let P, Q, R be finite distributions; then
◮ Triangle inequality: SD(P, R) ≤ SD(P, Q) + SD(Q, R)
◮ Repeated sampling: SD(P2 = (P, P), Q2 = (Q, Q)) ≤ 2 · SD(P, Q)
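Both forms of the definition are easy to check numerically. A small sketch (the two toy distributions are made-up numbers):

```python
def statistical_distance(P, Q):
    # SD(P, Q) = 1/2 * sum of |P(x) - Q(x)| over the union of supports.
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

# Toy distributions over two-bit strings, as dicts mapping outcome -> probability.
P = {"00": 0.4, "01": 0.1, "10": 0.25, "11": 0.25}
Q = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}

sd = statistical_distance(P, Q)

# The max-over-sets form is attained by S = {x : P(x) > Q(x)}.
S = {x for x in P if P[x] > Q.get(x, 0.0)}
gap = sum(P[x] for x in S) - sum(Q.get(x, 0.0) for x in S)
assert abs(sd - gap) < 1e-12
```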


Section 1 Computational Indistinguishability


Computational indistinguishability

Definition 3 (computational indistinguishability)
P and Q are (s, ε)-indistinguishable if ∆D(P, Q) ≤ ε for any s-size D.

◮ Adversaries are circuits (possibly randomized)
◮ (∞, ε)-indistinguishability is equivalent to statistical distance at most ε
◮ We sometimes think of s = nω(1) and ε = 1/s, where n is the “security parameter”
◮ Can it differ from the statistical case?
◮ Unless said otherwise, distributions are over {0, 1}n


Repeated sampling

Question 4
Assume P and Q are (s, ε)-indistinguishable. What about P2 and Q2?

◮ Let D be an s′-size algorithm with ∆D(P2, Q2) = ε′. Then

ε′ = Prx←P2 [D(x) = 1] − Prx←Q2 [D(x) = 1]
= (Prx←P2 [D(x) = 1] − Prx←(P,Q) [D(x) = 1]) + (Prx←(P,Q) [D(x) = 1] − Prx←Q2 [D(x) = 1])
= ∆D(P2, (P, Q)) + ∆D((P, Q), Q2)

◮ So either ∆D(P2, (P, Q)) ≥ ε′/2, or ∆D((P, Q), Q2) ≥ ε′/2
◮ Hardwiring a good sample into the coordinate on which the two hybrids agree yields an (s′ + n)-size distinguisher between P and Q with advantage at least ε′/2
◮ Hence, s′ ≤ s − n implies ε′ ≤ 2ε; that is, P2 and Q2 are (s − n, 2ε)-indistinguishable.


Repeated sampling cont.

What about Pk and Qk?

Claim 5
Assume P and Q are (s, ε)-indistinguishable. Then Pk and Qk are (s − kn, kε)-indistinguishable.

Proof:
◮ For i ∈ {0, . . . , k}, let Hi = (P1, . . . , Pi, Qi+1, . . . , Qk), where the Pi’s are iid ∼ P and the Qi’s are iid ∼ Q (hybrids). Note that H0 = Qk and Hk = Pk.
◮ Let D be an s′-size algorithm with ∆D(Pk, Qk) = ε′
◮ ε′ = Pr [D(Hk) = 1] − Pr [D(H0) = 1]
◮ ε′ = Σi∈[k] (Pr [D(Hi) = 1] − Pr [D(Hi−1) = 1]) = Σi∈[k] ∆D(Hi, Hi−1)
⇒ ∃i ∈ [k] with ∆D(Hi, Hi−1) ≥ ε′/k.
◮ Hardwiring the other k − 1 coordinates yields an (s′ + kn)-size distinguisher between P and Q with advantage at least ε′/k; thus, s′ ≤ s − kn implies ε′ ≤ kε.
◮ When considering bounded-time algorithms, things behave very differently!
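The telescoping step of the hybrid argument can be verified exactly on a toy example. Below P and Q are Bernoulli distributions with made-up parameters and D is an arbitrary fixed distinguisher; the per-hybrid advantages sum to the total advantage, so some step contributes at least ε′/k:

```python
from itertools import product

p1, q1 = 0.9, 0.5   # P(1) and Q(1): toy Bernoulli distributions
k = 3

def hybrid_prob(i, x):
    # Probability that hybrid H_i emits the k-bit tuple x:
    # the first i coordinates are drawn from P, the rest from Q.
    pr = 1.0
    for j, bit in enumerate(x):
        base = p1 if j < i else q1
        pr *= base if bit == 1 else 1.0 - base
    return pr

def D(x):
    return 1 if all(b == 1 for b in x) else 0  # some fixed distinguisher

def adv(i, j):
    # Delta_D(H_i, H_j) = Pr[D(H_i) = 1] - Pr[D(H_j) = 1], computed exactly.
    return sum((hybrid_prob(i, x) - hybrid_prob(j, x)) * D(x)
               for x in product((0, 1), repeat=k))

total = adv(k, 0)                       # Delta_D(P^k, Q^k)
steps = [adv(i, i - 1) for i in range(1, k + 1)]
assert abs(total - sum(steps)) < 1e-12  # the telescoping identity
assert max(steps) >= total / k - 1e-12  # some i has advantage >= eps'/k
```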


Part III Pseudorandom Generators


Pseudorandom generator

Definition 6 (pseudorandom distributions)
A distribution P over {0, 1}n is (s, ε)-pseudorandom if it is (s, ε)-indistinguishable from Un.

◮ Do such distributions exist for interesting (s, ε)?

Definition 7 (pseudorandom generators (PRGs))
A poly-time computable function g : {0, 1}n → {0, 1}ℓ(n) is an (s, ε)-pseudorandom generator if for any n ∈ N:
◮ g is length extending (i.e., ℓ(n) > n)
◮ g(Un) is (s(n), ε(n))-pseudorandom

◮ We omit the “security parameter”, i.e., n, when its value is clear from the context
◮ Do such generators exist?
◮ Applications?
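No explicit function is proven to be a PRG, but candidate constructions are easy to write down. A heuristic sketch that stretches a seed with SHA-256 in counter mode — its pseudorandomness is an assumption about SHA-256, not a theorem:

```python
import hashlib

def g(seed: bytes, out_len: int) -> bytes:
    # Length-extending candidate generator: hash the seed with a counter.
    # Pseudorandomness of the output is a heuristic assumption on SHA-256.
    assert out_len > len(seed)  # length extending: l(n) > n
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

stretched = g(b"a 16-byte seed..", 64)
assert len(stretched) == 64
```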


Section 2 Pseudorandom generators (PRGs) from One-Way Permutations (OWPs)


OWP to PRG

Claim 8 Let f : {0, 1}n → {0, 1}n be a poly-time permutation and let b: {0, 1}n → {0, 1} be a poly-time (s, ε)-hardcore predicate of f, then g(x) = (f(x), b(x)) is a (s − O(n), ε)-PRG.

◮ Hence, OWP =

⇒ PRG

◮ Proof: Let D be an s′-size algorithm with ∆D(g(Un), Un+1) = ε′, we will

show ∃ (s′ + O(n))-size P with Pr [P(f(Un)) = b(Un)] = 1

2 + ε′.

◮ Let δ = Pr [D(Un+1) = 1]

(hence, Pr [D(g(Un)) = 1] = δ + ε′)

◮ Compute

δ = Pr[D(f(Un), U1) = 1] (f is a permuation)

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 13 / 23

slide-71
SLIDE 71

OWP to PRG

Claim 8 Let f : {0, 1}n → {0, 1}n be a poly-time permutation and let b: {0, 1}n → {0, 1} be a poly-time (s, ε)-hardcore predicate of f, then g(x) = (f(x), b(x)) is a (s − O(n), ε)-PRG.

◮ Hence, OWP =

⇒ PRG

◮ Proof: Let D be an s′-size algorithm with ∆D(g(Un), Un+1) = ε′, we will

show ∃ (s′ + O(n))-size P with Pr [P(f(Un)) = b(Un)] = 1

2 + ε′.

◮ Let δ = Pr [D(Un+1) = 1]

(hence, Pr [D(g(Un)) = 1] = δ + ε′)

◮ Compute

δ = Pr[D(f(Un), U1) = 1] (f is a permuation) = Pr[U1 = b(Un)] · Pr[D(f(Un), U1) = 1 | U1 = b(Un)] + Pr[U1 = b(Un)] · Pr[D(f(Un), U1) = 1 | U1 = b(Un)]

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 13 / 23

slide-72
SLIDE 72

OWP to PRG

Claim 8 Let f : {0, 1}n → {0, 1}n be a poly-time permutation and let b: {0, 1}n → {0, 1} be a poly-time (s, ε)-hardcore predicate of f, then g(x) = (f(x), b(x)) is a (s − O(n), ε)-PRG.

◮ Hence, OWP =

⇒ PRG

◮ Proof: Let D be an s′-size algorithm with ∆D(g(Un), Un+1) = ε′, we will

show ∃ (s′ + O(n))-size P with Pr [P(f(Un)) = b(Un)] = 1

2 + ε′.

◮ Let δ = Pr [D(Un+1) = 1]

(hence, Pr [D(g(Un)) = 1] = δ + ε′)

◮ Compute

δ = Pr[D(f(Un), U1) = 1] (f is a permuation) = Pr[U1 = b(Un)] · Pr[D(f(Un), U1) = 1 | U1 = b(Un)] + Pr[U1 = b(Un)] · Pr[D(f(Un), U1) = 1 | U1 = b(Un)] = 1 2(δ + ε′) + 1 2 · Pr[D(f(Un), U1) = 1 | U1 = b(Un)].

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 13 / 23

slide-73
SLIDE 73

OWP to PRG

Claim 8 Let f : {0, 1}^n → {0, 1}^n be a poly-time permutation and let b : {0, 1}^n → {0, 1} be a poly-time (s, ε)-hardcore predicate of f. Then g(x) = (f(x), b(x)) is an (s − O(n), ε)-PRG.

◮ Hence, OWP ⇒ PRG

◮ Proof: Let D be an s′-size algorithm with ∆_D(g(U_n), U_{n+1}) = ε′. We will show ∃ an (s′ + O(n))-size P with Pr[P(f(U_n)) = b(U_n)] = 1/2 + ε′.

◮ Let δ = Pr[D(U_{n+1}) = 1] (hence, Pr[D(g(U_n)) = 1] = δ + ε′)

◮ Compute

δ = Pr[D(f(U_n), U_1) = 1]    (f is a permutation)
  = Pr[U_1 = b(U_n)] · Pr[D(f(U_n), U_1) = 1 | U_1 = b(U_n)] + Pr[U_1 = 1 − b(U_n)] · Pr[D(f(U_n), U_1) = 1 | U_1 = 1 − b(U_n)]
  = (1/2)(δ + ε′) + (1/2) · Pr[D(f(U_n), 1 − b(U_n)) = 1].

◮ Hence, Pr[D(f(U_n), 1 − b(U_n)) = 1] = δ − ε′

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 13 / 23

slide-77
SLIDE 77

OWP to PRG cont.

◮ Pr[D(f(U_n), b(U_n)) = 1] = δ + ε′
◮ Pr[D(f(U_n), 1 − b(U_n)) = 1] = δ − ε′

Algorithm 9 (P)
Input: y ∈ {0, 1}^n

  • 1. Flip a random coin c ← {0, 1}.
  • 2. If D(y, c) = 1, output c; otherwise, output 1 − c.

◮ It follows that

Pr[P(f(U_n)) = b(U_n)] = Pr[c = b(U_n)] · Pr[D(f(U_n), c) = 1 | c = b(U_n)] + Pr[c = 1 − b(U_n)] · Pr[D(f(U_n), c) = 0 | c = 1 − b(U_n)]
  = (1/2)(δ + ε′) + (1/2)(1 − (δ − ε′)) = 1/2 + ε′.

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 14 / 23
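The identity Pr[P(f(U_n)) = b(U_n)] = 1/2 + ε′ holds for every distinguisher D, because f is a permutation. A toy sketch that checks it by exhaustive enumeration (f, b, and D below are illustrative stand-ins, not a real OWP or hardcore bit):

```python
import itertools

n = 6

def f(x):  # toy permutation of {0,1}^n (left rotation); stand-in for an OWP
    return x[1:] + x[:1]

def b(x):  # stand-in predicate (illustrative, not actually hardcore)
    return x[0] ^ x[5]

def D(y):  # a distinguisher for g(x) = (f(x), b(x)); y has n+1 bits
    # Under g, y[6] = b(x) always equals y[4] ^ y[5]; under U_{n+1} it is random.
    return 1 if y[6] == y[4] ^ y[5] and y[0] == 0 else 0

xs = list(itertools.product((0, 1), repeat=n))
ys = list(itertools.product((0, 1), repeat=n + 1))

delta = sum(D(y) for y in ys) / len(ys)                      # Pr[D(U_{n+1}) = 1]
eps = sum(D(f(x) + (b(x),)) for x in xs) / len(xs) - delta   # advantage ε′

def P(y, c):  # Algorithm 9: guess a coin c, keep it iff D accepts (y, c)
    return c if D(y + (c,)) == 1 else 1 - c

# Success probability of P over a random input x and a random coin c:
succ = sum(P(f(x), c) == b(x) for x in xs for c in (0, 1)) / (2 * len(xs))
print(succ, 0.5 + eps)  # → 0.75 0.75 — exactly 1/2 + ε′, as the proof predicts
```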

slide-78
SLIDE 78

Part IV PRG from Regular OWF

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 15 / 23

slide-84
SLIDE 84

Computational notions of entropy

Definition 10 X has (s, ε)-pseudoentropy at least k if ∃ a rv Y with H(Y) ≥ k and ∆_D(X, Y) ≤ ε for any s-size D. (s, ε)-pseudo-min-entropy and (s, ε)-pseudo-Rényi-entropy are defined analogously.

◮ Example
◮ Repeated sampling
◮ Non-monotonicity
◮ Ensembles
◮ In the following we will simply write (s, ε)-entropy, etc.

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 16 / 23

slide-85
SLIDE 85

High entropy OWF from regular OWF

Claim 11 Let f : {0, 1}^n → {0, 1}^n be 2^k-regular and (s, ε)-one-way, let H = {h : {0, 1}^n → {0, 1}^{k+2}} be a 2-universal family, and let g(h, x) = (f(x), h, h(x)). Then

  • 1. H_2(g(U_n, H)) ≥ 2n − 1/2, for H ← H.
  • 2. g is (Θ(sε^2), 2ε)-one-way.

◮ k, m, and H are parameterized by n
◮ We assume log |H| = n and s ≥ n

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 17 / 23

slide-92
SLIDE 92

g has high Rényi entropy

CP(g(U_n, H)) := Pr_{w,w′←{0,1}^n×H}[g(w) = g(w′)]
  = Pr_{h,h′←H}[h = h′] · Pr_{(x,x′)←({0,1}^n)^2}[f(x) = f(x′)] · Pr_{h←H; (x,x′)←({0,1}^n)^2}[h(x) = h(x′) | f(x) = f(x′)]
  = CP(H) · CP(f(U_n)) · (2^{−k} + (1 − 2^{−k}) · 2^{−k−2})
  ≤ CP(H) · CP(f(U_n)) · 2^{−k} · (4/3)
  = 2^{−n} · 2^{−n} · (4/3).

Hence, H_2(g(U_n, H)) ≥ 2n + log(3/4) ≥ 2n − 1/2.

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 18 / 23
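The collision-probability computation can be verified exactly on toy parameters. The sketch below (all concrete choices are assumptions: a 2-regular f that drops a bit, and the 2-universal family of all binary matrices) checks the general form of the bound, H_2(g(U_n, H)) ≥ log |H| + n − 1/2, which specializes to 2n − 1/2 under the slide's convention log |H| = n.

```python
import itertools
from math import log2

n, k = 4, 1  # f below is 2^k-regular on {0,1}^n

def f(x):  # toy 2-regular function: drop the last bit, pad with a zero
    return x[:n - k] + (0,) * k

def h(M, x):  # linear hash h_M(x) = Mx over GF(2), with k+2 output bits
    return tuple(sum(m * v for m, v in zip(row, x)) % 2 for row in M)

X = list(itertools.product((0, 1), repeat=n))
rows = list(itertools.product((0, 1), repeat=n))
# The family of all (k+2) x n binary matrices is 2-universal.
H = list(itertools.product(rows, repeat=k + 2))

# Only pairs that already collide under f can collide under g:
pairs = [(x, xp) for x in X for xp in X if f(x) == f(xp)]
coll = sum(h(M, x) == h(M, xp) for M in H for (x, xp) in pairs)
# CP(g) = Pr[h = h'] * Pr[f(x) = f(x') and h(x) = h(x')]
cp = (1 / len(H)) * coll / (len(H) * len(X) ** 2)

h2 = -log2(cp)
bound = log2(len(H)) + n - 0.5  # H2 >= log|H| + n - 1/2
print(h2 >= bound)  # → True  (here h2 ≈ 15.83 and bound = 15.5)
```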

slide-97
SLIDE 97

g is one-way

Let A be an s′-size algorithm that inverts g w.p. ε′, and let ℓ = k − 2 log(1/ε′).

Consider the following inverter for f:

Algorithm 12 (B)
Input: y ∈ {0, 1}^n.
Return D(y, h, z), for h ← H and z ← {0, 1}^ℓ.

Algorithm 13 (D)
Input: y ∈ {0, 1}^n, h ∈ H and z_1 ∈ {0, 1}^ℓ.
For all z_2 ∈ {0, 1}^{k+2−ℓ}:

  • 1. Let (x, h) = A(y, h, z_1 ◦ z_2).
  • 2. If f(x) = y, return x.

◮ B's size is (s′ + O(n)) · 2^{2 log(1/ε′)+2} = Θ(s′/ε′^2)
◮ Pr_{x←{0,1}^n; h←H}[D(f(x), h, h(x)_{1,...,ℓ}) ∈ f^{−1}(f(x))] ≥ ε′

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 19 / 23
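The wiring of Algorithms 12–13 (guess a short hash prefix, enumerate the missing suffix bits, call A on each completion) can be exercised on the same toy f and matrix family as before. For illustration we assume a perfect inverter A (so ε′ = 1 and ℓ = k), implemented as a lookup table; a real A is an arbitrary adversary of size s′.

```python
import itertools

n, k = 4, 1

def f(x):  # toy 2-regular function (stand-in for a regular OWF)
    return x[:n - k] + (0,) * k

def h(M, x):  # linear hash with k+2 output bits
    return tuple(sum(m * v for m, v in zip(row, x)) % 2 for row in M)

X = list(itertools.product((0, 1), repeat=n))
rows = list(itertools.product((0, 1), repeat=n))
H = list(itertools.product(rows, repeat=k + 2))

ell = k  # ℓ = k - 2 log(1/ε′) with ε′ = 1 (assumption for illustration)
succ = total = 0
for M in H:
    # Perfect inverter A for g(h, x) = (f(x), h, h(x)), as a lookup table:
    A = {(f(x), h(M, x)): x for x in X}
    for x in X:
        y = f(x)
        for z1 in itertools.product((0, 1), repeat=ell):
            # Algorithm 13 (D): enumerate the k+2-ℓ missing hash bits
            ok = any((y, z1 + z2) in A
                     for z2 in itertools.product((0, 1), repeat=k + 2 - ell))
            succ += ok
            total += 1

print(succ / total)  # → 0.75, comfortably above ε′/2 = 1/2
```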

slide-100
SLIDE 100

g is one-way, cont.

We saw that

Pr_{x←{0,1}^n; h←H}[D(f(x), h, h(x)_{1,...,ℓ}) ∈ f^{−1}(f(x))] ≥ ε′    (1)

By the leftover hash lemma,

SD((f(x), h, h(x)_{1,...,ℓ})_{x←{0,1}^n, h←H}, (f(x), h, U_ℓ)_{x←{0,1}^n, h←H}) ≤ ε′/2    (2)

Hence, Pr_{x←{0,1}^n}[B(f(x)) ∈ f^{−1}(f(x))] ≥ ε′ − ε′/2 = ε′/2.

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 20 / 23
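The leftover hash lemma in the form used for (2) says that hashing a source with collision probability 2^{−k} down to ℓ bits costs at most (1/2) · 2^{−(k−ℓ)/2} in statistical distance. A minimal exact check on toy sizes (the 8-element support and the all-matrices hash family are assumptions):

```python
import itertools
from math import sqrt

m = 2  # hash output length ℓ
S = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 1, 1),
     (0, 1, 0, 0), (0, 1, 0, 1), (0, 1, 1, 0), (1, 0, 0, 0)]  # CP(X) = 1/8

def h(M, x):  # linear hash h_M(x) = Mx over GF(2)
    return tuple(sum(a * v for a, v in zip(row, x)) % 2 for row in M)

rows = list(itertools.product((0, 1), repeat=4))
H = list(itertools.product(rows, repeat=m))  # all 2x4 matrices: 2-universal
Z = list(itertools.product((0, 1), repeat=m))

# Exact statistical distance SD((M, h_M(X)), (M, U_m)):
sd = 0.0
for M in H:
    counts = {z: 0 for z in Z}
    for x in S:
        counts[h(M, x)] += 1
    sd += 0.5 * sum(abs(c / len(S) - 1 / len(Z)) for c in counts.values())
sd /= len(H)

bound = 0.5 * sqrt(len(Z) / len(S))  # (1/2) * sqrt(2^ℓ * CP(X))
print(sd <= bound)  # → True
```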

slide-110
SLIDE 110

The generator

Claim 14 Let g : {0, 1}^n → {0, 1}^m be a function with H_2(g(U_n)) ≥ n − 1/2, and let b be an (s, ε)-hardcore predicate for g. Then v(U_n) = (g(U_n), b(U_n)) has (s, ε)-Rényi-entropy n + 1/2.

Proof: ?

We call such a v a pseudo-Rényi-entropy generator.

Claim 15 The function v^n(x_1, ..., x_n) = (v(x_1), ..., v(x_n)) has (s − n^2, nε)-Rényi-entropy n^2 + n/2.

Proof:

◮ Let Z be a rv with H_2(Z) ≥ n + 1/2 such that Z and v(U_n) are (s, ε)-indistinguishable.
◮ H_2(Z^n) ≥ n^2 + n/2
◮ Z^n and v^n(U^n_n) are (s − n^2, nε)-indistinguishable

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 21 / 23
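The step H_2(Z^n) ≥ n^2 + n/2 uses the additivity of Rényi entropy over independent copies: CP(Z^n) = CP(Z)^n, so H_2(Z^n) = n · H_2(Z). A quick numerical check on an arbitrary toy distribution (the distribution Z below is an assumption for illustration):

```python
from math import log2

def cp(dist):  # collision probability of a finite distribution
    return sum(p * p for p in dist.values())

def h2(dist):  # Rényi entropy H_2 = -log2(CP)
    return -log2(cp(dist))

Z = {'a': 0.5, 'b': 0.25, 'c': 0.25}
n = 3
Zn = {(): 1.0}  # build the product distribution Z^n
for _ in range(n):
    Zn = {key + (s,): p * q for key, p in Zn.items() for s, q in Z.items()}

print(h2(Zn), n * h2(Z))  # equal: H_2 is additive over independent copies
```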

slide-120
SLIDE 120

The generator cont.

Claim 16 Let H = {h : {0, 1}^{n^2+n} → {0, 1}^{n^2+n/4}} be a 2-universal family, and let G : ({0, 1}^n)^n × H be defined by G(x_1, ..., x_n, h) = (h, h(v^n(x_1, ..., x_n))). Then G(U^n_n, H) is (s − n^2 − s_H, nε + 2^{−n/4})-indistinguishable from (H, U_{n^2+n/4}), for H ← H and s_H the size of the sampling and evaluation algorithms for H.

Corollary 17 If f, b, and H (?) are poly-time computable, then G is an (s − n^2 − s_H, nε + 2^{−n/4})-PRG.

Proof: (of claim)

◮ By the leftover hash lemma, SD((H, H(Z^n)), (H, U_{n^2+n/4})) ≤ 2^{−n/4}
◮ Let D be an s′-size algorithm that distinguishes G(U^n_n, H) from (H, U_{n^2+n/4}) with advantage ε′ + 2^{−n/4}
◮ Hence, ∃ an (s′ + s_H)-size algorithm that distinguishes G(U^n_n, H) from (H, H(Z^n)) with advantage ε′
◮ Hence s′ ≤ s − n^2 − s_H ⇒ ε′ ≤ nε.

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 22 / 23

slide-122
SLIDE 122

Remarks

◮ PRG “length extension"
◮ PRG from any OWF

Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 23 / 23
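The “length extension" remark refers to the standard iterated construction: from any PRG g : {0, 1}^n → {0, 1}^{n+1}, output one bit per step and feed the remaining n bits back in as the next state; a hybrid argument loses a factor of m in the distinguishing advantage. A minimal sketch of the wiring (toy_g below is a placeholder stand-in, not a secure PRG):

```python
def extend(g, seed, m):
    """Iterate g: {0,1}^n -> {0,1}^{n+1}: emit one bit, keep n bits as state."""
    out, state = [], tuple(seed)
    for _ in range(m):
        y = g(state)      # n + 1 bits
        out.append(y[0])  # output the first bit
        state = y[1:]     # recurse on the remaining n bits
    return tuple(out)

def toy_g(x):  # stand-in one-bit-stretch generator (NOT a secure PRG)
    return (sum(x) % 2,) + x[::-1]

stream = extend(toy_g, (1, 0, 1, 1, 0, 1), 20)
print(len(stream))  # → 20
```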