Application of Information Theory, Lecture 11
Pseudo-Entropy and Pseudorandom Generators
Iftach Haitner
Tel Aviv University.
January 6, 2015
Iftach Haitner (TAU) Application of Information Theory, Lecture 11 January 6, 2015 1 / 23
Part I
◮ What security should we ask from such a scheme?
◮ Perfect secrecy: EK(m) ≡ EK(m′), for any m, m′ ∈ {0, 1}^ℓ and K ← {0, 1}^n
◮ Theorem (Shannon): Perfect secrecy implies n ≥ ℓ.
◮ Is it bad? Is it optimal?
◮ Proof: Let M ∼ {0, 1}^ℓ, and let C = EK(M).
◮ Perfect secrecy =⇒ H(M | C) = H(M) = ℓ
◮ Perfect correctness =⇒ H(M | C, K) = 0
◮ Hence, ℓ = H(M | C) ≤ H(M | C, K) + H(K | C) ≤ H(K) ≤ n
◮ Statistical security? HW. Computational security?
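The bound n ≥ ℓ is met with equality by the one-time pad. A minimal Python sketch (not from the slides) checking both perfect secrecy and perfect correctness for a small ℓ:

```python
import itertools
from collections import Counter

def otp_encrypt(key, msg):
    # One-time pad: bitwise XOR of an l-bit message with an l-bit key.
    return tuple(k ^ m for k, m in zip(key, msg))

l = 3
keys = list(itertools.product([0, 1], repeat=l))  # uniform key K <- {0,1}^l

def cipher_dist(msg):
    # Distribution of E_K(msg) over a uniform key, as a histogram.
    return Counter(otp_encrypt(k, msg) for k in keys)

# Perfect secrecy: E_K(m) and E_K(m') are identically distributed.
assert cipher_dist((0, 0, 0)) == cipher_dist((1, 0, 1))
# Perfect correctness: XORing with the same key again recovers the message.
assert all(otp_encrypt(k, otp_encrypt(k, (1, 0, 1))) == (1, 0, 1) for k in keys)
print("one-time pad over l =", l, "bits: secrecy and correctness verified")
```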
◮ SD(P, Q) := max_{S⊆U} (P(S) − Q(S))
◮ Equivalently, SD(P, Q) = max_D {∆D(P, Q) := Pr_{x←P}[D(x) = 1] − Pr_{x←Q}[D(x) = 1]}, where the maximum is over all (unbounded) algorithms D
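The two characterizations of statistical distance above can be checked numerically. A small Python sketch (illustrative only; the max-over-sets form is feasible only for tiny supports, and the toy distributions P, Q are invented):

```python
import itertools

def sd_l1(P, Q):
    # SD(P, Q) = (1/2) * sum_x |P(x) - Q(x)|
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0) - Q.get(x, 0)) for x in support)

def sd_max_set(P, Q):
    # SD(P, Q) = max over S of (P(S) - Q(S)); exponential in the support size.
    support = list(set(P) | set(Q))
    best = 0.0
    for r in range(len(support) + 1):
        for S in itertools.combinations(support, r):
            best = max(best, sum(P.get(x, 0) - Q.get(x, 0) for x in S))
    return best

P = {'a': 0.5, 'b': 0.3, 'c': 0.2}
Q = {'a': 0.2, 'b': 0.3, 'c': 0.5}
# Both formulas agree; the maximizing set is S = {x : P(x) > Q(x)}.
assert abs(sd_l1(P, Q) - sd_max_set(P, Q)) < 1e-12
print(round(sd_l1(P, Q), 6))  # 0.3
```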
◮ Definition: P and Q are (s, ε)-indistinguishable if ∆D(P, Q) ≤ ε, for any s-size D.
◮ Adversaries are circuits (possibly randomized)
◮ (∞, ε)-indistinguishable is equivalent to statistical distance at most ε
◮ We sometimes think of s = n^{ω(1)} and ε = 1/s, where n is the “security parameter"
◮ Can it be different from the statistical case?
◮ Unless said otherwise, distributions are over {0, 1}^n
◮ Let D be an s′-size algorithm with ∆D(P², Q²) = ε′
◮ ε′ = Pr_{x←P²}[D(x) = 1] − Pr_{x←Q²}[D(x) = 1]
    = (Pr_{x←P²}[D(x) = 1] − Pr_{x←(P,Q)}[D(x) = 1]) + (Pr_{x←(P,Q)}[D(x) = 1] − Pr_{x←Q²}[D(x) = 1])
◮ So either ∆D(P², (P, Q)) ≥ ε′/2, or ∆D((P, Q), Q²) ≥ ε′/2
◮ Hardwiring the second (resp. first) coordinate of D’s input yields an (s′ + n)-size distinguisher for P vs. Q with advantage ≥ ε′/2
◮ Hence, ε′ > 2ε implies s′ > s − n; i.e., P² and Q² are (s − n, 2ε)-indistinguishable
◮ For i ∈ {0, . . . , k}, let Hi = (P1, . . . , Pi, Qi+1, . . . , Qk), where the Pi’s are independent copies of P and the Qi’s are independent copies of Q
◮ Let D be an s′-size algorithm with ∆D(P^k, Q^k) = ε′
◮ ε′ = Pr_{x←Hk}[D(x) = 1] − Pr_{x←H0}[D(x) = 1]
◮ ε′ = Σ_{i∈[k]} (Pr_{x←Hi}[D(x) = 1] − Pr_{x←Hi−1}[D(x) = 1]) = Σ_{i∈[k]} ∆D(Hi, Hi−1)
◮ Hence, ∆D(Hi, Hi−1) ≥ ε′/k for some i ∈ [k]; hardwiring the other k − 1 coordinates yields an (s′ + kn)-size distinguisher for P vs. Q with advantage ≥ ε′/k
◮ Thus, ε′ > kε implies s′ > s − kn; i.e., P^k and Q^k are (s − kn, kε)-indistinguishable
◮ When considering bounded-time (uniform) algorithms, things behave very differently
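The telescoping step of the hybrid argument can be verified exactly for toy distributions. A Python sketch (the choices of P, Q, D, and k are invented for illustration):

```python
import itertools
from math import prod

# Toy single-bit distributions: P(1) = 0.9, Q(1) = 0.5.
P = {0: 0.1, 1: 0.9}
Q = {0: 0.5, 1: 0.5}
k = 4

def hybrid(i):
    # H_i = (P_1, ..., P_i, Q_{i+1}, ..., Q_k) as a product distribution.
    return [P] * i + [Q] * (k - i)

def D(x):
    # An arbitrary fixed distinguisher: accept iff a majority of the bits are 1.
    return 1 if sum(x) > k / 2 else 0

def accept_prob(factors):
    # Exact Pr[D(x) = 1] under the product of the given single-bit distributions.
    return sum(D(x) * prod(f[b] for f, b in zip(factors, x))
               for x in itertools.product([0, 1], repeat=k))

total = accept_prob(hybrid(k)) - accept_prob(hybrid(0))  # Delta_D(P^k, Q^k)
gaps = [accept_prob(hybrid(i)) - accept_prob(hybrid(i - 1)) for i in range(1, k + 1)]
assert abs(total - sum(gaps)) < 1e-12  # telescoping: total advantage = sum of gaps
assert max(gaps) >= total / k - 1e-12  # some hybrid contributes at least eps'/k
print("total advantage:", round(total, 6))
```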
◮ Definition: P (over {0, 1}^n) is (s, ε)-pseudorandom if it is (s, ε)-indistinguishable from Un.
◮ Do such distributions exist for interesting (s, ε)?
◮ Definition: An efficiently computable g : {0, 1}^n → {0, 1}^{ℓ(n)} is an (s(n), ε(n))-pseudorandom generator (PRG) if
◮ g is length extending (i.e., ℓ(n) > n)
◮ g(Un) is (s(n), ε(n))-pseudorandom
◮ We omit the “security parameter", i.e., n, when its value is clear from the context
◮ Do such generators exist? Applications?
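To see the definition in action, here is a hypothetical length-extending function that fails to be a PRG, together with a distinguisher achieving advantage close to 1/2 (a Python sketch; bad_g and D are invented for illustration):

```python
import random

def bad_g(x):
    # A hypothetical "generator": append the parity of x.
    # Length-extending, but clearly NOT pseudorandom.
    return x + [sum(x) % 2]

def D(y):
    # Distinguisher: accept iff the last bit equals the parity of the rest.
    return 1 if y[-1] == sum(y[:-1]) % 2 else 0

n, trials = 8, 20000
rng = random.Random(0)
# Pr[D = 1] on bad_g(U_n): always 1, since the parity check holds by construction.
p_g = sum(D(bad_g([rng.randint(0, 1) for _ in range(n)])) for _ in range(trials)) / trials
# Pr[D = 1] on U_{n+1}: about 1/2, since the last bit is independent of the rest.
p_u = sum(D([rng.randint(0, 1) for _ in range(n + 1)]) for _ in range(trials)) / trials
print("advantage:", p_g - p_u)  # close to 1/2
```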
◮ Hence, OWP =⇒ PRG (via g(x) := (f(x), b(x)), for a hardcore predicate b of the one-way permutation f)
◮ Proof: Let D be an s′-size algorithm with ∆D(g(Un), Un+1) = ε′; we will use D to predict b(Un) from f(Un) with probability 1/2 + ε′.
◮ Let δ = Pr [D(Un+1) = 1]
◮ Compute (using that f is a permutation, and hence (f(Un), U1) ≡ Un+1):
◮ Pr[D(f(Un), b(Un)) = 1] = δ + ε′
◮ Pr[D(f(Un), 1 − b(Un)) = 1] = δ − ε′
◮ Hence, Pr[A(f(Un)) = b(Un)] = 1/2 + ε′, for the (s′ + O(1))-size algorithm A that, on input y, samples σ ← {0, 1} and outputs σ if D(y, σ) = 1 and 1 − σ otherwise
◮ It follows that if b is hard to predict from f, then ε′ must be small, i.e., g(Un) is pseudorandom
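The reduction A from the proof can be exercised on toy stand-ins. In this Python sketch, perm is just a small random permutation (not one-way) and b a parity predicate, both invented for illustration; the chosen D is a perfect distinguisher, so A predicts b with probability 1/2 + ε′ = 1:

```python
import random

n = 6
rng = random.Random(1)
domain = list(range(2 ** n))
perm = domain[:]
rng.shuffle(perm)                    # toy permutation f (stand-in for a OWP)
f = dict(zip(domain, perm))
f_inv = {v: k for k, v in f.items()}

def b(x):
    # Toy predicate (stand-in for a hardcore bit): parity of x.
    return bin(x).count('1') % 2

def D(y, sigma):
    # A distinguisher for (f(x), b(x)) vs. uniform, used as a black box below.
    return 1 if sigma == b(f_inv[y]) else 0

def A(y):
    # The reduction from the proof: guess sigma, keep it iff D accepts.
    sigma = rng.randint(0, 1)
    return sigma if D(y, sigma) == 1 else 1 - sigma

trials = 5000
hits = sum(A(f[x]) == b(x) for x in (rng.randrange(2 ** n) for _ in range(trials)))
print("prediction probability:", hits / trials)  # 1.0 for this perfect D
```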
◮ Example
◮ Repeated sampling
◮ Non-monotonicity
◮ Ensembles
◮ In the following we will simply write (s, ε)-entropy, etc.
◮ Lemma (leftover hash lemma): Let H = {h : {0, 1}^n → {0, 1}^m} be 2-universal, and let X be a rv over {0, 1}^n with H2(X) ≥ k. Then SD((H, H(X)), (H, Um)) ≤ 2^{−(k−m)/2}, for H ← H.
◮ k, m, and H are parameterized by n
◮ We assume log |H| = n and s ≥ n
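A standard 2-universal family is h_A(x) = Ax over GF(2), for a uniformly random m × n binary matrix A (adding a random shift b changes nothing about collisions). This Python sketch verifies the collision probability 2^{−m} exactly by enumeration (feasible only for tiny n, m):

```python
import itertools

n, m = 3, 2  # hash {0,1}^3 -> {0,1}^2

def h(A, x):
    # h_A(x) = A x over GF(2).
    return tuple(sum(a * xi for a, xi in zip(row, x)) % 2 for row in A)

def collision_prob(x, xp):
    # Exact Pr over a uniform m x n binary matrix A of h_A(x) = h_A(x').
    mats = itertools.product(itertools.product([0, 1], repeat=n), repeat=m)
    hits = total = 0
    for A in mats:
        total += 1
        hits += h(A, x) == h(A, xp)
    return hits / total

# For every pair x != x', the collision probability is exactly 2^-m.
for x, xp in itertools.combinations(itertools.product([0, 1], repeat=n), 2):
    assert collision_prob(x, xp) == 2 ** -m
print("2-universal: collision probability =", 2 ** -m)
```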
◮ Pr_{w,w′←({0,1}^n×H)}[g(w) = g(w′)]
    = Pr_{h,h′←H}[h = h′] · Pr_{(x,x′)←({0,1}^n)²}[f(x) = f(x′)] · Pr_{h←H; (x,x′)←({0,1}^n)²}[h(x) = h(x′) | f(x) = f(x′)]
◮ Hence, H2(g(W)) ≥ 2n − 1/2.
◮ B’s size is (s′ + O(n)) · 2^{2 log(1/ε′)+2} = Θ(s′/ε′²)
◮ Let Z be a rv with H2(Z) ≥ n + 1/2 such that Z and v(Un) are (s, ε)-indistinguishable
◮ H2(Z^n) = n · H2(Z) ≥ n² + n/2
◮ Z^n and v^n(U_{n²}) are (s − n², nε)-indistinguishable (hybrid argument)
◮ Claim: G(U_{n²}, H) := (H, H(v^n(U_{n²}))) is (s − n² − sH, nε + 2^{−n/4})-indistinguishable from (H, U_{n²+n/4}), for sH being the size of a circuit computing h ∈ H
◮ By the leftover hash lemma, SD((H, H(Z^n)), (H, U_{n²+n/4})) ≤ 2^{−n/4}
◮ Let D be an s′-size algorithm that distinguishes G(U_{n²}, H) from (H, U_{n²+n/4}) with advantage ε′
◮ Hence, ∃ an (s′ + sH)-size algorithm that distinguishes v^n(U_{n²}) from Z^n with advantage ≥ ε′ − 2^{−n/4}
◮ Hence s′ ≤ s − n² − sH =⇒ ε′ ≤ nε + 2^{−n/4}, proving the claim
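The leftover hash lemma can be checked exactly for tiny parameters: hash a flat source with H2(Z) = k through the random-linear-map family and compute the exact average distance to uniform (a Python sketch; the bound 2^{−(k−m)/2} is one common form of the lemma, and the flat source is an invented example):

```python
import itertools
from collections import Counter

n, m, k = 4, 2, 3
# A flat source Z: uniform over 2^k points of {0,1}^n, so H2(Z) = k.
support = list(itertools.product([0, 1], repeat=n))[: 2 ** k]

def h(A, x):
    # h_A(x) = A x over GF(2); a 2-universal family for uniform A.
    return tuple(sum(a * xi for a, xi in zip(row, x)) % 2 for row in A)

def sd_from_uniform(dist):
    # dist: histogram of m-bit outputs; SD to U_m = (1/2) sum_y |p(y) - 2^-m|.
    total = sum(dist.values())
    return 0.5 * sum(abs(dist.get(y, 0) / total - 2 ** -m)
                     for y in itertools.product([0, 1], repeat=m))

# SD((H, H(Z)), (H, U_m)) = E_H [ SD(H(Z), U_m) ], computed exactly over all A.
mats = list(itertools.product(itertools.product([0, 1], repeat=n), repeat=m))
avg_sd = sum(sd_from_uniform(Counter(h(A, z) for z in support)) for A in mats) / len(mats)
print(avg_sd, "<=", 2 ** (-(k - m) / 2))
assert avg_sd <= 2 ** (-(k - m) / 2)
```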
◮ PRG “length extension"
◮ PRG from any OWF