Two-party computation
By Shuoyao Zhao 2018.1.4
Problem Abstraction
Alice holds x ∈ {0,1}^s; Bob holds y ∈ {0,1}^t.
Public function f: z = f(x, y).
Security requirement: reveal z, but nothing more!
Ideally, with a Trusted Party
Alice sends x and Bob sends y to a trusted party, which computes z = f(x, y) and returns z to both parties.
In the Real World
There is no trusted party: Alice and Bob must compute z = f(x, y) by interacting directly, and each should still learn f(x, y) but nothing more.
Secure computation enables this!
A Binary Gate [Yao, FOCS'86]
Running example: a NAND gate with input wire A carrying Alice's bit x = 0, input wire B carrying Bob's bit y = 0, and output wire Z. Alice acts as the circuit generator and Bob as the evaluator.
Alice (Generator) assigns independent, uniformly random bit strings to each wire: a0, a1 to input wire A, b0, b1 to input wire B, and z0, z1 to output wire Z, where the subscript is the plaintext bit the string stands for.
For a NAND gate she then builds the garbled table by doubly encrypting the correct output string under each pair of input strings:
Enc_{a0,b0}(z1)   Enc_{a0,b1}(z1)   Enc_{a1,b0}(z1)   Enc_{a1,b1}(z0)
and sends the four entries to Bob in a random order.
Bob (Evaluator), holding a0 and b0 (x = 0, y = 0), can decrypt exactly one entry (✔) and learns z1, the label for NAND(0, 0) = 1. The other three entries (✗) reveal nothing, because Bob lacks the keys for them.
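The garbled-gate idea above can be sketched in a few lines of Python. This is a toy sketch, not the paper's scheme: encryption here is a SHA-256-derived one-time pad with an all-zero tag (my assumption, playing the role of the 0^n redundancy used later) so the evaluator can recognize the one row his keys open.

```python
import hashlib, os, random

def enc(key_a, key_b, plaintext):
    # One-time pad derived from both wire keys, plus an all-zero tag so
    # the evaluator can recognize which row decrypts correctly.
    pad = hashlib.sha256(key_a + key_b).digest()
    return bytes(p ^ q for p, q in zip(pad, plaintext + b"\x00" * 8))

def dec(key_a, key_b, ct):
    pad = hashlib.sha256(key_a + key_b).digest()
    pt = bytes(p ^ q for p, q in zip(pad, ct))
    return pt[:-8] if pt.endswith(b"\x00" * 8) else None   # tag check

# Wire labels: a0/a1 for wire A, b0/b1 for wire B, z0/z1 for wire Z.
a = [os.urandom(24) for _ in range(2)]
b = [os.urandom(24) for _ in range(2)]
z = [os.urandom(24) for _ in range(2)]

# Garbled table for NAND: row (i, j) encrypts the label z_{NAND(i, j)}.
table = [enc(a[i], b[j], z[1 - (i & j)]) for i in range(2) for j in range(2)]
random.shuffle(table)   # hide which row corresponds to which inputs

def evaluate(key_a, key_b):
    # Bob tries every row; exactly one decrypts (with high probability).
    hits = [pt for ct in table if (pt := dec(key_a, key_b, ct)) is not None]
    assert len(hits) == 1
    return hits[0]

# Holding a0 and b0 (x = 0, y = 0), Bob learns z1 since NAND(0, 0) = 1.
assert evaluate(a[0], b[0]) == z[1]
assert evaluate(a[1], b[1]) == z[0]
```

Note that Bob learns only a random label, not the plaintext output bit; translating labels back to bits is the job of the output decryption tables discussed later.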
Prevent the Leak
Alice cannot simply send both b0 and b1 (Bob could then evaluate the gate on both of his possible inputs), and Bob cannot simply ask for the one he needs (Alice would learn y). The string b_y must be transferred obliviously.
Oblivious Transfer [Naor-Pinkas, SODA'00]
Alice (Generator) inputs b0 and b1; Bob (Evaluator) inputs his choice bit y (here y = 0) and obtains b_y. Alice learns nothing about y, and Bob learns nothing about b_{1-y}.
Security of NPOT
- h is uniformly random and independent of y, so the sender's view is independent of the receiver's choice bit.
- The receiver cannot learn the unchosen value b_{1-y}, as it does not know log_g C.
Paper
A Proof of Security of Yao's Protocol for Two-Party Computation. Authors: Yehuda Lindell, Benny Pinkas.
The differences
Parameter table
Symbol Meaning g(α,β) Circuit-output gate 𝑥𝑗 , ex: 𝑥1 Circuit-output wire 0,1 Corresponding real values 𝑙𝑥
0 , 𝑙𝑥 1
Random keys (0, 𝑙𝑥
0 )
Output decryption tables 𝐹𝑙1
0(𝐹𝑙2 0(𝑙3
0))
Garbled computation box 𝐹1, 𝐹2, 𝐹3, 𝐹4 Garbled computation table Each pair of keys open only one box for each gate!!!
Modeling Adversaries
Semi-honest (honest-but-curious) adversaries: always follow the protocol, but try to learn extra information from the execution transcript.
Malicious/active adversaries: absolutely no restriction on their behavior.
Definition(1)
Let f = (f_1, f_2) be a probabilistic polynomial-time functionality, and let π be a two-party protocol for computing f.
The view of the i-th party during an execution of π on (x, y) is denoted
view_i^π(x, y) = (w, r^i, m_1^i, ..., m_t^i),
where w is the party's input, r^i equals the contents of the i-th party's internal random tape, and m_j^i represents the j-th message that it received.
Definition(2)
The output of the i-th party after an execution of π on (x, y) is denoted output_i^π(x, y), and can be computed from the party's own view. The joint output is
output^π(x, y) = (output_1^π(x, y), output_2^π(x, y)),
which for a probabilistic f may differ from f(x, y).
Definition(3)
Let f = (f_1, f_2) be a functionality. We say that π securely computes f in the presence of static semi-honest adversaries if there exist probabilistic polynomial-time algorithms S_1 and S_2 such that:
{(S_1(x, f_1(x, y)), f(x, y))}_{x,y ∈ {0,1}*} ≡_c {(view_1^π(x, y), output^π(x, y))}_{x,y ∈ {0,1}*}
and:
{(S_2(y, f_2(x, y)), f(x, y))}_{x,y ∈ {0,1}*} ≡_c {(view_2^π(x, y), output^π(x, y))}_{x,y ∈ {0,1}*}
(≡_c denotes computational indistinguishability.)
Definition(4)
Deterministic functionalities: in the case that the functionality f is deterministic, a simpler definition can be used. Specifically, we do not need to consider the joint distribution of the simulator's output with the protocol output. Rather, we separately require correctness,
output^π(x, y) = f(x, y),
and, in addition, that there exist S_1 and S_2 such that:
{S_1(x, f_1(x, y))}_{x,y ∈ {0,1}*} ≡_c {view_1^π(x, y)}_{x,y ∈ {0,1}*}
{S_2(y, f_2(x, y))}_{x,y ∈ {0,1}*} ≡_c {view_2^π(x, y)}_{x,y ∈ {0,1}*}
Definition(5)
We say that a functionality f = (f_1, f_2) is same-output if both parties receive the same output. It is enough to securely compute deterministic same-output functionalities only; this suffices for obtaining secure protocols for arbitrary probabilistic functionalities.
Definition(6)
From probabilistic to deterministic functionalities:
f'((x, r), (y, s)) = f(x, y, r ⊕ s)
From deterministic to deterministic same-output functionalities:
f'((x, r), (y, s)) = f_1(x, y) ⊕ r || f_2(x, y) ⊕ s
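The second reduction can be exercised directly. This is a minimal sketch: f1 and f2 below are toy placeholder functionalities of my choosing, and the masks r, s play the roles of the parties' extra random inputs.

```python
import os

def f1(x, y): return x & y    # toy placeholder functionalities
def f2(x, y): return x ^ y

def f_prime(x_r, y_s):
    # Deterministic same-output functionality: both parties receive the
    # same pair of masked outputs.
    (x, r), (y, s) = x_r, y_s
    return (f1(x, y) ^ r, f2(x, y) ^ s)

x, y = 6, 3
r, s = os.urandom(1)[0], os.urandom(1)[0]   # each party's private mask

out = f_prime((x, r), (y, s))   # the same value is delivered to both

assert out[0] ^ r == f1(x, y)   # P1 unmasks its half with r
assert out[1] ^ s == f2(x, y)   # P2 unmasks its half with s
# Without the other party's mask, the other half is just a uniform byte.
```

The design point is that a protocol for the single, deterministic, same-output f' automatically handles the original two-output f.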
Tools—private-key encryption (1)
Let (G, E, D) be a private-key encryption scheme, and denote the range of a key k by
Range_n(k) = {E_k(x) : x ∈ {0,1}^n}.
Tools—private-key encryption (2)
The scheme has an elusive range if for every probabilistic polynomial-time machine A, every polynomial p(·), and all sufficiently large n,
Pr_{k ← G(1^n)}[A(1^n) ∈ Range_n(k)] < 1/p(n).
Tools—private-key encryption (3)
The scheme has an efficiently verifiable range if there exists a probabilistic polynomial-time machine M such that M(k, c) = 1 if and only if c ∈ Range_n(k).
Tools—private-key encryption (4)
Let F = {f_k} be a family of pseudorandom functions, where f_k: {0,1}^n → {0,1}^{2n} for k ∈ {0,1}^n. Then define:
E_k(x) = (r, f_k(r) ⊕ (x || 0^n)), for a uniformly random r ∈ {0,1}^n.
This E_k has an elusive and efficiently verifiable range.
Proof idea: f_k(·) is indistinguishable from a truly random function, and a random 2n-bit string ends with 0^n only with probability 2^{-n}.
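A sketch of this construction, instantiating the pseudorandom function f_k with HMAC-SHA256 (an assumption; any PRF works) and operating on bytes rather than bits for convenience:

```python
import hashlib, hmac, os

N = 16   # "n" in bytes; the paper works over bits

def prf(k, r):
    # f_k: {0,1}^n -> {0,1}^{2n}, instantiated with HMAC-SHA256 (assumption)
    return hmac.new(k, r, hashlib.sha256).digest()   # 32 = 2N bytes

def enc(k, x):
    # E_k(x) = (r, f_k(r) XOR (x || 0^n)) for uniformly random r
    r = os.urandom(N)
    return r, bytes(a ^ b for a, b in zip(prf(k, r), x + b"\x00" * N))

def dec(k, c):
    r, body = c
    return bytes(a ^ b for a, b in zip(prf(k, r), body))[:N]

def in_range(k, c):
    # The efficiently verifiable range test M(k, c): check the 0^n tag.
    r, body = c
    return bytes(a ^ b for a, b in zip(prf(k, r), body))[N:] == b"\x00" * N

k1, k2 = os.urandom(N), os.urandom(N)
x = os.urandom(N)
c = enc(k1, x)
assert in_range(k1, c) and dec(k1, c) == x
assert not in_range(k2, c)   # wrong key: outside Range_n(k2), w.h.p.
```

The last assertion is exactly what the evaluator relies on: decrypting a garbled-table entry with the wrong keys is detected because the 0^n redundancy does not appear.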
Tools—private-key encryption (5)
Indistinguishable encryptions under chosen plaintext: for any two plaintext vectors x and y, no polynomial-time adversary can distinguish an encryption of the vector x from an encryption of the vector y.
Moreover, an encryption under one key falls into the range of an encryption under another key only with negligible probability.
Both properties are easy to fulfill.
Proof of correctness(1)
Claim: if the encryption scheme has an elusive and efficiently verifiable range, then Yao's two-party protocol constructed from E_k(x) is correct.
Since k_1^0, k_1^1, k_2^0, k_2^1, k_3 are chosen uniformly and independently, for each (i, j) = (0,1), (1,0), (1,1):
Pr[E_{k_1^i}(E_{k_2^j}(k_3)) ∈ Range_n(k_1^0, k_2^0)] < 1/p(n),
where Range_n(k_1^0, k_2^0) denotes the range of double encryption under k_1^0 and k_2^0.
Proof of correctness(2)
(1) i = 0, j = 1:
Pr[E_{k_1^0}(E_{k_2^1}(k_3)) ∈ Range_n(k_1^0, k_2^0)] = Pr[E_{k_2^1}(k_3) ∈ Range_n(k_2^0)] < 1/p(n)
(2) i = 1:
Pr[E_{k_1^1}(E_{k_2^j}(k_3)) ∈ Range_n(k_1^0, k_2^0)] ≤ Pr[E_{k_1^1}(k') ∈ Range_n(k_1^0)] < 1/p(n)
Both bounds follow from the elusive-range property.
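The case analysis above can be checked experimentally with a toy instantiation of the doubly-encrypting scheme (HMAC-SHA256 as the PRF and bytes instead of bits, both my assumptions): the correct key pair opens the box, and each wrong pair is rejected because the result falls outside the verifiable range.

```python
import hashlib, hmac, os

N = 16   # key/tag length in bytes (the paper works in bits)

def enc(k, x):
    # E_k(x) = (r, stream_k(r) XOR (x || 0^N)); HMAC-SHA256 as the PRF (toy)
    r = os.urandom(N)
    stream = b"".join(hmac.new(k, r + bytes([i]), hashlib.sha256).digest()
                      for i in range(2))[:len(x) + N]
    return r, bytes(a ^ b for a, b in zip(stream, x + b"\x00" * N))

def dec(k, c):
    # Returns the plaintext if c lies in Range(k), else None.
    r, body = c
    stream = b"".join(hmac.new(k, r + bytes([i]), hashlib.sha256).digest()
                      for i in range(2))[:len(body)]
    pt = bytes(a ^ b for a, b in zip(stream, body))
    return pt[:-N] if pt.endswith(b"\x00" * N) else None

k1 = [os.urandom(N) for _ in range(2)]    # k_1^0, k_1^1
k2 = [os.urandom(N) for _ in range(2)]    # k_2^0, k_2^1
k3 = os.urandom(N)                        # the output-wire key

r, body = enc(k2[0], k3)
box = enc(k1[0], r + body)                # E_{k_1^0}(E_{k_2^0}(k_3))

def open_box(ka, kb, box):
    inner = dec(ka, box)
    if inner is None:
        return None
    return dec(kb, (inner[:N], inner[N:]))

assert open_box(k1[0], k2[0], box) == k3     # the right pair opens the box
for i, j in [(0, 1), (1, 0), (1, 1)]:        # every wrong pair fails w.h.p.
    assert open_box(k1[i], k2[j], box) is None
```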
Transferring b_y obliviously
Oblivious Transfer from enhanced trapdoor permutations: (f, t) is a permutation-trapdoor pair from a family of enhanced trapdoor permutations, and B(·) is a hard-core predicate of f. Since Bob must have no information about t (the trapdoor), (f, t) is sampled by Alice, who sends f (but not t) to Bob.
Bob (choice bit y): v_y ← D(f), w_y = f(v_y); w_{1−y} is sampled directly from the range, so Bob does not know its preimage. Bob sends (w_0, w_1).
Alice: v_0 = f^{−1}(w_0), v_1 = f^{−1}(w_1); m_0 = B(v_0) ⊕ b_0, m_1 = B(v_1) ⊕ b_1. Alice sends (m_0, m_1).
Bob outputs b_y = B(v_y) ⊕ m_y.
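A toy run of this OT, using textbook RSA with tiny primes as the trapdoor permutation and the least-significant bit as the hard-core predicate B (both are unsafe simplifications, for illustration only):

```python
import random

# Textbook RSA with tiny primes stands in for the trapdoor permutation f.
p, q, e = 1009, 1013, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # Alice's trapdoor t

def f(v):     return pow(v, e, N)       # public permutation
def f_inv(w): return pow(w, d, N)       # inverting requires the trapdoor
def B(v):     return v & 1              # "hard-core" bit (lsb, toy choice)

b0, b1 = 1, 0      # Alice's two secret bits
y = 1              # Bob's choice bit

# Bob: v_y <- D(f), w_y = f(v_y); w_{1-y} is sampled directly from the
# range, so Bob does NOT know its preimage.
v_y = random.randrange(1, N)
w = [0, 0]
w[y] = f(v_y)
w[1 - y] = random.randrange(1, N)

# Alice: invert both values and mask her bits with the hard-core bits.
m = [B(f_inv(w[0])) ^ b0, B(f_inv(w[1])) ^ b1]

# Bob can unmask only the chosen bit, since he knows only v_y.
b_received = B(v_y) ^ m[y]
assert b_received == (b1 if y else b0)
```

The unchosen bit stays masked by B(f^{-1}(w_{1-y})), which Bob cannot compute without the trapdoor.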
Tools—OT
The enhanced property: it is possible to sample from the range of f so that, even given the coins used for sampling, the preimage of the sampled element remains hard to find. This is exactly what guarantees that Bob cannot invert w_{1−y}.
Compared with this trapdoor-permutation construction, the Naor-Pinkas OT [Naor-Pinkas, SODA'00] shown earlier realizes the same functionality from the DDH assumption.
Security of the OT
See Goldreich, Foundations of Cryptography, vol. 2: Basic Applications (Cambridge University Press, Cambridge, 2004), Sec. 7.3.2.
Let S_1^OT and S_2^OT be the simulators of P1 and P2 in the oblivious transfer.
Security of double encryption(1)
Definition: E_{k_1}(E_{k_2}(x)) is CPA-secure iff for every PPT adversary A = (A_1, A_2) with oracle access to E_{k_1}(·), E_{k_2}(·), and E_{k_1}(E_{k_2}(·)):
A_1(1^n) outputs (m_0, m_1, state); the challenger picks a random bit b and returns c_b = E_{k_1}(E_{k_2}(m_b)); then
|Pr[A_2(c_b, state) = b] − 1/2| < 1/p(n).
Security of double encryption(2)
Claim: if E_k(m) is CPA-secure, then E_{k_1}(E_{k_2}(x)) is also CPA-secure.
Review of the Definition
Recall: view_i^π(x, y) = (w, r^i, m_1^i, ..., m_t^i), where w is the party's input, r^i equals the contents of the i-th party's internal random tape, and m_j^i represents the j-th message that it received.
Security of Yao's Two-Party Protocol (1)
P1's view consists of its input x, the random tape r^C used for garbling, its real views R_1^OT of the n oblivious transfers for P2's input wires n+1, ..., 2n, and the output:
view_1^π(x, y) = (x, r^C, R_1^OT(k_{n+1}^0, k_{n+1}^1), ..., R_1^OT(k_{2n}^0, k_{2n}^1), f(x, y))
Construction of S_1: replace each real OT view by the OT simulator's output:
S_1 = (x, r^C, S_1^OT(k_{n+1}^0, k_{n+1}^1), ..., S_1^OT(k_{2n}^0, k_{2n}^1), f(x, y))
Proof that S_1 works:
{S_1(x, f_1(x, y))}_{x,y ∈ {0,1}*} ≡_c {view_1^π(x, y)}_{x,y ∈ {0,1}*}
Using a hybrid argument!
Security of Yao's Two-Party Protocol (2)
Define the hybrid H_i, in which the first i OT views are simulated and the rest are real:
H_i = (x, r^C, S_1^OT(k_{n+1}^0, k_{n+1}^1), ..., S_1^OT(k_{n+i}^0, k_{n+i}^1), R_1^OT(k_{n+i+1}^0, k_{n+i+1}^1), ..., f(x, y))
Then we prove {H_0}_{x,y ∈ {0,1}*} ≡_c {H_n}_{x,y ∈ {0,1}*}.
Otherwise some distinguisher D satisfies Pr[D(H_0) = 1] − Pr[D(H_n) = 1] > 1/p(n), which means that for some i:
Pr[D(H_i) = 1] − Pr[D(H_{i+1}) = 1] > 1/(n·p(n))
⇒ D distinguishes R_1^OT(k_{n+i+1}^0, k_{n+i+1}^1) from S_1^OT(k_{n+i+1}^0, k_{n+i+1}^1) with advantage > 1/(n·p(n)),
which contradicts S_1^OT being the simulator of P1 in the OT.
Security of Yao's Two-Party Protocol (3)
We now need:
{S_2(y, f_2(x, y))}_{x,y ∈ {0,1}*} ≡_c {view_2^π(x, y)}_{x,y ∈ {0,1}*}
P2's view consists of its input y, the garbled circuit G(C), the keys for P1's input wires, and its real OT views:
view_2^π(x, y) = (y, G(C), k_1^{x_1}, ..., k_n^{x_n}, R_2^OT(k_1^0, k_1^1), ..., R_2^OT(k_n^0, k_n^1), f(x, y))
Step 1: simulate each gate g(α, β): in the fake gate, all four table entries encrypt the same output key.
Security of Yao's Two-Party Protocol (4)
Step 2: simulate the output decryption tables. Knowing z = f(x, y), for each output wire i = 1, 2, ..., n the fake table maps the single active key to z_i and the other key to 1 − z_i.
Let the resulting "fake" circuit be G'(C).
Step 3: simulate the oblivious transfers, as in the proof for S_1.
Security of Yao's Two-Party Protocol (5)
Construction of S_2:
S_2 = (y, G'(C), k_1^{x_1}, ..., k_n^{x_n}, S_2^OT(k_1^0, k_1^1), ..., S_2^OT(k_n^0, k_n^1))
Proof that S_2 works:
{S_2(y, f_2(x, y))}_{x,y ∈ {0,1}*} ≡_c {view_2^π(x, y)}_{x,y ∈ {0,1}*}
Using a hybrid argument as well!
Security of Yao's Two-Party Protocol (6)
Let H^OT = (y, G(C), k_1^{x_1}, ..., k_n^{x_n}, S_2^OT(k_1^0, k_1^1), ..., S_2^OT(k_n^0, k_n^1)), i.e. the real circuit with simulated OTs.
From the proof for S_1, we know: {H^OT}_{x,y ∈ {0,1}*} ≡_c {view_2^π(x, y)}_{x,y ∈ {0,1}*}.
The hybrid experiment H_i uses the first i gates of the real G(C) and fake gates for the rest.
At last we just need to prove: {H_{|C|}}_{x,y ∈ {0,1}*} ≡_c {H_0}_{x,y ∈ {0,1}*}.
Security of Yao's Two-Party Protocol (7)
Assume there exists a non-uniform probabilistic polynomial-time distinguisher D such that Pr[D(H_0) = 1] − Pr[D(H_{|C|}) = 1] > 1/p(n). Then for some i:
Pr[D(H_i) = 1] − Pr[D(H_{i+1}) = 1] > 1/(|C|·p(n))
⇒ D distinguishes the four real table entries (E_i^1, E_i^2, E_i^3, E_i^4) of gate i from the four fake entries (E'_i^1, E'_i^2, E'_i^3, E'_i^4) with advantage > 1/(|C|·p(n)),
which is impossible: this reduces to breaking the security of the double encryption.
Paper
An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries.
Modeling Adversaries (recap)
Semi-honest (honest-but-curious) adversaries: always follow the protocol, but try to learn extra information from the execution transcript.
Malicious/active adversaries: absolutely no restriction on their behavior.
Malicious Adversary (1)
Does the semi-honest protocol resist a malicious adversary? No! Bob can abort his computation, so Alice cannot get her result.
Malicious Adversary (2)
Since the adversary is itself a party in the protocol, it can abort the computation to make the other party get no result.
New ideal model(1)
To capture the attacks of a malicious adversary, we need to define a new ideal model: a malicious party may abort early or substitute its input, but it must not be able to force a wrong result. But…
New ideal model(2)
Inputs: each party sends its input w to the trusted party (w = x for P1 and w = y for P2). A malicious party may, depending on w, either abort or send some other w' to the trusted party.
If the trusted party receives a valid input pair (x, y), it first replies to the first party with f1(x, y). Otherwise (i.e., in case it receives only one valid input), the trusted party replies to both parties with a special symbol ⊥.
New ideal model(3)
If the first party is malicious, it may, depending on its input and the trusted party's answer, decide to stop the trusted party by sending it ⊥. In this case the trusted party sends ⊥ to the second party. Otherwise the trusted party sends f2(x, y) to the second party.
Outputs: an honest party outputs the message obtained from the trusted party; a malicious party may output an arbitrary (probabilistic polynomial-time computable) function of its initial input and the message obtained from the trusted party.
Definition
Protocol π is said to securely compute f (in the malicious model) if for every pair of admissible non-uniform probabilistic polynomial-time machines A = (A1, A2) for the real model, there exists a pair of admissible non-uniform probabilistic expected polynomial-time machines B = (B1, B2) for the ideal model, such that:
{IDEAL_{f,B}(x, y)}_{x,y} ≡_c {REAL_{π,A}(x, y)}_{x,y}
Notice of Definition
Fairness is not guaranteed: if a party aborts, output delivery to the other party should not be expected.
IDEAL_{f,B}(x, y) means the joint output of the two parties (malicious or not) in the ideal model; REAL_{π,A}(x, y) means their joint output in the real model.
The simple case
Consider first the case that only party P2 receives output (f = f2, and f1 does not exist). But WHY does this suffice?
To also deliver f1, modify the functionality: P1 additionally inputs random a, b, p, and P2 receives (α, β, f2(x, y)), where α = f1(x, y) + p and β = a · α + b (a, b, p are chosen by P1).
P2 sends (α, β) back to P1, who checks β = a · α + b and recovers f1(x, y) = α − p.
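The α, β mechanism can be traced through concretely. This is a sketch under my own assumptions: the prime field and the toy f1, f2 are arbitrary choices, with β acting as an information-theoretic MAC on the padded output.

```python
import random

P = 2**61 - 1                       # a prime field (my choice)

def f1(x, y): return (x * y) % P    # toy placeholder functionalities
def f2(x, y): return (x + y) % P

x, y = 1234, 5678
a, b, p = (random.randrange(1, P) for _ in range(3))   # chosen by P1

# Everything P2 receives from the modified, single-output functionality:
alpha = (f1(x, y) + p) % P          # f1 masked by P1's one-time pad p
beta = (a * alpha + b) % P          # information-theoretic MAC on alpha
p2_output = (alpha, beta, f2(x, y))

# P2 forwards (alpha, beta); P1 verifies the MAC and unmasks its output.
assert beta == (a * alpha + b) % P
assert (alpha - p) % P == f1(x, y)

# A P2 who alters alpha is always caught here, since a != 0 mod P.
assert beta != (a * ((alpha + 1) % P) + b) % P
```

Since p masks f1(x, y) and (a, b) are unknown to P2, P2 learns nothing about P1's output and can substitute a valid-looking (α', β') only with probability 1/P.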
Attack on Yao's protocol
Attack 2: Alice makes a fake circuit, e.g. one that computes a different function or leaks Bob's input.
Achieve Active Security(1) (against the malicious adversary)
Protect P2's input: split each input bit into s shares, replacing y by ŷ = ŷ_1, ..., ŷ_{ns} with
y_i = ŷ_{(i−1)·s+1} ⊕ ··· ⊕ ŷ_{i·s},
so that no single oblivious transfer reveals anything about y_i.
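A sketch of this XOR share encoding of P2's input (s = 8 is an arbitrary choice here):

```python
import secrets
from functools import reduce

s = 8   # number of shares per input bit (arbitrary choice)

def split_bit(y_bit):
    # Replace one input bit by s random shares whose XOR equals the bit.
    shares = [secrets.randbits(1) for _ in range(s - 1)]
    shares.append(y_bit ^ reduce(lambda a, b: a ^ b, shares, 0))
    return shares

def recombine(shares):
    return reduce(lambda a, b: a ^ b, shares, 0)

y = [1, 0, 1, 1]                                    # Bob's real input bits
y_hat = [b for bit in y for b in split_bit(bit)]    # the n*s encoded bits

assert len(y_hat) == len(y) * s
assert [recombine(y_hat[i * s:(i + 1) * s]) for i in range(len(y))] == y
# Any s-1 of a bit's shares are uniformly random, so no proper subset of
# the oblivious transfers reveals anything about y_i.
```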
Achieve Active Security(2)
Achieve Active Security(3)
Cut-and-choose: P1 constructs s copies of the garbled circuit of C, denoted GC1, ..., GCs, and commits to the garbled values of all the wires.
Half of the circuits are opened and checked; the remaining circuits are used for the evaluation, and P2 outputs the majority result.
P1 must use the same input in all evaluation circuits.
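Cut-and-choose can be simulated to see why circuit faking is caught; the parameters (20 circuits, 5 of them bad) are arbitrary choices of mine.

```python
import math, random

s_circ, bad = 20, 5   # total circuits and incorrectly garbled ones

def caught(trials=20000):
    # Open a random half of the circuits; cheating is detected if any
    # opened circuit is bad. (P2 additionally takes the majority over the
    # evaluated circuits, so P1 must corrupt many, raising this further.)
    hits = 0
    for _ in range(trials):
        check = set(random.sample(range(s_circ), s_circ // 2))
        if any(c in check for c in range(bad)):
            hits += 1
    return hits / trials

# Exact escape probability: all bad circuits land in the evaluated half.
exact_escape = (math.comb(s_circ - bad, s_circ // 2)
                / math.comb(s_circ, s_circ // 2))

assert abs((1 - caught()) - exact_escape) < 0.02   # ~1.6% escape here
```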
Achieve Active Security(4)
But!! Checking circuits alone does not force input consistency: Alice can switch her input between evaluation circuits with no change for z. For checking the consistency of Alice's input, she needs to provide more commitments.
Achieve Active Security(5)
Full protocol
Proof of the paper
Security against a malicious P1: the proof constructs an ideal-model adversary/simulator which has access to the adversary controlling P1 and to the trusted party, and can simulate the view of an actual run of the protocol. It uses the fact that the strings ρ, ρ_0, which choose the circuits and commitment sets that are checked, are uniformly distributed even if P1 is malicious. The simulator runs the protocol until P1 opens the commitments of the checked circuits and checked commitment sets, and then rewinds the execution and runs it again with new random ρ, ρ_0 values. We expect about one quarter of the circuits to be checked in the first execution and evaluated in the second execution. For these circuits, in the first execution the simulator learns the translation between the garbled values of P1's input wires and the actual values of these wires, and in the second execution it learns the garbled values that are associated with P1's input (this association is learned from the garbled values that P1 sends to P2). Combining the two, it learns P1's input x, which can then be sent to the trusted party. The trusted party answers with f(x, y), which we use to define P2's output and complete the simulation.
Security against a malicious P2 is derived from the fact that:
- P2 learns only one set of keys (corresponding to a single input y) for decrypting the garbled circuits, and
- its input corresponds to the garbled values that P1 sends it for evaluating the circuit.
We construct a simulator B2 working with an adversary A2 that has corrupted P2.
B2 runs the protocol with A2, extracts the input y that A2 uses, and then sends the input y it obtained to the trusted party and receives back z = f(x, y). Given the output, the simulator constructs the garbled circuits. However, rather than constructing them all correctly, for each circuit it tosses a coin and, based on the result, either constructs the circuit correctly, or constructs it to compute the constant function z.
To make sure that the simulator is not caught cheating, it biases the coin-tossing phase so that all of the correctly constructed garbled circuits are check-circuits, and all of the other circuits are evaluation-circuits (this is why the protocol uses joint coin-tossing rather than letting P2 alone choose the circuits to be opened). A2 then checks the correctly constructed circuits and is satisfied with the result, as if it were interacting with a legitimate P1. A2 therefore continues the execution with the circuits that always output z.
Reducing the Number of Oblivious Transfers(1)
Idea: encode Bob's input y as w = w_1 w_2 ... w_{n+s−1} with
y_i = w_i ⊕ w_{i+1} ⊕ ··· ⊕ w_{i+s−1},
and use w as Bob's input. Each single bit y_i is secure on its own. BUT!!! adjacent windows overlap:
y_1 ⊕ y_2 = w_1 ⊕ w_{s+1}
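The leak can be verified mechanically: the s−1 shared positions of adjacent windows cancel under XOR.

```python
import random
from functools import reduce

n, s = 8, 4
w = [random.randrange(2) for _ in range(n + s - 1)]   # Bob's encoded input

def window(i):
    # y_i = w_i XOR ... XOR w_{i+s-1}   (0-based index here)
    return reduce(lambda a, b: a ^ b, w[i:i + s])

y = [window(i) for i in range(n)]

# Adjacent windows share s-1 positions, which cancel under XOR, so two
# bits of y are determined by just two bits of w: the encoding leaks.
assert y[0] ^ y[1] == w[0] ^ w[s]
```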
Reducing the Number of Oblivious Transfers(2)
Requirement: the exclusive-or of every subset of y's bits must not be expressible as the exclusive-or of fewer than s bits of w.
Writing y_i = ⊕_k b_ik · w_k and calling d_i = b_i1 b_i2 ... b_im the encoding of y_i, the requirement converts to: the minimal Hamming distance of the code C = {d_1, d_2, ..., d_n} is s.
Reducing the Number of Oblivious Transfers(3)
A Gilbert-Varshamov-style counting argument, requiring 2^m / Σ_{j=1}^{s−1} C(m, j) ≥ 2^n, shows such a code exists with
m ≤ n + s · (log n + log s).
Still longer than n!? We can use randomization.
Reducing the Number of Oblivious Transfers(4)
Randomized construction: pick a random binary code of length m = max{4n, 8s} = O(n + s), and assume n > 2s (otherwise expand the input's length).
Let the Hamming weight of a fixed nonzero combination of codewords be X; by a Chernoff bound,
Pr[ |X/(4n) − 1/2| > 3/8 ] < 2e^{−9n/8},
so a union bound over all 2^n combinations shows the random code has minimum distance at least s with high probability.
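The Chernoff bound on the slide can be sanity-checked by simulation (n = 16 here, my choice, to keep it fast):

```python
import math, random

n = 16                    # toy security parameter
m = 4 * n                 # code length, the 4n branch of m = max{4n, 8s}
trials = 10000

# Empirical probability that a random m-bit word deviates from weight m/2
# by more than (3/8)m, versus the bound 2e^{-9n/8} from the slide.
bad = sum(
    abs(sum(random.randrange(2) for _ in range(m)) / m - 0.5) > 3 / 8
    for _ in range(trials)
)
bound = 2 * math.exp(-9 * n / 8)
assert bad / trials <= bound + 0.01   # such deviations essentially never occur
```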
Thank you