

SLIDE 1

Two-party computation

By Shuoyao Zhao, 2018.1.4

SLIDE 2

Problem Abstraction

Alice holds x ∈ {0,1}^s and Bob holds y ∈ {0,1}^t. The function f is public, and the parties want to compute z = f(x, y).

Security requirement: reveal z, but nothing more!

SLIDE 3

Ideally, with a Trusted Party

Alice sends x and Bob sends y to a trusted party, which computes z = f(x, y) and returns z to both parties.

SLIDE 4

In the Real World

There is no trusted party. Alice and Bob exchange messages directly, yet each should learn z = f(x, y) but nothing more. Secure computation enables this!

SLIDE 5

A Binary Gate [Yao, FOCS'86]

Example: a single NAND gate with input wires A, B and output wire Z. Alice holds x = 0; Bob (the Evaluator) holds y = 0.

SLIDE 6

A Binary Gate [Yao, FOCS'86]

Alice (Generator) assigns two random bit strings a0, a1 to wire A, representing the wire values 0 and 1.

SLIDE 7

A Binary Gate [Yao, FOCS'86]

Alice (Generator) assigns independent random bit strings to every wire of the NAND gate: a0, a1 for A; b0, b1 for B; z0, z1 for Z.

SLIDE 8

A Binary Gate [Yao, FOCS'86]

For the NAND gate, Alice (Generator) uses the wire keys to encrypt the output labels, one row per input combination:

Enc_{a0,b0}(z1)  Enc_{a0,b1}(z1)  Enc_{a1,b0}(z1)  Enc_{a1,b1}(z0)

The key pairs index the rows; the messages are the output-wire labels.
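The garbling steps on the preceding slides can be sketched in code. This is a minimal illustration, not the paper's exact scheme: it assumes a hash-based encryption Enc_{ka,kb}(m) = (r, SHA-256(ka‖kb‖r) ⊕ (m‖0^16)), whose trailing zero block lets the evaluator recognize the one row it can open.

```python
# Minimal garbled-NAND sketch (illustrative, not the paper's exact scheme).
# Enc_{ka,kb}(m) = (r, SHA-256(ka || kb || r) XOR (m || 0^16)); the trailing
# zero block lets the evaluator verify which row decrypts.
import os, random, hashlib

PAD = b"\x00" * 16

def enc(ka, kb, msg):
    r = os.urandom(16)
    pad = hashlib.sha256(ka + kb + r).digest()          # 32 bytes
    return r, bytes(u ^ v for u, v in zip(pad, msg + PAD))

def dec(ka, kb, row):
    r, ct = row
    pad = hashlib.sha256(ka + kb + r).digest()
    pt = bytes(u ^ v for u, v in zip(pad, ct))
    return pt[:16] if pt[16:] == PAD else None          # None: wrong keys

# Alice (Generator): independent random labels for wires A, B, Z.
a = [os.urandom(16) for _ in range(2)]
b = [os.urandom(16) for _ in range(2)]
z = [os.urandom(16) for _ in range(2)]

# One encrypted row per input pair; NAND(i, j) = 1 - (i AND j) picks the label.
table = [enc(a[i], b[j], z[1 - (i & j)]) for i in range(2) for j in range(2)]
random.shuffle(table)            # "prevent the leak": hide which row opens

# Bob (Evaluator) holds exactly one label per input wire, e.g. x = 1, y = 1.
out = [m for m in (dec(a[1], b[1], row) for row in table) if m is not None]
assert out == [z[0]]             # NAND(1, 1) = 0: Bob learns z0, nothing else
```

Bob learns only the label z0, not the bit it encodes; translating labels back to bits is what the output decryption table on later slides is for.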

SLIDE 9

A Binary Gate [Yao, FOCS'86]

Alice (Generator) now holds the four ciphertexts of the garbled NAND table:

Enc_{a0,b0}(z1)  Enc_{a0,b1}(z1)  Enc_{a1,b0}(z1)  Enc_{a1,b1}(z0)

SLIDE 10

A Binary Gate [Yao, FOCS'86]

Alice (Generator) sends the four ciphertexts to Bob (Evaluator):

Enc_{a0,b0}(z1)  Enc_{a0,b1}(z1)  Enc_{a1,b0}(z1)  Enc_{a1,b1}(z0)

SLIDE 11

Prevent the Leak [Yao, FOCS'86]

Alice (Generator) sends the four ciphertexts in random order. Bob (Evaluator), holding the labels a0 and b0, can open exactly one row:

Enc_{a0,b1}(z1) ✗   Enc_{a1,b0}(z1) ✗   Enc_{a1,b1}(z0) ✗   Enc_{a0,b0}(z1) ✔

SLIDE 12

Transferring b0 obliviously

Bob (Evaluator) has y = 0 and needs the wire label b0 from Alice (Generator). Oblivious Transfer delivers b0 to Bob without Alice learning y and without Bob learning b1.

SLIDE 13

Transferring b_y obliviously [Naor-Pinkas, SODA'00]

In general, Bob (Evaluator) holds a choice bit y and obtains exactly b_y from Alice (Generator) via Oblivious Transfer: Alice learns nothing about y, and Bob learns nothing about b_{1−y}.

SLIDE 14

Security of NPOT

  • Receiver's Privacy
    – h is uniformly random, independent of y
  • Sender's Privacy
    – The receiver cannot learn b_{1−y}, as it does not know log_g C

SLIDE 15

Paper

  • A Proof of Security of Yao's Protocol for Two-Party Computation
    Authors: Yehuda Lindell, Benny Pinkas

SLIDE 16

The differences

  • f(x, y) = (f1(x, y), f2(x, y))
  • Description of the garbled gate

SLIDE 17

Parameter table

Symbol: Meaning
g(α, β): circuit-output gate
w_i (e.g. w_1): circuit-output wire
0, 1: corresponding real values
k_w^0, k_w^1: random keys for wire w
(0, k_w^0): entry of the output decryption table
E_{k1^0}(E_{k2^0}(k3^0)): a garbled computation "box"
E^1, E^2, E^3, E^4: the garbled computation table

Each pair of keys opens exactly one box of each gate!!!

SLIDE 18

Modeling Adversaries

  • Semi-Honest (Honest-but-curious): always follows the protocol, but tries to learn extra information from the execution transcript.
  • Malicious/Active: absolutely no restriction.

In both cases we consider only polynomial-time adversaries.

SLIDE 19

Definition(1)

  • Let f = (f1, f2) be a probabilistic polynomial-time functionality, and let π be a two-party protocol for computing f.
  • The view of the i-th party (i ∈ {1, 2}) during an execution of π on (x, y) is denoted:
    view_i^π(x, y) = (w, r^i, m_1^i, …, m_t^i)
    where w is the party's input, r^i equals the contents of the i-th party's internal random tape, and m_j^i represents the j-th message that it received.

SLIDE 20

Definition(2)

  • The output of the i-th party during an execution of π on (x, y) is denoted output_i^π(x, y), and can be computed from the party's own view of the execution. Denote:
    output^π(x, y) = (output_1^π(x, y), output_2^π(x, y))
    Note that this may differ from f(x, y).

SLIDE 21

Definition(3)

  • Definition 1: Let f = (f1, f2) be a functionality. We say that π securely computes f in the presence of static semi-honest adversaries if there exist probabilistic polynomial-time algorithms S1 and S2 such that:

    {(S1(x, f1(x, y)), f(x, y))}_{x,y ∈ {0,1}*} ≡c {(view_1^π(x, y), output^π(x, y))}_{x,y ∈ {0,1}*}

    and:

    {(S2(y, f2(x, y)), f(x, y))}_{x,y ∈ {0,1}*} ≡c {(view_2^π(x, y), output^π(x, y))}_{x,y ∈ {0,1}*}

    where ≡c denotes computational indistinguishability.

SLIDE 22

Definition(4)

  • A Simpler Formulation for Deterministic Functionalities: when the functionality f is deterministic, a simpler definition can be used. Specifically, we do not need to consider the joint distribution of the simulator's output with the protocol output. Rather, we separately require correctness,
    output^π(x, y) = f(x, y),
    and, in addition, that there exist S1 and S2 such that:

    {S1(x, f1(x, y))}_{x,y ∈ {0,1}*} ≡c {view_1^π(x, y)}_{x,y ∈ {0,1}*}
    {S2(y, f2(x, y))}_{x,y ∈ {0,1}*} ≡c {view_2^π(x, y)}_{x,y ∈ {0,1}*}

SLIDE 23

Definition(5)

  • Deterministic Same-Output Functionalities: we say that a functionality f = (f1, f2) is same-output if f1 = f2.
  • In this presentation, we show how to securely compute deterministic same-output functionalities only. This suffices for obtaining secure protocols for arbitrary probabilistic functionalities.

SLIDE 24

Definition(6)

  • Proof of the claim on the last slide:

    From probabilistic to deterministic functionalities:
    f ′((x, r), (y, s)) = f(x, y; r ⊕ s)

    From distinct outputs to a same-output functionality:
    f ′((x, r), (y, s)) = (f1(x, y) ⊕ r) ‖ (f2(x, y) ⊕ s)
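The two reductions above can be checked on a toy functionality. A minimal sketch, assuming f1 and f2 are toy 8-bit functions (not from the paper) and ‖ is realized by packing two bytes into one integer:

```python
# Toy check of the same-output reduction: each party's half of the output is
# masked by that party's own random pad, so the other party learns nothing
# from it. f1, f2 below are illustrative stand-ins.
import secrets

MASK = (1 << 8) - 1

def f1(x, y): return (x & y) & MASK          # toy output for P1
def f2(x, y): return (x | y) & MASK          # toy output for P2

# Same-output reduction: f'((x,r),(y,s)) = (f1(x,y) XOR r) || (f2(x,y) XOR s)
def f_same_output(xr, ys):
    (x, r), (y, s) = xr, ys
    return ((f1(x, y) ^ r) << 8) | (f2(x, y) ^ s)

x, y = 0b1100, 0b1010
r, s = secrets.randbelow(256), secrets.randbelow(256)   # each party's pad

z = f_same_output((x, r), (y, s))   # both parties receive the same z
assert (z >> 8) ^ r == f1(x, y)     # P1 unmasks only its half
assert (z & MASK) ^ s == f2(x, y)   # P2 unmasks only its half

# Probabilistic case: f'((x,r),(y,s)) = f(x, y; r XOR s). The coins r XOR s
# are uniform as long as at least one party samples its share honestly.
```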

SLIDE 25

Tools—private-key encryption (1)

  • Let (G, E, D) be a private-key encryption scheme, and denote the range of a key k in the scheme by:
    Range_n(k) = {E_k(x) : x ∈ {0,1}^n}

SLIDE 26

Tools—private-key encryption (2)

  • We say that (G, E, D) has an elusive range if for every probabilistic polynomial-time machine A, every polynomial p(·), and all sufficiently large n:
    Pr_{k ← G(1^n)}[A(1^n) ∈ Range_n(k)] < 1/p(n)

SLIDE 27

Tools—private-key encryption (3)

  • We say that (G, E, D) has an efficiently verifiable range if there exists a probabilistic polynomial-time machine M such that M(k, c) = 1 if and only if c ∈ Range_n(k).

SLIDE 28

Tools—private-key encryption (4)

  • Construction: let F = {f_k} be a family of pseudorandom functions, where f_k : {0,1}^n → {0,1}^{2n} for k ∈ {0,1}^n. Then define, for a uniformly random r:
    E_k(x) = (r, f_k(r) ⊕ (x ‖ 0^n))
  • This E_k has an efficiently verifiable (and elusive) range. Proof idea: f_k(x) and a truly random function f_rand(x) are indistinguishable.
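A minimal sketch of this construction, assuming HMAC-SHA256 stands in for the pseudorandom function f_k (an illustrative choice, not from the paper):

```python
# Sketch of E_k(x) = (r, f_k(r) XOR (x || 0^n)), with HMAC-SHA256 standing
# in for the PRF f_k : {0,1}^n -> {0,1}^{2n} (illustrative assumption).
import hmac, hashlib, os

N = 16  # n bytes: plaintexts are N bytes, PRF output is 2N bytes

def prf(k, r):
    return hmac.new(k, r, hashlib.sha256).digest()      # 32 = 2N bytes

def E(k, x):
    r = os.urandom(N)
    return r, bytes(u ^ v for u, v in zip(prf(k, r), x + b"\x00" * N))

def D(k, c):
    r, body = c
    return bytes(u ^ v for u, v in zip(prf(k, r), body))[:N]

def M(k, c):
    # Efficiently verifiable range: M(k, c) = 1 iff c decrypts to a 0^n tail.
    r, body = c
    pt = bytes(u ^ v for u, v in zip(prf(k, r), body))
    return 1 if pt[N:] == b"\x00" * N else 0

k1, k2 = os.urandom(N), os.urandom(N)
c = E(k1, b"attack at dawn!!")
assert M(k1, c) == 1 and D(k1, c) == b"attack at dawn!!"
assert M(k2, c) == 0     # elusive range: a wrong key fails the check (w.h.p.)
```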

SLIDE 29

Tools—private-key encryption (5)

  • Other properties needed for (G, E, D):
  • For every two (known) vectors of messages x̄ and ȳ, no polynomial-time adversary can distinguish an encryption of the vector x̄ from an encryption of the vector ȳ.
  • An encryption under one key falls in the range of an encryption under another key only with negligible probability.

Both properties are easy to fulfill.

SLIDE 30

Proof of correctness(1)

  • If E_k(x) has an efficiently verifiable range, then Yao's two-party protocol constructed from E_k(x) is correct.
  • All we need to prove: if k1^0, k1^1, k2^0, k2^1, k3 are chosen uniformly and independently, then for each (i, j) ∈ {(0,1), (1,0), (1,1)}:
    Pr[E_{k1^i}(E_{k2^j}(k3)) ∈ Range_n(k1^0, k2^0)] < 1/p(n)
    where Range_n(k1^0, k2^0) denotes the range of double encryption under k1^0 and k2^0.

SLIDE 31

Proof of correctness(2)

(1) i = 0, j = 1:
    Pr[E_{k1^0}(E_{k2^1}(k3)) ∈ Range_n(k1^0, k2^0)] = Pr[E_{k2^1}(k3) ∈ Range_n(k2^0)] < 1/p(n)

(2) i = 1:
    Pr[E_{k1^1}(E_{k2^j}(k3)) ∈ Range_n(k1^0, k2^0)] ≤ Pr[E_{k1^1}(k′) ∈ Range_n(k1^0)] < 1/p(n)

SLIDE 32

Transferring b_y obliviously (OT from an enhanced trapdoor permutation)

Let (f, t) be a permutation–trapdoor pair from a family of enhanced trapdoor permutations, and let B(·) be a hard-core predicate of f.

  • Bob (choice bit y): samples v_y ← D(f) and sets w_y = f(v_y); samples w_{1−y} ← R(f) directly from the range; sends (w_0, w_1) to Alice.
  • Alice (inputs b_0, b_1): computes v_0 = f^{−1}(w_0) and v_1 = f^{−1}(w_1); sends m_0 = B(v_0) ⊕ b_0 and m_1 = B(v_1) ⊕ b_1.
  • Bob outputs b_y = B(v_y) ⊕ m_y.

Bob must have no information about the trapdoor t, so (f, t) is sampled by Alice, who sends only f to Bob.
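A toy sketch of this OT, assuming textbook RSA with tiny fixed primes stands in for the enhanced trapdoor permutation f, and bit-parity stands in for the hard-core predicate B. Both are purely illustrative: parity is not a proven hard-core predicate for RSA, and these parameters are insecure.

```python
# Toy EGL-style OT. Textbook RSA with tiny fixed primes stands in for the
# enhanced trapdoor permutation f; bit-parity stands in for the hard-core
# predicate B. Illustrative only, insecure at these sizes.
import random

def B(v):                         # stand-in "hard-core" predicate: parity
    return bin(v).count("1") % 2

# Alice samples (f, t) and sends only f = (N, e) to Bob.
p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))          # trapdoor t, known to Alice only
f = lambda v: pow(v, e, N)                 # the public permutation
f_inv = lambda w: pow(w, d, N)             # inversion using the trapdoor

# Bob (choice bit y): knows the preimage of w_y only; w_{1-y} is sampled
# directly from the range.
y = 1
v_y = random.randrange(1, N)
w = [None, None]
w[y] = f(v_y)
w[1 - y] = random.randrange(1, N)

# Alice (inputs b0, b1): inverts both and masks each bit with a hard-core bit.
b = [0, 1]
m = [B(f_inv(w[i])) ^ b[i] for i in (0, 1)]

# Bob recovers b_y; without the trapdoor he cannot invert w_{1-y}.
assert B(v_y) ^ m[y] == b[y]
```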

SLIDE 33

Tools—OT

  • About w_{1−y} ← R(f): an enhanced trapdoor permutation allows sampling directly from the range, such that the sample remains hard to invert even given the coins used for sampling.
  • Comparison of the two OTs:
    v_y ← D(f), w_y = f(v_y), w_{1−y} ← R(f)    (trapdoor-permutation OT)
    vs.
    h_y = g^k, h_{1−y} = C · g^{−k}    (Naor-Pinkas OT)

SLIDE 34

Transferring b_y obliviously [Naor-Pinkas, SODA'00]

(Recall slide 13: Bob, holding choice bit y, obtains b_y from Alice via Oblivious Transfer.)

SLIDE 35

Transferring b_y obliviously (OT from an enhanced trapdoor permutation)

(Recall slide 32: Bob sends w_0, w_1, knowing only the preimage v_y of w_y; Alice replies with m_i = B(f^{−1}(w_i)) ⊕ b_i; Bob outputs b_y = B(v_y) ⊕ m_y.)

SLIDE 36

Security of the OT

  • Shown in a reference of the paper: O. Goldreich, Foundations of Cryptography, vol. 2: Basic Applications (Cambridge University Press, Cambridge, 2004), Sec. 7.3.2.
  • Let S1^OT and S2^OT be the simulators for P1 and P2 in the oblivious transfer.

SLIDE 37

Security of double encryption(1)

Def: E_{k1}(E_{k2}(x)) is CPA-secure iff for every PPT adversary A = (A1, A2) that may query the encryption oracles E_{k1}(·), E_{k2}(·), and E_{k1}(E_{k2}(·)):

Pr[A1(1^n) → (m0, m1, σ); b ←$ {0, 1}; A2(c_b, σ) → b′ = b] < 1/2 + 1/p(n)

where c_b = E_{k1}(E_{k2}(m_b)) is computed by the challenger.

SLIDE 38

Security of double encryption(2)

If E_k(m) is CPA-secure, then E_{k1}(E_{k2}(x)) is also CPA-secure.

SLIDE 39

Review of the Definition

  • The view of the i-th party (i ∈ {1, 2}) during an execution of π on (x, y) is denoted:
    view_i^π(x, y) = (w, r^i, m_1^i, …, m_t^i)
    where w is the party's input, r^i equals the contents of the i-th party's internal random tape, and m_j^i represents the j-th message that it received.

SLIDE 40

Security of Yao's Two-Party Protocol (1)

view_1^π(x, y) = (x, r^C, R1^OT(k_{n+1}^0, k_{n+1}^1), …, R1^OT(k_{2n}^0, k_{2n}^1), f(x, y))

where r^C is the randomness P1 uses to garble the circuit and R1^OT is P1's real view in one oblivious transfer.

Construction of S1:

S1 = (x, r^C, S1^OT(k_{n+1}^0, k_{n+1}^1), …, S1^OT(k_{2n}^0, k_{2n}^1), f(x, y))

Proof for S1:

{S1(x, f1(x, y))}_{x,y ∈ {0,1}*} ≡c {view_1^π(x, y)}_{x,y ∈ {0,1}*}

Using a hybrid argument!!

SLIDE 41

Security of Yao's Two-Party Protocol (2)

H_i = (x, r^C, S1^OT(k_{n+1}^0, k_{n+1}^1), …, S1^OT(k_{n+i}^0, k_{n+i}^1), R1^OT(k_{n+i+1}^0, k_{n+i+1}^1), …, R1^OT(k_{2n}^0, k_{2n}^1), f(x, y))

Then we prove {H_0}_{x,y ∈ {0,1}*} ≡c {H_n}_{x,y ∈ {0,1}*}.

Otherwise there is a distinguisher D with |Pr[D(H_0) = 1] − Pr[D(H_n) = 1]| > 1/p(n), which means that for some i:

|Pr[D(H_i) = 1] − Pr[D(H_{i+1}) = 1]| > 1/(n · p(n))
⇒ |Pr[D(R1^OT(k_{n+i+1}^0, k_{n+i+1}^1)) = 1] − Pr[D(S1^OT(k_{n+i+1}^0, k_{n+i+1}^1)) = 1]| > 1/(n · p(n)),

which contradicts the fact that S1^OT is the simulator for P1 in the oblivious transfer.
slide-42
SLIDE 42

Security of Yao's Two-Party Protocol (3)

{S2(y, f2(x, y))}_{x,y ∈ {0,1}*} ≡c {view_2^π(x, y)}_{x,y ∈ {0,1}*}

view_2^π(x, y) = (y, G(C), k_1^{x_1}, …, k_n^{x_n}, R2^OT(k_{n+1}^0, k_{n+1}^1), …, R2^OT(k_{2n}^0, k_{2n}^1), f(x, y))

Step 1: simulate each gate g(α, β).

SLIDE 43

Security of Yao's Two-Party Protocol (4)

Step 2: simulate the output decryption table: for i = 1, 2, …, n, pair one key of output wire i with z_i and the other key with 1 − z_i, where z = f(x, y).

Let the "fake" circuit be G′(C). Step 3: simulate the oblivious transfers as for S1.

SLIDE 44

Security of Yao's Two-Party Protocol (5)

Construction of S2:

S2 = (y, G′(C), k_1^{x_1}, …, k_n^{x_n}, S2^OT(k_{n+1}^0, k_{n+1}^1), …, S2^OT(k_{2n}^0, k_{2n}^1))

Proof for S2:

{S2(y, f2(x, y))}_{x,y ∈ {0,1}*} ≡c {view_2^π(x, y)}_{x,y ∈ {0,1}*}

Using a hybrid argument as well!!

SLIDE 45

Security of Yao's Two-Party Protocol (6)

Let H^OT = (y, G(C), k_1^{x_1}, …, k_n^{x_n}, S2^OT(k_{n+1}^0, k_{n+1}^1), …, S2^OT(k_{2n}^0, k_{2n}^1)).

From the proof for S1, we know: {H^OT}_{x,y ∈ {0,1}*} ≡c {view_2^π(x, y)}_{x,y ∈ {0,1}*}.

The hybrid experiment H_i uses the first i gates from the real garbled circuit G(C) and the remaining gates from the fake circuit G′(C).

At last we just need to prove: {H_{|C|}}_{x,y ∈ {0,1}*} ≡c {H_0}_{x,y ∈ {0,1}*}

SLIDE 46

Security of Yao's Two-Party Protocol (7)

Assume there exists a nonuniform probabilistic polynomial-time distinguisher D such that

|Pr[D(H_{|C|}) = 1] − Pr[D(H_0) = 1]| > 1/p(n).

Then for some i:

|Pr[D(H_i) = 1] − Pr[D(H_{i+1}) = 1]| > 1/(|C| · p(n))
⇒ |Pr[D(E_i^1, E_i^2, E_i^3, E_i^4) = 1] − Pr[D(E′_i^1, E′_i^2, E′_i^3, E′_i^4) = 1]| > 1/(|C| · p(n)),

where (E_i^1, …, E_i^4) and (E′_i^1, …, E′_i^4) are the garbled tables of the i-th gate in the real and fake circuits. This is impossible: it reduces to the security of the double encryption.

SLIDE 47

Paper

  • An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries
  • Authors: Yehuda Lindell, Benny Pinkas

SLIDE 48

Modeling Adversaries

  • Semi-Honest (Honest-but-curious): always follows the protocol, but tries to learn extra information from the execution transcript.
  • Malicious/Active: absolutely no restriction.

In both cases we consider only polynomial-time adversaries.

SLIDE 49

Malicious Adversary (1)

  • A malicious adversary can deviate arbitrarily during the protocol. For example: Bob can abort his computation, so that Alice cannot get her result.

SLIDE 50

Malicious Adversary (2)

  • If the last message of the protocol is sent by the malicious party, it can abort the computation so that the other party gets no result. Hence the plain ideal model cannot be achieved.

SLIDE 51

New ideal model(1)

  • To define the security of the protocol under attack by a malicious adversary, we need to define a new ideal model.
  • A malicious attack means learning the other party's input, or making the other party accept a wrong result. But…
  • Making the other party receive no result cannot be avoided (a so-called inevitable disadvantage).

SLIDE 52

New ideal model(2)

  • Inputs: each party obtains an input, denoted w (w = x for P1, and w = y for P2).
  • Send inputs to trusted party: an honest party always sends w to the trusted party. A malicious party may, depending on w, either abort or send some other w′ to the trusted party.
  • Trusted party answers first party: in case it has obtained an input pair (x, y), the trusted party first replies to the first party with f1(x, y). Otherwise (i.e., in case it receives only one valid input), the trusted party replies to both parties with a special symbol ⊥.

SLIDE 53

New ideal model(3)

  • Trusted party answers second party: in case the first party is malicious it may, depending on its input and the trusted party's answer, decide to stop the trusted party by sending it ⊥. In this case the trusted party sends ⊥ to the second party. Otherwise the trusted party sends f2(x, y) to the second party.
  • Outputs: an honest party always outputs the message it has obtained from the trusted party. A malicious party may output an arbitrary (probabilistic polynomial-time computable) function of its initial input and the message obtained from the trusted party.

SLIDE 54

Definition

  • Secure two-party computation in the malicious model: protocol π is said to securely compute f (in the malicious model) if for every pair of admissible non-uniform probabilistic polynomial-time machines A = (A1, A2) for the real model, there exists a pair of admissible non-uniform probabilistic expected polynomial-time machines B = (B1, B2) for the ideal model, such that:

    {Ideal_{f,B}(x, y)}_{x,y} ≡c {Real_{π,A}(x, y)}_{x,y}

SLIDE 55

Notice of Definition

  • Admissible means that at most one of the two parties is malicious; security need not be guaranteed for a malicious party.
  • The ideal model is the model defined on the previous slides.
  • Ideal_{f,B}(x, y) denotes the outputs of the two parties (malicious or not) in the ideal model; Real_{π,A}(x, y) denotes the outputs of the two parties (malicious or not) in the real model.

SLIDE 56

The simple case

  • The presentation of the protocol is simpler for the case that only party P2 receives output (f = f2, and f1 does not exist). But WHY is this enough?
  • Define g((p, a, b, x), y) = (α, β, f2(x, y)), where α = p + f1(x, y), β = a · α + b, and p, a, b are chosen at random by P1.
  • The parties securely compute g(·) = (α, β, f2); P2 sends (α, β) back to P1, who recovers f1 = α − p and checks that β = a · α + b (a one-time authentication of α).
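The masking-and-check trick can be verified numerically. A minimal sketch, assuming the values live in GF(q) for some prime q (the slide does not fix the field):

```python
# Numeric check of alpha = p + f1(x, y), beta = a*alpha + b, assuming all
# values live in GF(q) for an assumed prime modulus q.
import secrets

q = 2**61 - 1                     # a Mersenne prime, illustrative choice

def f1(x, y): return (x * y) % q  # toy stand-in for f1

x, y = 12345, 67890
p, a, b = (secrets.randbelow(q) for _ in range(3))  # chosen at random by P1

# Inside the protocol, P2 evaluates g((p,a,b,x), y) = (alpha, beta, f2(x,y)).
alpha = (p + f1(x, y)) % q        # f1 masked by the one-time pad p
beta = (a * alpha + b) % q        # one-time MAC on alpha

# P2 sends (alpha, beta); P1 unmasks f1 and verifies the MAC.
assert (alpha - p) % q == f1(x, y)
assert beta == (a * alpha + b) % q
# A P2 who substitutes alpha' != alpha must also guess beta' = a*alpha' + b,
# which succeeds only with probability 1/q.
```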

SLIDE 57

Attacks on Yao's protocol

  • Under this definition, Yao's protocol is not secure.
  • Attack 1: attack inside the oblivious transfer.
  • Attack 2: Alice makes a fake circuit.

SLIDE 58

Achieve Active Security(1) (against the malicious adversary)

  • Solution to attack 1: make P2's input longer. Replace y with ŷ = ŷ_1, …, ŷ_{ns}, where
    y_i = ŷ_{(i−1)·s+1} ⊕ ⋯ ⊕ ŷ_{i·s}
  • Solution to attack 2: cut-and-choose
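The input-lengthening countermeasure can be sketched as follows: each bit y_i of P2's input is replaced by s random shares whose XOR equals y_i, so any s − 1 leaked OT choice bits reveal nothing about y.

```python
# Sketch of the attack-1 countermeasure: replace each bit y_i of P2's input
# by s random shares whose XOR equals y_i.
import secrets

def split_input(y_bits, s):
    """Encode y as y_hat with y_i = y_hat_{(i-1)s+1} XOR ... XOR y_hat_{is}."""
    y_hat = []
    for bit in y_bits:
        shares = [secrets.randbelow(2) for _ in range(s - 1)]
        shares.append(bit ^ (sum(shares) % 2))   # force the XOR to equal bit
        y_hat.extend(shares)
    return y_hat

def recombine(y_hat, s):
    return [sum(y_hat[i * s:(i + 1) * s]) % 2 for i in range(len(y_hat) // s)]

y = [1, 0, 1, 1]
y_hat = split_input(y, s=5)
assert len(y_hat) == len(y) * 5 and recombine(y_hat, 5) == y
# Any s-1 of the shares of y_i are uniform and independent of y_i.
```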

SLIDE 59

Achieve Active Security(2)

SLIDE 60

Achieve Active Security(3)

  • P1 constructs s independent copies of a garbled circuit of C, denoted GC1, …, GCs, and P1 commits to the garbled values of all the wires.
  • P2 randomly chooses some circuits for checking; P1 opens these check-circuits for P2 to verify. The remaining circuits are used for evaluation.
  • But a new question appears: how can we make P1 use the same input in all the evaluation circuits?
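A small sketch of why cut-and-choose helps, using a check-half-of-the-circuits variant (the paper's exact parameters and its majority-of-evaluated-circuits rule differ): if P1 garbles some circuits incorrectly, the probability that P2's random checks miss all of them drops quickly as s grows.

```python
# Why cut-and-choose helps, in a check-half-of-them variant (the paper's
# exact parameters and its majority-of-evaluated-circuits rule differ).
import math

def undetected_prob(s, bad):
    """P2 opens a uniformly random half of the s circuits; cheating goes
    undetected iff none of the `bad` circuits is among those opened."""
    return math.comb(s - bad, s // 2) / math.comb(s, s // 2)

assert undetected_prob(8, 1) == 0.5      # one bad circuit: missed half the time
for s in (8, 16, 32):
    p = undetected_prob(s, s // 4)       # P1 corrupts a quarter of the circuits
    assert p < 2 ** -(s // 8)            # detection improves rapidly with s
```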

SLIDE 61

Achieve Active Security(4)

But!! Alice may use inconsistent inputs across the evaluation circuits without changing z. To let Bob check the consistency of her input, Alice needs to provide more commitments to her input.
SLIDE 62

Achieve Active Security(5)

SLIDE 63

Full protocol

SLIDE 64

SLIDE 65

SLIDE 66

SLIDE 67

SLIDE 68

Proof of the paper

Security against a malicious P1: the proof constructs an ideal-model adversary/simulator which has access to P1 and to the trusted party, and can simulate the view of an actual run of the protocol. It uses the fact that the strings ς, ς0, which choose the circuits and commitment sets that are checked, are uniformly distributed even if P1 is malicious. The simulator runs the protocol until P1 opens the commitments of the checked circuits and checked commitment sets, and then rewinds the execution and runs it again with new random ς, ς0 values. We expect about one quarter of the circuits to be checked in the first execution and evaluated in the second execution. For these circuits, in the first execution the simulator learns the translation between the garbled values of P1's input wires and the actual values of these wires, and in the second execution it learns the garbled values that are associated with P1's input (this association is learned from the garbled values that P1 sends to P2). Combining the two, it learns P1's input x, which can then be sent to the trusted party. The trusted party answers with f(x, y), which is used to define P2's output and complete the simulation.

SLIDE 69

  • Security against a malicious P2: intuitively, security in this case is derived from the fact that:
  • (a) the oblivious transfer protocol is secure, so P2 only learns a single set of keys (corresponding to a single input y) for decrypting the garbled circuits, and
  • (b) the commitment schemes are hiding, so P2 does not know what input corresponds to the garbled values that P1 sends it for evaluating the circuit.
  • Of course, in order to formally prove security we construct an ideal-model simulator B2 working with an adversary A2 that has corrupted P2.

SLIDE 70

  • The simulator first extracts P2's input bits from the oblivious transfer protocol, then sends the input y it obtained to the trusted party and receives back z = f(x, y). Given the output, the simulator constructs the garbled circuits. However, rather than constructing them all correctly, for each circuit it tosses a coin and, based on the result, either constructs the circuit correctly or constructs it to compute the constant function outputting z (the output received from the trusted party).
  • In order to make sure that the simulator is not caught cheating, it biases the coin-tossing phase so that all of the correctly constructed garbled circuits are check-circuits, and all of the other circuits are evaluation-circuits (this is why the protocol uses joint coin-tossing rather than letting P2 alone choose the circuits to be opened). A2 then checks the correctly constructed circuits, and is satisfied with the result as if it were interacting with a legitimate P1. A2 therefore continues the execution with the circuits which always output z.

SLIDE 71

Reducing the Number of Oblivious Transfers(1)

  • For Bob, whose original input is y = y_1 y_2 …… y_n, we can encode y as x = x_1 x_2 …… x_{n+s−1} with
    y_i = x_i ⊕ x_{i+1} ⊕ ⋯ ⊕ x_{i+s−1},
    and use x as Bob's input. This is secure for each single bit of y. BUT!!!
    y_1 ⊕ y_2 = x_1 ⊕ x_{s+1}
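The leak on this slide is easy to reproduce: with the sliding-window encoding, each single bit y_i is masked, but y_1 ⊕ y_2 = x_1 ⊕ x_{s+1} is fully determined by two bits of x.

```python
# Reproducing the leak: with y_i = x_i XOR ... XOR x_{i+s-1}, single bits of
# y are hidden, but y_1 XOR y_2 = x_1 XOR x_{s+1} is exposed.
import secrets

def encode(y_bits, s):
    """Sample x of length n+s-1 with y_i = XOR of the window x_i..x_{i+s-1}."""
    n = len(y_bits)
    x = [secrets.randbelow(2) for _ in range(n + s - 1)]
    for i in range(n):                  # fix the last slot of each window;
        if sum(x[i:i + s]) % 2 != y_bits[i]:   # earlier windows are unaffected
            x[i + s - 1] ^= 1
    return x

s = 4
y = [secrets.randbelow(2) for _ in range(6)]
x = encode(y, s)
assert all(sum(x[i:i + s]) % 2 == y[i] for i in range(6))  # every window matches
assert y[0] ^ y[1] == x[0] ^ x[s]       # the leak: an XOR of y-bits escapes
```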

SLIDE 72

Reducing the Number of Oblivious Transfers(2)

  • We should encode y as x = x_1 x_2 …… x_m so that the exclusive-or of every subset of y's bits cannot be written as the exclusive-or of fewer than s of the x_j's.
  • If y_i = ⊕_{k=1}^{m} b_{ik} x_k, call c_i = b_{i1} b_{i2} … b_{im} the encoding of y_i. The requirement then converts to: the minimal Hamming distance of the linear code C spanned by {c_1, c_2, …, c_n} is at least s.

SLIDE 73

Reducing the Number of Oblivious Transfers(3)

  • Gilbert bound: a code with N = 2^n codewords and minimal distance s exists whenever
    2^n · Σ_{j=1}^{s−1} C(m, j) ≤ 2^m,
    from which we can find that m ≤ n + s · (log n + log s) suffices.

Still too long!? We can use randomization.

SLIDE 74

Reducing the Number of Oblivious Transfers(4)

  • The c_i are randomly chosen in {0,1}^m with m = max{4n, 8s} = O(n + s); we assume n > 2s (otherwise expand the input's length).
  • The encoding of the exclusive-or of every (nonempty) subset of y's bits is then also a uniformly random string in {0,1}^m. Let its Hamming weight be X; then:
    Pr[X < s] ≤ Pr[|X/(4n) − 1/2| > 3/8] < 2e^{−9n/8}

SLIDE 75

Thank you

SLIDE 76

Q&A