Automated Game-Based Cryptographic Proofs, Santiago Zanella Béguelin (PowerPoint presentation)



slide-1
SLIDE 1

Automated Game-Based Cryptographic Proofs

Santiago Zanella Béguelin

IMDEA Software Institute, Madrid, Spain

2011.11.14 SEFM 2011

slide-2
SLIDE 2

Based on joint work with

Gilles Barthe, Benjamin Grégoire, Sylvain Heraud, Federico Olmedo, Yassine Lakhnech, Daniel Hedin

2/59

slide-3
SLIDE 3

Cryptography

Used to achieve security goals in many areas: confidentiality and integrity in distributed systems, authentication of mobile code, non-repudiation in electronic commerce, anonymity in electronic voting, ...

3/59

slide-4
SLIDE 4

Cryptography

Cryptographic primitives: symmetric encryption, asymmetric encryption, digital signatures, Message Authentication Codes, hash functions, zero-knowledge proofs

3/59

slide-5
SLIDE 5

Cryptography

Many potential sources of failure when using cryptography: primitives may not be secure, protocols may be logically flawed, implementations may introduce weaknesses, applications may make improper use of cryptography, users may ignore security issues

3/59


slide-7
SLIDE 7

Perfect cryptography is not practical

One-Time Pad is an encryption scheme that achieves perfect secrecy. It relies on the absorption property of XOR:

b ⊕ a ⊕ b = a

To encrypt a plaintext m ∈ {0,1}^n, use a key k ∈ {0,1}^n and compute c = k ⊕ m. To decrypt a ciphertext c ∈ {0,1}^n, XOR it with the key k, obtaining m = c ⊕ k.

NOT PRACTICAL: a key must be used only once and must be at least as long as the plaintext to be encrypted. Shannon proved in the 1940s that OTP is the sole way to achieve information-theoretic secrecy
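The absorption property above can be exercised directly; a minimal Python sketch of OTP (assuming, as the slide requires, a fresh uniform key as long as the message):

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # perfect secrecy needs len(key) == len(msg) and a fresh key per message
    assert len(key) == len(msg), "key must be as long as the plaintext"
    return bytes(k ^ b for k, b in zip(key, msg))

otp_decrypt = otp_encrypt  # decryption is the same XOR: k XOR (k XOR m) = m

m = b"attack at dawn"
k = secrets.token_bytes(len(m))  # uniform one-time key
c = otp_encrypt(k, m)
assert otp_decrypt(k, c) == m
```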

4/59

slide-8
SLIDE 8

Practical cryptography

Modern cryptography is built on the assumption that there exist problems that are hard on average. Efficiency is associated with the tasks that can be performed by probabilistic polynomial-time Turing machines (the class BPP). Thus, modern cryptography is meaningful only if

NP ⊄ BPP

5/59

slide-9
SLIDE 9

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack

6/59


slide-11
SLIDE 11

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Attack found!

6/59

slide-12
SLIDE 12

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Attack found!

Patch

6/59

slide-13
SLIDE 13

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure

6/59

slide-14
SLIDE 14

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure How much time is enough?

6/59

slide-15
SLIDE 15

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure It took 5 years to break the Merkle-Hellman cryptosystem

6/59

slide-16
SLIDE 16

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure Ok, let’s say 7 years

6/59

slide-17
SLIDE 17

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure It took 10 years to break the Chor-Rivest cryptosystem

6/59

slide-18
SLIDE 18

Cryptanalysis-driven design

Propose a cryptographic scheme Wait for someone to come out with an attack Enough waiting Declare the scheme secure

Can’t we do better?

6/59

slide-19
SLIDE 19

The Provable Security paradigm

1. Define a security goal and a model for adversaries
2. Propose a cryptographic scheme
3. Reduce the security of the scheme to a cryptographic assumption

IF an adversary A can break the security of the scheme THEN the assumption can be broken with little extra effort. Conversely, IF the security assumption holds THEN the scheme is secure.

7/59

slide-20
SLIDE 20

Proof by reduction

Assume a polynomial adversary A breaks the security of a scheme. Build a polynomial algorithm B that uses A to solve a computationally hard problem.

IF the problem is intractable THEN the cryptographic scheme is asymptotically secure.

[Figure: B takes a problem instance and returns a solution]

8/59


slide-22
SLIDE 22

Proof by reduction

Assume a polynomial adversary A breaks the security of a scheme. Build a polynomial algorithm B that uses A to solve a computationally hard problem.

IF the problem is intractable THEN the cryptographic scheme is asymptotically secure.

[Figure: B runs A as a subroutine to turn a problem instance into a solution]

8/59

slide-23
SLIDE 23

Exact security

Assume an adversary A breaks the security of a scheme within time t with probability ε. Build an algorithm B that uses A to solve a computationally hard problem with probability ε′ ≥ f(ε) within time t′ ≤ g(t). Bounds matter: the greater f(ε) and the smaller g(t), the more closely the security of the scheme is related to the problem.

Choosing scheme parameters

What is the best known method to solve the problem? Choose parameters so that the reduction yields a better one

9/59


slide-25
SLIDE 25

Game-based proofs

“Security proofs in cryptography may be organized as sequences of games [...] this can be a useful tool in taming the complexity of security proofs that might otherwise become so messy, complicated, and subtle as to be nearly impossible to verify” (V. Shoup)

Game G0 : . . . ← A(. . .); . . .    Pr[G0 : A0]
Game G1 : . . .                      ≤ h1(Pr[G1 : A1])
· · ·
Game Gn : . . . ← B(. . .); . . .    ≤ . . . ≤ hn(Pr[Gn : An])

Start from an initial game encoding the security goal

10/59

slide-26
SLIDE 26

Game-based proofs

“Security proofs in cryptography may be organized as sequences of games [...] this can be a useful tool in taming the complexity of security proofs that might otherwise become so messy, complicated, and subtle as to be nearly impossible to verify” (V. Shoup)

Game G0 : . . . ← A(. . .); . . .    Pr[G0 : A0]
Game G1 : . . .                      ≤ h1(Pr[G1 : A1])
· · ·
Game Gn : . . . ← B(. . .); . . .    ≤ . . . ≤ hn(Pr[Gn : An])

Stepwise transform the game, keeping track of probabilities

10/59

slide-27
SLIDE 27

Game-based proofs

[Figure: B runs A to turn a problem instance into a solution]

Game G0 : . . . ← A(. . .); . . .    Pr[G0 : A0]
Game G1 : . . .                      ≤ h1(Pr[G1 : A1])
· · ·
Game Gn : . . . ← B(. . .); . . .    ≤ . . . ≤ hn(Pr[Gn : An])

Reach a final game encoding a computational assumption

10/59

slide-28
SLIDE 28

Example: Public-Key Encryption

A public-key encryption scheme is composed of a triple of algorithms (KG, E, D):
Key Generation: given a security parameter η : N, KG(η) generates a public key pk, used for encryption, and a secret key sk, used for decryption
Encryption: given a public key pk and a message m, E(pk, m) returns a ciphertext c
Decryption: given a secret key sk and a ciphertext c, D(sk, c) returns the decryption of c under sk if c is a valid ciphertext, or ⊥ otherwise
KG and E are probabilistic, D is deterministic.

Correctness

∀η, pk, sk, m. (sk, pk) = KG(η) ⇒ D(sk, E(pk, m)) = m

11/59

slide-29
SLIDE 29

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness:

12/59

slide-30
SLIDE 30

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness: D(x, E(g^x, m))

12/59

slide-31
SLIDE 31

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness: D(x, (g^y, g^(xy) × m))

12/59

slide-32
SLIDE 32

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness: (g^(xy) × m) × (g^y)^(−x)

12/59

slide-33
SLIDE 33

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness: (g^(xy) × m) × g^(−xy)

12/59

slide-34
SLIDE 34

Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G

KG(η)        def=  x $← Zq; return (x, g^x)
E(α, m)      def=  y $← Zq; return (g^y, α^y × m)
D(x, (β, ζ)) def=  return ζ × β^(−x)

Correctness: m
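The correctness calculation can be replayed concretely; a toy Python sketch over a small prime-order subgroup of Z*_23 (the parameters p, q, g and message values are illustrative only, not a secure choice):

```python
import secrets

# toy group: the order-11 subgroup of Z_23*, generated by g = 4
p, q, g = 23, 11, 4

def KG():
    x = secrets.randbelow(q)                 # secret key x
    return x, pow(g, x, p)                   # (sk, pk = g^x)

def E(alpha, m):
    y = secrets.randbelow(q)                 # fresh randomness per encryption
    return pow(g, y, p), (pow(alpha, y, p) * m) % p   # (g^y, alpha^y * m)

def D(x, ct):
    beta, zeta = ct
    return (zeta * pow(beta, q - x, p)) % p  # zeta * beta^(-x), since beta^q = 1

x, alpha = KG()
m = 9                                        # message must be a group element
assert D(x, E(alpha, m)) == m                # decryption recovers m
```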

12/59

slide-35
SLIDE 35

Security of Public-Key Encryption

Let (KG, E, D) be an encryption scheme and A a probabilistic polynomial-time adversary.

Game INDCPA :
  (sk, pk) ← KG();
  (m0, m1) ← A(pk);
  b $← {0, 1};
  c ← E(pk, m_b);
  b̃ ← A(pk, c);
  return b̃ = b

Adv^A_INDCPA(η)  def=  |Pr[INDCPA : b̃ = b] − 1/2|

The scheme is semantically (IND-CPA) secure if Adv^A_INDCPA(η) is negligible
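The game reads directly as a program; a Python sketch of the game harness with a toy perfectly hiding "scheme" whose ciphertext ignores the message (the names KG, E and GuessingAdversary are illustrative, not from the formalization), so the empirical advantage is essentially 0:

```python
import secrets

# Toy "scheme": the ciphertext is fresh randomness independent of the message.
# It is not decryptable; it exists only to exercise the IND-CPA game.
def KG():      return (None, None)
def E(pk, m):  return secrets.token_bytes(16)

class GuessingAdversary:
    def choose(self, pk):        return b"m0", b"m1", None
    def guess(self, pk, c, st):  return secrets.randbelow(2)

def ind_cpa(KG, E, A):
    """One run of the IND-CPA game; returns True iff the adversary wins."""
    sk, pk = KG()
    m0, m1, st = A.choose(pk)
    b = secrets.randbelow(2)
    c = E(pk, (m0, m1)[b])
    return A.guess(pk, c, st) == b

wins = sum(ind_cpa(KG, E, GuessingAdversary()) for _ in range(20000))
adv = abs(wins / 20000 - 0.5)   # empirical advantage, close to 0
```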

13/59

slide-36
SLIDE 36

INDCPA-security of ElGamal Encryption

ElGamal is secure under the Decision Diffie-Hellman assumption

Game DDH0 : x, y $← Zq; d ← B(g^x, g^y, g^(xy))
Game DDH1 : x, y, z $← Zq; d ← B(g^x, g^y, g^z)

Adv^B_DDH(η)  def=  |Pr[DDH0 : d = 1] − Pr[DDH1 : d = 1]|

DDH Assumption: for every PPT adversary B, Adv^B_DDH is negligible

One can prove:
∀A. PPT(A) ⇒ ∃B. PPT(B) ∧ Adv^A_INDCPA = Adv^B_DDH
which implies, under the DDH assumption, that for any PPT adversary A, Adv^A_INDCPA is negligible

14/59


slide-38
SLIDE 38

Things can still go wrong: RSA-OAEP

1994 Bellare and Rogaway: purported proof of chosen-ciphertext security
2001 Shoup: proof is flawed, but can be patched
2001 Fujisaki, Okamoto, Pointcheval, Stern: ...for a weaker security notion, or ...for a modified scheme, or ...under stronger assumptions
2004 Pointcheval: filled gaps in the 2001 proof of Fujisaki et al.
2009 Bellare, Hofheinz, Kiltz: security definition needs to be clarified
2011 Filled gaps and marginally improved the bound of the 2004 proof

15/59

slide-39
SLIDE 39

Things can still go wrong: Identity-Based Encryption

1984 Shamir: conception of Identity-Based Cryptography
2001 Boneh & Franklin: first practical provably-secure IBE scheme
2002-2005 Gentry & Silverberg, Horwitz & Lynn, Al-Riyami & Paterson, Yao et al., Cheng & Comley: extensively used as a building block
2005 Galindo: proof is flawed, but can be patched ...for a weaker security bound

16/59

slide-40
SLIDE 40

Beyond Provable Security: Verifiable Security

Goal: build a framework to formalize game-based cryptographic proofs
Provide foundations for game-based proofs
Notation as close as possible to cryptographers'
Automate common reasoning patterns
Support exact security
Provide independently and automatically verifiable proofs

17/59

slide-41
SLIDE 41

CertiCrypt

Language-based cryptographic proofs

Formal certification of ElGamal encryption. A gentle introduction to CertiCrypt International Workshop on Formal Aspects in Security and Trust, FAST 2008 Formal certification of code-based cryptographic proofs ACM Symposium on Principles of Programming Languages, POPL 2009 Formally certifying the security of digital signature schemes IEEE Symposium on Security & Privacy, S&P 2009 Programming language techniques for cryptographic proofs International Conference on Interactive Theorem Proving, ITP 2010

18/59

slide-42
SLIDE 42

Language-based game-playing proofs: what if we represent games as programs?

Games ⇒ probabilistic programs
Game transformations ⇒ program transformations
Adversaries ⇒ unspecified procedures
Efficiency ⇒ Probabilistic Polynomial-Time

19/59


slide-47
SLIDE 47

A language-based approach

Security definitions, assumptions and games are formalized using a probabilistic programming language, pWhile:

C ::= skip                       nop
    | C; C                       sequence
    | V ← E                      assignment
    | V $← DE                    random sampling
    | if E then C else C         conditional
    | while E do C               while loop
    | V ← P(E, . . . , E)        procedure call

x $← d: sample the value of x according to distribution d
The language of expressions (E) and distribution expressions (DE) admits user-defined extensions

20/59

slide-48
SLIDE 48

Some design choices

CertiCrypt is built on top of the Coq proof assistant
Deep-embedding formalization
Strongly-typed language: the syntax is dependently-typed (only well-typed programs are admitted)
Monadic semantics, using Paulin-Mohring's ALEA Coq library

21/59

slide-49
SLIDE 49

Language-based Game-playing Proofs

Deep-embedded in the language of Coq, using a dependently-typed syntax:

Inductive I : Type :=
| Assign : ∀t, V_t → E_t → I
| Rand   : ∀t, V_t → D_t → I
| Cond   : E_B → C → C → I
| While  : E_B → C → I
| Call   : ∀l t, P_(l,t) → V_t → E*_l → I

where C := I*

Programs are well-typed by construction

22/59

slide-50
SLIDE 50

Semantics

Measure Monad

Distributions are represented as monotonic, linear and continuous functions of type

D(A)  def=  (A → [0, 1]) → [0, 1]

Intuition: given µ ∈ D(A) and f : A → [0, 1], µ(f) represents the expected value of f w.r.t. µ

unit : A → D(A)                   def=  λx. λf. f x
bind : D(A) → (A → D(B)) → D(B)   def=  λµ. λF. λf. µ(λx. F x f)
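These monad operations translate almost verbatim into, e.g., Python, where a distribution is its own expectation operator (the helper uniform is an assumption, added here for finite uniform sampling):

```python
# D(A) = (A -> [0,1]) -> [0,1]: a distribution is its expectation operator
def unit(x):
    return lambda f: f(x)

def bind(mu, F):
    return lambda f: mu(lambda x: F(x)(f))

def uniform(xs):
    # uniform distribution on a finite list, as an expectation operator
    return lambda f: sum(f(x) for x in xs) / len(xs)

# Pr[x = y] for x, y uniform on {0,1}: expectation of the indicator function
coin = uniform([0, 1])
prog = bind(coin, lambda x: bind(coin, lambda y: unit((x, y))))
pr_eq = prog(lambda xy: 1.0 if xy[0] == xy[1] else 0.0)
# pr_eq == 0.5
```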

23/59

slide-51
SLIDE 51

Semantics

Programs map an initial memory to a distribution on final memories:

⟦c⟧ ∈ C : M → D(M)

The probability of an event is the expected value of its characteristic function:

Pr[c, m : A]  def=  ⟦c⟧ m 1_A

Instrumented and parametrized semantics to characterize PPT:

⟦c⟧ ∈ C_η : M → D(M × N)

24/59


slide-53
SLIDE 53

Intuition behind semantics

Think of ⟦G⟧ m as the expectation operator of the probability distribution induced by the game:

⟦G⟧ m f = Σ_{m′} f(m′) Pr[G, m ⇓ m′]

Computing probabilities:

Pr[G, m : A]  def=  ⟦G⟧ m 1_A

Example. Let G  def=  x $← {0, 1}; y $← {0, 1}

Pr[G, m : x = y] = ⟦G⟧ m 1_{x=y} =

25/59

slide-54
SLIDE 54

Intuition behind semantics

Think of ⟦G⟧ m as the expectation operator of the probability distribution induced by the game:

⟦G⟧ m f = Σ_{m′} f(m′) Pr[G, m ⇓ m′]

Computing probabilities:

Pr[G, m : A]  def=  ⟦G⟧ m 1_A

Example. Let G  def=  x $← {0, 1}; y $← {0, 1}

Pr[G, m : x = y] = ⟦G⟧ m 1_{x=y}
  = 1/4 · 1_{x=y}(m[x → 0, y → 0])
  + 1/4 · 1_{x=y}(m[x → 0, y → 1])
  + 1/4 · 1_{x=y}(m[x → 1, y → 0])
  + 1/4 · 1_{x=y}(m[x → 1, y → 1])

25/59

slide-55
SLIDE 55

Intuition behind semantics

Think of ⟦G⟧ m as the expectation operator of the probability distribution induced by the game:

⟦G⟧ m f = Σ_{m′} f(m′) Pr[G, m ⇓ m′]

Computing probabilities:

Pr[G, m : A]  def=  ⟦G⟧ m 1_A

Example. Let G  def=  x $← {0, 1}; y $← {0, 1}

Pr[G, m : x = y] = ⟦G⟧ m 1_{x=y} = 1/4 + 1/4

25/59

slide-56
SLIDE 56

Intuition behind semantics

Think of ⟦G⟧ m as the expectation operator of the probability distribution induced by the game:

⟦G⟧ m f = Σ_{m′} f(m′) Pr[G, m ⇓ m′]

Computing probabilities:

Pr[G, m : A]  def=  ⟦G⟧ m 1_A

Example. Let G  def=  x $← {0, 1}; y $← {0, 1}

Pr[G, m : x = y] = ⟦G⟧ m 1_{x=y} = 1/2

25/59

slide-57
SLIDE 57

Deductive reasoning about programs

The art of proving that programs are correct. Foundations: program logic (Hoare '69) and weakest precondition calculus (Floyd '67). Major advances in:

language coverage (functions, objects, concurrency, heap)
automation (decision procedures, SMT solvers, invariant generation)
proof engineering (intermediate languages)

26/59

slide-58
SLIDE 58

Hoare logic

Judgments are of the form c : P ⇒ Q (typically P and Q are first-order formulae over program variables). A judgment c : P ⇒ Q is valid iff for all states s and s′ such that ⟨c, s⟩ ⇓ s′, if s satisfies P then s′ satisfies Q.

Selected rules

────────────────────────
x ← e : Q{x := e} ⇒ Q

c1 : P ⇒ Q    c2 : Q ⇒ R
────────────────────────
c1; c2 : P ⇒ R

c1 : P ∧ e = tt ⇒ Q    c2 : P ∧ e = ff ⇒ Q
──────────────────────────────────────────
if e then c1 else c2 : P ⇒ Q

c : I ∧ e = tt ⇒ I    P ⇒ I    I ∧ e = ff ⇒ Q
─────────────────────────────────────────────
while e do c : P ⇒ Q

27/59

slide-59
SLIDE 59

Relational judgments

Judgments are of the form c1 ∼ c2 : P ⇒ Q (typically P and Q are first-order formulae over tagged program variables of c1 and c2). A judgment c1 ∼ c2 : P ⇒ Q is valid iff for all states s1, s1′, s2, s2′, if ⟨c1, s1⟩ ⇓ s1′ and ⟨c2, s2⟩ ⇓ s2′ and (s1, s2) satisfies P, then (s1′, s2′) satisfies Q.

May require co-termination.

Verification methods

Embedding into Hoare logic:
Self-composition (Barthe, D'Argenio and Rezk '04)
Cross-products (Zaks and Pnueli '08; Barthe, Crespo and Kunz '11)
Relational Hoare Logic (Benton '04)

slide-60
SLIDE 60

Relational Hoare Logic

Selected rules

c1 ∼ c2 : Ψ ⇒ Φ    Ψ′ ⇒ Ψ    Φ ⇒ Φ′
──────────────────────────────────── [Sub]
c1 ∼ c2 : Ψ′ ⇒ Φ′

c1 ∼ c2 : Ψ ⇒ Φ    c2 ∼ c3 : Ψ′ ⇒ Φ′
──────────────────────────────────── [Comp]
c1 ∼ c3 : Ψ ◦ Ψ′ ⇒ Φ ◦ Φ′

c1 ∼ c1′ : Ψ ⇒ Φ′    c2 ∼ c2′ : Φ′ ⇒ Φ
─────────────────────────────────────── [Seq]
c1; c2 ∼ c1′; c2′ : Ψ ⇒ Φ

───────────────────────────────────────────── [Asn]
x ← e ∼ x ← e′ : Φ{x1 := e1, x2 := e′2} ⇒ Φ

Ψ ⇒ (e1 ⇔ e′2)    c1 ∼ c1′ : Ψ ∧ e1 ⇒ Φ    c2 ∼ c2′ : Ψ ∧ ¬e1 ⇒ Φ
────────────────────────────────────────────────────────────────── [Cond]
if e then c1 else c2 ∼ if e′ then c1′ else c2′ : Ψ ⇒ Φ

29/59

slide-61
SLIDE 61

Probabilistic Relational Hoare Logic

Probabilistic extension of Benton's Relational Hoare Logic

Definition

c1 ∼ c2 : Ψ ⇒ Φ  def=  ∀m1 m2. m1 Ψ m2 ⇒ ⟦c1⟧ m1 ≃Φ ⟦c2⟧ m2

µ1 ≃Φ µ2 lifts the relation Φ from memories to distributions. µ1 ≃Φ µ2 holds if there exists a distribution µ on M × M s.t.
the 1st projection of µ coincides with µ1
the 2nd projection of µ coincides with µ2
sets with positive measure are in Φ

30/59

slide-62
SLIDE 62

A specialized rule for random assignments

Let A be a finite set and let f, g : A → B. Define

d  = x $← A; y ← f x
d′ = x $← A; y ← g x

Then ⟦d⟧ = ⟦d′⟧ iff there exists a bijection h : A → A such that g = f ◦ h

A rule for random assignments

f is 1-1    Ψ ⇒ ∀v. Φ{x1 := v, x2 := f v}
──────────────────────────────────────────
x $← A ∼ x $← A : Ψ ⇒ Φ

31/59

slide-63
SLIDE 63

From pRHL to probabilities

Assume c1 ∼ c2 : P ⇒ Q. For all memories m1 and m2 such that P m1 m2, and events A, B such that Q ⇒ (A1 ⇔ B2), we have ⟦c1⟧ m1 1_A = ⟦c2⟧ m2 1_B, i.e.

Pr[c1, m1 : A] = Pr[c2, m2 : B]

32/59

slide-64
SLIDE 64

Observational equivalence

Definition

f =_X g       def=  ∀m1 m2. m1 =_X m2 ⇒ f m1 = g m2
c1 ≃^I_O c2   def=  ∀m1 m2 f g. m1 =_I m2 ∧ f =_O g ⇒ ⟦c1⟧ m1 f = ⟦c2⟧ m2 g

Example

x $← {0,1}^k; y ← x ⊕ z   ≃^{z}_{x,y,z}   y $← {0,1}^k; x ← y ⊕ z

Only a Partial Equivalence Relation: c ≃^I_O c does not hold in general (obviously)
Generalizes information flow security (take I = O = V_low)
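The example equivalence can be checked exhaustively for a small k by comparing the two programs' output distributions on {x, y, z} (a brute-force sketch for illustration, not how CertiCrypt proves it):

```python
from collections import Counter

K = 3  # word length; small enough to enumerate all randomness

def prog1(z):
    # x <$ {0,1}^k; y <- x XOR z   (distribution over final (x, y, z))
    return Counter((x, x ^ z, z) for x in range(2 ** K))

def prog2(z):
    # y <$ {0,1}^k; x <- y XOR z
    return Counter((y ^ z, y, z) for y in range(2 ** K))

# same output distribution for every initial z: observational equivalence
assert all(prog1(z) == prog2(z) for z in range(2 ** K))
```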

33/59


slide-66
SLIDE 66

Proving program equivalence

Goal: c1 ≃^I_O c2

34/59

slide-67
SLIDE 67

Proving program equivalence

Goal: c1 ≃^I_O c2

Mechanized program transformations
Transformation: T(c1, c2, I, O) = (c1′, c2′, I′, O′)

Soundness theorem:

T(c1, c2, I, O) = (c1′, c2′, I′, O′)    c1′ ≃^I′_O′ c2′
──────────────────────────────────────────────────────
c1 ≃^I_O c2

Reflection-based Coq tactic (replaces reasoning by computation)

34/59

slide-68
SLIDE 68

Proving program equivalence

Goal: c1 ≃^I_O c2

Mechanized program transformations:
Dead code elimination (deadcode)
Constant folding and propagation (ep)
Procedure call inlining (inline)
Code movement (swap)
Common suffix/prefix elimination (eqobs_hd, eqobs_tl)

34/59

slide-69
SLIDE 69

Proving program equivalence

Goal: c ≃^I_O c

An (incomplete) tactic for self-equivalence (eqobs_in). Does c ≃^I_O c hold?
Analyze dependencies to compute I′ s.t. c ≃^I′_O c
Check that I′ ⊆ I
Think of type systems for information flow security

34/59

slide-70
SLIDE 70

Example: ElGamal encryption

ElGamal0 :
  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  ζ ← g^(xy) × m_b;
  b′ ← A′(g^x, g^y, ζ);
  d ← b = b′

≃^∅_{d}

DDH0 :
  x $← Zq; y $← Zq;
  d ← B(g^x, g^y, g^(xy))

Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof.

35/59

slide-71
SLIDE 71

Example: ElGamal encryption

ElGamal0 (left):
  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  ζ ← g^(xy) × m_b;
  b′ ← A′(g^x, g^y, ζ);
  d ← b = b′

≃^∅_{d}

DDH0 after inlining B (right):
  x $← Zq; y $← Zq;
  α ← g^x; β ← g^y; γ ← g^(xy);
  (m0, m1) ← A(α);
  b $← {0, 1};
  b′ ← A′(α, β, γ × m_b);
  d ← b = b′

Tactic: inline_r B.
Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof.

35/59

slide-72
SLIDE 72

Example: ElGamal encryption

After ep (constant folding and propagation), both sides call A′(g^x, g^y, g^(xy) × m_b):

Left:
  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  ζ ← g^(xy) × m_b;
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

≃^∅_{d}

Right:
  x $← Zq; y $← Zq;
  α ← g^x; β ← g^y; γ ← g^(xy);
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

Tactics so far: ep. inline_r B.
Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof.

35/59

slide-73
SLIDE 73

Example: ElGamal encryption

Left:
  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  ζ ← g^(xy) × m_b;
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

≃^∅_{d}

Right:
  x $← Zq; y $← Zq;
  α ← g^x; β ← g^y; γ ← g^(xy);
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

Tactics so far: ep. inline_r B.
Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof. deadcode.

35/59

slide-74
SLIDE 74

Example: ElGamal encryption

After deadcode, both sides are identical:

  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

Tactics so far: ep. inline_r B.
Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof. deadcode.

35/59

slide-75
SLIDE 75

Example: ElGamal encryption

  x $← Zq; y $← Zq;
  (m0, m1) ← A(g^x);
  b $← {0, 1};
  b′ ← A′(g^x, g^y, g^(xy) × m_b);
  d ← b = b′

Lemma foo: ElGamal0 ≃^∅_{d} DDH0
Proof. deadcode. eqobs_in. Qed.

Pr_{ElGamal0,m}[b = b′] = Pr_{DDH0,m}[b = b′]

35/59

slide-76
SLIDE 76

Demo: ElGamal encryption

slide-77
SLIDE 77

Reasoning about Failure Events

Lemma (Fundamental Lemma of Game-Playing)

Let A, B, F be events and G1, G2 be two games such that
Pr[G1 : A ∧ ¬F] = Pr[G2 : B ∧ ¬F]
Then |Pr[G1 : A] − Pr[G2 : B]| ≤ Pr[G1,2 : F]

37/59

slide-78
SLIDE 78

Automation

When A = B and F = bad: if G1, G2 are syntactically identical except after program points setting bad, e.g.

Game G1 : . . . bad ← true; c1 . . .
Game G2 : . . . bad ← true; c2 . . .

then
Pr_{G1,m}[A | ¬bad] = Pr_{G2,m}[A | ¬bad]
Pr_{G1,m}[bad] = Pr_{G2,m}[bad]

Corollary
|Pr_{G1,m}[A] − Pr_{G2,m}[A]| ≤ Pr_{G1,2}[bad]

38/59


slide-80
SLIDE 80

Application: PRP/PRF Switching Lemma

Game G_RP : L ← nil; b ← A()
Oracle O(x) :
  if x ∉ dom(L) then
    y $← {0, 1}^ℓ \ ran(L);
    L ← (x, y) :: L
  return L(x)

Game G_RF : L ← nil; b ← A()
Oracle O(x) :
  if x ∉ dom(L) then
    y $← {0, 1}^ℓ;
    L ← (x, y) :: L
  return L(x)

Suppose A makes at most q queries to O. Then

|Pr[G_RP : b] − Pr[G_RF : b]| ≤ q(q − 1) / 2^(ℓ+1)

First introduced by Impagliazzo and Rudich in 1989
Proof fixed by Bellare and Rogaway (2006) and Shoup (2004)
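The bound can be sanity-checked by simulating the random-function game and estimating the probability of the bad event, a range collision; the parameters ELL, Q and TRIALS below are arbitrary toy choices:

```python
import secrets

ELL, Q, TRIALS = 8, 10, 5000   # toy parameters: l-bit outputs, q queries

def bad_event():
    """One run of the random-function oracle on q distinct queries;
    report whether a range collision (the 'bad' flag) occurred."""
    seen = set()
    for _ in range(Q):
        y = secrets.randbelow(2 ** ELL)   # y <$ {0,1}^l
        if y in seen:
            return True                    # bad <- true
        seen.add(y)
    return False

est = sum(bad_event() for _ in range(TRIALS)) / TRIALS
bound = Q * (Q - 1) / 2 ** (ELL + 1)       # q(q-1)/2^(l+1), here ~0.176
# est stays below bound, up to sampling error
```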

39/59

slide-81
SLIDE 81

Proof

Game G_RP : L ← nil; b ← A()
Oracle O(x) :
  if x ∉ dom(L) then
    y $← {0, 1}^ℓ;
    if y ∈ ran(L) then bad ← true; y $← {0, 1}^ℓ \ ran(L)
    L ← (x, y) :: L
  return L(x)

Game G_RF : L ← nil; b ← A()
Oracle O(x) :
  if x ∉ dom(L) then
    y $← {0, 1}^ℓ;
    if y ∈ ran(L) then bad ← true
    L ← (x, y) :: L
  return L(x)

|Pr[G_RP : b] − Pr[G_RF : b]| ≤ Pr[G_RF : bad]

40/59

slide-82
SLIDE 82

Proof

Failure Event Lemma

Let k be a counter for O and m(bad) = false:
IF Pr[O : bad] ≤ f(m(k)) THEN Pr[G : bad] ≤ Σ_{k=0}^{q_O − 1} f(k)

Oracle O(x) :
  if x ∉ dom(L) then
    y $← {0, 1}^ℓ;
    if y ∈ ran(L) then bad ← true;
    L ← (x, y) :: L
  return L(x)

Prove that Pr[O : bad] ≤ |L|/2^ℓ
Summing over the q queries, Pr[G_RF : bad] ≤ q(q − 1) / 2^(ℓ+1)

41/59

slide-83
SLIDE 83

Application

Zero-Knowledge Proofs

A Machine-Checked Formalization of Σ-Protocols. 23rd IEEE Computer Security Foundations Symposium, CSF 2010

42/59

slide-84
SLIDE 84

Zero-Knowledge Proofs

[Figure: Peggy (the prover) convinces Victor (the verifier)]

43/59


slide-87
SLIDE 87

If you ever need to explain this to your kids

How to Explain Zero-Knowledge Protocols to your Children Jean-Jacques Quisquater, Louis C. Guillou. CRYPTO’89

44/59

slide-88
SLIDE 88

Properties of Zero-Knowledge Proofs

Completeness: an honest prover always convinces an honest verifier
Soundness: a dishonest prover (almost) never convinces a verifier
Zero-Knowledge: a verifier doesn't learn anything from playing the protocol

45/59

slide-89
SLIDE 89

Formalizing Σ-Protocols

Prover knows (x, w) s.t. R(x, w); Verifier knows only x

Prover computes commitment r and sends r
Verifier samples challenge c and sends c
Prover computes response s and sends s
Verifier accepts or rejects the response

46/59

slide-90
SLIDE 90

Formalizing Σ-Protocols

Prover knows (x, w) s.t. R(x, w); Verifier knows only x

Prover: (r, state) ← P1(x, w), sends r
Verifier: c $← C, sends c
Prover: s ← P2(x, w, state, c), sends s
Verifier: b ← V2(x, r, c, s)

46/59

slide-91
SLIDE 91

Formalizing Σ-Protocols

A Σ-protocol is given by:
Types for x, w, r, s, state
A knowledge relation R
A challenge set C
Procedures P1, P2, V2

The protocol can be seen as a program

protocol(x, w) :
  (r, state) ← P1(x, w);
  c $← C;
  s ← P2(x, w, state, c);
  b ← V2(x, r, c, s)
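An instance helps make P1, P2 and V2 concrete; a toy Python sketch of Schnorr's Σ-protocol for knowledge of a discrete logarithm (tiny, insecure parameters chosen purely for illustration):

```python
import secrets

# Schnorr's protocol for knowledge of w with x = g^w, in the
# order-11 subgroup of Z_23* (toy parameters, not secure)
p, q, g = 23, 11, 4
w = 7                            # witness: secret exponent
x = pow(g, w, p)                 # statement: x = g^w

def P1(x, w):
    a = secrets.randbelow(q)
    return pow(g, a, p), a       # commitment r = g^a, state = a

def P2(x, w, state, c):
    return (state + c * w) % q   # response s = a + c*w mod q

def V2(x, r, c, s):
    return pow(g, s, p) == (r * pow(x, c, p)) % p   # g^s == r * x^c

# completeness: an honest run always accepts
for _ in range(100):
    r, st = P1(x, w)
    c = secrets.randbelow(q)     # honest verifier's challenge
    s = P2(x, w, st, c)
    assert V2(x, r, c, s)
```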

47/59

slide-92
SLIDE 92

Formalizing Σ-Protocols

Completeness

∀x, w. R(x, w) ⇒ Pr[protocol(x, w) : b = true] = 1

Soundness

There exists a polynomial-time procedure KE s.t. if c1 ≠ c2 and (x, r, c1, s1) and (x, r, c2, s2) are accepting, then
Pr[w ← KE(x, r, c1, c2, s1, s2) : R(x, w)] = 1
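For Schnorr's protocol, the extractor KE of the soundness property is just modular arithmetic on the two responses; a self-contained sketch with hard-coded toy values (the parameters p, q, g and transcripts are illustrative):

```python
# Knowledge extractor for Schnorr: two accepting transcripts with the
# same commitment r but distinct challenges reveal the witness w,
# since s1 - s2 = (c1 - c2) * w mod q.
p, q, g = 23, 11, 4              # toy order-11 subgroup of Z_23*
w = 7
x = pow(g, w, p)

a = 5                            # prover's randomness, reused in both runs
r = pow(g, a, p)
c1, c2 = 3, 8                    # two distinct challenges
s1 = (a + c1 * w) % q
s2 = (a + c2 * w) % q

def KE(x, r, c1, c2, s1, s2):
    # w = (s1 - s2) / (c1 - c2) mod q  (requires Python 3.8+ modular inverse)
    return ((s1 - s2) * pow(c1 - c2, -1, q)) % q

assert KE(x, r, c1, c2, s1, s2) == w
```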

48/59

slide-93
SLIDE 93

Honest-Verifier ZK vs. Special Honest-Verifier ZK

protocol(x, w) :
  (r, state) ← P1(x, w);
  c $← C;
  s ← P2(x, w, state, c);
  b ← V2(x, r, c, s)

protocol(x, w, c) :
  (r, state) ← P1(x, w);
  s ← P2(x, w, state, c);
  b ← V2(x, r, c, s)

Special Honest-Verifier ZK

∃S. ∀x, w, c. R(x, w) ⇒ protocol(x, w, c) ≃^{x,c}_{r,c,s} (r, s) ← S(x, c)

Honest-Verifier ZK

∃S. ∀x, w. R(x, w) ⇒ protocol(x, w) ≃^{x}_{r,c,s} (r, c, s) ← S(x)

49/59


slide-95
SLIDE 95

What does it take to trust a proof in CertiCrypt

Verification is fully automated! (but proof construction is time-consuming)

You need to:
trust the type checker of Coq
trust the language semantics
make sure the security statement (a few lines in Coq) is the one you expect

You don't need to:
understand or even read the proof
trust tactics or program transformations
trust program logics or the wp-calculus
be an expert in Coq

50/59

slide-96
SLIDE 96

Contributions of CertiCrypt

A language-based approach to computational crypto proofs
Automated framework to formalize game-based proofs in Coq
Probabilistic extension of Relational Hoare Logic
Foundations for techniques used in crypto proofs
Several case studies:
PRP/PRF switching lemma
Chosen-plaintext security of ElGamal
Chosen-plaintext security of Hashed ElGamal in ROM and SM
Unforgeability of Full-Domain Hash signatures
Adaptive chosen-ciphertext security of OAEP
Σ-protocols
IBE (F. Olmedo), Golle-Juels (Z. Luo), BLS (M. Christofi)

51/59

slide-97
SLIDE 97

The road ahead

CertiCrypt bridges the gap between fully formal machine-checked proofs and pen-and-paper proof sketches
Fact: cryptographers won't embrace proof assistants (CertiCrypt) anytime soon
What if we start building a bridge from the other side? Start from a proof sketch and try to fill in the blanks and justify reasoning steps, building into the tool as much automation as possible. Record and highlight unjustified proof steps and let the user give finer-grained justifications, perhaps interactively, using automated tools.

52/59

slide-98
SLIDE 98

EasyCrypt

Computer-aided proofs for the working cryptographer

53/59

slide-99
SLIDE 99

EasyCrypt

Generate a fully verified proof from a sequence of games and probabilistic statements.

Rationale

Crypto papers provide the sequence of games and statements
Probabilistic statements have a direct translation to relational Hoare judgments

Challenges

Automatic verification of relational Hoare judgments
Invariant generation for adversaries

54/59

slide-100
SLIDE 100

Automatic verification of relational Hoare judgments

Verify the validity of G1 ∼ G2 : Ψ ⇒ Φ by generating VCs and sending them to an SMT solver

Key idea: use one-sided rules (a.k.a. self-composition), except for:

Procedure calls (procedures have relational specs!):
use inlining if possible
if not (e.g. adversary calls), use two-sided rules; needs call graphs to be similar

Random assignments:
put programs in static single (random) assignment form
hoist random assignments
use the specialized two-sided rule for random assignments

55/59

slide-101
SLIDE 101

EasyCrypt tool chain

Tool chain: parsing mode and OCaml toplevel, Why, CertiCrypt (Coq), SMT solvers

56/59

slide-102
SLIDE 102

EasyCrypt demo

x $← {0,1}^k; y ← x ⊕ z   ≃^{z}_{x,y,z}   y $← {0,1}^k; x ← y ⊕ z

m1 Ψ m2 ⇒ m1{t/x} Φ m2{f(t)/y}
──────────────────────────────
x $← T ∼ y $← T : Ψ ⇒ Φ

57/59

slide-103
SLIDE 103

Conclusion

There is a problem with cryptographic proofs
Cryptographic proofs can (and should) be machine-checked
Verification technology is mature enough to provide a solution
We provided two:

CertiCrypt: fully formalized machine-checked proofs in Coq
EasyCrypt: automated SMT-based tool with guarantees as strong as CertiCrypt's

Have we reached the point where the formalization effort pays off? Are cryptographers willing to adopt these tools?

58/59

slide-104
SLIDE 104

Thanks!!! http://certicrypt.gforge.inria.fr

59/59