Provable Security against Side-Channel Attacks, Matthieu Rivain (PowerPoint PPT Presentation)



SLIDE 1

Provable Security against Side-Channel Attacks

Matthieu Rivain matthieu.rivain@cryptoexperts.com

MCrypt Seminar – Aug. 11th 2014

SLIDE 2

Outline 1 Introduction 2 Modeling side-channel leakage 3 Achieving provable security against SCA



SLIDE 9

Side-channel attacks

Sound and temperature
◮ proofs of concept in idealized conditions
◮ minor practical threats on embedded systems

Running time
◮ trivial solution: constant-time implementations
◮ must be carefully addressed: a timing flaw was still discovered in OpenSSL in 2011!
◮ timing flaws can be induced by the processor (cache, branch prediction, ...)


SLIDE 11

Side-channel attacks

Power consumption and EM emanations
◮ close by nature (switching activity)
◮ can be modeled as weighted sums of the transitions
◮ EM can be more informative (placing of the probe) but assumes raw access to the circuit
◮ both are noisy, i.e. non-deterministic
◮ noise amplification by generating random switching activity

This talk: leakage = power consumption + EM emanations

SLIDE 12

Provable security

Traditional approach
◮ define an adversarial model (e.g. chosen-plaintext attacker)
◮ define a security goal (e.g. distinguish two ciphertexts)


SLIDE 14

Provable security

Traditional approach
◮ define an adversarial model (e.g. chosen-plaintext attacker)
◮ define a security goal (e.g. distinguish two ciphertexts)

Game (diagram): the challenger draws k ←$ K and b ←$ {0, 1}; the adversary A, with access to an oracle E(k, ·) answering queries m with c, sends (m0, m1) and receives c∗ ← E(k, mb); A outputs a guess b̂, and wins if b̂ = b.

Security reduction: if A exists with non-negligible |Pr[b̂ = b] − 1/2|, then A can be used to efficiently solve a hard problem.


SLIDE 18

Provable security

... in the presence of leakage

Game (diagram): the challenger draws k ←$ K and b ←$ {0, 1}; the adversary A, with access to an oracle E(k, ·) answering queries m with (c, ℓ), sends (m0, m1) and receives c∗ ← E(k, mb) together with the leakages ℓ∗1, . . . , ℓ∗q; A outputs a guess b̂, with b̂ =? b.

Issue: how to model the leakage?

SLIDE 19

Outline 1 Introduction 2 Modeling side-channel leakage 3 Achieving provable security against SCA

SLIDE 20

Modeling side-channel leakage

The encryption oracle cannot be seen as a mathematical function E(k, ·) : m → c anymore, but as a computation.

Two classical approaches to model computation:
◮ Turing machines (programs)
◮ circuits

How to model leaking computation?

SLIDE 21

Modeling side-channel leakage

Chronology
◮ probing model (circuits, 2003)
◮ physically observable cryptography (Turing machines, 2004)
◮ leakage-resilient cryptography (2008)
◮ further leakage models for circuits (2010)
◮ noisy leakage model (2013)

Presentation
◮ leakage models for circuits
◮ leakage models for programs


SLIDE 25

Leakage Models for Circuits

[Ishai-Sahai-Wagner. CRYPTO 2003]
A circuit is a directed graph whose nodes are gates (Op, copy, mem, randomness $) and whose edges are wires.

Figure: example circuit with inputs in1, in2, in3, internal wires w1, . . . , w13, and outputs out1, out2.

At each cycle, the circuit leaks f(w1, w2, . . . , wn).


SLIDE 28

Leakage Models for Circuits

Probing security model [Ishai-Sahai-Wagner. CRYPTO 2003]
◮ the adversary gets (wi)i∈I for some chosen set I with |I| ≤ t

AC0 leakage model [Faust et al. EUROCRYPT 2010]
◮ the leakage function f belongs to the AC0 complexity class
◮ i.e. f is computable by circuits of constant depth d

Noisy circuit-leakage model [Faust et al. EUROCRYPT 2010]
◮ f : (w1, w2, . . . , wn) → (w1 ⊕ ε1, w2 ⊕ ε2, . . . , wn ⊕ εn), with εi = 1 with probability p < 1/2 and εi = 0 with probability 1 − p

These models fail to capture EM and power-consumption leakage!
Circuits are not convenient to model software implementations (or algorithms / protocols)

SLIDE 29

Physically Observable Cryptography

[Micali-Reyzin. TCC’04]
◮ framework for leaking computation
◮ strong formalism using Turing machines
◮ assumption: Only Computation Leaks (OCL)
◮ computation divided into subcomputations y ← SC(x)
◮ each SC accesses a part of the state x and leaks f(x)
◮ f adaptively chosen by the adversary
◮ no actual proposal for f

SLIDE 30

Leakage Resilient Cryptography

Model introduced in [Dziembowski-Pietrzak. STOC’08]
◮ specialization of the Micali-Reyzin framework
◮ leakage functions follow the bounded retrieval model [Crescenzo et al. TCC’06]:
f : {0, 1}^n → {0, 1}^λ for some constant λ < n

SLIDE 31

Leakage Resilient Cryptography

Example: LR stream cipher [Pietrzak. EUROCRYPT’09]
Many further LR crypto primitives published so far
Generic LR compilers
◮ [Goldwasser-Rothblum. FOCS’12]
◮ [Dziembowski-Faust. TCC’12]

SLIDE 32

Leakage Resilient Cryptography

Limitation: the leakage of a subcomputation is limited to λ-bit values for λ < n (the input size)

Side-channel leakage is far bigger than n bits
◮ although it may not remove all the entropy of x

Figure: Power consumption of a DES computation.

SLIDE 33

Noisy Leakage Model

[Prouff-Rivain. EUROCRYPT 2013]
◮ OCL assumption (Micali-Reyzin framework)
◮ new class of noisy leakage functions
◮ an observation f(x) introduces a bounded bias in Pr[x]
◮ very generic

SLIDE 34

Notion of bias

Bias of X given Y = y:

β(X|Y = y) = ‖ Pr[X] − Pr[X|Y = y] ‖

with ‖·‖ the Euclidean norm.

Bias of X given Y:

β(X|Y) = Σ_{y∈Y} Pr[Y = y] · β(X|Y = y).

β(X|Y) ∈ [0; 1 − 1/|X|] (independence / deterministic relation)

Related to MI by:

(1/ln 2) · β(X|Y) ≤ MI(X; Y) ≤ (|X|/ln 2) · β(X|Y)

SLIDE 35

Noisy Leakage Model

Every subcomputation leaks a noisy function f of its input
◮ noise modeled by a fresh random tape argument
◮ ψ is some noise parameter:

f ∈ N(1/ψ) ⇒ β(X|f(X)) < 1/ψ

Captures any form of noisy leakage

SLIDE 36

Noisy Leakage Model

In practice, the multivariate Gaussian model is widely admitted:

f(x) ∼ N(m_x, Σ) for all x ∈ X

The bias can be efficiently computed:

β(X|f(X)) = Σ_y p_y · ( Σ_x (p_{x|y} − 1/|X|)² )^{1/2}

with p_y = Σ_x φ_Σ(y − m_x) / Σ_x Σ_z φ_Σ(z − m_x) and p_{x|y} = φ_Σ(y − m_x) / Σ_v φ_Σ(y − m_v),

where φ_Σ : y → exp(−(1/2) yᵀ Σ⁻¹ y).
SLIDE 37

Noisy Leakage Model

Illustration: univariate Hamming weight model with Gaussian noise

f(X) = HW(X) + N(0, σ²)

Figure: log10 β(X|f(X)) w.r.t. σ (for σ ranging from 50 to 200).

SLIDE 38

Noisy Leakage Model

Illustration: univariate Hamming weight model with Gaussian noise

f(X) = HW(X) + N(0, σ²)

Figure: ψ = 1/β(X|f(X)) w.r.t. σ (for σ ranging from 50 to 200).
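The bias in this Hamming-weight illustration can be estimated numerically. The following is a minimal Monte-Carlo sketch of our own (not code from the talk): it draws a uniform 8-bit X, simulates the leakage HW(X) + N(0, σ²), computes the Bayesian posterior, and averages the Euclidean distance to the uniform prior. Function names and the trial count are our assumptions.

```python
import math
import random

def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def bias_hw_gaussian(sigma, n_bits=8, trials=2000, seed=0):
    """Monte-Carlo estimate of beta(X | HW(X) + N(0, sigma^2)) for uniform X."""
    rng = random.Random(seed)
    n_vals = 1 << n_bits
    prior = 1.0 / n_vals
    total = 0.0
    for _ in range(trials):
        x = rng.randrange(n_vals)                 # draw X uniformly
        y = hw(x) + rng.gauss(0.0, sigma)         # noisy Hamming-weight leakage
        # posterior Pr[X = v | Y = y] via Bayes (uniform prior cancels out)
        lik = [math.exp(-((y - hw(v)) ** 2) / (2 * sigma * sigma))
               for v in range(n_vals)]
        s = sum(lik)
        post = [l / s for l in lik]
        # Euclidean distance between prior and posterior distributions
        total += math.sqrt(sum((prior - p) ** 2 for p in post))
    return total / trials

b_low_noise = bias_hw_gaussian(2.0)
b_high_noise = bias_hw_gaussian(20.0)
# more noise means smaller bias: b_low_noise > b_high_noise
```

As on the slide, the estimated bias decreases as σ grows, which is what makes ψ = 1/β a usable noise parameter.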

SLIDE 39

Outline 1 Introduction 2 Modeling side-channel leakage 3 Achieving provable security against SCA


SLIDE 41

Achieving provable security against SCA

Game (diagram): the challenger draws k ←$ K and b ←$ {0, 1}; the adversary A, with access to an oracle E(k, ·) answering queries m with (c, ℓ), sends (m0, m1) and receives c∗ ← E(k, mb) together with the leakages ℓ∗1, . . . , ℓ∗q; A outputs a guess b̂, with b̂ =? b.

◮ describe a (protected) implementation of E(k, ·)
◮ model the leakage
◮ provide a security reduction

What about generic security against SCA?
◮ for any cryptosystem, security goal, adversarial model


SLIDE 44

General setting

Diagram: the adversary A plays a Security Game; a Distinguisher observes the interaction with the Leakage Oracle O, whose answers to queries (mi, k) are replaced by ℓ($, $) (leakage on random inputs); the Distinguisher outputs 0/1.

Security: ∀Dist : Adv(Dist^O(·)) ≤ 2^−κ


SLIDE 47

General setting

Diagram: the adversary A plays a Security Game; a Distinguisher observes the interaction with the Leakage Oracle O, which answers queries (mi, k) with ℓ(mi, k); the Distinguisher outputs 0/1.

Security: ∀Dist : Adv(Dist^O(·)) ≤ 2^−κ

⇒ ∀A : Adv(A-SG^O(·)) ≈ Adv(A-SG^O$(·))

Information-theoretic security: MI((m, k); ℓ(m, k)) ≤ 2^−κ

IT Security ⇒ Security

SLIDE 48

Using random sharing

Principle
◮ randomly share the internal state of the computation
◮ an n-sharing of x ∈ F is a tuple (x1, x2, . . . , xn) s.t. x1 + x2 + · · · + xn = x, with n − 1 degrees of randomness
◮ subcomputations y ← SC(x) are replaced by (y1, y2, . . . , yn) ← SC′(x1, x2, . . . , xn)
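As a concrete illustration (our own sketch, not from the talk), a Boolean n-sharing over bytes can be generated by drawing n − 1 uniformly random shares and fixing the last one so the XOR recombines to x; the helper names are assumptions of this example:

```python
import secrets
from functools import reduce

def share(x, n, bits=8):
    """n-sharing of x: n-1 uniformly random shares, plus one final
    share chosen so that the XOR of all n shares equals x."""
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    shares.append(reduce(lambda a, b: a ^ b, shares, x))
    return shares

def unshare(shares):
    """Recombine a sharing by XORing all shares together."""
    return reduce(lambda a, b: a ^ b, shares)

s = share(0x2A, 5)
assert unshare(s) == 0x2A   # any 5-sharing of 0x2A recombines to 0x2A
```

Each individual share is uniformly distributed, which is exactly why an adversary seeing fewer than n shares learns nothing about x.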

SLIDE 49

Using random sharing

Soundness [Chari et al. CRYPTO’99]
◮ univariate Gaussian leakage model: ℓi ∼ xi + N(µ, σ²)
◮ distinguishing ((ℓi)i | x = 0) from ((ℓi)i | x = 1) takes q samples with q ≥ cst · σ^n

Limitations:
◮ univariate leakage model, Gaussian noise assumption
◮ static leakage of the shares (i.e. without computation)
◮ no scheme proposed to securely compute on a shared state

SLIDE 50

Ishai-Sahai-Wagner Scheme

[Ishai-Sahai-Wagner. CRYPTO 2003]
◮ binary circuit model
◮ goal: security against t-probing attacks
◮ every wire w is shared into n wires w1, w2, . . . , wn with w = w1 ⊕ w2 ⊕ · · · ⊕ wn
◮ issue: how to encode logic gates? (NOT gates and AND gates)
◮ NOT gates encoding: complement a single share, e.g. w̄ = w̄1 ⊕ w2 ⊕ · · · ⊕ wn


SLIDE 52

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj
SLIDE 53

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3):

[ a1b1  a1b2  a1b3 ]
[ a2b1  a2b2  a2b3 ]
[ a3b1  a3b2  a3b3 ]

SLIDE 54

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): split into the upper part (with diagonal) and the strictly lower part:

[ a1b1  a1b2  a1b3 ]   [  .     .     .  ]
[  .    a2b2  a2b3 ] ⊕ [ a2b1   .     .  ]
[  .     .    a3b3 ]   [ a3b1  a3b2   .  ]


SLIDE 56

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): fold the symmetric terms together:

[ a1b1   a1b2 ⊕ a2b1   a1b3 ⊕ a3b1 ]
[  .         a2b2      a2b3 ⊕ a3b2 ]
[  .          .            a3b3    ]


SLIDE 58

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): add fresh random bits r1,2, r1,3, r2,3 to the off-diagonal terms:

[ a1b1   a1b2 ⊕ a2b1   a1b3 ⊕ a3b1 ]   [ .  r1,2  r1,3 ]
[  .         a2b2      a2b3 ⊕ a3b2 ] ⊕ [ .   .    r2,3 ]
[  .          .            a3b3    ]   [ .   .     .   ]

SLIDE 59

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): each random ri,j is inserted twice, so the random matrix sums to zero:

[ a1b1   a1b2 ⊕ a2b1   a1b3 ⊕ a3b1 ]   [  .    r1,2  r1,3 ]
[  .         a2b2      a2b3 ⊕ a3b2 ] ⊕ [ r1,2   .    r2,3 ]
[  .          .            a3b3    ]   [ r1,3  r2,3   .   ]

SLIDE 60

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): the final share matrix:

[ a1b1   (a1b2 ⊕ r1,2) ⊕ a2b1   (a1b3 ⊕ r1,3) ⊕ a3b1 ]
[ r1,2          a2b2            (a2b3 ⊕ r2,3) ⊕ a3b2 ]
[ r1,3          r2,3                    a3b3          ]


SLIDE 64

Ishai-Sahai-Wagner Scheme

AND gates encoding

Input: (ai)i, (bi)i s.t. Σi ai = a, Σi bi = b
Output: (ci)i s.t. Σi ci = a · b

a · b = (Σi ai) · (Σi bi) = Σi,j ai bj

Example (n = 3): each output share ci is the XOR of row i:

c1 ← [ a1b1   (a1b2 ⊕ r1,2) ⊕ a2b1   (a1b3 ⊕ r1,3) ⊕ a3b1 ]
c2 ← [ r1,2          a2b2            (a2b3 ⊕ r2,3) ⊕ a3b2 ]
c3 ← [ r1,3          r2,3                    a3b3          ]
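The AND-gate encoding above can be sketched in code. This is an illustrative Python model of the ISW multiplication at the bit level (our own naming, not a reference implementation): fresh randoms r[i][j] mask the symmetric cross-products, and each output share is the XOR of one row.

```python
import secrets
from functools import reduce

def isw_and(a_sh, b_sh):
    """ISW AND on two n-sharings of bits a and b: compute the cross
    products ai*bj, mask each symmetric pair with a fresh random bit
    r[i][j], and output ci as the XOR of row i."""
    n = len(a_sh)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)  # fresh random bit
            # note the bracketing: compute (r ^ ai*bj) first, then ^ aj*bi
            r[j][i] = (r[i][j] ^ (a_sh[i] & b_sh[j])) ^ (a_sh[j] & b_sh[i])
    c = []
    for i in range(n):
        ci = a_sh[i] & b_sh[i]
        for j in range(n):
            if j != i:
                ci ^= r[i][j]
        c.append(ci)
    return c

def share_bit(x, n):
    """Boolean n-sharing of the bit x."""
    sh = [secrets.randbits(1) for _ in range(n - 1)]
    return sh + [reduce(lambda u, v: u ^ v, sh, x)]

# sanity check: the output sharing recombines to a AND b
for a in (0, 1):
    for b in (0, 1):
        c = isw_and(share_bit(a, 3), share_bit(b, 3))
        assert reduce(lambda u, v: u ^ v, c) == a & b
```

Each ri,j cancels between rows i and j, so the XOR of all ci equals Σi,j ai bj = a · b, matching the matrix on the slide.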


SLIDE 66

Ishai-Sahai-Wagner Scheme

Sketch of security proof
◮ t-probing adversary ⇒ take n = 2t + 1 shares
◮ probed wires: v1, v2, . . . , vt
◮ construct the set I = {row and column indices of the vk}
◮ (v1, v2, . . . , vt) is perfectly simulated from (ai)i∈I and (bi)i∈I
◮ |I| ≤ 2t < n ⇒ (ai)i∈I and (bi)i∈I are random |I|-tuples

SLIDE 67

Ishai-Sahai-Wagner Scheme

Figure: AND gate for n = 3, with input sharings (ai)i and (bi)i, fresh randomness ($), and output shares c0, c1, c2.

SLIDE 68

Ishai-Sahai-Wagner Scheme

Can be transposed to the dth-order security model
◮ the adversary must combine the leakage of at least d subcomputations to recover information
◮ in the presence of noise, d is a relevant security parameter [Chari et al. CRYPTO’99]

Many dth-order secure schemes based on the ISW scheme
Not fully satisfactory
◮ a relevant adversary should use all the leakage

SLIDE 69

Security in the noisy model

[Prouff-Rivain. EUROCRYPT 2013]
Every y ← SC(x) leaks f(x) where β(X|f(X)) < 1/ψ

Information-theoretic security proof:

MI((m, k); ℓ(m, k)) ≤ O(ω^−d)

Assumption: the noise parameter ψ can be linearly increased
Need for a leak-free component for refreshing:

x = (x0, x1, . . . , xd) with Σi xi = x   →   x′ = (x′0, x′1, . . . , x′d) with Σi x′i = x

with (x0, . . . , xd | x) and (x′0, . . . , x′d | x) mutually independent.
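For intuition, the refreshing interface can be sketched by XORing in a fresh sharing of zero. This naive additive gadget (our own sketch) illustrates the interface only; it is not the leak-free component the proof assumes, and by itself it does not achieve the required independence properties against joint leakage:

```python
import secrets
from functools import reduce

def refresh(shares, bits=8):
    """Re-randomize a sharing of x without changing x:
    XOR in a fresh sharing of zero (n-1 randoms plus their XOR)."""
    n = len(shares)
    rand = [secrets.randbits(bits) for _ in range(n - 1)]
    zero = rand + [reduce(lambda a, b: a ^ b, rand, 0)]  # XORs to 0
    return [s ^ z for s, z in zip(shares, zero)]

old = [0x13, 0x37, 0xAB]
new = refresh(old)
assert reduce(lambda a, b: a ^ b, new) == reduce(lambda a, b: a ^ b, old)
```

The recombined value is unchanged while every individual share is freshly distributed, which is the property exploited between subcomputation sequences.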

SLIDE 70

Overview of the proof

Consider a SPN computation

Figure: Example of SPN round.

SLIDE 71

Overview of the proof

Classical implementation protected with sharing

Figure: Example of SPN round protected with sharing.

SLIDE 72

S-Box computation

[Carlet et al. FSE’12]
Polynomial evaluation over GF(2^n)
Two types of elementary calculations:
◮ linear functions (additions, squares, multiplications by coefficients)
◮ multiplications over GF(2^n)


SLIDE 75

Linear functions

Given a sharing X = X0 ⊕ X1 ⊕ · · · ⊕ Xd, a linear function λ is applied share-wise: Xi → λ(Xi); each of these computations leaks fi(Xi).

For fi ∈ N(1/ψ) with ψ = O(|X|^{1/2} ω), we show

MI(X; (f0(X0), f1(X1), . . . , fd(Xd))) ≤ 1/ω^{d+1}

SLIDE 77

Multiplications

Inputs: sharings Σi Ai = g(X) and Σi Bi = g(X), where X = s-box input

First step: cross-products Ai × Bj, each leaking fi,j(Ai, Bj):

f0,0(A0, B0)  f0,1(A0, B1)  · · ·  f0,d(A0, Bd)
f1,0(A1, B0)  f1,1(A1, B1)  · · ·  f1,d(A1, Bd)
. . .
fd,0(Ad, B0)  fd,1(Ad, B1)  · · ·  fd,d(Ad, Bd)
SLIDE 78

Multiplications

Inputs: sharings Σi Ai = g(X) and Σi Bi = g(X), where X = s-box input

First step: cross-products Ai × Bj

For fi,j ∈ N(1/ψ) with ψ = O(|X|^{3/2} d ω), we show

MI(X; (fi,j(Ai, Bj))i,j) ≤ 1/ω^{d+1}

SLIDE 79

Multiplications

Second step: refreshing
Apply a refresh on each column and on one row of the cross-product matrix (Ai × Bj)i,j

We get a fresh (d + 1)²-sharing (Vi,j)i,j of A × B


SLIDE 82

Multiplications

Third step: summing rows
Takes d elementary calculations (XORs) per row:

Ti,1 ← Vi,0 ⊕ Vi,1        leaking fi,1(Vi,0, Vi,1)
Ti,2 ← Ti,1 ⊕ Vi,2        leaking fi,2(Ti,1, Vi,2)
. . .
Ti,d ← Ti,d−1 ⊕ Vi,d      leaking fi,d(Ti,d−1, Vi,d)

For fi,j ∈ N(1/ψ) with ψ = O(|X|^{3/2} ω), we show

MI(X; (F0, F1, . . . , Fd)) ≤ 1/ω^{d+1}

where Fi = (fi,1(Vi,0, Vi,1), fi,2(Ti,1, Vi,2), . . . , fi,d(Ti,d−1, Vi,d))
SLIDE 83

Putting everything together

Several sequences of subcomputations, each leaking Lt with

MI((m, k); Lt) ≤ 1/ω^{d+1}

Use of share-refreshing between each sequence
◮ the (Lt)t are mutually independent given (m, k)

We hence have

MI((m, k); (L1, L2, . . . , LT)) ≤ Σ_{t=1..T} MI((m, k); Lt) ≤ T/ω^{d+1}

SLIDE 84

Improved security proof

[Duc-Dziembowski-Faust. EUROCRYPT 2014]
◮ security reduction: probing model ⇒ noisy model
◮ ISW scheme secure in the noisy model
◮ no need for a leak-free component!


SLIDE 86

Improved security proof

Consider y1 ← SC1(x1), y2 ← SC2(x2), . . . , yn ← SCn(xn)

t-probing model: ℓ = (xi)i∈I with |I| = t

ε-random probing model: ℓ = (ϕ1(x1), ϕ2(x2), . . . , ϕn(xn))
◮ where ϕi is an ε-identity function, i.e. ϕi(x) = x with probability ε and ϕi(x) = ⊥ with probability 1 − ε

δ-noisy model: ℓ = (f1(x1), f2(x2), . . . , fn(xn)) with β(X|fi(X)) ≤ δ (here ‖·‖ is the L1 norm)

t-probing security ⇒ ε-random probing security ⇒ δ-noisy security


SLIDE 88

From probing to random probing

ε-random probing adv. Arp ⇒ t-probing adv. Ap
◮ with t = 2nε − 1

Ap works as follows:
◮ sample (z1, z2, . . . , zn) where zi = 1 with probability ε and zi = 0 with probability 1 − ε
◮ set I = {i | zi = 1}; if |I| > t return ⊥
◮ get (xi)i∈I
◮ call Arp on (y1, y2, . . . , yn) where yi = xi if i ∈ I and yi = ⊥ if i ∉ I

If |I| ≤ t : (y1, y2, . . . , yn) ∼ (ϕ1(x1), ϕ2(x2), . . . , ϕn(xn))
Chernoff bound: Pr[|I| > t] ≤ exp(−t/6)
∀Arp : Adv(Arp) ≤ maxAp Adv(Ap) + exp(−t/6)


SLIDE 90

From random probing to noisy leakage

Main lemma: every f s.t. β(X|f(X)) ≤ δ can be written as

f = f′ ◦ ϕ

where ϕ is an ε-identity function with ε ≤ δ|X|, and f′ is efficient to sample whenever Pr[f(x) = y] is efficiently computable.

δ-noisy adversary An ⇒ ε-random probing adv. Arp
◮ get (ϕ1(x1), ϕ2(x2), . . . , ϕn(xn))
◮ call An on (f′1 ◦ ϕ1(x1), f′2 ◦ ϕ2(x2), . . . , f′n ◦ ϕn(xn))

∀An : Adv(An) ≤ maxArp Adv(Arp)

SLIDE 91

Combining both reductions

Security against t-probing ⇒ security against δ-noisy, where

δ = (t + 1) / (2n|X|)

◮ exp(−t/6) must be negligible ⇒ t ≥ 8.65 κ

ISW scheme with d-sharing is secure against δ-noisy attackers, where δ = d / (n|X|) (and d ≥ 17.5 κ)

For the ISW multiplication, n = O(d²) and X = F × F, giving

δ = O(1 / (d|F|²)) ⇒ ψ = O(d|F|²)

Limitation: ψ is still in O(d)

SLIDE 92

Conclusion

New practically relevant model for leaking computation: the noisy model

◮ need for practical investigations of bias estimation
◮ only 2 works proposing formal proofs in this model

Open issues:
◮ a scheme secure with constant noise
◮ secure implementations with different kinds of randomization (e.g. exponent/message blinding for RSA/ECC)