SLIDE 1

Lattice Attacks against Elliptic-Curve Signatures with Blinded Scalar Multiplication

Dahmun Goudarzi, Matthieu Rivain, Damien Vergnaud

SAC 2016, 12 Aug, St. Johns

SLIDE 2

Outline

EC signature schemes based on random nonces:
◮ σ computed from [k]P, with k ←$ [1, q]
◮ σ together with k yields the secret key
◮ lattice attack: a few bits of several k_i yield the secret key
Scenario:
◮ implementation with countermeasures against SCA
◮ blinding of the nonce
◮ noisy side-channel leakage on the bits of the blinded nonce
Issue: noisy information on blinded nonces ⇒ lattice attack?

SLIDE 3

Outline

Approach:
◮ template attack ⇒ probability scores
◮ probability scores ⇒ bit-selection algorithm
◮ selected bits ⇒ lattice attack
◮ dealing with blinding
Presentation:
◮ ECDSA
◮ target implementation & leakage model
◮ Howgrave-Graham and Smart lattice attack
◮ bit selection
◮ experimental results

SLIDE 4

ECDSA

SLIDE 5

Key pair (x, Q) with Q = [x]P ∈ E(K)

SLIDE 6

Key pair (x, Q) with Q = [x]P ∈ E(K)
Signature of h = H(m):
◮ k ←$ [1, q] (q = |E(K)|)
◮ t = xcoord([k]P)
◮ s = (h + t · x) / k (mod q)

SLIDE 7

Key pair (x, Q) with Q = [x]P ∈ E(K)
Signature of h = H(m):
◮ k ←$ [1, q] (q = |E(K)|) ⇒ random nonce k
◮ t = xcoord([k]P)
◮ s = (h + t · x) / k (mod q) ⇒ signature σ = (t, s)

SLIDE 8

Key pair (x, Q) with Q = [x]P ∈ E(K)
Signature of h = H(m):
◮ k ←$ [1, q] (q = |E(K)|) ⇒ random nonce k
◮ t = xcoord([k]P)
◮ s = (h + t · x) / k (mod q) ⇒ signature σ = (t, s)

Verification of σ = (t, s):
k = (h + t · x) / s

SLIDE 9

Key pair (x, Q) with Q = [x]P ∈ E(K)
Signature of h = H(m):
◮ k ←$ [1, q] (q = |E(K)|) ⇒ random nonce k
◮ t = xcoord([k]P)
◮ s = (h + t · x) / k (mod q) ⇒ signature σ = (t, s)

Verification of σ = (t, s):
k = (h + t · x) / s, hence [k]P = [h/s] · P + [(t · x)/s] · P = [h/s] · P + [t/s] · Q
⇒ check xcoord([h/s] · P + [t/s] · Q) =? t
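The signing and verification equations above can be sketched end-to-end on a toy curve. Everything below is a hypothetical miniature (a tiny prime, with the base point and group order found by brute force), not a real curve; it only illustrates that the two equations are consistent.

```python
# Toy sketch of the scheme above; all parameters are illustrative small values.

def ec_add(P1, P2):
    """Affine addition on y^2 = x^3 + A*x + B over F_p (None = point at infinity)."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(m, P):
    """[m]P by double-and-add."""
    R = None
    while m:
        if m & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        m >>= 1
    return R

p, A, B = 97, 2, 3                       # hypothetical tiny curve y^2 = x^3 + 2x + 3
G = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + A * x + B)) % p == 0)
q, R = 1, G                              # brute-force the order of G (fine for a toy)
while R is not None:
    R, q = ec_add(R, G), q + 1

x_key = 11 % q                           # secret key
Q = ec_mul(x_key, G)                     # public key

def sign(h, k):
    t = ec_mul(k, G)[0] % q
    s = (h + t * x_key) * pow(k, -1, q) % q
    return t, s

def verify(h, t, s):
    sinv = pow(s, -1, q)
    R = ec_add(ec_mul(h * sinv % q, G), ec_mul(t * sinv % q, Q))
    return R is not None and R[0] % q == t

h = 29 % q
for k in range(2, q):                    # pick a nonce for which all inverses exist
    try:
        t, s = sign(h, k)
        pow(s, -1, q)
        if t:
            break
    except ValueError:
        continue
assert verify(h, t, s)
```

Since k = (h + t·x)/s (mod q), the verifier recomputes [k]P from public data only, and accepting iff its x-coordinate matches t is exactly the check on the slide.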
SLIDE 10

Target implementation and leakage model

SLIDE 11

Target implementation

Regular binary algorithm (e.g. Montgomery ladder)
Classical side-channel countermeasures:
◮ randomization of point coordinates
◮ scalar blinding

Classic blinding:
1. r ←$ [0, 2^λ − 1]
2. a ← k + r · q
3. return [a]P

Euclidean blinding:
1. r ←$ [1, 2^λ − 1]
2. a ← ⌊k/r⌋; b ← k mod r
3. return [r]([a]P) + [b]P
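The two blindings above can be sketched on the scalar level, with the randomness r passed explicitly (in a real implementation r is drawn fresh for every scalar multiplication); the group order q below is a hypothetical value.

```python
def classic_blinding(k, q, r):
    # a = k + r*q: a ≡ k (mod q), so [a]P = [k]P while the bits of a are randomized
    return k + r * q

def euclidean_blinding(k, r):
    # k = a*r + b with a = floor(k/r), b = k mod r, so [r]([a]P) + [b]P = [k]P
    return divmod(k, r)

q = 2 ** 31 - 1                 # hypothetical group order
k = 123456789 % q
a = classic_blinding(k, q, r=41)
assert a % q == k               # same scalar modulo the group order
a2, b2 = euclidean_blinding(k, r=41)
assert a2 * 41 + b2 == k        # recombining gives back k
```

Classic blinding leaves k intact modulo q (the attack later exploits this congruence), while Euclidean blinding splits k multiplicatively, which is why the two cases are treated differently below.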
SLIDE 12

Leakage model

Algorithm 1: Montgomery ladder
Input: blinded nonce a. Output: [a]P.
1. P0 ← O; P1 ← P
2. for i = ℓ − 1 downto 0 do
3.   P_{1−a_i} ← P_{1−a_i} + P_{a_i}
4.   P_{a_i} ← 2 · P_{a_i}
5. end for
6. return P0

Loop iteration: (P0, P1) ← f(a_i, P0, P1) ⇒ leaks Ψ(a_i, P0, P1)

Gaussian leakage assumption: Ψ(a_i, P0, P1) ∼ N(m_{a_i}, Σ)
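Algorithm 1 can be sketched generically: the version below takes a user-supplied group addition and mirrors the regular two-register flow that the leakage model targets. For a self-contained check, the "points" are plain integers with add(x, y) = x + y, so [a]P is just a · P; this is an illustration of the ladder structure, not the paper's implementation.

```python
def montgomery_ladder(a, P, add, identity):
    """Compute [a]P with the regular two-register Montgomery ladder."""
    R0, R1 = identity, P                 # (P0, P1) <- (O, P)
    for i in reversed(range(a.bit_length())):
        if (a >> i) & 1 == 0:            # a_i = 0: P1 <- P1 + P0; P0 <- 2*P0
            R1 = add(R0, R1)
            R0 = add(R0, R0)
        else:                            # a_i = 1: P0 <- P0 + P1; P1 <- 2*P1
            R0 = add(R0, R1)
            R1 = add(R1, R1)
    return R0                            # [a]P

# Integer "points": add = +, identity = 0, so the ladder must return a * P.
assert montgomery_ladder(0b101101, 7, lambda x, y: x + y, 0) == 45 * 7
```

Every iteration performs exactly one addition and one doubling regardless of the bit value, which is why only the (noisy) per-iteration leakage Ψ(a_i, P0, P1), not the operation sequence, distinguishes the bits.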

SLIDE 13

Template attacker

Get a side-channel trace (ψ_{ℓ−1}, …, ψ_1, ψ_0).
For every i, use leakage templates to decide whether ψ_i ∼ Ψ(0) or ψ_i ∼ Ψ(1).

Maximum likelihood:
Pr[a_i = 0 | ψ_i] = cst · exp(−½ · (ψ_i − m_0)ᵗ · Σ⁻¹ · (ψ_i − m_0))
Pr[a_i = 1 | ψ_i] = cst · exp(−½ · (ψ_i − m_1)ᵗ · Σ⁻¹ · (ψ_i − m_1))

We get Pr[a_i = 0 | ψ_i] ∼ D_θ(a_i) with θ = Λ · (m_0 − m_1) (the multivariate SNR), where ΛᵗΛ = Σ⁻¹.
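The template-matching step can be sketched in the univariate case (the slides use multivariate templates with covariance Σ; below, a single sample with standard deviation sigma and equal priors, so the constant cst cancels in the normalization):

```python
import math

def prob_bit_zero(psi, m0, m1, sigma):
    """Pr[a_i = 0 | psi] for univariate Gaussian templates with equal priors."""
    l0 = math.exp(-0.5 * ((psi - m0) / sigma) ** 2)   # likelihood of a_i = 0
    l1 = math.exp(-0.5 * ((psi - m1) / sigma) ** 2)   # likelihood of a_i = 1
    return l0 / (l0 + l1)

# A sample at the a_i = 0 template mean should strongly favour a_i = 0:
assert prob_bit_zero(psi=1.0, m0=1.0, m1=-1.0, sigma=0.5) > 0.99
```

The further apart m0 and m1 are relative to the noise (the SNR θ above), the closer these scores get to 0 or 1.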

SLIDE 14

Howgrave-Graham and Smart attack with blinded nonces

SLIDE 15

[Diagram: bit layout of the blinded nonce a = k + r · q; k has ℓ bits, the blinding adds λ bits]
SLIDE 16

[Diagram: bit layout of a = k + r · q, with the known and unknown bit blocks of a highlighted]

SLIDE 17

[Diagram: blinded nonces a_0, a_1, …, a_n with their signatures σ_0 = (s_0, t_0), σ_1 = (s_1, t_1), …, σ_n = (s_n, t_n)]

SLIDE 18

[Diagram: blinded nonces a_0, …, a_n with signatures σ_0, …, σ_n]

x ≡ (a_i · s_i − h_i) / t_i (mod q)

SLIDE 19

x ≡ (a_i · s_i − h_i) / t_i ≡ (a_0 · s_0 − h_0) / t_0 (mod q)

SLIDE 20

x ≡ (a_i · s_i − h_i) / t_i ≡ (a_0 · s_0 − h_0) / t_0 (mod q)
⇔ a_i + A · a_0 + B ≡ 0 (mod q)

(for constants A and B derived from the public values)
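The equivalence above can be checked numerically. The explicit formulas for A and B below follow by equating the two expressions for x and clearing denominators (they are not spelled out on the slide); all numbers are toy values, with s_i computed from the signing equation and a_i = k_i + r_i · q from classic blinding.

```python
q = 101                          # toy group order (prime)
x = 57                           # toy secret key

sigs = []
for (k, r, h, t) in [(12, 3, 40, 9), (35, 5, 77, 22)]:
    s = (h + t * x) * pow(k, -1, q) % q   # signing equation s = (h + t*x)/k mod q
    a = k + r * q                         # classic blinding: a ≡ k (mod q)
    sigs.append((a, s, h, t))

(a0, s0, h0, t0), (ai, si, hi, ti) = sigs

# From (a_i*s_i - h_i)/t_i ≡ (a_0*s_0 - h_0)/t_0 (mod q), multiply through by
# t_i/(s_i*t_0) to isolate a_i:
inv = pow(si * t0, -1, q)
A = -s0 * ti * inv % q
B = (h0 * ti - hi * t0) * inv % q
assert (ai + A * a0 + B) % q == 0        # the relation of the slide holds
```

Note that the relation involves the blinded nonces a_i directly, because a_i ≡ k_i (mod q) under classic blinding.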

SLIDE 21

a_i + A · a_0 + B ≡ 0 (mod q)

SLIDE 22

a_i + A · a_0 + B ≡ 0 (mod q)

Expanding a_i and a_0 into their known and unknown bit blocks x_{i,j}:
x_{i,1} + α_{i,2} · x_{i,2} + α_{i,3} · x_{i,3} + ⋯ + β_{i,1} · x_{0,1} + β_{i,2} · x_{0,2} + β_{i,3} · x_{0,3} + ⋯ + γ_i ≡ 0 (mod q)

SLIDE 23

x_{i,1} + α_{i,2} · x_{i,2} + α_{i,3} · x_{i,3} + ⋯ + β_{i,1} · x_{0,1} + β_{i,2} · x_{0,2} + β_{i,3} · x_{0,3} + ⋯ + γ_i = η_i · q

(over the integers, for some integer η_i)

SLIDE 24

x_{i,1} + α_{i,2} · x_{i,2} + ⋯ + β_{i,1} · x_{0,1} + ⋯ + γ_i = η_i · q

⇒ n equations (for i = 1, 2, …, n)

SLIDE 25

α_{i,2} · x_{i,2} + α_{i,3} · x_{i,3} + ⋯ + β_{i,1} · x_{0,1} + β_{i,2} · x_{0,2} + β_{i,3} · x_{0,3} + ⋯ + η_i · q = x_{i,1} + γ_i

⇒ n equations (for i = 1, 2, …, n)

SLIDE 26

α_{i,2} · x_{i,2} + α_{i,3} · x_{i,3} + ⋯ + β_{i,1} · x_{0,1} + β_{i,2} · x_{0,2} + β_{i,3} · x_{0,3} + ⋯ + η_i · q = x_{i,1} + γ_i

⇒ n equations (for i = 1, 2, …, n)

In matrix form, M × y = c: the i-th row of M contains the coefficients (α_{i,j})_j and (β_{i,j})_j together with a diagonal entry q (multiplying η_i), and the right-hand side has entries c_i = x_{i,1} + γ_i.

SLIDE 27

The matrix is then extended with identity rows, one for each remaining unknown block, so that the unknown blocks themselves also appear in the product (yielding the form M · y = v + x).

SLIDE 28

Lattice problem

There exists y such that M · y = v + x,
where v = (γ_1, γ_2, …, γ_n, 0, 0, …, 0) and x is the vector of unknown blocks.

CVP (Closest Vector Problem): from v, find the closest lattice vector (v + x).
Heuristically solvable when
‖(v + x) − v‖ = ‖x‖ ≤ c_0 · √dim(M) · det(M)^{1/dim(M)}
with c_0 ≈ 1/√(2πe) (heuristic).
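The CVP bound above reads as a small helper (dim and det are whatever the constructed lattice gives; the dimension and determinant used below are arbitrary example values):

```python
import math

def cvp_bound(dim, det):
    """Heuristic norm bound under which the target vector is the closest lattice point."""
    c0 = 1 / math.sqrt(2 * math.pi * math.e)   # ≈ 0.242
    return c0 * math.sqrt(dim) * det ** (1.0 / dim)

# e.g. a 40-dimensional lattice of determinant 2^200:
bound = cvp_bound(40, 2.0 ** 200)
```

Intuitively, a larger determinant per dimension leaves more "room" around each lattice point, so larger unknown vectors x can still be recovered.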

SLIDE 29

Lattice attack parameters

Sum of contributions: Σ_{i=0}^{n} (δ_i − λ − c_1 · N_i) ≥ ℓ,
where δ_i = number of known bits in a_i and N_i = number of unknown blocks in a_i.

Loss of λ bits per blinded nonce.
Linear term c_1 · N_i:
◮ overlooked in the original paper
◮ significant in our context
◮ heuristically c_1 ≈ −log₂(c_0) ≈ 2.05
◮ higher in practice

SLIDE 30

Experiments

Practical c_1 values for a 95% success rate (CVP solved with the embedding method):

             (n+1) = 5                  (n+1) = 10                 (n+1) = 20
Nb =      5     10     25     50     10     20     50    100     20     40    100
λ = 0   3.60   2.60   2.56   2.90   4.10   3.30   3.52   3.57   4.85   4.42   4.51
λ = 16  3.40   2.60   2.40   3.02   4.20   3.15   3.40   4.20   5.25   4.77   4.96
λ = 32  3.40   2.60   2.60   2.68   3.90   3.10   3.60   n/a    4.95   4.50   n/a
λ = 64  3.20   2.80   2.36   n/a    3.70   3.55   3.68   n/a    4.80   4.60   n/a

For the tested parameters: 2.3 < c_1 < 5.3

SLIDE 31

Attack on leaking implementations

SLIDE 32

[Diagram: bit layout of a blinded nonce a_i]

SLIDE 33

Bit a_{i,0} leaks ψ ∼ Ψ(a_{i,0}) ⇒ Pr[a_{i,0} = 0]

SLIDE 34

Bit a_{i,1} leaks ψ ∼ Ψ(a_{i,1}) ⇒ Pr[a_{i,1} = 0]

SLIDE 35

Bit a_{i,2} leaks ψ ∼ Ψ(a_{i,2}) ⇒ Pr[a_{i,2} = 0]

SLIDE 36

Each bit a_{i,j} leaks ψ ∼ Ψ(a_{i,j}) ⇒ Pr[a_{i,j} = 0]


SLIDE 38

Each bit a_{i,j} leaks ψ ∼ Ψ(a_{i,j}) ⇒ Pr[a_{i,j} = 0]

Guess: â_{i,j} = argmax_{b∈{0,1}} Pr[a_{i,j} = b]
Good-guess probability: p_{i,j} := Pr[a_{i,j} = â_{i,j}] = max_{b∈{0,1}} Pr[a_{i,j} = b]

Select some guessed bits to construct the lattice.
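Turning the probability scores into guesses is direct; here probs_zero[j] stands for Pr[a_{i,j} = 0] as produced by the template attack:

```python
def guess_bits(probs_zero):
    """Return the guesses â_{i,j} and the good-guess probabilities p_{i,j}."""
    guesses = [0 if pz >= 0.5 else 1 for pz in probs_zero]   # argmax over b in {0,1}
    good_guess = [max(pz, 1 - pz) for pz in probs_zero]      # p_{i,j} = max_b Pr[... = b]
    return guesses, good_guess

guesses, p = guess_bits([0.97, 0.08, 0.55])
assert guesses == [0, 1, 0]
```

Bits with p_{i,j} close to 0.5 carry almost no information; the selection step that follows keeps only bits whose good-guess probability is high enough to be worth a lattice dimension.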

SLIDE 40

[Diagram: subset I of the signatures]

SLIDE 41

[Diagram: subset I and, for each i ∈ I, a set J_i of selected bit indices]

SLIDE 42

[Diagram: subset I and selected bit-index sets J_i]

Goal: select I and (J_i)_{i∈I} to maximize
success proba = ∏_{i∈I} ∏_{j∈J_i} p_{i,j}
such that
Σ_{i∈I} (|J_i| − λ − c_1 · N_i) ≥ ℓ   (CVP constraint)
and
Σ_{i∈I} N_i ≤ Δ_max   (lattice dimension)
SLIDE 43

For each selected set J_i:
◮ CVP constraint += |J_i| − λ − c_1 · N_i (must reach ℓ)
◮ dim(L) += N_i (must not exceed Δ_max)
◮ success proba ×= ∏_{j∈J_i} p_{i,j}

Select J_i to maximize γ_i = (∏_{j∈J_i} p_{i,j})^{1/(|J_i| − λ − c_1 · N_i)}
Efficient algorithm based on dynamic programming.
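The selection can be illustrated with a simplified greedy sketch. Here each candidate nonce comes with its best set J_i already summarized as a triple (success probability ∏_j p_{i,j}, weight |J_i| − λ − c_1·N_i, block count N_i); the paper's dynamic-programming algorithm, which chooses the J_i themselves, is not reproduced. Only the γ_i ranking and the two global constraints are shown, with made-up numbers.

```python
def select_subset(candidates, ell, delta_max):
    """Greedily pick nonces by gamma_i = prob**(1/weight) until the CVP constraint
    is met, without exceeding the lattice-dimension budget delta_max."""
    ranked = sorted(candidates, key=lambda c: c[0] ** (1.0 / c[1]), reverse=True)
    chosen, weight, dim = [], 0, 0
    for prob, w, n_blocks in ranked:
        if dim + n_blocks > delta_max:
            continue                     # would blow the dimension budget
        chosen.append((prob, w, n_blocks))
        weight += w
        dim += n_blocks
        if weight >= ell:                # CVP constraint reached
            return chosen
    return None                          # constraints cannot be met

sel = select_subset([(0.9, 60, 2), (0.5, 80, 3), (0.8, 50, 2)],
                    ell=100, delta_max=5)
assert sel is not None
```

The exponent 1/(|J_i| − λ − c_1·N_i) normalizes the success probability per useful bit contributed, so nonces that buy many constraint bits cheaply rank first.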

SLIDE 44

Euclidean case

SLIDE 45

k_i = a_i · r_i + b_i
Per-bit probability scores: (Pr[a_{i,j} = 0])_j, (Pr[r_{i,j} = 0])_j, (Pr[b_{i,j} = 0])_j
SLIDE 46

k_i = a_i · r_i + b_i
From the scores on a_i, r_i, b_i, derive
Pr[k_{i,j} = 0] = f((Pr[a_{i,j} = 0])_j; (Pr[r_{i,j} = 0])_j; (Pr[b_{i,j} = 0])_j)

SLIDE 47

k_i = a_i · r_i + b_i
Pr[k_{i,j} = 0] = f((Pr[a_{i,j} = 0])_j; (Pr[r_{i,j} = 0])_j; (Pr[b_{i,j} = 0])_j)

The bias decreases exponentially as j grows.

[Plot: error probability of the bit guess vs bit index j (log scale), for biases 1/4, 1/8, 1/12, 1/16, 1/20 and 1/24]

slide-48
SLIDE 48

I Bi,1 Bi,2 ⇒ |Ji| = |Bi,1| + |Bi,2| Ni = 1 λ = 0

  • i∈I

(|Ji| − λ − c1 · Ni) ≥ ℓ

  • i∈I

Ni ≤ ∆max ⇒

  • i∈I

(|B1,i| + |B2,i| − c1) ≥ ℓ ⇒ |I| ≤ ∆max

SLIDE 49

Block probabilities

Pr[B_{i,j} = x] = f((Pr[a_{i,j} = 0])_j; (Pr[r_{i,j} = 0])_j; (Pr[b_{i,j} = 0])_j; x)

Block guesses:
B̂_{i,j} = argmax_x Pr[B_{i,j} = x]
Pr[B_{i,j} = B̂_{i,j}] = max_x Pr[B_{i,j} = x]

Select blocks maximizing
γ_i = (Pr[B̂_{i,1} = B_{i,1}] · Pr[B̂_{i,2} = B_{i,2}])^{1/(|B_{i,1}| + |B_{i,2}| − c_1)}

SLIDE 50

Experimental results

SLIDE 51

Experimental setting

ANSSI 256-bit elliptic curve (i.e. ℓ = 256)
Three blinding sizes λ ∈ {16, 32, 64}
Probability scores simulated using D_θ(·) with θ = α · (0.5, 1, 2), α ∈ {1.5, 2}
Attack parameters:
◮ n_sig signatures (with leaking blinded nonces)
◮ n_tr trials for the subset I
  (n_sig, n_tr) ∈ {(10, 1), (20, 5), (20, 10), (100, 10), (100, 50), (100, 100)}
◮ linear factor c_1 set to 4

SLIDE 52

Experimental results (success rates):

(n_sig, n_tr)            (10,1)   (20,5)   (20,10)  (100,10)  (100,50)  (100,100)
Classic blinding
 α = 1.5  λ = 16         13.5 %   38.3 %   54.0 %   70.1 %    99.0 %    99.9 %
          λ = 32          3.5 %   13.6 %   22.7 %   27.8 %    73.9 %    91.9 %
          λ = 64          0.2 %    0.6 %    1.2 %    1.5 %     6.2 %    11.7 %
 α = 2    λ = 16         91.2 %   99.9 %   100 %    100 %     100 %     100 %
          λ = 32         90.5 %   99.5 %   100 %    100 %     100 %     100 %
          λ = 64         85.7 %   99.3 %   100 %    100 %     100 %     100 %
Euclidean blinding
 α = 1.5  λ = 16/32/64    0 %      0 %      0 %      0 %       0 %       0 %
 α = 2    λ = 16          0.7 %    3.1 %    5.8 %   42.8 %    76.8 %    83.3 %
          λ = 32          0.1 %    0.4 %    0.8 %   41.1 %    74.9 %    82.6 %
          λ = 64          0.1 %    0.4 %    1.0 %   40.2 %    75.0 %    82.8 %

Lattice reduction (almost) always works for correct guesses ⇒ sound choice for c_1.
λ has small impact for Euclidean blinding.
Classic blinding is more sensitive to our attack than Euclidean blinding.