Masking Proofs are Tight (and How to Exploit it in Security Evaluations) - PowerPoint PPT Presentation



slide-1
SLIDE 1

Masking Proofs are Tight

(and How to Exploit it in Security Evaluations) Vincent Grosso, François-Xavier Standaert

Radboud University Nijmegen (The Netherlands), UCL (Belgium)

EUROCRYPT 2018, Tel Aviv, Israel

slide-2
SLIDE 2

Motivation (side-channel security evaluation)

attack-based evaluations

[plot: attack complexity, computation axis (2^0 … 2^64 … 2^128) vs. measurements axis (2^10, 2^20)]

current practice (simplified)

1

slide-3
SLIDE 3

Motivation (side-channel security evaluation)

attack-based evaluations

[plot: same attack-based evaluation, asking how far the measured attack complexity extrapolates to higher security levels]

current practice (simplified)

1

slide-4
SLIDE 4

Motivation (side-channel security evaluation)

attack-based evaluations proof-based evaluations

  • open designs (Kerckhoffs)

[plot: attack-based evaluation (computation vs. measurements) next to proof-based evaluation (bounds with measurements up to 2^30, 2^60)]

proposed approach vs. current practice (simplified)

1

slide-5
SLIDE 5

Example: masked encoding

  • Probing security (Ishai, Sahai, Wagner 2003)

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

2

slide-6
SLIDE 6

Example: masked encoding

  • Probing security (Ishai, Sahai, Wagner 2003)
  • d − 1 probes do not reveal anything about z

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

2

slide-7
SLIDE 7

Example: masked encoding

  • Probing security (Ishai, Sahai, Wagner 2003)
  • But d probes completely reveal z

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

2
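The masked encoding above can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; `encode` and `decode` are names we chose): any d − 1 shares are uniformly random on their own, while all d shares XOR back to z.

```python
import secrets

def encode(z: int, d: int) -> list[int]:
    """Boolean-mask a byte z into d shares whose XOR equals z."""
    shares = [secrets.randbelow(256) for _ in range(d - 1)]
    last = z
    for s in shares:
        last ^= s  # force z(d) so that the XOR of all shares is z
    return shares + [last]

def decode(shares: list[int]) -> int:
    """Recombine all d shares ("probing" every one reveals z)."""
    z = 0
    for s in shares:
        z ^= s
    return z
```

Any proper subset of the shares is uniformly distributed, which is exactly the (d − 1)-probing-security statement on this slide.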

slide-8
SLIDE 8

Example: masked encoding

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

  • Probing security (Ishai, Sahai, Wagner 2003)
  • Noisy leakage security (Prouff, Rivain 2013)

2

slide-9
SLIDE 9

Example: masked encoding

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

  • Probing security (Ishai, Sahai, Wagner 2003)
  • Noisy leakage security (Prouff, Rivain 2013)

noise and independence

(Duc, Dziemb., Faust 2014)

2

slide-10
SLIDE 10

Example: masked encoding

z = z(1) ⊕ z(2) ⊕ ⋯ ⊕ z(d−1) ⊕ z(d)

  • Probing security (Ishai, Sahai, Wagner 2003)
  • Noisy leakage security (Prouff, Rivain 2013)

data complexity ∝ 1 / MI(Z; M), with MI(Z; M) < MI(Z(k); M(k))^d

noise and independence

(Duc, Dziemb., Faust 2014)

2

slide-11
SLIDE 11

Contributions

  • Previous work: masking proofs are tight for the encodings (Duc, Faust, Standaert, EC15/JoC18)

3

slide-12
SLIDE 12

Contributions

  • Previous work: masking proofs are tight for the encodings (Duc, Faust, Standaert, EC15/JoC18)
  • This work:
  • 1. The same holds for circuits (e.g., S-boxes) made from simple gadgets (e.g., add. & mult.)

3

slide-13
SLIDE 13

Contributions

  • Previous work: masking proofs are tight for the encodings (Duc, Faust, Standaert, EC15/JoC18)
  • This work:
  • 1. The same holds for circuits (e.g., S-boxes) made from simple gadgets (e.g., add. & mult.)

  • 2. Proofs can considerably simplify evaluations
  • Under noise & independence assumptions
  • Limited to divide & conquer attacks

3

slide-14
SLIDE 14

Outline

  • 1. Evaluation settings
  • 2. Case studies
  • Case #1: low d, one-tuple vs. multi-tuples
  • Independent Operation’s Leakages (IOL)
  • Case #2: higher d, single-tuple
  • Independent Shares Leakages (ISL), DFS bound
  • Case #3: multiplication leakages
  • ISL assumption + PR bound
  • Case #4: higher d, worst-case attacks
  • Shares repetition, security graphs
  • 3. Concrete attacks (i.e., why worst-case data comp. needed)
  • 4. Conclusions & future research

(cases ordered by w.c. evaluation time complexity)

slide-15
SLIDE 15

Outline (recap)

slide-16
SLIDE 16

Evaluation settings (I)

  • Target implementation:

4

slide-17
SLIDE 17

Evaluation settings (I)

  • Target implementation:
  • C1 Adv: one d-tuple, M = m_10 = [m_10(1), …, m_10(d)]

4

M: leakage matrix (all leaks); m_10: leakage vector (one d-tuple); m_10(k): leakage sample (one share)

slide-18
SLIDE 18

Evaluation settings (I)

  • Target implementation:
  • C2 Adv: ten d-tuples, M = [m_1, m_2, …, m_10]

4

slide-19
SLIDE 19

Evaluation settings (I)

  • Target implementation:
  • C3 Adv: multiplication leaks, some m_j's become d²-tuples - or even 2d²-tuples (log/alog tables)

e.g., d = 3:

    b1c1 b1c2 b1c3         0    s1    s2
    b2c1 b2c2 b2c3   +   −s1     0    s3    ⇒   d1 d2 d3
    b3c1 b3c2 b3c3       −s2   −s3     0

partial products → refreshing → compression

4
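The partial-products / refreshing / compression structure can be sketched as a minimal ISW-style multiplication, working bitwise on bytes so that AND/XOR realize GF(2) arithmetic per bit (a hedged illustration; `isw_mult` is a name we chose, and this is not the evaluated implementation):

```python
import secrets

def isw_mult(b: list[int], c: list[int]) -> list[int]:
    """ISW-style masked AND of share vectors b, c (bitwise on bytes):
    partial products b[i] & c[j] are refreshed with fresh randoms and
    compressed row-wise into the output shares."""
    d = len(b)
    r = [[0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            r[i][j] = secrets.randbelow(256)
            # in characteristic 2, "-r" equals r; fold in both cross products
            r[j][i] = r[i][j] ^ (b[i] & c[j]) ^ (b[j] & c[i])
    out = []
    for i in range(d):
        t = b[i] & c[i]
        for j in range(d):
            if j != i:
                t ^= r[i][j]
        out.append(t)
    return out
```

Each output share d_i is one compressed row, matching the picture above: the XOR of the outputs equals the AND of the two unmasked values.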

slide-20
SLIDE 20

Evaluation settings (I)

  • Target implementation:
  • C3 Adv: multiplication leaks, some m_j's become d²-tuples - or even 2d²-tuples (log/alog tables)
  • 8-bit z = z(1) ⊕ ⋯ ⊕ z(d); m(k) = HW(z(k)) + n
  • Noise variance σ_n²; SNR = Var(8-bit HW) / σ_n² = 2 / σ_n²
  • (Correlated noise analyzed in the paper)

4
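The leakage model on this slide can be simulated directly (a sketch under the slide's assumptions; the function names are ours): each share leaks its Hamming weight plus Gaussian noise, and since the Hamming weight of a uniform byte has variance 8 · 1/4 = 2, the empirical SNR should come out near 2/σ_n².

```python
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def noisy_hw_leakage(share: int, sigma: float, rng: random.Random) -> float:
    """m(k) = HW(z(k)) + Gaussian noise of standard deviation sigma."""
    return hamming_weight(share) + rng.gauss(0.0, sigma)

def empirical_snr(sigma: float, n: int = 200_000, seed: int = 0) -> float:
    """Signal variance of HW over uniform bytes divided by the noise
    variance; for 8 i.i.d. bits, Var(HW) = 8 * 1/4 = 2, so ~2/sigma^2."""
    rng = random.Random(seed)
    hws = [hamming_weight(rng.randrange(256)) for _ in range(n)]
    mean = sum(hws) / n
    signal_var = sum((h - mean) ** 2 for h in hws) / n
    return signal_var / sigma ** 2
```

With σ_n = 1 the estimate lands near 2, as the slide's SNR formula predicts.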

slide-21
SLIDE 21

Evaluation settings (II)

  • Exact worst-case evaluations ≈ computing:
  • Which can be computationally hard…

MI(L; Y, M) = H[L] + Σ_l Pr[l] · Σ_y Pr[y] · Σ_z Pr[z] · ∫_m Pr[m | l, y, z] · log2(Pr[l | y, m]) dm

(sum over all share vectors z, plus a high-dimensional integral over the leakage m)

5
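To see what the sum-plus-integral above involves, here is a toy numeric evaluation of MI(Z; M) for an unprotected 2-bit value leaking its Hamming weight plus Gaussian noise (d = 1, one leakage dimension handled by a Riemann sum; an illustrative sketch, not the paper's evaluation code, and `mi_hw` is our name):

```python
import math

def mi_hw(sigma: float, nbits: int = 2, grid: int = 2001) -> float:
    """MI(Z; M) in bits for m = HW(z) + N(0, sigma^2), z uniform on nbits
    bits: sum_z p(z) * integral p(m|z) * log2(p(m|z) / p(m)) dm,
    with the integral approximated by a Riemann sum over m."""
    vals = list(range(2 ** nbits))
    hw = [bin(v).count("1") for v in vals]
    lo, hi = -6.0, nbits + 6.0
    step = (hi - lo) / (grid - 1)
    pz = 1.0 / len(vals)
    mi = 0.0
    for g in range(grid):
        m = lo + g * step
        pm_z = [math.exp(-((m - h) ** 2) / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)) for h in hw]
        pm = sum(pz * p for p in pm_z)
        for p in pm_z:
            if p > 0.0 and pm > 0.0:
                mi += pz * p * math.log2(p / pm) * step
    return mi
```

At low noise the MI approaches H(HW) = 1.5 bits (the two weight-1 values stay indistinguishable); higher noise shrinks it. With d shares and several leakage dimensions, the same computation becomes the hard multi-dimensional problem the slide describes.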

slide-22
SLIDE 22

Outline (recap)

slide-23
SLIDE 23

Case #1

  • d = 1, 2, C1 Adv ⇒ exhaustive analysis possible

6

slide-24
SLIDE 24

Case #1

  • d = 1, 2, C2 Adv ⇒ exhaustive analysis possible

6

slide-25
SLIDE 25

Case #1

  • d = 1, 2, C2 Adv ⇒ exhaustive analysis possible
  • But IOL assumption leads to faster evaluation
  • i.e., MI(L; Y, M) ≈ 10 · MI(Z_j; m_j)

6

slide-26
SLIDE 26

Case #1

  • d = 1, 2, C2 Adv ⇒ exhaustive analysis possible
  • But IOL assumption leads to faster evaluation
  • Conservative (dependencies linearly decrease the MI)

6

slide-27
SLIDE 27

Case #2

  • Larger d's, C1 Adv ⇒ exhaustive analysis hard

7

slide-28
SLIDE 28

Case #2

  • Larger d's, C1 Adv ⇒ exhaustive analysis hard
  • But ISL assumpt. leads to much faster eval.
  • i.e., MI(Z_j; m_j) < MI(Z_j(k); m_j(k))^d [DFS15,18]

7

slide-29
SLIDE 29

Case #2

  • Larger d's, C1 Adv ⇒ exhaustive analysis hard
  • But ISL assumpt. leads to much faster eval.
  • Critical (dependencies exponentially increase the MI)

7
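The ISL/DFS bound quoted above can be turned into a one-line helper (a sketch; the bound only holds under the noise and independence assumptions, and the function name is ours):

```python
def masked_mi_upper_bound(mi_share: float, d: int) -> float:
    """ISL/DFS-style bound sketch: under noise & independence, the MI on
    the masked secret is at most the per-share MI raised to the power d."""
    return mi_share ** d

# exponential decrease in d for a fixed per-share MI of 0.01 bit
bounds = {d: masked_mi_upper_bound(0.01, d) for d in (1, 2, 4, 8)}
```

This is why the assumption makes evaluation so much faster: one per-share MI estimate, raised to the power d, replaces an exhaustive d-dimensional analysis.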

slide-30
SLIDE 30

Case #3

  • Mult. leaks ⇒ analysis even harder (2-bit example)

8

slide-31
SLIDE 31

Case #3

  • Mult. leaks ⇒ analysis even harder (2-bit example)
  • ISL assumpt. leads to much faster eval. [PR13]
  • MI(d² partial prod.) ≈ 1.72 · d · MI(d-tuple)

8

slide-32
SLIDE 32

Case #4: putting things together (I)

  • Full S-box analysis, large d's, C1 & C3 Adv

9

slide-33
SLIDE 33
Case #4: putting things together (I)

  • Full S-box analysis, large d's, C1 & C3 Adv
  • C1 → C3: MI increases linearly in # of tuples
  • i.e., MI goes from MI(Z_j(k); m_j(k))^d to ≈ d · MI(Z_j(k); m_j(k))^d

9

slide-34
SLIDE 34
Case #4: putting things together (I)

  • Full S-box analysis, large d's, C1 & C3 Adv
  • C1 → C3: MI increases linearly in # of tuples
  • ∝ “circuit size” parameter of masking proofs

9

slide-35
SLIDE 35
Case #4: putting things together (II)

  • Things get (much) worse if shares are re-used
  • e.g., ISW: each share is used d times:

    b1c1 b1c2 b1c3
    b2c1 b2c2 b2c3
    b3c1 b3c2 b3c3

10

slide-36
SLIDE 36
Case #4: putting things together (II)

  • Things get (much) worse if shares are re-used
  • Adv. can average the m_j(k)'s & increase MI exponentially in d

10

slide-37
SLIDE 37
Case #4: putting things together (II)

  • Things get (much) worse if shares are re-used
  • Adv. can average the m_j(k)'s & increase MI exponentially in d
  • i.e., MI goes from MI(Z_j(k); m_j(k))^d to ≈ (d · MI(Z_j(k); m_j(k)))^d

10
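The averaging effect can be sketched numerically (illustrative helpers, not the paper's model): averaging `reps` observations of the same share divides the noise variance by `reps`, and in the high-noise regime MI scales roughly like the SNR, so d-fold reuse inflates the product over shares from MI^d to roughly (d · MI)^d.

```python
def averaged_noise_std(sigma: float, reps: int) -> float:
    """Averaging reps independent noisy observations of one share divides
    the noise variance by reps (the standard deviation by sqrt(reps))."""
    return sigma / reps ** 0.5

def averaged_mi_estimate(mi_share: float, d: int) -> float:
    """High-noise heuristic: per-share MI scales ~linearly with d-fold
    reuse (MI ~ SNR), so the product over d shares is ~(d * mi_share)**d."""
    return (d * mi_share) ** d
```

For d = 4 and a per-share MI of 0.01 bit, the estimate exceeds the no-reuse bound 0.01^4 by a factor 4^4 = 256, which is the exponential loss the slide warns about.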

slide-38
SLIDE 38
  • Adv. can average the

𝑀𝑗(π‘˜)’s & increases MI exp. in 𝑒

  • ∝ β€œnoise condition” of masking security proofs

Case #4: putting things together (II)

  • Things get (much) worse if shares re-used

10


slide-40
SLIDE 40

Link to the bigger picture

  • From MI(L; Y, M) one can directly obtain a bound on the attack's overall complexity
  • Example for MI(L; Y, M) = 10^−7

11
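The step from MI(L; Y, M) to a data-complexity bound can be sketched as follows (constants omitted; `c` is an illustrative placeholder, not a value from the paper):

```python
import math

def measurement_lower_bound(mi: float, c: float = 1.0) -> float:
    """From MI to data complexity: the number of measurements needed
    grows like c / MI, up to proof constants we set to 1 here."""
    return c / mi

n = measurement_lower_bound(1e-7)   # 1e7 measurements for MI = 10^-7
log2_n = math.log2(n)               # log2(1e7) is about 23.25
```

Plotted over many key candidates and success rates, this is the kind of computation behind the security graphs mentioned in the outline.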

slide-41
SLIDE 41

Outline (recap)

slide-42
SLIDE 42

From evaluations to attacks

  • Exact worst-case evaluations can be hard
  • Time complexity may be beyond reach
  • IOL and ISL assumptions lead to easy bounds
  • Is the worst-case data complexity an overkill?
  • Short answer: no (∃ efficient attacks close to worst-case)

12

slide-43
SLIDE 43

From evaluations to attacks

  • Exact worst-case evaluations can be hard
  • Time complexity may be beyond reach
  • IOL and ISL assumptions lead to easy bounds
  • Is the worst-case data complexity an overkill?
  • Short answer: no (∃ efficient attacks close to worst-case)

12

Examples:

  • Horizontal attacks (Battistello et al., CHES16)
  • SASCA aka belief propagation (this paper)
  • Machine learning?
  • Black box (!)
slide-44
SLIDE 44

Outline (recap)

slide-45
SLIDE 45

Conclusions (I)

  • Analogy: security against linear cryptanalysis
  • Trying attacks with bounded complexity
  • Use S-boxes / diffusion properties & state bounds based on the wide-trail strategy

13

slide-46
SLIDE 46

Conclusions (I)

  • Analogy: security against linear cryptanalysis
  • Trying attacks with bounded complexity
  • Use S-boxes / diffusion properties & state bounds based on the wide-trail strategy
  • General message: the same (should) hold(s) for implementations (this paper ≈ one step in this direction)

13

slide-47
SLIDE 47

Conclusions (I)

  • Analogy: security against linear cryptanalysis
  • Trying attacks with bounded complexity
  • Use S-boxes / diffusion properties & state bounds based on the wide-trail strategy
  • General message: the same (should) hold(s) for implementations (this paper ≈ one step in this direction)

  • Methodology applied to a real-world 32-share masked AES implementation at CHES 2017
  • Allows claiming security beyond 2^60 measurements

13

slide-48
SLIDE 48

Conclusions (II)

  • Limitations
  • Results are only tight in log-scale plots
  • Hence only apply to high-security evaluations
  • ∃ more accurate evaluations for low d's

14

slide-49
SLIDE 49

Conclusions (II)

  • Limitations
  • Results are only tight in log-scale plots
  • Hence only apply to high-security evaluations
  • ∃ more accurate evaluations for low d's
  • Strong (noise and independence) assumptions
  • Orthogonal but important issue!
  • See for example threshold implementations

14

slide-50
SLIDE 50

Conclusions (II)

  • Limitations
  • Results are only tight in log-scale plots
  • Hence only apply to high-security evaluations
  • ∃ more accurate evaluations for low d's
  • Strong (noise and independence) assumptions
  • Orthogonal but important issue!
  • See for example threshold implementations
  • Only Divide-and-Conquer attacks
  • Problem 1: extend to analytical attacks

14

slide-51
SLIDE 51

Conclusions (II)

  • Limitations
  • Results are only tight in log-scale plots
  • Hence only apply to high-security evaluations
  • ∃ more accurate evaluations for low d's
  • Strong (noise and independence) assumptions
  • Orthogonal but important issue!
  • See for example threshold implementations
  • Only Divide-and-Conquer attacks
  • Problem 1: extend to analytical attacks
  • Concrete conclusion: shares’ leakage averaging implies an exponential security loss!

  • Problem 2: analyze/generalize & fix this threat

14

slide-52
SLIDE 52

THANKS

http://perso.uclouvain.be/fstandae/