
SLIDE 1

Strength in Numbers: Improving Generalization with Ensembles in Machine Learning-based Profiled Side-Channel Analysis

Guilherme Perin, Lukasz Chmielewski, Stjepan Picek
Conference on Cryptographic Hardware and Embedded Systems (CHES) 2020

SLIDE 2

Contributions

  • Analysis of output class probabilities (predictions)
  • Using proper metrics for profiled SCA with deep learning
  • Improving generalization in DL-based profiled SCA:
    – Ensembles: combining multiple NN models into a stronger model

SLIDE 3

DL-based profiled SCA

[Diagram] Profiling traces (known key) and validation traces (known key) from Device A (AES) train the learning algorithm (DNN); attack traces (unknown key) from Device B (AES) are classified by the trained DNN to recover the key. Goal: good (enough) generalization from Device A to Device B.

SLIDE 4

“... Improving Generalization ...”

  • If (n-order) SCA leakages are present, we can improve generalization by:

– Using a small NN model (implicitly regularized)
– Using a large NN model and adding (explicit) regularization: dropout, data augmentation, noise layers, batch normalization, weight decay, etc. (illustrated in the sketch below)
– Being precise about training time/epochs (early stopping)
– Or, using ensembles
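For illustration, a minimal Keras sketch of the explicit-regularization options above; the layer sizes, rates, and patience are placeholder values, not the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Large(r) MLP with explicit regularization; all sizes/rates are placeholders.
model = tf.keras.Sequential([
    layers.Input(shape=(1400,)),                    # e.g., 1,400 features (ASCAD)
    layers.Dense(400, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # weight decay
    layers.BatchNormalization(),
    layers.Dropout(0.25),                           # dropout
    layers.Dense(400, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.25),
    layers.Dense(9, activation="softmax"),          # 9 Hamming weight classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Early stopping: be precise about training time/epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=200, batch_size=400, callbacks=[early_stop])
```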

SLIDE 5

DL-based SCA is (mostly) about hyperparameters

  • No points-of-interest selection
  • Less sensitive to trace desynchronization (CNN)
  • Implements high-order profiled SCA
  • Allows visualization techniques
  • Work in progress:

– Creating a good DL model is difficult: efficient and automated hyperparameter tuning is not yet solved for SCA
– SCA is already costly by itself: adding hyperparameter tuning can render DL-based SCA impractical

Goal: more secure products

SLIDE 6

DL-based SCA is (also) about metrics

  • Accuracy, Loss, Recall, Precision: not very consistent for SCA (multiple test traces)
  • Success Rate (SR)
  • Guessing Entropy (GE)

[Diagram] SCA traces → predictions → key rank (GE, SR). What can we learn here? Using GE/SR during training requires a custom loss/error function in Keras/TensorFlow; a sketch of GE and SR follows below.
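As a hedged illustration (not the paper's code), GE and SR are commonly computed from the rank of the correct key across several independent attacks; the array shapes below are assumptions:

```python
import numpy as np

def guessing_entropy(ranks):
    # ranks: (n_attacks, n_traces); ranks[a, t] is the rank of the correct
    # key after t+1 attack traces in independent attack a.
    # GE = average rank of the correct key over all attacks.
    return ranks.mean(axis=0)

def success_rate(ranks, order=1):
    # SR = fraction of attacks where the correct key is ranked within
    # the top `order` candidates (rank 0 = first place).
    return (ranks < order).mean(axis=0)
```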

SLIDE 7

Results on Masked AES (MLP)

[Plot] Attacking 1 key byte with the HW model: predictions (output class probabilities).

SLIDE 8

Output Class Probabilities

Example: HW model of 1 byte on AES (S-box output)

$$P = \begin{pmatrix}
p_{0,0} & p_{0,1} & p_{0,2} & \cdots & p_{0,8} \\
p_{1,0} & p_{1,1} & p_{1,2} & \cdots & p_{1,8} \\
p_{2,0} & p_{2,1} & p_{2,2} & \cdots & p_{2,8} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p_{N-1,0} & p_{N-1,1} & p_{N-1,2} & \cdots & p_{N-1,8}
\end{pmatrix}$$

Rows: test traces $i = 0, \ldots, N-1$. Columns: classes/labels $\mathrm{HW} = 0, \ldots, 8$.

$p_{i,j}$ = probability that trace $i$ contains label (HW) $j$, where $j = \mathrm{HW}(\mathrm{Sbox}(pt_i \oplus k_i))$ (leakage or selection function).

SLIDE 9

Summation: Key Rank

Labels according to key guess $k$:
$\mathrm{Label}(0) = \mathrm{HW}(\mathrm{Sbox}(pt_0 \oplus k)) = 3$
$\mathrm{Label}(1) = \mathrm{HW}(\mathrm{Sbox}(pt_1 \oplus k)) = 6$
$\mathrm{Label}(2) = \mathrm{HW}(\mathrm{Sbox}(pt_2 \oplus k)) = 2$
...
$\mathrm{Label}(N-1) = \mathrm{HW}(\mathrm{Sbox}(pt_{N-1} \oplus k)) = 4$

Each key guess selects one entry per row of $P$ (here $p_{0,3}, p_{1,6}, p_{2,2}, \ldots, p_{N-1,4}$), and these entries are summed:

$$P(k) = \sum_{i=0}^{N-1} \log p_{i,\mathrm{Label}(i)} = \log p_{0,3} + \log p_{1,6} + \log p_{2,2} + \cdots + \log p_{N-1,4}$$

Recovered key: $\operatorname{argmax}_{k}\,[P(0), P(1), \ldots, P(255)]$. A code sketch of this distinguisher follows below.
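A minimal NumPy sketch of this log-likelihood summation; the S-box and Hamming weight tables are the standard ones, while the function and variable names are illustrative, not taken from the paper's repository:

```python
import numpy as np

# Standard AES S-box.
SBOX = np.array([
0x63,0x7C,0x77,0x7B,0xF2,0x6B,0x6F,0xC5,0x30,0x01,0x67,0x2B,0xFE,0xD7,0xAB,0x76,
0xCA,0x82,0xC9,0x7D,0xFA,0x59,0x47,0xF0,0xAD,0xD4,0xA2,0xAF,0x9C,0xA4,0x72,0xC0,
0xB7,0xFD,0x93,0x26,0x36,0x3F,0xF7,0xCC,0x34,0xA5,0xE5,0xF1,0x71,0xD8,0x31,0x15,
0x04,0xC7,0x23,0xC3,0x18,0x96,0x05,0x9A,0x07,0x12,0x80,0xE2,0xEB,0x27,0xB2,0x75,
0x09,0x83,0x2C,0x1A,0x1B,0x6E,0x5A,0xA0,0x52,0x3B,0xD6,0xB3,0x29,0xE3,0x2F,0x84,
0x53,0xD1,0x00,0xED,0x20,0xFC,0xB1,0x5B,0x6A,0xCB,0xBE,0x39,0x4A,0x4C,0x58,0xCF,
0xD0,0xEF,0xAA,0xFB,0x43,0x4D,0x33,0x85,0x45,0xF9,0x02,0x7F,0x50,0x3C,0x9F,0xA8,
0x51,0xA3,0x40,0x8F,0x92,0x9D,0x38,0xF5,0xBC,0xB6,0xDA,0x21,0x10,0xFF,0xF3,0xD2,
0xCD,0x0C,0x13,0xEC,0x5F,0x97,0x44,0x17,0xC4,0xA7,0x7E,0x3D,0x64,0x5D,0x19,0x73,
0x60,0x81,0x4F,0xDC,0x22,0x2A,0x90,0x88,0x46,0xEE,0xB8,0x14,0xDE,0x5E,0x0B,0xDB,
0xE0,0x32,0x3A,0x0A,0x49,0x06,0x24,0x5C,0xC2,0xD3,0xAC,0x62,0x91,0x95,0xE4,0x79,
0xE7,0xC8,0x37,0x6D,0x8D,0xD5,0x4E,0xA9,0x6C,0x56,0xF4,0xEA,0x65,0x7A,0xAE,0x08,
0xBA,0x78,0x25,0x2E,0x1C,0xA6,0xB4,0xC6,0xE8,0xDD,0x74,0x1F,0x4B,0xBD,0x8B,0x8A,
0x70,0x3E,0xB5,0x66,0x48,0x03,0xF6,0x0E,0x61,0x35,0x57,0xB9,0x86,0xC1,0x1D,0x9E,
0xE1,0xF8,0x98,0x11,0x69,0xD9,0x8E,0x94,0x9B,0x1E,0x87,0xE9,0xCE,0x55,0x28,0xDF,
0x8C,0xA1,0x89,0x0D,0xBF,0xE6,0x42,0x68,0x41,0x99,0x2D,0x0F,0xB0,0x54,0xBB,0x16])

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weight lookup

def key_scores(probs, plaintexts):
    """probs: (N, 9) class probabilities p[i, j]; plaintexts: (N,) byte values.
    Returns P(k) for every key guess k in 0..255."""
    log_p = np.log(probs + 1e-36)             # avoid log(0)
    idx = np.arange(len(plaintexts))
    scores = np.empty(256)
    for k in range(256):
        labels = HW[SBOX[plaintexts ^ k]]     # Label(i) = HW(Sbox(pt_i XOR k))
        scores[k] = log_p[idx, labels].sum()  # P(k) = sum_i log p[i, Label(i)]
    return scores

# Recovered key: np.argmax(key_scores(probs, plaintexts))
```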

SLIDE 10

Summation: Key Rank

Example values (rows: test traces; columns: HW = 0, ..., 8; bold: the entry selected by the key guess):

$$\begin{pmatrix}
0.01 & 0.02 & 0.08 & \mathbf{0.40} & 0.20 & 0.25 & 0.01 & 0.02 & 0.01 \\
0.02 & 0.01 & 0.06 & 0.14 & 0.15 & 0.20 & \mathbf{0.35} & 0.02 & 0.05 \\
0.01 & 0.01 & \mathbf{0.53} & 0.08 & 0.22 & 0.10 & 0.02 & 0.02 & 0.01 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0.01 & 0.01 & 0.02 & 0.25 & \mathbf{0.40} & 0.08 & 0.20 & 0.02 & 0.01
\end{pmatrix}$$

$$P(k) = \sum_{i=0}^{N-1} \log p_{i,\mathrm{Label}(i)} = \log 0.40 + \log 0.35 + \log 0.53 + \cdots + \log 0.40$$

The selected entry is always the highest value per row: test accuracy is 100%.

SLIDE 11

Summation: Key Rank

Same layout (bold: the entry selected by the key guess and, where different, the row maximum):

$$\begin{pmatrix}
0.01 & 0.02 & \mathbf{0.25} & 0.08 & 0.20 & \mathbf{0.40} & 0.01 & 0.02 & 0.01 \\
0.02 & 0.01 & 0.06 & 0.14 & \mathbf{0.35} & 0.20 & 0.02 & \mathbf{0.15} & 0.05 \\
0.01 & 0.01 & 0.08 & \mathbf{0.53} & 0.22 & 0.10 & 0.02 & 0.02 & 0.01 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0.01 & 0.01 & 0.02 & \mathbf{0.40} & 0.08 & \mathbf{0.25} & 0.20 & 0.02 & 0.01
\end{pmatrix}$$

$$P(k) = \sum_{i=0}^{N-1} \log p_{i,\mathrm{Label}(i)} = \log 0.25 + \log 0.15 + \log 0.53 + \cdots + \log 0.25$$

The selected entry is NOT always the highest value per row: test accuracy is 27%.

SLIDE 12

Rank of Class Probabilities

[Plot] Histogram of class probability ranks (1 to 9) for the correct key candidate vs. the incorrect key candidates (ordering keys by accuracy). Rank 1 corresponds to $\max(p_{i,0}, \ldots, p_{i,8})$ and rank 9 to $\min(p_{i,0}, \ldots, p_{i,8})$; the bars at ranks 1 and 2 reach about 0.27 and 0.29.

SLIDE 13

Rank of Class Probabilities

[Plot] The same class probability rank histogram (ranks 1 to 9; correct vs. incorrect key candidates, ordering keys by accuracy), annotated: low ranks push the summation for key guess $k$ up, high ranks push the summation for $k$ down, with correspondingly large or small influence on the correct $P(k)$.
SLIDE 14

Results on Leaky AES (MLP)

[Plot] Attacking 1 key byte with the HW model: output class probabilities are pushed towards ranks 1 and 2 (≈0.48 each); no test traces with high-ranked probabilities.

SLIDE 15

Results on Masked AES (MLP)

[Plot] Attacking 1 key byte with the HW model: output class probabilities are pushed towards ranks 1 and 2 (≈0.22 each); few test traces with high-ranked probabilities. Successful key recovery.

SLIDE 16

Two CNN models on masked AES

[Plots] CNN with 4 hidden layers vs. CNN with 3 hidden layers.

SLIDE 17

Common story

  • Deep learning analysis requires a large number of hyperparameter experiments:

$$h_{best} = \operatorname{argmin}_{m \in M} \mathrm{Loss}(\lambda_m, t_{train}, t_{val})$$

Select a proper metric:

$$h_{best} = \operatorname{argmin}_{m \in M} \mathrm{GE}(\lambda_m, t_{train}, t_{val})$$

  • From multiple models, we elect the best one. Why not benefit from multiple models instead of a single best model?

SLIDE 18

Ensembles

  • Boosting
  • Stacking
  • Bootstrap Aggregating (Bagging)

Ensemble summation over models:

$$P(k) = \sum_{m=0}^{M-1} \sum_{i=0}^{N-1} \log p_{i,j,m}$$

Select the best $M_{best} < M$ models based on GE: $\operatorname{argmin}_{m} \mathrm{GE}(\lambda_m, t_{train}, t_{val})$, where $\lambda_m$ are the hyperparameters and $t_{train}$, $t_{val}$ the train and validation traces. A code sketch follows below.
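A minimal sketch of the ensemble summation, assuming a precomputed `labels_per_key[k, i] = HW(Sbox(pt_i XOR k))` as in the key-rank sketch above; the names are illustrative, not the repository's API:

```python
import numpy as np

def ensemble_key_scores(model_probs, labels_per_key, best_idx):
    """model_probs: (M, N, 9) class probabilities p[m, i, j] from M models.
    labels_per_key: (256, N) precomputed labels per key guess.
    best_idx: indices of the M_best models (e.g., the 10 lowest validation GE)."""
    n_traces = model_probs.shape[1]
    idx = np.arange(n_traces)
    scores = np.zeros(256)
    for m in best_idx:                       # outer sum over the selected models
        log_p = np.log(model_probs[m] + 1e-36)
        for k in range(256):                 # inner sum over the attack traces
            scores[k] += log_p[idx, labels_per_key[k]].sum()
    return scores                            # argmax gives the ensemble's key guess
```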

SLIDE 19

Ensembles

Ensemble ($M_{best} = 10$, $M = 50$):

$$P(k) = \sum_{m=0}^{M-1} \sum_{i=0}^{N-1} \log p_{i,j,m}$$

Single best model $h_{best} = \operatorname{argmin}_{m} \mathrm{GE}(\lambda_m, t_{train}, t_{val})$:

$$P(k) = \sum_{i=0}^{N-1} \log p_{i,j}$$

SLIDE 20

Datasets

Dataset         Training                Validation  Test   Features  Countermeasures
Pinata SW AES   6,000 (fixed key)       1,000       1,000  400       No
DPAv4           34,000 (fixed key)      1,000       1,000  2,000     RSM
ASCAD           200,000 (random keys)   500         500    1,400     Masking
CHES CTF 2018   43,000 (fixed key)      1,000       1,000  2,000     Masking

SLIDE 21

Range of Hyperparameters

MLP:

Hyperparameter       min     max    step
Learning Rate        0.0001  0.001  0.0001
Mini-batch           100     1000   100
Dense Layers         2       8      1
Neurons              100     1000   100
Activation Function  Tanh, ReLU, ELU or SELU

CNN:

Hyperparameter          min     max    step
Learning Rate           0.0001  0.001  0.0001
Mini-batch              100     1000   100
Convolution Layers (i)  1       2      1
Filters                 8*i     32*i   4
Kernel Size             10      20     2
Stride                  5       10     1
Dense Layers            2       8      1
Neurons                 100     1000   100
Activation Function     Tanh, ReLU, ELU or SELU

*optimal ranges based on literature
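A minimal sketch of drawing one random MLP configuration from the ranges above; the dictionary keys are assumptions, and the paper's actual search code may differ:

```python
import random

def sample_mlp_config():
    # Draw one MLP configuration from the table's (min, max, step) ranges.
    return {
        "learning_rate": random.randint(1, 10) * 0.0001,  # 0.0001..0.001, step 0.0001
        "batch_size": random.randrange(100, 1001, 100),   # 100..1000, step 100
        "dense_layers": random.randint(2, 8),             # 2..8, step 1
        "neurons": random.randrange(100, 1001, 100),      # 100..1000, step 100
        "activation": random.choice(["tanh", "relu", "elu", "selu"]),
    }
```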

SLIDE 22

Results on ASCAD (Hamming Weight)

[Plots] MLP and CNN.

SLIDE 23

Results on ASCAD (Identity)

[Plots] MLP and CNN.

SLIDE 24

Conclusions

  • Output class probabilities are a valid distinguisher for side-channel analysis.
  • Output class probabilities are sensitive to small changes in hyperparameters: ensembles remove the effect of small variations, improving generalization results.
  • Ensembles do not replace hyperparameter search; they relax the fine-tuning of hyperparameters: the GE or SR of an ensemble tends to be superior to the GE or SR of a single best model.
  • Ensembles do not improve learnability: they improve what single models already learn.
  • A limited number of models can be enough to build a strong ensemble.

Future work:

  • Explore other ensemble methods (e.g., stacking).
  • Verify how ensembles work in combination with other regularization methods and other metrics (SR, MI).
  • Formalize the density distribution of output class probabilities (a new metric).

SLIDE 25

  • Our code is available at: https://github.com/AISyLab/EnsembleSCA

Thank you!