
SLIDE 1

Fault attack vulnerability assessment of binary code

Cryptography and Security in Computing Systems [CS2’19], Valencia, Spain January 21, 2019 Jean-Baptiste Bréjon Emmanuelle Encrenaz Karine Heydemann Quentin Meunier Son-Tuan Vu Sorbonne Université, CNRS, Laboratoire d’Informatique de Paris 6, F-75005 Paris, France

  • JB. Bréjon

CS2’19 January 21, 2019 1 / 21

SLIDE 2

Plan

- Context
- Fault Attacks
- Protections
- Vulnerability Assessment
- Security Metrics
- Use-Case: VerifyPin
- Conclusion

SLIDE 3

Context

- The number of embedded systems keeps growing and their uses diversify
- Sensitive data are increasingly manipulated by embedded systems
→ Need to secure them and thus guarantee the effectiveness of protection mechanisms

The case of physical attacks (requires physical access to the chip):
- Side-channel: the attacker deduces secret information by exploiting characteristics of the software or hardware implementation
- Fault attacks: the attacker perturbs the program behavior by physical means to obtain secret information

SLIDE 4

Fault Attacks

Means

- Clock glitch, voltage glitch [BAR-EL et al. 2006]
- Laser beam [KARAKLAJIC et al. 2013]
- Electromagnetic pulse [DEHBAOUI et al. 2012; MORO et al. 2013]
- ...

Goals

- Bypass security mechanisms [VASSELLE et al. 2017]
- Privilege escalation [TIMMERS et al. 2017]
- Obtain sensitive information

Effects

- Permanent (bond wire fuse) or transient (bit flip) [BAR-EL et al. 2006; BONEH et al. 2001; ORDAS et al. 2015]
- The representation of fault effects at a specific code level is called a fault model

→ How can we protect against them?

SLIDE 5

Protections

Protections can be implemented in hardware (costly, cannot be adapted) or in software (cheaper, easily adaptable).

Software protections:
- Add pieces of code
- Can be implemented at all code levels
- Trial-and-error process
- Designed against fault models

Issues with the process of protecting software:
- Ensuring the protections' effectiveness
- Ensuring the final code is well protected, especially considering the effects of compiler optimisations
→ Need to assess the security of the code at low level

SLIDE 6

Vulnerability Assessment

Goal: assess the robustness of a binary program against fault attacks

Why binary code?
→ Takes into account the effects of the optimisation process on protections implemented at source level
→ Gives access to the final binary instructions and code layout

Formal verification → requires formally modelling code and faults
Implemented in a tool → RobustB

3 supported fault models:
- Instruction skip
- Register corruption
- Instruction replacement

SLIDE 7

Overview (vulnerabilities)

[Figure: RobustB takes a binary plus a target region and outputs a vulnerability list. The original code and the faulted code are initialized with the same context, executed along their respective paths, and their final values are compared.]

From a code region in a binary, a vulnerability is detected as a difference between the final values of the original code and those of a faulted code version. The output is a description of the vulnerabilities found: the vulnerability list.

SLIDE 8

Overview (process steps)


1 - Extract a representation of the target region
2 - Determine the possible execution paths within the target region
3 - Inject a single fault on the possible execution paths
4 - Search for vulnerabilities by formal verification of a non-equivalence property (SMT):
    Satisfiable → the fault induces a vulnerability
    Unsatisfiable → the fault has no effect
⇒ Vulnerability list, including the locations of the vulnerabilities
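Steps 3 and 4 above can be sketched on a toy straight-line program; a brute-force comparison of final values over a couple of execution contexts stands in for the SMT non-equivalence query, and the three-instruction program is invented for illustration:

```python
# Toy single-fault injection campaign under the "instruction skip" fault
# model. A brute-force comparison of final values stands in for the SMT
# non-equivalence query; the 3-instruction program is invented
# (grant, compare, revoke on mismatch).

def run(program, state, skip=None):
    """Execute (dest, op) instructions over a register state; optionally skip one."""
    state = dict(state)
    for i, (dest, op) in enumerate(program):
        if i == skip:
            continue  # fault model: the instruction is not executed
        state[dest] = op(state)
    return state

program = [
    ("auth", lambda s: 1),                                   # I0: grant
    ("diff", lambda s: s["user"] ^ s["secret"]),             # I1: compare PINs
    ("auth", lambda s: s["auth"] if s["diff"] == 0 else 0),  # I2: revoke on mismatch
]

secret = 1234
vulns = []
for skip in range(len(program)):
    for user in (secret, secret + 1):  # execution contexts to explore
        init = {"user": user, "secret": secret, "diff": 0, "auth": 0}
        ok = run(program, init)
        ko = run(program, init, skip=skip)
        if ko["auth"] == 1 and ok["auth"] == 0:  # wrong PIN yet authenticated
            vulns.append((skip, user))
            break

print(vulns)  # → [(1, 1235), (2, 1235)]: skipping I1 or I2 authenticates a wrong PIN
```

The real step 4 is symbolic (SMT): one query covers all contexts at once instead of the two values sampled here.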

SLIDE 9

Information Extraction From the Binary

[Figure: the binary's CFG; blocks A–D form the target region, surrounded by other-area blocks.]

- Static analysis: CFG construction + block ordering
- Dynamic/symbolic analysis: extracts the execution contexts of the target region and the loop bounds within it

SLIDE 10

Determining the Possible Execution Paths

Structural bounded unfolding of the CFG

[Figure: the CFG (blocks A–D, each loop bounded to 1 iteration) unfolded into an Execution Flow Graph (EFG) in which loop blocks are duplicated.]
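The structural bounded unfolding can be sketched as a depth-first enumeration in which each block may occur at most a bounded number of times per path; the A–D shape follows the slide, while the exact bounds are assumed:

```python
# Bounded unfolding of a CFG into execution paths: depth-first search
# where each block may repeat up to its bound (toy version of the
# structural unfolding; the A-D shape follows the slides, bounds assumed).

CFG = {"A": ["B", "C"], "B": ["B", "D"], "C": ["C", "D"], "D": []}
BOUND = {"A": 1, "B": 2, "C": 2, "D": 1}  # max occurrences per path (assumed)

def unfold(block, path, paths):
    path = path + [block]
    if not CFG[block]:  # exit block: the path is complete
        paths.append(path)
        return
    for succ in CFG[block]:
        if path.count(succ) < BOUND[succ]:  # respect the loop bounds
            unfold(succ, path, paths)

paths = []
unfold("A", [], paths)
print(["-".join(p) for p in paths])
# → ['A-B-B-D', 'A-B-D', 'A-C-C-D', 'A-C-D']
```

Each enumerated path would then be checked for accessibility before fault injection.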

SLIDE 11

Determining the Possible Execution Paths


Resulting paths are tested for accessibility (SMT)

→ Each instruction is modeled by its effect on a machine-state model

[Figure: a path as a sequence of machine states S0…S6; each instruction (add, cmp, branch) relates two consecutive states, and the branch conditions encode the path's accessibility from the initial state to the final state.]

SLIDE 12

Determining Faulty Execution Paths

[Figure: CFG, EFG and fault-induced EFG; a fault adds edges that do not exist in the original CFG.]

A fault may alter the execution flow → the possible execution paths are recomputed after a fault injection:
- CFG unfolding after the fault
- Takes the code layout into account
- Relaxed loop bounds
- Resulting paths are checked for accessibility

SLIDE 13

Robustness Analysis

P_Orig → Original execution path P_Faulted → Faulty execution path

[Figure: P_Orig and P_Faulted start from the same context C (registers and memory locations); their final registers and memory locations feed the security property.]

Both paths start from the same context C. When the final values of some memorizing elements differ, a vulnerability is detected:

Access(P_Orig, C) ∧ Access(P_Faulted, C) ∧ Vuln

SAT → the fault in P_Faulted leads to a vulnerability. Repeating this process for every fault at every injection point produces the vulnerability list.

SLIDE 14

Results Synthesis

The vulnerability list is not easy to analyse (each vulnerability must be evaluated manually).
- How dangerous is each vulnerability?
- How to compare the vulnerabilities of two different implementations?
→ Need for a synthetic view: introduction of three security metrics
- Instruction sensitivity level
- Average number of vulnerabilities per path
- Vulnerability density

SLIDE 15

Paths Probabilities

A vulnerability appearing on a path should be weighted differently from one appearing on another path, depending on the likelihood of each path.

[Figure: the CFG unfolded into an EFG, with each EFG branch annotated with its probability (0.5, 0.5, 1, 0.5, 0.5).]

By default, all branches at a branching point have equal probability; ideally, the user can define the branch probabilities.

Path  Blocks     P(path)
p1    A-B-B-D    0.5
p2    A-C-C-D    0.25
p3    A-C-D      0.25
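The table can be reproduced by multiplying branch probabilities along each unfolded path (edge probabilities as in the slide's EFG, with forced edges at probability 1):

```python
# Path probability = product of branch probabilities along the unfolded
# EFG (numbers taken from the slide's example: equal probabilities at
# each branching point, probability 1 on forced edges).

EDGE_P = {
    ("A", "B1"): 0.5, ("A", "C1"): 0.5,
    ("B1", "B2"): 1.0, ("B2", "D"): 1.0,
    ("C1", "C2"): 0.5, ("C1", "D"): 0.5, ("C2", "D"): 1.0,
}

def path_prob(path):
    p = 1.0
    for edge in zip(path, path[1:]):
        p *= EDGE_P[edge]
    return p

paths = {
    "p1": ["A", "B1", "B2", "D"],
    "p2": ["A", "C1", "C2", "D"],
    "p3": ["A", "C1", "D"],
}
for name, path in paths.items():
    print(name, path_prob(path))
# → p1 0.5 / p2 0.25 / p3 0.25
```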

SLIDE 16

Instruction Sensitivity (IS)

IS(i): a score reflecting the sensitivity of instruction i

IS(i) = Σ_{p ∈ Paths} P(p is taken) × NV_i(p)

NV_i(p): number of vulnerabilities of instruction i on path p

Inst  Score
I0    1    = P(p1) + P(p2) + P(p3)
I1    1    = 2 × P(p1)
I2    0.5  = P(p2) + P(p3)

Each vulnerable instruction occurrence is weighted by the likelihood of the path it appears on.

[Figure: the EFG with vulnerable instructions I0–I2 placed on its blocks; P(p1)=0.5, P(p2)=0.25, P(p3)=0.25.]

Ranking the instructions by sensitivity helps the designer focus on the most sensitive instructions.
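A direct computation of IS on the slide's example (path probabilities and per-path vulnerability counts NV_i(p) taken from the slide):

```python
# Instruction sensitivity: IS(i) = sum over paths of P(path) * NV_i(path),
# reproducing the slide's example with instructions I0-I2.

PATH_P = {"p1": 0.5, "p2": 0.25, "p3": 0.25}

# NV_i(p): number of vulnerabilities of instruction i on path p
NV = {
    "I0": {"p1": 1, "p2": 1, "p3": 1},
    "I1": {"p1": 2, "p2": 0, "p3": 0},
    "I2": {"p1": 0, "p2": 1, "p3": 1},
}

def instruction_sensitivity(nv):
    return sum(PATH_P[p] * nv.get(p, 0) for p in PATH_P)

scores = {i: instruction_sensitivity(nv) for i, nv in NV.items()}
print(scores)  # → {'I0': 1.0, 'I1': 1.0, 'I2': 0.5}
```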

SLIDE 17

Attack Surface (AS)

AS: average number of vulnerabilities on an execution path

AS = Σ_{p ∈ Paths} P(p is taken) × NV(p)

NV(p): number of vulnerabilities appearing on path p

[Figure: two examples with 4 vulnerabilities each, weighted by path probabilities. Two equiprobable paths: AS = 4 × 0.5 = 2 → 2 vulnerabilities found on average. A single path: AS = 4 × 1 = 4 → 4 vulnerabilities found on average.]

The higher the attack surface, the more likely the attacker is to inject a fault leading to a vulnerability.
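A minimal computation of AS; placing all 4 vulnerabilities on one of two equiprobable paths is one plausible reading of the slide's first example:

```python
# Attack surface: AS = sum over paths of P(path) * NV(path).
# The two configurations below are a plausible reading of the slide's
# examples (4 vulnerabilities in each case).

def attack_surface(paths):  # paths: list of (probability, n_vulnerabilities)
    return sum(p * nv for p, nv in paths)

# Example 1: two equiprobable paths, all 4 vulnerabilities on one of them
print(attack_surface([(0.5, 4), (0.5, 0)]))  # → 2.0
# Example 2: a single path carrying the 4 vulnerabilities
print(attack_surface([(1.0, 4)]))            # → 4.0
```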

SLIDE 18

Normalized Attack Surface (NAS)

NAS: average density of vulnerabilities

NAS = (Σ_{p ∈ Paths} P(p is taken) × NV(p)) / (Σ_{p ∈ Paths} P(p is taken) × NI(p)) = AS / ANI

NI(p): number of instructions on path p
ANI: average number of instructions per path

The same vulnerabilities spread over different amounts of instructions give different vulnerability densities.

[Figure: two examples with AS = 2 × 0.5 + 2 × 0.5 = 2. Long paths: NAS = 2/(100 + 100) = 0.01 → odds for a randomly timed fault injection to lead to a vulnerability: 1%. Short paths: NAS = 2/(10 + 10) = 0.1 → odds: 10%.]
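Computing NAS = AS/ANI directly; the slide's figure is partially garbled, so the path lengths below (200 and 20 instructions per path) are assumptions chosen to reproduce the quoted 1% and 10% odds:

```python
# Normalized attack surface: NAS = AS / ANI, where ANI is the average
# number of instructions per path. Path lengths are assumed (200 and 20
# instructions per path) so that the 1% and 10% figures come out.

def nas(paths):  # paths: list of (probability, n_vulns, n_instructions)
    attack_surface = sum(p * nv for p, nv, _ in paths)
    avg_instrs = sum(p * ni for p, _, ni in paths)
    return attack_surface / avg_instrs

long_paths  = [(0.5, 2, 200), (0.5, 2, 200)]
short_paths = [(0.5, 2, 20), (0.5, 2, 20)]
print(nas(long_paths))   # → 0.01: a randomly timed fault hits a vulnerability 1% of the time
print(nas(short_paths))  # → 0.1: same vulnerabilities, denser code → 10%
```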

SLIDE 19

Use-Case: VerifyPin

Description
- Belongs to FISSC (Fault Injection and Simulation Secure Collection), a benchmark suite dedicated to fault injection analysis
- Compares a user PIN with a predefined PIN
- Authentication "OK" if the PINs are identical, "KO" otherwise
- Several versions of the function, each combining different source-level protections

Analysis
- Security property: user PIN and predefined PIN differ, yet authentication is "OK"
- 4 versions: 1 unprotected, 3 protected
- 2 optimisation levels: O0, O2
- Fault model: instruction skip
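A toy Python model of the double-call protection (the real FISSC benchmark is hardened C; the helper names here are invented): running the comparison twice means a single skipped instruction can flip at most one of the two results, which the final consistency check detects:

```python
# Toy sketch of the "double call" countermeasure (VerifyPin5-style):
# the PIN comparison runs twice and authentication requires both results
# to agree, so a single instruction-skip fault cannot silently flip the
# outcome. Function names are invented for illustration.

def compare(user, ref):
    return len(user) == len(ref) and all(u == r for u, r in zip(user, ref))

def verify_pin_double_call(user, ref):
    r1 = compare(user, ref)
    r2 = compare(user, ref)   # redundant second call
    if r1 != r2:              # results disagree: a fault was injected
        return "ATTACK DETECTED"
    return "OK" if r1 and r2 else "KO"

print(verify_pin_double_call([1, 2, 3, 4], [1, 2, 3, 4]))  # → OK
print(verify_pin_double_call([0, 0, 0, 0], [1, 2, 3, 4]))  # → KO
```

A single skip fault on either call yields r1 != r2, which is reported instead of granting access.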

SLIDE 26

Results

Vulns: raw number of vulnerabilities; ANI: average number of instructions per path; RP: number of paths in the original code

Protection           Version     Opt  #RP  #Vulns  AS     NAS   ANI
None                 VerifyPin0  O0   4    96      18.37  0.25  73.9
                                 O2   4    54      10.38  0.41  25.3
Loop counter *2      VerifyPin4  O0   15   127     7.75   0.05  149.1
                                 O2   1    26      26     0.71  49
Double call          VerifyPin5  O0   15   15      1      0.01  124.2
                                 O2   1    8       8      0.17  48
Result var *2 +      VerifyPin7  O0   15   67      4.75   0.03  180.1
Step counter (CFI)               O2   1    24      24     0.48  50

SLIDE 27

Results


VerifyPin5 is the least sensitive implementation (for all metrics) → double call best counters the instruction skip fault model.

SLIDE 28

Results


VerifyPin5 O0 is the least sensitive version according to AS and NAS; the raw vulnerability count disagrees.

SLIDE 29

Results


The NAS metric gives the odds of a successful randomly timed attack. It is higher for the O2 versions → smaller code and fewer paths.

SLIDE 30

Results


VerifyPin0: AS is higher for the O0 version → fewer instructions mean a lower attack surface. In the protected versions, the O2 optimisation level affected the protections.

SLIDE 31

Conclusion

A tool for analysing binary code regions against single-fault attacks. Comparison of compiler options, optimisation effects and protection effectiveness on a use-case (other use-cases in the paper). 3 security metrics displaying views of code security.

Pros
- Automatic (except for the security property → expected soon)
- Formal verification (SMT) → exhaustiveness (register corruption)
- Contextual analysis

Cons
- Small code regions → the speed of the analysis depends on the number of possible paths and the number of memory accesses
- Exhaustive multiple faults → combinatorial explosion, but the approach does not forbid it

SLIDE 32

Thanks !

SLIDE 33

Bibliography I

Hagai BAR-EL et al. "The sorcerer's apprentice guide to fault attacks". In: Proceedings of the IEEE 94.2 (2006), pp. 370-382.

Dan BONEH, Richard A. DEMILLO and Richard J. LIPTON. "On the Importance of Eliminating Errors in Cryptographic Computations". In: J. Cryptology 14 (2001), pp. 101-119.

Amine DEHBAOUI et al. "Electromagnetic transient faults injection on a hardware and a software implementations of AES". In: Fault Diagnosis and Tolerance in Cryptography (FDTC), 2012 Workshop on. IEEE, 2012, pp. 7-15.

Dusko KARAKLAJIC, Jorn-Marc SCHMIDT and Ingrid VERBAUWHEDE. "Hardware Designer's Guide to Fault Attacks". In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems 21.12 (Dec. 2013), pp. 2295-2306. ISSN: 1063-8210. DOI: 10.1109/TVLSI.2012.2231707.

Nicolas MORO et al. "Electromagnetic fault injection: towards a fault model on a 32-bit microcontroller". In: Fault Diagnosis and Tolerance in Cryptography (FDTC), 2013 Workshop on. IEEE, 2013, pp. 77-88.

SLIDE 34

Bibliography II

S. ORDAS, L. GUILLAUME-SAGE and P. MAURINE. "EM Injection: Fault Model and Locality". In: 2015 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC) (2015), pp. 3-13. DOI: 10.1109/FDTC.2015.9.

Niek TIMMERS and Cristofaro MUNE. "Escalating Privileges in Linux Using Voltage Fault Injection". In: 2017 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), Taipei, Taiwan, September 25, 2017, pp. 1-8. DOI: 10.1109/FDTC.2017.16.

Aurelien VASSELLE et al. "Laser-Induced Fault Injection on Smartphone Bypassing the Secure Boot". In: 2017 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), Taipei, Taiwan, September 25, 2017, pp. 41-48. DOI: 10.1109/FDTC.2017.18.
