  1. Design and Implementation of the AEGIS Single-Chip Secure Processor Using Physical Random Functions
     G. Edward Suh, Charles W. O’Donnell, Ishan Sachdev, and Srinivas Devadas, Massachusetts Institute of Technology

  2. New Security Challenges
     • Computing devices are becoming distributed, unsupervised, and physically exposed
       – Computers on the Internet (with untrusted owners)
       – Embedded devices (cars, home appliances)
       – Mobile devices (cell phones, PDAs, laptops)
     • Attackers can physically tamper with devices
       – Invasive probing
       – Non-invasive measurement
       – Installation of malicious software
     • Software-only protections are not enough

  3. Distributed Computation
     • How can we “trust” remote computation? Example: distributed computation on the Internet (SETI@home, etc.)

       DistComp() {
         x = Receive();
         result = Func(x);
         Send(result);
       }
       Receive() { … }
       Send(…)   { … }
       Func(…)   { … }

     • Need a secure platform that can
       – Authenticate “itself (the device)”
       – Authenticate “software”
       – Guarantee the integrity and privacy of “execution”

  4. Existing Approaches
     • Tamper-proof package: IBM 4758
       – Sensors to detect attacks
       – Expensive
       – Continually battery-powered
     • Trusted Platform Module (TPM)
       – A separate chip (TPM) for security functions
       – Decrypted “secondary” keys can be read out from the bus

  5. Our Approach
     • Build a secure computing platform while trusting only a “single-chip” processor (named AEGIS)
       [Figure: the processor hosts a security kernel (the trusted part of an OS); it identifies the kernel and protects I/O and memory]
     • A single chip is easier and cheaper to protect
     • The processor authenticates itself, identifies the security kernel, and protects off-chip memory

  6. Contributions
     • Physical Random Functions (PUFs)
       – A cheap and secure way to authenticate the processor
     • Architecture to minimize the trusted code base
       – Efficient use of protection mechanisms
       – Reduces the code that must be verified
     • Integration of protection mechanisms
       – Additional checks in the MMU
       – Off-chip memory encryption and integrity verification (IV)
     • Evaluation of a fully functional RTL implementation
       – Area estimate
       – Performance measurement

  7. Physical Random Function (PUF – Physical Unclonable Function)

  8. Problem
     Storing digital information in a device in a way that is resistant to physical attacks is difficult and expensive.
     [Figure: an adversary probing a processor’s EEPROM/ROM]
     • Adversaries can physically extract secret keys from EEPROM while the processor is off
     • A trusted party must embed and test secret keys in a secure location
     • EEPROM adds additional complexity to manufacturing

  9. Our Solution: Physical Random Functions (PUFs)
     • Generate keys from a complex physical system
       [Figure: a challenge (c bits) configures a physical system on the processor, which produces a response (n bits) used as a secret; the system is hard to fully characterize or predict, and many secrets can be generated by changing the challenge]
     • Security advantages
       – Keys are generated on demand → no non-volatile secrets
       – No need to program the secret
       – Can generate multiple master keys
     • What is hard to predict, but easy to measure?

  10. Silicon PUF – Concept
      • Because of random process variations, no two integrated circuits are identical, even with the same layout
        – Variation is inherent in the fabrication process
        – Hard to remove or predict
        – Relative variation increases as the fabrication process advances
      • Experiments in which identical circuits with identical layouts were placed on different ICs show that path delays vary enough across ICs to use them for identification.
        [Figure: a combinatorial circuit maps a c-bit challenge to an n-bit response]

  11. A (Simple) Silicon PUF [VLSI’04]
      [Figure: a c-bit challenge selects one of two crossing paths at each stage; a rising edge races through both paths to an arbiter latch]
      • Each challenge creates two paths through the circuit that are excited simultaneously. The digital response of 0 or 1 is based on a comparison of the path delays by the arbiter: 1 if the top path is faster, else 0.
      • We can obtain n-bit responses from this circuit by either duplicating the circuit n times or using n different challenges.
      • Only standard digital logic is used → no special fabrication
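The arbiter circuit above can be sketched as a small behavioral model. This is a toy simulation, not the hardware design: per-chip path delays are drawn from a seeded generator standing in for process variation, and the stage count of 64 is an illustrative choice.

```c
#include <stdint.h>

#define PUF_STAGES 64  /* c = 64 challenge bits (illustrative choice) */

/* Toy behavioral model of an arbiter PUF. Each stage has two paths whose
   delays carry a small per-chip variation, drawn from a deterministic
   LCG seeded per "chip". */
typedef struct {
    /* delay[stage][challenge bit][0 = top path, 1 = bottom path] */
    uint32_t delay[PUF_STAGES][2][2];
} arbiter_puf;

static uint32_t lcg(uint32_t *s) {
    *s = *s * 1664525u + 1013904223u;
    return *s >> 16;
}

void puf_init(arbiter_puf *p, uint32_t chip_seed) {
    uint32_t s = chip_seed;
    for (int i = 0; i < PUF_STAGES; i++)
        for (int b = 0; b < 2; b++)
            for (int path = 0; path < 2; path++)
                /* nominal delay 1000 units plus a small per-chip variation */
                p->delay[i][b][path] = 1000u + lcg(&s) % 16u;
}

/* Race a rising edge through both paths; the arbiter outputs 1 if the
   top path arrives first. A '1' challenge bit swaps the two paths
   (the crossing switch in the figure). */
int puf_response_bit(const arbiter_puf *p, const uint8_t *challenge) {
    uint32_t top = 0, bottom = 0;
    for (int i = 0; i < PUF_STAGES; i++) {
        int b = challenge[i] & 1;
        top    += p->delay[i][b][0];
        bottom += p->delay[i][b][1];
        if (b) { uint32_t t = top; top = bottom; bottom = t; }
    }
    return top < bottom;
}
```

Because the delays are fixed per chip, the same challenge always yields the same bit on one chip, while chips with different variation profiles can answer differently, which is exactly the identification property the next slides measure.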

  12. PUF Experiments
      • Fabricated 200 “identical” chips with PUFs in TSMC 0.18 μm technology on 5 different wafer runs
      • Security – What is the probability that a challenge produces different responses on two different PUFs?
      • Reliability – What is the probability that a PUF output for a challenge changes with temperature? With voltage variation?

  13. Inter-Chip Variation
      • Apply random challenges and observe 100 response bits
      [Figure: probability density of Hamming distance (number of differing bits, out of 100), with a measurement-noise peak near 0 and an inter-chip-variation peak near 25]
      • Measurement noise for Chip X = 0.9 bits
      • Distance between Chip X and Chip Y responses = 24.8 bits
      → Can identify individual ICs

  14. Environmental Variations
      • What happens if we change voltage and temperature?
      [Figure: probability density of Hamming distance for measurement noise, temperature-variation noise, voltage-variation noise, and inter-chip variation]
      • Measurement noise at 125 °C (baseline at 20 °C) = 3.5 bits
      • Measurement noise with 10% voltage variation = 4 bits
      • Even with environmental variation, we can still distinguish two different PUFs
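The identification rule implied by these histograms is a Hamming-distance threshold: two 100-bit readings are attributed to the same chip when their distance sits near the noise peak (under ~4 bits in the worst case) rather than the inter-chip peak (~25 bits). A minimal sketch, with an illustrative threshold that is not taken from the paper:

```c
/* Hamming distance between two n-bit responses stored one bit per byte. */
int hamming_distance(const unsigned char *a, const unsigned char *b, int nbits) {
    int d = 0;
    for (int i = 0; i < nbits; i++)
        d += (a[i] ^ b[i]) & 1;
    return d;
}

/* Threshold placed between the measured noise peak (~4 bits) and the
   inter-chip peak (~25 bits); the value 12 is an illustrative choice. */
int same_chip(const unsigned char *a, const unsigned char *b, int nbits) {
    return hamming_distance(a, b, nbits) < 12;
}
```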

  15. Reliable PUFs
      • PUFs can be made more secure and reliable by adding extra control logic
      [Figure: the PUF’s n-bit response passes through BCH decoding, using an (n−k)-bit syndrome produced by BCH encoding during calibration, and then through a one-way hash function to yield the new k-bit response used for re-generation]
      • A hash function (SHA-1, MD5) precludes PUF “model-building” attacks: to obtain the PUF output, an adversary has to invert a one-way function
      • An error-correcting code (ECC) can eliminate the measurement noise without compromising security
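The calibrate/re-generate flow can be sketched with a 3x repetition code standing in for BCH (a much weaker code, used here only to keep the example short; the final one-way hash step is omitted). The syndrome is public: XOR-ed against a fresh PUF reading, it lets majority voting cancel up to one flipped bit per group, but it reveals nothing about the reference bits without a PUF reading.

```c
#define SK_BITS 16   /* k secret bits (illustrative size) */
#define REP     3    /* repetition factor standing in for BCH */

/* Calibration: encode k reference bits with a 3x repetition code and
   publish the XOR of the codeword and the raw PUF bits as the syndrome. */
void puf_calibrate(const int *ref, const int *puf_raw, int *syndrome) {
    for (int i = 0; i < SK_BITS * REP; i++)
        syndrome[i] = ref[i / REP] ^ puf_raw[i];
}

/* Re-generation: XOR a fresh (noisy) PUF reading with the stored
   syndrome, then majority-vote each group of REP bits, cancelling up
   to one flipped bit per group of measurement noise. */
void puf_regenerate(const int *puf_noisy, const int *syndrome, int *out) {
    for (int i = 0; i < SK_BITS; i++) {
        int ones = 0;
        for (int j = 0; j < REP; j++)
            ones += puf_noisy[i * REP + j] ^ syndrome[i * REP + j];
        out[i] = ones > REP / 2;
    }
}
```

A real design would use BCH for much better error rates at a smaller syndrome, and would hash the recovered bits before use, as the slide describes.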

  16. Architecture Overview

  17. Authentication
      • The processor identifies the security kernel by computing the kernel’s hash (on the l.enter.aegis instruction)
        – Similar to ideas in TCG TPM and Microsoft NGSCB
        – The security kernel identifies application programs
      • H(SKernel) is used to produce a unique key for the security kernel from a PUF response (l.puf.secret instruction)
        – The security kernel provides a unique key for each application
      [Figure: the application (DistComp) with hash H(App) runs on the security kernel with hash H(SKernel); a Message Authentication Code (MAC) ties them together]
      → A server can authenticate the processor, the security kernel, and the application
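The key flow on this slide can be sketched as a chain of derivations. This is only a flow illustration under stated assumptions: the 64-bit mixer stands in for the real hash, and the function names are invented, not the AEGIS interface.

```c
#include <stdint.h>

/* Toy 64-bit one-way mixer standing in for a cryptographic hash. */
static uint64_t mix64(uint64_t h, uint64_t x) {
    h ^= x;
    h *= 0x100000001b3ULL;   /* FNV-style multiply */
    return h ^ (h >> 29);
}

/* l.puf.secret, conceptually: the kernel's key binds the PUF response
   (the chip's identity) to H(SKernel) (the kernel's identity). */
uint64_t kernel_key(uint64_t puf_response, uint64_t h_skernel) {
    return mix64(mix64(0xcbf29ce484222325ULL, puf_response), h_skernel);
}

/* The kernel then derives a per-application key from H(App)... */
uint64_t app_key(uint64_t kkey, uint64_t h_app) {
    return mix64(kkey, h_app);
}

/* ...and the application MACs its message with that key
   (a sketch of the flow, not a real MAC construction). */
uint64_t toy_mac(uint64_t key, const uint64_t *msg, int n) {
    uint64_t h = mix64(0x9e3779b97f4a7c15ULL, key);
    for (int i = 0; i < n; i++)
        h = mix64(h, msg[i]);
    return h;
}
```

A server holding the expected PUF response, H(SKernel), and H(App) can recompute the same key chain, so a valid MAC vouches for all three at once.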

  18. Protecting Program State
      • On-chip registers and caches
        – The security kernel handles context switches and permission checks in the MMU
      [Figure: the processor encrypts/decrypts and integrity-verifies every write to and read from external memory]
      • Memory encryption [MICRO36][Yang 03]
        – Counter-mode encryption
      • Integrity verification [HPCA’03, MICRO36, IEEE S&P ’05]
        – Hash trees
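Counter-mode encryption, as cited above, XORs each memory block with a pad computed from the key, the block address, and a per-block counter that is bumped on every write, so identical plaintexts never reuse a pad. A minimal sketch, with a toy PRF standing in for the real block cipher:

```c
#include <stdint.h>

/* Toy keyed PRF standing in for the block cipher a real design uses. */
static uint64_t toy_prf(uint64_t key, uint64_t tweak) {
    uint64_t h = key ^ tweak;
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    h *= 0xc4ceb9fe1a85ec53ULL;
    return h ^ (h >> 33);
}

/* Counter-mode sketch: the pad depends on (address, counter), so data
   written at different times or places encrypts differently, and the
   pad can be computed in parallel with the memory access. XOR-ing the
   same pad again decrypts. */
void ctr_crypt(uint64_t key, uint64_t addr, uint64_t counter,
               const uint64_t *in, uint64_t *out, int nwords) {
    for (int i = 0; i < nwords; i++)
        out[i] = in[i] ^ toy_prf(key, (addr + 8u * i) ^ (counter << 48));
}
```

The appeal of counter mode here is latency hiding: the pad generation overlaps the off-chip memory fetch, so decryption adds only a final XOR on the critical path.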

  19. A Simple Protection Model
      • How should we apply the authentication and protection mechanisms?
      • What to protect?
        – All instructions and data
        – Both integrity and privacy
      • What to trust?
        – The entire program code
        – Any part of the code can read/write protected data
      [Figure: the memory space holds program code (instructions), initialized data (.rodata, .bss), and uninitialized data (stack, heap); everything is encrypted and integrity verified, and the hash of the program code serves as the program identity]

  20. What Is Wrong?
      • Large trusted code base
        – Difficult to verify to be bug-free
        – How can we trust shared libraries?
      • Applications/functions have varying security requirements
        – Do all code and data need privacy?
        – Do I/O functions need to be protected?
      → Unnecessary performance and power overheads
      • The architecture should provide flexibility so that software can choose the minimum required trust and protection

  21. Distributed Computation Example

      DistComp() {
        x = Receive();
        result = Func(x);
        key = get_puf_secret();
        mac = MAC(x, result, key);
        Send(result, mac);
      }

      • Obtaining a secret key and computing a MAC – needs both privacy and integrity
      • Computing the result – only needs integrity
      • Receiving the input and sending the result (I/O) – no need for protection, no need to be trusted

  22. AEGIS Memory Protection
      • The architecture provides five different memory regions; applications choose how to use them
      • Static (read-only)
        – Integrity verified
        – Integrity verified & encrypted
      • Dynamic (read-write)
        – Integrity verified
        – Integrity verified & encrypted
      • Unprotected
      • Only code in the verified regions is authenticated
      [Figure: memory-space layout for the example – Receive()/Send() code and data are unprotected, MAC() data is dynamic encrypted, Func() data is dynamic verified, and Func()/MAC() code is static verified]
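The five region types map to simple per-access decisions that an MMU-side check could make. A minimal sketch; the enum and function names are invented for illustration, not the AEGIS interface:

```c
/* The five region types of the slide above. */
typedef enum {
    REGION_UNPROTECTED,
    REGION_STATIC_VERIFIED,     /* read-only, integrity verified */
    REGION_STATIC_ENCRYPTED,    /* read-only, verified & encrypted */
    REGION_DYNAMIC_VERIFIED,    /* read-write, integrity verified */
    REGION_DYNAMIC_ENCRYPTED    /* read-write, verified & encrypted */
} mem_region;

/* Everything except the unprotected region is covered by integrity
   verification (the hash tree). */
int needs_integrity_check(mem_region r) { return r != REGION_UNPROTECTED; }

/* Only the encrypted variants also get counter-mode encryption. */
int needs_encryption(mem_region r) {
    return r == REGION_STATIC_ENCRYPTED || r == REGION_DYNAMIC_ENCRYPTED;
}

/* Static regions are read-only; writes are allowed only to dynamic
   and unprotected regions. */
int write_allowed(mem_region r) {
    return r == REGION_UNPROTECTED ||
           r == REGION_DYNAMIC_VERIFIED ||
           r == REGION_DYNAMIC_ENCRYPTED;
}

/* Only code fetched from the verified (static) regions counts toward
   the authenticated program identity. */
int code_authenticated(mem_region r) {
    return r == REGION_STATIC_VERIFIED || r == REGION_STATIC_ENCRYPTED;
}
```

Letting software pick the weakest sufficient region per object is what removes the blanket overhead criticized on slide 20.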

  23. Suspended Secure Processing (SSP)
      • Two security levels within a process
        – Untrusted code such as Receive() and Send() should have less privilege
      • The architecture ensures that SSP mode cannot tamper with secure processing
        – No permission for protected memory
        – Secure processing can only be resumed at a specific point
      [Figure: mode-transition diagram – a process starts in the insecure STD mode, enters the secure TE/PTR modes at start-up by computing the hash, and may suspend into SSP mode and later resume]
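The two SSP rules above (no protected-memory access while suspended, resume only at a registered point) can be sketched as a small state machine. This collapses the slide's several modes down to secure vs. suspended and assumes a single registered resume address; the names are invented, not the hardware interface.

```c
typedef enum { AEGIS_SECURE, AEGIS_SSP } aegis_mode;

typedef struct {
    aegis_mode mode;
    unsigned long resume_pc;  /* only meaningful while suspended */
} aegis_state;

/* Suspend secure processing before running untrusted code, recording
   the one program counter at which resumption is permitted. */
int aegis_suspend(aegis_state *s, unsigned long resume_pc) {
    if (s->mode != AEGIS_SECURE) return -1;
    s->mode = AEGIS_SSP;
    s->resume_pc = resume_pc;
    return 0;
}

/* Resuming at any other PC is rejected, so untrusted code cannot jump
   into the middle of secure processing. */
int aegis_resume(aegis_state *s, unsigned long pc) {
    if (s->mode != AEGIS_SSP || pc != s->resume_pc) return -1;
    s->mode = AEGIS_SECURE;
    return 0;
}

/* While suspended, accesses to protected memory are denied. */
int protected_access_allowed(const aegis_state *s) {
    return s->mode == AEGIS_SECURE;
}
```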

  24. Implementation & Evaluation
