Authenticated Storage Using Small Trusted Hardware



SLIDE 1

Authenticated Storage Using Small Trusted Hardware

Hsin-Jung Yang, Victor Costan, Nickolai Zeldovich, and Srini Devadas

Massachusetts Institute of Technology

November 8th, CCSW 2013

SLIDE 2

Cloud Storage Model

SLIDE 3

Cloud Storage Requirements

  • Privacy

– Sol: encryption at the client side

  • Availability

– Sol: appropriate data replication

  • Integrity

– Sol: digital signatures & message authentication codes

  • Freshness

– Hard to guarantee due to replay attacks

SLIDE 4

Cloud Storage: Replay Attack

[Figure: replay-attack sequence among User A, the cloud server, and User B]

SLIDE 14

Cloud Storage: Replay Attack

[Figure: replay-attack sequence among User A, the cloud server, and User B]

Software solution: the two users contact each other directly

SLIDE 15

Solution: Adding Trusted Hardware

SLIDE 20

Solution: Adding Trusted Hardware

[Figure: secure NVRAM and computational engines on a single chip. Slow under an NVRAM process!]

SLIDE 23

Solution: Adding Trusted Hardware

[Figure: secure NVRAM on the state chip (S chip, a smart card), securely paired with the computational engines on the processing chip (P chip, an FPGA/ASIC). Fast!]

SLIDE 24

Outline

  • Motivation: Cloud Storage and Security Challenges
  • System Design

– Threat Model &amp; System Overview
– Security Protocols
– Crash Recovery Mechanism

  • Implementation
  • Evaluation
  • Conclusion
SLIDE 26

Threat Model

  • Untrusted connections
  • Disk attacks and hardware failures
  • Untrusted server that may:

– (1) send a wrong response
– (2) pretend to be a client
– (3) maliciously crash
– (4) disrupt the P chip’s power

  • Clients may try to modify each other’s data
SLIDE 30

System Overview

  • Client <-> S-P chip pair: shared HMAC key
  • S-P chip pair: integrity/freshness checks, system state storage and updates, signing responses
  • Server: communication, scheduling, disk I/O
SLIDE 31

Security Protocols

  • Message Authentication
  • Memory Authentication
  • Write Access Control
  • System State Protection against Power Loss
SLIDE 32

Design: Message Authentication

  • Untrusted network between client and server

– Sol: HMAC technique

  • Session-based protocol (HMAC key)

[Protocol, reconstructed from the flattened diagram: the S-P chip pair holds an endorsement key pair (PubEK, PrivEK). The server sends PubEK and the endorsement certificate (Ecert) to the client. The client generates a session HMAC key, encrypts it as {HMAC key}PubEK, and sends it through the server to the S-P chip pair, which decrypts the key. The client and the chip pair now share the HMAC key; the server only ever sees it encrypted.]
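The session protocol above leaves both ends sharing an HMAC key that the server never sees in the clear. A minimal Python sketch of the per-message authentication step (the key-exchange step is elided; `authenticate` and `verify` are illustrative names, not the system's API):

```python
import hmac, hashlib, os

def authenticate(key: bytes, message: bytes) -> bytes:
    """Compute the HMAC tag a client attaches to a request."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Chip-side check; constant-time comparison resists timing attacks."""
    return hmac.compare_digest(authenticate(key, message), tag)

session_key = os.urandom(32)              # HMAC key chosen by the client
msg = b"WRITE block=4 data=..."
tag = authenticate(session_key, msg)
assert verify(session_key, msg, tag)                    # genuine request passes
assert not verify(session_key, b"WRITE block=5", tag)   # tampering detected
```

Because only the client and the S-P chip pair hold the key, a valid tag proves the message was not forged or altered by the server or the network.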

SLIDE 33

Security Protocols

  • Message Authentication
  • Memory Authentication
  • Write Access Control
  • System State Protection against Power Loss
SLIDE 34

Design: Memory Authentication

  • Data protection against untrusted disk
  • Block-based cloud storage API

– Fixed block size (1 MB)
– Write(block number, block)
– Read(block number) -> block
– Easy to reason about the security

[Figure: disk divided into blocks B1 B2 B3 B4 ...]

SLIDE 37

Design: Memory Authentication

Disk is divided into many blocks

  • Solution: Merkle tree

[Figure: binary Merkle tree over blocks B1..B8. Leaves h1 = H(B1), ..., h8 = H(B8); internal nodes h12 = H(h1 h2), ..., up through h1..4 and h5..8; the root hash h1..8 is securely stored. A block is verified along its path up to the root.]
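The tree construction and path verification in the figure can be illustrated with a toy Python sketch (SHA-256 over 8 small blocks; function names are hypothetical, and the real system hashes 1 MB blocks):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return list of levels, leaves first; root is levels[-1][0]."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling hashes from leaf to root for block `index`."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))  # (hash, is-left-sibling)
        index //= 2
    return path

def verify_block(root, block, index, path):
    """Recompute the path upward and compare against the trusted root."""
    h = H(block)
    for sibling, sibling_is_left in path:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root

blocks = [bytes([i]) * 16 for i in range(8)]   # 8 toy "disk blocks"
levels = build_tree(blocks)
root = levels[-1][0]                            # stored in secure NVRAM
proof = prove(levels, 5)
assert verify_block(root, blocks[5], 5, proof)          # genuine block verifies
assert not verify_block(root, b"tampered", 5, proof)    # disk attack detected
```

Only the root hash needs trusted storage: any modified or stale block fails verification somewhere along its path.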

SLIDE 39

Merkle Tree Caching

  • Caching policy is controlled by the server

Node #   Hash                   Verified   Left child   Right child
1        fabe3c05d8ba995af93e   Y          Y            N
2        e6fc9bc13d624ace2394   Y          Y            Y
4        53a81fc2dcc53e4da819   Y          N            N
5        b2ce548dfa2f91d83ec6   Y          N            N

P chip cache management commands: LOAD, VERIFY, UPDATE
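One plausible reading of the LOAD/VERIFY/UPDATE commands, sketched in Python (the class and method names are hypothetical; the real P chip enforces this logic in hardware, and the untrusted server only chooses which nodes to cache):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class TreeCache:
    """Illustrative P-chip node cache: server-loaded nodes are untrusted
    until they hash up to an already-verified ancestor."""
    def __init__(self, root_hash: bytes):
        self.nodes = {1: root_hash}   # node 1 is the root, trusted by definition
        self.verified = {1}

    def load(self, node_id: int, node_hash: bytes):
        self.nodes[node_id] = node_hash          # untrusted until VERIFY

    def verify(self, node_id: int) -> bool:
        """Mark both children verified if they hash to a verified parent."""
        left, right = 2 * node_id, 2 * node_id + 1
        if node_id in self.verified and \
           H(self.nodes[left] + self.nodes[right]) == self.nodes[node_id]:
            self.verified |= {left, right}
            return True
        return False

    def update(self, node_id: int, new_hash: bytes):
        """After an authorized write, recompute hashes up to the root."""
        self.nodes[node_id] = new_hash
        while node_id > 1:
            node_id //= 2
            self.nodes[node_id] = H(self.nodes[2 * node_id] +
                                    self.nodes[2 * node_id + 1])

h2, h3 = H(b"left subtree"), H(b"right subtree")
cache = TreeCache(H(h2 + h3))            # root hash from secure NVRAM
cache.load(2, h2)
cache.load(3, h3)
assert cache.verify(1)                   # children now proven against the root
cache.update(2, H(b"updated subtree"))   # write path recomputed up to the root
assert cache.nodes[1] == H(H(b"updated subtree") + h3)
```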

SLIDE 40

Security Protocols

  • Message Authentication
  • Memory Authentication
  • Write Access Control
  • System State Protection against Power Loss
SLIDE 41

Design: Write Access Control

  • Goal: to ensure all writes are authorized and fresh
  • Coherence model assumption:

– Clients should be aware of the latest update

  • Unique write access key (Wkey)

– Shared between the authorized writers and the S-P chip pair

  • Revision number (Vid)

– Incremented on each write operation

[Figure: blocks B1 B2 B3 B4; the S-P chip pair holds Wkey]

SLIDE 42

Design: Write Access Control

  • Protect Wkey and Vid

– Add another layer at the bottom of the Merkle tree

[Figure: the leaf for block B8 changes from h8 = H(B8) to h'8, which binds h8, the revision number Vid, and H(Wkey) together; h'8 takes h8's place under h78 in the Merkle tree.]
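A sketch of that extra leaf layer in Python. The exact field order inside the hash is an assumption, and `leaf` is a hypothetical name; the slide only shows that h'8 binds h8 = H(B8), the revision number Vid, and H(Wkey) together:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Hash a concatenation of fields (field order is an assumption)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def leaf(block: bytes, vid: int, wkey: bytes) -> bytes:
    """Augmented Merkle leaf: binds block hash, revision number, and H(Wkey)."""
    return H(H(block), vid.to_bytes(8, "big"), H(wkey))

wkey = b"shared-write-access-key"
old = leaf(b"block 8 contents", vid=7, wkey=wkey)
new = leaf(b"block 8 contents v2", vid=8, wkey=wkey)   # authorized write bumps Vid
replayed = leaf(b"block 8 contents", vid=7, wkey=wkey)
assert replayed == old and new != old   # a replayed write cannot advance Vid
```

Because Vid is folded into the tree, replaying an old (correctly signed) write produces a stale leaf that no longer matches the current root hash, and a writer who does not know Wkey cannot produce a valid leaf at all.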

SLIDE 43

Security Protocols

  • Message Authentication
  • Memory Authentication
  • Write Access Control
  • System State Protection against Power Loss
SLIDE 44

Design: System State Protection

  • Goal: to avoid losing the latest system state

– The server may interrupt the P chip’s power supply

  • Solution: root hash storage protocol

[Protocol, reconstructed from the flattened diagram: the client's request reaches the P chip through the server; the P chip computes the response but holds it, stores the new state (root hash) on the S chip, and only then releases the response back through the server to the client.]
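The hold/store/release ordering can be sketched as follows (hypothetical classes; the only point is that the response is released strictly after the S chip has persisted the new root hash, so a power cut cannot leave a client holding a response for state that was never stored):

```python
class SChip:
    """Stand-in for the smart card's secure NVRAM."""
    def __init__(self):
        self.nvram_root = None

    def store(self, root_hash: bytes) -> bool:
        self.nvram_root = root_hash   # durable secure-NVRAM write
        return True                   # acknowledgement back to the P chip

class PChip:
    def __init__(self, s_chip: SChip):
        self.s_chip = s_chip

    def handle(self, request: str, new_root: bytes) -> str:
        response = f"signed-response({request})"   # 1. compute, then hold
        if not self.s_chip.store(new_root):        # 2. persist state first
            raise RuntimeError("state not stored; response withheld")
        return response                            # 3. release to the client

s = SChip()
p = PChip(s)
assert p.handle("WRITE block=4", b"new-root") == "signed-response(WRITE block=4)"
assert s.nvram_root == b"new-root"   # state persisted before the release
```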

SLIDE 45

Design: Crash Recovery Mechanism

  • Goal: to recover the system from crashes

– Even if the server crashes, the disk can be recovered to be consistent with the root hash stored on the S chip

  • Solution:
SLIDE 46

Implementation

  • ABS (authenticated block storage) server architecture
SLIDE 47

Implementation

  • ABS client model
SLIDE 48

Performance Evaluation

  • Experiment configuration

– Disk size: 1 TB
– Block size: 1 MB
– Server: Intel Core i7-980X 3.33 GHz 6-core processor with 12 GB of DDR3-1333 RAM
– FPGA: Xilinx Virtex-5 XC5VLX110T
– Client: Intel Core i7-920X 2.67 GHz 4-core processor
– FPGA-server connection: Gigabit Ethernet
– Client-server connection: Gigabit Ethernet

SLIDE 49

File System Benchmarks (Mathematica)

  • Fast network:

– Latency: 0.2 ms
– Bandwidth: 1 Gbit/s

[Figure: throughput for pure writes, reads + writes, and pure reads]

SLIDE 50

File System Benchmarks (Mathematica)

  • Slow network:

– Latency: 30.2 ms
– Bandwidth: 100 Mbit/s

[Figure: throughput for pure writes, reads + writes, and pure reads]

SLIDE 51

File System Benchmarks (Modified Andrew Benchmark)

  • Slow network:

– Latency: 30.2 ms
– Bandwidth: 100 Mbit/s

SLIDE 53

Customized Solutions

  • Hardware requirements
  • Estimated performance

Hardware requirements:

Demand            Focused Performance        Budget
Connection        PCIe x16 (P) / USB (S)     USB
Hash Engine       8 + 1 (Merkle)             0 + 1 (Merkle)
Tree Cache        large                      none
Response Buffer   2 KB                       300 B

Estimated performance:

Demand                        Focused Performance    Budget
Random Write   Throughput     2.4 GB/s               377 MB/s
               Latency        12.3 ms + 32 ms        2.7 ms + 32 ms
Random Read    Throughput     2.4 GB/s
               Latency        0.4 ms
# HDDs supported              24                     4

Budget solution: single chip!

SLIDE 54

Conclusion

  • We built an authenticated storage system that

– efficiently ensures data integrity and freshness
– prevents unauthorized and replayed writes
– can recover from accidental or malicious crashes

  • The system has about 10% performance overhead on a network with 30 ms latency and 100 Mbit/s bandwidth
  • We provide customized solutions

– with limited resources: single-chip solution
– with more hardware resources: two-chip solution

SLIDE 55

Thank You!