Abusing hardware for fun and profit - PowerPoint PPT Presentation



SLIDE 1

Abusing hardware for fun and profit

SLIDE 2

Agenda

  • Cache-based Covert channels w/ demo
  • Spectre and Meltdown from covert channels
SLIDE 3

Process isolation + OS (CS 423)

Process memory: 0x00000000 to 0xffffffff

  • Communication to other processes via e.g. #include <sys/socket.h>, send(), recv()
  • OS services: paging, virtual memory, threading, etc.

SLIDE 4

Programs run on processors

  • Processor that the OS would have you see …

  • Real processors

[Diagram: cores (datapath + L1 I-cache + L1 D-cache), backed by L2 cache, L3 cache, and DRAM (and/or: stacked DRAM, HMC, NVMs). The OS swaps work on/off cores.]

Cache = on-chip memory, faster to access than DRAM.

SLIDE 5

Hardware Covert Channels

  • Talk to your friends without the OS’s help or knowledge
  • No header files → no socket/etc, no OS-sanctioned communication
  • Exploit properties of your hardware :)

[Diagram: core (datapath + L1 I-cache + L1 D-cache), L2 cache, L3 cache, DRAM (and/or: stacked DRAM, HMC, NVMs)]

L3 cache shared by all processes running on system!

SLIDE 6

Processor caches

Motivation

  • Programs have locality
  • Memory access cost ∝ memory size

Block placement/replacement policies tell us where blocks can live and when they move.

  • Core-facing API: Read(addr), Write(addr, word)
  • Backend API: Fill(addr, line), Evict(addr)

[Diagram: L1 and L2 caches, each organized as (# sets) × (# ways); Read/Write between core and L1, Fill/Evict between cache levels]

Which set? 2 or more Li+1 cache sets statically map to 1 Li set. Which way? Determined by replacement policy.

SLIDE 7

Why is cache design relevant?

  • Two processes can agree on “dead drops” on the processor hardware, to pass information under the OS’s nose

Cache covert channel protocol:
  • Process 1 (sender): repeatedly accesses lines in set i.
  • Process 2 (receiver): t1 = rdtsc(); repeatedly accesses lines in set i; t2 = rdtsc(); if (t2 – t1 > thresh) read ‘1’, else read ‘0’.

SLIDE 8

We made a virtual “wire”, now what?

  • Remember TCP?
  • Virtual wire + de-noising + re-transmission + wrapper API = a reliable channel

The virtual wire here: cache pressure!

SLIDE 9

Demo

SLIDE 10

Fun! How else can I do this?

  • Processes share …

branch predictors, cores, caches, RNG modules, DRAM, …

  • All of which can be (and have been) turned into “virtual wires”
  • And they are pretty fast (~1 Mb/sec on the high end)

[Diagram: core (datapath + L1 I-cache + L1 D-cache), L2 cache, L3 cache, DRAM (and/or: stacked DRAM, HMC, NVMs); RNG not shown]

SLIDE 11

Practical uses

  • Talk to your friends for fun
  • Malware can inter-communicate w/o the OS realizing it
  • Different VMs sharing the same box on (e.g.) Amazon AWS can talk
  • Side channel attacks
  • Learn private information about co-resident processes
SLIDE 12

Side channel attacks

  • Shared resource pressure can also lead to side channel attacks
  • E.g., RSA encryption: msg = Decrypt_key(Encrypt_key(msg))
SLIDE 13
SLIDE 14

Ingredients

  • Covert channel
  • Speculation
  • OS mapped to process address space (for Meltdown)
  • Branch prediction (for Spectre)
SLIDE 15

Out-of-order, speculative processor core

BOOM spec: https://github.com/ccelio/riscv-boom-doc/raw/gh-pages/boom-spec.pdf

      xor  sum, 0, 0
      xor  d, 0, 0
loop: add  $t0, d, &P1
      lw   P1d, 0($t0)
      add  $t0, d, &P2
      lw   P2d, 0($t0)
      sub  $t0, P1d, P2d
      mul  $t0, $t0, $t0
      add  sum, sum, $t0
      addi d, d, 1
      ble  loop, d, LEN
post: blt  end, best, sum
      add  best, sum, 0
end:

[Diagram: instructions in flight in the out-of-order core: add, lw, add, lw, sub, mul, add, addi, ble]