SLIDE 1

NoHype: Virtualized Cloud Infrastructure without the Virtualization

Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee

ISCA 2010

Princeton University

SLIDE 2

Virtualized Cloud Infrastructure

  • Run virtual machines on a hosted infrastructure
  • Benefits…

– Economies of scale
– Dynamically scale (pay for what you use)

SLIDE 3

Without the Virtualization

  • Virtualization used to share servers

– Software layer running under each virtual machine

[Diagram: Guest VM1 and Guest VM2 (Apps on OS) running on a hypervisor atop the physical hardware, replicated across servers]

SLIDE 4

Without the Virtualization

  • Virtualization used to share servers

– Software layer running under each virtual machine

  • Malicious software can run on the same server

– Attack the hypervisor
– Access/obstruct other VMs

[Diagram: same as Slide 3]

SLIDE 5

Are these vulnerabilities imagined?

  • No headlines yet… but that doesn’t mean the threat isn’t real

– Not enticing enough to hackers yet? (small market size, lack of confidential data)

  • Virtualization layer huge and growing

– ~100,000 lines of code in the hypervisor
– ~1 million lines in the privileged virtual machine

  • Derived from existing operating systems

– Which have security holes

SLIDE 6

NoHype

  • NoHype removes the hypervisor

– There’s nothing left to attack
– A complete systems solution
– Still meets the needs of a virtualized cloud infrastructure

[Diagram: Guest VM1 and Guest VM2 (Apps on OS) running directly on the physical hardware; no hypervisor]

SLIDE 7

Virtualization in the Cloud

  • Why does a cloud infrastructure use virtualization?

– To support dynamically starting/stopping VMs
– To allow servers to be shared (multi-tenancy)

  • Do not need full power of modern hypervisors

– Emulating diverse (potentially older) hardware
– Maximizing server consolidation

SLIDE 8

Roles of the Hypervisor

  • Isolating/Emulating resources

– CPU: Scheduling virtual machines
– Memory: Managing memory
– I/O: Emulating I/O devices

  • Networking
  • Managing virtual machines

SLIDE 9

Roles of the Hypervisor

[Build of Slide 8; annotation added: “Push to HW / Pre-allocation”]

SLIDE 10

Roles of the Hypervisor

[Build of Slide 8; annotations: “Push to HW / Pre-allocation”, “Remove”]

SLIDE 11

Roles of the Hypervisor

[Build of Slide 8; annotations: “Push to HW / Pre-allocation”, “Remove”, “Push to side”]

SLIDE 12

Roles of the Hypervisor

[Build of Slide 8; annotations: “Push to HW / Pre-allocation”, “Remove”, “Push to side”]

NoHype has a double meaning… “no hype”

SLIDE 13

Scheduling Virtual Machines

  • Scheduler is invoked each time the hypervisor runs (periodically, on I/O events, etc.)

– Chooses what to run next on a given core
– Balances load across cores

[Timeline diagram (“Today”): the hypervisor runs between VM time slices at each timer or I/O switch event]

SLIDE 14

Dedicate a core to a single VM

  • Ride the multi-core trend

– 1 core on a 128-core device is ~0.8% of the processor

  • Cloud computing is pay-per-use

– During high demand, spawn more VMs
– During low demand, kill some VMs
– Customers maximize each VM’s work, which minimizes the opportunity for over-subscription


NoHype
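NoHype dedicates cores in hardware, but the one-VM-per-core idea is easy to approximate on a stock Linux host. A minimal sketch (an analogue, not the paper's mechanism) that uses sched_setaffinity(2) to pin the current process, standing in for a VM's virtual CPU, to one dedicated core; the core number is a hypothetical choice:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int core = 1;                /* hypothetical core dedicated to this "VM" */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        /* Restrict this process to exactly one core, approximating
         * NoHype's one-VM-per-core dedication. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }
        printf("pinned to core %d; the OS scheduler will not move us\n", core);
        return EXIT_SUCCESS;
    }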

SLIDE 15

Managing Memory

  • Goal: system-wide optimal usage

– i.e., maximize server consolidation

  • Hypervisor controls allocation of physical memory

[Chart (“Today”): the hypervisor divides physical memory (scale 100 to 600 units) among VM/app 1 (max 400), VM/app 2 (max 300), and VM/app 3 (max 400)]

SLIDE 16

Pre-allocate Memory

  • In cloud computing: charged per unit

– e.g., VM with 2GB memory

  • Pre-allocate a fixed amount of memory

– Memory is fixed and guaranteed
– Guest VM manages its own physical memory (deciding which pages to swap to disk)

  • Processor support for enforcing:

– allocation and bus utilization (see the sketch below)


NoHype
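The real enforcement is in the processor, but the fixed-and-guaranteed grant can be illustrated from userspace. A hedged sketch using mmap(2) plus mlock(2) on Linux; the 64 MiB size is a stand-in for the slide's 2 GB example, and mlock may require raising RLIMIT_MEMLOCK:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t bytes = 64UL * 1024 * 1024;   /* hypothetical per-VM memory grant */
        void *slab = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (slab == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
        /* Lock the range: it cannot be swapped out or reclaimed, so the
         * grant stays fixed and guaranteed, as in NoHype's pre-allocation. */
        if (mlock(slab, bytes) != 0) { perror("mlock"); return EXIT_FAILURE; }
        memset(slab, 0, bytes);              /* touch every page up front */
        printf("pre-allocated and locked %zu bytes\n", bytes);
        munmap(slab, bytes);                 /* also unlocks the range */
        return EXIT_SUCCESS;
    }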

SLIDE 17

Emulate I/O Devices

  • Guest sees virtual devices

– Access to a device’s memory range traps to the hypervisor
– Hypervisor handles interrupts
– Privileged VM emulates devices and performs I/O

[Diagram (“Today”): guest VM device accesses trap to the hypervisor, which hypercalls into a privileged VM (Priv. VM) that holds the real drivers and device-emulation code]

SLIDE 18

Emulate I/O Devices

[Build of Slide 17; same content]

SLIDE 19

Dedicate Devices to a VM

  • In cloud computing, only networking and storage devices matter
  • Static memory partitioning for enforcing access (see the sketch after the diagram)

– Processor enforces access to the device; IOMMU enforces access from the device

[Diagram (NoHype): Guest VM1 and Guest VM2 access their dedicated devices directly on the physical hardware]
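On today's hardware this dedication is done through the IOMMU with device passthrough. A hedged sketch of the standard Linux sysfs flow that hands a PCI device to the vfio-pci driver so a guest can own it directly; it assumes root, an enabled IOMMU, a loaded vfio-pci module, and a hypothetical device address 0000:03:00.0:

    #include <stdio.h>
    #include <stdlib.h>

    /* Write one string to a sysfs file; returns 0 on success. */
    static int write_str(const char *path, const char *s) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%s\n", s);
        return fclose(f);
    }

    int main(void) {
        const char *dev = "0000:03:00.0";    /* hypothetical device address */
        /* Mark the device so the next probe binds it to vfio-pci... */
        write_str("/sys/bus/pci/devices/0000:03:00.0/driver_override", "vfio-pci");
        /* ...detach it from whatever driver holds it now... */
        write_str("/sys/bus/pci/devices/0000:03:00.0/driver/unbind", dev);
        /* ...and trigger a re-probe so vfio-pci claims it. */
        write_str("/sys/bus/pci/drivers_probe", dev);
        puts("device handed to vfio-pci; a guest may now map it directly");
        return EXIT_SUCCESS;
    }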

SLIDE 20

Virtualize the Devices

  • Per-VM physical device doesn’t scale
  • Multiple queues on device

– Multiple memory ranges mapping to different queues

[Diagram: network card on the peripheral bus, with a classifier and MUX between the MAC/PHY and multiple queues facing the processor, chipset, and memory]

NoHype
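This queue-per-VM design is what later shipped as SR-IOV. A hedged sketch (assuming root, an SR-IOV-capable NIC, and a hypothetical PCI address) that carves a physical NIC into virtual functions through the kernel's standard sriov_numvfs interface:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* sriov_numvfs is the standard kernel knob for creating SR-IOV
         * virtual functions; the device address here is hypothetical. */
        const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
        FILE *f = fopen(path, "w");
        if (!f) { perror("fopen sriov_numvfs"); return EXIT_FAILURE; }
        fprintf(f, "4\n");                   /* request 4 virtual functions */
        if (fclose(f) != 0) { perror("fclose"); return EXIT_FAILURE; }
        puts("requested 4 VFs; each can be dedicated to one guest VM");
        return EXIT_SUCCESS;
    }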

SLIDE 21

Networking

  • Ethernet switches connect servers

[Diagram (“Today”): servers connected by a hardware Ethernet switch]

SLIDE 22

Networking (in virtualized server)

  • Software Ethernet switches connect VMs

[Diagram (“Today”): virtual servers connected by a software virtual switch]

SLIDE 23

Networking (in virtualized server)

[Build of Slide 22; the diagram now places Guest VM1 and Guest VM2 on a hypervisor]

SLIDE 24

Networking (in virtualized server)

[Build of Slide 23; the software switch runs inside the privileged VM (Priv. VM)]

SLIDE 25

Do Networking in the Network

  • Co-located VMs communicate through software

– Performance penalty for VMs that are not co-located
– A special case in cloud computing
– An artifact of going through the hypervisor anyway

  • Instead: utilize hardware switches in the network

– Modification to support hairpin turnaround (sending traffic back out the port it arrived on)


NoHype

SLIDE 26

Managing Virtual Machines

  • Allowing a customer to start and stop VMs

[Diagram (“Today”): the cloud customer sends a “Start VM” request over the wide area network to the cloud provider]

SLIDE 27

Managing Virtual Machines

  • Allowing a customer to start and stop VMs

[Diagram (“Today”, build of Slide 26): inside the provider, a cloud manager holding the VM images receives the “Start VM” request and forwards it to one of the servers]

SLIDE 28

Hypervisor’s Role in Management

  • VM management runs as an application in the privileged VM

[Diagram (“Today”): VM management software inside the privileged VM, which runs on the hypervisor atop the physical hardware]

SLIDE 29

Hypervisor’s Role in Management

  • Receive request from the cloud manager

[Diagram: same as Slide 28]

SLIDE 30

Hypervisor’s Role in Management

  • Form a request to the hypervisor

[Diagram: same as Slide 28]

SLIDE 31

Hypervisor’s Role in Management

  • Launch the VM

[Diagram (build of Slide 28): the hypervisor launches Guest VM1]

SLIDE 32

Decouple Management And Operation

  • System manager runs on its own core

[Diagram (NoHype): the system manager runs on Core 0; Core 1 is free for a guest]

SLIDE 33

Decouple Management And Operation

  • System manager runs on its own core
  • Sends an IPI to start/stop a VM

[Diagram (NoHype): the system manager on Core 0 sends an IPI to Core 1]

SLIDE 34

Decouple Management And Operation

  • System manager runs on its own core
  • Sends an IPI to start/stop a VM
  • Core manager sets up core, launches VM

– Not run again until VM is killed

[Diagram (NoHype): the system manager on Core 0 sends an IPI to the core manager on Core 1, which launches Guest VM2 (Apps on OS)]
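Real IPIs are privileged APIC operations, so this can only be suggested from userspace. A loose analogue (all names hypothetical, not the paper's code): a “system manager” thread signals a pinned-down “core manager” thread with pthread_kill, standing in for the IPI; the core manager then launches its guest and returns, mirroring the rule that it is not run again:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    /* "Core manager": waits for the stand-in IPI, then launches the guest. */
    static void *core_manager(void *arg) {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);           /* block until the "IPI" arrives */
        puts("core manager: setting up core, launching guest VM");
        /* In NoHype the core manager is not run again until the VM is
         * killed; here the thread simply returns. */
        return NULL;
    }

    int main(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        /* Block SIGUSR1 so the core manager can claim it with sigwait;
         * the new thread inherits this mask, so the signal cannot be lost. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_t cm;
        pthread_create(&cm, NULL, core_manager, &set);

        puts("system manager: sending IPI to the core manager");
        pthread_kill(cm, SIGUSR1);    /* the stand-in IPI */
        pthread_join(cm, NULL);
        return 0;
    }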

SLIDE 35

Removing the Hypervisor Summary

  • Scheduling virtual machines

– One VM per core

  • Managing memory

– Pre-allocate memory with processor support

  • Emulating I/O devices

– Direct access to virtualized devices

  • Networking

– Utilize hardware Ethernet switches

  • Managing virtual machines

– Decouple the management from operation

SLIDE 36

Security Benefits

  • Confidentiality/Integrity of data
  • Availability
  • Side channels

SLIDE 37

Security Benefits

[Build of Slide 36; same content]

SLIDE 38

Confidentiality/Integrity of Data

Requires access to the data

  • System manager can alter memory access rules

– But guest VMs do not interact with the system manager

  With hypervisor                         NoHype
  Registers upon VM exit                  No scheduling
  Packets sent through software switch    No software switch
  Memory accessible by hypervisor         No hypervisor

SLIDE 39

NoHype Double Meaning

  • Means no hypervisor, also means “no hype”
  • Multi-core processors

– Available now

  • Extended (Nested) Page Tables

– Available now

  • SR-IOV and Directed I/O (VT-d)

– Network cards now; storage devices in the near future

  • Virtual Ethernet Port Aggregator (VEPA)

– Next-generation switches

SLIDE 40

Conclusions and Future Work

  • Trend towards hosted and shared infrastructures
  • Significant security issue threatens adoption
  • NoHype solves this by removing the hypervisor
  • Performance improvement is a side benefit
  • Future work:

– Implement on current hardware
– Assess needs for future processors

SLIDE 41

Questions?

Contact info:
  ekeller@princeton.edu (http://www.princeton.edu/~ekeller)
  szefer@princeton.edu (http://www.princeton.edu/~szefer)
