SLIDE 1

Xen

past, present and future

Stefano Stabellini

SLIDE 2

Xen architecture: PV domains

SLIDE 3

Xen architecture: driver domains

SLIDE 4

Xen: advantages

  • small attack surface
  • isolation
  • resilience
  • specialized algorithms (scheduler)
SLIDE 5

Xen and the Linux kernel

Xen was initially a university research project:

  • invasive changes to the kernel to run Linux as a PV guest
  • even more changes to run Linux as dom0

SLIDE 6

Xen and the Linux kernel

  • Xen support in the Linux kernel was not upstream
  • great maintenance effort on distributions
  • risk of distributions dropping Xen support

SLIDE 7

Xen and the Linux kernel

  • PV support went in Linux 2.6.26
  • basic Dom0 support went in Linux 2.6.37
  • Netback went in Linux 2.6.39
  • Blkback went in Linux 3.0.0

A single Linux 3.0.0 kernel image boots on native, on Xen as domU, as dom0, and as a PV on HVM guest.
SLIDE 8

Xen and Linux distributions

2010

  • Fedora and Ubuntu dropped Xen support from their Linux kernels

  • Debian, SUSE, Gentoo still provide Xen kernels
  • XenServer went Open Source with XCP

Present

  • Fedora and Ubuntu are adding Xen support back into their kernels in the next releases

SLIDE 9

Xen architecture: HVM domains

SLIDE 10

Xen architecture: stubdoms

SLIDE 11

Xen and Qemu

  • initially forked in 2005
  • updated once every few releases
  • Xen support went in upstream Qemu at the beginning of 2011
  • upstream Qemu is going to be used as the device model with Xen 4.2

SLIDE 12

New developments: Libxenlight

Multiple toolstacks:

  • Xend, Xapi, XenVM, LibVirt, …
  • code duplications, inefficiencies, bugs, wasted efforts

Xend:

  • difficult to understand, modify and extend
  • significant memory footprint
SLIDE 13

Libxenlight

What is Libxenlight:

  • a small lower level library in C
  • simple to understand
  • easy to modify and extend

Goals:

  • provide a simple and robust API for toolstacks
  • create a common codebase to do Xen operations (see the sketch below)
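The goals above are easiest to see in code. Below is a minimal sketch, not taken from the slides, of a toolstack-side C program that uses libxenlight to list the running domains; the function names and signatures are assumptions based on the libxl.h shipped in the Xen 4.x era (the logger argument is left NULL for brevity), so details may differ between Xen versions.

    /* Minimal libxenlight client: list running domains.
     * Sketch only; libxl names/signatures are assumed from the
     * Xen 4.x headers. Build with: gcc list.c -lxenlight */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libxl.h>

    int main(void)
    {
        libxl_ctx *ctx = NULL;
        libxl_dominfo *info;
        int nb_domain, i;

        /* Allocate the libxl context that every call takes. */
        if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, NULL)) {
            fprintf(stderr, "cannot allocate libxl context\n");
            return EXIT_FAILURE;
        }

        /* Ask Xen (through libxl) for the list of domains. */
        info = libxl_list_domain(ctx, &nb_domain);
        if (!info) {
            fprintf(stderr, "libxl_list_domain failed\n");
            libxl_ctx_free(ctx);
            return EXIT_FAILURE;
        }

        for (i = 0; i < nb_domain; i++)
            printf("domid %u: %llu kB\n", (unsigned)info[i].domid,
                   (unsigned long long)info[i].current_memkb);

        libxl_dominfo_list_free(info, nb_domain);
        libxl_ctx_free(ctx);   /* nothing persists after exit */
        return EXIT_SUCCESS;
    }

Each invocation of xl follows essentially this pattern: allocate a context, perform one operation through libxenlight, free everything and exit.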
SLIDE 14

XL

  • the unit testing tool for libxenlight
  • feature complete
  • a minimal toolstack
  • compatible with xm

Do more with less!

SLIDE 15

XL: design principles

  • smallest possible toolstack on top of libxenlight
  • stateless

CLI → XL → libxenlight → EXIT

SLIDE 16

XL vs. Xend

XL: pros

  • very small and easy to read
  • well tested
  • compatible with xm

Xend: pros

  • provides an XML-RPC interface
  • provides “managed domains”
SLIDE 17

Libxenlight: the new world

SLIDE 18

Linux PV on HVM

paravirtualized interfaces in HVM guests

SLIDE 19

Linux as a guest: problems

Linux PV guests have limitations:

  • difficult (“different”) to install
  • limited set of virtual hardware

Linux HVM guests:

  • install the same way as native
  • very slow
SLIDE 20

Linux PV on HVM: the solution

  • install the same way as native
  • PC-like hardware
  • access to fast paravirtualized devices
  • exploit nested paging
SLIDE 21

Linux PV on HVM: initial feats

Initial version in Linux 2.6.36:

  • introduce the xen platform device driver
  • add support for HVM hypercalls, xenbus and grant table (see the detection sketch after this list)

  • enables blkfront, netfront and PV timers
  • add support for PV suspend/resume
  • the vector callback mechanism
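Before any of the features above can be used, the kernel has to discover that it is running on Xen at all. The sketch below, not part of the slides, shows the usual detection step as a small userspace program: scan the hypervisor CPUID leaves starting at 0x40000000 for the "XenVMMXenVMM" signature, the same check Linux's xen_cpuid_base() performs at boot; the leaf layout follows Xen's public interface, but treat the details as assumptions (x86 only).

    /* Detect Xen from inside an HVM guest via the hypervisor CPUID
     * leaves, mirroring what the Linux PV-on-HVM code does at boot.
     * Sketch only; x86 specific. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                      uint32_t *c, uint32_t *d)
    {
        __asm__ __volatile__("cpuid"
                             : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                             : "a"(leaf), "c"(0));
    }

    /* Xen advertises itself every 0x100 leaves starting at 0x40000000;
     * return the base leaf, or 0 if the signature is not found. */
    static uint32_t xen_cpuid_base(void)
    {
        uint32_t base, eax, ebx, ecx, edx;
        char sig[13];

        for (base = 0x40000000; base < 0x40010000; base += 0x100) {
            cpuid(base, &eax, &ebx, &ecx, &edx);
            memcpy(sig + 0, &ebx, 4);
            memcpy(sig + 4, &ecx, 4);
            memcpy(sig + 8, &edx, 4);
            sig[12] = '\0';
            /* eax is the highest leaf in this group; base+2 carries the
             * hypercall-page information the guest needs next. */
            if (!strcmp(sig, "XenVMMXenVMM") && eax >= base + 2)
                return base;
        }
        return 0;
    }

    int main(void)
    {
        uint32_t eax, ebx, ecx, edx, base = xen_cpuid_base();

        if (!base) {
            puts("not running on Xen");
            return 0;
        }
        cpuid(base + 1, &eax, &ebx, &ecx, &edx);   /* Xen version */
        printf("Xen %u.%u detected (CPUID base 0x%x)\n",
               eax >> 16, eax & 0xffff, base);
        return 0;
    }

In the kernel the next steps are to install the hypercall page advertised by leaf base+2, connect xenbus and the grant tables, and register the vector callback described on the following slides.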
SLIDE 22

Old style event injection

SLIDE 23

Receiving an interrupt

do_IRQ → handle_fasteoi_irq → handle_irq_event → xen_evtchn_do_upcall → ack_apic_level   (≥ 3 VMEXITs)

SLIDE 24

The new vector callback

SLIDE 25

Receiving a vector callback

xen_evtchn_do_upcall
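To get this short path, the guest must first ask Xen to deliver event-channel notifications as a plain interrupt vector; otherwise the old-style injection through the emulated interrupt controller is used. The fragment below is a hedged sketch of that registration, modeled on the Linux PV-on-HVM code: the HVM_PARAM_CALLBACK_IRQ parameter is set through the HVMOP_set_param hypercall to a value whose top byte selects "vector" delivery and whose low bits carry the vector number. The constant values and the bit layout are assumptions taken from the Xen public headers, and the hypercall itself is mocked so the sketch stays self-contained.

    /* Sketch: registering the Xen vector callback from an HVM guest,
     * modeled on the Linux implementation. Constants follow Xen's
     * public headers (hvm/params.h, hvm/hvm_op.h) but are assumptions
     * here. */
    #include <stdint.h>
    #include <stdio.h>

    #define HVMOP_set_param                 0
    #define HVM_PARAM_CALLBACK_IRQ          0
    #define DOMID_SELF                      0x7FF0U

    /* HVM_PARAM_CALLBACK_IRQ value: bits 63:56 are the delivery type;
     * type 2 means "deliver as the interrupt vector in bits 7:0". */
    #define HVM_PARAM_CALLBACK_TYPE_VECTOR  2ULL

    struct xen_hvm_param {
        uint16_t domid;     /* DOMID_SELF: the calling guest itself */
        uint32_t index;     /* which parameter to set               */
        uint64_t value;
    };

    /* In a real guest this is the hypercall stub reached through the
     * hypercall page; mocked here so the example runs anywhere. */
    static long HYPERVISOR_hvm_op(int cmd, struct xen_hvm_param *p)
    {
        printf("hvm_op(%d): param %u <- 0x%016llx\n",
               cmd, p->index, (unsigned long long)p->value);
        return 0;
    }

    /* Ask Xen to deliver event-channel notifications as a plain vector
     * instead of going through the emulated IOAPIC/LAPIC. */
    static long xen_enable_vector_callback(uint8_t vector)
    {
        struct xen_hvm_param p = {
            .domid = DOMID_SELF,
            .index = HVM_PARAM_CALLBACK_IRQ,
            .value = (HVM_PARAM_CALLBACK_TYPE_VECTOR << 56) | vector,
        };
        return HYPERVISOR_hvm_op(HVMOP_set_param, &p);
    }

    int main(void)
    {
        /* The vector number is illustrative; Linux reserves a dedicated
         * HYPERVISOR_CALLBACK_VECTOR for this purpose. */
        return (int)xen_enable_vector_callback(0xf3);
    }

Once registered, the hypervisor raises that vector directly in the guest and the handler calls xen_evtchn_do_upcall, avoiding the longer emulated-interrupt path shown two slides earlier.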

SLIDE 26

Linux PV on HVM: newer feats

Later enhancements (2.6.37+):

  • ballooning
  • PV spinlocks
  • PV IPIs
  • Interrupt remapping onto event channels
  • MSI remapping onto event channels
SLIDE 27

Interrupt remapping

SLIDE 28

MSI remapping

SLIDE 29

PV spectrum

The spectrum, from full emulation to full paravirtualization:
HVM guests → Classic PV on HVM → Enhanced PV on HVM → Hybrid PV on HVM → PV guests

                        HVM guests   Classic PV on HVM   Enhanced PV on HVM   PV guests
boot sequence           emulated     emulated            emulated             paravirtualized
memory                  hardware     hardware            hardware             paravirtualized
interrupts              emulated     emulated            paravirtualized      paravirtualized
timers                  emulated     emulated            paravirtualized      paravirtualized
spinlocks               emulated     emulated            paravirtualized      paravirtualized
disk                    emulated     paravirtualized     paravirtualized      paravirtualized
network                 emulated     paravirtualized     paravirtualized      paravirtualized
privileged operations   hardware     hardware            hardware             paravirtualized

SLIDE 30

Benchmarks: the setup

Hardware setup:

  • Dell PowerEdge R710
  • CPU: dual Intel Xeon E5520 quad-core @ 2.27GHz
  • RAM: 22GB

Software setup:

  • Xen 4.1, 64 bit
  • Dom0: Linux 2.6.32, 64 bit
  • DomU: Linux 3.0-rc4, 8GB of memory, 8 vcpus

SLIDE 31

PCI passthrough: benchmark

PCI passthrough of an Intel Gigabit NIC. CPU usage, the lower the better:

[bar chart: CPU usage in domU and dom0, with and without interrupt remapping]

SLIDE 32

Kernbench

Results: percentage of native, the lower the better

[bar chart: PV on HVM 64/32 bit, HVM 64/32 bit, and PV 64/32 bit, as a percentage of native]

SLIDE 33

PBZIP2

Results: percentage of native, the lower the better

[bar chart: PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, and PV 32 bit, as a percentage of native]

SLIDE 34

SPECjbb2005

[bar chart: PV 64 bit and PV on HVM 64 bit, as a percentage of native]

Results: percentage of native, the higher the better

SLIDE 35

Iperf tcp

Results: Gbit/s, the higher the better

[bar chart: PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, and HVM 32 bit, in Gbit/s]

SLIDE 36

Conclusions

  • PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs
  • PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging

SLIDE 37

Questions?