SLIDE 1

Progressive paravirtualization

Keir Fraser, XenSource


SLIDE 2

HVM Architecture

[Architecture diagram: Domain 0 runs Linux xen64 with the control panel (xm/xend), native device drivers, backend virtual drivers, and the device models that provide the guest BIOS. Domain N runs Linux xen64 with frontend virtual drivers. The 32-bit and 64-bit HVM guest VMs each run an unmodified OS with a guest BIOS on a virtual platform. The Xen hypervisor provides emulated I/O (PIT, APIC, PIC, IOAPIC), processor and memory management, the control interface, hypercalls, event channels, and the scheduler. Guests trap into Xen via VMExit; control returns via callback/hypercall, and Domain 0 is notified via event channels. Ring annotations on the diagram: 0D, 0P, 1/3P, 3P, 3D.]

SLIDE 3

Progressive paravirtualization

Hypercall API is available to HVM guests
Selectively add PV extensions to optimize:

  • Net and block I/O
  • XenPIC (event channels)
  • MMU operations
    • multicast TLB flush
    • PTE updates (faster than taking a page fault)
  • Time
  • CPU and memory hotplug
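As one example of such an extension, batched PTE updates can be sketched in C, assuming the mmu_update request layout from Xen's public interface (a ptr/val pair, with the update type encoded in the low bits of ptr). The hypercall itself is replaced by a stub so the sketch is self-contained; helper names are illustrative, not Xen's own.

```c
#include <stdint.h>

/* Shape of a Xen mmu_update request (see xen/include/public/xen.h):
 * 'ptr' is the machine address of the PTE, with the update type in
 * its low bits; 'val' is the new PTE contents. */
#define MMU_NORMAL_PT_UPDATE 0

struct mmu_update {
    uint64_t ptr;
    uint64_t val;
};

/* Stub standing in for the real mmu_update hypercall; a guest would
 * transfer control through the hypercall page instead. */
static int hypervisor_mmu_update(struct mmu_update *req, unsigned count,
                                 unsigned *done)
{
    (void)req;
    *done = count;              /* pretend every update succeeded */
    return 0;
}

/* Queue PTE writes and flush the whole batch with one hypercall,
 * instead of taking a trap per individual update. */
#define BATCH 16
static struct mmu_update queue[BATCH];
static unsigned queued;

static int queue_pte_update(uint64_t pte_machine_addr, uint64_t new_val)
{
    queue[queued].ptr = pte_machine_addr | MMU_NORMAL_PT_UPDATE;
    queue[queued].val = new_val;
    if (++queued == BATCH) {
        unsigned done;
        int rc = hypervisor_mmu_update(queue, queued, &done);
        queued = 0;
        return rc;
    }
    return 0;
}
```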
SLIDE 4

PV Drivers

[Architecture diagram: same layout as the HVM Architecture slide, with frontend (FE) virtual drivers added inside the 32-bit and 64-bit HVM guests, connected through event channels to the backend virtual drivers in Domain 0.]

SLIDE 5

Hypercalls

HVM guest can detect hypervisor platform via CPUID instruction

  • New hypervisor leaves at 0x40000000
  • Look for signature ‘XenVMMXenVMM’
  • Space for future expansion and feature flags

Hypercall page is filled in by writing its address to a special MSR

  • Location determined via CPUID
  • Currently always MSR 0x40000000

Hypercall page hides low-level details of transferring control to the VMM
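The detection sequence above can be sketched in C, assuming a GCC-style compiler on x86: CPUID leaf 0x40000000 returns the maximum hypervisor leaf in EAX and a 12-byte vendor signature in EBX:ECX:EDX, which for Xen spells ‘XenVMMXenVMM’. The helper names are illustrative, not Xen's own; the follow-on MSR write is noted in a comment since it can only run inside a guest.

```c
#include <stdint.h>
#include <string.h>

/* Check whether the 12 signature bytes returned in EBX, ECX, EDX
 * from CPUID leaf 0x40000000 spell "XenVMMXenVMM". */
static int is_xen_signature(uint32_t ebx, uint32_t ecx, uint32_t edx)
{
    char sig[12];
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    return memcmp(sig, "XenVMMXenVMM", 12) == 0;
}

#if defined(__x86_64__) || defined(__i386__)
static void cpuid(uint32_t leaf,
                  uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
    __asm__ volatile("cpuid"
                     : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                     : "0"(leaf));
}

static int running_on_xen(void)
{
    uint32_t eax, ebx, ecx, edx;
    cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
    return is_xen_signature(ebx, ecx, edx);
}
#endif

/* After a successful probe, a real guest would allocate a page and
 * write its address to the MSR reported by CPUID (currently always
 * MSR 0x40000000); Xen then fills the page with hypercall stubs. */
```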

SLIDE 6

Building PV drivers for HVM

 PV drivers depend on architectural features of Xen

  • Grant tables for memory sharing
  • Event channels for asynchronous notifications

 Support is encapsulated in a ‘platform driver’

  • Ioemu defines a dummy PCI device that triggers loading of the platform driver in the HVM guest

 Xenbus, blkfront, and netfront can be built as separate modules against a native Linux build

  • See unmodified_drivers/linux-2.6 in the xen-unstable tree
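As an illustration of the trigger mechanism, a guest-side check for the dummy device might look like the following C sketch. The vendor/device IDs are those used by the xen-platform PCI device (vendor 0x5853, device 0x0001); the type and function names are illustrative, not taken from the actual driver.

```c
#include <stdint.h>

/* IDs exposed by ioemu's dummy PCI device; matching them is what
 * triggers loading of the platform driver in the HVM guest. */
#define XEN_PLATFORM_VENDOR 0x5853  /* "XS" */
#define XEN_PLATFORM_DEVICE 0x0001

struct pci_id {
    uint16_t vendor;
    uint16_t device;
};

static int is_xen_platform_device(struct pci_id id)
{
    return id.vendor == XEN_PLATFORM_VENDOR &&
           id.device == XEN_PLATFORM_DEVICE;
}
```

In a real Linux module this match would sit in a pci_device_id table, and the driver's probe routine would then set up grant tables and event channels, after which xenbus, blkfront, and netfront can attach.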
SLIDE 7

Xen support for PV-on-HVM

 Event-channel notifications cause an interrupt to be delivered via the virtual APIC on a pre-registered vector

  • The ‘platform driver’ registers itself on that IRQ and demuxes pending events to the registered drivers
  • Future: one IRQ per device or per VCPU

 Virtual CPUID leaves and MSR addresses allow hypervisor detection and hypercall setup

 Hypercalls are being incrementally extended to support HVM guests

  • This also requires support for 32-bit guests on a 64-bit hypervisor: the 32-bit and 64-bit ABIs are different
  • This work overlaps strongly with PAE-on-64 PV guest support
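The demux step can be sketched in C, assuming a single 64-bit pending bitmap standing in for the shared-info evtchn_pending array. All names are illustrative, and in a real guest Xen sets the pending bits rather than raise_evtchn().

```c
#include <stdint.h>
#include <stddef.h>

#define NR_EVENT_CHANNELS 64

typedef void (*evtchn_handler_t)(unsigned port);

static evtchn_handler_t handlers[NR_EVENT_CHANNELS];
static uint64_t pending;            /* one bit per event channel */

static void bind_evtchn(unsigned port, evtchn_handler_t fn)
{
    handlers[port] = fn;
}

/* Stands in for Xen marking an event channel pending. */
static void raise_evtchn(unsigned port)
{
    pending |= (uint64_t)1 << port;
}

/* Called from the platform driver's IRQ handler: scan the pending
 * bitmap, clear each set bit, and dispatch to its bound handler. */
static void demux_pending_events(void)
{
    while (pending) {
        unsigned port = (unsigned)__builtin_ctzll(pending);
        pending &= pending - 1;     /* clear lowest set bit */
        if (handlers[port])
            handlers[port](port);
    }
}

/* Example handler that just counts deliveries. */
static unsigned events_seen;
static void count_event(unsigned port)
{
    (void)port;
    events_seen++;
}
```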
SLIDE 8

PV Driver performance

[Bar chart: receive (rx) and transmit (tx) throughput in Mb/s, scale 100–1000, for ioemu, PV-on-HVM, and PV drivers]

Measured with ttcp, 1500 byte MTU