10 Years of Xen and beyond
Lars Kurth, Xen Project Community Manager
lars.kurth@xen.org | @lars_kurth | FREENODE: lars_kurth
Xen.org becomes XenProject.org. Teams aka sub-projects:
– Hypervisor
– XAPI
– ARM Hypervisor (for servers as well as mobile devices)
– Mirage OS
Governance:
– Consensus decision making
– Sub-project life-cycle (aka incubator)
– PMC-style structure for team leadership
[Chart: share of contributions by employer, 2010-2012. Organisations include Citrix, UPC, SUSE, Amazon, AMD, GridCentric, NSA, Intel, Fujitsu, iWeb, Oracle, Spectralogic, University of British Columbia, other universities and individuals.]
The number of active vendors is increasing, with new participation.
Type 1: Bare-metal Hypervisor
A pure hypervisor that runs directly on the hardware and hosts guest OSes.
Provides partition isolation, reliability and higher security.
[Diagram: the hypervisor (scheduler, MMU, device drivers/models) runs on the host HW (memory, CPUs, I/O) and hosts VM0...VMn, each running a guest OS and apps.]

Type 2: OS-hosted Hypervisor
A hypervisor that runs within a host OS and hosts guest OSes inside of it, using the host OS services to provide the virtual environment.
Low cost, no additional drivers, ease of use and installation.
[Diagram: the host OS contains the device drivers, a ring-0 VM monitor "kernel", a user-level VMM and device models; VM0...VMn run on top, each with a guest OS and apps.]
Xen Architecture
Xen is a type 1 hypervisor, but the hypervisor itself stays small: it contains the scheduler and MMU handling, while the device drivers and device models live in a control domain (Dom0) running Linux or BSD.
[Diagram: the Xen hypervisor (scheduler, MMU) runs on the host HW (memory, CPUs, I/O); the control domain (Dom0) hosts the drivers and device models; VM0...VMn run guest OSes and apps.]
Installing Xen:
– Install a Dom0 Linux distro
– Install the Xen package(s) or meta-package
– Reboot
– Configure: set up disks, peripherals, etc.
More info: wiki.xen.org/wiki/Category:Host_Install
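On a Debian-style system, the steps above might look like this. This is a sketch only: package names vary by distro and release, and `xen-system-amd64` is assumed here as the Debian/Ubuntu meta-package.

```shell
# 1. Start from an installed Linux distro that can act as Dom0.
# 2. Install the Xen meta-package (hypervisor, tools, Dom0 support):
sudo apt-get update
sudo apt-get install xen-system-amd64
# 3. Reboot so GRUB boots the Xen hypervisor with the distro kernel as Dom0:
sudo reboot
# 4. Afterwards, verify Xen is running, then configure disks,
#    peripherals and networking:
sudo xl list        # should show Domain-0
```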
[Diagram: the Xen hypervisor (scheduler, MMU, XSM) runs on the host HW (memory, CPUs, I/O). The control domain (aka Dom0) runs the Dom0 kernel plus the toolstack and owns the console. One or more driver, stub or service domains ("service in a box") run alongside the guest domains VM0...VMn, each with its guest OS and apps. The hypervisor and control domain form the Trusted Computing Base.]
Toolstack Choices
The Xen hypervisor can be driven by several toolstacks, with increasing levels of functionality and integration with other components:
– Default / XL (formerly XM) toolstack and console: single host, basic functions
– Libvirt / virsh: single host, additional functionality
– XAPI / XE: multiple hosts, additional functionality
Products built on these toolstacks include Oracle VM, Huawei UVP and Citrix XenServer.
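To make the comparison concrete, here is the same basic task (listing and starting a guest) in each toolstack. The commands exist in the respective tools, but exact flags and guest names are placeholders:

```shell
# Default toolstack (xl, successor to xm): single host, basic functions
xl list                        # list running domains
xl create /etc/xen/guest.cfg   # start a guest from a config file

# libvirt: single host, integration with the libvirt ecosystem
virsh --connect xen:/// list
virsh --connect xen:/// start guest

# XAPI: multiple hosts, pool-level functionality
xe vm-list
xe vm-start vm=guest
```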
Paravirtualization (PV)
Technology: the guest kernel uses PV front-end drivers, which connect to PV back ends in the control domain (Dom0); Dom0 hosts the HW drivers.
Advantages: works even without virt extensions.
Linux PV guests have some limitations.
[Diagram: Xen hypervisor on host HW (memory, CPUs, I/O); Dom0 kernel with HW drivers and PV back ends; guest VMn running a guest OS with PV front ends and apps.]

Driver Domains
The HW driver and its PV back end can be moved out of Dom0 into a dedicated driver domain, whose kernel can be a Dom0-class kernel or MiniOS.
[Diagram: as above, with a driver domain (e.g. for networking) containing a HW driver and PV back end.]
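Wiring up a network driver domain can be sketched with xl as below. The `domnet` domain name and device paths are hypothetical; `driver_domain` and the `backend=` option on `vif` are the relevant xl config knobs:

```shell
# In the driver domain's config, pass through the physical NIC
# and mark the domain as a driver domain (Xen 4.3+):
#   pci = ['01:00.0']        # hypothetical PCI address of the NIC
#   driver_domain = 1
xl create /etc/xen/domnet.cfg

# In a guest's config, point its PV network front end at the
# back end running in the driver domain instead of Dom0:
#   vif = ['bridge=xenbr0,backend=domnet']
xl create /etc/xen/guest.cfg
```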
Full Virtualization and Stub Domains
Technology: a device model (software virtualization) running in Dom0 emulates hardware. An I/O access in the guest causes a VMEXIT; the I/O event is delivered to the device model, which performs the I/O emulation.
Advantages: runs unmodified guests.
Disadvantages: emulation overhead (mainly for I/O devices).
[Diagram: Xen hypervisor on host HW (memory, CPUs, I/O); Dom0 kernel hosting the device model; I/O in guest VMn traps via VMEXIT and is forwarded to the device model.]

Stub Domains
The device model can be moved out of Dom0 into a per-guest stub domain (StubdomN) based on Mini-OS, so that the emulation runs isolated from the control domain.
[Diagram: as above, with the device model running in a Mini-OS stub domain paired with each guest VMn.]
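Moving QEMU into a stub domain is a one-line change in the guest's xl config. The `device_model_stubdomain_override` option exists in xl for HVM guests; the rest of the file is a hypothetical minimal example:

```
# /etc/xen/hvm-guest.cfg -- hypothetical minimal HVM guest config
builder = "hvm"
name    = "hvm-guest"
memory  = 1024
disk    = ['phy:/dev/vg0/hvm-guest,xvda,w']
# Run the device model (QEMU) in a Mini-OS stub domain
# instead of as a Dom0 process:
device_model_stubdomain_override = 1
```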
Virtualization Spectrum (Xen 4.4)

Mode                         Disk & network | Interrupts & timers | Emulated motherboard | Privileged instructions & page tables
Fully Virtualized (FV)            VS               VS                    VS                    VH
FV with PV disk & network         P                VS                    VS                    VH
PVHVM                             P                P                     VS                    VH
PVH                               P                P                     P                     VH
Fully Paravirtualized (PV)        P                P                     P                     P

Legend: P = paravirtualized, VS = virtualized in software, VH = virtualized in hardware. FV, FV with PV drivers and PVHVM run as HVM-mode domains; PVH (new in Xen 4.4) and fully PV guests run as PV-mode domains.
Performance: fully virtualized is poor; PVHVM and PVH are optimal; fully PV has scope for improvement.

Important: Xen automatically picks the best available drivers. As a Xen user, you choose an HVM or PV domain.
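That choice is made in the guest's xl config. A minimal sketch, assuming Xen 4.x xl syntax; paths and names are hypothetical:

```
# PV guest (the default when no builder is specified):
name    = "pv-guest"
memory  = 512
kernel  = "/boot/vmlinuz-xen"      # hypothetical PV-capable kernel
ramdisk = "/boot/initrd-xen"
disk    = ['phy:/dev/vg0/pv-guest,xvda,w']
extra   = "root=/dev/xvda1"

# HVM guest: add builder = "hvm"; the guest boots via emulated
# firmware, and modern kernels pick up PVHVM drivers automatically:
#   builder = "hvm"
#   disk    = ['phy:/dev/vg0/hvm-guest,xvda,w']
```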
XAPI / XE: Multiple Hosts, Additional Functionality
– Migration without shared storage (while the VM is running)
More info: wiki.xen.org/wiki/XCP_Release_Features

XCP delivery vehicles:
– XCP ISO (at v1.6): Xen 4.1.3 + XAPI, CentOS 5.3, kernel v2.6.32.43, OVS 1.4.2
– XCP-XAPI packages: Debian Wheezy, Ubuntu 12.04 LTS, others in progress …

More info: www.colt.net/cio-research
Results of the XCP User Survey 2012: 90% of users quoted these as the most important attributes.
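For flavour, here is what pool-level management looks like with the xe CLI on an XCP/XAPI host. The commands exist in xe; addresses, credentials and VM/host names are placeholders:

```shell
# Join this host to an existing resource pool:
xe pool-join master-address=10.0.0.1 master-username=root master-password=secret

# List VMs across the pool and start one by name:
xe vm-list
xe vm-start vm=my-guest

# Live-migrate a running VM to another host in the pool:
xe vm-migrate vm=my-guest host=host2 live=true
```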
Dom0 Disaggregation
Split the control domain into driver, stub and service domains:
– See: "Breaking up is hard to do" @ Xen Papers
– See: "Domain 0 Disaggregation for XCP and XenServer"
Used today by Qubes OS and Citrix XenClient XT; prototypes exist for XAPI. See qubes-os.org: different windows run in different VMs.
Benefits: more security, increased serviceability and flexibility, better robustness, better performance, better scalability. Parts of the system can be safely restarted (e.g. just a 275 ms outage from a failed Ethernet driver).
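The "safely restart parts of the system" benefit can be sketched with xl: a failed driver domain is simply destroyed and recreated while Dom0 and the guests keep running. The `domnet` name is hypothetical:

```shell
# The network driver domain has hung or crashed; destroy it:
xl destroy domnet
# Recreate it from its config; the guests' PV network front ends
# reconnect to the new back end after a brief outage:
xl create /etc/xen/domnet.cfg
```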
[Diagram: today's monolithic Dom0. Hardware: CPUs, RAM, NICs (or SR-IOV VFs), RAID. A single Dom0 contains xapi, xenopsd, libxl, healthd and the domain manager, plus Qemu instances, network drivers (vswitch, networkd), storage drivers (tapdisk, blktap3, storaged), NFS/iSCSI and local storage drivers, and syslogd. User VMs connect their network and block front ends (NF, BF) to back ends (NB) via gntdev.]
[Diagram: the disaggregated architecture on the same hardware. Dom0 retains only xapi, xenopsd, libxl, healthd and the domain manager; networking, NFS/iSCSI and local storage move into dedicated driver domains, Qemu into per-guest Qemu domains, and logging into a logging domain, all communicating via dbus over v4v.]
– Well-defined Trusted Computing Base (much smaller than on a type 2 hypervisor)
– Minimal services in the hypervisor layer
XSM and FLASK:
– XSM is the Xen equivalent of LSM
– FLASK is the Xen equivalent of SELinux
– Developed, maintained and contributed to Xen by the NSA
– Compatible with SELinux (tools, architecture)
– XSM object classes map onto Xen features
More info: http://www.slideshare.net/xen_com_mgr/a-brief-tutorial-on-xens-advanced-security-features
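With an XSM/FLASK-enabled hypervisor, policy is loaded and domains are labelled roughly as follows. The `xl loadpolicy`, `getenforce`/`setenforce` subcommands and the `seclabel` config option exist; the policy file name and label string are illustrative:

```shell
# Load a compiled FLASK policy into the running hypervisor:
xl loadpolicy /boot/xenpolicy.24

# Label a guest in its xl config so the policy applies to it:
#   seclabel = 'system_u:system_r:domU_t'
xl create /etc/xen/guest.cfg

# Query or set enforcement mode (permissive vs enforcing):
xl getenforce
xl setenforce 1
```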
[Diagram: the disaggregated architecture as above, with a FLASK policy restricting access between the domains.]
Xen on ARM
ARM architecture features for virtualization (on an ARM SoC):
– Hypervisor mode (EL2), kernel mode (EL1), user mode (EL0)
– GIC v2 interrupt controller, generic timers, 2-stage MMU
– Device Tree describes the hardware
– Hypercall interface via the HVC instruction
The Xen hypervisor runs in EL2. Any Xen guest VM (including Dom0) runs its kernel in EL1 and its user space in EL0, and calls into Xen via HVC. Dom0 owns the I/O devices and provides PV back ends; other guests use PV front ends.
The same spectrum on ARM:

Mode           Disk & network | Interrupts & timers | Emulated motherboard | Privileged instructions & page tables
x86: PVHVM          P                P                     VS                    VH
x86: PVH            P                P                     P                     VH
ARM v7 & v8         P                VH                    VH                    VH

Legend: P = paravirtualized, VS = virtualized in software, VH = virtualized in hardware.
Codebase comparison:
– x86 hypervisor: 100K-120K LOC; any x86 CPU
– ARM hypervisor for mobile devices: 60K LOC; ARM v5-v7 (no virt extensions); extra code for RT
– ARM hypervisor for servers: 17K LOC; ARM v7+ (virt extensions)
Application stacks that run only on Xen APIs; they work on any Xen-based cloud or hosting service. Examples:
– ErlangOnXen.org: Erlang
– HalVM: Haskell
– Mirage OS: OCaml
Benefits:
– Small footprint
– Low startup latency
– Extremely fast migration of VMs
[Diagram: a library OS embedded in the language run-time runs as a guest VM directly on Xen, alongside Dom0 with its HW drivers and PV back ends.]
Mirage OS provides:
– TCP/IP, DNS, SSH, OpenFlow (switch/controller), HTTP, XMPP, ...
– New applications using the next-generation XAPI (disaggregated XAPI architecture)
More info: http://www.slideshare.net/xen_com_mgr/mirage-extreme-specialisation-of-virtual-appliances
Looking ahead:
– Scalability, performance, better NUMA support, …
More info: blog.xen.org/index.php/2013/02/11/xen-4-3-mid-release-roadmap-update
– Most major contributors are duplicating effort
– Mirage OS provides interesting opportunities
– Example: Xen + XAPI in CentOS 6.4
– Examples: OpenStack and Xen Orchestra
– Embed Xen more into the Linux ecosystem and provide benefits for the wider Linux community
Xen Hackathon, May 16-17, Dublin, Ireland @Google
Slides available under CC-BY-SA 3.0 from www.slideshare.net/xen_com_mgr
@lars_kurth FREENODE: lars_kurth
– Help via IRC, mailing lists, …
– Stack Overflow-style Q&A