Virtualization in the Cloud: Featuring Xen and XCP
Lars Kurth, Xen Community Manager
lars.kurth@xen.org | FREENODE: lars_kurth | @lars_kurth
A Brief History of Xen in the Cloud
– Late 90s: XenoServer Project
– ’03: Xen 1.0
– ’06: Amazon EC2 and Slicehost launched
– ’08: Rackspace Cloud
– ’11: Linux 3.0; XCP 1.x (cloud management)
– ’12: XCP packages in Linux
– ’13: Xen for ARM servers; Xen’s 10th birthday
Sub-projects, each with its own project lifecycle and Project Management Committee (PMC):
– Xen Hypervisor (led by 5 committers: 2 from Citrix, 1 from SUSE, 2 independent)
– Xen Cloud Platform aka XCP (led by Citrix)
– Xen ARM: Xen for mobile devices (led by Samsung)
[Chart: share of contributions by affiliation, 2010–2012 – Citrix, UPC, SUSE, Amazon, AMD, GridCentric, NSA, Intel, Fujitsu, iWeb, Oracle, Spectralogic, universities (incl. University of British Columbia), individuals and others]
The number of active vendors is increasing, and new participants keep joining.
Type 1: Bare-Metal Hypervisor
A pure hypervisor that runs directly on the hardware and hosts guest OSes.
Provides partition isolation, reliability and higher security.
[Diagram: Host HW (Memory, CPUs, I/O) → Hypervisor (Scheduler, MMU, Device Drivers/Models) → VM0 … VMn (Guest OS and Apps)]

Type 2: OS-Hosted Hypervisor
A hypervisor that runs within a host OS and hosts guest OSes inside it, using the host OS’s services to provide the virtual environment.
Low cost, no additional drivers, ease of use & installation.
[Diagram: Host HW (Memory, CPUs, I/O) → Host OS (Device Drivers, Ring-0 VM Monitor “Kernel”) → VM0 … VMn (Guest OS and Apps, User Apps, User-level VMM, Device Models)]
Xen Architecture
Xen is a Type 1 hypervisor, but the hypervisor itself stays minimal (Scheduler, MMU): device drivers and device models are moved out of the hypervisor into a control domain (dom0) running Linux or BSD.
[Diagram: Host HW (Memory, CPUs, I/O) → Xen Hypervisor (Scheduler, MMU) → Control domain (dom0) with Drivers and Device Models (Linux & BSD) + VM0 … VMn (Guest OS and Apps)]
Getting started on a single host:
– Install a Dom0 Linux distro
– Install the Xen package(s) or meta-package
– Reboot
– Configure: set up disks, peripherals, etc.
More info: wiki.xen.org/wiki/Category:Host_Install
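On a Debian-flavoured Dom0 these steps reduce to a few commands. This is only a sketch: the `xen-system-amd64` meta-package name is Debian-specific and may vary by distro, and the commands are merely echoed unless RUN=1 is set.

```shell
# Sketch: installing Xen on a Debian/Ubuntu Dom0. Package name is
# illustrative for Debian; check your distro's docs. Set RUN=1 to execute.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run apt-get update
run apt-get install xen-system-amd64   # meta-package: hypervisor + tools
run reboot                             # boots the Xen entry added to GRUB
run xl info                            # after reboot: verify Xen is running
```

The `run` wrapper is just a dry-run guard for illustration; on a real host you would run the commands directly as root.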
[Diagram – Xen architecture in full: Host HW (Memory, CPUs, I/O) → Xen Hypervisor (Scheduler, MMU, XSM) → Control Domain aka Dom0 (Dom0 kernel, toolstack, console), one or more Driver/Stub/Service domains (“service in a box”), and Guest Domains VM0 … VMn (Guest OS and Apps). The hypervisor and Dom0 form the Trusted Computing Base.]
Choice of toolstacks, with increasing level of functionality and integration with other components (left to right):
– Default toolstack / xl (formerly xm) console: single host, basic functions; get binaries from Linux distros
– Libvirt / virsh: single host, additional functionality; get binaries from Linux distros
– XAPI / xe (the XCP project): multiple hosts, additional functionality; get binaries as packages for Debian & Ubuntu, or as an ISO from xen.org
Products built on these: Oracle VM and Huawei UVP (Xen hypervisor), Citrix XenServer (XCP/XAPI).
More info: xen.org/community/ecosystem.html – xen.org/community/presentations.html – xen.org/products/case_studies.html
Paravirtualization (PV)
Technology: the guest OS runs PV front-end drivers that connect to PV back ends in the Dom0 kernel, which owns the hardware drivers.
Advantages: works even without virt extensions.
Linux PV guests have limitations:
[Diagram: Guest VMn (Apps, Guest OS with PV Front Ends) ↔ Control domain dom0 (Dom0 Kernel with HW Drivers and PV Back Ends) on the Xen Hypervisor over Host HW (Memory, CPUs, I/O)]

Driver Domains
A hardware driver and its PV back end can be moved out of Dom0 into a dedicated driver domain; the driver domain’s kernel can be a full OS kernel or MiniOS.
[Diagram: as above, but e.g. the network HW driver and PV back end live in a separate driver domain rather than in Dom0]
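Pointing a guest at a driver domain is a one-line change in its configuration. A sketch, assuming xl’s `backend=` vif key and a hypothetical driver domain named `netdd`:

```shell
# Sketch: attach a guest NIC to a network driver domain instead of Dom0.
# "netdd" is an illustrative driver-domain name; backend= is the xl vif key.
cat > vif-fragment.cfg <<'EOF'
vif = ['bridge=xenbr0,backend=netdd']
EOF
cat vif-fragment.cfg
```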
Full Virtualization (HVM)
Technology: a device model in Dom0 emulates hardware (SW virtualization). Guest I/O causes a VMEXIT; the resulting I/O event is forwarded to the device model, which performs the I/O emulation.
Advantages:
Disadvantages: emulation overhead (mainly I/O devices)
[Diagram: Guest VMn → VMEXIT → IO event → Device Model in the Dom0 kernel → IO emulation]

Stub Domains
The device model can be moved out of Dom0 into a per-guest stub domain running MiniOS, so each guest’s I/O emulation runs isolated in its own domain.
[Diagram: Guest VMn → VMEXIT → IO event → Device Model in Stubdomn (MiniOS) → IO emulation]
The Virtualization Spectrum (as of Xen 4.3)
Legend: P = paravirtualized, VS = virtualized in software, VH = virtualized in hardware.
– Fully Virtualized (FV), HVM domain:                VS VS VS VH
– FV with PV drivers for disk & network, HVM domain: P  VS VS VH
– PVHVM, HVM domain:                                 P  P  VS VH
– PVH (new in Xen 4.3), PV domain:                   P  P  P  VH
– Fully Paravirtualized (PV), PV domain:             P  P  P  P
Pure FV performs poorly; PVHVM and PVH give optimal performance; the remaining modes leave scope for improvement.
Important: Xen automatically picks the best available drivers; as a user you only choose an HVM or a PV domain.
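That single user-visible choice shows up as one setting in the guest config. A sketch using the Xen 4.x `builder=` key (all other values illustrative):

```shell
# Sketch: the domain type is chosen in the guest config.
cat > hvm-demo.cfg <<'EOF'
builder = "hvm"                       # HVM domain; omit for a PV domain
memory  = 2048
disk    = ['phy:/dev/vg0/hvm,hda,w']  # emulated IDE; PV drivers used if present
vif     = ['bridge=xenbr0']
EOF
grep builder hvm-demo.cfg
```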
XCP: the Xen Cloud Platform
– Complete stack for server virtualization
– Adds functionality for cloud, storage and networking to the Xen hypervisor
– Two flavours: an installable ISO from xen.org, and XCP packages in Debian & Ubuntu (more distros to come)
More info: wiki.xen.org/wiki/XCP_Release_Features
– Windows PV drivers installable by Windows Update Service
Storage XenMotion (XCP 1.6):
– Migrate VMs between hosts or pools without shared storage
– Move a VM’s disks between storage repositories while the VM is running
More info: xen.org/download/xcp/releasenotes_1.6.0.html and xen.org/download/xcp/index_1.6.0.html
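Storage XenMotion is driven from the xe CLI. A sketch with illustrative host name and credentials; the parameter names follow XCP 1.6 conventions, so check `xe help vm-migrate` on your host, and the command is only echoed unless RUN=1 is set.

```shell
# Sketch: cross-pool live migration without shared storage via xe.
# Host, user and password are illustrative; verify flags with `xe help vm-migrate`.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run xe vm-migrate vm=demo live=true \
    remote-master=host2.example.com remote-username=root remote-password=secret
```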
[Chart: results of the XCP User Survey 2012 – 90% of users quoted these as the most important attributes; see also www.colt.net/cio-research]
Split the Control Domain into Driver, Stub and Service Domains
– See “Breaking Up is Hard to Do” (Xen Papers) and “Domain 0 Disaggregation for XCP and XenServer”
– Used today by Qubes OS (see qubes-os.org, where different windows run in different VMs) and Citrix XenClient XT; prototypes exist for XCP
Benefits: more security, increased serviceability and flexibility, better robustness, better performance, better scalability
– Ability to safely restart parts of the system (e.g. just a 275 ms outage from a failed Ethernet driver)
[Diagram – today: a monolithic Dom0. All services (xapi, xenopsd, libxl, healthd, the domain manager, qemu per guest, vswitch, networkd, tapdisk/blktap3, storaged, syslogd) plus the network, NFS/iSCSI and local-storage drivers run in a single Dom0 kernel on Xen. The host provides CPUs, RAM, NICs (or SR-IOV VFs) and RAID storage; user VMs reach the back ends through their PV network/block front ends via gntdev grant mappings.]
[Diagram – tomorrow: a disaggregated Dom0. The same services are split into separate domains: network driver domains, NFS/iSCSI and local-storage driver domains, per-guest qemu domains, a xapi domain (xapi, xenopsd, libxl, healthd, domain manager) and a logging domain (syslogd). The components communicate over dbus over v4v, and user VMs connect to the driver domains through their PV front ends as before.]
– Well-defined trusted computing base (much smaller than on a type-2 hypervisor)
– Minimal services in the hypervisor layer
Xen Security Modules (XSM) and FLASK:
– XSM is the Xen equivalent of LSM
– FLASK is the Xen equivalent of SELinux
– Developed, maintained and contributed to Xen by the NSA
– Compatible with SELinux (tools, architecture)
– XSM object classes map onto Xen features
More info: http://www.slideshare.net/xen_com_mgr/a-brief-tutorial-on-xens-advanced-security-features
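FLASK can be driven from the xl toolstack, much like SELinux’s tools. A sketch, assuming a hypervisor built with XSM/FLASK enabled; the policy file name is illustrative, and the commands are only echoed unless RUN=1 is set.

```shell
# Sketch: querying and setting the FLASK enforcement mode with xl.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run xl getenforce              # Disabled / Permissive / Enforcing
run xl setenforce Enforcing    # switch enforcement on
run xl loadpolicy xenpolicy    # load a compiled FLASK policy (name illustrative)
```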
[Diagram – the disaggregated architecture again (driver domains, qemu domains, xapi domain, logging domain, user VMs, dbus over v4v), now with a FLASK policy restricting access between the domains.]
Xen 4.3 (in development):
– scalability, performance, better NUMA support, …
More info: blog.xen.org/index.php/2013/02/11/xen-4-3-mid-release-roadmap-update
Xen on ARM
ARM architecture features for virtualization (on the ARM SOC):
– Hypervisor mode (EL2), kernel mode (EL1), user mode (EL0); hypercall interface (HVC)
– GIC v2 interrupt controller, generic timers (GT), 2-stage MMU
– A Device Tree describes the I/O
The Xen hypervisor runs in EL2; Dom0 and any other Xen guest VM run their kernel in EL1 and their user space in EL0.

On the virtualization spectrum (P = paravirtualized, VS = SW-virtualized, VH = HW-virtualized):
– x86 PVHVM (HVM domain): P P VS VH – optimal performance
– x86 PVH (PV domain):    P P P VH – optimal performance
– ARM v7 & v8:            P VH VH VH – optimal performance
Xen is coming back to CentOS: currently in semi-private beta, with release planned for CentOS 6.4, including XAPI packages – aka XCP in CentOS.
Application stacks running only on Xen APIs; they work on any Xen-based cloud or hosting service.
Examples:
– ErlangOnXen.org: Erlang
– HalVM: Haskell
– OpenMirage: OCaml
Benefits:
– Small footprint
– Low startup latency
– Extremely fast migration of VMs
[Diagram: a library OS embedded in the language run-time runs as a guest VM directly on Xen, its PV front ends talking to the PV back ends and HW drivers in Dom0.]
Conclusions
– Resilience, robustness & scalability
– Security: small attack surface, isolation & advanced security features
– Ready for use with cloud orchestration stacks
– Xen is still on top of the game, with exciting new developments and features in the pipeline
Slides available under CC-BY-SA 3.0 from www.slideshare.net/xen_com_mgr
@lars_kurth FREENODE: lars_kurth
xen.org/community/ecosystem.html
xen.org/community/presentations.html