
Xen Project 4.4: Features and Futures (Russell Pavlicek, Xen Project)



  1. Xen Project 4.4: Features and Futures Russell Pavlicek Xen Project Evangelist Citrix Systems

  2. About This Release • Xen Project 4.4.0 was released on March 10, 2014. • This release is the work of 8 months of development, with 1193 changesets. • Xen Project 4.4 is our first release made with an attempt at a 6-month development cycle. – Between Christmas and a few important blockers, we missed that target by about 6 weeks, but still not too bad overall.

  3. Xen Project 101: Basics

  4. Hypervisor Architectures. Type 1: Bare-metal hypervisor. A pure hypervisor that runs directly on the hardware and hosts guest OSes. [Diagram: guest VMs (guest OS and apps) sit on top of the hypervisor (scheduler, device drivers/models, MMU), which runs directly on the host hardware (I/O, memory, CPUs). Benefit: partition isolation, reliability, higher security.]

  5. Hypervisor Architectures. Type 1: Bare-metal hypervisor: a pure hypervisor that runs directly on the hardware and hosts guest OSes. Type 2: OS-hosted hypervisor: a hypervisor that runs within a host OS and hosts guest OSes inside of it, using the host OS services to provide the virtual environment. [Diagrams: in Type 1, guest VMs run on the hypervisor (scheduler, device drivers/models, MMU) directly on the host hardware; in Type 2, guest VMs run under user-level VMMs and device models on top of the host OS, whose kernel provides a ring-0 VM monitor and device drivers. Type 1 provides partition isolation, reliability and higher security; Type 2 offers low cost, no additional drivers, and ease of use and installation.]

  6. Xen Project: Type 1 with a Twist. [Diagram: the standard Type 1 bare-metal picture: guest VMs (guest OS and apps) on a hypervisor containing the scheduler, device drivers/models and MMU, running on the host hardware (I/O, memory, CPUs).]

  7. Xen Project: Type 1 with a Twist. [Diagram: the Type 1 bare-metal hypervisor compared with the Xen architecture. In the Xen architecture the hypervisor itself contains only the scheduler and MMU; device drivers and device models are not part of the hypervisor, and the guest VMs run on top of it on the host hardware.]

  8. Xen Project: Type 1 with a Twist. [Diagram: the Xen architecture adds a control domain (dom0), a Linux or BSD domain that hosts the device models and device drivers, running alongside the guest VMs; the hypervisor itself still contains only the scheduler and MMU.]

  9. Basic Xen Project Concepts • Console: interface to the outside world • Control Domain aka Dom0: Dom0 kernel with drivers; Xen management toolstack • Guest Domains: your apps • Driver/Stub/Service Domain(s): a "driver, device model or control service in a box"; de-privileged and isolated; lifetime: start, stop, kill. [Diagram: control domain (dom0) with the Dom0 kernel next to guest VMs, all on the hypervisor (scheduler, MMU, XSM) and host hardware (I/O, memory, CPUs); the hypervisor and dom0 form the Trusted Computing Base.]

  10. Basic Xen Project Concepts: Toolstack+ • Console: interface to the outside world • Control Domain aka Dom0: Dom0 kernel with drivers; Xen management toolstack • Guest Domains: your apps • Driver/Stub/Service Domain(s): a "driver, device model or control service in a box"; de-privileged and isolated; lifetime: start, stop, kill. [Diagram: the same picture, now highlighting the toolstack running inside the control domain (dom0).]

  11. Basic Xen Project Concepts: Disaggregation • Console: interface to the outside world • Control Domain aka Dom0: Dom0 kernel with drivers; Xen management toolstack • Guest Domains: your apps • Driver/Stub/Service Domain(s): a "driver, device model or control service in a box"; de-privileged and isolated; lifetime: start, stop, kill. [Diagram: the same picture, now with one or more driver, stub or service domains running alongside the control domain (dom0).]
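
As a rough illustration of how the control domain and toolstack described above are used in practice, the shell sketch below drives Xen with the xl toolstack from dom0; the guest name and config path are hypothetical.

    # From the control domain (dom0): list running domains, including dom0 itself
    xl list
    # Create and start a guest from its config file (hypothetical path)
    xl create /etc/xen/guest1.cfg
    # Attach to the guest console (Ctrl-] to detach), then shut the guest down
    xl console guest1
    xl shutdown guest1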

  12. Xen Project 4.4 Features

  13. Improved Event Channel Scalability • Event channels are paravirtualized interrupts • Previously limited to either 1024 or 4096 channels per domain – Domain 0 needs several event channels for each guest VM (for network/disk backends, qemu etc.) – Practical limit of total number of VMs to around 300-500 (depending on VM configuration)

  14. Improved Event Channel Scalability (2) • New FIFO-based event channel ABI allows for over 100,000 event channels – Improves fairness – Allows for multiple priorities – The increased limit allows for more VMs, which benefits large systems and cloud operating systems such as MirageOS, ErlangOnXen, OSv, HalVM – Also useful for VDI applications
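
As a hedged way to see event-channel consumption in practice, the lsevtchn utility shipped with the Xen tools dumps a domain's event channels from dom0; the domain IDs below are illustrative.

    # Show the event channels bound by guest domain 1 (interdomain, virq, ipi)
    lsevtchn 1
    # Count dom0's channels, to get a feel for how many each guest costs dom0
    lsevtchn 0 | wc -l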

  15. Experimental PVH Guest Support • PVH mode combines the best elements of HVM and PV – PVH takes advantage of many of the hardware virtualization features that exist in contemporary hardware • Potential for significantly increased efficiency and performance • Reduced implementation footprint in Linux and FreeBSD • Enable with "pvh=1" in your config
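
A minimal sketch of an xl guest configuration with PVH enabled; only the pvh=1 setting comes from the slide above, and the name, kernel and disk paths are hypothetical.

    # pvh-guest.cfg : illustrative PVH guest definition
    name    = "pvh-guest"
    kernel  = "/boot/vmlinuz-guest"               # PV-bootable kernel (hypothetical path)
    ramdisk = "/boot/initrd-guest.img"
    memory  = 1024
    vcpus   = 2
    disk    = [ 'phy:/dev/vg0/pvh-guest,xvda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    pvh     = 1                                   # run this PV guest in a PVH container (experimental in 4.4)

The guest is then started as usual with "xl create pvh-guest.cfg".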

  16. Xen Project Virtualization Vocabulary • PV – Paravirtualization – Hypervisor provides API used by the OS of the Guest VM – Guest OS needs to be modified to provide the API • HVM – Hardware-assisted Virtual Machine – Uses CPU VM extensions to handle Guest requests – No modifications to Guest OS – But CPU must provide the VM extensions • FV – Full Virtualization (Another name for HVM)

  17. Xen Project Virtualization Vocabulary • PVHVM – PV on HVM drivers – Allows H/W virtualized guests to use PV disk and I/O drivers – No modifications to guest OS – Better performance than straight HVM • PVH – PV in HVM Container (New in 4.4) – Almost fully PV – Uses HW extensions to eliminate PV MMU – Possibly best mode for CPUs with virtual H/W extensions
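
To make the vocabulary concrete, the hedged config fragments below show how each mode is typically selected in an xl guest config; option support varies by release and by guest kernel, so treat these as illustrative.

    # PV: paravirtualized guest, booted from a supplied kernel or via pygrub
    kernel = "/boot/vmlinuz-guest"                # or: bootloader = "pygrub"

    # HVM (and PVHVM): hardware-virtualized guest; PVHVM simply means the guest
    # kernel loads PV disk/network drivers on top of this
    builder = "hvm"
    xen_platform_pci = 1                          # expose the platform device the PV drivers attach to

    # PVH (new in 4.4): a PV guest run inside an HVM container
    pvh = 1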

  18. The Virtualization Spectrum [Chart: legend P = paravirtualized, VS = virtualized (software), VH = virtualized (hardware). Rows: disk and network; interrupts, timers; emulated motherboard, legacy boot; privileged instructions and page tables. Columns run from PV mode/domain to HVM mode/domain, with the PVH column new in 4.4.]

  19. The Virtualization Spectrum [Chart: the same rows (emulated motherboard and legacy boot; interrupts, timers; disk and network; privileged instructions and page tables) rated from optimal performance through scope for improvement to poor performance, across the columns from HVM mode/domain to PV mode/domain, with PVH marked as new in 4.4.]

  20. Improved Disk Driver Domains • Linux driver domains used to rely on udev events in order to launch backends for guests – Dependency on udev is replaced with a custom daemon built on top of libxl – Now feature complete and consistent between Linux and non-Linux guests – Provides greater flexibility in order to run user-space backends inside of driver domains – Example of capability: driver domains can now use Qdisk backends, which was not possible with udev
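
As a hedged sketch of the Qdisk capability mentioned above, an xl disk specification can point a guest's block backend at a named driver domain; the driver-domain name and image path are hypothetical.

    # In the guest's config: serve xvda from a qcow2 image via the Qdisk (qemu)
    # backend running inside the driver domain "storage-dom"
    disk = [ 'backend=storage-dom,backendtype=qdisk,format=qcow2,vdev=xvda,target=/images/guest1.qcow2' ]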

  21. Improved Support for SPICE • SPICE is a protocol for virtual desktops which allows a much richer connection than display-only protocols like VNC • Added support for additional SPICE functionality, including: – Vdagent – clipboard sharing – USB redirection
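
A hedged example of the SPICE options in an HVM guest config; exact option names should be checked against the xl.cfg man page for your release, and the host and port values are illustrative.

    # HVM guest config fragment enabling a SPICE display
    spice = 1
    spicehost = '0.0.0.0'
    spiceport = 6000
    spicedisable_ticketing = 1        # no password; only for trusted networks
    spicevdagent = 1                  # vdagent support (new in 4.4)
    spice_clipboard_sharing = 1       # clipboard sharing via vdagent (new in 4.4)
    spiceusbredirection = 4           # up to 4 redirected USB devices (new in 4.4)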

  22. GRUB 2 Support of Xen Project PV Images • In the past, Xen Project software required a custom implementation of GRUB called pvgrub • The upstream GRUB 2 project now has a build target which will construct a bootable PV Xen Project image – This ensures 100% GRUB 2 compatibility for pvgrub going forward – Delivered in upcoming GRUB 2 release (v2.02?)
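
A rough sketch of building and using the GRUB 2 Xen platform target, assuming a GRUB tree recent enough to include it; the paths and the exact invocation are illustrative, not definitive.

    # Build GRUB for the PV Xen platform (produces the x86_64-xen target)
    ./configure --with-platform=xen --target=x86_64
    make && make install
    # Assemble a standalone PV GRUB image with an embedded grub.cfg
    grub-mkstandalone -O x86_64-xen -o grub-x86_64-xen.bin "boot/grub/grub.cfg=./grub.cfg"

The resulting image can then be used as the kernel= entry of a PV guest's xl config in place of the old pvgrub.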

  23. Indirect Descriptors for Block PV Protocol • Modern storage devices work much better with larger chunks of data • Indirect descriptors have allowed the size of each individual request to triple, greatly improving I/O performance when running on fast storage technologies like SSD and RAID • This support is available in any guest running Linux 3.11 or higher (regardless of Xen Project version)
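
One hedged way to confirm that a guest's block backend advertises indirect descriptors is to inspect the backend's XenStore nodes from dom0; the domain ID and device number below are placeholders (51712 is the conventional number for xvda).

    # From dom0: list the vbd backend nodes for guest domain 5, device xvda
    xenstore-ls /local/domain/0/backend/vbd/5/51712
    # A backend with indirect-descriptor support exposes a node such as:
    #   feature-max-indirect-segments = "32"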
