Virtual Machine Monitors
Lakshmi Ganesh
What is a VMM?
Virtualization: using a layer of software to present a (possibly different) logical view of a given set of resources
VMM: simulates, on a single hardware platform, multiple hardware platforms - virtual machines
VMs are usually similar or identical to the underlying machine
VMs allow multiple operating systems to run concurrently on a single machine
What is it, really?
[Diagram: in both architectures the Virtual Machine Monitor sits between the hardware (or host OS) and the virtual machines; each VM runs a (guest) OS and its applications.]
Type 1 VMM: IBM VM/370, Xen, VMware ESX Server
Type 2 VMM: VMware Workstation
VMMs: Meet the family
Cousins (classified by the number of instructions executed directly on hardware):
  Statistically dominant number: VMM
  All unprivileged instructions: HVM (hybrid virtual machine)
  None: CSIM (complete software interpreter machine)
Siblings (VMM subtypes):
  By location of the VMM:
    On top of the machine: Type 1 VMM
    On top of an OS (host OS): Type 2 VMM
  By virtualization approach:
    Full virtualization
    Paravirtualization
Why is a VMM?
No more dual booting!
Sandbox for testing
Consolidate multiple servers onto a single machine
Add lots more servers - virtual ones!
Flash cloning: adapt the number of servers to load
VMMs: Challenges and Design Decisions
Several competing goals: what do we optimize for?
Performance: the VM must be like a real machine!
  Design decision: avoid simulation (Xen, VMware ESX)
  Design decision: Type 1 VMM (Xen, VMware ESX)
Ability to run unmodified OSes
  Design decision: full virtualization (VMware)
CPUs not amenable to virtualization
  Design decision: paravirtualization (Xen)
Challenges and Design Decisions (contd.)
Performance isolation
  Design decision: virtualize the MMU (Xen)
Scalability: more VMs per machine
  Design decision: memory reclamation, shared memory (Xen, VMware)
Ease of installation
  Design decision: hosted VMM (VMware Workstation)
The VMM must be reliable and bug-free
  Design decision: keep it simple - hosted VMM (VMware Workstation)
A Story
Real-world requirements:
  Each machine must host thousands of VMs
  VMs must run insecure software
  The VMM must send an alert when a breach occurs
  The VM's OS must look like a native OS to fool malware
Design properties these demand:
  Scalability
  Fault containment
  Copy-on-write
  Minimal OS modification
Case Study: Xen
[Diagram: Xen runs directly on the hardware (SMP x86, physical memory, Ethernet, SCSI/IDE) and exposes a virtual x86 CPU, virtual physical memory, a virtual network, and virtual block devices. Guest OSes (XenoLinux, XenoBSD, XenoXP) run user software through Xeno-aware device drivers; Domain0 runs control plane software via Xen's control interface.]
Figure 1: The structure of a machine running the Xen hypervisor, hosting a number of different guest operating systems, including Domain0 running control software in a XenoLinux environment.
Xen: The case for Paravirtualization
Paravirtualization: the interface the VM exports is not quite identical to the machine interface
Full virtualization is difficult
  Some CPUs are not amenable to it, e.g. the x86
  Replacing privileged operations with explicit hypercalls avoids binary rewriting and fault trapping
Full virtualization is undesirable
  It denies guest OSes important information that they could use to improve performance
  e.g. wall-clock vs. virtual time, resource availability
Xen: CPU Virtualization
Xen runs in ring 0 (most privileged); rings 1/2 are for the guest OS, ring 3 for user space
A GPF is raised if the guest attempts to use a privileged instruction
Xen lives in the top 64MB of the linear address space
  Segmentation is used to protect Xen, as switching page tables is too slow on standard x86
Hypercalls jump to Xen in ring 0
A guest OS may install a 'fast trap' handler, letting user-space system calls go directly to the guest OS without passing through Xen
Slide source: Ian Pratt http://www.cl.cam.ac.uk/research/srg/netos/papers/2004-xen-ols.pdf
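The trap-vs-hypercall distinction above can be sketched in a toy simulator. This is purely illustrative Python, not Xen code: the class, instruction names, and register names are invented. A privileged instruction issued by the deprivileged guest faults (GPF) and is then emulated by the VMM, while a paravirtualized hypercall enters the VMM deliberately and skips the fault path.

```python
class GPF(Exception):
    """General protection fault: privileged instruction in ring > 0."""

class ToyVMM:
    def __init__(self):
        self.shadow_cr3 = None   # VMM-held copy of the guest's page-table base
        self.faults = 0

    def run_guest_instr(self, instr, operand, ring=1):
        # Deprivileged guest: privileged instructions trap to the VMM.
        try:
            self._execute(instr, operand, ring)
        except GPF:
            self.faults += 1
            self._emulate(instr, operand)   # trap-and-emulate path

    def hypercall(self, instr, operand):
        # Paravirtualized guest calls the VMM directly: no fault taken.
        self._emulate(instr, operand)

    def _execute(self, instr, operand, ring):
        if instr == "mov_cr3" and ring != 0:
            raise GPF()

    def _emulate(self, instr, operand):
        if instr == "mov_cr3":
            self.shadow_cr3 = operand   # VMM validates/installs on guest's behalf

vmm = ToyVMM()
vmm.run_guest_instr("mov_cr3", 0x1000)   # faults, then emulated
vmm.hypercall("mov_cr3", 0x2000)         # enters the VMM with no fault
print(vmm.faults, hex(vmm.shadow_cr3))   # 1 0x2000
```

The sketch shows why paravirtualization is cheaper: the hypercall path never pays for the fault and decode step that trap-and-emulate requires.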
Xen: MMU Virtualization
[Diagram: two MMU virtualization schemes. Shadow-mode: the guest reads and writes its own virtual page tables, and the VMM validates each update and propagates it into shadow page tables consumed by the hardware MMU. Direct-mode (Xen): guest page tables are registered with and validated by the hypervisor, then used directly by the hardware, with updates made via hypercalls.]
Slide source: Ian Pratt http://www.cl.cam.ac.uk/research/srg/netos/papers/2004-xen-ols.pdf
MMU Micro-Benchmarks
[Chart: relative latency of page fault and process fork operations (µs).]
lmbench results on Linux (L), Xen (X), VMware Workstation (V), and UML (U)
Slide source: Ian Pratt http://www.cl.cam.ac.uk/research/srg/netos/papers/2004-xen-ols.pdf
Xen: I/O Virtualization
Device I/O:
  I/O devices are virtualized as Virtual Block Devices (VBDs)
  Data is transferred in and out of domains using buffer descriptor rings
  A ring is a circular queue of requests and responses; this generic mechanism is reused in various contexts
Network:
  Virtual network interface (VIF), with transmit and receive buffers
  Avoids data copies by bartering pages for packets
Slide source: Ian Pratt http://www.cl.cam.ac.uk/research/srg/netos/papers/2004-xen-ols.pdf
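The descriptor ring mentioned above can be sketched as a small Python class. This is an illustrative simplification, not Xen's actual `io/ring.h` structures: a fixed-size circular buffer where the frontend (guest) advances a request-producer index and the backend (driver domain) advances a consumer index, so neither side needs a lock over the other's counter.

```python
class DescriptorRing:
    def __init__(self, size=8):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.slots = [None] * size
        self.req_prod = 0   # advanced by the guest (frontend)
        self.req_cons = 0   # advanced by the VMM/driver domain (backend)

    def post_request(self, desc):
        if self.req_prod - self.req_cons == self.size:
            return False                       # ring full
        self.slots[self.req_prod % self.size] = desc
        self.req_prod += 1                     # publish only after writing slot
        return True

    def consume_request(self):
        if self.req_cons == self.req_prod:
            return None                        # ring empty
        desc = self.slots[self.req_cons % self.size]
        self.req_cons += 1
        return desc

ring = DescriptorRing(size=4)
for block in (10, 11, 12):
    ring.post_request({"op": "read", "block": block})
first = ring.consume_request()
print(first)   # {'op': 'read', 'block': 10}
```

Responses travel the same way in the opposite direction; because descriptors reference pages rather than carrying payloads, the same structure supports the page-bartering network path.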
Xen: TCP Results
[Chart: relative TCP bandwidth (Mbit/s), transmit and receive, at two MTU sizes, on Linux (L), Xen (X), VMware Workstation (V), and UML (U).]
Slide source: Ian Pratt http://www.cl.cam.ac.uk/research/srg/netos/papers/2004-xen-ols.pdf
Xen: Odds and Ends
Copy-on-write
  VMs share a single copy of read-only pages
  Write attempts trigger a page fault
  This traps to Xen, which creates a unique read-write copy of the page
  Result: lightweight VMs that scale well
Live Migration
  Within tens of milliseconds, VMs can be migrated from one machine to another! (though this is app-dependent)
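The copy-on-write behavior above can be sketched in a few lines of Python. This is a conceptual model only (the class and page contents are invented): all VMs initially read from one shared master copy, and the first write by a VM, which on real hardware would arrive as a page fault in Xen, produces a private copy visible only to the writer.

```python
class CowMemory:
    def __init__(self, template):
        self.shared = dict(template)   # one master copy of read-only pages
        self.private = {}              # per-VM copies, created on first write

    def read(self, vm, page):
        # A VM sees its private copy if it has one, else the shared page.
        return self.private.get((vm, page), self.shared[page])

    def write(self, vm, page, value):
        # Models the fault path: Xen copies the page, then the write lands
        # on the writer's private copy; other VMs keep the shared page.
        self.private[(vm, page)] = value

mem = CowMemory({0: "kernel", 1: "libc"})
print(mem.read("vm1", 0))   # "kernel" (shared)
mem.write("vm1", 0, "patched")
print(mem.read("vm1", 0))   # "patched" (vm1's private copy)
print(mem.read("vm2", 0))   # "kernel" (vm2 still shares the master)
```

This is why cloned VMs are cheap: a new VM costs almost nothing until it starts diverging from the template.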
Xen: Odds and Ends (contd.)
Live Migration mechanism
Pre-copy approach: the VM continues to run throughout
  'Lift' the domain onto shadow page tables
  Keep a bitmap of dirtied pages; scan it and transmit the dirtied pages
  Atomically 'zero the bitmap & make PTEs read-only'
  Iterate until no forward progress, then stop the VM and transfer the remainder
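The pre-copy loop above can be sketched as follows. This is an illustrative model, not Xen's implementation: the dirtying pattern is supplied as precomputed sets, standing in for the dirty-page bitmap, and each round retransmits only the pages dirtied while the previous round was being sent.

```python
def precopy_migrate(pages, dirty_rounds, max_rounds=10):
    """pages: dict of page -> contents; dirty_rounds[i]: set of pages
    dirtied while round i was transmitting. Returns (pages sent per
    round, pages left for the final stop-and-copy round)."""
    sent_per_round = []
    to_send = set(pages)                 # round 0: transmit everything
    for rnd in range(max_rounds):
        sent_per_round.append(len(to_send))
        dirtied = set(dirty_rounds[rnd]) if rnd < len(dirty_rounds) else set()
        to_send = dirtied                # next round resends only dirty pages
        if not dirtied or len(dirtied) >= sent_per_round[-1]:
            break                        # converged, or no forward progress
    # Whatever remains dirty is sent after the VM is stopped.
    return sent_per_round, len(to_send)

sent, remainder = precopy_migrate(
    pages={i: b"x" for i in range(100)},
    dirty_rounds=[{1, 2, 3, 4}, {1, 2}, {1}],
)
print(sent, remainder)   # [100, 4, 2, 1] 0
```

The shrinking per-round counts are exactly what makes the final stop-and-copy phase, and hence the observable downtime, so short.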
Xen: Odds and Ends (contd.)
Memory Reclamation
  Resources are over-booked: how do we reclaim memory from a guest OS?
  VMware ESX Server: balloon process
  Xen: balloon driver
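Ballooning can be sketched in a few lines. This toy Python model (the classes and page counts are invented) captures the key idea: the VMM cannot safely pick which guest pages to take, so a driver inside the guest "inflates" by allocating pages, letting the guest's own allocator choose the victims, and the VMM then reclaims the surrendered pages.

```python
class Guest:
    def __init__(self, total_pages):
        self.total = total_pages
        self.balloon = 0          # pages pinned by the in-guest balloon driver

    def usable_pages(self):
        # Memory the guest OS can still allocate for itself.
        return self.total - self.balloon

class Host:
    def __init__(self, free_pages):
        self.free = free_pages

    def reclaim(self, guest, n):
        guest.balloon += n        # balloon inflates: guest evicts/pages out
        self.free += n            # VMM takes the pages behind the balloon

host = Host(free_pages=0)
guest = Guest(total_pages=1024)
host.reclaim(guest, 256)
print(guest.usable_pages(), host.free)   # 768 256
```

Deflating the balloon reverses the transfer, so memory can be shifted between over-booked VMs as load changes.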
Xen: Scalability
Simultaneous SPEC WEB99 instances on Linux (L) and Xen (X); aggregate number of conforming clients:
  1 instance:    L 662    X: -16.3% vs. L (non-SMP guest)
  2 instances:   L 1001   X 924
  4 instances:   L 887    X 896
  8 instances:   L 842    X 906
  16 instances:  L 880    X 874
Figure 4: SPEC WEB99 for 1, 2, 4, 8 and 16 concurrent Apache servers: higher values are better.
Simultaneous OSDB-IR and OSDB-OLTP instances on Xen (aggregate score relative to a single instance):
  Instances:   1     2     4     8     8(diff)
  OSDB-IR:     158   318   289   282   290
  OSDB-OLTP:   1661  3289  2833  2685  2104
Figure 5: Performance of multiple instances of PostgreSQL running OSDB in separate Xen domains. 8(diff) bars show performance variation with different scheduler weights.
VM vs. Real Machine
Benchmark results (scores plotted relative to native Linux in the original figure):
  SPEC INT2000 (score):  L 567   X 567   V 554   U 550
  Linux build time (s):  L 263   X 271   V 334   U 535
  OSDB-IR (tup/s):       L 172   X 158   V 80    U 65
  OSDB-OLTP (tup/s):     L 1714  X 1633  V 199   U 306
  dbench (score):        L 418   X 400   V 310   U 111
  SPEC WEB99 (score):    L 518   X 514   V 150   U 172
Figure 3: Relative performance of native Linux (L), XenoLinux (X), VMware workstation 3.2 (V) and User-Mode Linux (U).
Things to think about
Is Xen only useful in research settings?
  OS modification is a BIG thing, but Xen v2.0 requires no modification of Linux 2.6 core code
Why Xen rather than VMware for honeyfarms?
  Is performance key for a honeypot? It's free :-)
Great expectations for VMMs: how realistic/useful are they? (e.g. mobile applications)
VMMs are not new; they have been resurrected. What further directions are there for research?
Conclusion
They started out as multiplexing tools back in the '60s
Resurrected and made over to suit a wide range of applications
VMMs today are fast, secure, and lightweight
VMMs have taken the (research?) world by storm
Thank you!
:-)
Extra slides
Why full virtualization is difficult
Modern CPUs are not designed for virtualization
Full virtualization requires the CPU to support direct execution: privileged instructions, when run in unprivileged mode, MUST trigger a trap
The x86 has up to 17 sensitive instructions that do not obey this rule!
E.g. the SMSW instruction stores the machine status word into a general-purpose register
  Its first bit is PE (Protection Enable: protected mode when set, real mode when clear)
  If a guest OS in (virtual) real mode checked the PE bit, it would incorrectly read it as protected mode
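The SMSW problem can be sketched concretely. This toy Python model (the register values are made up for illustration) shows why a sensitive-but-unprivileged instruction defeats trap-and-emulate: it executes silently instead of faulting, so the VMM never gets the chance to substitute the guest's virtual machine status word.

```python
PE_BIT = 0x1   # Protection Enable bit of the machine status word (CR0 bit 0)

def smsw(host_cr0):
    # Models SMSW, which is unprivileged on the (pre-VT) x86: it returns
    # the low 16 bits of CR0 without faulting, so no trap reaches the VMM.
    return host_cr0 & 0xFFFF

host_cr0 = 0x8000_0001        # the host really runs in protected mode (PE = 1)
guest_virtual_cr0 = 0x0       # the guest believes it is in real mode (PE = 0)

msw = smsw(host_cr0)          # executes silently: the host value leaks through
guest_sees_protected = bool(msw & PE_BIT)
print(guest_sees_protected)   # True, but the guest expected False
```

A trap here would have let the VMM return `guest_virtual_cr0` instead; because none occurs, full virtualization of such CPUs needs workarounds like binary rewriting, or paravirtualization as in Xen.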