Scheduling in Xen: Past, Present and Future
Dario Faggioli
dario.faggioli@citrix.com
Seattle, WA – 18th of August, 2015

Outline:
◮ Introduction
◮ Scheduling in Virtualization
◮ History of (Xen) Scheduling
◮ Scheduler Features
◮ Workin’ On
◮ Conclusions
◮ Hello, my name is Dario
◮ Working on Xen (from within Citrix) since 2011
◮ Mostly hypervisor stuff, but also toolstack: NUMA, cpupools, ...
◮ most of the time, there isn’t even any CPU overbooking
◮ when there’s overbooking, not everything is equally important
◮ I/O is more important!
◮ There is pretty much always CPU overbooking
◮ All activities (i.e., all VMs) are (potentially) equally important
[Diagram: Xen architecture. The Xen hypervisor sits on the hardware (CPU, memory, I/O devices). The Control Domain (NetBSD or Linux) runs the toolstack, the device model (qemu) and hardware drivers. Paravirtualized (PV) domains (NetBSD or Linux) and Fully Virtualized (HVM) domains (Windows, FreeBSD, ...) use netfront/blkfront, connected to netback/blkback in the Control Domain or in a Driver Domain.]
◮ How does Xen handle I/O?
  ◮ via split drivers: netfront/blkfront in the guest, connected to netback/blkback in the Control Domain or in a dedicated Driver Domain
  ◮ HVM domains can also be serviced by the device model (qemu) running in the Control Domain
◮ some info at http://wiki.xen.org/wiki/Xen_Project_Schedulers
◮ we just killed SEDF in 4.6 (yay!!)
◮ that’s it
◮ pinning: you can run there and only there!
◮ hard affinity: you can’t run outside of that spot
◮ soft affinity: you prefer to run in that spot but, if needed, you can run outside of it
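The difference can be sketched in a few lines of Python. This is a hypothetical illustration, not Xen code: the function name and the idea of picking among idle pCPUs are made up for the example; what it shows is just that hard affinity is a constraint while soft affinity is only a preference.

```python
# Hypothetical sketch (not Xen code): picking a pCPU for a vCPU,
# honoring hard affinity and merely preferring soft affinity.

def pick_pcpu(idle_pcpus, hard_affinity, soft_affinity):
    """Return an idle pCPU the vCPU may run on, or None.

    hard_affinity: set of pCPUs the vCPU MUST stay within.
    soft_affinity: set of pCPUs the vCPU PREFERS (a hint, not a constraint).
    """
    allowed = idle_pcpus & hard_affinity   # never violate hard affinity
    preferred = allowed & soft_affinity    # ...but prefer the soft set
    if preferred:
        return min(preferred)              # any pick works; min() for determinism
    if allowed:
        return min(allowed)                # soft affinity is only a preference
    return None                            # nothing legal is idle

# Example: pCPUs 0-3 idle; must stay within {2,3,4,5}; prefers {4,5}
print(pick_pcpu({0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5}))  # -> 2 (soft set is busy)
```

Note how the soft set is consulted only inside the hard set: a vCPU is never placed outside its hard affinity, no matter what the soft affinity says.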
◮ acts at domain creation time, important for memory allocation
◮ easy to tweak (at libxl build time, for now) heuristics:
  ◮ use the smallest possible set of nodes (ideally, just one)
  ◮ use the (set of) node(s) with fewer vCPUs bound to it ([will] consider both hard and soft affinity)
  ◮ use the (set of) node(s) with the most free RAM (mimics the “worst fit” algorithm)
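As an illustration of the heuristics above, here is a simplified, hypothetical sketch (not libxl code, and restricted to placing on a single node): prefer nodes with fewer vCPUs already bound, then break ties worst-fit style on free RAM. Field names and numbers are invented.

```python
# Illustrative single-node placement sketch; not the real libxl placer.

def place_domain(nodes, ram_needed):
    """Pick a NUMA node for a new domain, worst-fit style.

    nodes: list of dicts with 'id', 'free_ram', 'bound_vcpus'.
    Prefers nodes with fewer vCPUs already bound to them, then
    (worst fit) the node with the most free RAM.
    """
    candidates = [n for n in nodes if n['free_ram'] >= ram_needed]
    if not candidates:
        return None  # no single node fits: a real placer would try node sets
    # fewer bound vCPUs first; break ties with most free RAM (worst fit)
    best = min(candidates, key=lambda n: (n['bound_vcpus'], -n['free_ram']))
    return best['id']

nodes = [
    {'id': 0, 'free_ram': 8192, 'bound_vcpus': 6},
    {'id': 1, 'free_ram': 4096, 'bound_vcpus': 2},
    {'id': 2, 'free_ram': 6144, 'bound_vcpus': 2},
]
print(place_domain(nodes, 2048))  # -> 2 (ties on vCPUs with node 1, more free RAM)
```

Worst fit (take the emptiest node) may seem wasteful, but it spreads domains out and leaves each one room to grow its allocations locally.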
◮ dom0_max_vcpus: makes sense
◮ dom0_vcpus_pin: bleah!!
◮ dom0_nodes: new parameter. Place Dom0’s vCPUs and memory on the specified NUMA node(s)
  ◮ strict (default) uses hard affinity
  ◮ relaxed uses soft affinity
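As a sketch, such a configuration might look like the GRUB entry below. The vCPU count, node number and file paths are made-up example values, and the exact dom0_nodes value syntax should be double-checked against the Xen command-line documentation.

```shell
# GRUB entry sketch (hypothetical values): give Dom0 4 vCPUs and place its
# vCPUs and memory on NUMA node 0, using soft affinity ("relaxed").
multiboot /boot/xen.gz dom0_max_vcpus=4 dom0_nodes=0,relaxed
module /boot/vmlinuz ...
```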
◮ Cache Monitoring Technology (CMT)
  ◮ how ’cache hungry’ is a given pCPU?
  ◮ how much free cache is there on a given socket/NUMA node?
◮ Cache Allocation Technology (CAT)
  ◮ if you’re going to run on pCPUs #x, #y and #z...
  ◮ ...you better have some cache space reserved there!
◮ scalability measure (for the Linux scheduler)
◮ balance the load: often among SMT threads, less often among cores, sockets and NUMA nodes
◮ based on CPU topology
◮ vCPUs wander among pCPUs
◮ vCPU #0 and #1 of a guest can be SMT siblings or not... at different times!
◮ it doesn’t make sense, in the guest, to try to optimize for a topology that keeps changing
◮ flat layout for the scheduling domains → just one domain with all the vCPUs in it
◮ load balancing based on historical load measurements
◮ runqueue kept in order of credits (instead of Round-Robin, as in Credit)
◮ configurable runqueue arrangement
◮ in Credit we have:
  ◮ credits and weights
  ◮ 2 priorities
  ◮ oh, actually, it’s 3
  ◮ active and non-active state of vCPUs
  ◮ flipping between active/non-active means flipping between burning/non-burning credits, which in turn means wandering around among priorities
◮ in Credit2 we have:
  ◮ credits burned based on weights
  ◮ that’s it
  ◮ no, really, it’s that simple! :-)
◮ in Credit we have:
  ◮ periodic runqueue sorting. Freezes a runqueue
  ◮ periodic accounting. Freezes the whole scheduler!
◮ in Credit2 we have:
  ◮ “global” lock only for load balancing (looking at improving it)
◮ SMT awareness (done, missing final touches)
◮ hard and soft affinity support (someone working on it)
◮ tweaks and optimization in the load balancer (someone working on it)
◮ cap and reservation (!!!)
◮ 16 pCPUs host
◮ 8 vCPUs guest
◮ varying # of build jobs → # of vCPUs busy in the guest
  ◮ -j4, -j8, -j (unlimited)
◮ varying the interference load → # of vCPUs active in Dom0
  ◮ nothing, 4 CPU hog tasks, 12 CPU hog tasks
◮ Some tweaks still missing, but really promising!
◮ we should continuously try to assess where we stand (performance-wise)
◮ we should always strive to do better!
Introduction
History of (Xen) Scheduling Scheduler Features Workin’ On Conclusions
◮ num. of activitieas that can be monitored is limited ◮ applies to L3 cache only, for now
◮ per-vCPU granularity =
◮ L2 occupancy/bandwidth stats, for helping intra-socket
◮ use one monitoring ID per pCPU. This gives:
◮ how ’cache hungry’ a pCPU is being ◮ how much free chace there is on each socket/NUMA node
◮ sample periodically and use for mid-level load balancing
◮ ... ideas welcome!!
Seattle, WA – 18th of August, 2015 Scheduling in Xen: Past, Present and Future 34 / 39
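The "one monitoring ID per pCPU" idea could be sketched like this (hypothetical numbers and function names, not Xen's interfaces): per-pCPU occupancy samples are aggregated into a per-socket estimate of free cache, which a load balancer could then consult.

```python
# Sketch of aggregating per-pCPU L3 occupancy samples (one monitoring
# ID per pCPU) into per-socket free-cache estimates. Illustrative only.

def socket_free_cache(occupancy_kb, pcpu_to_socket, l3_size_kb):
    """occupancy_kb: {pcpu: sampled L3 occupancy in KiB}.
    pcpu_to_socket: {pcpu: socket id}.
    l3_size_kb: L3 size per socket, in KiB.
    Returns {socket: estimated free L3 in KiB}."""
    used = {}
    for pcpu, occ in occupancy_kb.items():
        s = pcpu_to_socket[pcpu]
        used[s] = used.get(s, 0) + occ
    return {s: max(l3_size_kb - u, 0) for s, u in used.items()}

samples = {0: 4096, 1: 2048, 2: 512, 3: 1024}   # pCPU 0 is the 'cache hungry' one
topo = {0: 0, 1: 0, 2: 1, 3: 1}                 # pCPUs 0-1 on socket 0, 2-3 on socket 1
print(socket_free_cache(samples, topo, l3_size_kb=20480))
# -> {0: 14336, 1: 18944}: socket 1 has more free cache
```

Sampling this periodically, rather than on every scheduling decision, keeps the monitoring cost off the hot path, which is what makes it suitable for mid-level load balancing.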
Introduction
History of (Xen) Scheduling Scheduler Features Workin’ On Conclusions
◮ if you’re going to run on pCPUs #x, #y and #z... ◮ ...you better have some cache space reserved there!
◮ when setting affinity, set CAT accordingly
◮ always? ◮ for hard affinity, or soft affinity, or both? ◮ with what mask (i.e., how to split the cache)? ◮ what about the vice-versa?
Seattle, WA – 18th of August, 2015 Scheduling in Xen: Past, Present and Future 35 / 39
◮ vCPU x is top priority (higher credits, whatever)
◮ vCPU x issues an I/O operation. It has some credits remaining
◮ some other domains’ vCPUs y, w and z have higher priority
◮ avoids priority inversion (no, we’re not the Mars Pathfinder, but still...)
◮ makes vCPU x sort of “pay” for the CPU load it generates with its I/O
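One way to read "boost, but make it pay" is the following toy sketch (an illustration, not Xen's implementation; class and function names are invented): a vCPU waking up from I/O gets to preempt immediately, but the time it runs while boosted is charged back to its credits at a surcharge, so the boost cannot be abused.

```python
# Toy sketch of "boost on I/O wakeup, but charge for it". Not Xen code.

class VCPU:
    def __init__(self, name, credits):
        self.name, self.credits, self.boosted = name, credits, False

def wake_from_io(v):
    v.boosted = True   # jump the queue: run ahead of higher-credit vCPUs

def account(v, ran_ns, boost_surcharge=2):
    # while boosted, burn credits faster: the vCPU "pays" for its boost
    v.credits -= ran_ns * (boost_surcharge if v.boosted else 1)
    v.boosted = False  # a boost lasts one accounting period

x = VCPU('x', credits=500)
wake_from_io(x)   # its I/O completed: let it run right away
account(x, 100)
print(x.credits)  # -> 300: ran 100ns, but was charged 200 credits
```

The surcharge value here is arbitrary; the point is only that low I/O latency and fair CPU accounting can coexist if the boosted runtime is billed back.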