
[Xen-devel] Performance of Xen VCPU Scheduling



Hello,

I observed that a configuration in which dom0 vcpus were pinned to a set of pcpus using dom0_vcpus_pin, and guests were prevented from running on those dom0 pcpus (a setup here called "exclusively-pinned dom0 vcpus", or xpin), increased the general performance of the guests during bootstorms and at high guest density on hosts with around 24 pcpus or more, even though fewer pcpus were available to the guests.
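For reference, a minimal sketch of the xpin setup described above, assuming a 24-pcpu host, 4 dom0 vcpus, and the xl toolstack (the pcpu counts and the guest name are illustrative, not the exact values we tested):

```sh
# Xen boot parameters (e.g. on the xen.gz line in the grub config):
# give dom0 4 vcpus and pin each dom0 vcpu to the matching pcpu.
#   dom0_max_vcpus=4 dom0_vcpus_pin

# After boot, keep a guest off the dom0 pcpus by restricting all of
# its vcpus to the remaining pcpus (4-23 on this hypothetical host):
xl vcpu-pin guest1 all 4-23

# Equivalently, in the guest's xl config file:
#   cpus = "4-23"
```

This only establishes the pinning; the bootstorm and density results below compare this setup against the default unpinned configuration.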

While trying to understand why this performance gain is absent in the default non-pinned state, and whether it could be obtained in the default non-pinned dom0 configuration, Matthew Portas and I evaluated many Xen 4.2 parameters and patches:

http://wiki.xenproject.org/wiki/Performance_of_Xen_VCPU_Scheduling

The link above contains the results we thought would be of interest to this list, and we invite everybody here to have a look. We are especially interested in feedback on the prototype section, in further ideas for patches, and in whether the listed patches are reasonable or could have side-effects.

Interesting results in the link above:
- xpin decreases startup time of vms in a bootstorm
- xpin has a pathological case when the vms are burning lots of cpu
- nopin produces an interesting cluster of high event-channel latency, between 100us and 1000us, when enough vms are using lots of cpu
- experimental tweaks to the xen credit1 scheduler code that make the xpin pathological case go away, plus other increases (and sometimes decreases) in bootstorm and vm density performance

cheers,
Marcus

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
