
[Xen-devel] Priority for SMP VMs


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Gabriel Southern" <gsouther@xxxxxxx>
  • Date: Wed, 2 Jul 2008 22:36:53 -0400
  • Delivery-date: Wed, 02 Jul 2008 19:37:13 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi,

I'm working on a project with SMP VMs and I noticed behavior of the
credit scheduler that does not match my understanding of its
documentation.  It seems that assigning more VCPUs to a VM increases
the proportion of total system CPU resources the VM receives, whereas
the documentation indicates that this should be controlled by the
weight value.
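For context, my reading of the documentation is that with equal weights each
runnable VM should receive an equal share of total CPU time, regardless of
VCPU count.  A quick sketch of the expected proportions (hypothetical VM
names, all at the default weight of 256):

```python
# Expected CPU share under the credit scheduler, per my reading of the
# docs: proportional to weight, independent of VCPU count.
def expected_shares(weights):
    """Map each VM to its weight-proportional share of total CPU time."""
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

# Four VMs, all at the default weight of 256 -- VCPU counts should not matter.
weights = {"vm1-1vcpu": 256, "vm2-1vcpu": 256,
           "vm3-8vcpu": 256, "vm4-8vcpu": 256}
print(expected_shares(weights))  # each VM should get 0.25
```

That equal split is what I expected to see, rather than the 8-VCPU VMs
pulling ahead.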

For example, when running a CPU-intensive benchmark with some VMs
configured with 1 VCPU and other VMs configured with 8 VCPUs, the
benchmark took 37% longer to complete on the 1-VCPU VMs than on the
8-VCPU ones.  Unfortunately I did not record the exact CPU time each
VM received; however, I think the 8-VCPU VMs received around 30% more
CPU time than the 1-VCPU VMs.  These tests were performed with the
default weight of 256 for all VMs and no cap configured.
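For reference, I left the weight and cap at their defaults; they can be
inspected and adjusted with xm sched-credit on the Xen host (the domain
name "vm1" below is just a placeholder):

```shell
# List the current weight and cap for every domain
xm sched-credit

# Explicitly set a domain to the default weight of 256 with no cap
# ("vm1" is a placeholder domain name)
xm sched-credit -d vm1 -w 256 -c 0
```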

Based on the documentation I read, I don't think this is the behavior
the scheduler should exhibit.  I admit the tests I was doing were not
really practical use cases for real applications, but I'd be curious
whether anyone knows if this is a limitation of the credit scheduler's
design, or possibly a configuration problem with my system.  I'm
running Xen 3.2.0 compiled from the official source distribution
tarball, and the guest VMs are also using the 3.2.0 distribution with
the 2.6.18 kernel.  Any ideas about why my system is behaving this way
are appreciated.

Thanks,

Gabriel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
