
[Xen-devel] The overhead of VCPU migration in Xen


  • To: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "michaeli.zhi" <michaeli.zhi@xxxxxxxxx>
  • Date: Fri, 20 Nov 2009 15:26:28 +0800
  • Delivery-date: Thu, 19 Nov 2009 23:26:27 -0800
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:subject:message-id:x-mailer:mime-version:content-type; b=KI67synNR18FUh9AKX7C4SHttcEUvDPmsmRlTCAT5iVkwKUsMegud8UYs7tGXH1saS 7i7H0mfLFLOMQJj9DtTE23Iin1glps2BI7msk1HVFuXZkd1iaSTj/VnqesDrnqfsjPP/ gfgUD+E1RSR7VH7kfYgc1td2qRXOjLG6kLyww=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi, everyone.
I did a little study of the code of the credit scheduler in Xen,
and I am confused about the overhead of VCPU migration.
As I understand it, the process of VCPU migration just returns a "reasonable" VCPU from a peer PCPU
to the PCPU that is busy with another VCPU, after some checks beforehand (e.g., comparison of VCPU priority and affinity).
In a word, it is just a pointer being returned.
My question is: where does the overhead come from?
Is executing the load_balance code itself expensive,
or does the migration of a VCPU introduce some kind of cache misses?
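
To make the question concrete, here is a self-contained toy model of the steal path,
loosely after csched_load_balance() / csched_runq_steal() in xen/common/sched_credit.c
(the names and structure are simplified from memory, so treat it as illustrative rather
than the real code). The comments mark the places where I would expect the cost to hide,
even though the function really does just return a pointer:

/*
 * Toy model of the credit scheduler's steal path.  Compile with
 * e.g. gcc -std=c99; the locks and runqueues are stand-ins only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_PCPUS 4

struct vcpu {
    int          prio;      /* higher value = higher priority */
    uint64_t     affinity;  /* bitmask of PCPUs this VCPU may run on */
    struct vcpu *next;      /* runqueue link, sorted by prio descending */
};

struct pcpu {
    int          lock;      /* stands in for the per-cpu schedule lock */
    struct vcpu *runq;      /* head of the runqueue */
};

static struct pcpu pcpus[NR_PCPUS];

/* Cost #1: taking a remote cpu's schedule lock pulls its lock
 * cacheline across the interconnect. */
static bool trylock(struct pcpu *p)
{
    if (p->lock)
        return false;
    p->lock = 1;
    return true;
}

static void unlock(struct pcpu *p)
{
    p->lock = 0;
}

/* Walk a peer's runqueue looking for a VCPU we may steal.
 * Cost #2: every v-> access below touches cachelines that the peer
 * PCPU owns, so the scan itself bounces cachelines. */
static struct vcpu *runq_steal(struct pcpu *peer, int my_cpu, int my_prio)
{
    struct vcpu **pv, *v;

    for (pv = &peer->runq; (v = *pv) != NULL; pv = &v->next) {
        if (v->prio <= my_prio)
            break;                        /* runq is sorted: nothing better */
        if (!(v->affinity & (1ULL << my_cpu)))
            continue;                     /* affinity forbids my_cpu */
        *pv = v->next;                    /* dequeue ...                 */
        return v;                         /* ... and return the pointer  */
    }
    return NULL;
}

/* Called when my_cpu has nothing better than idle to run. */
static struct vcpu *load_balance(int my_cpu, int my_prio)
{
    for (int cpu = 0; cpu < NR_PCPUS; cpu++) {
        if (cpu == my_cpu)
            continue;
        if (!trylock(&pcpus[cpu]))        /* don't spin on a busy peer */
            continue;
        struct vcpu *v = runq_steal(&pcpus[cpu], my_cpu, my_prio);
        unlock(&pcpus[cpu]);
        if (v)
            return v;                     /* Cost #3 comes *after* this:
                                           * the stolen VCPU now runs with
                                           * cold L1/L2 caches and TLB on
                                           * my_cpu. */
    }
    return NULL;                          /* nothing stealable: go idle */
}

int main(void)
{
    struct vcpu v = { .prio = 2, .affinity = 0xf, .next = NULL };
    pcpus[1].runq = &v;
    printf("stole: %p\n", (void *)load_balance(0, 0));
    return 0;
}

So even in this toy the pointer return is a single store; the costs that do not
show up in the source are the remote lock and runqueue cachelines bouncing between
PCPUs during the scan, and, usually dominant, the migrated VCPU refilling its
caches and TLB from scratch on the new PCPU.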
 
Best regards
2009-11-20

michaeli.zhi
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

