
Re: [Xen-devel] RE: [PATCH 4/4] HVM vcpu add/remove: qemu logic for vcpu add/remove



On 12/14/2009 10:54 AM, Liu, Jinsong wrote:
Keir Fraser wrote:
On 14/12/2009 08:04, "Keir Fraser"<keir.fraser@xxxxxxxxxxxxx>  wrote:

On 13/12/2009 18:05, "Liu, Jinsong"<jinsong.liu@xxxxxxxxx>  wrote:

HVM vcpu add/remove: qemu logic for vcpu add/remove

-- at the qemu side, get vcpu_avail, which is used for the original cpu availability map;
-- set up GPE ioread/iowrite at qemu;
-- set up the vcpu add/remove user interface through the monitor;
-- set up SCI logic;
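
To make the GPE/SCI flow above concrete, here is a minimal, self-contained C sketch of the idea. The port addresses, the hotplug bit position, and the assert_sci()/vcpu_hotplug_event() helpers are invented for illustration; this is not the actual qemu-dm code. The guest's GPE status/enable registers are modelled by ioread/iowrite handlers, and a vcpu add/remove event latches a status bit and raises the SCI only if the guest has enabled that GPE, at which point the guest OS runs its ACPI GPE handler.

/* Minimal sketch of a GPE block with an SCI trigger; addresses, the
 * hotplug bit, and helper names are made up for illustration. */
#include <stdint.h>
#include <stdio.h>

#define GPE0_STS  0x1f40                /* hypothetical GPE0 status port */
#define GPE0_EN   0x1f44                /* hypothetical GPE0 enable port */
#define GPE_CPU_HOTPLUG_BIT (1u << 2)   /* made-up bit position          */

static uint32_t gpe0_sts, gpe0_en;

/* Stand-in for whatever actually raises the ACPI SCI interrupt. */
static void assert_sci(void)
{
    printf("SCI asserted -> guest runs ACPI GPE handler\n");
}

/* Guest reads of the GPE block (ioread handler). */
static uint32_t gpe_ioread(uint32_t addr)
{
    return (addr == GPE0_STS) ? gpe0_sts :
           (addr == GPE0_EN)  ? gpe0_en  : 0;
}

/* Guest writes: status bits are write-1-to-clear, enable is plain. */
static void gpe_iowrite(uint32_t addr, uint32_t val)
{
    if (addr == GPE0_STS)
        gpe0_sts &= ~val;
    else if (addr == GPE0_EN)
        gpe0_en = val;
}

/* A monitor vcpu add/remove command would end up somewhere like here:
 * latch the GPE status bit and fire SCI if the guest enabled that GPE. */
static void vcpu_hotplug_event(void)
{
    gpe0_sts |= GPE_CPU_HOTPLUG_BIT;
    if (gpe0_en & GPE_CPU_HOTPLUG_BIT)
        assert_sci();
}

int main(void)
{
    gpe_iowrite(GPE0_EN, GPE_CPU_HOTPLUG_BIT);   /* guest enables the GPE */
    vcpu_hotplug_event();                        /* host-side add/remove  */
    gpe_iowrite(GPE0_STS, gpe_ioread(GPE0_STS)); /* guest acks the event  */
    return 0;
}

In the real device model, the monitor command would presumably also update the availability map before latching the status bit, so that the guest's ACPI methods see the new state when the GPE handler runs.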
I'm guessing that, because this adds a new command-line option, I need
this checked into the qemu tree before I can apply your first patch
(1/4)? Otherwise that patch will break domain creation, as qemu will
exit with an 'unrecognised option' error. So I need Ian Jackson to
apply this one and send me an updated QEMU_TAG first.
As of c/s 20640 all your Xen patches are checked in. I modified them
a bit, so you may want to take a look. I commented out the one line
that actually sets the new qemu option, until that option is
supported by our qemu. I think there is a question over whether the
new qemu option should (a) have a better name (I called it
vcpu_online[] in the hvm_info structure); and (b) have a more
user-friendly format (it currently passes a decimal number interpreted
as a bitmap - perhaps it should be a list of vcpus instead).
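
As an illustration of the format question only (the option name, the parsing, and the 128-vcpu cap below are assumptions for the sketch, not the committed interface), interpreting a decimal number as a bitmap might look like this in C; note that a 64-bit parse already imposes exactly the kind of width limit discussed further down the thread:

/* Sketch: interpret "-vcpu_avail <decimal>" as a bitmap of online vcpus.
 * Option name and the HVM_MAX_VCPUS value are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define HVM_MAX_VCPUS 128

static uint8_t vcpu_online[HVM_MAX_VCPUS];

static void parse_vcpu_avail(const char *arg)
{
    /* strtoull yields at most 64 bits, which is exactly the kind of
     * width limit a list-based format would avoid. */
    uint64_t map = strtoull(arg, NULL, 0);
    for (int i = 0; i < 64 && i < HVM_MAX_VCPUS; i++)
        vcpu_online[i] = (map >> i) & 1;
}

int main(void)
{
    parse_vcpu_avail("7");              /* vcpus 0-2 online */
    for (int i = 0; i < 4; i++)
        printf("vcpu%d: %s\n", i, vcpu_online[i] ? "online" : "offline");
    return 0;
}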

  -- Keir
Thanks!

Currently, at the xm level, the HVM config is kept compatible with the PV config (patches 20495,
20502): they both set maxvcpus / available vcpus in the config file as
maxvcpus = xxx
vcpus = yyy
and both HVM and PV can dynamically add/remove vcpus now.

One question is that patches 20384/20386/20389 and qemu patch
3140780e451d3919ef2c81f91ae0ebe3f286eb06 extend the HVM vcpu maximum to 128; however,
the current xm and xend python logic seems to support at most 64, since xm/xend
currently interpret the vcpu bitmap as a 'long'.
I agree that the bitmap would be better replaced by a list of vcpus, so that
the number of vcpus will not be limited in the future.
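
For comparison, here is a hedged sketch of the list-style format being suggested (the option syntax and names are invented for illustration): each vcpu id is parsed independently, so nothing ties the maximum vcpu count to the width of a 'long'.

/* Sketch: parse "-vcpu_avail 0,1,2,100" as an explicit list of online
 * vcpus. Syntax and names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HVM_MAX_VCPUS 128

static unsigned char vcpu_online[HVM_MAX_VCPUS];

static int parse_vcpu_list(const char *arg)
{
    char *copy = strdup(arg), *tok, *save = NULL;

    for (tok = strtok_r(copy, ",", &save); tok;
         tok = strtok_r(NULL, ",", &save)) {
        long id = strtol(tok, NULL, 10);
        if (id < 0 || id >= HVM_MAX_VCPUS) {
            free(copy);
            return -1;                  /* reject out-of-range vcpu ids */
        }
        vcpu_online[id] = 1;            /* no 64-bit word limit here */
    }
    free(copy);
    return 0;
}

int main(void)
{
    if (parse_vcpu_list("0,1,2,100") == 0)
        printf("vcpu100 online: %d\n", vcpu_online[100]);
    return 0;
}

The trade-off would be verbosity for guests with many online vcpus, though a range syntax could cover that if wanted.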

Well, actually I did patches 20495 and 20502, and I used the existing infrastructure of vcpus and vcpu_avail here, so there were no changes to the current infrastructure; I wanted to preserve it. That's the reason my patch was done this way. Changing the existing infrastructure could introduce problems, which is why the maxvcpus value becomes vcpus in the xend python code, and vcpus is used to set the bitmask of vcpu_avail.

Regards,
Michal
Regards,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

