
Re: [Xen-devel] [PATCH] Revert "xen-hvm: increase maxmem before calling xc_domain_populate_physmap"



On Wed, 2015-06-10 at 14:21 +0100, George Dunlap wrote:
> On 06/10/2015 01:55 PM, George Dunlap wrote:
> > This reverts commit c1d322e6048796296555dd36fdd102d7fa2f50bf.
> > 
> > The original commit fixes a bug when assigning a large number of
> > devices which require option ROMs to a guest.  (One known
> > configuration that needs extra memory is having more than 3 emulated
> > NICs assigned.  Three or fewer NICs seem to work without this
> > functionality.)
> > 
> > However, by unilaterally increasing maxmem, it introduces two
> > problems.
> > 
> > First, now libxl's calculation of the required maxmem during migration
> > is broken -- any guest which exercised this functionality will fail on
> > migration.  (Guests which have the default number of devices are not
> > affected.)
> 
> Just to make it clear what the situation is (to the best of my knowledge):
> 
> QEMU 2.2 and before:
>  * A VM assigned more than 3 NICs would fail during qemu start-up
>  * A VM assigned 3 or fewer NICs can be created and migrated successfully.
> 
> QEMU 2.3 (most recent release):
>  * A VM assigned more than 3 NICs can be created successfully, but not
> migrated afterwards
>  * A VM assigned 3 or fewer NICs can be both created and migrated.
> (Stefano has done a few tests to verify this and it seems to be accurate.)
> 
> It's unlikely that the "proper fix" described in this mail will be ready
> for 2.4, so if this patch is accepted, 2.4 will look like 2.2.
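
For anyone who hasn't looked at the commit: what the reverted hunk did
boils down to roughly the following (a simplified sketch of the
xen_ram_alloc() logic, not the verbatim code; error handling elided):

    /* Simplified sketch of the reverted logic: before populating new
     * guest RAM (e.g. for option ROMs), check how much headroom is left
     * under the domain's maxmem and raise it if the allocation would
     * not fit. */
    #include <xenctrl.h>

    #define QEMU_SPARE_PAGES 16

    static void populate_with_maxmem_bump(xc_interface *xch, uint32_t domid,
                                          xen_pfn_t *pfns,
                                          unsigned long nr_pfn)
    {
        xc_domaininfo_t info;
        unsigned long free_pages;

        if (xc_domain_getinfolist(xch, domid, 1, &info) != 1 ||
            info.domain != domid)
            return;

        /* Pages still allocatable under the current maxmem, minus a
         * small reserve kept back for qemu's own use. */
        free_pages = info.max_pages - info.tot_pages;
        free_pages = free_pages > QEMU_SPARE_PAGES
                   ? free_pages - QEMU_SPARE_PAGES : 0;

        /* The contentious part: bump maxmem behind the toolstack's
         * back so that the populate below succeeds.  The shift
         * converts pages to KiB, which is what setmaxmem expects. */
        if (free_pages < nr_pfn)
            xc_domain_setmaxmem(xch, domid,
                                (info.max_pages + nr_pfn - free_pages)
                                << (XC_PAGE_SHIFT - 10));

        xc_domain_populate_physmap_exact(xch, domid, nr_pfn, 0, 0, pfns);
    }

The populate then succeeds, but libxl is never told that maxmem grew,
which is exactly why its migration-time calculation no longer matches.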

FWIW I think reverting would be the right thing to do.

I think we should also revert some of the changes to libxl which tried
to cope with the qemu 2.2 behaviour.
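
To make the mismatch concrete, the receiving end sizes the domain along
roughly these lines (a paraphrase of the libxl logic, not the verbatim
code; LIBXL_MAXMEM_CONSTANT is 1024 KiB of fixed slack), so pages qemu
populated beyond that limit on the source cannot be repopulated on the
destination:

    /* Paraphrased sketch of how libxl sizes a restored domain: maxmem
     * is derived only from the configured target plus a fixed slack,
     * with no knowledge of any extra memory qemu granted itself on the
     * source. */
    #include <xenctrl.h>

    #define LIBXL_MAXMEM_CONSTANT 1024   /* KiB, as in libxl_internal.h */

    static int set_maxmem_like_libxl(xc_interface *xch, uint32_t domid,
                                     uint64_t target_memkb)
    {
        /* A guest whose source-side footprint exceeded this limit
         * fails when qemu tries to repopulate its RAM after the move. */
        return xc_domain_setmaxmem(xch, domid,
                                   target_memkb + LIBXL_MAXMEM_CONSTANT);
    }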

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel