
Re: [Xen-devel] [PATCH for 2.3 v2 1/1] xen-hvm: increase maxmem before calling xc_domain_populate_physmap



On 01/13/15 13:07, Stefano Stabellini wrote:
> On Mon, 12 Jan 2015, Stefano Stabellini wrote:
>> On Wed, 3 Dec 2014, Don Slutz wrote:
>>> From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>>>
>>> Increase maxmem before calling xc_domain_populate_physmap_exact to
>>> avoid the risk of running out of guest memory. This way we can also
>>> avoid complex memory calculations in libxl at domain construction
>>> time.
>>>
>>> This patch fixes an abort() when assigning more than 4 NICs to a VM.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>>> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
>>> ---
>>> v2: Changes by Don Slutz
>>>   Switch from xc_domain_getinfo to xc_domain_getinfolist
>>>   Fix error check for xc_domain_getinfolist
>>>   Limit increase of maxmem to only do when needed:
>>>     Add QEMU_SPARE_PAGES (How many pages to leave free)
>>>     Add free_pages calculation
>>>
>>>  xen-hvm.c | 19 +++++++++++++++++++
>>>  1 file changed, 19 insertions(+)
>>>
>>> diff --git a/xen-hvm.c b/xen-hvm.c
>>> index 7548794..d30e77e 100644
>>> --- a/xen-hvm.c
>>> +++ b/xen-hvm.c
>>> @@ -90,6 +90,7 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
>>>  #endif
>>>  
>>>  #define BUFFER_IO_MAX_DELAY  100
>>> +#define QEMU_SPARE_PAGES 16
>>
>> We need a big comment here to explain why we have this parameter and
>> when we'll be able to get rid of it.
>>
>> Other than that the patch is fine.
>>
>> Thanks!
>>
> 
> Actually I'll just go ahead and add the comment and commit, if that is
> OK with you.
> 

That would be fine with me.  I was still working on good wording.
   -Don Slutz

> Cheers,
> 
> Stefano
> 

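[Editor's note: only the QEMU_SPARE_PAGES hunk of the 19-line patch is quoted
above. The following is a minimal C sketch of the logic the v2 changelog
describes: query the domain with xc_domain_getinfolist, compute the free
pages, hold back a small spare reserve, and raise maxmem with
xc_domain_setmaxmem before calling xc_domain_populate_physmap_exact. The
names xen_xc, xen_domid, nr_pfn and pfn_list are assumed from the
xen_ram_alloc() context in xen-hvm.c; the exact slack and error handling are
illustrative, not the committed patch. It relies on <xenctrl.h> and QEMU's
hw_error().]

/* Sketch only: grow maxmem just enough before populating guest RAM. */
xc_domaininfo_t info;
unsigned long free_pages;

if (xc_domain_getinfolist(xen_xc, xen_domid, 1, &info) != 1 ||
    info.domain != xen_domid) {
    hw_error("xc_domain_getinfolist failed");
}

/* Pages the guest can still populate before hitting its maxmem limit. */
free_pages = info.max_pages - info.tot_pages;
if (free_pages > QEMU_SPARE_PAGES) {
    free_pages -= QEMU_SPARE_PAGES;   /* keep a small reserve spare */
} else {
    free_pages = 0;
}

/* Raise maxmem (xc_domain_setmaxmem takes KiB) only when headroom is short. */
if (free_pages < nr_pfn &&
    xc_domain_setmaxmem(xen_xc, xen_domid,
                        (info.max_pages + nr_pfn - free_pages)
                        << (XC_PAGE_SHIFT - 10)) < 0) {
    hw_error("xc_domain_setmaxmem failed");
}

if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0,
                                     pfn_list)) {
    hw_error("xen: failed to populate guest RAM");
}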
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
