
RE: [Xen-devel] pre-reservation of memory for domain creation



Jan Beulich wrote:
>>>> Tim Deegan <Tim.Deegan@xxxxxxxxxx> 14.01.10 13:46 >>>
>> At 09:00 +0000 on 14 Jan (1263459616), Jan Beulich wrote:
>>>>>> "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> 14.01.10 08:19 >>>
>>>>    Currently the guest initialization process in xend
>>>> (XendDomainInfo.py) is:
>>>> 
>>>>    _constructDomain() --> domain_create() --> domain_max_vcpus()
>>>>    ... --> _initDomain() --> shadow_mem_control() ...
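For reference, a minimal sketch of the first of those two phases (this is
not the actual XendDomainInfo.py code; the names are taken from the thread,
the argument lists are abbreviated, and the free-standing function is only
illustrative):

# Illustrative sketch only -- the real _constructDomain() differs in detail.
from xen.lowlevel import xc
from xen.xend import balloon

xc_handle = xc.xc()

def _constructDomain(dominfo, ssidref, vcpus):
    # Fixed pre-reservation: balloon dom0 down so the hypervisor can
    # satisfy early allocations (shadow pool, per-vcpu structures).
    balloon.free(4 * 1024, dominfo)             # the 4MB discussed below
    domid = xc_handle.domain_create(ssidref = ssidref)
    xc_handle.domain_max_vcpus(domid, vcpus)    # per-vcpu shadow taken here
    return domid
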
>>> 
>>> While the patch certainly matches what I had in mind, with this
>>> sequence it is clear that the tools will still need adjustment: the
>>> full ballooning only happens from _initDomain(), and hence the
>>> pre-reservation (from _constructDomain) of 4MB would still be too
>>> small for large vCPU counts.
>>> 
>>> I wonder though what all this memory is needed for before the domain
>>> (not to speak of secondary CPUs) actually gets started. If that could
>>> be brought under control, a tools-side adjustment would not be
>>> necessary. Tim?
>> 
>> Hmmm.  Some shadow memory has to be allocated before the VCPUs are
>> initialized so that they can be given monitor pagetables etc.  Some
>> shadow memory has to be allocated before the guest's main memory is
>> assigned because the p2m is built out of shadow memory.
> 
> So is there a way to quantify that? In particular, is that *initial*
> amount in any way dependent on the number of vCPU-s?
> 
>> Fixing the first one should be enough, so long as xend assigns vcpus
>> and memory before assigning shadow memory properly (which I believe
>> it does).  Patch attached.
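And the second phase, where the ordering Tim describes matters: the vcpus
already exist from _constructDomain(), so _initDomain() only has to balloon
out the guest's memory before sizing the shadow pool (again just a sketch,
with the image-building and device-setup steps omitted):

# Illustrative sketch only -- the real _initDomain() differs in detail.
from xen.lowlevel import xc
from xen.xend import balloon

xc_handle = xc.xc()

def _initDomain(domid, dominfo, maxmem_kb, shadow_mb):
    # Balloon out enough dom0 memory for the guest and its shadow pool ...
    balloon.free(maxmem_kb + shadow_mb * 1024, dominfo)
    xc_handle.domain_setmaxmem(domid, maxmem_kb)
    # ... then size the shadow pool properly; the p2m is built out of this
    # pool when the guest's memory is populated afterwards.
    xc_handle.shadow_mem_control(domid, shadow_mb)
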
> 
> The full memory assignment happens after the vCPU-s get assigned, but
> the initial assignment happens before. Dongxiao's patch tried to
> account for that, but that patch has not been accepted so far, nor am I
> convinced it is really correct or even necessary.

The patch I attached last time could not solve this issue. The reason is
the same as before: at the point when the shadow code allocates memory for
each vcpu, xend has not yet ballooned out enough memory.

I discussed this issue within our team; however, we couldn't come up with
a good solution for now, since the Chinese New Year vacation is about to
start. In the meantime I made a work-around patch for it, though you may
not like it.

Thanks!
Dongxiao

Xend: Enlarge the memory ballooned out for domain creation, since the
shadow pre-allocation size has changed from 1MB to 4MB.

Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>

diff -r 5b895c3f4386 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Mon Feb 08 10:18:51 2010 +0000
+++ b/tools/python/xen/xend/XendDomainInfo.py   Tue Feb 09 15:07:47 2010 +0800
@@ -2519,9 +2519,8 @@ class XendDomainInfo:
         # There is an implicit memory overhead for any domain creation. This
         # overhead is greater for some types of domain than others. For
         # example, an x86 HVM domain will have a default shadow-pagetable
-        # allocation of 1MB. We free up 4MB here to be on the safe side.
-        # 2MB memory allocation was not enough in some cases, so it's 4MB now
-        balloon.free(4*1024, self) # 4MB should be plenty
+        # allocation of 4MB. We free up 16MB here to be on the safe side.
+        balloon.free(16*1024, self) # 16MB should be plenty

         ssidref = 0
         if security.on() == xsconstants.XS_POLICY_USE:

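As a possible follow-up to the question above about whether the initial
amount depends on the number of vCPUs, the pre-reservation could be scaled
with the configured vCPU count instead of using a flat value. The helper
below is purely hypothetical (not part of the patch); its name, the
256KB-per-vcpu figure, and the 'VCPUs_max' key are assumptions for
illustration only:

def prereservation_kb(vcpu_count, base_kb = 4 * 1024, per_vcpu_kb = 256):
    # Hypothetical sizing helper -- NOT part of the submitted patch. The
    # per-vcpu figure is a placeholder that would have to be measured.
    return base_kb + per_vcpu_kb * vcpu_count

# e.g., in _constructDomain(), instead of the flat 16MB:
#     balloon.free(prereservation_kb(int(self.info['VCPUs_max'])), self)
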

> 
> I'm re-raising this question because we're not seeming to make any
> progress towards a satisfactory resolution of the regression c/s 20389
> introduced.
> 
> Jan