
Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions



> >>>> Neither is enforcing min==max. This was my argument when previously 
> >>>> commenting on this thread. The fact that you have enforcement of a 
> >>>> maximum domain allocation gives you an excellent tool to keep a domain's 
> >>>> unsupervised growth at bay. The toolstack can choose how fine-grained 
> >>>> and how often to be alerted, and can stall the domain.
> > 
> > That would also do the trick - but there are penalties to it.
> > 
> > If one just wants to launch multiple guests and "freeze" all the other 
> > guests
> > from using the balloon driver - that can certainly be done.
> > 
> > But that is a half-way solution (in my mind). Dan's idea is that you 
> > wouldn't
> > even need that and can just allocate without having to worry about the other
> > guests at all - b/c you have reserved enough memory in the hypervisor 
> > (host) to
> > launch the guest.
> 
> Konrad:
> Ok, what happens when a guest is stalled because it cannot allocate more 
> pages due to existing claims? Exactly the same thing that happens when it 
> can't grow because it has hit d->max_pages.

But it wouldn't. I am going out on a limb here, b/c I believe this is what
the code does, but I should double-check.

The variables that let the guest balloon up/down would still stay in place - so
the guest should not be impacted by the 'claim'. Meaning you just leave them
alone and let the guest do whatever it wants without influencing it.
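
To be concrete, here is a minimal sketch of the kind of hypervisor-side
accounting I have in mind. The names (outstanding_claims, d->claimed_pages,
domain_claim_pages) are made up for illustration - the actual patch may
well differ:

/* Sketch only: track a claim without touching the balloon limits.
 * Field and function names here are illustrative, not the real patch. */
static unsigned long outstanding_claims; /* pages promised but not yet allocated */

int domain_claim_pages(struct domain *d, unsigned long nr_pages)
{
    int rc = -ENOMEM;

    spin_lock(&heap_lock);
    /* Only pages not already promised to another claim count as free. */
    if ( total_avail_pages - outstanding_claims >= nr_pages )
    {
        outstanding_claims += nr_pages;
        d->claimed_pages = nr_pages;
        rc = 0;
    }
    spin_unlock(&heap_lock);

    /* d->max_pages and the guests' balloon targets are untouched, so
     * every other guest can keep ballooning up/down as before. */
    return rc;
}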

If the claim hypercall fails, then yes - you could have this issue.

But there are multiple solutions to the hypercall failing - one is to
try to "squeeze" all the guests to make space; another is to allocate
the guest on another box that has more memory and where the claim
hypercall returned success. Or the toolstack can do these claim hypercalls
on all the nodes in parallel and pick amongst the ones that returned
success.

Perhaps the 'claim' call should be called 'probe_and_claim'?
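
At the toolstack/orchestration level the flow I am describing would look
roughly like this. claim_on_host() and build_guest_on_host() are
hypothetical helpers - on the local host the claim would boil down to the
proposed XENMEM_claim_pages hypercall, via whatever libxc wrapper the
final patch exposes:

#include <errno.h>

struct host;                                   /* opaque per-node handle */
int claim_on_host(struct host *h, unsigned long nr_pages);
int build_guest_on_host(struct host *h);

int place_guest(struct host *hosts, int nr_hosts, unsigned long nr_pages)
{
    for ( int i = 0; i < nr_hosts; i++ )
    {
        /* The claim either reserves nr_pages atomically or fails fast;
         * no other guest can race us between the check and the build. */
        if ( claim_on_host(&hosts[i], nr_pages) == 0 )
            return build_guest_on_host(&hosts[i]);
    }
    return -ENOMEM;  /* nowhere had room: squeeze guests or give up */
}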

.. snip..
> >>> That code makes certain assumptions - that the guest will not go up/down
> >>> in its ballooning once the toolstack has decreed how much
> >>> memory the guest should use. It also assumes that the operations
> >>> are semi-atomic - and to make it so as much as it can - it executes
> >>> these operations in serial.
> >>> 
> >>> This goes back to the problem statement - if we try to parallelize
> >>> this we run into the problem that the amount of memory we thought
> >>> was free is not true anymore. The start of this email has a good
> >>> description of some of the issues.
> >> 
> >> Just set max_pages (bad name...) everywhere as needed to make room. Then 
> >> kick tmem (everywhere, in parallel) to free memory. Wait until enough is 
> >> free. Allocate your domain(s, in parallel). If any vcpus become stalled 
> >> because a tmem guest driver is trying to allocate beyond max_pages, you 
> >> need to adjust your allocations. As usual.
> > 
> > 
> > Versus just one "reserve" that would remove the need for most of this.
> > That is - if we cannot "reserve" we would fall back to the mechanism you
> > stated, but if there is enough memory we do not have to play the "wait"
> > game (which on a 1TB box takes forever and makes launching guests sometimes
> > take minutes) - and can launch the guest without having to worry
> > about the slow path.
> > .. snip.
> 
> The "wait" could be literally zero in a common case. And if not, because 
> there is not enough free ram, the claim would have failed.
> 

Absolutely. And that is the beauty of it. If it fails, then we can
decide to pursue other options, knowing that there was no race in finding
the value of free memory at all. The other options could be to
squeeze other guests down and try again, or just decide to claim/allocate
the guest on another host altogether.
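
The "squeeze" fallback can be sketched with existing libxc calls. Clamping
max_pages (and, not shown here, lowering each victim's balloon target via
xenstore) pushes the other guests down; then we poll until enough memory
is actually free and retry the claim. The victims[]/new_kb[] arrays and
the policy for picking them are illustrative only:

#include <xenctrl.h>

int squeeze_and_wait(xc_interface *xch, uint32_t *victims,
                     uint64_t *new_kb, int n, unsigned long need_pages)
{
    xc_physinfo_t info;

    /* Lower the ceiling on each victim so it cannot balloon back up. */
    for ( int i = 0; i < n; i++ )
        if ( xc_domain_setmaxmem(xch, victims[i], new_kb[i]) )
            return -1;

    /* Poll until the balloon drivers have given enough memory back.
     * A real toolstack would bound this loop with a timeout. */
    do {
        if ( xc_physinfo(xch, &info) )
            return -1;
    } while ( info.free_pages < need_pages );

    return 0;  /* now retry the claim / the allocation */
}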


> >>> I believe what Dan is saying is that it is not enabled by default.
> >>> Meaning it does not get executed by /etc/init.d/xencommons and
> >>> as such it never gets run (or does it now?) - unless one knows
> >>> about it - or it is enabled by default in a product. But perhaps
> >>> we are both mistaken? Is it enabled by default now on xen-unstable?
> >> 
> >> I'm a bit lost - what is supposed to be enabled? A sharing daemon? A 
> >> paging daemon? Neither daemon requires wait queue work, batch allocations, 
> >> etc. I can't figure out what this portion of the conversation is about.
> > 
> > The xenshared daemon.
> That's not in the tree, and unbeknownst to me. I would appreciate knowing 
> more. Or is it a symbolic placeholder in this conversation?

OK, I am confused then. I thought there was now a daemon that would take
care of PoD and swapping? Perhaps it's called something else?
