
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



> From: Tim Deegan [mailto:tim@xxxxxxx]
> Subject: Re: Proposed new "memory capacity claim" hypercall/feature
> 
> Hi,

Hi Tim!

> At 16:21 -0700 on 29 Oct (1351527686), Dan Magenheimer wrote:
> > > > The hypervisor must also enforce some semantics:  If an allocation
> > > > occurs such that a domain's tot_phys_pages would equal or exceed
> > > > d.tot_claimed_pages, then d.tot_claimed_pages becomes "unset".
> > > > This enforces the temporary nature of a claim:  Once a domain
> > > > fully "occupies" its claim, the claim silently expires.
> > >
> > > Why does that happen?  If I understand you correctly, releasing the
> > > claim is something the toolstack should do once it knows it's no longer
> > > needed.
> >
> > I haven't thought this all the way through yet, but I think this
> > part of the design allows the toolstack to avoid monitoring the
> > domain until "total_phys_pages" reaches "total_claimed" pages,
> > which should make the implementation of claims in the toolstack
> > simpler, especially in many-server environments.
> 
> I think the toolstack has to monitor the domain for that long anyway,
> since it will have to unpause it once it's built.

Could be.  This "claim auto-expire" feature is certainly not a
requirement but I thought it might be useful, especially for
multi-server toolstacks (such as Oracle's).  I may take a look at
implementing it anyway since it is probably only a few lines of code,
but will ensure I do so as a separately reviewable/rejectable patch.
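For concreteness, the sort of check I have in mind is roughly the
following (just a sketch; d->claimed_pages is a hypothetical field,
while d->tot_pages and d->page_alloc_lock are the existing ones).
It would be called from the allocation path, with d->page_alloc_lock
held, after d->tot_pages has been updated:

    /* Sketch only: once a domain fully occupies its claim, the
     * claim silently expires.  claimed_pages is hypothetical. */
    static void claim_maybe_expire(struct domain *d)
    {
        if ( d->claimed_pages != 0 && d->tot_pages >= d->claimed_pages )
            d->claimed_pages = 0;
    }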

> Relying on an
> implicit release seems fragile -- if the builder ends up using only
> (total_claimed - 1) pages, or temporarily allocating total_claimed and
> then releasing some memory, things could break.

I agree it's fragile, though I don't see how things could actually
"break".  But, let's drop claim-auto-expire for now as I fear it is
detracting from the larger discussion.
 
> > > I think it needs a plan for handling restricted memory allocations.
> > > For example, some PV guests need their memory to come below a
> > > certain machine address, or entirely in superpages, and certain
> > > build-time allocations come from xenheap.  How would you handle that
> > > sort of thing?
> >
> > Good point.  I think there's always been some uncertainty about
> > how to account for different zones and xenheap... are they part of the
> > domain's memory or not?
> 
> Xenheap pages are not part of the domain memory for accounting purposes;
> likewise other 'anonymous' allocations (that is, anywhere that
> alloc_domheap_pages() & friends are called with a NULL domain pointer).
> Pages with restricted addresses are just accounted like any other
> memory, except when they're on the free lists.
> 
> Today, toolstacks use a rule of thumb of how much extra space to leave
> to cover those things -- if you want to pre-allocate them, you'll have
> to go through the hypervisor making sure _all_ memory allocations are
> accounted to the right domain somehow (maybe by generalizing the
> shadow-allocation pool to cover all per-domain overheads).  That seems
> like a useful side-effect of adding your new feature.

Hmmm... then I'm not quite sure how adding a simple "claim" changes
the need to account for these anonymous allocations.  I suppose it
depends on the implementation... maybe the simple implementation I
have in mind can't co-exist with anonymous allocations, but I think
it can.
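
To illustrate what I mean by "co-exist", the admission check I have
in mind would look something like this (a sketch only; claimed_pages
and total_claimed_pages are hypothetical, with total_claimed_pages
meaning the sum of claimed-but-not-yet-allocated pages across all
domains; total_avail_pages is the existing counter in page_alloc.c):

    /* Sketch: may this allocation of 'pages' pages for domain d
     * proceed?  d may be NULL for anonymous allocations. */
    static bool_t claim_permits_alloc(struct domain *d, unsigned long pages)
    {
        /* Allocations covered by the domain's own claim are fine. */
        if ( d != NULL && d->claimed_pages >= d->tot_pages + pages )
            return 1;

        /* Everything else -- including anonymous allocations --
         * must leave enough free memory to honour all outstanding
         * claims. */
        return total_avail_pages >= pages + total_claimed_pages;
    }

So anonymous allocations aren't charged to any claim; they just
can't dip into memory that has been claimed but not yet allocated.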

> > Deserves some more thought...  if you can enumerate all such cases,
> > that would be very helpful (and probably valuable long-term
> > documentation as well).
> 
> I'm afraid I can't, not without re-reading all the domain-builder code
> and a fair chunk of the hypervisor, so it's up to you to figure it out.

Well, or at least to ensure that I haven't made it any worse ;-)

/me adds "world peace" to the requirements list for the new claim
hypercall ;-)

Thanks much for the feedback!
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
