
Re: [Xen-devel] [PATCH 7/7] xl: Fix 'free_memory' to include outstanding_claims value.

At 16:33 +0100 on 16 Apr (1366129996), Ian Jackson wrote:
> konrad wilk writes ("Re: [PATCH 7/7] xl: Fix 'free_memory' to include 
> outstanding_claims value."):
> > On 4/15/2013 12:50 PM, Ian Jackson wrote:
> > > I have checked this and the race is in the hypercall API.  The
> > > hypercall API has already been checked in.  So, under the
> > > circumstances, for this patch:
> ...
> > Right. Let me enumerate some ideas for fixing this on a different thread
> > (if we have the same race in mind that you spotted).
>
> The race I'm thinking of is this:
>
> When we display the effective amount of free memory (in "xl info"
> etc.), we calculate it as
>
>   physinfo->free_pages - xc_domain_get_outstanding_pages()
>
> But these two values have been retrieved at different times.  So while
> a domain is being built and memory moves from "free but claimed" to
> "in use", the free memory visible via the libxl API will sometimes be
> "wrong".
>
> This may seem like a minor point, but it will show up as occasional
> glitches in automatic monitoring and graphing systems; it might even
> trigger spurious nagios alerts in higher layers etc.  If that was all
> there was to it I wouldn't regard it as a release-critical bug - a
> race like that would be annoying but could be fixed later.
>
> However, the race is baked into the hypercall API/ABI which we want to
> keep relatively stable at least in stable releases.
>
> I think the right answer is probably simply to move the total claim
> value into the physinfo struct, and do away with the separate
> XENMEM_get_outstanding_pages memory op hypercall.  Do you agree ?

FWIW, I think this is a good idea.  You might have to be a little
careful on the hypervisor side to make sure the two values are actually
a matched pair (say by taking the page allocator lock).


Xen-devel mailing list


