[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Re: [PATCH] Add hypercall to mark superpages to improve performance

  • To: Dave McCracken <dcm@xxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Mon, 3 May 2010 17:29:14 +0100
  • Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Xen Developers List <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 03 May 2010 09:30:10 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcrqY+neLe2P2PbORqmHQq60FQTuzQAdxJsMAACyESg=
  • Thread-topic: [Xen-devel] Re: [PATCH] Add hypercall to mark superpages to improve performance

On 03/05/2010 17:09, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>> It should be simple enough to also check superpage->count_info in those
>> places.  So the total mappings of a page would be page->count_info +
>> superpage->count_info.  Good thing you suggested we also have a count in the
>> superpage_info struct :)
> I think you're going to have trouble handling two separate reference counts,
> for superpages and single pages, in a race-free manner that is any better
> than checking/updating reference counts across all pages in a superpage on
> first superpage mapping.

For example: When making the first superpage mapping, how do you know that
all pages belong to the relevant domain, without scanning every page_info?
When destroying the last superpage mapping (or single-page mapping), how do
you safely check the 'other' reference count to decide whether the page is
freeable, without races (the last single-page and superpage mappings could
be destroyed concurrently; you need to ensure any given page gets freed
exactly once)? And I could think of others no doubt... Just pointing out how
careful you have to be if you think you can avoid the naïve
refcount-updating algorithms I suggested. I'd rather shoot down the obvious
races before you do the coding.

 -- Keir

Xen-devel mailing list