WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: Jan Beulich <JBeulich@xxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] RE: [PATCH] tmem: fix to 20945 "When tmem is enabled, reserve a fraction of memory"
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Wed, 17 Feb 2010 07:13:00 -0800 (PST)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 17 Feb 2010 07:18:44 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4B7BF409020000780002FD81@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4f9cade6-4b13-41fd-bc34-7443fb06ece3@default> <C7A18D21.A5E2%keir.fraser@xxxxxxxxxxxxx> <4B7BF409020000780002FD81@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
> Subject: Re: [PATCH] tmem: fix to 20945 "When tmem is enabled, reserve
> a fraction of memory"
> 
> >>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 17.02.10 13:10 >>>
> >On 16/02/2010 18:30, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx>
> wrote:
> >
> >> +        if ( order == 0)
> >> +            goto try_tmem;
> >> +        if ( order >= 9)
> >> +            goto fail;
> >
> >Why not try_tmem in the case that order>=9, too, rather than fail
> outright?
> 
> It could be done that way, but wouldn't have any effect, as tmem
> doesn't even try to relinquish any memory when order > 0.

Correct.  To explain (if anyone is interested):

Tmem maintains queues of order==0 pages internally because
any page released to the xenheap/domheap must first be
scrubbed, yet tmem is highly likely to use the page again
(and SOON).  If tmem releases a page and then immediately
reclaims it, the scrubbing is wasted cycles.  But if the page
goes back unscrubbed and some other xenheap/domheap allocation
obtains it, its contents could reveal data from another
domain, which would be a potential security hole.
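
To illustrate the rule (a standalone sketch; the function
names are made up, not Xen's actual tmem API):

#include <string.h>

#define PAGE_SIZE 4096

struct page { unsigned char data[PAGE_SIZE]; };

/* A page reused inside tmem never becomes visible to another
 * domain, so skipping the scrub is safe and saves cycles. */
static struct page *tmem_reuse_internally(struct page *pg)
{
    return pg;
}

/* A page handed back to the xenheap/domheap must be scrubbed
 * first, or a later allocation could read another domain's
 * data. */
static struct page *tmem_release_to_heap(struct page *pg)
{
    memset(pg->data, 0, PAGE_SIZE);
    return pg;
}

int main(void)
{
    static struct page pg;
    tmem_reuse_internally(&pg);   /* cheap: no scrub */
    tmem_release_to_heap(&pg);    /* pays the scrub cost */
    return 0;
}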

When a domain is being created, a large number of pages
may be (scrubbed and) transferred from tmem to Xen/domheap.
While this transfer, in combination with the "buddying"
in xenheap/domheap, may result in some order>0 chunks of
memory, there is no guarantee that it will.
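
To see why there is no guarantee: two order-n chunks coalesce
into one order-(n+1) chunk only if they are aligned buddies
and both happen to be free.  A standalone sketch of the
address arithmetic (illustrative, not the page_alloc.c code):

#include <stdint.h>
#include <stdio.h>

/* The buddy of a 2^order-aligned chunk differs from it only
 * in bit 'order' of the page frame number. */
static uintptr_t buddy_of(uintptr_t pfn, unsigned int order)
{
    return pfn ^ (1UL << order);
}

int main(void)
{
    /* Freeing pfns 4 and 5 permits a merge to order 1 ... */
    printf("buddy of pfn 4: %lu\n", (unsigned long)buddy_of(4, 0));
    /* ... but pfns 5 and 6 can never merge: not buddies. */
    printf("buddy of pfn 5: %lu\n", (unsigned long)buddy_of(5, 0));
    return 0;
}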

I considered adding some kind of "buddying" to tmem's "free"
pages (and the interface to tmem_relinquish_pages() from
alloc_heap_pages() does allow an order>0 request), but due to
fragmentation it would only rarely have any value, especially
for order>1, so I never implemented it.
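
So the order>0 path into tmem is effectively dead code.  A
minimal sketch of the resulting behaviour (illustrative; the
real function lives in the tmem code and is more involved):

#include <stddef.h>

struct page_info;

/* Stand-in for popping a page off tmem's internal order-0
 * queue of free pages. */
static struct page_info *take_free_order0_page(void)
{
    return NULL;   /* stub for the sketch */
}

/* tmem keeps only order-0 pages and does no buddying of its
 * free pages, so any order>0 request cannot be satisfied. */
static struct page_info *relinquish_sketch(unsigned int order,
                                           unsigned int memflags)
{
    (void)memflags;
    if ( order > 0 )
        return NULL;
    return take_free_order0_page();
}

int main(void)
{
    /* Asking for order 1 never succeeds, exactly as Jan says. */
    return relinquish_sketch(1, 0) != NULL;
}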

So, in the end, the real solution is to change Xen so that no
allocation, at least none that occurs after dom0 is running,
requires order>0.
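
That is, a caller that today demands one physically contiguous
order>0 chunk would instead allocate 1<<order individual pages
and track them.  A plain-C sketch of the idea, with malloc()
standing in for the order-0 page allocator:

#include <stdlib.h>

#define PAGE_SIZE 4096

/* Allocate 'count' independent order-0 "pages"; returns an
 * array of pointers, or NULL on failure.  No physical
 * contiguity is required, so every page can be satisfied
 * from tmem or anywhere else. */
static void **alloc_page_array(size_t count)
{
    void **pages = calloc(count, sizeof(*pages));
    size_t i;

    if ( pages == NULL )
        return NULL;
    for ( i = 0; i < count; i++ )
    {
        pages[i] = malloc(PAGE_SIZE);
        if ( pages[i] == NULL )
        {
            while ( i-- > 0 )
                free(pages[i]);
            free(pages);
            return NULL;
        }
    }
    return pages;
}

int main(void)
{
    /* What used to be one order-2 allocation becomes 4 pages. */
    void **pages = alloc_page_array(1u << 2);
    return pages == NULL;
}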

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel