
Re: [Xen-devel] tmem and construct_dom0 memory allocation race


  • To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
  • From: Dulloor <dulloor@xxxxxxxxx>
  • Date: Tue, 22 Jun 2010 11:56:20 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
  • Delivery-date: Tue, 22 Jun 2010 11:57:10 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Tue, Jun 22, 2010 at 10:23 AM, Dan Magenheimer
<dan.magenheimer@xxxxxxxxxx> wrote:
>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>> Subject: Re: [Xen-devel] tmem and construct_dom0 memory allocation race
>>
>> On 22/06/2010 08:17, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>
>> > Hi Keir, You are right .. there is no race. I spent some time
>> > debugging this. The problem is that a zero-order allocation (from
>> > alloc_chunk, for the last dom0 page) fails with tmem on (in
>> > alloc_heap_pages), even though there are pages available in the heap.
>> > I don't think tmem really intends to get triggered so early. What do
>> > you think ?
>>
>> That's one for Dan to comment on.
>
> Hmmm... the special casing in alloc_heap_pages to avoid fragmentation
> need not be invoked if tmem doesn't hold any pages (as is the
> case at dom0 boot)...
>
> Does this patch fix the problem?  If so...
I have already tried something like this and it works. Alternatively, we could
check tmem_freeable_pages() right after the opt_tmem check, before the other
order and fragmentation checks (see the sketch below).
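
Something like this is what I have in mind (just a sketch against your
patch, reusing the tmem_freeable_pages() helper it adds; untested):

    /* In alloc_heap_pages(): skip the tmem special case entirely when
     * tmem holds no freeable pages (e.g. during dom0 construction),
     * before looking at the order and fragmentation conditions. */
    if ( opt_tmem && tmem_freeable_pages() &&
         ((order == 0) || (order >= 9)) &&
         (total_avail_pages <= midsize_alloc_zone_pages) )
        goto try_tmem;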

>
> Signed-off-by: Dan Magenheimer
>
> diff -r a24dbfcbdf69 xen/common/page_alloc.c
> --- a/xen/common/page_alloc.c   Tue Jun 22 07:19:38 2010 +0100
> +++ b/xen/common/page_alloc.c   Tue Jun 22 11:17:44 2010 -0600
> @@ -316,11 +316,14 @@ static struct page_info *alloc_heap_page
>     spin_lock(&heap_lock);
>
>     /*
> -     * TMEM: When available memory is scarce, allow only mid-size allocations
> -     * to avoid worst of fragmentation issues. Others try TMEM pools then fail.
> +     * TMEM: When available memory is scarce due to tmem absorbing it, allow
> +     * only mid-size allocations to avoid worst of fragmentation issues.
> +     * Others try tmem pools then fail.  This is a workaround until all
> +     * post-dom0-creation-multi-page allocations can be eliminated.
>      */
>     if ( opt_tmem && ((order == 0) || (order >= 9)) &&
> -         (total_avail_pages <= midsize_alloc_zone_pages) )
> +         (total_avail_pages <= midsize_alloc_zone_pages) &&
> +         tmem_freeable_pages() )
>         goto try_tmem;
>
>     /*
> diff -r a24dbfcbdf69 xen/common/tmem.c
> --- a/xen/common/tmem.c Tue Jun 22 07:19:38 2010 +0100
> +++ b/xen/common/tmem.c Tue Jun 22 11:17:44 2010 -0600
> @@ -2850,6 +2850,11 @@ EXPORT void *tmem_relinquish_pages(unsig
>     return pfp;
>  }
>
> +EXPORT unsigned long tmem_freeable_pages(void)
> +{
> +    return tmh_freeable_pages();
> +}
> +
>  /* called at hypervisor startup */
>  static int __init init_tmem(void)
>  {
> diff -r a24dbfcbdf69 xen/include/xen/tmem.h
> --- a/xen/include/xen/tmem.h    Tue Jun 22 07:19:38 2010 +0100
> +++ b/xen/include/xen/tmem.h    Tue Jun 22 11:17:44 2010 -0600
> @@ -11,6 +11,7 @@
>
>  extern void tmem_destroy(void *);
>  extern void *tmem_relinquish_pages(unsigned int, unsigned int);
> +extern unsigned long tmem_freeable_pages(void);
>  extern int  opt_tmem;
>
>  #endif /* __XEN_TMEM_H__ */
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

