[Xen-changelog] [xen-3.4-testing] xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-3.4-testing] xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.
From: "Xen patchbot-3.4-testing" <patchbot-3.4-testing@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 23 Oct 2009 02:40:38 -0700
Delivery-date: Fri, 23 Oct 2009 02:42:12 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1256289859 -3600
# Node ID 2beca5f48ffed21c4b56cabd34707e09b4c31068
# Parent  7bd37c5c72893a783a00b3068df0c81b3ceb911c
xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.

This was happening for xmalloc request sizes between 3921 and 3951
bytes. The reason is that xmem_pool_alloc() may add extra padding to
the requested size, making the total block size greater than a page.

Rather than add yet more smarts about TLSF to _xmalloc(), we just
dumbly attempt any request smaller than a page via xmem_pool_alloc()
first, then fall back on xmalloc_whole_pages() if this fails.
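
In outline, the new allocation path is a simple try-then-fall-back. The
fragment below is a simplified, self-contained sketch of that pattern,
not the patched function itself; xenpool, xmem_pool_alloc() and
xmalloc_whole_pages() are declared only as stand-ins for the hypervisor
internals that appear in the diff further down:

    /* Simplified sketch of the fallback pattern introduced here; the
     * real code is in xen/common/xmalloc_tlsf.c (see the diff below). */
    #include <stddef.h>

    #define PAGE_SIZE 4096UL

    /* Stand-ins for the hypervisor's allocators and pool handle. */
    extern void *xmem_pool_alloc(unsigned long size, void *pool);
    extern void *xmalloc_whole_pages(unsigned long size);
    extern void *xenpool;

    static void *alloc_with_fallback(unsigned long size)
    {
        void *p = NULL;

        /* Try the TLSF pool first for anything smaller than a page. */
        if ( size < PAGE_SIZE )
            p = xmem_pool_alloc(size, xenpool);

        /* Fall back to whole pages if the pool could not satisfy the
         * request (e.g. header padding pushed the block past a page). */
        if ( p == NULL )
            p = xmalloc_whole_pages(size);

        return p;
    }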

Based on bug diagnosis and initial patch by John Byrne <john.l.byrne@xxxxxx>

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
xen-unstable changeset:   20349:87bc0d49137b
xen-unstable date:        Wed Oct 21 09:21:01 2009 +0100
---
 xen/common/xmalloc_tlsf.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff -r 7bd37c5c7289 -r 2beca5f48ffe xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c Fri Oct 23 10:20:28 2009 +0100
+++ b/xen/common/xmalloc_tlsf.c Fri Oct 23 10:24:19 2009 +0100
@@ -542,7 +542,7 @@ static void tlsf_init(void)
 
 void *_xmalloc(unsigned long size, unsigned long align)
 {
-    void *p;
+    void *p = NULL;
     u32 pad;
 
     ASSERT(!in_irq());
@@ -555,10 +555,10 @@ void *_xmalloc(unsigned long size, unsig
     if ( !xenpool )
         tlsf_init();
 
-    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( size < PAGE_SIZE )
+        p = xmem_pool_alloc(size, xenpool);
+    if ( p == NULL )
         p = xmalloc_whole_pages(size);
-    else
-        p = xmem_pool_alloc(size, xenpool);
 
     /* Add alignment padding. */
     if ( (pad = -(long)p & (align - 1)) != 0 )
@@ -592,7 +592,7 @@ void xfree(void *p)
         ASSERT(!(b->size & 1));
     }
 
-    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( b->size >= PAGE_SIZE )
         free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
