
[Xen-devel] [PATCH V5 3/8] xen/common: Introduce _xrealloc function



From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

This patch introduces a type-unsafe function which, besides
re-allocation, handles the following corner cases:
1. if the requested size is zero, it will behave like xfree
2. if the incoming pointer is not valid (NULL or ZERO_BLOCK_PTR),
   it will behave like xmalloc

If both the pointer and size are valid, the function will re-allocate
and copy only if the requested size and alignment don't fit within the
already allocated space.
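
As an illustration (not part of the patch), here is a minimal sketch
of the resulting call semantics; the "ids" variable and the sizes are
hypothetical, and error handling is elided:

    void *ids = NULL;

    /* Invalid pointer (NULL or ZERO_BLOCK_PTR): behaves like xmalloc. */
    ids = _xrealloc(ids, 4 * sizeof(uint32_t), __alignof__(uint32_t));

    /* Valid pointer and non-zero size: re-allocates and copies only if
     * the requested size/alignment don't fit in the existing space. */
    ids = _xrealloc(ids, 8 * sizeof(uint32_t), __alignof__(uint32_t));

    /* Zero size: behaves like xfree and returns ZERO_BLOCK_PTR. */
    ids = _xrealloc(ids, 0, __alignof__(uint32_t));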

A subsequent patch will add type-safe helper macros.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Tim Deegan <tim@xxxxxxx>
CC: Wei Liu <wl@xxxxxxx>
CC: Paul Durrant <paul.durrant@xxxxxxxxxx>

---
Changes V4 -> V5:
    - avoid possible truncation with allocations of 4GiB or above
    - introduce helper functions add(strip)_padding to avoid
      duplicating the code
    - omit the unnecessary casts, change u32 to uint32_t
      when moving the code
    - use _xzalloc instead of _xmalloc to get the tail
      portion zeroed
    - update the pointer according to the requested alignment
    - compare against "size" instead of "tmp_size" for allocations
      above PAGE_SIZE

Changes V3 -> V4:
    - add check for the alignment compatibility
    - properly detect the current size (take into account a possible
      fake alignment header)
    - update comment in code/patch description

Changes RFC -> V3:
    - behave like xmalloc if the incoming pointer is ZERO_BLOCK_PTR or NULL
    - return ZERO_BLOCK_PTR after xfree if the requested size is zero
    - add patch description
    - use allocator internals to recognize the current size of
      the incoming allocation
    - do not re-allocate and copy if the requested size fits in the
      already allocated space

   ...

   The original patch was posted by Sameer Goel:
   https://lists.xen.org/archives/html/xen-devel/2017-06/msg00858.html

   This could be considered another attempt to add it:
   https://www.mail-archive.com/kexec@xxxxxxxxxxxxxxxxxxx/msg21335.html

   [As previously discussed with Julien on IRC]

   The reason for this patch being an RFC is that the patch itself is
   not completely correct and I don't fully understand what/how should
   be done for this patch to be accepted, or whether the community even
   wants this to go in. So, to avoid bike-shedding, the first target is
   to collect feedback.

   For everyone who wants more details on why this is needed and
   where it is used, please see the next patch of this thread:
   "iommu/arm: Add lightweight iommu_fwspec support"

   In a nutshell, the upcoming "iommu_fwspec" support on ARM
   is going to use xrealloc to expand an array of device IDs.
   We really want to have "iommu_fwspec" support, which will give us
   a generic, abstract way to add a new device to the IOMMU based on
   the generic IOMMU DT binding.
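
   To make that use case concrete, here is a hedged sketch of growing
   such an array with _xrealloc; the structure and function names are
   illustrative placeholders, not the actual iommu_fwspec layout:

       struct id_array {
           unsigned int num_ids;
           uint32_t *ids;            /* grows via _xrealloc */
       };

       static int id_array_add(struct id_array *a,
                               const uint32_t *ids, unsigned int num)
       {
           uint32_t *new_ids;

           /* The first call sees a->ids == NULL, so this behaves like
            * xmalloc; later calls reuse the block in place if the new
            * size still fits, and re-allocate and copy otherwise. */
           new_ids = _xrealloc(a->ids,
                               (a->num_ids + num) * sizeof(*new_ids),
                               __alignof__(*new_ids));
           if ( !new_ids )
               return -ENOMEM;

           memcpy(new_ids + a->num_ids, ids, num * sizeof(*ids));
           a->ids = new_ids;
           a->num_ids += num;

           return 0;
       }
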
---
 xen/common/xmalloc_tlsf.c | 113 ++++++++++++++++++++++++++++++++++++++--------
 xen/include/xen/xmalloc.h |   1 +
 2 files changed, 96 insertions(+), 18 deletions(-)

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index e98ad65..a797019 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -554,10 +554,40 @@ static void tlsf_init(void)
 #define ZERO_BLOCK_PTR ((void *)-1L)
 #endif
 
+static void *strip_padding(void *p)
+{
+    struct bhdr *b = p - BHDR_OVERHEAD;
+
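+    /*
+     * add_padding() records the pad size, flagged with FREE_BLOCK, in a
+     * fake header just below the aligned pointer; an allocated block's
+     * real header never has FREE_BLOCK set here, so the flag identifies
+     * a padded pointer and lets us walk back to the original one.
+     */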
+    if ( b->size & FREE_BLOCK )
+    {
+        p -= b->size & ~FREE_BLOCK;
+        b = p - BHDR_OVERHEAD;
+        ASSERT(!(b->size & FREE_BLOCK));
+    }
+
+    return p;
+}
+
+static void *add_padding(void *p, unsigned long align)
+{
+    uint32_t pad;
+
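+    /*
+     * If p is not already aligned, shift it up and stash the pad size
+     * (flagged FREE_BLOCK) in a fake header just below the returned
+     * pointer, so strip_padding() can undo the shift later.
+     */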
+    if ( (pad = -(long)p & (align - 1)) != 0 )
+    {
+        void *q = p + pad;
+        struct bhdr *b = q - BHDR_OVERHEAD;
+
+        ASSERT(q > p);
+        b->size = pad | FREE_BLOCK;
+        p = q;
+    }
+
+    return p;
+}
+
 void *_xmalloc(unsigned long size, unsigned long align)
 {
     void *p = NULL;
-    u32 pad;
 
     ASSERT(!in_irq());
 
@@ -578,14 +608,7 @@ void *_xmalloc(unsigned long size, unsigned long align)
         return xmalloc_whole_pages(size - align + MEM_ALIGN, align);
 
     /* Add alignment padding. */
-    if ( (pad = -(long)p & (align - 1)) != 0 )
-    {
-        char *q = (char *)p + pad;
-        struct bhdr *b = (struct bhdr *)(q - BHDR_OVERHEAD);
-        ASSERT(q > (char *)p);
-        b->size = pad | FREE_BLOCK;
-        p = q;
-    }
+    p = add_padding(p, align);
 
     ASSERT(((unsigned long)p & (align - 1)) == 0);
     return p;
@@ -598,10 +621,70 @@ void *_xzalloc(unsigned long size, unsigned long align)
     return p ? memset(p, 0, size) : p;
 }
 
-void xfree(void *p)
+void *_xrealloc(void *ptr, unsigned long size, unsigned long align)
 {
-    struct bhdr *b;
+    unsigned long curr_size, tmp_size;
+    void *p;
+
+    if ( !size )
+    {
+        xfree(ptr);
+        return ZERO_BLOCK_PTR;
+    }
 
+    if ( ptr == NULL || ptr == ZERO_BLOCK_PTR )
+        return _xmalloc(size, align);
+
+    ASSERT((align & (align - 1)) == 0);
+    if ( align < MEM_ALIGN )
+        align = MEM_ALIGN;
+
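+    /*
+     * Mirror _xmalloc()'s sizing (worst-case alignment padding plus
+     * block rounding) so the comparison with the current block size
+     * below is like-for-like.
+     */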
+    tmp_size = size + align - MEM_ALIGN;
+
+    if ( tmp_size < PAGE_SIZE )
+        tmp_size = (tmp_size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE :
+            ROUNDUP_SIZE(tmp_size);
+
+    if ( !((unsigned long)ptr & (PAGE_SIZE - 1)) )
+    {
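+        /*
+         * A page-aligned pointer can only come from the whole-page
+         * allocator (pool allocations are offset by a block header),
+         * so the allocation size is recorded in the page order.
+         */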
+        curr_size = (unsigned long)PFN_ORDER(virt_to_page(ptr)) << PAGE_SHIFT;
+
+        if ( size <= curr_size && ((unsigned long)ptr & (align - 1)) == 0 )
+            return ptr;
+    }
+    else
+    {
+        struct bhdr *b;
+
+        /* Strip alignment padding. */
+        p = strip_padding(ptr);
+
+        b = p - BHDR_OVERHEAD;
+        curr_size = b->size & BLOCK_SIZE_MASK;
+
+        if ( tmp_size <= curr_size )
+        {
+            /* Add alignment padding. */
+            p = add_padding(p, align);
+
+            ASSERT(((unsigned long)p & (align - 1)) == 0);
+
+            return p;
+        }
+    }
+
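+    /*
+     * The existing allocation cannot satisfy the request: allocate
+     * zeroed memory (so the tail beyond the copied bytes is zero) and
+     * copy the old contents over.
+     */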
+    p = _xzalloc(size, align);
+    if ( p )
+    {
+        memcpy(p, ptr, min(curr_size, size));
+        xfree(ptr);
+    }
+
+    return p;
+}
+
+void xfree(void *p)
+{
     if ( p == NULL || p == ZERO_BLOCK_PTR )
         return;
 
@@ -626,13 +709,7 @@ void xfree(void *p)
     }
 
     /* Strip alignment padding. */
-    b = (struct bhdr *)((char *)p - BHDR_OVERHEAD);
-    if ( b->size & FREE_BLOCK )
-    {
-        p = (char *)p - (b->size & ~FREE_BLOCK);
-        b = (struct bhdr *)((char *)p - BHDR_OVERHEAD);
-        ASSERT(!(b->size & FREE_BLOCK));
-    }
+    p = strip_padding(p);
 
     xmem_pool_free(p, xenpool);
 }
diff --git a/xen/include/xen/xmalloc.h b/xen/include/xen/xmalloc.h
index f075d2d..831152f 100644
--- a/xen/include/xen/xmalloc.h
+++ b/xen/include/xen/xmalloc.h
@@ -51,6 +51,7 @@ extern void xfree(void *);
 /* Underlying functions */
 extern void *_xmalloc(unsigned long size, unsigned long align);
 extern void *_xzalloc(unsigned long size, unsigned long align);
+extern void *_xrealloc(void *ptr, unsigned long size, unsigned long align);
 
 static inline void *_xmalloc_array(
     unsigned long size, unsigned long align, unsigned long num)
-- 
2.7.4

