
[Xen-devel] Re: [PATCH 0 of 3] xl: free allocations made at top level



On Fri, 2010-07-30 at 10:02 +0100, Ian Campbell wrote:
> 
> If there were a way to tell valgrind not to worry about these
> allocations that would be nice, as would a palatable workaround which
> could be used in xl but I can't find anything suitable. For example
> calling pthread_exit() at the end of main() causes leaks from the C
> runtime so that is out. Creating a thread to do the body of the work
> (with main just doing pthread_join and returning the result) doesn't
> pass the palatable test IMHO 

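For reference, the rejected wrapper-thread workaround would look roughly
like the sketch below -- purely illustrative, not part of the patch, and
run_xl_main() is a made-up stand-in for xl's real main() body:

/*
 * Hypothetical sketch only: main() does nothing but spawn a worker thread
 * and join it, so all real work happens in a thread whose exit runs the
 * pthread key destructors (and hence frees the per-thread hypercall
 * buffer). run_xl_main() is not a real xl/libxl function.
 */
#include <pthread.h>
#include <stdlib.h>

struct xl_args { int argc; char **argv; };

static void *run_xl_main(void *arg)
{
    struct xl_args *a = arg;
    (void)a;
    /* ... the whole of xl's existing main() body would live here ... */
    return (void *)(long)EXIT_SUCCESS;   /* returning here is equivalent to
                                            pthread_exit(), which does run
                                            the key destructors */
}

int main(int argc, char **argv)
{
    struct xl_args args = { argc, argv };
    pthread_t tid;
    void *ret;

    if (pthread_create(&tid, NULL, run_xl_main, &args))
        return EXIT_FAILURE;
    pthread_join(tid, &ret);

    return (int)(long)ret;
}
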
The patch below seems to work. I'm not entirely sure about it, though --
it's not clear what happens if the thread which calls xc_interface_close
is not the last thread to exit, and whether we would still leak the buffer
belonging to the thread which makes the final exit() call.

It is enough to make xl valgrind-clean, though, so perhaps that will do.
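
To spell out why the explicit free is needed: a destructor registered with
pthread_key_create() is run when a thread exits via pthread_exit(), but not
when a single-threaded process simply returns from main() or calls exit(),
so the per-thread buffer shows up as a leak under valgrind unless it is
freed by hand. A minimal standalone demonstration (not libxc code; buf_key,
buf_destructor and explicit_cleanup are made-up names) of what the hunk in
xc_interface_close() below does for the current thread:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t buf_key;

static void buf_destructor(void *p)
{
    fprintf(stderr, "destructor: freeing %p\n", p);
    free(p);
}

static void explicit_cleanup(void)
{
    /* analogous to what the patch adds to xc_interface_close(): free the
       current thread's buffer by hand, since for the main thread of a
       non-threaded program the destructor may never run */
    void *p = pthread_getspecific(buf_key);
    if (p) {
        buf_destructor(p);
        pthread_setspecific(buf_key, NULL);
    }
}

int main(void)
{
    pthread_key_create(&buf_key, buf_destructor);
    pthread_setspecific(buf_key, malloc(4096));

    explicit_cleanup();   /* comment this out and valgrind reports the
                             4096 bytes as leaked, because returning from
                             main() does not run buf_destructor for the
                             main thread */
    return 0;
}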

# HG changeset patch
# User Ian Campbell <ian.campbell@xxxxxxxxxx>
# Date 1280481944 -3600
# Node ID b8bf3e732b9a4b3a1614dff87af48560d3839e3b
# Parent  f5f5949d98f0104ad1422ddacded20875f23d38d
libxc: free thread specific hypercall buffer on xc_interface_close

The per-thread hypercall buffer is usually cleaned up on pthread_exit
by the destructor passed to pthread_key_create. However, if the calling
application is not threaded, the destructor is never called.

This frees the data for the current thread only, but that is OK since
any other thread's buffer will still be cleaned up by the destructor.

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

diff -r f5f5949d98f0 -r b8bf3e732b9a tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c  Fri Jul 30 09:13:11 2010 +0100
+++ b/tools/libxc/xc_private.c  Fri Jul 30 10:25:44 2010 +0100
@@ -57,6 +57,8 @@ xc_interface *xc_interface_open(xentooll
     return 0;
 }
 
+static void xc_clean_hcall_buf(void);
+
 int xc_interface_close(xc_interface *xch)
 {
     int rc = 0;
@@ -68,6 +70,9 @@ int xc_interface_close(xc_interface *xch
         rc = xc_interface_close_core(xch, xch->fd);
         if (rc) PERROR("Could not close hypervisor interface");
     }
+
+    xc_clean_hcall_buf();
+
     free(xch);
     return rc;
 }
@@ -180,6 +185,8 @@ int hcall_buf_prep(void **addr, size_t l
 int hcall_buf_prep(void **addr, size_t len) { return 0; }
 void hcall_buf_release(void **addr, size_t len) { }
 
+static void xc_clean_hcall_buf(void) { }
+
 #else /* !__sun__ */
 
 int lock_pages(void *addr, size_t len)
@@ -223,6 +230,14 @@ static void _xc_clean_hcall_buf(void *m)
     }
 
     pthread_setspecific(hcall_buf_pkey, NULL);
+}
+
+static void xc_clean_hcall_buf(void)
+{
+    void *hcall_buf = pthread_getspecific(hcall_buf_pkey);
+
+    if (hcall_buf)
+        _xc_clean_hcall_buf(hcall_buf);
 }
 
 static void _xc_init_hcall_buf(void)



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel