
[Xen-devel] [PATCH] libxc: use correct macro when unmapping memory after save operation



With some help from Olaf, I've finally got to the bottom of an issue I
came across while trying to implement save/restore in the libvirt
libxenlight driver.  After issuing the save operation, the saved domain
was not cleaned up properly and was left in this state from xl's
perspective:

xen33:# xl list
Name                   ID   Mem VCPUs      State   Time(s)
Domain-0                0  6821     8     r-----     122.5
(null)                  2     2     2     --pssd      10.8

Checking libvirtd's /proc/$pid/maps, I found this:

7f3798984000-7f3798b86000 r--s 00002000 00:03 4026532097 /proc/xen/privcmd

So not all pages belonging to the domain were unmapped from
libvirtd.  In tools/libxc/xc_domain_save.c we found that the live p2m
is mapped with P2M_FL_ENTRIES pages but unmapped with only
P2M_FLL_ENTRIES pages.  The attached patch changes the unmapping to use
the same P2M_FL_ENTRIES macro.  I'm not too familiar with this code
though, so posting here for review.
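
For anyone less familiar with these macros: the underlying problem is
simply that the length passed to munmap() must match the length that was
mapped, otherwise the tail of the region stays in the process's address
space for as long as the process lives.  Below is a minimal standalone
demo (not Xen code; the 512/1 page counts are arbitrary stand-ins for
P2M_FL_ENTRIES and P2M_FLL_ENTRIES) that produces the same kind of
leftover entry in /proc/$pid/maps:

#define _GNU_SOURCE                     /* for MAP_ANONYMOUS */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t mapped_len   = 512 * page;   /* stand-in for P2M_FL_ENTRIES pages  */
    size_t unmapped_len =   1 * page;   /* stand-in for P2M_FLL_ENTRIES pages */

    char *p = mmap(NULL, mapped_len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Partial unmap: only the first page is released; the remaining
     * pages stay mapped for the lifetime of the process, just like the
     * leftover /proc/xen/privcmd region seen in libvirtd. */
    munmap(p, unmapped_len);

    printf("look at /proc/%d/maps for the leftover %zu-byte mapping\n",
           (int)getpid(), mapped_len - unmapped_len);
    pause();                            /* keep the process alive to inspect */
    return 0;
}

Run it and check /proc/<pid>/maps while it is paused; everything but the
first page of the anonymous mapping is still there, which is exactly what
happens to the live p2m mapping in a long-running caller of
xc_domain_save().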

I suspect this was not noticed before since most (all?) processes that
perform a save terminate shortly afterwards and are not long-running
like libvirtd.

Regards,
Jim

diff -r 5fb4c607049d tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c      Fri May 20 09:44:41 2011 +0100
+++ b/tools/libxc/xc_domain_save.c      Fri May 20 16:02:28 2011 -0600
@@ -1955,7 +1955,7 @@ int xc_domain_save(xc_interface *xch, in
         munmap(live_shinfo, PAGE_SIZE);
 
     if ( ctx->live_p2m )
-        munmap(ctx->live_p2m, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
 
     if ( ctx->live_m2p )
         munmap(ctx->live_m2p, M2P_SIZE(ctx->max_mfn));
