
[Xen-devel] Re: [PATCH][RFC] to fix xm list memory reporting



McAfee, Tommie M wrote:
Anthony,

I'm using watches on the xenstore to set self.info['memory'].

You should gracefully handle the case where the info isn't in XenStore.

If the domain never writes to the store, whether because it is unable to or because this balloon driver isn't in use, then functionality will be as it always was; I'm not relying on xenstore. self.info['memory'] only gets overwritten if the balloon driver (or perhaps anyone) writes to the memory/xmtarget path.
However, I had not thought of some of the other points that you brought up. Do you think it is overkill to address them at this point, or should I look further into the issue?
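A minimal sketch of the graceful fallback discussed above (the helper name and callable parameter are illustrative, not the actual xend code): if the memory/xmtarget node is absent or unparsable, keep the value xend already holds, so guests without the modified balloon driver behave exactly as before.

```python
def memory_from_xenstore(read_dom, current_mib):
    """Return the guest-reported memory in MiB, falling back to the
    existing value when the xenstore node is absent or malformed.

    read_dom: callable returning the raw 'memory/xmtarget' string
    (in KiB) or None; current_mib: the value xend already tracks.
    """
    raw = read_dom('memory/xmtarget')
    if raw is None:
        # Guest never wrote the node (e.g. an unmodified 3.0.x
        # kernel): behave exactly as before the patch.
        return current_mib
    try:
        return int(raw) // 1024  # node is in KiB; xend tracks MiB
    except ValueError:
        return current_mib

# A guest with no balloon driver falls back; a reporting guest wins:
print(memory_from_xenstore(lambda path: None, 128))      # 128
print(memory_from_xenstore(lambda path: '129024', 128))  # 126
```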

If we're using the wrong entry from xc_dominfo_t then we ought to use the right one. Should be a 1-line fix.

Regards,

Anthony Liguori

Tommie McAfee
Xen-Testing


-----Original Message-----
From: Anthony Liguori [mailto:aliguori@xxxxxxxxxx]
Sent: Wed 10/11/2006 5:34 PM
To: McAfee, Tommie M
Cc: xen-devel
Subject: Re: [PATCH][RFC] to fix xm list memory reporting
This patch breaks the domU ABI.

You can't assume that the guest will have the modified balloon driver because it could be any 3.0.x guest.

You should gracefully handle the case where the info isn't in XenStore.

However, this patch concerns me in another way. You're relying on the guest to report how much memory it has? What prevents a guest from lying and claiming it has less memory than it really does? Forget about lying, what about buggy guests?

Isn't this info available from the hypervisor?
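One hedged way to address the trust concern above (illustrative only; in practice the hypervisor-side figure would come from something like the tot_pages field of xc_dominfo_t, and these names are assumptions): treat the guest-reported value as advisory and clamp it against what the hypervisor can vouch for, so a lying or buggy guest cannot inflate or hide its allocation.

```python
def effective_memory_kb(guest_reported_kb, hypervisor_tot_kb):
    """Clamp a guest-reported memory figure (KiB) against the
    hypervisor-known total, so `xm list` never shows more memory
    than the hypervisor actually granted the domain."""
    if guest_reported_kb is None:
        # Guest wrote nothing: fall back to the hypervisor's view.
        return hypervisor_tot_kb
    return max(0, min(guest_reported_kb, hypervisor_tot_kb))

# A guest claiming far more than it has is capped:
print(effective_memory_kb(999999999, 131072))  # 131072
# A plausible balloon-driver report passes through:
print(effective_memory_kb(129024, 131072))     # 129024
```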

Regards,

Anthony Liguori

McAfee, Tommie M wrote:
This patch addresses bugzilla #649, enabling `xm list' to display the actual amount of memory allocated to a guest. This is done by checking the value that the balloon driver reports after memory requests.

As a result, I'm noticing that a guest configured to start with 128MB will actually give the user 126MB. All other allocations behave normally, and attempting to over-allocate simply reports the amount of memory the domain was able to expand to. On the premise that xm list should report actual memory values, two tests in xm-test may need to be modified to verify the amount of memory that a guest is physically using, rather than relying on the value in the config file as the amount of memory the guest will have.

xm-test reactions to patch as of changeset 11376:

REASON: Started domain with 128MB, but it got 126 MB
FAIL: 08_create_mem128_pos.test
REASON: Started domain with 256MB, but it got 254 MB
FAIL: 09_create_mem256_pos.test

Running 'free' inside each guest shows 126 and 254 respectively, not the values from their config files.
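The 126 vs 128 figures are consistent with the unit conversions in the patch (assuming 4 KiB pages, i.e. PAGE_SHIFT = 12): the driver publishes PAGES2KB(current_pages) and xend divides by 1024. The 512-page shortfall below is illustrative, not a measured figure.

```python
PAGE_SHIFT = 12  # 4 KiB pages

def pages_to_kib(pages):
    # Mirrors the driver's PAGES2KB(): pages << (PAGE_SHIFT - 10)
    return pages << (PAGE_SHIFT - 10)

# A 128 MiB domain is 32768 pages; if the balloon driver ends up
# holding 512 fewer pages, xend's MiB figure comes out as 126:
current_pages = (128 * 1024 // 4) - 512
kib = pages_to_kib(current_pages)
print(kib // 1024)  # 126
```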

Signed-off-by: Tommie McAfee <tommie.mcafee@xxxxxxxxxx>

Regards,
Tommie McAfee
Xen-Testing



------------------------------------------------------------------------

diff -r 593b5623a0d2 linux-2.6-xen-sparse/drivers/xen/balloon/balloon.c
--- a/linux-2.6-xen-sparse/drivers/xen/balloon/balloon.c        Fri Sep 29 
15:40:35 2006 +0100
+++ b/linux-2.6-xen-sparse/drivers/xen/balloon/balloon.c        Wed Oct 04 
14:46:33 2006 -0400
@@ -57,6 +57,8 @@
 #include <xen/xenbus.h>
#define PAGES2KB(_p) ((_p)<<(PAGE_SHIFT-10))
+#define NOXENBUS 0
+#define XENBUSREADY 1
#ifdef CONFIG_PROC_FS
 static struct proc_dir_entry *balloon_pde;
@@ -83,6 +85,9 @@ extern unsigned long totalram_pages;
/* We may hit the hard limit in Xen. If we do then we remember it. */
 static unsigned long hard_limit;
+
+/* Acknowledge that xenstore is available */
+static int xenbus_status;
/*
  * Drivers may alter the memory reservation independently, but they must
@@ -325,6 +330,22 @@ static int decrease_reservation(unsigned
        return need_sleep;
 }
+/* Write the currently allocated amount, in KiB, to xenstore. */
+static void xenbus_write_xmtarget(void)
+{
+       struct xenbus_transaction xbt;
+       unsigned long xm_current_kb;
+
+       if (likely(xenbus_status == XENBUSREADY)) {
+               xm_current_kb = PAGES2KB(current_pages);
+               xenbus_transaction_start(&xbt);
+               xenbus_printf(xbt, "memory", "xmtarget", "%8lu", xm_current_kb);
+               xenbus_transaction_end(xbt, 0);
+       }
+}
+
 /*
  * We avoid multiple worker processes conflicting via the balloon mutex.
  * We may of course race updates of the target counts (which are protected
@@ -355,6 +376,8 @@ static void balloon_process(void *unused
        if (current_target() != current_pages)
                mod_timer(&balloon_timer, jiffies + HZ);
+       xenbus_write_xmtarget();
+
        up(&balloon_mutex);
 }
@@ -384,6 +407,8 @@ static void watch_target(struct xenbus_w
                /* This is ok (for domain0 at least) - so just return */
                return;
        }
+
+        xenbus_status=XENBUSREADY;
/* The given memory/target value is in KiB, so it needs converting to
         * pages. PAGE_SHIFT converts bytes to pages, hence PAGE_SHIFT - 10.
@@ -462,6 +487,8 @@ static int __init balloon_init(void)
 {
        unsigned long pfn;
        struct page *page;
+
+       xenbus_status=NOXENBUS;
if (!is_running_on_xen())
                return -ENODEV;
diff -r 593b5623a0d2 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Fri Sep 29 15:40:35 2006 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Wed Oct 04 14:46:33 2006 -0400
@@ -459,6 +459,7 @@ class XendDomainInfo:
         self.console_mfn = None
self.vmWatch = None
+        self.memWatch = None
         self.shutdownWatch = None
self.shutdownStartTime = None
@@ -487,6 +488,14 @@ class XendDomainInfo:
             return []
+    def xsMemoryChanged(self, _):
+        """Get this domain's memory target from xenstore."""
+        xs_memory = int(self.readDom('memory/xmtarget')) / 1024
+        self.info['memory'] = xs_memory
+        self.storeVm("memory", xs_memory)
+        return 1
+
     def storeChanged(self, _):
         log.trace("XendDomainInfo.storeChanged");
@@ -775,6 +784,9 @@ class XendDomainInfo:
         self.vmWatch = xswatch(self.vmpath, self.storeChanged)
         self.shutdownWatch = xswatch(self.dompath + '/control/shutdown',
                                      self.handleShutdownWatch)
+        self.memWatch = xswatch(self.dompath + '/memory/xmtarget',
+                                     self.xsMemoryChanged)
+
def getDomid(self):
@@ -1015,6 +1027,7 @@ class XendDomainInfo:
self.info['memory'] = target
         self.storeVm("memory", target)
+        self.storeDom('memory/xmtarget', target << 10)
         self.storeDom("memory/target", target << 10)
@@ -1372,6 +1385,7 @@ class XendDomainInfo:
         self.refresh_shutdown_lock.acquire()
         try:
             self.unwatchShutdown()
+            self.unwatchMemory()
        self.release_devices()

@@ -1439,6 +1453,18 @@ class XendDomainInfo:
                 self.shutdownWatch = None
         except:
             log.exception("Unwatching control/shutdown failed.")
+
+
+    def unwatchMemory(self):
+        """Remove the watch on the domain's memory/xmtarget node, if any."""
+        try:
+            try:
+                if self.memWatch:
+                    self.memWatch.unwatch()
+            finally:
+                self.memWatch = None
+        except:
+            log.exception("Unwatching memory/xmtarget failed.")
## public:
diff -r 593b5623a0d2 tools/python/xen/xend/image.py
--- a/tools/python/xen/xend/image.py    Fri Sep 29 15:40:35 2006 +0100
+++ b/tools/python/xen/xend/image.py    Wed Oct 04 14:46:33 2006 -0400
@@ -382,6 +382,7 @@ class HVMImageHandler(ImageHandler):
def destroy(self):
         self.unregister_shutdown_watch();
+        self.unregister_memory_watch();
         import signal
         if not self.pid:
             return
@@ -406,6 +407,18 @@ class HVMImageHandler(ImageHandler):
             log.exception("Unwatching hvm shutdown watch failed.")
         self.shutdownWatch = None
         log.debug("hvm shutdown watch unregistered")
+
+    def unregister_memory_watch(self):
+        """Remove the watch on memory/xmtarget, if any. Nothrow
+        guarantee."""
+        try:
+            if self.memWatch:
+                self.memWatch.unwatch()
+        except:
+            log.exception("Unwatching memory/xmtarget failed.")
+        self.memWatch = None
+        log.debug("hvm memory watch unregistered")
+
def hvm_shutdown(self, _):
         """ watch call back on node control/shutdown,


------------------------------------------------------------------------

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



