
To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] xend: Fix memory allocation bug after hvm reboot in numa system
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Tue, 09 Dec 2008 08:30:12 -0800
Delivery-date: Tue, 09 Dec 2008 08:30:03 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1228826672 0
# Node ID b1b9cf7a2d36706e7dbbed275327095a293a3b33
# Parent  628b3a76dbf40872b2d24ff6ba6ca8b13b0391df
xend: Fix memory allocation bug after hvm reboot in numa system

We recently found a bug on a Nehalem machine (two nodes, 6G of memory
in total, 3G in each node):
- Start an HVM guest with all of its VCPUs pinned to node1, so that all
of its memory is allocated from node1 (see the config sketch after this
list).
- Reboot the HVM guest.
- Some memory is now allocated from node0, even though there is enough
free memory on node1.
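
For reference, a minimal xm config sketch for the pinning step above
(guest name, memory size and CPU range are illustrative assumptions;
it presumes node1 owns physical CPUs 4-7 on this two-node box):

    # hypothetical /etc/xen/hvm-numa-test.cfg fragment
    name    = "hvm-numa-test"
    builder = "hvm"
    memory  = 2048
    vcpus   = 4
    # pin every VCPU to node1's CPUs so xend allocates the guest's
    # memory from node1
    cpus    = "4-7"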

Reason: For security reasons, Xen does not return all the pages of a
dying HVM guest to the domheap directly; instead it puts them on the
scrub list, where they wait to be handled by page_scrub_softirq(). If
the dying guest has a lot of memory, page_scrub_softirq() cannot handle
all of its pages before the guest is started again. Some pages that
belong to node1 are still on the scrub list, and the new guest cannot
use them, so it ends up with a different memory distribution than
before. Before changeset 18304, page_scrub_softirq() could be executed
in parallel on all CPUs. Changeset 18305 serialised
page_scrub_softirq(), and changeset 18307 serialised it with a new lock
to avoid holding up acquisition of page_scrub_lock in
free_domheap_pages(). Those changesets reduce the rate at which pages
on the scrub list are handled, so the bug becomes more obvious after
them.
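
The scrub backlog is visible to xend through xc.physinfo(); a minimal
sketch (run inside a xend/libxc Python environment, using the same
field names as balloon.py below):

    import xen.lowlevel.xc

    xc = xen.lowlevel.xc.xc()
    physinfo = xc.physinfo()
    # Pages of the dying guest sit in 'scrub_memory' until
    # page_scrub_softirq() has zeroed them; only then do they become
    # part of 'free_memory'.
    print "free: %d KiB, waiting to be scrubbed: %d KiB" % (
        physinfo['free_memory'], physinfo['scrub_memory'])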

Patch: This patch modifies balloon.free to avoid the bug. With the
patch, balloon.free checks whether the current machine is a NUMA system
and whether the newly created HVM guest has all of its VCPUs in the
same node. If both conditions hold, we wait until all the pages on the
scrub list have been freed, giving up if the wait goes beyond about 20s
(a worked sketch of that bound follows).
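
The "about 20s" figure falls out of balloon.free()'s retry loop; a
worked sketch, assuming this tree's usual balloon.py constants
(SLEEP_TIME_GROWTH = 0.1 and rlimit = RETRY_LIMIT = 20 -- an
assumption, check the file):

    # Each retry sleeps sleep_time and then grows it by
    # SLEEP_TIME_GROWTH, so the worst-case wait is an arithmetic series.
    SLEEP_TIME_GROWTH = 0.1
    RETRY_LIMIT = 20
    worst_case = SLEEP_TIME_GROWTH * RETRY_LIMIT * (RETRY_LIMIT + 1) / 2
    print worst_case   # ~21s with these constants, i.e. the "about 20s" above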

At first glance this may seem too restrictive. We used to wait only
until the free memory on the pinned node was bigger than the amount
required. But HVM memory allocation granularity is 2M, so even when
that condition is satisfied we may still not find enough 2M-sized
memory on that node, as the small illustration below shows.
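
A small numeric illustration (the figures are made up, not taken from
the commit) of why a node can report enough free memory yet still be
short of 2M extents:

    need_kb         = 1024 * 1024   # guest needs 1G, i.e. 512 extents of 2M
    node_free_kb    = 1100 * 1024   # pinned node reports 1.1G free
    free_2m_extents = 400           # ...but only 400 fully-free 2M chunks
    print need_kb <= node_free_kb               # True:  old check would pass
    print need_kb / 2048 <= free_2m_extents     # False: allocation spills to node0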

Signed-off-by: Ting Zhou <ting.g.zhou@xxxxxxxxx>
Signed-off-by: Xiaowei Yang <Xiaowei.yang@xxxxxxxxx>
---
 tools/python/xen/xend/XendCheckpoint.py |    2 -
 tools/python/xen/xend/XendDomainInfo.py |    6 ++---
 tools/python/xen/xend/balloon.py        |   36 +++++++++++++++++++++++++++++++-
 3 files changed, 39 insertions(+), 5 deletions(-)

diff -r 628b3a76dbf4 -r b1b9cf7a2d36 tools/python/xen/xend/XendCheckpoint.py
--- a/tools/python/xen/xend/XendCheckpoint.py   Tue Dec 09 12:42:18 2008 +0000
+++ b/tools/python/xen/xend/XendCheckpoint.py   Tue Dec 09 12:44:32 2008 +0000
@@ -253,7 +253,7 @@ def restore(xd, fd, dominfo = None, paus
         # set memory limit
         xc.domain_setmaxmem(dominfo.getDomid(), maxmem)
 
-        balloon.free(memory + shadow)
+        balloon.free(memory + shadow, dominfo)
 
         shadow_cur = xc.shadow_mem_control(dominfo.getDomid(), shadow / 1024)
         dominfo.info['shadow_memory'] = shadow_cur
diff -r 628b3a76dbf4 -r b1b9cf7a2d36 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Tue Dec 09 12:42:18 2008 +0000
+++ b/tools/python/xen/xend/XendDomainInfo.py   Tue Dec 09 12:44:32 2008 +0000
@@ -2105,7 +2105,7 @@ class XendDomainInfo:
         # overhead is greater for some types of domain than others. For
         # example, an x86 HVM domain will have a default shadow-pagetable
         # allocation of 1MB. We free up 2MB here to be on the safe side.
-        balloon.free(2*1024) # 2MB should be plenty
+        balloon.free(2*1024, self) # 2MB should be plenty
 
         ssidref = 0
         if security.on() == xsconstants.XS_POLICY_USE:
@@ -2299,7 +2299,7 @@ class XendDomainInfo:
             vtd_mem = ((vtd_mem + 1023) / 1024) * 1024
 
             # Make sure there's enough RAM available for the domain
-            balloon.free(memory + shadow + vtd_mem)
+            balloon.free(memory + shadow + vtd_mem, self)
 
             # Set up the shadow memory
             shadow_cur = xc.shadow_mem_control(self.domid, shadow / 1024)
@@ -2716,7 +2716,7 @@ class XendDomainInfo:
             # The domain might already have some shadow memory
             overhead_kb -= xc.shadow_mem_control(self.domid) * 1024
         if overhead_kb > 0:
-            balloon.free(overhead_kb)
+            balloon.free(overhead_kb, self)
 
     def _unwatchVm(self):
         """Remove the watch on the VM path, if any.  Idempotent.  Nothrow
diff -r 628b3a76dbf4 -r b1b9cf7a2d36 tools/python/xen/xend/balloon.py
--- a/tools/python/xen/xend/balloon.py  Tue Dec 09 12:42:18 2008 +0000
+++ b/tools/python/xen/xend/balloon.py  Tue Dec 09 12:44:32 2008 +0000
@@ -67,7 +67,7 @@ def get_dom0_target_alloc():
         raise VmError('Failed to query target memory allocation of dom0.')
     return kb
 
-def free(need_mem):
+def free(need_mem ,self):
     """Balloon out memory from the privileged domain so that there is the
     specified required amount (in KiB) free.
     """
@@ -121,6 +121,40 @@ def free(need_mem):
             max_free_mem = total_mem - dom0_alloc
         if need_mem >= max_free_mem:
             retries = rlimit
+
+        # Check whether the current machine is a numa system and the
+        # newly created hvm has all its vcpus in the same node; if
+        # both conditions hold, we will wait until all the pages in
+        # the scrub list are freed (if the wait goes beyond 20s, we
+        # will stop waiting).
+        if physinfo['nr_nodes'] > 1 and retries == 0:
+            oldnode = -1
+            waitscrub = 1
+            vcpus = self.info['cpus'][0]
+            for vcpu in vcpus:
+                nodenum = 0
+                for node in physinfo['node_to_cpu']:
+                    for cpu in node:
+                        if vcpu == cpu:
+                            if oldnode == -1:
+                                oldnode = nodenum
+                            elif oldnode != nodenum:
+                                waitscrub = 0
+                    nodenum = nodenum + 1
+
+            if waitscrub == 1 and scrub_mem > 0:
+                log.debug("wait for scrub %s", scrub_mem)
+                while scrub_mem > 0 and retries < rlimit:
+                    time.sleep(sleep_time)
+                    physinfo = xc.physinfo()
+                    free_mem = physinfo['free_memory']
+                    scrub_mem = physinfo['scrub_memory']
+                    retries += 1
+                    sleep_time += SLEEP_TIME_GROWTH
+                log.debug("scrub for %d times", retries)
+
+            retries = 0
+            sleep_time = SLEEP_TIME_GROWTH
 
         while retries < rlimit:
             physinfo = xc.physinfo()

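For clarity, the same-node test the new hunk performs, restated as a
standalone sketch (not the committed code; it uses the same
physinfo['node_to_cpu'] layout and the dominfo.info['cpus'][0] affinity
list that the patch reads):

    def pinned_to_single_node(cpu_list, node_to_cpu):
        # cpu_list:    e.g. dominfo.info['cpus'][0], the CPU affinity list
        # node_to_cpu: xc.physinfo()['node_to_cpu'], one CPU list per node
        nodes = set()
        for cpu in cpu_list:
            for nodenum, cpus in enumerate(node_to_cpu):
                if cpu in cpus:
                    nodes.add(nodenum)
        return len(nodes) == 1

    # Only when this is true (and physinfo['scrub_memory'] > 0) does the
    # patched balloon.free() wait for page_scrub_softirq() to drain the
    # scrub list.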
