xen-devel

Re: [Xen-devel] [PATCH] Fix restore handling checks

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Fix restore handling checks
From: Michal Novotny <minovotn@xxxxxxxxxx>
Date: Wed, 23 Jun 2010 13:27:59 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100622141039.GA4552@xxxxxxxxxxxxxxxxxxx>
References: <C846136B.18263%keir.fraser@xxxxxxxxxxxxx> <4C205583.80609@xxxxxxxxxx> <4C20B2F8.4030409@xxxxxxxxxx> <20100622141039.GA4552@xxxxxxxxxxxxxxxxxxx>
On 06/22/2010 04:10 PM, Konrad Rzeszutek Wilk wrote:
On Tue, Jun 22, 2010 at 02:56:24PM +0200, Michal Novotny wrote:
On 06/22/2010 08:17 AM, Michal Novotny wrote:
On 06/22/2010 08:14 AM, Keir Fraser wrote:
Okay, hopefully someone will be able to Ack this patch with better
knowledge of xend than me.

  -- Keir

Ok, good. If it is accepted and put into the xen-staging/xen-unstable
tree, please let me know.

Thanks,
Michal

Well, this is a new version of the patch. I have now tested both migrating
from a RHEL-5 host (python-2.4) to a RHEL-6 host (python-2.6), using the
RHEL-5 host as the source machine for the migration, and restoring the
guest locally on the RHEL-6 host using `xm restore`, to see whether the
guest memory is calculated correctly. The check passed when there was
enough memory to create the guest and failed when there was not. Also,
some fixes to the value comparison were necessary, since on one of the
hosts it was being treated as a string comparison.
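For illustration only (this is not part of the patch), a minimal sketch of
the python-2 comparison pitfall those int() casts guard against, assuming
the requested value still arrives as a string from the parsed config:

    # Minimal sketch, not the patch code. On python-2 (CPython), an int
    # always compares as smaller than a str, regardless of the values.
    mem_avail = 9216            # computed amount in MiB, an int
    val = "512"                 # value taken from the parsed config, a string
    print mem_avail > val                 # False, despite 9216 > 512
    print int(mem_avail) > int(val)       # True -- the numeric comparison we want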

Both host machines had the latest Xen-4.1 installed.
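For context, the local restore test amounts to roughly the following (the
guest name and save file path are only illustrative):

    # Illustrative commands only; guest name and path are made up.
    xm save myguest /var/lib/xen/save/myguest.chk
    xm restore /var/lib/xen/save/myguest.chk   # the new restore checks run here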

Michal

Signed-off-by: Michal Novotny <minovotn@xxxxxxxxxx>

--
Michal Novotny <minovotn@xxxxxxxxxx>, RHCE
Virtualization Team (xen userspace), Red Hat

diff -r a24dbfcbdf69 tools/python/xen/xend/XendCheckpoint.py
--- a/tools/python/xen/xend/XendCheckpoint.py   Tue Jun 22 07:19:38 2010 +0100
+++ b/tools/python/xen/xend/XendCheckpoint.py   Tue Jun 22 12:52:56 2010 +0200
@@ -64,6 +64,78 @@ def insert_after(list, pred, value):
                list.insert (i+1, value)
      return

+def get_avail_memory():
+    """Get available memory for new guest creation (in KiB, for restore)"""
+    from xen.xend import XendOptions
+
+    # First get total memory in KiB
+    xc = xen.lowlevel.xc.xc()
+    info = xc.domain_getinfo()
+    total_mem  = xc.physinfo()['total_memory']
+    del xc
+
+    # Count memory of all running guests in KiB
+    mem_used = 0L
+    for x in info:
+        if x['domid'] != 0:
+            # If blocked & paused & !running & no online_vcpus we are most
+            # probably restoring/migrating, so we use maxmem instead (mem_kb
+            # only holds the amount of memory transferred so far, so we
+            # cannot use it for a reliable calculation)
+            if (x['blocked'] == 1 and x['paused'] == 1 and
+                x['running'] == 0 and x['online_vcpus'] == 0):
+                log.debug("Domain %d seems to be restoring" % x['domid'])
+                mem_used += x['maxmem_kb']
+            else:
+                mem_used += x['mem_kb']
+
+    # Get minimal memory for dom0 and convert to KiB
+    min_mem = XendOptions.instance().get_dom0_min_mem() * 1024
+
+    return total_mem - mem_used - min_mem
+
+def check_for_restore_bail(msg):
+    raise VmError("Cannot restore: %s" % msg)
+
+def check_for_enough_mem(val):
+    mem_avail = get_avail_memory() / 1024
+    log.debug("Available memory: %s MiB, guest requires: %s MiB" % (mem_avail, val))
+    return int(mem_avail) > int(val)
+
+def check_for_restore_is_hvm(cfg):
+    for item in cfg:
+        if (type(item) == list):
+           if item[0] == 'image':
+               return type(item[1]) == list and item[1][0] == 'hvm'
Is there any chance that 'hvm' could appear in a different case? Say 'HVM'?

According to my testing it is always lowercase here, and maybe everything in the SXP is lowercase. I am not sure about everything else, but what I am sure of is that 'hvm' is *always* lowercase.
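To make that concrete, here is roughly what the parsed SXP entry looks like
inside xend once it reaches this check (the kernel path is only illustrative):

    # Illustrative nested-list form of a parsed SXP 'image' entry; path made up.
    item = ['image', ['hvm', ['kernel', '/usr/lib/xen/boot/hvmloader']]]
    is_hvm = type(item[1]) == list and item[1][0] == 'hvm'   # True, tag is lowercase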
+
+def check_for_restore_hvm_have_readonly_ide(cfg):
+    """Check the configuration for read-only IDE devices.
+       Return True if such a device is found."""
+    disallow = None
+    if type(cfg) == list and cfg[0] in ('tap', 'vbd'):
+        for p in cfg:
+            if (type(p) != str):
Is that right? Should it not be == list?

It has been tested and works fine. The elements are never anything other than 'list' or 'str', so implementing it as 'if ... != str' is in fact the same as 'if ... == list'.
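As a concrete (purely illustrative) example of such an entry once the SXP is
parsed into Python lists:

    # Illustrative parsed SXP device entry; device names and path are made up.
    cfg = ['vbd', ['dev', 'hda:disk'], ['mode', 'r'], ['uname', 'file:/images/disk.img']]
    for p in cfg:
        if type(p) != str:        # skips the leading 'vbd' tag; same as type(p) == list
            print p[0], p[1]      # prints the dev / mode / uname pairs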

+                if p[0] == 'dev':
+                    disallow = ((p[1].find('hd') >= 0) and
+                                (p[1].find('cdrom') == -1))
+                if p[0] == 'mode' and disallow:
+                    if p[1] == 'r':
+                        return True
+    return False
+
+def check_for_restore(cfg):
+    is_hvm = check_for_restore_is_hvm(cfg)
+    name = None
+    for item in cfg:
+        if (type(item) == list):
+            # Check for enough memory to create the guest
+            if item[0] == 'memory':
+                if not check_for_enough_mem(item[1]):
+                    check_for_restore_bail('Host machine doesn\'t have enough memory to create the guest')
+            # We disable read-only IDE drives only for HVM guests
+            if item[0] == 'device' and is_hvm:
+                if check_for_restore_hvm_have_readonly_ide(item[1]):
+                    check_for_restore_bail('HVM domains cannot use read-only IDE drives')

  def save(fd, dominfo, network, live, dst, checkpoint=False, node=-1):
      from xen.xend import XendDomain
@@ -220,6 +292,7 @@ def restore(xd, fd, dominfo = None, paus
              othervm = xd.domain_lookup_nr(domconfig["uuid"])
          if othervm is not None and othervm.domid is not None:
              raise VmError("Domain '%s' already exists with ID '%d'" % (domconfig["name_label"], othervm.domid))
+        check_for_restore(vmconfig)

      if dominfo:
          dominfo.resume()
diff -r a24dbfcbdf69 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Tue Jun 22 07:19:38 2010 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Tue Jun 22 12:52:56 2010 +0200
@@ -82,6 +82,32 @@ log = logging.getLogger("xend.XendDomain
  log = logging.getLogger("xend.XendDomainInfo")
  #log.setLevel(logging.TRACE)

+def cfg_is_hvm(cfg):
+    for item in cfg:
+        if (type(item) == list):
+           if item[0] == 'image':
+               return type(item[1]) == list and item[1][0] == 'hvm'
+
+def cfg_hvm_have_readonly_ide_disks(cfg):
+    """Check the configuration for read-only IDE disks, since they
+       are not supported according to the IDE specification."""
+    if not cfg_is_hvm(cfg):
+        return False
+
+    disallow = None
+    for item in cfg:
+        if (type(item) == list):
+            if item[0] == 'device':
+                if type(item[1]) == list and item[1][0] in ('tap', 'vbd'):
+                    for p in item[1]:
+                        if (type(p) != str):
+                            if p[0] == 'dev':
+                              disallow = ((p[1].find('hd') >= 0) and
+                                          (p[1].find('cdrom') == -1))
+                        if p[0] == 'mode' and disallow:
+                            if p[1] == 'r':
+                                return True
+    return False

  def create(config):
      """Creates and start a VM using the supplied configuration.
@@ -100,6 +126,10 @@ def create(config):
          othervm = XendDomain.instance().domain_lookup_nr(domconfig["uuid"])
      if othervm is not None and othervm.domid is not None:
          raise VmError("Domain '%s' already exists with ID '%d'" % (domconfig["name_label"], othervm.domid))
+
+    if cfg_hvm_have_readonly_ide_disks(config):
+        raise VmError("HVM domains cannot use read-only IDE drives")
+
      log.debug("XendDomainInfo.create(%s)", scrub_password(config))
      vm = XendDomainInfo(domconfig)
      try:
Otherwise looks good to me.
Good, thanks for the review.

Michal

--
Michal Novotny <minovotn@xxxxxxxxxx>, RHCE
Virtualization Team (xen userspace), Red Hat

