[Xen-community] Xen LVM DRBD live migration

To: xen-community@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-community] Xen LVM DRBD live migration
From: Gabriel Rosca <missnebun@xxxxxxxxx>
Date: Sun, 21 Jun 2009 11:42:26 -0400

Hi guys, I have a few problems with live migration and I need some professional help :)

I have two Xen servers running CentOS 5.3, and I want to build a highly available cluster.

 

Now let's begin...

xen0:

 

[root@xen0 ~]# fdisk -l

Disk /dev/sda: 218.2 GB, 218238025728 bytes
255 heads, 63 sectors/track, 26532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        3862    30716280   83  Linux
/dev/sda3            3863        5902    16386300   82  Linux swap / Solaris
/dev/sda4            5903       26532   165710475    5  Extended
/dev/sda5            5903       26532   165710443+  8e  Linux LVM

 

[root@xen0 ~]# pvcreate /dev/sda5
  Physical volume "/dev/sda5" successfully created
[root@xen0 ~]# vgcreate -c n LVM /dev/sda5
  Non-clustered volume group "LVM" successfully created
[root@xen0 ~]# lvcreate -L 12G -n genxmonitor LVM
  Logical volume "genxmonitor" created
[root@xen0 ~]# drbdadm create-md genxmonitor
md_offset 12884897792
al_offset 12884865024
bm_offset 12884471808

Found some data
 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

You want me to create a v08 style flexible-size internal meta data block.
There apears to be a v08 flexible-size internal meta data block
already in place on /dev/LVM/genxmonitor at byte offset 12884897792
Do you really want to overwrite the existing v08 meta-data?
[need to type 'yes' to confirm] yes

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

My drbd.conf:

#
# Global Parameters
#
global {
        # Participate in http://usage.drbd.org
        usage-count yes;
}

#
# Settings common to all resources
#
common {
        # Set sync rate
        syncer { rate 100M; }

        # Protocol C: both nodes have to commit before a write
        # is considered successful
        protocol C;

        net {
                # Xen tests that it can write to the block device
                # before starting up. Not allowing this causes
                # migration to fail.
                allow-two-primaries;

                # Split-brain recovery parameters
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
        }

        startup {
                become-primary-on both;
        }

#       handlers {
#               pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
#               pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
#               local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
#        }
}

#
# Resource Definitions
#
resource "genxmonitor" {

        on xen0.genx.local {
                # The block device it will appear as
                device    /dev/drbd0;

                # The device we are mirroring
                disk      /dev/LVM/genxmonitor;

                # Store DRBD meta data on the above disk
                meta-disk internal;

                # Address of *this* host and port to replicate over.
                # You must use a different port for each resource.
                address   172.16.160.23:7790;
        }

        on xen1.genx.local {
                device    /dev/drbd0;
                disk      /dev/LVM/genxmonitor;
                meta-disk internal;
                address   172.16.160.103:7790;
        }
}

[root@xen0 ~]# drbdadm -- --overwrite-data-of-peer primary genxmonitor
[root@xen0 ~]# cat /proc/drbd
version: 8.3.1 (api:88/proto:86-89)
GIT-hash: fd40f4a8f9104941537d1afc8521e584a6d3003c build by root@xxxxxxxxxxxxxxx, 2009-06-14 11:33:42
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:12582492 nr:0 dw:0 dr:12582492 al:0 bm:768 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@xen1 ~]# drbdadm primary genxmonitor
[root@xen1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.1 (api:88/proto:86-89)
GIT-hash: fd40f4a8f9104941537d1afc8521e584a6d3003c build by root@xxxxxxxxxxxxxxx, 2009-06-14 11:33:42
m:res          cs         ro               ds                 p  mounted  fstype
0:genxmonitor  Connected  Primary/Primary  UpToDate/UpToDate  C
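Since live migration needs write access on both ends, it may be worth scripting a check that the resource really is Connected and dual-primary before attempting `xm migrate --live`. A minimal sketch (the helper function is my own, based on the /proc/drbd line format shown above, not a DRBD tool):

```shell
#!/bin/sh
# Sketch (my own helper, not part of DRBD): confirm a resource is
# Connected and dual-primary by parsing a /proc/drbd status line.
# On a live node: check_dual_primary "$(grep 'cs:' /proc/drbd)"
check_dual_primary() {
    if printf '%s\n' "$1" | grep -q 'cs:Connected ro:Primary/Primary'; then
        echo "dual-primary OK"
    else
        echo "NOT dual-primary"
    fi
}

# Sample line in the format of the cat /proc/drbd output above,
# after both nodes have been promoted:
check_dual_primary " 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r---"
```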

 

Now back on xen0, here is my domU install file:

kernel = "/boot/genx_vmlinuz"
ramdisk = "/boot/genx_initrd.img"
extra = "text ks=http://pxeboot.genx.local/ksfiles/x86_hardraid_xen/ks0.cfg"
name = "genx-monitor"
memory = "512"
disk = [ 'drbd:genxmonitor,xvda,w' ]
vif = [ "mac=00:16:3e:20:8c:a2,bridge=xenbr0" ]
vcpus=1
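The `drbd:` disk prefix above relies on DRBD's Xen block helper script being installed in Xen's scripts directory. A quick sanity check might look like this (the path is where DRBD 8.3 packaging normally puts the helper; treat it as an assumption to verify on your install):

```shell
#!/bin/sh
# Sketch (my own helper): check that a Xen block helper script exists
# and is executable. /etc/xen/scripts/block-drbd is the usual DRBD 8.3
# install location; adjust the path if your packaging differs.
check_helper() {
    if [ -x "$1" ]; then
        echo "present: $1"
    else
        echo "missing: $1"
    fi
}

check_helper /etc/xen/scripts/block-drbd
```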


 

[root@xen0 ~]# xm create /etc/xen/servers/genx-monitor2
Using config file "/etc/xen/servers/genx-monitor2".
Started domain genx-monitor
[root@xen0 ~]# less /etc/xen/genxmonitor
name = "genx-monitor"
uuid = "364ed881-6e29-43d1-6529-2f702e8daefb"
memory = "512"
maxmem = 512
bootloader = "/usr/bin/pygrub"
#disk = [ "drbd:genx-monitor-root,xvda1,w" ]
disk = [ "phy:drbd0,xvda,w" ]
vif = [ "mac=00:16:3e:20:8c:a2,bridge=xenbr0" ]
vfb = [  ]
vcpus=1


[root@xen0 ~]# drbdadm primary genxmonitor
[root@xen0 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.1 (api:88/proto:86-89)
GIT-hash: fd40f4a8f9104941537d1afc8521e584a6d3003c build by root@xxxxxxxxxxxxxxx, 2009-06-14 11:33:42
m:res          cs         ro               ds                 p  mounted  fstype
0:genxmonitor  Connected  Primary/Primary  UpToDate/UpToDate  C
[root@xen0 ~]# xm create genxmonitor
Using config file "/etc/xen/genxmonitor".
Started domain genx-monitor
[root@xen0 ~]# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0     7529     8 r-----    169.4
genx-monitor                               3      511     1 -b----     23.4

 

[root@xen0 ~]# xm migrate --live genx-monitor xen1
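For `xm migrate --live` to even be attempted, xend on the destination host must accept relocation connections. In case that part is not set up yet, the relevant entries in /etc/xen/xend-config.sxp look roughly like this (a sketch; the host patterns are illustrative for this pair of machines, and xend must be restarted on both nodes after editing):

```
(xend-relocation-server yes)
(xend-relocation-port 8002)
# An empty string would allow any host; better to allow only the peer.
(xend-relocation-hosts-allow '^localhost$ ^xen[01]\\.genx\\.local$')
```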

 

 

 

xen0 log:

 

[2009-06-21 11:34:42 xend 4686] DEBUG (balloon:149) Balloon: 548 KiB free; 0 to scrub; need 3072; retries: 20.
[2009-06-21 11:34:42 xend 4686] DEBUG (balloon:164) Balloon: setting dom0 target to 7526 MiB.
[2009-06-21 11:34:42 xend.XendDomainInfo 4686] DEBUG (XendDomainInfo:1126) Setting memory target of domain Domain-0 (0) to 7526 MiB.
[2009-06-21 11:34:42 xend 4686] DEBUG (balloon:143) Balloon: 3620 KiB free; need 3072; done.
[2009-06-21 11:34:42 xend 4686] DEBUG (XendCheckpoint:89) [xc_save]: /usr/lib64/xen/bin/xc_save 22 3 0 0 1
[2009-06-21 11:34:44 xend 4686] INFO (XendCheckpoint:351) ERROR Internal error: Timed out waiting for frame list updated.
[2009-06-21 11:34:44 xend 4686] INFO (XendCheckpoint:351) ERROR Internal error: Failed to map/save the p2m frame list
[2009-06-21 11:34:44 xend 4686] INFO (XendCheckpoint:351) Save exit rc=1
[2009-06-21 11:34:44 xend 4686] ERROR (XendCheckpoint:133) Save failed on domain genx-monitor (3).
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 110, in save
    forkHelper(cmd, fd, saveInputHandler, False)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 339, in forkHelper
    raise XendError("%s failed" % string.join(cmd))
XendError: /usr/lib64/xen/bin/xc_save 22 3 0 0 1 failed
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] DEBUG (XendDomainInfo:1669) XendDomainInfo.resumeDomain(3)
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:45 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:45 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:45 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:34:45 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:767) Dev still active but hit max loop timeout
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] DEBUG (XendDomainInfo:1682) XendDomainInfo.resumeDomain: devices released
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] DEBUG (XendDomainInfo:832) Storing domain details: {'console/ring-ref': '2206621', 'console/port': '2', 'name': 'migrating-genx-monitor', 'console/limit': '1048576', 'vm': '/vm/364ed881-6e29-43d1-6529-2f702e8daefb', 'domid': '3', 'cpu/0/availability': 'online', 'memory/target': '524288', 'store/ring-ref': '2206622', 'store/port': '1'}
[2009-06-21 11:35:44 xend 4686] DEBUG (blkif:27) exception looking up device number for xvda: [Errno 2] No such file or directory: '/dev/xvda'
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] ERROR (XendDomainInfo:1699) XendDomainInfo.resume: xc.domain_resume failed on domain 3.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1693, in resumeDomain
    self.createDevices()
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1750, in createDevices
    self.createDevice(n, c)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1202, in createDevice
    return self.getDeviceController(deviceClass).createDevice(devconfig)
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 106, in createDevice
    raise VmError("Device %s is already connected." % dev_str)
VmError: Device xvda (51712, vbd) is already connected.
[2009-06-21 11:35:44 xend 4686] DEBUG (XendCheckpoint:136) XendCheckpoint.save: resumeDomain
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...
[2009-06-21 11:35:44 xend.XendDomainInfo 4686] INFO (XendDomainInfo:1790) Dev 51712 still active, looping...

 

xen1 log:

 

 

[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:281) XendDomainInfo.restore(['domain', ['domid', '3'], ['uuid', '364ed881-6e29-43d1-6529-2f702e8daefb'], ['vcpus', '1'], ['vcpu_avail', '1'], ['cpu_weight', '1.0'], ['memory', '512'], ['shadow_memory', '0'], ['maxmem', '512'], ['bootloader', '/usr/bin/pygrub'], ['features'], ['name', 'genx-monitor'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['ramdisk', '/var/lib/xen/boot_ramdisk.Ybfgoz'], ['kernel', '/var/lib/xen/boot_kernel.g-vUFL'], ['args', 'ro root=LABEL=/ console=xvc0']]], ['device', ['vif', ['backend', '0'], ['script', 'vif-bridge'], ['bridge', 'xenbr0'], ['mac', '00:16:3e:20:8c:a2']]], ['device', ['vbd', ['backend', '0'], ['dev', 'xvda:disk'], ['uname', 'phy:drbd0'], ['mode', 'w']]], ['state', '-b----'], ['shutdown_reason', 'poweroff'], ['cpu_time', '23.404180781'], ['online_vcpus', '1'], ['up_time', '131.227479935'], ['start_time', '1245598351.71'], ['store_mfn', '2206622'], ['console_mfn', '2206621']])
[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:312) parseConfig: config is ['domain', ['domid', '3'], ['uuid', '364ed881-6e29-43d1-6529-2f702e8daefb'], ['vcpus', '1'], ['vcpu_avail', '1'], ['cpu_weight', '1.0'], ['memory', '512'], ['shadow_memory', '0'], ['maxmem', '512'], ['bootloader', '/usr/bin/pygrub'], ['features'], ['name', 'genx-monitor'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['ramdisk', '/var/lib/xen/boot_ramdisk.Ybfgoz'], ['kernel', '/var/lib/xen/boot_kernel.g-vUFL'], ['args', 'ro root=LABEL=/ console=xvc0']]], ['device', ['vif', ['backend', '0'], ['script', 'vif-bridge'], ['bridge', 'xenbr0'], ['mac', '00:16:3e:20:8c:a2']]], ['device', ['vbd', ['backend', '0'], ['dev', 'xvda:disk'], ['uname', 'phy:drbd0'], ['mode', 'w']]], ['state', '-b----'], ['shutdown_reason', 'poweroff'], ['cpu_time', '23.404180781'], ['online_vcpus', '1'], ['up_time', '131.227479935'], ['start_time', '1245598351.71'], ['store_mfn', '2206622'], ['console_mfn', '2206621']]
[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:417) parseConfig: result is {'shadow_memory': 0, 'start_time': 1245598351.71, 'uuid': '364ed881-6e29-43d1-6529-2f702e8daefb', 'on_crash': 'restart', 'on_reboot': 'restart', 'localtime': None, 'image': ['linux', ['ramdisk', '/var/lib/xen/boot_ramdisk.Ybfgoz'], ['kernel', '/var/lib/xen/boot_kernel.g-vUFL'], ['args', 'ro root=LABEL=/ console=xvc0']], 'on_poweroff': 'destroy', 'bootloader_args': None, 'cpus': None, 'name': 'genx-monitor', 'backend': [], 'vcpus': 1, 'cpu_weight': 1.0, 'features': None, 'vcpu_avail': 1, 'memory': 512, 'device': [('vif', ['vif', ['backend', '0'], ['script', 'vif-bridge'], ['bridge', 'xenbr0'], ['mac', '00:16:3e:20:8c:a2']]), ('vbd', ['vbd', ['backend', '0'], ['dev', 'xvda:disk'], ['uname', 'phy:drbd0'], ['mode', 'w']])], 'bootloader': '/usr/bin/pygrub', 'cpu': None, 'maxmem': 512}
[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:1427) XendDomainInfo.construct: None
[2009-06-21 11:34:43 xend 4157] DEBUG (balloon:143) Balloon: 527764 KiB free; need 2048; done.
[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:797) Storing VM details: {'shadow_memory': '0', 'uuid': '364ed881-6e29-43d1-6529-2f702e8daefb', 'on_reboot': 'restart', 'start_time': '1245598351.71', 'on_poweroff': 'destroy', 'name': 'genx-monitor', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'memory': '512', 'on_crash': 'restart', 'image': "(linux (ramdisk /var/lib/xen/boot_ramdisk.Ybfgoz) (kernel /var/lib/xen/boot_kernel.g-vUFL) (args 'ro root=LABEL=/ console=xvc0'))", 'maxmem': '512'}
[2009-06-21 11:34:43 xend 4157] DEBUG (DevController:110) DevController: writing {'backend-id': '0', 'mac': '00:16:3e:20:8c:a2', 'handle': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/2/0'} to /local/domain/2/device/vif/0.
[2009-06-21 11:34:43 xend 4157] DEBUG (DevController:112) DevController: writing {'bridge': 'xenbr0', 'domain': 'genx-monitor', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/2/device/vif/0', 'mac': '00:16:3e:20:8c:a2', 'online': '1', 'frontend-id': '2'} to /local/domain/0/backend/vif/2/0.
[2009-06-21 11:34:43 xend 4157] DEBUG (blkif:27) exception looking up device number for xvda: [Errno 2] No such file or directory: '/dev/xvda'
[2009-06-21 11:34:43 xend 4157] DEBUG (DevController:110) DevController: writing {'backend-id': '0', 'virtual-device': '51712', 'device-type': 'disk', 'state': '1', 'backend': '/local/domain/0/backend/vbd/2/51712'} to /local/domain/2/device/vbd/51712.
[2009-06-21 11:34:43 xend 4157] DEBUG (DevController:112) DevController: writing {'domain': 'genx-monitor', 'frontend': '/local/domain/2/device/vbd/51712', 'format': 'raw', 'dev': 'xvda', 'state': '1', 'params': 'drbd0', 'mode': 'w', 'online': '1', 'frontend-id': '2', 'type': 'phy'} to /local/domain/0/backend/vbd/2/51712.
[2009-06-21 11:34:43 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:832) Storing domain details: {'console/port': '2', 'name': 'genx-monitor', 'console/limit': '1048576', 'vm': '/vm/364ed881-6e29-43d1-6529-2f702e8daefb', 'domid': '2', 'cpu/0/availability': 'online', 'memory/target': '524288', 'store/port': '1'}
[2009-06-21 11:34:43 xend 4157] DEBUG (XendCheckpoint:198) restore:shadow=0x0, _static_max=0x200, _static_min=0x200,
[2009-06-21 11:34:43 xend 4157] DEBUG (balloon:143) Balloon: 527756 KiB free; need 524288; done.
[2009-06-21 11:34:43 xend 4157] DEBUG (XendCheckpoint:215) [xc_restore]: /usr/lib64/xen/bin/xc_restore 15 2 1 2 0 0 0
[2009-06-21 11:34:43 xend 4157] INFO (XendCheckpoint:351) xc_domain_restore start: p2m_size = 20800
[2009-06-21 11:36:15 xend 4157] INFO (XendCheckpoint:351) ERROR Internal error: read extended-info signature failed
[2009-06-21 11:36:15 xend 4157] INFO (XendCheckpoint:351) Restore exit with rc=1
[2009-06-21 11:36:15 xend.XendDomainInfo 4157] DEBUG (XendDomainInfo:1637) XendDomainInfo.destroy: domid=2
[2009-06-21 11:36:15 xend.XendDomainInfo 4157] ERROR (XendDomainInfo:1645) XendDomainInfo.destroy: xc.domain_destroy failed.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1643, in destroy
    xc.domain_destroy(self.domid)
Error: (3, 'No such process')
[2009-06-21 11:36:15 xend 4157] ERROR (XendDomain:278) Restore failed
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 273, in domain_restore_fd
    return XendCheckpoint.restore(self, fd)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 219, in restore
    forkHelper(cmd, fd, handler.handler, True)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 339, in forkHelper
    raise XendError("%s failed" % string.join(cmd))
XendError: /usr/lib64/xen/bin/xc_restore 15 2 1 2 0 0 0 failed
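When wading through failures like the two logs above, a simple filter that keeps only the ERROR lines makes the root cause (here the xc_save/xc_restore failures) stand out from the DEBUG noise. A sketch, assuming the CentOS 5 default log path /var/log/xen/xend.log:

```shell
#!/bin/sh
# Sketch (my own helper): print only the ERROR lines from a xend log.
xend_errors() {
    grep 'ERROR' "$1"
}

# Usage on a live host:
# xend_errors /var/log/xen/xend.log
```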

 

_______________________________________________
Xen-community mailing list
Xen-community@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-community