I think disk0 is your domU image on the SAN:
/virtmach/images/sles10ak/disk0
Did you mount your SAN disk partition at a common mount point on both
machines, i.e. mount /dev/sdc at a common mount point like /virtmach on both machines?
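For example, a stable by-label device path sidesteps the fact that the LUN shows up as /dev/sdc on one host and /dev/sdb on the other (a sketch; the label "virtmach" and the /virtmach mount point are taken from the output in Andre's mail):

```shell
# The by-label path is identical on both dom0 hosts even though the raw
# device name differs (/dev/sdc vs /dev/sdb in Andre's hwinfo output).
DEV=/dev/disk/by-label/virtmach
MNT=/virtmach

# Then, as root, on BOTH machines:
#   mkdir -p "$MNT"
#   mount -t ocfs2 "$DEV" "$MNT"
```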
Because from the log, it looks like the destination machine is not able
to access disk0.
A simple check is to just copy the domU config file to the other machine and try
to create the domU there. If that goes fine, then you could try migration.
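Something like the following (a sketch: the /etc/xen/vm path is an assumption based on where SLES normally keeps domU configs, and "otherhost" is a placeholder for the second dom0):

```shell
# Assumed SLES config location and a placeholder hostname; adjust both.
CFG=/etc/xen/vm/sles10-1

# On the machine where the domU was created:
#   scp "$CFG" otherhost:/etc/xen/vm/
# Then on the destination machine:
#   xm create "$CFG"
```

If `xm create` fails there with the same "Backend device not found" error, the problem is disk access on that host, not migration itself.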
--Triok
On 8/9/07, *Andre Konopka* <andre.konopka@xxxxxxxxxxxxxx> wrote:
Hi
I'm trying to build an HA Xen cluster on SLES10 SP1.
I have two machines running a Xen dom0; the two machines share a SAN
disk, formatted as an OCFS2 partition.
First I installed SLES10 as a guest machine. The image is stored on my
shared partition.
I can start and stop the VM on the machine where I created it without
any problem...
Okay, as the next step I changed the xend-config.sxp config file and enabled
relocation.
First I tried an offline migration, without success...
In the log I found the following information (on the target machine).
Maybe someone can explain it a little bit:
[2007-08-08 14:31:58 xend 4730] ERROR (XendDomain:1011) Restore failed
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 1006, in domain_restore_fd
    return XendCheckpoint.restore(self, fd, paused=paused)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 173, in restore
    dominfo.waitForDevices() # Wait for backends to set up
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 503, in waitForDevices
    self.getDeviceController(devclass).waitForDevices()
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 149, in waitForDevices
    return map(self.waitForDevice, self.deviceIDs())
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 168, in waitForDevice
    raise VmError("Device %s (%s) could not be connected. "
VmError: Device 51728 (vbd) could not be connected. Backend device not found.
To me it seems that the second machine can't activate the disk?
On the first machine my shared partition shows up as /dev/sdc:
Device File: /dev/sdc (/dev/sg3)
Device Files: /dev/sdc,
/dev/disk/by-id/scsi-3600508b4001077d100008000ecdf0000,
/dev/disk/by-path/pci-0000:10:00.0-scsi-0:0:0:3,
/dev/disk/by-uuid/7646af4f-e57e-4f1c-974b-308d552ff4a5,
/dev/disk/by-label/virtmach
on the second machine as /dev/sdb
Device File: /dev/sdb (/dev/sg2)
Device Files: /dev/sdb,
/dev/disk/by-id/scsi-3600508b4001077d100008000ecdf0000,
/dev/disk/by-path/pci-0000:10:00.0-scsi-0:0:0:2,
/dev/disk/by-uuid/7646af4f-e57e-4f1c-974b-308d552ff4a5,
/dev/disk/by-label/virtmach
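Since the kernel enumerates the LUN differently on each host, mounting by label (or by-id/by-uuid) keeps the path identical on both machines. A sketch of an /etc/fstab entry using the label shown above ("_netdev" defers mounting until the network/SAN is up; verify the options against your OCFS2 setup):

```
LABEL=virtmach  /virtmach  ocfs2  _netdev  0 0
```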
This is the config file, created by the graphical 'virt-manager'.
ostype="sles10"
name="sles10-1"
memory=512
vcpus=1
uuid="55adf3ee-9f23-f3d4-6cad-48d4c43cdf84"
on_crash="destroy"
on_poweroff="destroy"
on_reboot="restart"
localtime=0
builder="linux"
bootloader="/usr/lib/xen/boot/domUloader.py"
bootargs="--entry=xvda2:/boot/vmlinuz-xen,/boot/initrd-xen"
extra="TERM=xterm "
disk=[ 'file:/virtmach/images/sles10ak/disk0,xvda,w',
'file:/isoimages/SLES-10-SP1-x86_64-DVD1.iso,xvdb,r', ]
vif=[ 'mac=00:16:3e:6f:70:d0', ]
vfb=["type=vnc,vncunused=1"]
After booting both dom0 machines my freshly installed domU isn't visible
any longer.
The image is still on my shared partition, but 'xm list' only reports
the dom0 instance?
HOW can I 'import' my 'lost' domU?
I think this 'import' is also necessary for my HA solution: if my first
machine crashes, the second one must be able to 'import' the domU in
order to start it.
Best regards
Andre
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users