
Re: [Xen-devel] Error migrating VM to secondary host using COLO replication





On 10/29/2016 12:56 AM, Konrad Rzeszutek Wilk wrote:
On Thu, Oct 27, 2016 at 08:56:34PM -0200, Sadi wrote:
Hello,
Hey!

CC-ing relevant people.

CC Xie Changlong, Wen Congyang and Yang Hongyang on the COLO-Xen wiki for help.

Thanks
Zhang Chen


I've been trying to get COLO replication working, but I'm stuck on a problem
when migrating the primary VM to the secondary host.

I have been following the instructions from this wiki

- http://wiki.xenproject.org/wiki/COLO_-_Coarse_Grain_Lock_Stepping

and this mail thread

- http://xen.markmail.org/search/?q=COLO#query:COLO+page:1+mid:fb7wrn62vbks4unn+state:results

I'm attaching the steps I took to set up the environment before hitting this
problem when executing the 'xl remus' command:

migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/2840)
Loading new save file <incoming migration stream> (new xl fmt info 0x3/0x0/2840)
Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Saving domain 2, type x86 HVM
xc: info: Found x86 HVM domain from Xen 4.7
xc: info: Restoring domain
xc: Frames iteration 0 of 5: 1045504/1045504  100%
xc: Domain now suspended: 0/0    0%
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: No such file or directory
libxl: error: libxl_colo_restore.c:817:colo_restore_setup_cds_done: COLO: failed to setup device for guest with domid 1
xc: error: Restore failed (38 = Function not implemented): Internal error
libxl: info: libxl_colo_restore.c:320:libxl__colo_restore_teardown: colo fails
libxl: error: libxl_stream_read.c:852:libxl__xc_domain_restore_done: restoring domain: Function not implemented
libxl: info: libxl_colo_restore.c:320:libxl__colo_restore_teardown: colo fails

I'm hoping someone could provide some directions.

Thanks for your time, and sorry for the bad English (not my native language).


Sadi.

Network

master:
br0: 10.20.107.30, bound to eth0
eth1: 192.168.1.30
eth2: 192.168.2.30

slave:
br0: 10.20.107.33, bound to eth0
br1: no IP address, bound to eth1
eth1: 192.168.1.33
eth2: 192.168.2.33

eth1 on both hosts directly connected by cable
eth2 on both hosts directly connected by cable
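
For reference, a minimal sketch of how br0 can be bound to eth0 on the master with
brctl/ip (the commands and the /24 mask are assumptions, not from the original setup;
use your distro's own network scripts if you prefer):

# create br0, enslave eth0, and move the management IP onto the bridge
brctl addbr br0
brctl addif br0 eth0
ip addr flush dev eth0
ip addr add 10.20.107.30/24 dev br0   # assumed /24 mask
ip link set eth0 up
ip link set br0 up

The slave follows the same pattern for br0 (10.20.107.33); br1 is created without an
IP address.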

Repositories used:

https://github.com/Pating/colo-proxy/tree/changlox
https://github.com/macrosheep/iptables.git
https://github.com/torvalds/linux

https://github.com/wencongyang/xen
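
For reference, one way those trees can be fetched so that the ~/colo-proxy and ~/linux
paths used in the build steps below line up (the clone layout itself is an assumption):

cd ~
git clone -b changlox https://github.com/Pating/colo-proxy.git
git clone https://github.com/macrosheep/iptables.git
git clone https://github.com/torvalds/linux.git
git clone https://github.com/wencongyang/xen.git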

Kernel build instructions followed:

2. Prepare host kernel for Dom0
The colo-proxy kernel module needs to cooperate with the Linux kernel, so the kernel has to be
patched with ~/colo-proxy/colo-patch-for-kernel.patch (a quick check of the resulting config is
sketched after these steps):
-cd ~/colo-proxy/; git checkout 405527cbfa9f
-cd ~/linux/; git checkout v4.0; git am ~/colo-proxy/colo-patch-for-kernel.patch
-cp /boot/config-3.0.76-0.11-xen .config; make menuconfig to configure the kernel with Dom0
support. Ref: http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs
-make -j8; make modules_install; make install
-reboot
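
Before rebooting it is worth confirming that the new .config really enables Dom0 support;
a quick sanity check (option names come from the mainline kernel Kconfig, not from this mail):

# confirm Dom0-related options are enabled in the freshly configured kernel
grep -E 'CONFIG_XEN=|CONFIG_XEN_DOM0=|CONFIG_XEN_BLKDEV_BACKEND=|CONFIG_XEN_NETDEV_BACKEND=' .config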


COLO-Proxy:

-cd ~/colo-proxy/; git checkout 405527cbfa9f; make; make install
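
After the install it is easy to check that modprobe will actually find the proxy modules
before going further (module names taken from the 'Running COLO' steps further down):

# rebuild module dependencies and confirm both COLO proxy modules are installed
depmod -a
modinfo xt_PMYCOLO xt_SECCOLO | grep -E '^filename|^depends'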

IPTables:

-cd iptables; ./autogen.sh; ./configure --prefix=/usr/ --libdir=/usr/lib64; 
make; make install

XEN:

-./autogen.sh
-./configure --enable-debug
-touch tools/libxl/libxlu_disk_l.l
-touch tools/libxl/libxlu_cfg_l.l
-make dist-xen
-make dist-tools
-make install-xen
-make install-tools
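
After installing and rebooting into the new hypervisor, a quick check that the toolstack
really is the freshly built 4.7 tree (a sanity check of my own, not part of the original steps):

ldconfig                                   # pick up the newly installed libxl/libxc
xl info | grep -E 'xen_version|xen_extra'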

*I've tried https://github.com/wencongyang/qemu-xen but got an error from qemu when
creating the VM with xl, as follows:

libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Could not set password

Qemu

-cd ~/qemu-xen/; git checkout colo-xen-v2

Configured QEMU with the script provided at:
http://xen.markmail.org/message/y4jcdqxw2s2labdo?q=COLO#query:COLO+page:1+mid:3lzcuzeokqsqpu4i+state:results

*path_to_xen_source updated according to my directory tree (a rough sketch of the configure
call is given after the make steps below).
then:

-make
-make install
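
The linked script essentially boils down to a configure call along these lines; this is only
a sketch, the flags and include paths are assumptions and the exact set should be taken from
the script itself (path_to_xen_source is assumed to point at the wencongyang/xen checkout):

path_to_xen_source=/root/new/pating/xen    # assumed location, adjust to your tree
./configure --enable-xen --target-list=i386-softmmu \
    --extra-cflags="-I$path_to_xen_source/tools/include -I$path_to_xen_source/tools/libxc/include -I$path_to_xen_source/tools/xenstore/include" \
    --extra-ldflags="-L$path_to_xen_source/tools/libxc -L$path_to_xen_source/tools/xenstore"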

Running COLO

*HVM SUSE 64-bit

primary:
rm -f /var/log/xen/*
rm -f /var/lib/xen/userdata-d.*
service xencommons start
modprobe nf_conntrack_ipv4
modprobe xt_PMYCOLO sec_dev=eth1

secondary:
rm -f /var/log/xen/*
rm -f /var/lib/xen/userdata-d.*
service xencommons start
modprobe xt_SECCOLO
active_disk=/mnt/ramfs/active_disk.img
hidden_disk=/mnt/ramfs/hidden_disk.img
local_img=/root/new/SUSE/xenguest.img
tmp_disk_size=`/root/new/pating/qemu-xen/qemu-img info $local_img | grep 'virtual size' | awk '{print $3}'`
rm -rf /mnt/ramfs/*
umount /mnt/ramfs/
rm -rf /mnt/ramfs/
mkdir /mnt/ramfs

function create_image()
{
      /root/new/pating/qemu-xen/qemu-img create -f qcow2 $1 $tmp_disk_size
}
function prepare_temp_images()
{
      grep -q "^none /mnt/ramfs ramfs" /proc/mounts
      if [[ $? -ne 0 ]]; then
          mount -t ramfs none /mnt/ramfs/ -o size=2G
      fi

      if [[ ! -e $active_disk ]]; then
          create_image $active_disk
      fi

      if [[ ! -e $hidden_disk ]]; then
          create_image $hidden_disk
      fi
}
prepare_temp_images

primary:

xl create new/SUSE/vm-suse.cfg
xl pause vm-suse
xl remus -c -u vm-suse 192.168.2.33
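
For completeness: the wiki linked at the top describes a COLO-specific disk line that the
primary guest config (here new/SUSE/vm-suse.cfg) needs so that block replication points at
the secondary's active/hidden disks. A rough single-line sketch adapted to this setup; the
parameter names follow that wiki, and the colo-host/colo-port/colo-export values and the
target path are assumptions that must match the secondary side:

disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,backendtype=qdisk,colo,colo-host=192.168.2.33,colo-port=9000,colo-export=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/root/new/SUSE/xenguest.img' ]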



--
Thanks
zhangchen




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

