xen-devel

Re: [Xen-devel] blkif migration problem

To: Ewan Mellor <ewan@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] blkif migration problem
From: Cristian Zamfir <zamf@xxxxxxxxxxxxx>
Date: Thu, 07 Dec 2006 18:14:39 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 07 Dec 2006 10:14:32 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20061207160216.GK30076@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4578379B.7000006@xxxxxxxxxxxxx> <20061207160216.GK30076@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.8 (X11/20061117)
Ewan Mellor wrote:
On Thu, Dec 07, 2006 at 03:47:39PM +0000, Cristian Zamfir wrote:

Hi,

I am trying to live migrate blkif devices backed by DRBD devices and I have been struggling with a problem for a few days now. The problem is that after migration, the domU cannot load any new programs into memory. The ssh connection survives the migration and I can run programs that are already in memory, but not anything that needs to be loaded from disk.

I am currently testing with an almost idle machine, and I am triggering the drive migration after the domain is suspended, in step 2, from XendCheckpoint.py: dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP2, domain_name).

However, I have also tried triggering it before the domain is suspended, from step 1 (dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP1, domain_name)), and everything works fine, except that there is the obvious possibility of losing some writes to the disk because the domain is not suspended yet.

After migration, when I reattach a console I get this message:
"vbd vbd-769: 16 Device in use; refusing to close"
This is printed from the backend_changed() function in blkfront.c, but I cannot figure out why this error occurs.

I believe that this means that the frontend has seen that the backend is
tearing down, but since the device is still mounted inside the guest, it's
refusing.  I don't think that the frontend ought to see the backend tear down
at all -- the guest ought to be suspended before you tear down the backend
device.


I am triggering the migration in DEV_MIGRATE_STEP2, which is right after the domain has been suspended, as far as I can tell from the Python code in XendCheckpoint.py:

dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP1, domain_name)
....
....
def saveInputHandler(line, tochild):
    log.debug("In saveInputHandler %s", line)
    if line == "suspend":
        log.debug("Suspending %d ...", dominfo.getDomid())
        dominfo.shutdown('suspend')
        dominfo.waitForShutdown()
        dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP2,
                               domain_name)
        log.info("Domain %d suspended.", dominfo.getDomid())
        dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP3,
                               domain_name)


"Triggering the migration" involves dominfo.migrateDevices(..) calling my script in /etc/xen/scripts. This script checks that the drive at the source and the replica at the destination are in sync and then switches their roles (the one on the source becomes secondary and the one on the destination becomes primary). But since the guest is suspended at this point, I don't understand why should the frontend see any change.

I found that DRBD devices are not really usable while they are in the secondary state; only the primary one may be mounted. For instance, when trying to mount a DRBD device in secondary state I get this error:
#mount -r -t reiserfs /dev/drbd1 /mnt/vm
mount: /dev/drbd1 already mounted or /mnt/vm busy

Therefore, could this error happen on the destination, during restore while waiting for backends to set up, if the drive is in secondary state?
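
To check this, on the destination I could watch both the DRBD role and the state nodes of vbd 769 in xenstore while the restore is waiting for the backends. A rough sketch (the domid is a placeholder, the xenstore paths assume the usual Xen 3.x layout, and the role field in /proc/drbd differs between DRBD versions):

#!/usr/bin/env python
# Rough diagnostic sketch for the destination dom0.

import subprocess
import time

DOMID = 12        # placeholder: domid of the restored domain
DEVID = "769"     # vbd device id from the error message

def xenstore_read(path):
    p = subprocess.Popen(["xenstore-read", path],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = p.communicate()
    if p.returncode != 0:
        return "<absent>"
    return out.strip()

def drbd_roles():
    # The role field is prefixed "ro:" (DRBD 8) or "st:" (DRBD 0.7).
    return [f for f in open("/proc/drbd").read().split()
            if f[:3] in ("ro:", "st:")]

frontend = "/local/domain/%d/device/vbd/%s/state" % (DOMID, DEVID)
backend  = "/local/domain/0/backend/vbd/%d/%s/state" % (DOMID, DEVID)

# XenbusState values: 1 Initialising, 2 InitWait, 3 Initialised,
#                     4 Connected, 5 Closing, 6 Closed.
for _ in range(30):
    print "drbd=%s frontend=%s backend=%s" % (
        drbd_roles(), xenstore_read(frontend), xenstore_read(backend))
    time.sleep(1)

If the backend node reaches 5 (Closing) while the DRBD device is still secondary, that would at least explain the "refusing to close" message from the frontend.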

I also don't understand why everything works if I migrate the hard drive in DEV_MIGRATE_STEP1. The only error I get in this case is reiserfs complaining about some writes that failed, but everything else seems OK.


I cannot really try a localhost migration because I think DRBD only works between two machines, but I have tested most of my code outside Xen and it worked.
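
That test was basically driving the two halves of the switch by hand from the two dom0s and checking that the device becomes mountable on the destination, roughly like this (again placeholder host and resource names):

import subprocess

SRC, DST, RES = "src-host", "dest-host", "vm-disk"   # placeholders

def ssh(host, *args):
    # Run a command on the given dom0 over ssh; fail loudly on error.
    rc = subprocess.call(["ssh", host] + list(args))
    if rc != 0:
        raise RuntimeError("%s: %r failed" % (host, args))

# 1. look at the sync state as seen from the source
#    (should print UpToDate/UpToDate)
ssh(SRC, "drbdadm", "dstate", RES)
# 2. demote the source, then promote the destination
ssh(SRC, "drbdadm", "secondary", RES)
ssh(DST, "drbdadm", "primary", RES)
# 3. confirm the destination can now mount the device read-only
ssh(DST, "mount", "-r", "-t", "reiserfs", "/dev/drbd1", "/mnt/vm")
ssh(DST, "umount", "/mnt/vm")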

Thank you very much for your help.



When you say that you are "triggering the drive migration", what does that
involve?  Why would the frontend see the store contents change at all at this
point?

Have you tried a localhost migration?  This would be easier, because you don't
actually need to move the disk of course, so you can get half your signalling
tested before moving on to the harder problem.

Ewan.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
