I've got a patch in our tree that does (basically) what John is describing.
The exact bug we hit was that an "xm shutdown -w vm" did not wait until
the vbds were cleared out before returning. So now I wait until the
backend/vbd nodes go away before returning.
This could probably be done more cleanly with watches, and should be
abstracted out to be sure it applies equally to migration, and so forth.
But for the sake of discussion, the patch is attached.
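The core of the patch is just a wait loop on the backend vbd nodes. As a rough sketch of the idea (not the actual patch; the path, function, and the dict standing in for xenstore are illustrative only, since the real code would go through xend's xstransact/xstore bindings):

```python
import time

def wait_for_devices_gone(read_fn, paths, timeout=10.0, interval=0.1):
    """Poll until every path is absent, or raise on timeout.

    read_fn(path) returns None once the backend has torn the node
    down (xenstore-style semantics); any other value means it is
    still present.
    """
    deadline = time.time() + timeout
    remaining = list(paths)
    while remaining:
        remaining = [p for p in remaining if read_fn(p) is not None]
        if not remaining:
            return
        if time.time() > deadline:
            raise RuntimeError("devices still present: %s" % remaining)
        time.sleep(interval)

# Dict standing in for the store; a vbd backend node for domain 3:
store = {"/local/domain/0/backend/vbd/3/769": "state"}

store.clear()  # the backend finishes its cleanup
wait_for_devices_gone(lambda p: store.get(p),
                      ["/local/domain/0/backend/vbd/3/769"])
print("all vbds gone")
```

Polling is the crude version; as noted above, watches would avoid the sleep loop entirely.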
>>> On Mon, Jul 31, 2006 at 4:26 PM, John Byrne <john.l.byrne@xxxxxx> wrote:
> It would be a bit ugly, but mostly straightforward to watch for the
> destruction of the vbds (or all devices) after the destroyDomain() is
> done and then send an all-clear. (The last time I looked there wasn't
> a waitForDomainDestroy() anywhere, so it would probably be best to add
> one.) This would guarantee correctness, which is the most important thing.
> The problem I see with that strategy is the effect on downtime during
> live-move. Ideally you'd like to start the vbd cleanup when the
> suspend is done and hope to parallelize any final device cleanup
> with the final pass of live-move. How to do that, play nice with
> domain destruction on the normal path, and handle errors seems a lot
> less clear to me.
> So, are you just ignoring the notion of minimizing downtime for the
> moment or is there something I'm missing?
> Andrew Warfield wrote:
>> It's slightly more than a flush that's required. The migration
>> protocol needs to be extended so that execution on the target host
>> doesn't start until all of the outstanding (i.e. issued by the
>> backend) block requests have been either cancelled or acknowledged.
>> This should be pretty straightforward given that the backend
>> ref-counts a blkif's state based on pending requests, and won't tear
>> down the backend directory in xenstore until all the outstanding
>> requests have cleared. All that is likely required is to have the
>> migration code register watches on the backend vbd directories, and
>> wait for them to disappear before giving the all-clear to the new host.
>> We've talked about this enough to know how to fix it, but haven't had
>> a chance to hack it up. (I think Julian has looked into the problem a
>> bit for blktap, but not yet done a general fix.) Patches would
>> certainly be welcome though. ;)
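The protocol extension Andrew describes is really just one extra wait inserted into the migration sequence. A toy sketch of the ordering (the Host stub and its method names are invented for illustration; they are not xend's real API):

```python
import threading

class Host:
    """Stub host that records each migration step, in order."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        def step(*args):
            self.log.append(name)
        return step

def migrate(src, dst, devices_gone):
    src.suspend()
    src.final_memory_pass()
    # The extension discussed above: hold the all-clear until the
    # backend has acked or cancelled every outstanding block request
    # and torn down its vbd directories in xenstore.
    devices_gone.wait(timeout=10.0)
    dst.all_clear()

# Event standing in for "backend vbd directories have disappeared",
# set by a simulated teardown 50 ms later:
teardown = threading.Event()
threading.Timer(0.05, teardown.set).start()

src, dst = Host(), Host()
migrate(src, dst, teardown)
print(src.log, dst.log)
```

The point is purely the ordering: the target host cannot begin execution before the source backend has quiesced, which closes the window John's original question identifies below.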
>> On 7/31/06, John Byrne <john.l.byrne@xxxxxx> wrote:
>>> I don't see any obvious flush to disk taking place for vbds on the
>>> source host in XendCheckpoint.py before the domain is started on the
>>> target host. Is there a guarantee that all written data is on disk
>>> somewhere else or is something needed?
>>> John Byrne
>>> Xen-devel mailing list
>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>> http://lists.xensource.com/xen-devel