WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Re: [Xen-devel] Pending Disk io requests during live migration

To: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Pending Disk io requests during live migration
From: Kaushik Bhandankar <kaushikb@xxxxxxxxxxxxx>
Date: Thu, 04 Oct 2007 12:21:58 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 04 Oct 2007 09:22:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20071004082918.GA3870@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <470467CA.4090808@xxxxxxxxxxxxx> <47047878.2080806@xxxxxxxxxxxxx> <20071004082918.GA3870@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.13 (X11/20070824)
Thanks, Tim. I have also mailed qemu-devel.

I actually want to enhance the VMM so that once the VM has migrated, the old VMM can transfer the responses to the pending disk I/O requests over to the new VMM (using some communication channel between the VMMs).

You said: "They're somewhere in qemu's disk i/o model. You could modify the IDE controller to remember outstanding requests so they could be reissued, or possibly you could add save/restore handlers in the disks themselves." I have several questions here:

* I am a newbie, so I don't know where the source for qemu's disk I/O model lives.
* Where is the IDE controller source code located? (I guess the emulated IDE disk code is in ioemu/hw/ide.c.)
* The save/restore handlers for IDE disks are pci_ide_save()/pci_ide_load() in ioemu/hw/ide.c. Knowing where the VMM stores the pending disk I/O requests would allow me to modify these handlers to remember the outstanding requests.


Further, I am unclear on what the BMDMA structure in ide.c is meant for; I could not find relevant documentation about it.


Thanks,
Kaushik
--------------------------------------------------------------------------------------------------------------------------------------------

Tim Deegan wrote:
Hi,

At 01:22 -0400 on 04 Oct (1191460920), Kaushik Bhandankar wrote:
> register_savevm() in tools/ioemu/vl.c is simply used to register the save & load routines. register_savevm() is called in tools/ioemu/hw/ide.c:pci_piix_ide_init to register pci_ide_save() and pci_ide_load() as the save & load routines for IDE disks.
>
> But I am still unsure as to where these save/load routines for IDE disks get invoked.

qemu_savevm() in vl.c walks the list of registered save handlers.
qemu_loadvm() in vl.c expects a load handler to have been registered for each chunk of the save file.

> Basically, ide.c:pci_ide_save() saves the state of the IDE disk in a QEMUFile and this file is sent over the network (can somebody point me to the code where this happens?)

Search for 'qemu' in tools/python/xen/xend/XendCheckpoint.py

> so that the new VMM (where the VM has migrated) invokes ide.c:pci_ide_load() to retrieve the IDE disk contents from the file. As of now, the pending disk I/O requests do not get saved in this file, so these pending disk I/O requests are not available when executing pci_ide_load().

Yes.  I looked at this before but it seemed like a PITA to track down
the request in whatever DMA callback it was living in, so I just made
pci_ide_load signal an abort and let the OS pick up the pieces.  Have
you got a system where this doesn't work, or are you just trying to do
something a bit less nasty?

> I am still trying to figure out where the pending disk I/O requests get stored in the VMM, so that during live VM migration these requests can be put in the QEMUFile (as mentioned above).

They're somewhere in qemu's disk i/o model.  You could modify the IDE
controller to remember outstanding requests so they could be reissued,
or possibly you could add save/restore handlers in the disks themselves.
Also it might be worth asking on qemu-devel since the IDE save/restore
code is independent of Xen.

Cheers,

Tim.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel