
Re: [Xen-devel] Pending Disk io requests during live migration



Thanks, Tim. I have also mailed qemu-devel.

I actually want to enhance the VMM so that, once the VM has migrated, the old VMM can transfer the responses to the pending disk I/O requests over to the new VMM (using some communication channel between the VMMs).

You said "They're somewhere in qemu's disk i/o model. You could modify the IDE controller to remember outstanding requests so they could be reissued, or possibly you could add save/restore handlers in the disks themselves". I have several doubts here...

* I am a newbie, so I don't know where the source for qemu's disk I/O model lives.
* Where is the IDE controller source code located? (I guess the emulated IDE disk code is in ioemu/hw/ide.c.)
* The save/restore handlers for IDE disks are pci_ide_save()/pci_ide_load() in ioemu/hw/ide.c (see the sketch after this list). Knowing where the VMM stores the pending disk I/O requests would allow me to modify these handlers to remember the outstanding requests.
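For reference, my current reading of the save handler is roughly the shape below. This is an illustrative sketch from memory, not the exact code (field names and helpers differ between qemu/ioemu versions): it serialises the PCI config space, the bus-master DMA registers and the per-drive register state, but nothing about an in-flight request.

    /* Rough shape of the existing save handler in ioemu/hw/ide.c.
     * Illustrative only; exact fields vary between qemu/ioemu versions,
     * and the qemu types (QEMUFile, PCIIDEState, ...) come from its headers. */
    static void pci_ide_save(QEMUFile *f, void *opaque)
    {
        PCIIDEState *d = opaque;
        int i;

        pci_device_save(&d->dev, f);          /* PCI config space */

        for (i = 0; i < 2; i++) {
            BMDMAState *bm = &d->bmdma[i];
            qemu_put_8s(f, &bm->cmd);         /* bus-master command register */
            qemu_put_8s(f, &bm->status);      /* bus-master status register */
            /* note: an in-flight DMA transfer is not saved here */
        }

        for (i = 0; i < 4; i++) {
            IDEState *s = &d->ide_if[i];
            qemu_put_8s(f, &s->status);       /* taskfile registers ... */
            /* ... remaining per-drive registers elided ... */
        }
    }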


Further, I am unclear on what the BMDMA structure in ide.c is meant for; I could not find any relevant documentation about it.
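From what I can tell (please correct me if this is wrong), BMDMA models the PCI IDE controller's Bus Master DMA engine: the guest programs it with a Physical Region Descriptor (PRD) table in guest memory, and the emulation walks that table to copy sector data to/from the guest while an asynchronous block request is in flight. My rough understanding of the structure is sketched below; the exact fields differ between versions, so treat it as illustrative only:

    /* Illustrative sketch of BMDMAState in ioemu/hw/ide.c (not exact). */
    typedef struct BMDMAState {
        uint8_t cmd;                       /* bus-master command register */
        uint8_t status;                    /* bus-master status register */
        uint32_t addr;                     /* guest-physical address of the PRD table */

        /* current transfer state while a DMA request is in flight */
        uint32_t cur_addr;                 /* current PRD entry being walked */
        uint32_t cur_prd_addr;
        uint32_t cur_prd_len;

        struct IDEState *ide_if;           /* drive the transfer belongs to */
        BlockDriverCompletionFunc *dma_cb; /* callback driving the transfer */
        BlockDriverAIOCB *aiocb;           /* outstanding async block request */
    } BMDMAState;

If that reading is right, the dma_cb/aiocb pair is roughly "where the pending request lives" while it is outstanding, which seems to be what Tim refers to below.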


Thanks,
Kaushik
--------------------------------------------------------------------------------------------------------------------------------------------

Tim Deegan wrote:
Hi,

At 01:22 -0400 on 04 Oct (1191460920), Kaushik Bhandankar wrote:
register_savevm() in tools/ioemu/vl.c is simply used to register the save & load routines. register_savevm() is called in tools/ioemu/hw/ide.c:pci_piix_ide_init to register pci_ide_save() and pci_ide_load() as the save & load routines for IDE disks.
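For context, the registration call looks something like this (from memory, so the exact arguments may differ in your tree):

    /* In ioemu/hw/ide.c, when the PIIX IDE controller is initialised:
     * register_savevm(idstr, instance_id, version_id, save, load, opaque) */
    register_savevm("ide", 0, 1, pci_ide_save, pci_ide_load, d);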

But I am still unsure where these save/load routines for IDE disks get invoked.

qemu_savevm() in vl.c walks the list of registered save handlers.
qemu_loadvm() in vl.c expects a load handler to have been registered for each chunk of the save file.
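Roughly (from memory; the record framing differs between qemu/ioemu versions), the save side iterates over the registered handlers and writes a tagged record per device, which is what the load side then dispatches on:

    /* Simplified sketch of the save-side loop in tools/ioemu/vl.c.
     * Illustrative only. */
    for (se = first_se; se != NULL; se = se->next) {
        int len = strlen(se->idstr);
        qemu_put_byte(f, len);                         /* section id string */
        qemu_put_buffer(f, (uint8_t *)se->idstr, len);
        qemu_put_be32(f, se->instance_id);
        qemu_put_be32(f, se->version_id);
        /* ... record length written here and patched up afterwards ... */
        se->save_state(f, se->opaque);                 /* e.g. pci_ide_save() for "ide" */
    }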
Basically, ide.c:pci_ide_save() saves the state of the IDE disk in a QEMUFile, and this file is sent over the network (can somebody point me to the code where this happens?)

Search for 'qemu' in tools/python/xen/xend/XendCheckpoint.py

so that the new VMM (where the VM
has migrated) invokes ide.c:pci_ide_load() to retrieve the IDE disk contents from the file. As of now, the pending disk I/O requests do not get saved in this file, so they are not available when executing pci_ide_load().

Yes.  I looked at this before but it seemed like a PITA to track down
the request in whatever DMA callback it was living in, so I just made
pci_ide_load signal an abort and let the OS pick up the pieces.  Have
you got a system where this doesn't work, or are you just trying to do
something a bit less nasty?

I am still trying to figure out where the pending disk I/O requests get stored in the VMM, so that during live VM migration these requests can be put in the QEMUFile (as mentioned above).

They're somewhere in qemu's disk i/o model.  You could modify the IDE
controller to remember outstanding requests so they could be reissued,
or possibly you could add save/restore handlers in the disks themselves.
Also it might be worth asking on qemu-devel since the IDE save/restore
code is independent of Xen.
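To make the first suggestion concrete, here is a purely hypothetical sketch of what "remembering outstanding requests so they can be reissued" could look like. None of these names exist in ioemu/hw/ide.c; they are invented for illustration, and the buffer handling is elided:

    /* Hypothetical sketch only: PendingRequest and the fields/helpers used
     * here are invented for illustration and are not in ioemu/hw/ide.c. */
    typedef struct PendingRequest {
        int      valid;        /* is a DMA request outstanding? */
        int      is_write;     /* WRITE DMA vs READ DMA */
        int64_t  sector_num;   /* starting sector (LBA) */
        int      nb_sectors;   /* transfer length in sectors */
    } PendingRequest;

    /* On save: serialise the PendingRequest alongside the other IDE state.
     * On load: if a request was outstanding, re-submit it to the block layer
     * instead of aborting, e.g. for a read something like:
     *
     *     bdrv_aio_read(s->bs, pending.sector_num, buf,
     *                   pending.nb_sectors, ide_dma_cb, bm);
     *
     * (buf, ide_dma_cb and bm stand in for whatever buffer and completion
     * path the real DMA code uses.)
     */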

Cheers,

Tim.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

