
RE: [Xen-devel][PATCH][VT] Multithread IDE device model (was: RE: [Xen-devel] [PATCH] Make IDE dma transfer run in another thread in qemu)


  • To: "Anthony Liguori" <aliguori@xxxxxxxxxx>
  • From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
  • Date: Wed, 26 Oct 2005 23:25:44 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx>
  • Delivery-date: Wed, 26 Oct 2005 15:23:02 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcXaP9kaoK8+v5mFRNu88qY0BvlayQAAI2rQ
  • Thread-topic: [Xen-devel][PATCH][VT] Multithread IDE device model ( was: RE: [Xen-devel] [PATCH]Make IDE dma tranfer run in another thread inqemu)

Hi Anthony:
        I think you have misunderstood this patch. The current Qemu in
Xen is already DMA-enabled; if I remember correctly, that has been the
case since we changed the DM from Bochs to Qemu.
        Without this patch, a guest IO operation that triggers DMA (like
a write to port 0xc000) will wait in Qemu until the DMA operation
completes; that is what the original single-threaded IDE device model
does.
        With this patch, a separate thread services the DMA operation
started by the IO operation (e.g. the 0xc000 write) and interrupts the
target processor when it completes, while the main thread can rapidly
return to the guest.
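        Very roughly, the flow with the patch looks like the sketch
below (plain pthreads, with made-up names such as start_dma, dma_worker
and raise_irq; this only illustrates the idea, it is not the patch
code):

/* Toy sketch: the vcpu thread queues a DMA request and returns to the
 * guest immediately; a worker thread performs the transfer and then
 * asserts the IRQ.  Build with: gcc dma_sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct dma_req {
    int sector;                     /* pretend transfer parameters */
    int nsectors;
    int pending;
};

static struct dma_req req;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void raise_irq(void)
{
    /* stand-in for injecting the IDE interrupt into the guest */
    printf("worker: DMA done, raising IRQ\n");
}

static void *dma_worker(void *arg)
{
    (void)arg;
    for (;;) {
        int sector, nsectors;

        /* wait for the vcpu thread to post a request */
        pthread_mutex_lock(&lock);
        while (!req.pending)
            pthread_cond_wait(&cond, &lock);
        sector   = req.sector;
        nsectors = req.nsectors;
        pthread_mutex_unlock(&lock);

        /* the (slow) disk transfer happens outside the vcpu thread */
        printf("worker: transferring %d sectors starting at %d\n",
               nsectors, sector);
        sleep(1);                   /* stands in for the real read/write */

        pthread_mutex_lock(&lock);
        req.pending = 0;
        pthread_mutex_unlock(&lock);
        raise_irq();
    }
    return NULL;
}

/* Called from the main (vcpu) thread on e.g. the port 0xc000 write:
 * just hand the request to the worker and return to the guest. */
static void start_dma(int sector, int nsectors)
{
    pthread_mutex_lock(&lock);
    req.sector   = sector;
    req.nsectors = nsectors;
    req.pending  = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, dma_worker, NULL);
    start_dma(100, 8);              /* returns immediately */
    printf("main: back in the guest while DMA runs\n");
    sleep(2);                       /* give the worker time to finish */
    return 0;
}

In the real device model the worker would of course perform the actual
disk read/write and inject the IDE interrupt into the guest, rather
than the printf/sleep placeholders above.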
Thanks,
eddie

Anthony Liguori wrote:
> Hi Eddie,
> 
> There was a patch floating around on qemu-devel recently to make IDE
> DMA concurrent.  Fabrice is planning to include it in QEMU as long as
> there are no regressions.  It may already be in CVS.
> 
> See
> http://people.brandeis.edu/~jcoiner/qemu_idedma/qemu_dma_patch.html 
> 
> The reported IO performance improvement is up to 20%, so it's
> definitely worth applying...
> 
> Regards,
> 
> Anthony Liguori
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

