
RE: [Xen-devel] Buffered IO for IO?


  • To: "Zulauf, John" <john.zulauf@xxxxxxxxx>, "Keir Fraser" <keir@xxxxxxxxxxxxx>, "Trolle Selander" <trolle.selander@xxxxxxxxx>
  • From: Mats Petersson <mats@xxxxxxxxxxxxxxxxx>
  • Date: Mon, 23 Jul 2007 20:00:31 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 Jul 2007 09:03:10 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

At 19:49 23/07/2007, Zulauf, John wrote:

Thanks for the comments. Frankly, I'm guessing the bulk of the time in the COM port IO is VMEXIT time, and that saving the qemu round-trip would be a marginal effect**.

I guess the question of how much of the time is spent where depends on the setup. One thing you may want to try is to ensure that the guest domain(s) and Dom0 don't share the same CPU (core) - by giving Dom0 its own CPU (core) to run on, you eliminate the possibility that some other guest is still using Dom0's CPU when you want QEMU to run. If you have MANY HVM domains, you may also want to give more than a single core to Dom0.


As for reads flushing writes, this happens automatically as a result of how the buffered_io page works (assuming one sticks to this design for IO buffering). If dir == IOREQ_READ, the attempt to buffer the IO request will fail, and hvm_send_assist_req is invoked. When qemu catches the "notify" event for the READ, it first dispatches *all* of the buffered IO requests before dispatching the READ. Thus order is preserved and inb is synchronous from the vcpu point of view.
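A minimal, self-contained model of that ordering argument, assuming the buffered_iopage scheme described above; this is not Xen or qemu source, the names merely echo the ioreq/buffered_iopage terms used in this thread:

#include <stdio.h>

#define IOREQ_WRITE 1
#define IOREQ_READ  0
#define QUEUE_SIZE  8

struct ioreq { int dir; unsigned short port; unsigned char data; };

static struct ioreq queue[QUEUE_SIZE];   /* stand-in for the buffered_iopage ring */
static int q_head, q_tail;

static unsigned char device_reg;         /* the emulated device ("qemu side") */

static unsigned char dispatch(const struct ioreq *p)
{
    if (p->dir == IOREQ_WRITE) { device_reg = p->data; return 0; }
    return device_reg;
}

/* Guest write: buffer if there is room, otherwise fall back to a
 * synchronous dispatch (the "queue is full" case). */
static void guest_outb(unsigned short port, unsigned char data)
{
    struct ioreq p = { IOREQ_WRITE, port, data };
    if (((q_tail + 1) % QUEUE_SIZE) != q_head) {
        queue[q_tail] = p;
        q_tail = (q_tail + 1) % QUEUE_SIZE;
    } else {
        dispatch(&p);
    }
}

/* Guest read: never buffered.  All queued writes are drained first,
 * so the result reflects every earlier outb. */
static unsigned char guest_inb(unsigned short port)
{
    struct ioreq p = { IOREQ_READ, port, 0 };
    while (q_head != q_tail) {
        dispatch(&queue[q_head]);
        q_head = (q_head + 1) % QUEUE_SIZE;
    }
    return dispatch(&p);
}

int main(void)
{
    guest_outb(0x3f8, 0x41);
    guest_outb(0x3f8, 0x42);
    printf("inb(0x3f8) = 0x%02x\n", guest_inb(0x3f8));  /* prints 0x42 */
    return 0;
}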

Yes, that's the trivial case. But what about a write to 0x3F8 (send data) followed by code that goes to sleep, waiting for an IRQ to say that the data has been sent? There may not be a read of any serial port register in between - thanks to Trolle for reminding me of this type of operation.
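A compile-and-run sketch of that failure mode, with outb() and wait_for_serial_irq() as hypothetical stand-ins for the guest's real port-I/O and sleep primitives:

#include <stdio.h>

#define COM1_THR 0x3f8   /* transmit holding register            */
#define COM1_IER 0x3f9   /* interrupt enable register            */
#define IER_THRE 0x02    /* "transmitter holding reg empty" IRQ  */

/* Stand-ins for the guest's real port-I/O and sleep primitives. */
static void outb(unsigned short port, unsigned char val)
{
    printf("outb(0x%03x, 0x%02x)\n", (unsigned)port, (unsigned)val);
}
static void wait_for_serial_irq(void)
{
    puts("sleeping until the UART raises a THRE interrupt...");
}

static void serial_putc_irq_driven(char c)
{
    outb(COM1_IER, IER_THRE);          /* enable "transmitter empty" IRQ   */
    outb(COM1_THR, (unsigned char)c);  /* this write could be buffered...  */
    wait_for_serial_irq();             /* ...and no inb ever flushes it    */
}

int main(void) { serial_putc_irq_driven('A'); return 0; }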

--
Mats


As for controlling outbound FIFO depth, adding a per-range "max_depth" test to the "queue is full" test already in use for mmio buffering would be straightforward.
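Roughly what that per-range test might look like, layered on the existing full-queue check; the buffered_range struct and its max_depth/pending fields are hypothetical additions, not existing Xen structures:

struct buffered_range {
    unsigned long start, end;   /* port range covered               */
    unsigned int  max_depth;    /* max outstanding buffered writes  */
    unsigned int  pending;      /* currently queued for this range  */
};

static int may_buffer(struct buffered_range *r, int queue_full)
{
    if (queue_full)
        return 0;                   /* existing global "queue is full" test */
    if (r->pending >= r->max_depth)
        return 0;                   /* new per-range depth test             */
    r->pending++;                   /* decremented when qemu drains it      */
    return 1;
}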

The interrupt issues are more concerning. A one-byte write "window" at 0x3F8 doesn't seem to have this issue (cf. ftp://ftp.phil.uni-sb.de/pub/staff/chris/The_Serial_Port).

But I agree that proxy device models are not desirable when not performance critical. Regardless, they wouldn't be supported directly through a simple "hvm_buffered_io_intercept" call. This would be more suited to the approach used in hvm_mmio_intercept for the lapic emulation.
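For illustration, the shape such an in-hypervisor proxy model might take - a check routine plus read/write handlers, dispatched the way hvm_mmio_intercept dispatches the local APIC emulation. The struct and function names below are hypothetical, not existing Xen interfaces:

struct portio_handler {
    int (*check)(unsigned long port);
    int (*read)(unsigned long port, unsigned long *val, int size);
    int (*write)(unsigned long port, unsigned long val, int size);
};

static int uart_check(unsigned long port)
{
    return port >= 0x3f8 && port <= 0x3ff;      /* COM1 register window */
}

static int uart_read(unsigned long port, unsigned long *val, int size)
{
    /* Enough 16550 state here to answer reads without leaving the hypervisor. */
    *val = 0;
    return 1;
}

static int uart_write(unsigned long port, unsigned long val, int size)
{
    /* Decide here whether the byte can be buffered or must raise an
     * interrupt / be forwarded to qemu immediately. */
    return 1;
}

static const struct portio_handler uart_proxy = {
    .check = uart_check,
    .read  = uart_read,
    .write = uart_write,
};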


John

** For those interested, I'm looking at the performance of using WinDbg for guest domain debug, and the time it takes to do the serial-port-based initialization of a kernel debug session. Starting a WinDbg session on a Windows guest OS takes several minutes. Any suggestions to optimize that process would be gladly entertained.


----------
From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx]
Sent: Saturday, July 21, 2007 4:09 AM
To: Trolle Selander; Zulauf, John
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Buffered IO for IO?

Yes, it strikes me that this cannot be done safely without providing a set of 'proxy device models' in the hypervisor that know when it is safe to buffer and when the buffers must be flushed, according to native hardware behaviour.

 -- Keir

On 21/7/07 11:59, "Trolle Selander" <trolle.selander@xxxxxxxxx> wrote:
Safety would depend on how the emulated device works. For serial ports in particular, it's definitely not safe, since depending on the model of UART emulated, and the settings of the UART control registers, every outb may result in a serial interrupt and UART register changes that have to be processed before any further IO can be done. It's possible that there might be some performance to be gained by "upgrading" the emulated UART to a 16550A or better, and doing buffered IO for the FIFO. Earlier this year I was experimenting with a patch that made the qemu-dm serial emulation into a 16550A with FIFO. Though the patch did fix some compatibility issues with software that assumed a 16550A UART in the HVM guest I'm working with, serial performance actually got noticeably _worse_, so I never bothered submitting it. Implementing the FIFO with buffered IO would possibly make it work better, but I don't see how it could be done without moving at least part of the serial device model into the hypervisor, which just strikes me as more trouble than it's worth.

/Trolle
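To make the interrupt point above concrete, here is a small runnable model of a FIFO-less (16450-style) transmit path: every write immediately re-raises the "transmitter empty" condition, so with THRE interrupts enabled each outb produces an interrupt the guest expects to service. This is only an illustration, not qemu-dm's serial code:

#include <stdio.h>

struct uart {
    unsigned char ier;          /* interrupt enable register */
    unsigned char lsr;          /* line status register      */
};
#define IER_THRE 0x02
#define LSR_THRE 0x20

static void raise_irq(void) { puts("  -> serial interrupt raised"); }

static void uart_write_thr(struct uart *u, unsigned char byte)
{
    u->lsr &= ~LSR_THRE;        /* byte now in the holding register */
    printf("tx 0x%02x\n", byte);
    u->lsr |= LSR_THRE;         /* emulation "sends" it instantly   */
    if (u->ier & IER_THRE)
        raise_irq();            /* guest must handle this before the
                                   next byte changes UART state      */
}

int main(void)
{
    struct uart u = { .ier = IER_THRE, .lsr = LSR_THRE };
    uart_write_thr(&u, 'h');
    uart_write_thr(&u, 'i');
    return 0;
}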

On 7/21/07, Keir Fraser <keir@xxxxxxxxxxxxx> wrote:



On 20/7/07 22:33, "Zulauf, John" <john.zulauf@xxxxxxxxx> wrote:

> Has anyone experimented with adding Buffered IO support for "out"
> instructions?  Currently, the buffered io page is only used for mmio
> writes (and then only to vga space).  It seems quite straightforward to
> add.

Is it safe to buffer, and hence arbitrarily delay, any I/O port write?

 -- Keir




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

