Re: [Xen-devel] theoretical network rx performance of Windows with PV drivers


  • To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Tue, 18 Nov 2008 12:47:01 +0000
  • Cc:
  • Delivery-date: Tue, 18 Nov 2008 04:47:27 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclJdZaq6OGodxhGRNi3MA0h9mpWyQABil1r
  • Thread-topic: [Xen-devel] theoretical network rx performance of Windows with PV drivers

On 18/11/08 12:02, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:

> I'm finding some odd things during development of the GPLPV and am
> wondering if I'm just expecting too much of a HVM Windows DomU.
> 
> I'm using iperf for testing, and the most I can get out of Dom0->DomU
> network throughput on a 1.8GHz AMD 1210 is about 500Mbit/s, and that's
> with Dom0 sending packets at close to 1Gbit/s, with about 50% of packets
> being lost. But it's not consistent... things seem to stall at strange
> times (some of that may be a driver or a Windows problem - the time
> between scheduling a DPC and the DPC being executed is sometimes up to
> 3 seconds when this happens...)
> 
> How much overhead is introduced in the event channel -> HVM IRQ path, as
> compared to the normal interdomain event channels? I think that the
> delay there might be bringing me down, but maybe I'm looking in the
> wrong place?

I don't think evtchn->IRQ latency is particularly large, but I also don't
know what else might be causing your erratic behaviour.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel