This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: Steven Smith <sos22-xen@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Paravirtualised drivers for fully virtualised domains, rev9
From: Steve Dobbelstein <steved@xxxxxxxxxx>
Date: Tue, 15 Aug 2006 17:05:47 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, sos22@xxxxxxxxxxxxx
Delivery-date: Tue, 15 Aug 2006 15:06:34 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060815072750.GA2610@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Steven Smith <sos22-xen@xxxxxxxxxxxxx> wrote on 08/15/2006 02:27:50 AM:

> > > > > 2) How often is the event channel interrupt firing according to
> > > > >    /proc/interrupts?  I see about 50k-150k/second.
> > > > I'm seeing ~500/s when netpipe-tcp reports decent throughput at
> > > > smaller buffer sizes and then ~50/s when the throughput drops at
> > > > larger sizes.
> > > How large do they have to be to cause problems?
> > I'm noticing a drop-off in throughput at a buffer size of 3069.  Here
> > is a snip from the output of netpipe-tcp.
> What are the MTUs on the interfaces, according to ifconfig, in dom0
> and domU?

MTUs on all the interfaces are 1500.
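(For reference, here is a quick sketch of how I checked; reading the MTU from sysfs matches the MTU column that ifconfig reports. Interface names will of course differ between dom0 and the domU.)

```shell
# Print the MTU of every network interface via sysfs; this is the
# same value shown in the MTU column of ifconfig output.
for dev in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```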

> > I don't know offhand why the throughput drops off.  I'll look into it.
> > Any tips would be helpful.
> tcpdump in the domU and dom0 might be enlightening, just to see if any
> packets are getting dropped or truncated.  The connection is probably
> slow enough when it's misbehaving for tcpdump to keep up.

tcpdump on both dom0 and domU shows no packets dropped and none truncated.

I noticed lines such as:

16:28:18.596654 IP dib.ltc.austin.ibm.com > hvm1.ltc.austin.ibm.com: ICMP
dib.ltc.austin.ibm.com unreachable - need to frag (mtu 1500), length 556

in the tcpdump output during the slowdown.  (dib.ltc.austin.ibm.com is
dom0.)  Knowing very little about the TCP protocol, I'm not sure whether
that indicates a problem.
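Back-of-the-envelope arithmetic, in case it's relevant (assuming IPv4 plus TCP with the timestamp option, which is my guess at the header overhead here): with a 1500-byte MTU the per-segment payload is 1448 bytes, and 3069 bytes is just past the point where a single write needs a third segment.

```shell
# Rough segment-count arithmetic, assuming IPv4 + TCP with the
# timestamp option: MSS = 1500 - 20 (IP) - 20 (TCP) - 12 (timestamps).
mtu=1500
mss=$((mtu - 20 - 20 - 12))          # 1448 bytes of payload per segment
buf=3069                             # buffer size where throughput drops
segments=$(( (buf + mss - 1) / mss ))
echo "mss=$mss segments=$segments"
```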

> Are you running through the bridge?  It's unlikely to be that, but it
> would be good to eliminate it as a variable by doing some domU<->dom0
> tests without it involved.

I am running through the bridge, the default Xen setup.

I doubt the bridge is the problem, since I also use the bridge for a PV
domU and an FV domU, and neither of those sees a slowdown.

> What version of Linux are you running in the domU?  Does it have any
> patches applied?

SLES 10 beta 10.  (Yes, SLES 10 has been released.  We haven't updated
our automated testing framework yet.)  I'm running a kernel.org kernel,
the current base kernel for xen-unstable.  No patches applied.

Here is the kernel config from /proc/config.gz in the HVM domU.
(See attached file: hvm_kernel_config)

Thanks for your attention.

Steve D.

Attachment: hvm_kernel_config
Description: Binary data

Xen-devel mailing list