This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: Fw: Re: [Xen-users] bridge throughput problem

To: Fasiha Ashraf <feehapk@xxxxxxxxxxx>
Subject: Re: Fw: Re: [Xen-users] bridge throughput problem
From: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Date: Mon, 7 Sep 2009 18:50:07 +0700
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 07 Sep 2009 04:50:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <26045.24367.qm@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <26045.24367.qm@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, Sep 7, 2009 at 5:15 PM, Fasiha Ashraf<feehapk@xxxxxxxxxxx> wrote:
> I have tried what you suggested. I pinned 1 core per guest and also pinned 1
> core to Dom0 instead of allowing Dom0 to use all 8 cores. But the results
> remained the same.
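For reference, the pinning described above can be expressed in the guest config file or at runtime (a sketch only; the file name and core numbers are illustrative, and the syntax assumes the classic xm/xend toolstack of that era):

```shell
# Hypothetical guest config, e.g. /etc/xen/domU1.cfg:
#   vcpus = 1          # give the domU a single vcpu
#   cpus  = "2"        # pin that vcpu to physical core 2
#
# The same pinning can be applied to a running domain:
#   xm vcpu-pin domU1 0 2        # pin domU1's vcpu 0 to core 2
#   xm vcpu-pin Domain-0 0 0     # pin Dom0's vcpu 0 to core 0
#
# Verify the resulting placement:
#   xm vcpu-list
```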

At this point I have to say I don't know. I'm not familiar enough with
F11 (especially its kernel) to know what kind of throughput to expect
under Xen. In my RHEL5 setup (kernel 2.6.18), inter-domU communication
can easily reach 2 Gbps.

Perhaps it's a performance issue with the newer pv_ops kernel. Hopefully
others familiar with this setup can help you.


Xen-users mailing list