On Mon, Sep 07, 2009 at 10:16:38PM +0100, Mike Williams wrote:
> Fasiha, you're not alone.
> I've got a xen-tip/master pv_ops dom0 running, and I get roughly the same
> figures you do.
Can you verify that the throughput problem goes away if you change the dom0
kernel to a non-pv_ops one, keeping the rest of the configuration and settings
unchanged?

http://xenbits.xen.org/linux-2.6.18-xen.hg
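A minimal sketch, assuming mercurial and a normal kernel build environment
(the config and install steps are just an example):

  hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg
  cd linux-2.6.18-xen.hg
  make menuconfig                            # enable the Xen dom0 options
  make -j4 && make modules_install install

Then point the dom0 kernel module line of your grub Xen entry at the new
image and reboot into it.

-- Pasi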
> 0.14 Mbit/s domU to domU, and 12990.91 Mbit/s domU to dom0.
> The netserver end is completely idle (as reported by sar), as is dom0, during
> all tests.
>
> Whereas a 2.6.18-based kernel on an old dual P3 Xeon gets 327 and 456
> respectively.
>
> On Monday 07 September 2009 11:15:01 Fasiha Ashraf wrote:
> > I have tried what you suggested. I pinned 1 core per guest and also pinned 1
> > core to Dom0, instead of allowing dom0 to use all 8 cores. But the results
> > remained the same. Below are the details:
> >
> > [root@HPCNL-SR-2 ~]# xm vcpu-list
> > Name        ID  VCPU  CPU  State  Time(s)  CPU Affinity
> > Domain-0     0     0    0  r--       69.4  any cpu
> > Domain-0     0     1    -  --p        4.7  any cpu
> > Domain-0     0     2    -  --p        6.2  any cpu
> > Domain-0     0     3    -  --p        5.5  any cpu
> > Domain-0     0     4    -  --p        4.7  any cpu
> > Domain-0     0     5    -  --p        3.5  any cpu
> > Domain-0     0     6    -  --p        3.8  any cpu
> > Domain-0     0     7    -  --p        3.5  any cpu
> > F11-G1S2           0                   0.0  any cpu
> > F11-G2S2     1     0    1  -b-       14.7  1
> > F11-G3S2     2     0    2  -b-       14.9  2
> > F11-G4S2           0                   0.0  any cpu
> >
> > [root@F11-G2S2 ~]# netserver
> > Starting netserver at port 12865
> > Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
> >
> > [root@F11-G3S2 ~]# netperf -l 60 -H 10.11.21.212
> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.11.21.212 (10.11.21.212) port 0 AF_INET
> > Recv   Send    Send
> > Socket Socket  Message  Elapsed
> > Size   Size    Size     Time     Throughput
> > bytes  bytes   bytes    secs.    10^6bits/sec
> >
> >  87380  16384  16384    60.05       0.29
> >
> > There is something strange that I have observed in my set-up: when I
> > traceroute a guest, it doesn't reach any destination, and I do not get a
> > reply from any hop.
> > [root@F11-G3S2 ~]# traceroute 10.11.21.212
> > traceroute to 10.11.21.212 (10.11.21.212), 30 hops max, 60 byte packets
> > 1 * * *
> > 2 * * *
> > 3 * * *
> > 4 * * *
> > 5 * * *
> > 6 *^C
> > It displays the same stars all the way to hop 30, which normally doesn't
> > happen. It should be something like:
> > [root@F11-G3S2 ~]# traceroute 10.11.21.32
> > traceroute to 10.11.21.32 (10.11.21.32), 30 hops max, 60 byte packets
> > 1 10.11.21.32 (10.11.21.32) 0.740 ms 0.710 ms 0.674 ms
> >
> > I feel there is some network configuration issue. Would you please guide me
> > on how to find the root cause and resolve the problem? How can I check the
> > ICMP settings on my Fedora 11 system?
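> >
> > A minimal starting point, assuming the stock Fedora 11 iptables setup, might
> > be to look at the guest's ICMP handling:
> >
> > [root@F11-G3S2 ~]# iptables -L -n -v | grep -i icmp      # any ICMP drop rules?
> > [root@F11-G3S2 ~]# sysctl net.ipv4.icmp_echo_ignore_all  # 1 = echo replies ignored
> > [root@F11-G3S2 ~]# traceroute -I 10.11.21.212            # retry with ICMP echo probes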
> >
> > Regards,
> > Fasiha Ashraf
> >
> > --- On Sat, 5/9/09, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
> >
> > From: Fajar A. Nugraha <fajar@xxxxxxxxx>
> > Subject: Re: [Xen-users] bridge throughput problem
> > To: "Fasiha Ashraf" <feehapk@xxxxxxxxxxx>
> > Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> > Date: Saturday, 5 September, 2009, 4:59 PM
> >
> > On Sat, Sep 5, 2009 at 12:06 PM, Fasiha Ashraf <feehapk@xxxxxxxxxxx> wrote:
> > > What are Guest1 and Guest2?
> > > These are Fedora 11 (32-bit) PV domUs.
> > > Is it on the same dom0 or on different dom0?
> > > Yes, they are on the same host, on the same physical machine.
> >
> > Perhaps it's a CPU/interrupt issue. Can you make sure that dom0, guest1,
> > and guest2 ONLY use 1 vcpu each, that they're located on DIFFERENT
> > physical cpus/cores (xm vcpu-set, xm vcpu-pin), and then repeat the test?
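> >
> > A minimal sketch, assuming the guests are named guest1 and guest2 (adjust
> > to your actual domain names):
> >
> > xm vcpu-set Domain-0 1     # reduce each domain to a single vcpu
> > xm vcpu-set guest1 1
> > xm vcpu-set guest2 1
> > xm vcpu-pin Domain-0 0 0   # pin vcpu 0 of each domain to its own core
> > xm vcpu-pin guest1 0 1
> > xm vcpu-pin guest2 0 2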
> >
> > Also, have another window running for each dom0/domU, and observe CPU
> > load during that test with "top". Which domain uses 100%? Is it user
> > or system?
>
> --
> Mike Williams
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users