> Hello again James,
>
> > These days, each DomU network interface should be making Dom0
> > aware of what multicast traffic it should be receiving, so unless
> > your domU kernels are old that shouldn't be your problem, but maybe
> > someone else can confirm that multicast is definitely in place?
>
> Hmmm, how would I verify that? As far as I can see the dom0 is in
> constant promiscuous mode so that it can pass all bridged traffic.
> This doesn't really matter though; I actually do need all the traffic
> I am receiving. The problem is that the load is exorbitant between
> dom0 and domU. I mean, with 600 Mbps of network IO, dom0 consumes an
> entire 5310 core (2.33 GHz Penryn), whereas if I pin that interface
> into the domU via PCI passthrough, we only get a 5% CPU load to
> ingest that traffic.
If netback is treating multicast traffic as broadcast traffic, then all
multicast traffic will be forwarded to all DomU network interfaces on
that bridge, which means more work for Dom0 and more work for each
DomU. If the DomU is telling Dom0 what sort of multicast traffic it
wants, then Dom0 doesn't have to work so hard. But if, as you say, all
your DomUs want all the multicast traffic anyway, this is probably
irrelevant.
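For what it's worth, one rough way to check what multicast state each
side actually has (a sketch on my part, assuming a Linux kernel with
iproute2; "eth0" stands in for your real interface name) would be:

ip maddr show dev eth0   # multicast groups joined on the interface
cat /proc/net/igmp       # raw IGMP view, one line per joined group
ip link show eth0        # PROMISC in the flags confirms promiscuous mode

If the DomU lists the groups you expect but the matching vif in Dom0
shows nothing, that would point at netback flooding everything rather
than filtering.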
> I don't know if it's important or not, but in the dom0, if I use
> "top" the CPU is 99% idle. But if I run "xm top", this is where I see
> the 100% utilization on dom0.
Hmmm... well if you have 600Mbps of traffic at 1316 bytes per packet,
that is ~75 Mbytes/second / 1316 bytes = ~57000 packets per second.
While things are going at 600Mbps, please try the following in both Dom0
and DomU:
cat /proc/interrupts && sleep 10 && cat /proc/interrupts
That should give a very approximate count of interrupts over a 10
second period. What is the difference (after - before) for the
following? (A small script to automate the subtraction is sketched
after the list.)
Dom0 physical Ethernet interface
Dom0 vif (backend) interface
DomU eth0
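If counting by hand is tedious, here is a small sketch (my addition,
not part of the original suggestion) that samples /proc/interrupts
twice and prints the per-source delta summed across all CPUs. It
assumes a Linux userland with bash, awk and mktemp; run it in Dom0 and
in the DomU, then compare the lines for the physical NIC, the vif
backend, and the DomU's eth0:

#!/bin/bash
# Sample /proc/interrupts twice, 10 seconds apart.
before=$(mktemp)
cat /proc/interrupts > "$before"
sleep 10
cat /proc/interrupts > "${before}.after"
# First pass remembers the "before" counts; second pass prints the
# difference (after - before) for every interrupt source.
awk 'NR == FNR {
         t = 0
         for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) t += $i
         a[$1] = t; next
     }
     $1 ~ /:$/ {
         t = 0
         for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) t += $i
         print $1, t - a[$1], "interrupts in 10s"
     }' "$before" "${before}.after"
rm -f "$before" "${before}.after"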
James