
Re: [Xen-devel] Xen & I/O in clusters - problems!



> Hi, we are benchmarking Xen in a cluster and got some bad results. We
> might be doing something wrong, and wonder if anyone has seen similar
> problems.
> 
> When we benchmark throughput from native Linux to native Linux (two
> physical nodes in the cluster) we get 786.034 MByte/s.
> When we benchmark from a virtual domain (running on Xen on a physical
> node) to another virtual domain (on another physical node) we get
> 56.480 MByte/s (1:16).

(Presumably you mean Mbits rather than MBytes; 786 MByte/s would be
over 6 Gbit/s.)

The numbers you're getting are terrible compared to what we see.
Running between virtual domains on a cluster we measure
throughput as high as 897Mb/s (same as Linux native).

Our results were recorded with dual 2.4GHz Xeons with tg3 NICs
and a 128KB socket buffer, measured using ttcp. With the virtual
domain running on the other physical CPU from domain 0 we get
897Mb/s. We get similar results running the virtual domain on the
other hyperthread of the same physical CPU. We observe a
performance reduction if we run the virtual domain on the same
(logical) CPU as domain 0, down to 660Mb/s [843Mb/s on a dual
3GHz machine, so we appear to be CPU limited in this case].
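
In case it's useful, here is a rough ttcp-style sender/receiver in
Python. This is only a sketch for illustration (we used the real ttcp);
the port, chunk size and 256MB transfer size are arbitrary, and the
128KB socket buffer simply mirrors the setting above:

  #!/usr/bin/env python3
  # Rough ttcp-style throughput test: receiver counts bytes and reports
  # Mbit/s; sender pushes a fixed amount of data. Illustrative only.
  import socket, sys, time

  SOCKBUF = 128 * 1024            # 128KB socket buffer, as in our tests
  CHUNK = b'x' * 65536            # 64KB writes
  TOTAL = 256 * 1024 * 1024       # move 256MB per run (arbitrary)

  def server(port=5001):
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, SOCKBUF)
      s.bind(('', port))
      s.listen(1)
      conn, _ = s.accept()
      received, t0 = 0, time.time()
      while True:
          data = conn.recv(65536)
          if not data:
              break
          received += len(data)
      secs = time.time() - t0
      print('%d bytes in %.2fs = %.1f Mbit/s'
            % (received, secs, received * 8 / secs / 1e6))

  def client(host, port=5001):
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SOCKBUF)
      s.connect((host, port))
      sent = 0
      while sent < TOTAL:
          s.sendall(CHUNK)
          sent += len(CHUNK)
      s.close()

  if __name__ == '__main__':
      # no arguments on the receiver; receiver's address as argv[1]
      # on the sender
      if len(sys.argv) == 1:
          server()
      else:
          client(sys.argv[1])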

> The difference is huge, and we wonder if the bottleneck could be the
> fact that we are using software routing (we use this in order to route
> from the physical node to the virtual OSes), or if this is just a
> downside of Xen?

Our results were recorded using the dom0 linux bridge code rather
than using routing.
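
If you want to confirm whether a box is bridging rather than routing,
a quick sysfs walk will list each bridge and its enslaved ports. This
is just a sketch; it assumes a kernel that exposes
/sys/class/net/<bridge>/brif, and bridge names like xenbr0 are only
examples:

  #!/usr/bin/env python3
  # List Linux bridges and their enslaved ports via sysfs.
  import os

  NET = '/sys/class/net'
  for dev in sorted(os.listdir(NET)):
      brif = os.path.join(NET, dev, 'brif')
      if os.path.isdir(brif):             # only bridge devices have brif/
          ports = sorted(os.listdir(brif))
          print('%s: %s' % (dev, ', '.join(ports) or '(no ports)'))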

One thing to check is that you don't have CONFIG_IP_NF_CONNTRACK
set to 'y' -- this slays performance.
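
A quick way to check is something like the following sketch. It
assumes the kernel config is readable either as /proc/config.gz
(CONFIG_IKCONFIG_PROC=y) or as /boot/config-<release>; adjust the
paths if your distro keeps it elsewhere:

  #!/usr/bin/env python3
  # Report how CONFIG_IP_NF_CONNTRACK is set in the running kernel's
  # config, if we can find the config file.
  import gzip, os

  release = os.uname().release
  candidates = ['/proc/config.gz', '/boot/config-%s' % release]

  for path in candidates:
      if not os.path.exists(path):
          continue
      opener = gzip.open if path.endswith('.gz') else open
      with opener(path, 'rt') as f:
          for line in f:
              if line.startswith('CONFIG_IP_NF_CONNTRACK='):
                  print('%s: %s' % (path, line.strip()))
                  break
          else:
              print('%s: CONFIG_IP_NF_CONNTRACK not set' % path)
      break
  else:
      print('no kernel config found in %s' % ', '.join(candidates))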

Also, if you're running multiple domains on the same CPU you may
be running into CPU scheduling issues. Some tweaks to scheduler
parameters may fix this.

> I would guess it IS the SW routing, so are there any good alternatives
> for making virtual domains communicate on a cluster without SW routing?

The Xen 2.0 architecture is not as slick as the
monolithic-hypervisor approach of Xen 1.2, but we get better
hardware support and a lot more flexibility. However, we do burn
more CPU to achieve the same I/O rate. We just have to wait for
Moore's law to catch up ;-)

Ian




 

