> Just to add some salt to the original poster's question: I'm going to
> migrate cluster members into domUs on one big machine.
> Well, it's not a full-featured HA cluster; it consists of one huge nfs/nis
> server and a lot of diskless servers, with as little failover as necessary.
> First tests showed that booting one of the machines as a domU results in
> random disk throughput of about 10 MB/s, versus about 80-95 MB/s when
> running on bare metal.
I'm not entirely clear on where your domU is accessing its storage from versus
the bare metal case. Is the domU accessing disk via NFS? How about the bare
metal machine in your example?
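To compare the two cases fairly, it helps to run the same measurement in the domU and on bare metal. A rough sequential-write test with dd (file path and size are just placeholders):

```shell
# Rough sequential-write benchmark; run the identical command in the
# domU and on bare metal.  conv=fdatasync forces the data to storage
# before dd reports, so the figure reflects the real disk/NFS path
# rather than the page cache (oflag=direct is an alternative where
# the filesystem supports O_DIRECT).
dd if=/dev/zero of=/tmp/ddtest bs=1M count=10 conv=fdatasync
# dd prints the achieved throughput on stderr
rm -f /tmp/ddtest
```

Random-I/O numbers like the 10 MB/s quoted above need a proper benchmark tool, but even a sequential test will show whether the domU's storage path is the bottleneck.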
> I don't necessarilly need to keep the current infrastructure, but I'll
> definitely need one mountpoint available on many (expandable) machines.
> Is there some best-practice description on how to get one mountpoint
> available to a lot of domUs?
Well, if you can arrange for it to be readonly then the obvious thing to do is
to export readonly VBDs to all the domUs. That way they should all get
access to it at a speed similar to local disk access.
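For the readonly case, the same backing device can simply be listed in every domU's config with mode 'r'. A sketch, assuming an LVM volume (the volume and device names here are made up):

```
# in each domU's config file: export /dev/vg0/shared read-only as xvdb
disk = [ 'phy:/dev/vg0/shared,xvdb,r' ]
```

Because the VBD is read-only, the usual rule against attaching one block device to multiple guests doesn't bite.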
If you needed them to have private writeable access you could look at some
kind of layered copy-on-write access (e.g. run unionfs in the domUs?).
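A rough sketch of that layered idea inside a domU, assuming the unionfs module is available and using placeholder paths (/ro for the shared readonly VBD mount, /rw for a small private writable disk):

```
# writes land in /rw; unchanged files are read through from /ro
mount -t unionfs -o dirs=/rw=rw:/ro=ro unionfs /mnt/shared
```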
If they actually needed shared writeable access then, at the moment, I guess
the best option is either NFS or setting them all up with a cluster
filesystem such as GFS or OCFS2. You might get better performance with a
cluster filesystem, but I'm not aware of any benchmarks of cluster FSes on Xen.
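If you go the NFS route, the server side is just an ordinary export; a minimal /etc/exports line (path and network are examples):

```
/srv/shared  192.168.0.0/24(rw,sync,no_subtree_check)
```

The cluster-filesystem route needs more setup (shared block access for all nodes plus the GFS/OCFS2 cluster stack), which is part of why NFS tends to be the first thing people try.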
Various projects, such as my XenFS filesystem, are aiming to provide
high-performance NFS-like functionality on Xen, but I don't know of any that
are ready for production use yet.
> Thanks for any suggestion!
> Mark Williamson schrieb:
> >> I would like to learn the speed of the network bridge interfaces created
> >> by XEN.
> >> More specifically, on xenbr0, given that the traffic only occurs between
> >> my Dom0 host and PV DomU guest, am I limited to 10 Mbps, 100 Mbps or
> >> 1000 Mbps? Does it depend on something such as ethernet card capability
> >> (even though the packets don't go out of the card and stay inside Dom0)?
> > It's not limited by your physical ethernet card, nor is it restricted to
> > any particular maximum. It's basically limited by how fast the Xen
> > virtual network drivers and the Linux bridging code can move the data
> > around. This used to actually be slower than a domU accessing the
> > physical ethernet, due to the extra memory operations that were required
> > (and it used a fair bit of CPU). I think there have been some changes to
> > reduce the bottleneck and improve intra-host performance since then, so
> > it should be faster than I remember. I'm not sure if it's currently
> > faster than GigE; possibly.
> > It ought to be significantly faster than 100Mbps on a modern machine.
> > It'll act like a really fast ethernet card, with no hard limit on the
> > transmission speed (instead, transmission speed will be limited by how
> > powerful your machine is and how efficient the virtual ethernet code is).
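The easiest way to see what the bridge actually delivers on a given machine is to measure it, e.g. with iperf between dom0 and the domU (the address below is an example):

```
# in the domU: start an iperf server
iperf -s
# in dom0: run a 30-second throughput test against the domU's IP
iperf -c 192.168.0.10 -t 30
```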
> >> I plan to use iSCSI or ATA-over-Ethernet, that's why I'm asking this
> >> question.
> > Is that from dom0 to domU? Do you have a particular reason for doing
> > that? Using blkback / blkfront would be simpler and more efficient.
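For dom0-to-domU storage, the blkback/blkfront path is just a disk line in the domU config, with no iSCSI target or initiator to set up. A sketch (the volume name is made up):

```
# dom0 exports the LVM volume via blkback; the domU sees it as xvda
disk = [ 'phy:/dev/vg0/domu1-root,xvda,w' ]
```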
> > Cheers,
> > Mark