xen-users
Re: [Xen-users] My future plan
----- Original message -----
> Hi Jonathan,
>
> I use a DRBD-based IET install. It syncs between the nodes with two bonded
> Intel e1000 NICs. I use the same network cards to connect to the Xen
> hypervisors.
Correction: the same kind of cards.
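
To make the setup above concrete, a minimal DRBD resource for two storage
nodes could look roughly like this (DRBD 8.3 syntax; hostnames, backing
devices and the addresses on the bonded sync link are placeholders, not
actual values):

    resource r0 {
      protocol C;
      syncer { rate 110M; }         # example cap, tune to what the bond delivers
      on storage1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;        # backing block device (placeholder)
        address   10.0.3.1:7788;    # IP on the DRBD sync bond
        meta-disk internal;
      }
      on storage2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.3.2:7788;
        meta-disk internal;
      }
    }
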
> MIND YOU: I use dual-port NICs (two in total in the storage servers), but I
> CROSS the connections: that is, I connect one port of one card to the Xen
> nodes and use the other port for the DRBD sync, and the other way around on
> the second card. This way, if a card breaks, I still have things running. To
> be able to use two switches between the Xen hosts and the storage, I use
> multipathing to connect to the iSCSI LUNs. This gives both higher speed and
> network redundancy. It would make no sense to use more than 2 ports, since
> DRBD cannot sync any faster, and also, as mentioned before, bonding more
> than 2 NICs does not seem to result in higher speeds.
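>
> As a sketch of the multipath side (assuming the open-iscsi initiator and
> dm-multipath on the Xen nodes; the portal addresses are made up):
>
>    # discover and log in to the same target over both storage networks
>    iscsiadm -m discovery -t sendtargets -p 10.0.1.1
>    iscsiadm -m discovery -t sendtargets -p 10.0.2.1
>    iscsiadm -m node --login
>    # dm-multipath then collapses the two sessions into one /dev/mapper device
>    multipath -ll
>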
> Whether bonding more than two NICs helps is easily tested with netperf,
> though. I would be happy to hear someone's test results on this.
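>
> A quick way to check (run netserver on one end first; the address is just an
> example):
>
>    netserver                                  # on the receiving machine
>    netperf -H 10.0.1.1 -t TCP_STREAM -l 30    # 30-second throughput test across the bond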
>
> Oh yes, if you don't get the expected speeds with bonded cards in mode 0, try
> looking at tcp_reordering under /proc/sys/net/ipv4 ...
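>
> For what it's worth, that knob lives at /proc/sys/net/ipv4/tcp_reordering;
> raising it is the usual suggestion for balance-rr, though the value below is
> just a starting point, not a tested optimum:
>
>    cat /proc/sys/net/ipv4/tcp_reordering       # default is 3
>    echo 127 > /proc/sys/net/ipv4/tcp_reordering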
>
>
> On Wednesday 09 June 2010 14:53:28 Jonathan Tripathy wrote:
> > So should I just go with 2 NICs for the storage server then?
> >
> > In your future setup, how many NICs are you using for the storage server
> > and how many for the nodes? I take it you're using software iSCSI?
> >
> > ________________________________
> >
> > From: Bart Coninckx [mailto:bart.coninckx@xxxxxxxxxx]
> > Sent: Wed 09/06/2010 11:25
> > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > Cc: Jonathan Tripathy; Michael Schmidt
> > Subject: Re: [Xen-users] My future plan
> >
> >
> >
> > On the DRBD mailing lists I've seen it mentioned a couple of times that
> > they did tests with bonding, and they claim that a bond with more than 2
> > NICs will actually decrease performance because of the TCP reordering that
> > needs to be done.
> >
> > That's the reason why I limit the storage connection to two NICs. I have a
> > setup very similar to yours in the making, by the way.
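> >
> > A two-NIC round-robin bond can be brought up by hand roughly like this
> > (interface names and the address are examples; distributions normally do
> > this in their network config instead):
> >
> >    modprobe bonding mode=balance-rr miimon=100
> >    ip link set bond0 up
> >    ifenslave bond0 eth2 eth3            # exactly two slaves
> >    ip addr add 10.0.1.10/24 dev bond0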
> >
> > On Tuesday 08 June 2010 15:55:47 Jonathan Tripathy wrote:
> > > Hi Michael,
> > >
> > > Thanks for the tips on using SSDs for the node OS drives.
> > >
> > > Regarding the NIC, I was thinking about using this for the nodes:
> > >
> > > http://www.intel.com/products/server/adapters/pro1000pt-dualport/pro1000pt-dualport-overview.htm
> > >
> > > and this for the server:
> > >
> > > http://www.intel.com/products/server/adapters/pro1000pt-quadport-low-profile/pro1000pt-quadport-low-profile-overview.htm
> > >
> > > Are those the cards you were talking about? They are very cheap on eBay,
> > > you see...
> > >
> > > Think 4 port bonding for the server is good enough for 8 nodes?
> > >
> > > Thanks
> > >
> > > ________________________________
> > >
> > > From: Michael Schmidt [mailto:michael.schmidt@xxxxxxxxxx]
> > > Sent: Tue 08/06/2010 14:49
> > > To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
> > > Subject: Re: [Xen-users] My future plan
> > >
> > >
> > > Hi Jonathan,
> > >
> > > you should think about flash or SD cards as the Xen boot drive.
> > > This gives you lower costs and higher energy efficiency.
> > > If you mount /tmp and /var/log on tmpfs, these disks work very well and
> > > last a long time.
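> > >
> > > As a sketch, two fstab entries like these keep those writes in RAM
> > > (sizes are arbitrary; note that logs are lost on reboot unless you ship
> > > them off the host):
> > >
> > >    # /etc/fstab
> > >    tmpfs  /tmp      tmpfs  defaults,size=512m  0  0
> > >    tmpfs  /var/log  tmpfs  defaults,size=256m  0  0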
> > >
> > > If you don't need that much disk space for your storage, use SAS disks.
> > > SAS (10k/15k) disks give you many more IOPS than SATA disks (more IOPS
> > > per $/EUR as well). And, very important: a very large cache for your
> > > RAID controller.
> > >
> > > The Intel e1000e is a pretty good choice. These cards have a large buffer
> > > and generate only a few interrupts on your CPUs (compared to the Broadcom
> > > NICs).
> > >
> > > Best Regards
> > >
> > > Michael Schmidt
> > > On 08.06.10 14:55, Jonathan Tripathy wrote:
> > >
> > > My future plan currently looks like this for my VPS hosting
> > > solution, so any feedback would be appreciated:
> > >
> > > Each Node:
> > > Dell R210 Intel X3430 Quad Core 8GB RAM
> > > Intel PT 1Gbps Server Dual Port NIC using linux "bonding"
> > > Small pair of HDDs for OS (Probably in RAID1)
> > > Each node will run about 10 - 15 customer guests
> > >
> > >
> > > Storage Server:
> > > Some Intel Quad Core Chip
> > > 2GB RAM (Maybe more?)
> > > LSI 8704EM2 RAID Controller (Think this controller does 3 Gbps)
> > > Battery backup for the above RAID controller
> > > 4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in total)
> > > Each RAID10 array will connect to 2 nodes (8 nodes per storage server)
> > > Intel PT 1Gbps Quad port NIC using Linux bonding
> > > Exposes 8 X 1.5TB iSCSI targets (each node will use one of these)
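> > >
> > > For illustration, if the storage server ends up running IET, one such
> > > target in /etc/ietd.conf might look like this (the IQN and backing
> > > device are placeholders):
> > >
> > >    Target iqn.2010-06.local.san:node1
> > >        Lun 0 Path=/dev/vg0/node1,Type=blockio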
> > >
> > > HP Procurve 1800-24G switch to create 1 X 4 port trunk (for the storage
> > > server) and 8 X 2 port trunks (for the nodes)
> > >
> > > What you think? Any tips?
> > >
> > > Thanks
> > >
> > >
> >
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users