This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] My future plan

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] My future plan
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Wed, 9 Jun 2010 12:25:24 +0200
Cc: Michael Schmidt <michael.schmidt@xxxxxxxxxx>, Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
Delivery-date: Wed, 09 Jun 2010 03:26:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <46C13AA90DB8844DAB79680243857F0F062035@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <46C13AA90DB8844DAB79680243857F0F062033@xxxxxxxxxxxxxxxxxxx> <4C0E4A70.2040308@xxxxxxxxxx> <46C13AA90DB8844DAB79680243857F0F062035@xxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/; KDE/4.3.5; x86_64; ; )
On the DRBD mailing list I've seen it mentioned a couple of times that they ran
tests with bonding, and they claim that a bond with more than 2 NICs will
actually decrease performance because of the TCP reordering that then has to be
done.

That's the reason why I limit the storage connection to two NICs. I have a very
similar setup to yours in the making, by the way.
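
For what it's worth, the two-NIC storage bond I have in mind is nothing fancy.
On Debian/Ubuntu with the ifenslave package it looks roughly like this
(interface names and addresses are just examples, adjust to your own setup):

# /etc/network/interfaces -- dedicated storage bond, two slaves only
auto bond0
iface bond0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode balance-rr
    bond-miimon 100

balance-rr is what gives a single DRBD/iSCSI connection more than one NIC's
worth of throughput; with two slaves the out-of-order delivery stays
manageable, with more it apparently does not.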

On Tuesday 08 June 2010 15:55:47 Jonathan Tripathy wrote:
> Hi Michael,
> Thanks for the tips about using SSDs for the node OS drives.
> Regarding the NICs, I was thinking about using this for the nodes:
> http://www.intel.com/products/server/adapters/pro1000pt-dualport/pro1000pt-dualport-overview.htm
> and this for the server:
> http://www.intel.com/products/server/adapters/pro1000pt-quadport-low-profile/pro1000pt-quadport-low-profile-overview.htm
> Are those the cards you were talking about? They are very cheap on eBay, you see...
> Do you think 4-port bonding for the server is good enough for 8 nodes?
> Thanks
> ________________________________
> From: Michael Schmidt [mailto:michael.schmidt@xxxxxxxxxx]
> Sent: Tue 08/06/2010 14:49
> To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] My future plan
> Hi Jonathan,
> you should think about flash or SD cards as the Xen boot drive.
> That gives you lower cost and better energy efficiency.
> If you mount /tmp and /var/log on a tmpfs, those disks work very well and
>  last a long time.
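
(Side note from me: the tmpfs mounts Michael mentions are just a couple of
fstab lines; the sizes below are made up, pick whatever fits your RAM:

# /etc/fstab
tmpfs   /tmp       tmpfs   defaults,noatime,size=512m   0 0
tmpfs   /var/log   tmpfs   defaults,noatime,size=128m   0 0

Keep in mind that anything in /var/log is gone after a reboot, so ship the
logs to a central syslog host if you care about them.)
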
> If you don't need that much disk space for your storage, use SAS disks.
> SAS (10k/15k) disks give you many more IOPS than SATA disks (and more IOPS
>  per $/EUR as well). And very important: a large cache for your RAID
>  controller.
> The Intel e1000e is a pretty good choice. These cards have a large buffer
>  and generate only a few interrupts on your CPUs (compared to the Broadcom
>  NICs).
> Best Regards
> Michael Schmidt
> On 08.06.10 14:55, Jonathan Tripathy wrote:
>       My future plan currently looks like this for my VPS hosting solution, so
>  any feedback would be appreciated:
>       Each Node:
>       Dell R210 Intel X3430 Quad Core 8GB RAM
>       Intel PT 1Gbps Server Dual Port NIC using linux "bonding"
>       Small pair of HDDs for OS (Probably in RAID1)
>       Each node will run about 10 - 15 customer guests
>       Storage Server:
>       Some Intel Quad Core Chip
>       2GB RAM (Maybe more?)
>       LSI 8704EM2 RAID Controller (Think this controller does 3 Gbps)
>       Battery backup for the above RAID controller
>       4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in total)
>       Each RAID10 array will connect to 2 nodes (8 nodes per storage server)
>       Intel PT 1Gbps Quad port NIC using Linux bonding
>       Exposes 8 X 1.5TB iSCSI targets (each node will use one of these)
>       HP ProCurve 1800-24G switch to create 1 X 4-port trunk (for the storage
>  server) and 8 X 2-port trunks (for the nodes)
>       What do you think? Any tips?
>       Thanks
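
(On the iSCSI part of your plan: with something like iSCSI Enterprise Target,
each of those 8 exports is just a short stanza in /etc/ietd.conf; the IQN and
device path below are made up, and tgt would work just as well:

# /etc/ietd.conf -- one stanza per node, 8 in total
Target iqn.2010-06.com.example:storage.node1
    Lun 0 Path=/dev/vg_storage/node1,Type=blockio
    MaxConnections 1

Type=blockio bypasses the page cache on the storage box, which is usually what
you want when the domUs do their own caching.)
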
>       _______________________________________________
>       Xen-users mailing list
>       Xen-users@xxxxxxxxxxxxxxxxxxx
>       http://lists.xensource.com/xen-users

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users