Re: [Xen-users] My future plan
On the DRBD mailing lists I've seen it mentioned a couple of times that they did
tests with bonding and found that a bond with more than 2 NICs will actually
decrease performance because of the TCP reordering that has to be done.
That's the reason why I limit the storage connection to two NICs. I have a
setup very similar to yours in the making, by the way.
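
For what it's worth, a two-NIC bond for the storage link could look roughly like
this with Debian-style ifupdown and the ifenslave package (interface names and
addresses are just placeholders):

  # /etc/network/interfaces -- rough sketch of a two-NIC storage bond
  auto bond0
  iface bond0 inet static
      address 10.0.0.1
      netmask 255.255.255.0
      bond-slaves eth1 eth2
      bond-mode balance-rr    # round-robin striping; this is where TCP reordering bites
      bond-miimon 100         # link check interval in ms
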
On Tuesday 08 June 2010 15:55:47 Jonathan Tripathy wrote:
> Hi Michael,
>
> Thanks for the tips about using SSDs for the node OS drives.
>
> Regarding the NIC, I was thinking about using this for the nodes:
>
> http://www.intel.com/products/server/adapters/pro1000pt-dualport/pro1000pt-dualport-overview.htm
>
> and this for the server:
>
> http://www.intel.com/products/server/adapters/pro1000pt-quadport-low-profile/pro1000pt-quadport-low-profile-overview.htm
>
> Are those the cards you were talking about? They are very cheap on eBay, you
> see...
>
> Do you think 4-port bonding for the server is good enough for 8 nodes?
>
> Thanks
>
> ________________________________
>
> From: Michael Schmidt [mailto:michael.schmidt@xxxxxxxxxx]
> Sent: Tue 08/06/2010 14:49
> To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] My future plan
>
>
> Hi Jonathan,
>
> you should think about flash or SD cards as the Xen boot drive.
> That gives you lower cost and better energy efficiency.
> If you mount /tmp and /var/log on tmpfs, these disks work very well and
> last a long time.
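>
> For example, the tmpfs mounts in /etc/fstab could look something like this
> (the sizes are only a guess, adjust them to your RAM):
>
>   # /etc/fstab -- keep the frequent writes off the flash media
>   tmpfs   /tmp       tmpfs   defaults,noatime,size=512m   0 0
>   tmpfs   /var/log   tmpfs   defaults,noatime,size=256m   0 0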
>
> If you don't need that much disk space for your storage, use SAS disks.
> SAS (10k/15k) disks give you many more IOPS than SATA disks (more IOPS
> per $/EUR as well). And, very important: a large cache for your RAID
> controller.
>
> The Intel e1000e is a pretty good choice. These cards have a large buffer and
> generate only a few interrupts on your CPUs (compared to the
> Broadcom NICs).
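>
> You can check the ring buffer and interrupt coalescing settings with ethtool,
> e.g. (the interface name is just an example):
>
>   ethtool -g eth1   # show RX/TX ring sizes
>   ethtool -c eth1   # show interrupt coalescing settings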
>
> Best Regards
>
> Michael Schmidt
> On 08.06.10 14:55, Jonathan Tripathy wrote:
>
> My future plan currently looks like this for my VPS hosting solution, so
> any feedback would be appreciated:
>
> Each Node:
> Dell R210 Intel X3430 Quad Core 8GB RAM
> Intel PT 1Gbps Server Dual Port NIC using linux "bonding"
> Small pair of HDDs for OS (Probably in RAID1)
> Each node will run about 10 - 15 customer guests
>
>
> Storage Server:
> Some Intel Quad Core Chip
> 2GB RAM (Maybe more?)
> LSI 8704EM2 RAID Controller (I think this controller does 3 Gbps)
> Battery backup for the above RAID controller
> 4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in total)
> Each RAID10 array will connect to 2 nodes (8 nodes per storage server)
> Intel PT 1Gbps Quad port NIC using Linux bonding
> Exposes 8 X 1.5TB iSCSI targets (each node will use one of these)
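>
> With tgt (scsi-target-utils), for example, one target per node might look
> roughly like this (the IQN, backing device and address are placeholders):
>
>   # /etc/tgt/targets.conf -- one target per node, rough sketch
>   <target iqn.2010-06.com.example:storage.node1>
>       backing-store /dev/array1/node1      # slice of one RAID10 array
>       initiator-address 192.168.10.11      # only node 1 may log in
>   </target>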
>
> HP Procurve 1800-24G switch to create 1 X 4-port trunk (for the storage
> server) and 8 X 2-port trunks (for the nodes)
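>
> On the Linux side the bonds would need a matching mode, e.g. 802.3ad if the
> trunks are set up as LACP (just a sketch of the bonding module options):
>
>   # /etc/modprobe.d/bonding.conf
>   options bonding mode=802.3ad miimon=100 lacp_rate=fast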
>
> What do you think? Any tips?
>
> Thanks
>
>
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users