Hi Michael,
Thanks for the tips about using SSDs for the node OS drives.
Regarding the NICs, I was thinking about using this for the
nodes:
and this for the server:
Are those the cards you were talking about? They're very cheap on
eBay, you see...
Do you think 4-port bonding for the server is good enough for 8
nodes?
Thanks
Hi Jonathan,
you should think about flash or SD cards as the
Xen boot drive.
That gives you lower cost and better energy
efficiency.
If you mount /tmp and /var/log on tmpfs, these drives work very
well and last a long time.
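For reference, a minimal /etc/fstab sketch for those tmpfs mounts; the size limits are just assumptions, so tune them to the RAM in each node:

    # keep frequently written paths off the flash/SD boot device
    # size= values are assumptions; adjust to the node's available RAM
    tmpfs   /tmp       tmpfs   defaults,noatime,size=512m   0  0
    tmpfs   /var/log   tmpfs   defaults,noatime,size=256m   0  0

Keep in mind that anything logged to tmpfs is lost on reboot, so shipping logs to a remote syslog host is worth considering.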
If you don't need that much disk space for your storage, use
SAS disks.
SAS (10k/15k) disks give you many more IOPS than SATA disks
(and more IOPS per $/€ as well).
And very important: a large cache on your
RAID controller.
The Intel e1000e is a pretty good choice. These cards have a
large buffer and generate only a few interrupts on your CPUs (compared to
the Broadcom NICs).
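If you want to see what the driver is doing on your hardware, something like the following helps; the interface name and the values are just examples, and the exact limits depend on the card and driver:

    # show current and maximum ring buffer sizes for the NIC
    ethtool -g eth0
    # raise the RX/TX rings towards the hardware maximum (values are examples)
    ethtool -G eth0 rx 4096 tx 4096
    # show and tune interrupt coalescing, which is what keeps the interrupt rate low
    ethtool -c eth0
    ethtool -C eth0 rx-usecs 100
    # quick check of how many interrupts the card actually generates per CPU
    grep eth0 /proc/interrupts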
Best Regards
Michael Schmidt
On 08.06.10 14:55, Jonathan Tripathy wrote:
My future plan for my VPS hosting solution currently looks like
this, so any feedback would be appreciated:
Each Node:
Dell R210, Intel X3430 quad core, 8GB RAM
Intel PT 1Gbps dual-port server NIC using Linux bonding
Small pair of HDDs for OS (Probably in RAID1)
Each node will run about 10 - 15 customer guests
Storage Server:
Some Intel Quad Core Chip
2GB RAM (Maybe more?)
LSI 8704EM2 RAID controller (I think this controller does 3
Gbps)
Battery backup for the above RAID controller
4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in
total)
Each RAID10 array will connect to 2 nodes (8 nodes per storage
server)
Intel PT 1Gbps Quad port NIC using Linux bonding
Exposes 8 x 1.5TB iSCSI targets (each node will use one of
these; see the target sketch below)
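As a rough illustration of how each of those targets could be defined with iSCSI Enterprise Target (the IQN, backing device, and CHAP credentials below are made-up examples):

    # /etc/ietd.conf -- one Target stanza per node
    Target iqn.2010-06.example.storage:node1
            Lun 0 Path=/dev/sdb1,Type=blockio
            IncomingUser node1user examplesecret123

Per-node access restrictions would then go in initiators.allow, so each node can only log in to its own target.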
HP ProCurve 1800-24G switch to create 1 x 4-port trunk (for the
storage server) and 8 x 2-port trunks (for the nodes); a bonding sketch follows below
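As a sketch of the Linux side of that trunking, here is a Debian-style /etc/network/interfaces stanza for the storage server's 4-port bond; the interface names, address, and bond mode are assumptions (802.3ad needs LACP on the switch, so if the 1800-24G trunks are static, balance-xor may be the right mode instead):

    # 4-port bond on the storage server (names and addresses are examples)
    auto bond0
    iface bond0 inet static
            address 10.0.0.10
            netmask 255.255.255.0
            bond-slaves eth0 eth1 eth2 eth3
            bond-mode 802.3ad          # or balance-xor for a static switch trunk
            bond-miimon 100
            bond-xmit-hash-policy layer3+4

The same pattern with two slave interfaces would apply to each node's 2-port bond.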
What do you think? Any tips?
Thanks