

Re: [Xen-users] RAID10 Array

To: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
Subject: Re: [Xen-users] RAID10 Array
From: Adi Kriegisch <kriegisch@xxxxxxxx>
Date: Thu, 17 Jun 2010 09:32:37 +0200
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 17 Jun 2010 00:36:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C195614.1030501@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4C195614.1030501@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)

> I have 3 RAID ideas, and I'd appreciate some advice on which would be 
> better for lots of VMs for customers.
> My storage server will be able to hold 16 disks. I am going to export 1 
> iSCSI LUN to each xen node. 6 nodes will connect to one storage server, 
> so that's 6 LUNs per server of equal size. The server will connect to a 
> switch using quad port bonded NICs (802.3ad), and each Xen node will 
> connect to the switch using Dual port bonded NICs.
hmmm... with one LUN per Xen node you will lose the ability to do live
migration -- or am I missing something?
Some people report problems with bonding more than two NICs for iSCSI, as
reordering of commands/packets across the links adds considerably to latency
and load.
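One mitigation (a sketch, not something from this thread -- the config file
path and values are assumptions) is a transmit hash policy that pins each TCP
flow, and thus each iSCSI session, to a single physical link, so no
reordering happens within a flow:

    # /etc/modprobe.d/bonding.conf -- illustrative values
    options bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

The flip side is that a single iSCSI session can then never exceed the
bandwidth of one NIC.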
If you want high performance and want to avoid latency issues, you might
want to look at ATA-over-Ethernet instead.
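For a rough idea of how simple an AoE export is (vblade from the
vblade/aoetools packages; device and interface names are just examples):

    # storage server: export /dev/md0 as AoE shelf 0, slot 1 on eth0
    vblade 0 1 eth0 /dev/md0

    # Xen node: load the initiator and discover targets
    modprobe aoe
    aoe-discover
    # the exported device then appears as /dev/etherd/e0.1

AoE speaks raw Ethernet frames instead of TCP/IP, which is where the latency
advantage comes from -- but it also means the traffic is not routable.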
> I'd appreciate any thoughts or ideas on which would be best for 
> throughput/IPOS
Your server is a Linux box exporting the RAIDs to your Xen servers? Then
just take fio and do some benchmarking. If you're using software RAID, then
you might want to add RAID5 to the equation.
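As a starting point, a quick fio job against the exported device might look
like this (the device path is an assumption, and writing to it destroys any
data on it):

    # mixed 4k random read/write for 60s, bypassing the page cache
    fio --name=randrw --filename=/dev/md0 --direct=1 \
        --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting

Run the same job against each candidate RAID layout and compare the reported
IOPS and latencies.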
I'd suggest measuring the performance of your RAID system with various
configurations and then choosing the level of isolation that gives the best
performance.
I don't think a setup with 6 hot spare disks is necessary -- at least not
when they're all connected to the same server. Depending on the quality of
your disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus
some cold spares in your office, you should be able to survive a broken hard
disk.
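Turning a cold spare into a hot spare later is a one-liner with mdadm
(device names are just examples):

    # a disk added to a healthy array becomes a hot spare
    mdadm /dev/md0 --add /dev/sdq
    mdadm --detail /dev/md0   # verify: the disk should be listed as spare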
You should also "smartctl -t long" your disks frequently (i.e. once per
week) and run regular consistency checks of your RAID to detect disk errors
early. (The worst-case scenario is never checking your disks: a disk breaks
and is replaced by a hot/cold spare, and the RAID resync then fails other
disks in your array, just because bad blocks were already there.)
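A sketch of how to automate both (disk names and schedule are assumptions;
this belongs in root's crontab):

    # long SMART self-test on every disk, Saturday 03:00
    0 3 * * 6  for d in /dev/sd[a-p]; do /usr/sbin/smartctl -t long $d; done

    # monthly md consistency check ("scrub")
    0 4 1 * *  echo check > /sys/block/md0/md/sync_action

Debian and friends ship a checkarray cron job for md devices that does
essentially the latter already.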

Hope this helps

-- Adi

Xen-users mailing list
