xen-users
RE: [Xen-users] RAID10 Array
Thanks for this Rob, and for being very helpful.
What is your view on ATA over Ethernet? It seems that it can work better with 802.3ad link aggregation, and may be simpler to set up.
Cheers
From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Thu 17/06/2010 11:07
To: Jonathan Tripathy
Subject: RE: [Xen-users] RAID10 Array
Hi Jonathan,
There is the complicated scripted scientific approach, which I did not have time for when I constructed things here, although others on the list might be able to help you with that sort of benchmarking.
I just ran Bonnie and timed dd on a couple of Linux VMs whilst running Sandra on a couple of Windows VMs, and arranged them to roughly finish at the same time. Whichever setup provided decent all-round results whilst all were running would be my choice. The scheduler selection in Dom0 also affected disk performance a fair bit.
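A minimal sketch of that kind of quick in-VM test (the test directory is an assumption; point it at a mount on the array under test, and note bonnie++ may simply not be installed):

```shell
#!/bin/sh
# Sketch: quick disk test inside a Linux VM. TESTDIR is an assumption;
# point it at a filesystem that lives on the array under test.
TESTDIR="${TESTDIR:-/tmp/disktest}"
mkdir -p "$TESTDIR"

# Timed sequential write; conv=fdatasync forces data to disk before dd
# exits, so the elapsed time reflects real write speed, not the page cache.
time dd if=/dev/zero of="$TESTDIR/ddfile" bs=1M count=64 conv=fdatasync

# bonnie++ gives a fuller picture (seeks, per-char IO); run it if available.
# -u nobody is needed when running as root; -r/-s keep the run small here.
if command -v bonnie++ >/dev/null 2>&1; then
    bonnie++ -d "$TESTDIR" -u nobody -r 512 -s 1024
else
    echo "bonnie++ not installed; skipping"
fi

rm -f "$TESTDIR/ddfile"
```

Running several of these in parallel across VMs (as described above) is what shows contention, not a single run in isolation.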
I also tested the replicated arrays we have using IOZone from Dom0. I attach my results from one of our systems using a simple RAID 1 array of both 7.2K SATA and 15K SAS disks. It will hopefully show you the effect of different RAID controller settings on different IO usage scenarios.
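The IOZone run itself is a single command; a sketch of that kind of invocation (the test file path and sizes are assumptions — the file size should exceed controller cache plus RAM, or you end up benchmarking the caches):

```shell
#!/bin/sh
# Sketch: IOZone automatic run from Dom0 against the array under test.
# The test file path and sizes here are assumptions; scale -s up past
# controller cache + RAM on a real run.
if command -v iozone >/dev/null 2>&1; then
    # -a: automatic mode; -s: file size; -r: record sizes to test;
    # -b: spreadsheet output, handy for comparing controller settings
    iozone -a -s 512m -r 4k -r 64k -r 1m \
           -f "${TESTFILE:-/tmp/iozone.tmp}" -b iozone-results.xls
else
    echo "iozone not installed; skipping"
fi
touch /tmp/iozone_sketch_done
```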
Our setup here was storage on the VM servers, replicated between them using DRBD; that might sound different to yours, but the testing approach is similar.
Testing first on the local arrays, tweaking the RAID controller settings and driver along with local IO cache settings, would be your first step. Then team up your NICs and use something like iperf to tweak your MTU and other settings for maximum bandwidth. Then run the same IOZone tests from a Dom0 over iSCSI and optimise your iSCSI as best you can. Lastly, test from the VM and optimise the Xen config as best you can. Splitting the above tasks will allow you to work on one area at a time, and to aim any questions you might have at the correct mailing list / forum for each one.
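The iperf step of that workflow can be sketched like this (the server address, interface name, and MTU values are all assumptions; the other end must be running `iperf -s`, and setting the MTU needs root and jumbo-frame support on the switch):

```shell
#!/bin/sh
# Sketch: throughput at several MTUs over a bonded link.
# SERVER, IFACE and the MTU list are assumptions for illustration.
SERVER="${SERVER:-192.168.1.10}"
IFACE="${IFACE:-bond0}"

for mtu in 1500 4000 9000; do
    # Needs root and a switch that supports jumbo frames
    ip link set dev "$IFACE" mtu "$mtu" 2>/dev/null || \
        echo "could not set MTU $mtu on $IFACE (need root / jumbo support)"
    echo "== MTU $mtu =="
    if command -v iperf >/dev/null 2>&1; then
        # 4 parallel streams, so 802.3ad can hash them across bond members
        iperf -c "$SERVER" -t 10 -P 4
    else
        echo "iperf not installed; skipping"
    fi
done
touch /tmp/iperf_sweep_done
```

Note that 802.3ad balances per flow, so a single TCP stream will never exceed one member link; the parallel streams are what exercise the whole bond.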
Rob
From: Jonathan Tripathy [mailto:jonnyt@xxxxxxxxxxx]
Sent: 17 June 2010 10:37
To: Robert Dunkley; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array
Can you suggest a way I could benchmark all these things? I've never benchmarked hard drives before.
From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Thu 17/06/2010 10:06
To: Jonathan Tripathy; Adi Kriegisch; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array
Hi,
I like the sound of idea 1 best. One big RAID 10 might sound nice, but are you sure it is purely bandwidth you need? For small-file latency, I think a number of smaller arrays spread between the different VMs might be faster (e.g. 4x RAID 10 or 4x RAID 5). Separate arrays also provide some degree of performance isolation between the LUNs. The RAID 1 part of RAID 10 does allow for read interleaving, but if you have random mixed reads and writes occurring fairly evenly across the VMs then separate arrays should be more responsive (even with read and write caching enabled on the RAID card).
The way to find out is to benchmark with multiple VMs simultaneously.
Rob
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy
Sent: 17 June 2010 09:09
To: Adi Kriegisch; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array
From: Adi Kriegisch [mailto:kriegisch@xxxxxxxx]
Sent: Thu 17/06/2010 08:32
To: Jonathan Tripathy
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] RAID10 Array
Hi!
> I have 3 RAID ideas, and I'd appreciate some advice on which would be
> better for lots of VMs for customers.
>
> My storage server will be able to hold 16 disks. I am going to export 1
> iSCSI LUN to each Xen node. 6 nodes will connect to one storage server,
> so that's 6 LUNs per server of equal size. The server will connect to a
> switch using quad port bonded NICs (802.3ad), and each Xen node will
> connect to the switch using dual port bonded NICs.

hmmm... with one LUN per server you will lose the ability to do live migration -- or do I miss something? Some people mention problems with bonding more than two NICs for iSCSI, as the reordering of the commands/packets adds tremendously to latency and load. If you want high performance and want to avoid latency issues, you might want to choose ATA-over-Ethernet.
> I'd appreciate any thoughts or ideas on which would be best for
> throughput/IOPS

Your server is a Linux box exporting the RAIDs to your Xen servers? Then just take fio and do some benchmarking. If you're using software RAID then you might want to add RAID5 to the equation. I'd suggest measuring the performance of your RAID system with various configurations and then choosing which level of isolation gives the best performance. I don't think a setup with 6 hot spare disks is necessary -- at least not when they're connected to the same server. Depending on the quality of your disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus some cold spares in your office, you should be able to survive a broken hard disk. You should also "smartctl -t long" your disks frequently (i.e. once per week) and do a more or less permanent resync of your RAID to be able to detect disk errors early. (The worst-case scenario is to never check your disks -- then a disk breaks and is replaced by a hot/cold spare -- and the RAID resync fails other disks in your array, just because the bad blocks are already there...)
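The fio suggestion above might look something like this as a starting point (the job parameters and test directory are assumptions; a 4k random read/write mix is a rough stand-in for many small VMs hitting the array at once):

```shell
#!/bin/sh
# Sketch: fio random-IO mix plus the weekly smartctl long self-test
# mentioned above. Directory, sizes and runtimes are all assumptions.

if command -v fio >/dev/null 2>&1; then
    # 70/30 random read/write at 4k, direct IO to bypass the page cache,
    # 4 concurrent jobs to approximate several VMs sharing the array.
    fio --name=vmmix --directory="${FIODIR:-/tmp}" \
        --rw=randrw --rwmixread=70 --bs=4k --direct=1 \
        --size=64m --numjobs=4 --runtime=10 --time_based \
        --group_reporting
else
    echo "fio not installed; skipping"
fi

# Weekly SMART long self-test on each disk, e.g. from root's crontab
# (device list is an assumption; adjust to your controller's naming):
#   0 3 * * 0  for d in /dev/sd[a-p]; do smartctl -t long "$d"; done
touch /tmp/fio_sketch_done
```

A periodic RAID resync (on Linux software RAID, `echo check > /sys/block/md0/md/sync_action`) serves the same early-detection purpose for the array itself.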
Hope this helps
--
Adi
-------------------------------------------------------------------------------------------------------------------
Hi Adi,
The RAID controller I'm planning to use is the MegaRAID SAS 9260-4i. The storage server will be built by Broadberry, so it will be using Supermicro kit.

As for the OS on the server, I was thinking of using Windows Storage Server actually, but maybe this is a bad idea? You're correct about the live migration; I may implement some sort of clustered iSCSI filesystem, but the main issue at the minute is the RAID array.

I've heard the same things about bonding 2 vs 4 NICs as well.

Currently, I'm leaning towards the RAID10 array with 14 disks and 2 hot spares.
The SAQ Group
Registered Office: 18 Chapel Street, Petersfield, Hampshire GU32 3DZ
SAQ is the trading name of SEMTEC Limited. Registered in England & Wales. Company Number: 06481952
http://www.saqnet.co.uk AS29219
SAQ Group delivers high quality, honestly priced communication and I.T. services to UK Business.
Broadband : Domains : Email : Hosting : CoLo : Servers : Racks : Transit : Backups : Managed Networks : Remote Support.
ISPA Member
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users