
[Xen-users] Re: Xen and I/O Intensive Loads



Oliver,

The way the Compellent system works is that it does per-volume RAID.  So there are 30 disks presented to the SAN controllers as a JBOD, and each volume is assigned one or more RAID levels; the controller stripes the data and moves it between RAID levels.  The GroupWise data volume is configured as RAID10 only, but it does contend with other volumes on the same set of disks.  GroupWise does not use separate disks or volumes for temporary data, databases, logs, etc. - everything is kept in the same filesystem, and there really isn't much documentation on whether it's possible to separate those things, or how.


I'm not sure about the RAID block/chunk/stripe size - the user interface on the controller doesn't really lend itself to that sort of detailed customization.  I'll have to dig a little to find out.  Drives are FCATA 7200 RPM, and the 4GB cache is used for both reads and writes - I'm not sure whether it operates write-through or write-back; I'll check on that.
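As a starting point, xfs_info should report whatever sunit/swidth the filesystem was created with (0 means XFS was never told about the RAID geometry).  The mount point below is just a placeholder for our actual one:

    # Report XFS geometry; look for sunit/swidth in the "data" line.
    # /mnt/groupwise is a placeholder for the real mount point.
    xfs_info /mnt/groupwise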


-Nick

>>> On 2009/08/27 at 09:11, "Oliver Wilcock" <oliver@xxxxxxx> wrote:

Nick,
Do you mean the GroupWise data volume is on one RAID10 comprised of 30
disks dedicated to GroupWise data?  Or that this one RAID volume is
contending with other volumes using the disks on the SAN?  I'm not
familiar with how GroupWise works; does the ideal deployment call for
separate sets of spindles for temp files, the database, and transaction
logs?

Is the RAID block/chunk/stripe size aligned with the XFS sunit/swidth
parameters?  Are the XFS block boundaries aligned with the RAID blocks?
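For a 30-disk RAID10 (15 data spindles) with, say, a 64KB chunk - both
numbers are assumptions, as is the device name - alignment would look
roughly like this:

    # Hypothetical geometry: 64KB chunk (su), 15 data spindles (sw).
    mkfs.xfs -d su=64k,sw=15 /dev/sdb1
    # For an existing filesystem, the geometry can be hinted at mount
    # time instead; sunit/swidth here are in 512-byte sectors
    # (64KB = 128 sectors, 128 * 15 = 1920).
    mount -o sunit=128,swidth=1920 /dev/sdb1 /mnt/groupwise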

Is that 4GB of write-back cache?  What is the write-back delay?  How fast
are the drives, in RPM?


> Date: Thu, 27 Aug 2009 08:25:08 -0600
> From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
> Subject: Re: [Xen-users] Xen and I/O Intensive Loads
>
> Let's see...the SAN has two controllers with a 4GB cache in each
> controller.  Each controller has a single 4-port 2Gb FC adapter.  Two of
> those ports go to the switch; the other two create redundant loops with
> the disk array (going from the controller to one disk array, then to the
> next disk array, then to the second controller).  The disks are FCATA
> disks, and there are 30 active disks (with 2 hot-spares).  The SAN does
> RAID across the disks on a per-volume basis, and my e-mail volume is
> using a RAID10 configuration.
>
> I've done most of the filesystem tuning I can without completely
> rebuilding the filesystem - atime is turned off.  I've also adjusted the
> elevator per previous suggestions and played with some of the tuning
> parameters for the elevators.  I haven't gotten around to trying
> something other than XFS yet - it's going to take a while to sync
> everything from the existing FS over to EXT3 or something similar.  I'm
> also contacting the SAN vendor to get their help with the situation.
>
> -Nick
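
For what it's worth, the elevator and atime changes you mention are the
sort of thing I'd double-check like this (device and mount point are
placeholders):

    # Check which I/O scheduler is active (shown in brackets), then
    # switch it; sdb is a placeholder for the actual backing device.
    cat /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler
    # Disable atime updates without a rebuild - remount with noatime
    # (or add it to the volume's /etc/fstab entry).
    mount -o remount,noatime /mnt/groupwise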





_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

