This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Re: Xen and I/O Intensive Loads

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Re: Xen and I/O Intensive Loads
From: "Oliver Wilcock" <oliver@xxxxxxx>
Date: Thu, 27 Aug 2009 11:11:01 -0400 (EDT)
Delivery-date: Thu, 27 Aug 2009 08:11:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
Importance: Normal
In-reply-to: <20090827143312.9A2C040032F4@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <20090827143312.9A2C040032F4@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: SquirrelMail/1.4.17
Do you mean the Groupwise data volume is on one RAID10 made up of 30 disks
dedicated to Groupwise data?  Or that this one RAID volume is contending
with other volumes using the same disks on the SAN?  I'm not familiar with
how Groupwise works; does its ideal deployment call for separate sets of
spindles for temp files, the database, and the transaction logs?

Is the RAID block/chunk/stripe size aligned with the xfs sunit/swidth
parameters?  Are the xfs block boundaries aligned with the RAID stripe
boundaries?
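For anyone checking this on their own array: the arithmetic is simple. XFS expresses sunit/swidth in 512-byte sectors, where sunit matches the RAID chunk and swidth is the chunk times the number of data spindles. The chunk size and spindle count below are illustrative assumptions, not figures from this thread:

```shell
# Hypothetical geometry: 64 KiB RAID chunk, 30-disk RAID10 (15 data spindles).
chunk_kib=64
data_disks=15
sunit=$((chunk_kib * 2))        # sunit in 512-byte sectors: 64 KiB = 128 sectors
swidth=$((sunit * data_disks))  # full stripe width across all data spindles
echo "sunit=$sunit swidth=$swidth"
# mkfs.xfs takes the same geometry directly at filesystem-creation time:
#   mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/yourdevice
# and xfs_info on a mounted filesystem reports the current sunit/swidth.
```

Comparing the xfs_info output against the array's reported chunk size is a quick way to spot a misaligned filesystem.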

Is that 4GB of write-back cache?  What is the write-back delay?  What is
the rotational speed (rpm) of the drives?

> Date: Thu, 27 Aug 2009 08:25:08 -0600
> From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
> Subject: Re: [Xen-users] Xen and I/O Intensive Loads
> Let's see...the SAN has two controllers with a 4GB cache in each
> controller.  Each controller has a single 4 x 2Gb FC interface (four
> 2Gb ports).  Two of those ports go to the switch; the other two create
> redundant loops with the disk array (going from the controller to one
> disk array, then to the next disk array, then to the second controller).
> The disks are FC-ATA disks; there are 30 active disks (with 2
> hot-spares).  The SAN builds RAID sets across the disks on a per-volume
> basis, and my e-mail volume is using a RAID10 configuration.
> I've done most of the filesystem tuning I can without completely
> rebuilding the filesystem - atime is turned off.  I've also adjusted the
> elevator per previous suggestions and played with some of the tuning
> parameters for the elevators.  I haven't gotten around to trying
> something other than XFS yet - it's going to take a while to sync the
> data from the existing FS over to ext3 or something similar.  I'm also
> contacting the SAN vendor to get their help with the situation.
> -Nick
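For readers finding this thread in the archive, the tuning Nick mentions (disabling atime and changing the I/O elevator) is applied along these lines; the device name and mount point below are placeholders, not details from this thread:

```shell
# Sketch only: "sdb" and "/srv/groupwise" are hypothetical placeholders.
dev=sdb
mnt=/srv/groupwise
for cmd in \
  "mount -o remount,noatime $mnt" \
  "echo deadline > /sys/block/$dev/queue/scheduler"
do
  echo "would run: $cmd"   # printed rather than executed, for illustration
done
# cat /sys/block/$dev/queue/scheduler shows the available and active elevators.
```

Adding noatime to the volume's entry in /etc/fstab makes the change persist across remounts.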

Xen-users mailing list
