This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Xen and I/O Intensive Loads

To: Nick Couchman <Nick.Couchman@xxxxxxxxx>
Subject: Re: [Xen-users] Xen and I/O Intensive Loads
From: John Madden <jmadden@xxxxxxxxxxx>
Date: Wed, 26 Aug 2009 13:32:44 -0400
Cc: XEN Mailing List <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 26 Aug 2009 10:35:08 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A9507E90200009900017BED@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4A9507E90200009900017BED@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> I'm attempting to run an e-mail server on Xen.  The e-mail system is
> Novell GroupWise, and it serves about 250 users.  The disk volume for
> the e-mail is on my SAN, and I've attached the FC LUN to my Xen host,
> then used the "phy:/dev..." method to forward the disk through to the
> domU.  I'm running into an issue with high I/O wait on the box (~250%)
> and large load averages (20-40 for the 1/5/15 minute average).  I was
> wondering if anyone has ideas on tuning the domU to handle this - is
> there a better way to forward the disk device through, should I try
> using an iSCSI software initiator in the domU, or is it just a bad
> idea to put an I/O load like this in a domU?  Unfortunately, mapping
> the entire FC card through to the domU isn't really an option - the
> FC card accesses other SAN volumes for the Xen host, so it needs to be
> present in dom0.

If this turns out to be a global issue, I'd certainly like to hear about
it.  I recently load-tested a postfix+cyrus domU with 6 SATA-backed
spools and 6 FC-backed meta partitions for about 300,000 IMAP accounts
and consistently delivered around 100 messages/sec to them.  That load
was obviously all I/O-bound, but at what I'd consider an acceptable
delivery rate (which seems to be the most performance-challenging
operation, at least with Cyrus).  I did see similar load averages,
though.

This was with a RHEL 5 domU, a CentOS 5 dom0, and phy: mappings.
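For anyone comparing setups: a phy: mapping like the ones discussed in this
thread is a single line in the domU config.  A minimal sketch (the device
path and virtual device name below are illustrative, not from either of the
setups described above):

```
# xm-style domU config fragment: pass a dom0 block device (an FC LUN,
# LVM volume, etc.) through to the guest as a paravirtualized disk.
# Format is 'phy:<dom0 device>,<vbd name in domU>,<mode>'.
disk = [ 'phy:/dev/VolGroup00/mailspool,xvdb,w' ]
```

The alternative mentioned in the original question - running an iSCSI
software initiator inside the domU - would bypass the blkback/blkfront
path entirely, at the cost of pushing the storage traffic through the
domU's network stack instead.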


John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana

Xen-users mailing list