This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Xen and I/O Intensive Loads

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen and I/O Intensive Loads
From: Daniel Mealha Cabrita <dancab@xxxxxxxxxxxx>
Date: Wed, 26 Aug 2009 13:37:58 -0300
Delivery-date: Wed, 26 Aug 2009 09:40:14 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A9507E90200009900017BED@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: UTFPR
References: <4A9507E90200009900017BED@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9
250 users is normally no big deal for an e-mail server, even a virtualized 
one, though I don't know how GroupWise behaves.

I suggest you change your domU I/O scheduler to minimize the impact of 
dom0-domU I/O latency:

BLAH: your domU block device.
# echo deadline > /sys/block/BLAH/queue/scheduler
Then experiment with the other tunables under /sys/block/BLAH/queue/.
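Side note (my addition, not from the original mail): when you read that sysfs 
file, the kernel lists every available scheduler and marks the active one with 
brackets, so you can verify the change took effect. A small POSIX-shell sketch 
of parsing that output (the sample line is illustrative, not read from /sys):

```shell
# Reading the scheduler file shows all compiled-in schedulers and marks
# the active one with brackets, e.g.:
#   $ cat /sys/block/BLAH/queue/scheduler
#   noop anticipatory deadline [cfq]
# Extract the bracketed (active) scheduler from such a line:
line="noop anticipatory deadline [cfq]"   # sample output, not read from /sys
active=$(printf '%s\n' "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "$active"
```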

As for dom0, I don't know your storage and RAID setup, so it may (or may 
not) be a good idea to try to reduce the latency between dom0 and the storage:

BLAH: your FC device paths (sda, sdb ... sdaa, sdab, etc.)
# echo noop > /sys/block/BLAH/queue/scheduler
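Since FC LUNs can show up as dozens of sd* nodes, a loop saves typing. A 
sketch (the set_noop helper and the sd* naming assumption are mine, not from 
the thread); it takes the sysfs root as a parameter so you can dry-run it 
against a scratch directory before touching the real /sys as root:

```shell
# Hypothetical helper: set the noop elevator on every sd* device under
# the given sysfs root.  Pass /sys (as root, in dom0) for the real thing.
set_noop() {
    root=${1:-/sys}
    for f in "$root"/block/sd*/queue/scheduler; do
        [ -e "$f" ] || continue      # skip if the glob matched nothing
        echo noop > "$f"
    done
}

# Usage in dom0 (as root):
#   set_noop /sys
```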

On Wednesday 26 August 2009 13:01:13 Nick Couchman wrote:
> Hi, folks,
> I'm attempting to run an e-mail server on Xen.  The e-mail system is Novell
> GroupWise, and it serves about 250 users.  The disk volume for the e-mail
> is on my SAN, and I've attached the FC LUN to my Xen host, then used the
> "phy:/dev..." method to forward the disk through to the domU.  I'm running
> into an issue with high I/O wait on the box (~250%) and large load averages
> (20-40 for the 1/5/15 minute average).  I was wondering if anyone has ideas
> on tuning the domU to handle this - is there a better way to forward the
> disk device through, should I try using an iSCSI software initiator in the
> domU, or is it just a bad idea to put an I/O load like this in a domU? 
> Unfortunately mapping the entire F/C card through to the domU isn't really
> an option - the FC card accesses other SAN volumes for the Xen host, so it
> needs to be present in dom0.
> I'm running Xen 3.2.0 on SLES 10 SP2, on a Dell PowerEdge R610 server.  The
> FC HBA is a QLE2462, dual-channel 4Gb FC card.  Any help, hints, etc., are
> greatly appreciated!
> -Nick

 Daniel Mealha Cabrita
 Divisao de Suporte Tecnico
 AINFO / Reitoria / UTFPR

Xen-users mailing list