xen-users

Re: Subject: RE: [Xen-users] Poor disk io performance in domUs

To: "Andrej Radonic" <rado@xxxxxxxxxxxxx>
Subject: Re: Subject: RE: [Xen-users] Poor disk io performance in domUs
From: "David Brown" <dmlb2000@xxxxxxxxx>
Date: Fri, 22 Jun 2007 08:16:09 -0700
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 22 Jun 2007 08:14:11 -0700
In-reply-to: <467BC6D2.3030200@xxxxxxxxxxxxx>
List-id: Xen user discussion <xen-users.lists.xensource.com>
References: <467BC6D2.3030200@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On 6/22/07, Andrej Radonic <rado@xxxxxxxxxxxxx> wrote:
Mats,

>> dd simultaneously in both dom0 = 170 MB/s
> I take it you mean "two parallel 'dd' commands at the same time"? That
> would still write to the same portion of disk (unless you specifically
> choose different partitions?)

it's different partitions - one dedicated partition for each domU. The
partitions are created as "virtual" block devices with the Dell storage
box manager.

>> dd simultaneously in two domU = 34 MB/s
> I take it this means two different DomU doing "dd"?
> Is that 34 MB/s "total" (i.e. 17MB/s per domain) or per domain (68 MB/s
> total)?

sorry, good you asked: it's the total, i.e. 17MB/s per domain! I guess
you are getting the picture now as to my feelings... ;-)


Yeah, I've seen some interesting cases where the raw I/O performance is
very good but Xen doesn't handle it very well for the domU's. There's a
small kernel process running in dom0 for each virtual block device
exported to a domU (it mostly does translation), and I've found that
the more domU's you bring up all doing I/O, the more those dom0
processes end up doing just as much work as all the dd operations of
all the domU's combined. So if you have 6 domU's each doing about 15%
with dd's, your dom0 is going to be pushing 100% of its cpu usage,
doing a crap load of work, and the I/O performance in the domU's will
fall apart. So it does pay to make sure your dom0 can handle
translating everything (note this should go away with the IOMMU
support, I would hope).
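
You can watch this happening in dom0 while the domU's are running
their dd's. Roughly like this (the backend thread names depend on your
dom0 kernel, so treat the grep pattern as a guess):

  # in dom0, while the domU benchmarks are running
  xm top                                      # per-domain CPU view (xentop works too)
  ps -eo pid,pcpu,comm | egrep 'blkback|xvd'  # CPU eaten by the block backends
  top                                         # press 1 - are dom0's cpus pegged?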

Also, I'd check what the Dell virtual block manager can do: try
creating virtual block devices and then dd'ing to them in parallel in
the dom0. It might simply be that the Dell virtual block device manager
can only handle 60MB/s total across all the block devices it creates.
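
Something like this in dom0 would tell you (the device names are just
placeholders for the Dell virtual block devices, and only use devices
you can safely overwrite):

  # two parallel writes straight to the virtual block devices
  dd if=/dev/zero of=/dev/sdb bs=1M count=4096 &
  dd if=/dev/zero of=/dev/sdc bs=1M count=4096 &
  wait
  # each dd prints its own MB/s when it finishes; add them for the total
  # (oflag=direct, if your dd supports it, keeps the page cache out of it)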

I've got experience with the HP virtual block device thingy, and you
can actually tell it to use only two of the disks and raid0/1 them,
depending on what you want to do, then export the raided disks to the
kernel. Since we have 6 drives, that gives at most 3 block devices,
each with its own path to disk, which lets you test whether the pipe
between the disks and the OS is the bottleneck. I would suggest doing
something similar.
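
If the Dell box can't do that kind of split itself, a rough software
analogue from dom0 would be md raid1 pairs, something like this (disk
names are placeholders, and mdadm will clobber whatever is on them):

  # pair the six disks into three raid1 devices
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
  # then hand one md device to each of three domU's and rerun the
  # parallel dd test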

Thanks,
- David Brown

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users