WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-devel

Re: [Xen-devel] poor domU VBD performance.

To: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>, "Peter Bier" <peter_bier@xxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] poor domU VBD performance.
From: Andrew Theurer <habanero@xxxxxxxxxx>
Date: Mon, 28 Mar 2005 15:48:34 -0600
Delivery-date: Mon, 28 Mar 2005 21:48:41 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E38CC@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E38CC@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.5
On Monday 28 March 2005 14:14, Ian Pratt wrote:
> > > > > I found out that file-system IO and raw IO in dom0 (using dd as
> > > > > a tool to test throughput from the disk) are about exactly the
> > > > > same as with a standard Linux kernel without Xen.  But raw IO
> > > > > from domU to an unused disk (a second disk in the system) is
> > > > > limited to forty percent of the speed I get within dom0.
> >
> > Is the second disk exactly the same as the first one?  I'll try an IO
> > test here on the same disk array with dom0 and domU and see what I get.
>
> I've reproduced the problem and it's a real issue.
> It only affects reads, and is almost certainly down to how the blkback
> driver passes requests down to the actual device.
>
> Does anyone on the list actually understand the changes made to linux
> block IO between 2.4 and 2.6?
>
> In the 2.6 blkfront there is no run_task_queue() to flush requests to
> the lower layer, and we use submit_bio() instead of 2.4's
> generic_make_request(). It looks like this is happening synchronously
> rather than queueing multiple requests. What should we be doing to cause
> things to be batched?
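
(For concreteness, the 2.6-era path being discussed looks roughly like the
sketch below.  This is not the real blkfront/blkback code, just an
illustration; the bio end_io signature changed across 2.6 releases, so the
bio_end_io_t typedef is used to avoid committing to one.)

  /* Rough sketch of 2.6-style submission: build one bio and hand it to the
   * block layer with submit_bio().  There is no 2.4-style
   * run_task_queue(&tq_disk) flush afterwards; merging and dispatch are
   * left to queue plugging and the IO scheduler. */
  #include <linux/bio.h>
  #include <linux/blkdev.h>
  #include <linux/fs.h>
  #include <linux/mm.h>

  static void submit_read(struct block_device *bdev, sector_t sector,
                          struct page **pages, int nr_pages,
                          bio_end_io_t *end_io, void *private)
  {
          struct bio *bio = bio_alloc(GFP_NOIO, nr_pages);
          int i;

          if (!bio)
                  return;                 /* error handling elided */

          bio->bi_bdev    = bdev;         /* backing device */
          bio->bi_sector  = sector;       /* start sector */
          bio->bi_end_io  = end_io;       /* completion callback */
          bio->bi_private = private;

          for (i = 0; i < nr_pages; i++)
                  bio_add_page(bio, pages[i], PAGE_SIZE, 0);

          submit_bio(READ, bio);          /* queued and merged by the elevator */
  }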

To my knowledge you cannot queue multiple bio requests at once.  The IO 
schedulers should batch them up before submitting to the actual devices.  I 
tried xen-2.0.5 and xen-unstable with a sequential read test using a 256k 
request size and 8 reader threads with O_DIRECT on an LVM RAID-0 SCSI array 
(no HW cache) and got:

xen-2-dom0-2.6.10:  177 MB/sec
xen-2-domU-2.6.10:  185 MB/sec
xen-3-dom0-2.6.11:  177 MB/sec
xen-3-domU-2.6.11:  185 MB/sec
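
A minimal single-threaded sketch of that read test (the real run used 8
reader threads; the device path and total size are just example values):

  /* seqread.c - sequential O_DIRECT reads in 256k requests, prints MB/sec. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <unistd.h>

  #define REQ_SIZE (256 * 1024)          /* 256k request size */
  #define NREQS    4096                  /* ~1 GB total, example value */

  int main(int argc, char **argv)
  {
      const char *dev = argc > 1 ? argv[1] : "/dev/sdb";   /* example device */
      struct timeval t0, t1;
      long long total = 0;
      double secs;
      void *buf;
      int i, fd;

      fd = open(dev, O_RDONLY | O_DIRECT);
      if (fd < 0) { perror(dev); return 1; }

      /* O_DIRECT needs an aligned buffer */
      if (posix_memalign(&buf, 4096, REQ_SIZE)) { perror("posix_memalign"); return 1; }

      gettimeofday(&t0, NULL);
      for (i = 0; i < NREQS; i++) {
          ssize_t n = read(fd, buf, REQ_SIZE);
          if (n <= 0) break;
          total += n;
      }
      gettimeofday(&t1, NULL);

      secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
      printf("%lld bytes in %.2f s = %.1f MB/sec\n",
             total, secs, total / secs / (1024 * 1024));
      free(buf);
      close(fd);
      return 0;
  }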

Better results with VBD :)  I am wondering whether going through two layers of 
IO schedulers streams the IO better.  I was using the AS scheduler.  I am 
going to try the noop scheduler and see what I get.
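
For anyone repeating the scheduler comparison, a quick sketch of switching
one disk's elevator to noop at runtime; it assumes the per-queue sysfs file
that appeared around 2.6.10 and an example device name.  On kernels without
it, booting with elevator=noop is the fallback.

  /* set-noop.c - write "noop" into /sys/block/<disk>/queue/scheduler.
   * The default disk name here is only an example. */
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      const char *disk = argc > 1 ? argv[1] : "sdb";   /* example disk */
      char path[128];
      FILE *f;

      snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", disk);
      f = fopen(path, "w");
      if (!f) { perror(path); return 1; }
      fputs("noop\n", f);
      fclose(f);
      return 0;
  }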

What block size were you using with dd?

-Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel