WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

RE: [Xen-devel] Odd blkdev throughput results

To: "Daniel Stodden" <stodden@xxxxxxxxxx>, "Xen Developers" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Odd blkdev throughput results
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx>
Date: Sun, 9 Mar 2008 20:07:02 -0000
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
In-reply-to: <1205082786.14527.116.camel@xxxxxxxxxxxxxxxxxxxx>
References: <1205082786.14527.116.camel@xxxxxxxxxxxxxxxxxxxx>
> The fun (for me, fun is probably a personal thing) part is that
> throughput is higher than with TCP. May be due to the block layer being
> much thinner than TCP/IP networking, or the fact that transfers utilize
> the whole 4KB page size for sequential reads. Possibly some of both, I
> didn't try.

The big thing is that on network RX it is currently dom0 that does the copy. In 
the CMP case this leaves the data in the shared cache ready to be accessed by 
the guest. In the SMP case it doesn't help at all. In netchannel2 we're moving 
the copy to the guest CPU, and trying to eliminate it with smart hardware.

Block IO doesn't require a copy at all: blkback maps the guest's pages via grant references, so data can be transferred straight into domU memory.

> This is not my question. What strikes me is that for the blkdev
> interface, the CMP setup is 13% *slower* than SMP, at 661.99 MB/s.
> 
> Now, any ideas? I'm mildly familiar with both netback and blkback, and
> I'd never expected something like that. Any hint appreciated.

How stable are your results with hdparm? I've never really trusted it as a 
benchmarking tool.

The ramdisk isn't going to be able to DMA data into the domU's buffer on a 
read, so it will have to copy it. The hdparm running in domU probably doesn't 
actually look at any of the data it requests, so it stays local to the dom0 
CPU's cache (unlike a real app). Doing all that copying in dom0 is going to 
beat up the domU in the shared cache in the CMP case, but won't affect it as 
much in the SMP case.
 

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel