This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-users] Performance of network block devices (iSCSI)

To: "Simon Hobson" <linux@xxxxxxxxxxxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Performance of network block devices (iSCSI)
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Fri, 14 Nov 2008 11:18:50 +1100
Delivery-date: Thu, 13 Nov 2008 16:19:27 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <a06240809c542456d400a@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <a06240809c542456d400a@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclF0sDKNJ1/5BiYSXCC5c1YXkjyWgAG5QTQ
Thread-topic: [Xen-users] Performance of network block devices (iSCSI)
> I have a 'backup' server to which I have a number of machines dumping
> their filesystems using rdiff-backup. The backup server is storing
> this data on a volume mounted off an iSCSI store (Dell/EMC AX100i).
> I've found the performance to be 'very poor' and asked on the
> rdiff-backup list; one response I got was:
> >I found that the network and I/O scheduler in Xen was a single
> >pipeline and contention was terrible. We got terrible performance
> >when we used network block devices with Xen, as the VMs would just
> >sit in wait-I/O all the time when accessing the network block devices
> >(we tried AoE, NBD, iSCSI).
> >...
> >We ended up moving to OpenVZ and haven't looked back.
> I've done a test after copying the store to a local disk (xvda), which
> is another volume in the LVM setup of the Xen host. It's notable
> that copying the backup off the iSCSI volume ran at only about
> 0.5 GB/hr. The difference is quite dramatic: a backup from one client
> takes 36s to a local disk, but 9.5 minutes to the iSCSI box -
> that's a 15-fold difference.
> While copying to or from the iSCSI volume, the backup server sits at
> 100% (occasionally 99%) wait-I/O, while backing up to the local
> virtual disk it shows the normal levels of processor activity I would
> expect (with minimal wait-I/O).
> Systems are Debian Lenny, running on a Dell 2650 with hardware raid
> (PERC) and plenty of RAM.
> Is there something I've missed ? Is there anything I can do ?

Try turning off all network offloading (ethtool -K ...) in the DomU. If
that doesn't improve things, try turning it off on the vif in Dom0, and
if that doesn't work either, on the hardware adapter in Dom0.


Xen-users mailing list