

Re: [Xen-users] Performance of network block devices (iSCSI)

On Mon, 23 Feb 2009, Simon Hobson wrote:

On 14/11/08, Javier Guerra wrote:
On Thu, Nov 13, 2008 at 5:58 PM, Simon Hobson <linux@xxxxxxxxxxxxxxxx> wrote:
I've deliberately not put the iSCSI initiator in Dom0 as I want to run the absolute minimum in Dom0. Also, putting the iSCSI initiator in Dom0 makes it harder to move a VM to another host - having it in the DomU means that the VM can be moved without any config changes.

you should really try that.  the different nature of net- and
block-devices means that there are far fewer context switches per MB
transferred, and the point at which I/O is acknowledged makes a huge
difference in latency sensitivity.

OK, I've *finally* managed to scrounge another box to do some testing on. I've set up open-iscsi in Dom0 and performance seems to be a lot better - I'm getting about 20MB/s and the backup server isn't sitting at 99% wio :-)
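For concreteness, a minimal sketch of the Dom0-initiator setup described above, assuming open-iscsi; the portal address, IQN and device names here are made-up placeholders, not taken from the thread:

    # discover and log in to the target from Dom0
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2008-11.com.example:vmstore -p 192.168.1.10 --login

    # the LUN then appears as a local block device (say /dev/sdb) and can
    # be handed straight to the guest in its config file:
    disk = [ 'phy:/dev/sdb,xvda,w' ]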

1. I wonder what test you ran to check performance?
I have the same concept, but with the iSCSI initiator directly in the DomU: bonded interface --> dedicated VLAN on top of the bond --> transparent bridge in Dom0 --> the DomU's eth1.
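A sketch of that kind of path on a Debian-style Dom0, purely illustrative: the interface names, VLAN ID and bridge name are assumptions, and the exact bonding option names vary between ifenslave versions:

    # /etc/network/interfaces (fragment)
    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad

    auto bond0.100
    iface bond0.100 inet manual
        vlan-raw-device bond0

    auto xenbr1
    iface xenbr1 inet manual
        bridge_ports bond0.100

The DomU's second interface then just attaches to that bridge, e.g. vif = [ 'bridge=xenbr0', 'bridge=xenbr1' ].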

I would expect the iSCSI-in-DomU model to have more advantages (performance, administration).
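For reference, a throughput figure like the 20MB/s quoted above is often produced with a crude dd run; the device and mount point below are placeholders:

    # sequential read of the iSCSI-backed device
    dd if=/dev/xvdb of=/dev/null bs=1M count=1000
    # sequential write through the filesystem, flushed at the end
    dd if=/dev/zero of=/mnt/test.img bs=1M count=1000 conv=fsync

A real comparison would also want to look at IOPS and latency, not just streaming MB/s.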

I assume this is because the device name can change across restarts. Since I'm not mounting the volume in Dom0, just passing it to the guest in a "disk = [ 'phy:v ..." line, are there any suggestions on the best way to deal with this?
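One common approach, sketched here as a suggestion rather than tested advice: udev creates stable symlinks for iSCSI LUNs under /dev/disk/by-path/ (and /dev/disk/by-id/), so the domU config can reference those instead of a /dev/sdX name. The portal and IQN below are placeholders:

    disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2008-11.com.example:vmstore-lun-0,xvda,w' ]

Because the by-path name encodes the portal and IQN, it survives the /dev/sd* devices being enumerated in a different order across reboots.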

2. How can you pass the storage through without mounting it, or having access to it, in Dom0?

Longina Przybyszewska, system programmer

IMADA, Department of Mathematics and Computer Science

University of Southern Denmark, Odense
Campusvej 55,DK-5230 Odense M, Denmark

tel: +45 6550 2359 - http://www.imada.sdu.dk
email: longina@xxxxxxxxxxxx

Xen-users mailing list