xen-users

Re: [Xen-users] disk speed

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] disk speed
From: "Sebastian Reitenbach" <sebastia@xxxxxxxxxxxxxxxxxxxx>
Date: Tue, 23 Oct 2007 07:25:54 +0200
Delivery-date: Mon, 22 Oct 2007 22:26:54 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: L00 bugdead prods.
Reply-to: Sebastian Reitenbach <sebastia@xxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

Hi all,

Dylan Martin <dmartin@xxxxxxxxxxxx> wrote: 
> Has all the testing that shows this slowness been done with large
> files?  I'd be interested to see if the same is true under more normal
> use.  E.G. copy 10 medium files 10 times each and 100 medium files 1
> time each.  Caching could make it faster on domU and seeking around
> could make it slower... Or who knows what other variables might kick
> in..
Yes, the testing has been done with large files like these. In my use case I
have to handle a lot of files of that size, so I do not really care how fast
I can handle a million 1 kB files.

> 
> > On Mon, Oct 22, 2007 at 02:12:39PM +0200, Sebastian Reitenbach wrote:
> > > 
> > > I measured the disk speed, created a 1 GB file with dd.
> > > Copying that file on the dom0 always took about 5 seconds; on the domU,
> > > it took about 15-20 seconds. I used "time cp large_file large_file2" to
> > > measure the speed. I only expected a small time difference, but not a
> > > factor of 3-4.
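
For reference, the test boils down to something like this; the dd block size
and count are just an example for a 1 GB file, adjust as needed:

  # create a 1 GB test file (run once in dom0 and once inside the domU)
  dd if=/dev/zero of=large_file bs=1M count=1024

  # time a plain copy of that file on the same filesystem
  time cp large_file large_file2

Note that cp goes through the page cache, so the absolute numbers include
caching effects; for a rough dom0 vs. domU comparison that is good enough.
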
> > We also did some testing like this: writing inside a domU sitting on LVM
> > on local discs took 3.5 times as long as dom0 writes to a filesystem
> > there. Some values are here: http://fluxcoil.net/doku.php/xen/docs - but I
> > can't explain some of the numbers myself and should redo the testing.
> > Also the values vary when testing different Xen packages from SUSE.
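
A cache-independent comparison would be something like the following, run
once in dom0 and once in the domU against the same kind of storage (just a
sketch, assuming the installed dd supports oflag=direct):

  # write 512 MB with direct I/O, bypassing the page cache
  dd if=/dev/zero of=testfile bs=1M count=512 oflag=direct
  rm testfile
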
> > 
> > > As far as I know, using physical partitions as the virtual disk should
> > > be the fastest solution for virtual disks, compared to files.
> > Files, when loopback-mounted, showed good values, but shouldn't be used
> > for the known reasons. Just that using tap:aio still makes trouble for us
> > on those sles10sp1 amd64 boxes.
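
Just so we are talking about the same variants: in the domU config the three
options look roughly like this (paths and device names are only examples):

  # physical partition or LVM volume handed to the guest
  disk = [ 'phy:/dev/vg0/domu1-disk,xvda,w' ]

  # file-backed image via the loopback driver (the discouraged variant)
  # disk = [ 'file:/var/lib/xen/images/domu1.img,xvda,w' ]

  # file-backed image via blktap (the tap:aio backend mentioned above)
  # disk = [ 'tap:aio:/var/lib/xen/images/domu1.img,xvda,w' ]
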
> > 
> > > Are there different ways to present a physical partition from dom0 to
> > > a domU that would influence the speed? Or is the speed factor I have
> > > seen above the one to expect?
> > When dom0 is involved I don't know of a different way. One could still
> > look into the performance of space made available via iSCSI to the domU,
> > or into handing a PCI device like a SAN or SCSI card over to the domU
> > (with this trading features like live migration for the better
> > performance).
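
If I understand the PCI hand-over right, it would mean hiding the HBA from
dom0 so that pciback grabs it, and then listing it in the domU config,
roughly like this (untested on my side, assuming pciback is built into the
dom0 kernel; the PCI ID is made up):

  # dom0 kernel command line: keep dom0 away from the card
  #   pciback.hide=(0000:03:00.0)

  # domU config: hand the hidden device to the guest
  pci = [ '0000:03:00.0' ]
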
Trying iSCSI sounds interesting. Also, I did not know until now that I can
hand the SAN device over to the virtual node.
I want to use Xen in an HA cluster. As long as everything is in good
condition, each virtual machine will run on a separate physical machine, but
if one of the physical nodes dies, two or more of the Xen instances have to
share a physical node. Can I hand one physical device over to more than one
virtual instance in that case? If not, then I have to use iSCSI; a rough
sketch of how I understand that setup follows below.
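
If I go the iSCSI route, my understanding is that the initiator could even
run directly inside the domU, so dom0's block layer is not in the path.
Roughly, as a sketch only (assuming open-iscsi; the target name and portal
address are placeholders):

  # inside the domU: discover and log in to the target
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2007-10.net.example:storage.domu1 -p 192.168.1.10 --login
  # the LUN then shows up as a local SCSI disk, e.g. /dev/sdb

As I understand it, that would also cover the failover case: each virtual
machine just logs in to its own target over the network, regardless of which
physical node it is running on, so no physical device has to be handed over
or shared.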

kind regards
Sebastian


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
