To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] What is the "fastest" way to access disk storage from DomU?
From: Maximilian Wilhelm <max@xxxxxxxxxxx>
Date: Fri, 25 Jan 2008 09:40:33 +0100
Delivery-date: Fri, 25 Jan 2008 00:41:10 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200801242318.31698.mark.williamson@xxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: xen-users@xxxxxxxxxxxxxxxxxxx
References: <20080124004402.GG17160@xxxxxxxxxxxxxxxxxxx> <200801242318.31698.mark.williamson@xxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Thursday, 24 January, Mark Williamson typed the following:

> > I've read some threads about storage speed here but didn't really get
> > a clue on what's the "best" or fastest way to set it up.

> > At the moment all the virtual disks are configured as

> >   disk = [ "phy:/dev/vg_xen1/<LV>,xvda1,w", ... ]

> > The volumes residing on the SAN storage are configured via EVMS, and
> > I get 200MB/s write speed from Dom0 (measured with dd if=/dev/zero
> > of=/mnt/file) and "only" around 150MB/s when doing the same from DomU.

> When you do the tests of writing speed from dom0, are you writing to the 
> domU's filesystem LV?  Otherwise you're not testing like-for-like since 
> you're using a different part of the storage.  I'm not sure if this makes a 
> difference in your case, but different parts of a physical disk can have 
> surprisingly big differences in bandwidth (outer edge of the disk moves 
> faster, so better bandwidth).

Sure, I used the same EVMS volume.
Anything else would have been pointless :)
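
For reference, a like-for-like version of that test would look roughly like
this (paths and sizes are just placeholders; conv=fdatasync keeps the page
cache from inflating the numbers):

  # In Dom0, on a filesystem mounted from the EVMS volume
  dd if=/dev/zero of=/mnt/file bs=1M count=2048 conv=fdatasync

  # Inside the DomU, on the filesystem on xvda1 (same underlying volume)
  dd if=/dev/zero of=/mnt/file bs=1M count=2048 conv=fdatasync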

> I'm not too familiar with EVMS, maybe there's some bottleneck there I'm not 
> familiar with and therefore missing.  Does EVMS do cluster volume management? 
>  
> I guess it does, as you're using it on a SAN ;-)

Paired with heartbeat (necessary for EVMS), there is a Cluster Volume
Manager plugin/module (maybe the buzzword is slightly different), which
makes it possible to share the volumes among hosts.

> > Is this expected speed loss or is there any other way to give the DomU
> > access to the devices?

> You can only give domUs direct access to whole PCI devices at the moment, so 
> unless you gave each a separate SAN adaptor, you can't really give them any 
> more direct access.

> There's some work on SCSI passthrough being done by various people, so maybe 
> at some point that'll let you pass individual LUNs through from the SAN.

Hmm.
That would most probably not be really helpful in my case, as I'm not
using the /dev/sd* devices I get from the SAN over about 4 paths
(dual-port HBA connected to a SAN with two SPs) directly, but the
/dev/mapper/<foo> device handled via multipathd.
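
So the stack here looks roughly like this (device names are placeholders):

  /dev/sd{a,b,c,d}         four paths to the same LUN
   -> /dev/mapper/<foo>    assembled by multipathd
   -> /dev/vg_xen1/<LV>    EVMS volume on top of it
   -> phy:...,xvda1,w      exported to the DomU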

OK, I could push all the corresponding SCSI devices through to the DomU
and run multipath inside (if possible), but as far as I know it's not a
simple task to figure out which sd* devices belong to which LUN.
(OK, multipath can do it, so there has to be a way...)
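
Something like this should show the mapping in Dom0 (assuming
multipath-tools and scsi_id are available; the scsi_id options may differ
between udev versions):

  # list each multipath map together with the sd* paths behind it
  multipath -ll

  # or compare the SCSI WWIDs directly; paths with the same ID are the same LUN
  for dev in /sys/block/sd*; do
      echo "$(basename $dev): $(/sbin/scsi_id -g -u -s /block/$(basename $dev))"
  done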

> For really high performance SAN access from domUs, the solution will 
> eventually (one fine day, in the future) be to use SAN adaptors with 
> virtualization support that can natively give shared direct access to 
> multiple domUs.  We're not quite there yet though!

So let's hope :)

Thanks
Ciao
Max
-- 
        Follow the white penguin.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users