This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] iscsi vs nfs for xen VMs

To: Rudi Ahlers <Rudi@xxxxxxxxxxx>
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs
From: Adi Kriegisch <adi@xxxxxxxxxxxxxxx>
Date: Thu, 27 Jan 2011 12:04:17 +0100
Cc: yue <ooolinux@xxxxxxx>, Adi Kriegisch <adi@xxxxxxxxxxxxxxx>, Christian Zoffoli <czoffoli@xxxxxxxxxxx>, "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 27 Jan 2011 03:05:34 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <sig.7008e5f88f.AANLkTikSkZWkskOCGECNaDuvpS-nnHJRkg9gO3R3OcFw@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTin+K5G10_03qLRT_yqCRELu339roLEHy1bVFoqR@xxxxxxxxxxxxxx> <994429490908070648s69eed40eua19efc43c3eb85a7@xxxxxxxxxxxxxx> <4D3FF9BC.40601@xxxxxxxxxxx> <sig.4007da378a.AANLkTiku=-RhcyUZVHmwnJ18+Az6Fk5CxdEjKdHQKJ54@xxxxxxxxxxxxxx> <4D4032C7.9000003@xxxxxxxxxxx> <1daff6e.e808.12dc3148556.Coremail.ooolinux@xxxxxxx> <4D40655B.20100@xxxxxxxxxxx> <20110127083846.GE29664@xxxxxxxx> <sig.7008e5f88f.AANLkTikSkZWkskOCGECNaDuvpS-nnHJRkg9gO3R3OcFw@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)

On Thu, Jan 27, 2011 at 12:09:54PM +0200, Rudi Ahlers wrote:
> On Thu, Jan 27, 2011 at 10:38 AM, Adi Kriegisch <adi@xxxxxxxxxxxxxxx> wrote:
> >> 2 is better IMHO ...more flexible, not so high overhead
> > 100% ACK. The best thing about this: There is no overhead in using CLVM:
> > The cluster locking is only required when modifying LVs. For the rest of
> > the time performance is (most probably) slightly better than when using
> > LUNs directly because LVM will take care of readahead dynamically.
> How would you do this?
> Export LUN1 from SAN1 & LUN1 from SAN2 to the same client PC, and then
> setup cLVM on top of the 2 LUN's?
Yes, exactly.
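For reference, the basic setup could be sketched roughly like this (device paths and the VG/LV names are just placeholders, and clvmd must already be running on every cluster node):

```shell
# Initialize both SAN LUNs as LVM physical volumes
pvcreate /dev/mapper/san1-lun1 /dev/mapper/san2-lun1

# Create a clustered volume group spanning both LUNs
# (-c y enables cluster-wide locking via clvmd)
vgcreate -c y vg_xen /dev/mapper/san1-lun1 /dev/mapper/san2-lun1

# Carve out one LV per guest; the cluster lock is only taken
# for metadata operations like this, not for normal I/O
lvcreate -L 20G -n vm01-disk vg_xen
```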

> What do you then do if you want redundancy, between 2 client PC's, i.e
> similar to RAID1 ?
Oh well, there are several ways to achieve this, I guess:
* Use dm mirroring on top of cLVM. (I tested this once personally but did
  not need it for production then -- I will probably look into it some time.)
  I think this is just the way to go, although it might be a little slower
  than running a RAID in the domU.
* Give two LVs to the virtual machines and let them do the mirroring with
  software RAID.
  I think this option offers the greatest performance while being robust. The
  only disadvantage I see is that, in case of failure, you have to recreate
  all the software RAIDs in your domUs. In some hosting environments this
  might be an issue.
* Use glusterfs/drbd/... Performance-wise, and in terms of reliability and
  stability, I do not see any issues here. But to use those you actually do
  not need a SAN as a backend. A SAN always adds a performance penalty due
  to the increased latency; local storage always has an advantage over a SAN
  in this respect. So in case you plan to use glusterfs, drbd or something
  like that, you should reconsider the SAN altogether. This might save a lot
  of money as well... ;-)
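The second option above (software RAID inside the domU) might look something like this; the xvdb/xvdc device names are hypothetical and stand for the two LVs, one exported from each SAN:

```shell
# Inside the domU: mirror the two LVs with md RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc

# Put a filesystem on the mirror and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /srv

# After replacing a failed backend, this per-guest step is what
# has to be redone, which is the administrative downside mentioned:
# mdadm /dev/md0 --add /dev/xvdc
```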

-- Adi

Xen-users mailing list