This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-users] iscsi vs nfs for xen VMs

To: <rudi@xxxxxxxxxxx>, "Matej Zary" <matej.zary@xxxxxxxxx>
Subject: RE: [Xen-users] iscsi vs nfs for xen VMs
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Wed, 26 Jan 2011 22:21:56 +1100
Cc: jg@xxxxxxxxxxxx, Dustin Black <vantage@xxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 26 Jan 2011 03:23:20 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <sig.40076de9f5.AANLkTim+T5yAfovgX2JsH9BMp3r6agxCqRxVBoG7acXT@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <994429490908070648s69eed40eua19efc43c3eb85a7@xxxxxxxxxxxxxx><7bc80d500908070700s7050c766g4d1c6af2cd71ea89@xxxxxxxxxxxxxx><994429490908070711q4c64f92au9baa6577524e5c5d@xxxxxxxxxxxxxx><3463f63d0908070726y630d320u3e3f1f1cae9b34a4@xxxxxxxxxxxxxx><sig.0007322cfd.AANLkTi=2S3bKf6jv9BbqYMbkWFbjJTrpYh8GK2EGXGns@xxxxxxxxxxxxxx><4D3FD940.1090000@xxxxxxxxxxxx><AANLkTi=J-s+oc44wY-N_wQ+wQr=VhnG-EK48QYpx7y-Y@xxxxxxxxxxxxxx><5DB0519124BB3D4DBEEB14426D4AC7EA18BFE6FF56@xxxxxxxxxxxxxxxxxxxxx> <sig.40076de9f5.AANLkTim+T5yAfovgX2JsH9BMp3r6agxCqRxVBoG7acXT@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acu9NzWmeVFs3XN2TxSBdIHYy0IPoQAE8pTQ
Thread-topic: [Xen-users] iscsi vs nfs for xen VMs
> On Wed, Jan 26, 2011 at 10:44 AM, Matej Zary <matej.zary@xxxxxxxxx> wrote:
> > Depends on the quality of the NAS/SAN device. Some of them are more reliable
> > and robust than the rest of the infrastructure (dual controllers, RAID6,
> > multipathing, etc.), though obviously they cost an arm and a leg. So they
> > SHOULD not fail totally (firmware issues are another matter). Even with
> > enterprise-grade storage, backups (tape, another storage device, a remote
> > site) are always a must.
> > Yeah, if storage fails, there will be downtime. You can still have local
> > disks on the Xen hosts, so you can, for example, restore the most important
> > Xen guests onto the local disks from backups and live without live migration
> > until the NAS/SAN issues are resolved.
> >
> > Matej
> > ________________________________________
> Well, that's the problem. We have (had, soon to be returned) a so-called
> "enterprise SAN" with dual everything, but it failed miserably in December
> and we ended up migrating everyone to a few older NAS devices just to get
> the clients' websites up again (VPS hosting). So, just because a SAN has
> dual PSUs, dual controllers, dual NICs, dual heads, etc., doesn't mean it
> won't fail.
> I'm thinking of setting up two independent SANs, or for that matter even
> NAS clusters, and then doing something like RAID1 (mirroring) on the client
> nodes across the iSCSI mounts. But I don't know if that's feasible or worth
> the effort. Has anyone done something like this?

There are plenty of recipes for DRBD + pacemaker/heartbeat + iSCSI. With 
appropriate redundancy in place and plenty of testing you should be able to 
build something that's pretty much bulletproof.
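For reference, the core of such a recipe is a DRBD resource replicating a backing device between two storage nodes, which pacemaker/heartbeat then exports over iSCSI behind a floating IP. A minimal resource definition might look like the following sketch (hostnames, addresses, and device paths are illustrative, not from this thread):

```
resource vmstore {
  protocol C;               # synchronous replication: write completes
                            # only after both nodes have it
  device    /dev/drbd0;     # replicated device exported via iSCSI
  disk      /dev/sdb1;      # local backing partition (example)
  meta-disk internal;
  on storage1 {
    address 10.0.0.1:7789;  # replication link, node 1
  }
  on storage2 {
    address 10.0.0.2:7789;  # replication link, node 2
  }
}
```

The cluster manager's job is then to ensure exactly one node is DRBD primary and running the iSCSI target at any time, and to fail both over together with the virtual IP.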


Xen-users mailing list