xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs
From: Freddie Cash <fjwcash@xxxxxxxxx>
Date: Wed, 26 Jan 2011 09:11:49 -0800
Delivery-date: Wed, 26 Jan 2011 09:12:51 -0800
In-reply-to: <sig.40076de9f5.AANLkTim+T5yAfovgX2JsH9BMp3r6agxCqRxVBoG7acXT@xxxxxxxxxxxxxx>
References: <994429490908070648s69eed40eua19efc43c3eb85a7@xxxxxxxxxxxxxx> <7bc80d500908070700s7050c766g4d1c6af2cd71ea89@xxxxxxxxxxxxxx> <994429490908070711q4c64f92au9baa6577524e5c5d@xxxxxxxxxxxxxx> <3463f63d0908070726y630d320u3e3f1f1cae9b34a4@xxxxxxxxxxxxxx> <sig.0007322cfd.AANLkTi=2S3bKf6jv9BbqYMbkWFbjJTrpYh8GK2EGXGns@xxxxxxxxxxxxxx> <4D3FD940.1090000@xxxxxxxxxxxx> <AANLkTi=J-s+oc44wY-N_wQ+wQr=VhnG-EK48QYpx7y-Y@xxxxxxxxxxxxxx> <5DB0519124BB3D4DBEEB14426D4AC7EA18BFE6FF56@xxxxxxxxxxxxxxxxxxxxx> <sig.40076de9f5.AANLkTim+T5yAfovgX2JsH9BMp3r6agxCqRxVBoG7acXT@xxxxxxxxxxxxxx>
On Wed, Jan 26, 2011 at 12:55 AM, Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:
> Well, that's the problem. We have (had, soon to be returned) a
> so-called "enterprise SAN" with dual everything, but it failed
> miserably during December and we ended up migrating everyone to a few
> older NAS devices just to get the clients' websites up again (VPS
> hosting). So, just because a SAN has dual PSUs, dual controllers,
> dual NICs, dual heads, etc., doesn't mean it's actually redundant.
>
> I'm thinking of setting up 2 independent SANs, or for that matter
> even NAS clusters, and then doing something like RAID1 (mirror) on
> the client nodes with the iSCSI mounts. But I don't know if it's
> feasible or worth the effort. Has anyone done something like this?

Our plan is to use FreeBSD + HAST + ZFS + CARP to create a
redundant/fail-over storage setup, using NFS.  VM hosts will boot off
the network and mount / via NFS, start up libvirtd, pick up their VM
configs, and start the VMs.  The guest OSes will also boot off the
network using NFS, with separate ZFS filesystems for each guest.
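
Not our exact config, but the HAST side looks roughly like this
(hostnames, devices, and addresses below are placeholders):

  # /etc/hast.conf -- one resource mirrored between the two storage nodes
  resource disk0 {
          on storage1 {
                  local /dev/da0
                  remote 10.0.0.2
          }
          on storage2 {
                  local /dev/da0
                  remote 10.0.0.1
          }
  }

  # hastd_enable="YES" in /etc/rc.conf on both nodes, then:
  #   hastctl create disk0          (both nodes, once)
  #   hastctl role primary disk0    (master node only)
  #   zpool create tank /dev/hast/disk0
  # /dev/hast/disk0 only exists on the current primary, so the pool
  # can never be imported on both nodes at once.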

If the master storage node fails for any reason (network, power,
storage, etc), CARP/HAST will fail-over to the slave node, and
everything carries on as before.  NFS clients will notice the link is
down, try again, try again, try again, notice the slave node is up
(same IP/hostname), and carry on.
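
The CARP piece is just a shared service address plus a devd(8) hook,
along the lines of the HAST example in the FreeBSD Handbook (vhid,
password, and addresses below are placeholders):

  # /etc/rc.conf (FreeBSD 8.x-style carp(4) pseudo-interface)
  cloned_interfaces="carp0"
  ifconfig_carp0="vhid 1 pass s3cret 10.0.0.100/24"
  # (set a higher advskew on the slave so it loses elections)

  # /etc/devd.conf -- promote this node when it becomes CARP master
  notify 30 {
          match "system" "IFNET";
          match "subsystem" "carp0";
          match "type" "LINK_UP";
          action "/usr/local/sbin/carp-hast-switch master";
  };

  # carp-hast-switch (sketch): hastctl role primary disk0, then
  # zpool import -f tank, then restart mountd/nfsd.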

The beauty of using NFS is that backups can be done from the storage
box without touching the VMs (snapshot, backup from snapshot).  And
provisioning a new server is as simple as cloning a ZFS filesystem
(takes a few seconds).  There's also no need to worry about sizing the
storage (NFS can grow/shrink without the client caring); and even less
to worry about due to the pooled storage setup of ZFS (if there
are blocks available in the pool, any filesystem can use them).  Add in
dedupe and compression across the entire pool ... and storage needs go
way down.
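
Concretely, the backup and provisioning steps are one-liners (dataset
and host names below are just examples):

  # Snapshot a running guest and back it up without touching the VM
  zfs snapshot tank/vm/guest1@nightly
  zfs send tank/vm/guest1@nightly | ssh backuphost zfs receive backup/guest1

  # Provision a new guest by cloning a template snapshot (seconds)
  zfs clone tank/vm/template@gold tank/vm/newguest

  # Pool-wide compression and dedupe (dedupe needs pool version 21+)
  zfs set compression=on tank
  zfs set dedup=on tank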

It's also a lot easier to configure live-migration using NFS than iSCSI.
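
Every host sees the same NFS-backed disks, so (with the xend
relocation server enabled in xend-config.sxp) migrating a guest is
just:

  xm migrate --live guest1 vmhost2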

To increase performance, just add a couple of fast SSDs (one for write
logging, one for read caching) and let ZFS handle it.
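
In ZFS terms that's a dedicated intent log (ZIL) for the sync-heavy
NFS writes plus an L2ARC read cache; adding them is (SSD device names
below are examples):

  zpool add tank log ada1      # SSD as separate ZIL ("slog")
  zpool add tank cache ada2    # SSD as L2ARC read cache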

Internally, the storage boxes have multiple CPUs, multiple cores,
multiple PSUs, multiple NICs bonded together, multiple drive
controllers, etc.  And then there are two of them (one physically across
town connected via fibre).
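
The NIC bonding is plain lagg(4), e.g. in /etc/rc.conf (interface
names and address below are examples):

  ifconfig_em0="up"
  ifconfig_em1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.0.0.10/24"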

VM hosts are basically throw-away appliances with gobs of CPU, RAM,
and NICs, and no local storage to worry about.  If one fails, just swap
in another and add it to the VM pool.

Can't get much more redundant than that.

If there's anything that we've missed, let me know.  :)

-- 
Freddie Cash
fjwcash@xxxxxxxxx
