xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs
From: Marcin Kuk <marcin.kuk@xxxxxxxxx>
Date: Wed, 26 Jan 2011 23:59:40 +0100
In-reply-to: <AANLkTi=Nws9ULuk6MjoEJJJLh=zTf9yDJo5HnheX6sie@xxxxxxxxxxxxxx>
References: <994429490908070648s69eed40eua19efc43c3eb85a7@xxxxxxxxxxxxxx> <7bc80d500908070700s7050c766g4d1c6af2cd71ea89@xxxxxxxxxxxxxx> <994429490908070711q4c64f92au9baa6577524e5c5d@xxxxxxxxxxxxxx> <3463f63d0908070726y630d320u3e3f1f1cae9b34a4@xxxxxxxxxxxxxx> <sig.0007322cfd.AANLkTi=2S3bKf6jv9BbqYMbkWFbjJTrpYh8GK2EGXGns@xxxxxxxxxxxxxx> <4D3FD940.1090000@xxxxxxxxxxxx> <AANLkTi=J-s+oc44wY-N_wQ+wQr=VhnG-EK48QYpx7y-Y@xxxxxxxxxxxxxx> <5DB0519124BB3D4DBEEB14426D4AC7EA18BFE6FF56@xxxxxxxxxxxxxxxxxxxxx> <sig.40076de9f5.AANLkTim+T5yAfovgX2JsH9BMp3r6agxCqRxVBoG7acXT@xxxxxxxxxxxxxx> <AANLkTi=Nws9ULuk6MjoEJJJLh=zTf9yDJo5HnheX6sie@xxxxxxxxxxxxxx>
2011/1/26 Freddie Cash <fjwcash@xxxxxxxxx>:
> On Wed, Jan 26, 2011 at 12:55 AM, Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:
>> Well, that's the problem. We have (had, soon to be returned) a
>> so-called "enterprise SAN" with dual everything, but it failed
>> miserably during December and we ended up migrating everyone to a few
>> older NAS devices just to get the clients' websites up again (VPS
>> hosting). So, just because a SAN has dual PSUs, dual controllers,
>> dual NICs, dual heads, etc. doesn't mean it's actually redundant.
>>
>> I'm thinking of setting up 2 independent SANs, or for that matter
>> even NAS clusters, and then doing something like RAID1 (mirroring) on
>> the client nodes across the iSCSI mounts. But I don't know if it's
>> feasible or worth the effort. Has anyone done something like this?
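On a Linux client that can be sketched with open-iscsi plus md RAID1.
Everything below (IQNs, portals, device names) is invented for
illustration:

  # log in to one LUN on each of the two independent SANs
  iscsiadm -m node -T iqn.2011-01.com.example:san1.lun0 -p 10.0.0.1 --login
  iscsiadm -m node -T iqn.2011-01.com.example:san2.lun0 -p 10.0.1.1 --login

  # mirror the two LUNs; the internal write-intent bitmap keeps
  # resyncs short after one SAN drops off the network and returns
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
        /dev/sdb /dev/sdc

The cost is that every write crosses the network twice per client
node, so budget NIC bandwidth accordingly.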
>
> Our plan is to use FreeBSD + HAST + ZFS + CARP to create a
> redundant/fail-over storage setup, using NFS.  VM hosts will boot off
> the network and mount / via NFS, start up libvirtd, pick up their VM
> configs, and start the VMs.  The guest OSes will also boot off the
> network using NFS, with separate ZFS filesystems for each guest.
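The HAST piece of such a plan looks roughly like the FreeBSD handbook
example; hostnames, disks and addresses below are placeholders:

  # /etc/hast.conf, identical on both storage nodes
  resource tank {
          on storage1 {
                  local /dev/da0
                  remote 10.0.1.2
          }
          on storage2 {
                  local /dev/da0
                  remote 10.0.1.1
          }
  }

The current primary then sees /dev/hast/tank and builds the pool on it:

  zpool create tank /dev/hast/tank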
>
> If the master storage node fails for any reason (network, power,
> storage, etc), CARP/HAST will fail-over to the slave node, and
> everything carries on as before.  NFS clients will notice the link is
> down, try again, try again, try again, notice the slave node is up
> (same IP/hostname), and carry on.
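One common pattern for that glue (per the FreeBSD handbook's HAST
chapter) is a devd(8) hook that fires when the CARP interface changes
state and a small script that promotes HAST; the script name and
resource below are placeholders:

  # /etc/devd.conf fragment on both storage nodes
  notify 30 {
          match "system"    "IFNET";
          match "subsystem" "carp0";
          match "type"      "LINK_UP";
          action "/usr/local/sbin/hast-switch master";
  };

  # the core of the script when becoming master:
  hastctl role primary tank
  zpool import -f tank
  /etc/rc.d/nfsd restart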
>
> The beauty of using NFS is that backups can be done from the storage
> box without touching the VMs (snapshot, backup from snapshot).  And
> provisioning a new server is as simple as cloning a ZFS filesystem
> (takes a few seconds).  There's also no need to worry about sizing the
> storage (NFS can grow/shrink without the client caring); and even less
> to worry about thanks to the pooled storage setup of ZFS (if there
> are blocks available in the pool, any filesystem can use them).  Add in
> dedupe and compression across the entire pool ... and storage needs go
> way down.
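Concretely, with invented dataset names, the backup and provisioning
steps come down to:

  # back up a guest from the storage box without touching the VM
  zfs snapshot tank/vm/guest01@nightly
  zfs send tank/vm/guest01@nightly | ssh backuphost zfs receive backup/guest01

  # provision a new guest by cloning a golden template (seconds)
  zfs clone tank/vm/template@gold tank/vm/guest02

  # compression and dedupe are just inheritable dataset properties
  zfs set compression=on tank/vm
  zfs set dedup=on tank/vm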
>
> It's also a lot easier to configure live-migration using NFS than iSCSI.
>
> To increase performance, just add a couple of fast SSDs (one for write
> logging, one for read caching) and let ZFS handle it.
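In ZFS terms that is one SSD as a separate intent log and one as an
L2ARC cache device (device names invented):

  zpool add tank log ada1     # absorbs the synchronous writes NFS generates
  zpool add tank cache ada2   # read cache in front of the spinning disks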
>
> Internally, the storage boxes have multiple CPUs, multiple cores,
> multiple PSUs, multiple NICs bonded together, multiple drive
> controllers etc.  And then there's two of them (one physically across
> town connected via fibre).
>
> VM hosts are basically throw-away appliances with gobs of CPU, RAM,
> and NICs, and no local storage to worry about.  One fails, just swap
> it with another and add it to the VM pool.
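For Linux VM hosts the diskless part is just a PXE entry with an NFS
root; paths and addresses here are made up:

  # pxelinux.cfg/default entry for a diskless VM host
  LABEL vmhost
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nfs ip=dhcp nfsroot=10.0.0.10:/tank/hosts/vmhost1,tcp ro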
>
> Can't get much more redundant than that.
>
> If there's anything that we've missed, let me know.  :)

Yes. With AUTH_SYS, NFS only passes a user's first 16 supplementary
groups, so anyone who belongs to more than 16 groups is headed for
permission trouble.
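A quick way to spot the users who will hit that limit, in plain sh
(parsing id(1) output; adjust for your user database):

  for u in $(cut -d: -f1 /etc/passwd); do
          n=$(id -G "$u" | wc -w)
          [ "$n" -gt 16 ] && echo "$u is in $n groups"
  done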

Regards,
Marcin Kuk

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users