On Wed, Jan 26, 2011 at 12:55 AM, Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:
> Well, that's the problem. We have (had, soon to be returned) a
> so-called "enterprise SAN" with dual everything, but it failed miserably
> during December and we ended up migrating everyone to a few older NAS
> devices just to get the client's websites up again (VPS hosting). So,
> just because a SAN has dual PSUs, dual controllers, dual NICs, dual
> heads, etc. doesn't mean it's truly redundant.
>
> I'm thinking of setting up two independent SANs or, for that matter,
> even NAS clusters, and then doing something like RAID1 (mirroring) on
> the client nodes across the iSCSI mounts. But I don't know if it's
> feasible or worth the effort. Has anyone done something like this?
Our plan is to use FreeBSD + HAST + ZFS + CARP to create a
redundant/fail-over storage setup, using NFS. VM hosts will boot off
the network and mount / via NFS, start up libvirtd, pick up their VM
configs, and start the VMs. The guest OSes will also boot off the
network using NFS, with separate ZFS filesystems for each guest.
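For reference, a minimal /etc/hast.conf sketch (resource name, hostnames,
disk device, and addresses below are just placeholders) would look
something like:

    resource vmstore {
            on storage1 {
                    local /dev/mfid0     # local disk/array to replicate
                    remote 10.0.0.2      # peer storage node
            }
            on storage2 {
                    local /dev/mfid0
                    remote 10.0.0.1
            }
    }

After "hastctl create vmstore" on both nodes and making one of them
primary ("hastctl role primary vmstore"), the ZFS pool gets built on top
of /dev/hast/vmstore on whichever node is currently the master.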
If the master storage node fails for any reason (network, power,
storage, etc.), CARP/HAST will fail over to the slave node, and
everything carries on as before. NFS clients will notice the link is
down, try again, try again, try again, notice the slave node is up
(same IP/hostname), and carry on.
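The CARP side is basically just a shared virtual IP in /etc/rc.conf on
both storage boxes, plus hard-mounted NFS on the clients so they block
and retry instead of erroring out. A rough sketch (vhid, password,
addresses, and interface names are made up):

    # /etc/rc.conf on each storage node (master uses the lower advskew)
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 0 pass secret 10.0.0.10/24"

    # NFS clients mount via the shared CARP address
    mount -t nfs -o hard,bg,intr,tcp 10.0.0.10:/tank/vm /vm

(One detail that still needs scripting is promoting the HAST resource to
primary and importing the pool on the slave when CARP fails over; a devd
or CARP state-change hook handles that.)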
The beauty of using NFS is that backups can be done from the storage
box without touching the VMs (snapshot, backup from snapshot). And
provisioning a new server is as simple as cloning a ZFS filesystem
(takes a few seconds). There's also no need to worry about sizing the
storage up front (an NFS export can grow or shrink without the client
caring), and even less to worry about thanks to ZFS's pooled storage
(if there are free blocks in the pool, any filesystem can use them). Add in
dedupe and compression across the entire pool ... and storage needs go
way down.
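To illustrate (pool/filesystem names are made up), those bits all boil
down to a handful of ZFS commands on the storage box:

    # back up a guest from a snapshot without touching the running VM
    zfs snapshot tank/vm/guest01@nightly
    zfs send tank/vm/guest01@nightly | ssh backuphost zfs receive backup/guest01

    # provision a new guest by cloning a template filesystem
    zfs snapshot tank/vm/template@gold
    zfs clone tank/vm/template@gold tank/vm/guest02

    # dedupe and compression, inherited by every filesystem in the pool
    zfs set dedup=on tank
    zfs set compression=on tank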
It's also a lot easier to configure live-migration using NFS than iSCSI.
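With all the guest storage on shared NFS, the migration itself is more
or less a one-liner through libvirt; something like this (guest and host
names are placeholders, and the exact connection URI depends on which
libvirt driver you're using):

    virsh migrate --live guest01 xen+ssh://vmhost2/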
To increase performance, just add a couple of fast SSDs (one for write
logging, one for read caching) and let ZFS handle it.
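In ZFS terms that's a separate intent-log device for the writes and a
cache device (L2ARC) for the reads; something along these lines, with
placeholder device names:

    zpool add tank log ada4      # SSD as dedicated ZIL/log device
    zpool add tank cache ada5    # SSD as L2ARC read cache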
Internally, the storage boxes have multiple CPUs, multiple cores,
multiple PSUs, multiple NICs bonded together, multiple drive
controllers, etc. And then there are two of them (one physically across
town connected via fibre).
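The NIC bonding on FreeBSD is a lagg interface; a rough /etc/rc.conf
sketch with made-up interface names and address:

    cloned_interfaces="lagg0"
    ifconfig_em0="up"
    ifconfig_em1="up"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.0.0.20/24"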
VM hosts are basically throw-away appliances with gobs of CPU, RAM,
and NICs, and no local storage to worry about. If one fails, just swap
in another and add it to the VM pool.
Can't get much more redundant than that.
If there's anything that we've missed, let me know. :)
--
Freddie Cash
fjwcash@xxxxxxxxx
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users