I've tried this a few ways and found there is really no "good" way that
fits every need. This is, for lack of a better phrase, a pain in the
ass.
You can do it similarly to how it was described if you're using a SAN:
you form and break the md RAID on the SAN, triggered via [insert
favorite interconnect method here]. That's tricky, and if not timed
right it creates issues. I've gotten this working with iSCSI in the
past, but I've also run into major meltdowns.
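Roughly, the hand-off looks something like the sketch below (just a
sketch -- the target IQN, portal address and device names are made up,
and getting the ordering/timing right between the two nodes is exactly
where it bites):

    # On the node giving up the array (hypothetical names throughout):
    mdadm --stop /dev/md0
    iscsiadm -m node -T iqn.2006-08.com.example:store -p 192.168.1.10 --logout

    # On the node taking over:
    iscsiadm -m node -T iqn.2006-08.com.example:store -p 192.168.1.10 --login
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1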
You can also use something like GFS, but if you have high-performance
I/O needs, well... it can hinder things a bit.
You also need to take into consideration the power going out at the
worst possible time, and what would happen if it did [meltdown,
usually].
Your best bet (for now) is to use something like OpenSSI with Xen 2.0.7
(paravirtualized), or Xen 3.0.2 with OpenSSI as an HVM guest, if you
want a true HA setup that you can roll out quickly and avoid (many)
3 AM phone calls, unless this is also a learning and tinkering
experience for you.
OpenSSI is very easy to get going, and does a very good job. Xen lends
even more management and much faster boot/recovery to the setup. You
can then just migrate the entire virtualized cluster, or a cluster node
set up using Etherboot, over to another machine and avoid the hassles
of disk sync altogether (for all but the director nodes). If done as
HVM with 3.0.2, you can do some really neat dynamic scaling as well.
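The migration itself is the easy part; as a rough sketch (guest and
destination names here are made up):

    # Push a running OpenSSI node to another Xen host, keeping it live:
    xm migrate --live ssi-node1 otherbox.example.com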
Problems with this: bridging gets to be a nightmare, and dom0 needs to
be a little 'fatter' than usual.
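For reference, the stock Xen 3.0 bridging hooks live in
/etc/xen/xend-config.sxp; the pain starts once you need more than the
single default bridge (the entries below are just the defaults, adjust
to taste):

    # /etc/xen/xend-config.sxp -- default single-bridge setup
    (network-script network-bridge)
    (vif-script vif-bridge)

    # In dom0, sanity-check which vifs landed on which bridge:
    brctl show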
To my knowledge, nobody has ported the Xen/SSI paravirtualized kernel
over to 3.0.2, though I could be wrong. I'm also pretty sure the
Whitebox flavor of SSI was never ported to Xen, but again, I could be
wrong.
I know it sounds a little convoluted and complex, but it does solve a
bunch of the problems in this thread.
HTH
-Tim
On Wed, 2006-08-30 at 14:53 +0200, Michael Paesold wrote:
> Chris de Vidal wrote:
> > Reflecting upon this some more, I think perhaps it could be done in
> > Dom0, making setting up software RAID inside the DomUs unnecessary.
> > But that requires a shared hardware-like setup. DRBD 0.7 doesn't
> > allow primary/primary (equivalent to shared hardware) but things
> > like AoE do (NBD should, too). When DRBD 0.8 becomes more stable and
> > primary/primary is possible, perhaps that will be an option. I like
> > the DRBD project and would be eager to try it.
> >
> > As Eric said, using AoE means any network interruption generates a
> > resync. But then that's a concern with /ANY/ AoE or iSCSI + software
> > RAID setup. So methinks AoE isn't too bad. Add in the fr1 patch and
> > it might be usable.
> >
> > If one were to patch the kernel with the "Fast RAID" patch and use
> > AoE to create /dev/md0 on Dom0 using both a local disk and a remote,
> > this might work! In this case, LVM would no longer be a necessity.
>
> I can't see how this should work with RAID in Dom0. At that point you
> are back at the same problem as with DRBD. During live migration, you
> would first have to deactivate the RAID on the first node and only
> then activate it on the second node.
>
> But for live migration to work, both need to be active for a short
> period of time.
>
> I wonder if anyone will ever step up and create a patch to xend that
> makes it possible to do live migration without having a storage setup
> that can be active on two nodes at once, i.e. suspend -> disconnect
> storage -> connect storage on second node -> resume on second node. I
> am really sorry I have no time to do it myself.
>
> > Does anyone know if Xen's live migration requires something like
> > GFS on shared hardware? Or does it take care of the problem as long
> > as it's the only thing accessing the drive?
>
> Only one node may access the device at a time. But for your
> RAID-in-Dom0 idea, that is not the case.
>
> Best Regards,
> Michael Paesold
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users