Hi Thomas,

Thanks for your response.
On Thu, Nov 04, 2010 at 03:46:44PM +0100, Thomas Halinka wrote:
> Hi Mark,
>
> On Thursday, 04.11.2010 at 12:58 +0000, Mark Adams wrote:
> > Hi All,
> >
> > I'm thinking about the best way to set this up, and would greatly
> > appreciate your learned opinions on it. I'm thinking of the following:
> >
> > - 2 Storage servers, running LVM, heartbeat, drbd (primary/standy) and
> > iscsi.
> > This will protect against 1 storage server failing.
>
> ACK
>
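For reference, the drbd resource I have in mind on the storage boxes is
roughly along these lines (hostnames, IPs and the LV name are just
placeholders, and I've put drbd on top of an LV here, though it could
go the other way round):

  resource r0 {
      protocol C;
      on store1 {
          device    /dev/drbd0;
          disk      /dev/vg0/iscsi;
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on store2 {
          device    /dev/drbd0;
          disk      /dev/vg0/iscsi;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }

with the iSCSI target exporting /dev/drbd0 from whichever node is
currently primary.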
> > - 2 Xen hosts, running heartbeat to ensure the domUs are available.
> >   If not, migrate all domUs onto the other Xen host. This will
> >   protect against 1 Xen host failure.
>
> ACK
>
> >
> > Any opinions on this setup, or links to resources discussing it,
> > would be much appreciated.
>
> If you're interested I could provide a link to my wiki, which
> describes such a setup.
That would be excellent, thanks. Do you also do any multipathing so you
have network redundancy? Or do you deal with this in some other way?
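What I had in mind was giving each storage server two NICs on separate
subnets and running dm-multipath over two iSCSI sessions on the Xen
hosts, roughly like this (the portal IPs are just examples):

  # log in to the same target over both subnets
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10
  iscsiadm -m node --login
  # dm-multipath then presents the two sessions as a single device
  multipath -ll

but if you handle the redundancy differently (bonding, for example) I'd
like to hear about it.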
>
> > Also any alternative ways to
> > provide the same HA would be useful for comparison.
> >
> > - Any Pitfalls?
>
> nope - works like a charm
>
> > - Gaps in the availability? (split-brain possibilities?)
>
> I'm running 2 Linux iSCSI targets with a bunch of Xen boxes...
>
> >
> > - How would you add in additional Xen hosts? Would they always need
> >   to be paired in this arrangement (1 fails over to the other)?
>
> No need for pairing. Just use hb2 with crm and udev for static
> device names...
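Good to know. For the static device names I'm guessing you mean a udev
rule along these lines? (the scsi_id path and options vary by distro,
and the WWID and symlink name below are made up):

  # /etc/udev/rules.d/55-iscsi-names.rules
  KERNEL=="sd*", SUBSYSTEM=="block", \
      PROGRAM=="/lib/udev/scsi_id --whitelisted --device=/dev/%k", \
      RESULT=="36001405aabbccdd11223344", SYMLINK+="iscsi/vmstore"

so every dom0 sees the LUN as /dev/iscsi/vmstore regardless of which
sdX it happens to come up as.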
> >
> > - Is a clustered filesystem required?
>
> NOT required, but I'm testing another variant of this setup. My recipe
> is to run Xen boxes with glusterfs as a file-based disk backend...
>
> http://www.gluster.com/community/documentation/index.php/GlusterFS_and_Xen
>
> First tests were impressive and performance was higher than with
> iSCSI; I'm running ~60 VMs over 10GBit NICs, and the iSCSI targets
> were the bottleneck :-(
>
> >
> > Thanks in advance for any advice or opinions on this.
> >
> > Regards,
> > Mark
>
> hth,
>
> thomas
>
>
Cheers
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users