On Thursday 04 November 2010 17:11:34 Thomas Halinka wrote:
> Hi Mark,
>
> Am Donnerstag, den 04.11.2010, 15:45 +0000 schrieb Mark Adams:
> > Hi Thomas, Thanks for your response.
> >
> > On Thu, Nov 04, 2010 at 03:46:44PM +0100, Thomas Halinka wrote:
> > > Hi Marc,
> > >
> > > Am Donnerstag, den 04.11.2010, 12:58 +0000 schrieb Mark Adams:
> > > > Hi All,
> > > >
> > > > I'm thinking about the best way to set this up, and would greatly
> > > > appreciate your learned opinions on it. I'm thinking of the
> > > > following:
> > > >
> > > > - 2 storage servers, running LVM, heartbeat, DRBD (primary/standby)
> > > >   and iSCSI.
> > > >
> > > > This will protect against 1 storage server failing.
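The DRBD leg of the setup described above might look roughly like this; the resource name, hostnames, backing LV, and addresses are all hypothetical, so adjust to taste:

```
# /etc/drbd.d/vmstore.res -- DRBD 8 style, names and IPs are made up
resource vmstore {
  protocol C;                      # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/vg0/vmstore;    # LVM logical volume backing DRBD
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/vg0/vmstore;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Heartbeat then promotes the surviving node to primary and brings the iSCSI target up on it if the active storage server fails.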
> > >
> > > ACK
> > >
> > > > - 2 Xen hosts, running heartbeat to ensure the domUs are available.
> > > >   If not, migrate all domUs onto the other Xen host. This will
> > > >   protect against 1 Xen host failing.
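For a planned move (as opposed to a crash, where heartbeat simply restarts the domUs on the survivor), the migration step above can be sketched with Xen's live migration; the domU and host names here are made up:

```shell
# Move a running domU to the other Xen host. Requires xend relocation
# enabled on the target and the shared iSCSI storage visible to both
# hosts; 'web01' and 'xenhost2' are hypothetical names.
xm migrate --live web01 xenhost2
```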
> > >
> > > ACK
> > >
> > > > Any opinions on this arrangement of setup or links to resources
> > > > discussing it would be much appreciated.
> > >
> > > If you're interested, I could provide a link to my wiki, which
> > > describes such a setup.
> >
> > That would be excellent, thanks. Do you also do any multipathing so you
> > have network redundancy? Or do you deal with this in some other way?
>
> In my tests, bonding and multipathing had similar read performance, but
> multipathing had much faster writes than an 802.3ad trunk, so I just went
> with multipathing...
I'm using a similar setup, also with multipathing. Multipathing does not
seem to add a spectacular speed gain, though my guess is that it could be
helped by adding bonding to both paths.
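A rough sketch of the two-path setup being discussed, assuming open-iscsi on the initiator side and dm-multipath on top; the target IQN and portal addresses are made up:

```shell
# Log in to the same iSCSI target over two separate NICs/subnets, then
# let dm-multipath aggregate the sessions into a single block device.
iscsiadm -m node -T iqn.2010-11.example:vmstore -p 10.0.1.10 --login
iscsiadm -m node -T iqn.2010-11.example:vmstore -p 10.0.2.10 --login
multipath -ll   # should show one map with two active paths
```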
The challenge is to get performance high enough for all the VMs you want to
run. Once you have spent enough on RAID hardware and disks, I find the
network part of the storage path to be the biggest bottleneck. iSCSI
performance is poor; AoE is probably a better choice in that regard, but I
shied away from it since it is less of an industry standard.
B.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users