xen-api

Re: [Xen-API] Alternative to Vastsky?

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] Alternative to Vastsky?
From: Tim <tim@xxxxxxxxxx>
Date: Wed, 20 Apr 2011 01:09:56 +0100
Delivery-date: Tue, 19 Apr 2011 17:10:27 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4DAE0ADD.70209@xxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <4DAE085F.5040707@xxxxxxxxxx> <4DAE0ADD.70209@xxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.14) Gecko/20110223 Lightning/1.0b2 Thunderbird/3.1.8
On 19/04/11 23:21, George Shuklin wrote:
I think we should split this into three different scenarios:

1) Local storage redundancy within a single host (e.g. software RAID support; I think this requires a small tweak to the installer to create a RAID1 array instead of a plain /dev/sda installation)

This works quite well - I do it manually for each host after installing. It would be less painful if XCP could be upgraded using yum; that way there wouldn't be a need to re-do it after each upgrade.
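
For anyone wanting to do the same, the manual conversion looks roughly like the sketch below - device names and partition layout are assumptions about a default install, not my exact procedure, and I'm glossing over copying the system across and re-installing the bootloader:

  # replicate the install disk's partition table onto the second disk
  sfdisk -d /dev/sda | sfdisk /dev/sdb

  # create degraded RAID1 arrays using only the second disk for now
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2

  # ...copy the running system onto the arrays, update the bootloader...

  # finally add the original partitions and let the arrays resync
  mdadm --add /dev/md0 /dev/sda1
  mdadm --add /dev/md1 /dev/sda2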

2) Local storage redundancy within a pool, with replication limited to a few hosts (primary/primary DRBD between two XCP hosts, similar to the current /opt/xensource/packages/iso shared ISO SR)

I use this as the backing for an LVM storage repository. The only problem I can foresee is that I'm not sure whether DRBD supports multi-path. Network problems in a primary/primary setup would lead to split-brain, with different VMs running on different brains - I can't imagine that being fun to solve. I'm using a crossover cable and it seems to work well - very reliable, but definitely not scalable.
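
For reference, the DRBD side of such a setup looks roughly like this - a sketch only, with hostnames, disks, addresses and the split-brain policies all placeholders rather than my exact config:

  # /etc/drbd.conf (8.3-style)
  resource r0 {
    protocol C;
    net {
      allow-two-primaries;               # required for primary/primary
      after-sb-0pri discard-zero-changes;
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;          # a real split-brain needs manual repair
    }
    on xcp-host1 {
      device    /dev/drbd0;
      disk      /dev/sdb;
      address   192.168.100.1:7789;      # the crossover link
      meta-disk internal;
    }
    on xcp-host2 {
      device    /dev/drbd0;
      disk      /dev/sdb;
      address   192.168.100.2:7789;
      meta-disk internal;
    }
  }

The LVM SR then sits on top of /dev/drbd0, created with something along the lines of "xe sr-create type=lvm content-type=user name-label=drbd-sr device-config:device=/dev/drbd0" (getting the second host to use it involves more fiddling than I'll go into here).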

3) Support for external storage that supports replication, clustering and many other enterprise-level buzzwords.

The third is the most interesting.

Right now I plan to test iSCSI over DRBD with multi-path to both iSCSI targets (I've never tested this, but it may be interesting); the alternative is corosync/pacemaker clustering for NFS/iSCSI + DRBD...

If I am understanding you correctly, I have tried this setup: two iSCSI targets kept in sync using DRBD, with multi-path between the initiators and targets. It was replaced with the aforementioned solution when the hosts were upgraded; that solution only required two servers as opposed to four and no additional switches, had fewer points of failure overall, and removed the processing overhead/latency caused by the iSCSI layer.

I can imagine it would be of use in a situation where you had multiple initiators, but it would then run the risk of becoming a bottleneck.
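
On the initiator side there was nothing exotic - just the standard open-iscsi and multipath tooling, roughly as below (target addresses and the IQN are placeholders, not the real ones):

  # discover and log in to both targets
  iscsiadm -m discovery -t sendtargets -p 10.0.1.1
  iscsiadm -m discovery -t sendtargets -p 10.0.1.2
  iscsiadm -m node -T iqn.2011-04.example:drbd-store --login

  # check that multipathd has coalesced the two paths into one device
  multipath -ll

with an LVMoISCSI SR on top, using the usual device-config:target, device-config:targetIQN and device-config:SCSIid parameters.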

I also tried an active/passive DRBD pair with iSCSI/multi-path, with fail-over managed by pacemaker/heartbeat. Write performance was marginally better, but the problem was ensuring that the fail-over worked as planned.
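
For what it's worth, the pacemaker side was only a handful of resources, shaped roughly like this (crm shell syntax; resource names, the IQN and the service IP are placeholders rather than my actual config):

  primitive p_drbd ocf:linbit:drbd params drbd_resource=r0 op monitor interval=15s
  ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true
  primitive p_ip ocf:heartbeat:IPaddr2 params ip=10.0.1.10 cidr_netmask=24
  primitive p_target ocf:heartbeat:iSCSITarget params iqn=iqn.2011-04.example:ha-store
  primitive p_lun ocf:heartbeat:iSCSILogicalUnit params target_iqn=iqn.2011-04.example:ha-store lun=1 path=/dev/drbd0
  group g_iscsi p_ip p_target p_lun
  colocation c_iscsi_on_master inf: g_iscsi ms_drbd:Master
  order o_promote_first inf: ms_drbd:promote g_iscsi:start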


On 20.04.2011 02:10, Tim Titley wrote:
Has anyone considered a replacement for the Vastsky storage backend now that the project is officially dead (at least for now)?

I have been looking at Ceph ( http://ceph.newdream.net/ ). A suggestion, for someone so inclined to do something about it, would be to use the RADOS block device (RBD) and put an LVM volume group on it, which would require modification of the current LVM storage manager code - similar, I assume, to LVMoISCSI.

This would provide scalable, redundant storage at what I assume would be reasonable performance since the data can be striped across many storage nodes.
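
The manual equivalent of what such an SR driver would have to automate is only a few steps - a sketch only, with the pool/image names made up and the device path depending on the rbd tool and kernel version:

  # create and map an RBD image (size is in MB)
  rbd create vmstore --size 102400
  rbd map vmstore          # or via /sys/bus/rbd/add with older tools

  # put an LVM volume group on the mapped device
  pvcreate /dev/rbd0
  vgcreate VG_Ceph /dev/rbd0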

Development seems reasonably active and, although the project is not officially production quality yet, it is part of the Linux kernel, which looks promising, as does the news that they will be providing commercial support.

The only downside is that RBD requires a 2.6.37 kernel. For those "in the know": how long will it be before this kernel makes it into XCP, considering that this vanilla kernel supposedly works in dom0 (I have yet to get it working)?

Any thoughts?

Regards,

Tim


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api