xen-users

Re: [Xen-users] Xen and iSCSI

To: Markus Hochholdinger <Markus@xxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Xen and iSCSI
From: Alvin Starr <alvin@xxxxxxxxxx>
Date: Sun, 29 Jan 2006 16:58:14 -0500
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 29 Jan 2006 22:08:27 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <200601291400.14736.Markus@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <200601281724.47989.Markus@xxxxxxxxxxxxxxxxx> <43DCA940.50401@xxxxxxxxx> <200601291400.14736.Markus@xxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4.2) Gecko/20040308
Markus Hochholdinger wrote:

Hi,

On Sunday, 29 January 2006 at 12:38, Per Andreas Buer wrote:
Markus Hochholdinger wrote:
Background is that I am planning virtual servers within Xen, and they
should get their disks from the network so that I can perform live
migration. Does anyone have a setup like this working?
Sure. I've got a similar setup with gnbd. gnbd works like a charm. I've
heard of people combining gnbd with drbd to get a true HA setup. I've
only used mine for test and development.

well, my idea of HA is as follows:
- Two storage servers on individual SANs connected to the Xen hosts. Each storage server provides block devices via iSCSI.
- In the domU, two iSCSI block devices are combined into a RAID1. On this RAID1 we will have the rootfs.
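A minimal sketch of that domU-side setup, assuming open-iscsi and mdadm (the portals, IQNs and device names are hypothetical):

    # inside the domU: log in to one target on each storage server
    iscsiadm -m discovery -t sendtargets -p 10.0.1.1
    iscsiadm -m discovery -t sendtargets -p 10.0.2.1
    iscsiadm -m node -T iqn.2006-01.com.example:san1.domu1 -p 10.0.1.1 --login
    iscsiadm -m node -T iqn.2006-01.com.example:san2.domu1 -p 10.0.2.1 --login

    # mirror the two imported disks; the rootfs lives on /dev/md0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb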

We are doing this. Well, sort of.
We have the dom0 attach to the iSCSI devices and then pass them up as hda/hdb. The domUs deal with the RAIDing of the devices.
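In the domU config file that could look something like this (the paths and file names are hypothetical); the guest just sees two plain IDE disks and mirrors them itself:

    # /etc/xen/domu1.cfg -- hand the two iSCSI-backed block devices to the guest
    disk = [ 'phy:/dev/iscsi/domu1-disk0,hda,w',
             'phy:/dev/iscsi/domu1-disk1,hdb,w' ]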

My reason for doing this is that the domUs don't need any access to the network that contains the iSCSI devices; all the iSCSI information is hidden from them, so they don't know anything about the arrays.

All my dom0s have access to all the iSCSI devices so that migration is possible.
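Since every dom0 can reach every target, a live move is then a single command (the domain and host names are hypothetical):

    xm migrate --live domu1 xenhost2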


Advantages:
- Storage servers can easily be upgraded. Because of the RAID1 you can safely disconnect one storage server and upgrade its hard disk space. After resyncing the RAID1 you can do the same with the other storage server.
- If you use a kind of LVM on the storage servers you can easily expand the exported iSCSI block devices (the RAID1 and the filesystem also have to be expanded; see the sketch below).
- You can do live migration without configuring the destination Xen host specially (e.g. providing block devices in dom0 to export to domU) because everything is done in the domU.
- If one domU dies, or its Xen host does, you can easily start the domUs on other Xen hosts.
I have done the resync thing to upgrade the storage server software. It is a pain, but it is doable. The other advantages hold true even if the dom0s attach to the iSCSI devices.
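The expansion step could go roughly like this, assuming LVM on the storage servers and ext3 on the RAID1 (the volume and device names are hypothetical):

    # on each storage server: grow the logical volume behind the export
    lvextend -L +10G /dev/vg0/domu1-disk0

    # then, where the RAID1 runs, once the initiator sees the new size:
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0    # unmount first if online resize is unavailable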

Disadvantages:
- When one storage server dies, ALL domUs have to rebuild their RAID1 when this storage server comes back: high traffic on the SANs (see the bitmap sketch below).
- It is not easy to set up a new domU in this environment (LVM, iSCSI, RAID1).
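One mitigation for the full rebuilds would be a write-intent bitmap on each array, so that a short outage only resyncs the blocks written in the meantime; a sketch, assuming mdadm 2.x and a kernel with md bitmap support:

    # add an internal write-intent bitmap to the existing mirror
    mdadm --grow --bitmap=internal /dev/md0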

Rebuilding the arrays can suck back a lot of network bandwidth. It is much better to rebuild the RAID arrays consecutively; rebuilding them concurrently really slows things down and causes the drives in the iSCSI targets to go mad seeking.
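If the rebuilds cannot be serialized, the md resync rate can at least be throttled on each host; a sketch using the standard md sysctls (the values are illustrative):

    # cap the per-array resync rate in KB/s so rebuilds don't saturate the SAN
    echo 5000 > /proc/sys/dev/raid/speed_limit_max
    echo 1000 > /proc/sys/dev/raid/speed_limit_min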

Building a new domU is quite easy.

I build the RAID array in the dom0, mount it locally, and extract a minimal install onto the array. I then unmount the array and stop the RAID. I can then boot the domU.
Then..... Bob's your uncle.
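A minimal sketch of those steps as run from dom0 (the device names, mount point and tarball are hypothetical):

    # build the mirror in dom0 out of the two iSCSI-backed disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/iscsi/domu1-disk0 /dev/iscsi/domu1-disk1
    mkfs.ext3 /dev/md0

    # populate it with a minimal install, then release it for the guest
    mount /dev/md0 /mnt/newdomu
    tar -C /mnt/newdomu -xzf /srv/images/minimal-rootfs.tar.gz
    umount /mnt/newdomu
    mdadm --stop /dev/md0

    # boot the new guest
    xm create /etc/xen/domu1.cfg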


Not sure:
- Performance? Can we get full network performance in the domU? Ideally we can use the full bandwidth of the SANs (e.g. 1 GBit/s), and the SANs can handle this (I will make a RAID0 with three SATA disks in each storage server).
- How is the CPU load on dom0 and domU when using iSCSI in the domU?



It would be interesting to know whether running the iSCSI devices in the domU is faster or slower than running them in the dom0 and exporting the block devices to the domU.

To get all this working, I had to hack together a way to ensure that all the iSCSI devices had consistent names in each dom0.
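In many cases the /dev/disk/by-path or /dev/disk/by-id links that udev creates are already stable; otherwise a custom rule can pin a name. A sketch of such a rule, assuming a udev with scsi_id (the serial and symlink name are hypothetical, matching the config above):

    # /etc/udev/rules.d/20-iscsi-names.rules
    # match the disk by its SCSI serial and create a stable symlink
    KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -s %p", RESULT=="3600a0b80000f1a2b", SYMLINK+="iscsi/domu1-disk0"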

Important safety tips:

1) Do not have two domUs trying to run out of the same RAID array.
2) Do not have two domUs trying to mount the same filesystem.
3) RAID5 is a little too delicate to run with iSCSI devices.


--
Alvin Starr                   ||   voice: (416)585-9971
Interlink Connectivity        ||   fax:   (416)585-9974
alvin@xxxxxxxxxx              ||




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
