WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-users] Xen and iSCSI - options and questions

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Xen and iSCSI - options and questions
From: jpranevich@xxxxxxxxxxx
Date: Mon, 4 Aug 2008 18:02:04 -0400 (EDT)
Delivery-date: Tue, 05 Aug 2008 09:52:49 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hello,

I have a small Xen farm of 8 dom0 servers with 64 virtual machines running 
para-virtualized, and this has been working great. Unfortunately, I've hit a 
ceiling: my iSCSI hardware supports only 512 concurrent connections, and I'm 
pretty much at that limit. (Wish I had seen that problem sooner!)

Of course, 87% of those connections are idle-- but necessary because I need to 
have every volume mounted everywhere for migrations, etc. (And I have some 
utility scripts I wrote to handle migrations and load balancing using Xen-API, 
so it's not an easy matter to simply connect to the iSCSI volumes as I need 
them.) 

I'm using stock Xen 3.2.1, btw, from an RPM that I compiled on x86.

From where I sit, I have several options, but I wanted to run this by the list 
to hear what others have done in this situation:

1. "Just-in-time" iSCSI connections from the iSCSI layer. So, I'd have all of 
my device nodes in /dev/devices/by-path/... and iSCSI would magically connect 
to them properly when the device node is opened. Unfortunately, none of the 
Linux iSCSI clients that I can find support this feature.
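To make option 1 concrete, here is roughly the login step I would want triggered 
automatically when the device node is opened (a sketch using open-iscsi's 
iscsiadm; the by-path name format and the helper function are assumptions for 
illustration, and the portal/IQN are made up):

```shell
#!/bin/sh
# Hypothetical helper: given a /dev/disk/by-path style iSCSI device name,
# recover the portal and IQN so we can log in with open-iscsi's iscsiadm.
# The by-path format assumed here is:  ip-<portal>-iscsi-<iqn>-lun-<n>
path_to_target() {
    name=${1##*/}   # strip the directory part of the path
    portal=$(printf '%s\n' "$name" | sed 's/^ip-\(.*\)-iscsi-.*/\1/')
    iqn=$(printf '%s\n' "$name" | sed 's/.*-iscsi-\(.*\)-lun-[0-9]*$/\1/')
    printf '%s %s\n' "$portal" "$iqn"
}

# Manual usage (commented out -- needs a live target):
# set -- $(path_to_target /dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2008-08.com.example:vol1-lun-0)
# iscsiadm -m node -T "$2" -p "$1" --login
```

The missing piece, of course, is having something fire this on open rather 
than by hand.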

2. "Just-in-time" iSCSI connections from Xen. I found that SuSE's Xen seems to 
do this with a "block-iscsi" script in /etc/xen/scripts, but it's written for 
3.0 and doesn't seem to work in 3.2. The trick is that I'm doing all of my Xen 
management through the XMLRPC API and I don't see any way to do iSCSI mounts 
there, so I suspect that their Xen 3.0 workaround doesn't actually mesh with 
Xen 3.2's new way of doing things. (Otherwise, there would be a way to do it 
through the API.)
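For reference, the SuSE approach is driven from the domU config file with a 
disk spec along these lines (syntax from memory, so treat the exact form and 
the IQN as assumptions):

```
# Hypothetical domU config fragment: the "iscsi:" prefix is what routes the
# device through SuSE's /etc/xen/scripts/block-iscsi hotplug script.
disk = [ 'iscsi:iqn.2008-08.com.example:vol1,xvda,w' ]
```

That works fine for "xm create" from a config file, but I see no equivalent 
knob when creating VBDs through the API.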

3. Root-on-iSCSI boots for all the virtual hosts. This is messier, but I could 
in theory change all 64 VMs to do root-on-iSCSI, and (I presume) the iSCSI 
connection that their local disks were on would be properly moved with a "xm 
migrate". The downside is that Red Hat Enterprise Linux 5.1 doesn't make this 
easy, and I'm trying not to make this too hacky. (And would I need to have 
little volumes for the iSCSI ramdisks? I haven't worked out how that scales 
yet.)
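In case it helps frame option 3, the per-VM config I'd expect would have no 
disk= line at all in dom0 and would look something like this (the kernel path, 
target details, and boot parameter names here are purely illustrative; I 
haven't confirmed what RHEL 5.1's initrd actually accepts):

```
# Hypothetical PV domU config for root-on-iSCSI: dom0 attaches no disk;
# the guest's initrd (built with iscsi-initiator-utils) logs in to the
# target itself and mounts root from it.
kernel  = "/boot/vmlinuz-2.6.18-53.el5xen"
ramdisk = "/boot/initrd-iscsi.img"
extra   = "root=/dev/sda1 iscsi_target_name=iqn.2008-08.com.example:vol1 iscsi_target_ip=10.0.0.5"
```

Since the guest owns the iSCSI session, a "xm migrate" would carry it along 
for free, which is the whole appeal.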

I think the best method is #2 and it seems like it SHOULD be possible. What am 
I missing? How have others solved this dilemma?

Thanks for your help,

Joe Pranevich


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users