WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-users] SAN + XEN + FC + LVM question(s)

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] SAN + XEN + FC + LVM question(s)
From: Wendell Dingus <wendell@xxxxxxxxxxxxx>
Date: Mon, 15 Sep 2008 12:02:19 -0400 (EDT)
Delivery-date: Mon, 15 Sep 2008 09:03:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1053235014.99411221494497785.JavaMail.root@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
I wonder if someone might be so kind as to sort of "validate" this layout and confirm a few assumptions...

I want to deploy a number of Linux servers, probably RHEL5/CentOS5 (as both DOM0 and multiple DOMU roles) and already have a FC RAID and some FC cards. I've toyed around with this on a stand-alone machine and am pretty sure the concept will translate over to a shared storage system on FC. Oh and yes I've read a tremendous amount of info, blog posts, how-to's, etc.. Many thanks to all who have shared info which has helped me (and others) better understand.

I've configured and maintained GFS before. It's great technology, and I'm trying to decide whether I need it for this server virtualization project or not. If the VMs live inside "files", then obviously those files would need to be accessible by any dom0 physical node, right (hence GFS or similar)? GFS can have serious performance issues, though, especially with very large files. The alternative is to carve out a chunk of disk space, a LUN or possibly an LV as seen by the DOM0's, and just expose it to the DOMU's. The latter is what I'm leaning towards; it would seem to offer a lot of advantages. If it will work as I think it will, that is...
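For the LV-per-guest approach, the domU config on whichever dom0 runs the guest would just point the virtual disk at the shared block device by path. A minimal sketch, assuming made-up guest and LV names (the `phy:` prefix is the standard Xen syntax for handing a host block device to a guest):

```
# /etc/xen/guest1.cfg  (hypothetical names)
name   = "guest1"
memory = 512
# Expose the dom0-visible LV to the guest as its first virtual disk:
disk   = [ 'phy:/dev/mapper/sanvg-guest1--root,xvda,w' ]
```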

What I see is servers with FC cards attached to a switch and to an FC RAID device. All 3 have local disks which they boot from. If CLVM is installed and configured I should be able to make the FC RAID into a PV. Via CLVM all 3 boxes see the PV and can work with it with changes staying in lockstep (from the LVM perspective I mean). On any node I create a VG and within that create a few LVs of various sizes. In virt-manager I then create a VM pointing to physical storage as /dev/mapper/name-of-lv-created.
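The CLVM side of that setup might look roughly like the following, run on any one node once clvmd is up everywhere. Device and VG names here are made up; the key detail is `-c y`, which marks the volume group as clustered so CLVM coordinates metadata changes across nodes:

```shell
# Hypothetical device/VG names; assumes clvmd is running on all three nodes.
pvcreate /dev/sdb                      # the FC RAID LUN as each dom0 sees it
vgcreate -c y sanvg /dev/sdb           # -c y = clustered volume group
lvcreate -L 20G -n guest1-root sanvg   # one LV per guest virtual disk
lvs sanvg                              # the other nodes see the new LV via CLVM
```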

Testing this entirely on local storage appeared to work fine. Extending it to FC/SAN storage, is this the proper approach? Will CLVM do the necessary magic to make this work without issue? And I'm hoping live migration will also be possible?

What I've also played with, on a single local test setup, is pretending disk space was exhausted in the DOMU and I needed to assign it some more. From the DOM0 I just created a second LV, named similarly to the main one that the DOMU uses as its hard drive. From virt-manager I went into devices and added that LV as virtual storage. A "tail -f" of /var/log/messages in the booted DOMU instance showed xvdb just suddenly appeared. I then did pvcreate/vgextend/lvextend/resize2fs to let the DOMU VM add that extra space and extend a filesystem onto it. The idea of a full-fledged LVM living inside of an LV was a bit strange at first. The ability to create new LVs at the DOM0 level and expose them to DOMU VMs is simply fantastic, though. Is it truly this easy?
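The in-guest steps above, sketched out with hypothetical VG/LV names (this assumes the guest's root VG is named `guestvg` and the filesystem is ext3, so resize2fs can grow it online):

```shell
# Inside the DOMU, after the new xvdb shows up in /var/log/messages:
pvcreate /dev/xvdb                         # initialize the new virtual disk as a PV
vgextend guestvg /dev/xvdb                 # grow the guest's volume group onto it
lvextend -l +100%FREE /dev/guestvg/data    # hand all the new space to the LV
resize2fs /dev/guestvg/data                # grow the ext3 filesystem to match
```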

On a local internal 500GB SATA drive on a single server this appears to work perfectly. If CLVM will allow it to work on an FC shared hard drive though, absolutely fantastic...

Are there pitfalls or limitations I've not thought of here, though? Is this approach "best practice", or is some other method considered better?

Thanks!
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users