xen-users

Re: [Xen-users] Shared Storage


On 25/04/2011 19:57, John Madden wrote:
Hands down, managing LVM is my number one choice. Ideally I would just like to set up the iSCSI connections once and just leave it.

Yeah. iSCSI a few LUNs from your SAN, cLVM across your nodes (do the iSCSI in dom0), create your LVs, and you're done.
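
A rough sketch of that sequence on each dom0 might look like the following; the portal address, IQN, device path, and VG/LV names are made up purely for illustration:

  # discover and log in to the SAN target from dom0 (hypothetical portal/IQN)
  iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260
  iscsiadm -m node -T iqn.2011-04.com.example:san.lun0 -p 192.168.10.50:3260 --login

  # on one node only: initialise the LUN and create a clustered VG
  pvcreate /dev/sdb
  vgcreate --clustered y vg_guests /dev/sdb

  # carve out an LV per guest (clvmd must be running on every node)
  lvcreate -L 20G -n vm01-disk vg_guests

The actual device name depends on how the LUN shows up on each host, so check /dev/disk/by-path or your multipath output rather than assuming /dev/sdb.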

This is really only half the picture, though, and touches on another level of storage concepts. What does your backend disk and cache look like? In my clusters, I create two storage pools, one for "fast disk" and the other for "slow disk," then add LUNs from the SAN appropriately. You should get as granular as you can in performance and use-case terms, though, to keep the right I/Os on the right disks, but that may not be practical with your SAN (e.g., if you just have 64 spindles in a single RAID-10 or some dumb JBOD or something).
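
As an illustration of that kind of split (pool names, device paths, and sizes below are hypothetical), the two pools can simply be two clustered VGs, with each guest's volumes placed according to their I/O profile:

  # LUNs from the fast tier (e.g. 15k RAID-10) and the slow tier (e.g. SATA)
  vgcreate --clustered y vg_fast /dev/mapper/san-fast-lun0
  vgcreate --clustered y vg_slow /dev/mapper/san-slow-lun0

  # a database guest: data on fast disk, bulk/archive volume on slow disk
  lvcreate -L 50G  -n db01-data    vg_fast
  lvcreate -L 500G -n db01-archive vg_slow

  # in the guest's config, point each virtual disk at the right pool
  disk = [ 'phy:/dev/vg_fast/db01-data,xvda,w',
           'phy:/dev/vg_slow/db01-archive,xvdb,w' ]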

I guess the message is to think about how you're laying out your data and then align that with how you lay out your disks. You may squeeze out an extra 5% by going with multiple LUNs versus a single LUN, and another 30% by going with FC instead of multi-GbE, but you can gain even more by utilizing the limited I/O of a spindle more effectively.

John

Thanks for the excellent advice, John. Very much appreciated. While I'm not able to disclose our disk setup (for commercial reasons), I am confident that what I have in mind is good for us, as we have been doing this in a non-shared manner (i.e. disks local to the dom0) for quite some time. But yes, as you say, iSCSI will allow for a little bit of "fine tuning".

I also need to sanity-test cLVM and see how well (or how badly) it handles lost iSCSI connections, propagating LVM metadata changes to other nodes, etc.
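
A couple of quick checks along those lines (node roles, VG name, and portal address are placeholders): create an LV on one node and confirm the metadata turns up on the others, then drop the iSCSI session out from under clvmd and see how it copes:

  # on node1: make a metadata change
  lvcreate -L 1G -n clvm-test vg_guests

  # on node2: the new LV should appear without any manual rescan
  lvs vg_guests

  # on one node, simulate a lost iSCSI connection by blocking the portal
  iptables -A OUTPUT -p tcp -d 192.168.10.50 --dport 3260 -j DROP
  # ...watch clvmd/dlm behaviour, then remove the rule and clean up
  iptables -D OUTPUT -p tcp -d 192.168.10.50 --dport 3260 -j DROP
  lvremove -f /dev/vg_guests/clvm-test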

Now onto some testing to see what works out best...

Cheers

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
