[Xen-users] RE: iSCSI and LVM



You can use live migration in such a setup, even safely if you back it
with clvm.  You can even live without clvm if you deactivate your VG on
all but a single dom0 before changing the LVM metadata in any way.  A
non-clustered VG being active on multiple dom0s isn't a problem in
itself and makes live migration possible, but you'd better understand
what you're doing.
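
For example, the non-clvm workflow is roughly this (VG and LV names
are just examples):

    # on every dom0 except the one making the change
    # (only possible if none of this VG's LVs are open there):
    vgchange -an vg_domu

    # on the one remaining dom0, change the metadata, e.g.:
    lvcreate -L 4G -n domu42-disk vg_domu

    # then, on the other dom0s, pick up the new metadata:
    vgscan
    vgchange -ay vg_domu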

> Can you please explain to me how my plan is ambitious? Can someone
> please suggest where I should cut down/ scale up?

Even 100 domUs on a single dom0 is quite a lot.  100 Mbit/s of upstream
bandwidth isn't much.  You'll have to tune your iSCSI carefully to
achieve reasonable I/O speeds, which are ultimately limited by your
total storage speed.  Even if your domUs don't do much I/O, 128 MB of
memory is pretty much a minimum for each; 128 such domUs require 16 GB
of dom0 memory (this is probably the easiest requirement to
accommodate).
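
For example, with the open-iscsi initiator the knobs live in
/etc/iscsi/iscsid.conf; treat the values below as starting points
rather than recommendations, since the defaults vary by distribution:

    # allow more outstanding commands per session
    node.session.cmds_max = 1024
    node.session.queue_depth = 128
    # larger data segments mean fewer round trips for big I/Os
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
    node.session.iscsi.FirstBurstLength = 262144
    node.session.iscsi.MaxBurstLength = 16776192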
------------------------------------------------------------------------------------------------
 
Can you please explain the steps I would need to take in order to connect multiple clients to a single iSCSI target? I was thinking of using LVM on the storage server to split my RAID array into two big LVs, and then export one LV to each node. The Xen node would then use LVM within the exported LV to split it up into small LVs for the domUs. Is this a good or bad idea?
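 
Roughly what I have in mind, assuming tgt on the storage server and
open-iscsi on the nodes (all names, IQNs and addresses below are made
up):

    # --- storage server: carve one big LV per node and export it ---
    lvcreate -L 500G -n node1 vg_storage
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2008-01.com.example:storage.node1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        -b /dev/vg_storage/node1
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # --- Xen node: log in and build a nested VG inside the LUN ---
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2008-01.com.example:storage.node1 \
        -p 192.168.1.10 --login
    # suppose the new disk shows up as /dev/sdb
    pvcreate /dev/sdb
    vgcreate vg_node1 /dev/sdb
    lvcreate -L 4G -n domu1-disk vg_node1

Since each exported LV would only ever be touched by the one node it
is exported to, I assume the nested VG would not need clvm.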
 
The 100 Mbit/s upstream is for the internet connection. The bandwidth to the iSCSI server is dual bonded gigabit ethernet. What tuning could I do to the iSCSI setup?
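 
For instance, would jumbo frames on the bond help? Something like
this (bond0 is just what I call the interface here):

    # raise the MTU on the bonded storage interface; the switch and
    # the target side have to be set to the same MTU
    ip link set dev bond0 mtu 9000

    # check that 9000-byte frames really pass end to end
    # (8972 = 9000 minus 20 IP and 8 ICMP header bytes)
    ping -M do -s 8972 192.168.1.10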
 
Thanks

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users