This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Re: iSCSI and LVM

"Jonathan Tripathy" <jonnyt@xxxxxxxxxxx> writes:

> From: Ferenc Wagner [mailto:wferi@xxxxxxx]
>> "Jonathan Tripathy" <jonnyt@xxxxxxxxxxx> writes:
>>> Does anyone have any experience with "shared storage" using iSCSI with
>>> Debian/Ubuntu Dom0s?
>> Yes.  We use one iSCSI export for each domU, shared by two dom0s for
>> failover.  Each domU uses LVM if it wants, but neither does it for
>> snapshotting alone: they are regular backup clients, no magic there.
>> You'll have to check if your SAN supports this high number of exports.
>> Your plan is *very* ambitious anyway.
> My current train of thought is to just export one big LUN to each
> node, and let the node handle LVM. While I couldn't use "live
> migration", I could always mount the big LUN on another server if the
> original were to fail.
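For the failover case described above, bringing the big LUN up on a standby node might look roughly like this. This is only a sketch: the target IQN, portal address, and VG name are hypothetical placeholders, not anything from this thread.

```shell
# Log in to the iSCSI target from the standby node
# (IQN and portal address are placeholders)
iscsiadm -m node -T iqn.2010-01.com.example:bigstore -p 192.168.1.10 --login

# Rescan for the volume group on the newly visible LUN and activate it
vgscan
vgchange -ay vg_guests

# Logical volumes under /dev/vg_guests/ can now back the domU configs
```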

You can use live migration in such a setup, even safely if you back it
with clvm.  You can even live without clvm if you deactivate your VG on
all but a single dom0 before changing the LVM metadata in any way.  A
non-clustered VG being active on multiple dom0s isn't a problem in
itself and makes live migration possible, but you'd better understand
what you're doing.
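The deactivation dance described above could be sketched as follows (VG and LV names are made up for illustration; the key point is that metadata changes happen on exactly one dom0 while the VG is inactive everywhere else):

```shell
# On every dom0 EXCEPT the one making the change:
vgchange -an vg_guests          # deactivate the shared, non-clustered VG

# On the single remaining dom0, it is now safe to alter LVM metadata:
lvcreate -L 10G -n newdomu vg_guests

# Afterwards, make the other dom0s re-read the metadata and reactivate:
vgscan
vgchange -ay vg_guests
```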

> Can you please explain to me how my plan is ambitious? Can someone
> please suggest where I should cut down/ scale up?

Even 100 domUs on a single dom0 is quite a lot.  100 Mbit/s upstream
bandwidth isn't much.  You'll have to tune your iSCSI carefully to
achieve reasonable I/O speeds, which are limited by your total storage
speed.  Even if your domUs don't do much I/O, 128 MB of memory is pretty
much a minimum for each; 128 of those require 16 GB of dom0 memory (this
is probably the easiest requirement to accommodate).
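The memory figure is simple arithmetic, easy to check with shell arithmetic:

```shell
# 128 domUs at a 128 MB minimum each, expressed in GB:
echo $((128 * 128 / 1024))   # prints 16
```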
Good luck,

Xen-users mailing list
