Re: [Xen-users] SAN / LVM backend and partitions

On Mon, 2007-01-22 at 11:34 -0200, Christopher G. Stach II wrote:
> Reinhard Brandstaedter wrote:
> > On Fri, 2007-01-19 at 11:18 -0200, Christopher G. Stach II wrote:
> >> You have a few basic choices:
> >>
> >> 1. Export a single LV from dom0 as a full disk, allowing your domU to
> >> see the LV as an entire disk and use partition tables.
> >> 2. Export each single LV from dom0 as partitions, allowing your domU
> >> to only see those partitions.
> >> 3. Export the VG device from dom0, allowing your domUs to see all LVs.
> >>
> >> Since you're using a SAN, you may be aiming to support multiple Xen
> >> boxes on the same device(s).  If so, you're probably going to want to
> >> run CLVM.  The question is, "Where?"  If you use #1 or #2, your dom0s
> >> will have to participate in the cluster.  If you use #3, you push the
> >> clustering to the domUs (where I think it belongs.)  #3 is less
> >> secure, however.
> > 
> > That's exactly what I want to do - booting several VMs from one
> > read-only rootfs (one LV). But I think I will run CLVM on the Dom0s and
> > use method #2. I haven't used CLVM yet, but I think it's easier to
> > configure it on 2-3 Dom0s than on every DomU?
> 
> It's "easier" (if you've done it more than once, it's not that
> difficult, but the first time...) if you run it on fewer dom0s, but it
> also puts all of the VMs at risk of going down if your dom0 gets fenced
> and reset.  It also puts greater load on the dom0s, which could impact
> VM performance quite a bit (like if you're using default SEDF params.)
> 
> You _could_ also use LVM in the dom0s without the cluster capabilities,
> but you won't see any LV changes across the cluster until you reset the
> machine(s).  As with all of the other cluster suite stuff, there's also
> the risk of it exploding in your face. :)

The scenario Chris described happened to me (unfortunately) a few times
until I really began simplifying things. The classic NFS / OpenSSI way
of doing things does apply here, and a many-read-no-write volume
shouldn't be too risky.
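
To make that concrete, Chris's three export methods map onto the disk
line of a domU config file roughly like this (the VG/LV names and
device targets here are made up, so adjust to your setup):

# Method 1: whole LV as a full disk; the domU keeps its own
# partition table inside the LV.
disk = [ 'phy:/dev/vg0/domu1,xvda,w' ]

# Method 2: individual LVs exported as partitions. Note the shared
# rootfs is attached read-only ('r'), which is what lets several
# domUs attach the same LV at the same time.
disk = [ 'phy:/dev/vg0/shared-root,xvda1,r',
         'phy:/dev/vg0/domu1-swap,xvda2,w' ]

# Method 3 (one reading of it): export the SAN LUN backing the VG
# and run (C)LVM inside the domU itself.
disk = [ 'phy:/dev/sdb,xvdb,w' ]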

Don't use a cluster FS for something you could pull off with NFS, no
matter how much easier it may seem to make your 'down the road' ideas
work. If you need a cluster FS, I highly recommend going with ocfs2
over the RH cluster suite.
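
If you do go the NFS route, the domU side is only a handful of lines
in the config file -- something like this sketch, where the server
address, export path, and kernel are all hypothetical (the domU kernel
also needs NFS-root support compiled in):

kernel     = '/boot/vmlinuz-2.6-xen'
memory     = 256
name       = 'domu1'
vif        = [ '' ]
ip         = 'dhcp'
# Hypothetical NFS server and export -- substitute your own.
nfs_server = '192.168.0.10'
nfs_root   = '/exports/domu-root'
root       = '/dev/nfs'
extra      = 'ro'

Export the rootfs read-only on the server side and you get the same
many-read-no-write behaviour with no cluster stack in the picture.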

That cluster suite stuff almost always blows up in your face somehow,
and I'm still hearing nasty things about a GFS inode DoS vulnerability.
I'm not saying it still exists, just that some would rather risk it
than try upgrading GFS. :)

Best
--Tim



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
