You know what, I think I'm in the same boat as you are. I got my test
environment up and running, but now that I'm verifying everything, I'm
actually seeing the same errors you are. The DomUs can't write to their
filesystems, and I'm getting the same log messages in Dom0:
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: warning: assertion "gfs_glock_is_locked_by_me(ip->i_gl)" failed
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: function = gfs_prepare_write
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: file = /usr/src/build/729060-x86_64/BUILD/xen0/src/gfs/ops_address.c, line = 329
Apr 18 16:22:49 fjcruiser kernel: GFS: fsid=example:my_lock.1: time = 1145395369
Sorry, I spoke too soon. So ... anyone else have a clue? :)
-Steve
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Stephen Palmer
> Sent: Tuesday, April 18, 2006 4:08 PM
> To: Jim Klein; xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-users] Xen and GFS
>
> Oh, well, I guess the difference is that I'm not actually mounting the
> files as VBD's (as I inaccurately said earlier). I'm just using the
> syntax:
>
> disk = [ 'file:/mnt/xen/vrserver1,xvda,w' ]
>
> ... to do file-backed storage. They're never attached as VBD's to
> the DomU.
> Maybe that would work for you?
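>
> In case it helps, the rest of my domU config is nothing special;
> it's roughly like this (the kernel path and names are just from my
> test box, so adjust to taste):
>
>     kernel = "/boot/vmlinuz-2.6-xenU"
>     memory = 256
>     name = "vrserver1"
>     disk = [ 'file:/mnt/xen/vrserver1,xvda,w' ]
>     root = "/dev/xvda ro"
>
> /mnt/xen is where the GFS volume is mounted in dom0, so the image
> file itself sits on the shared storage.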
>
> -Steve
>
> > -----Original Message-----
> > From: Jim Klein [mailto:jklein@xxxxxxxxxxxxxxxx]
> > Sent: Tuesday, April 18, 2006 3:58 PM
> > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > Cc: Stephen Palmer
> > Subject: Re: [Xen-users] Xen and GFS
> >
> > That's exactly what I want to do, and I am using FC5 as well. But
> > when I create the VBD's (either with the xenguest-install.py script
> > or by manually creating an img file with dd and mounting -o loop) I get
> > I/O errors and the messages in the log listed earlier. The images
> > mount, but are not writable, presumably because of a locking
> > problem.
> > I found a note in the kernel archives that spoke of problems getting
> > loop file systems to mount properly off a GFS volume, but didn't see
> > a resolution.
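> >
> > For what it's worth, the manual route I'm taking is roughly this
> > (sizes and paths are just what I happened to use):
> >
> >     dd if=/dev/zero of=/mnt/gfs/vm1.img bs=1M count=4096
> >     mkfs.ext3 -F /mnt/gfs/vm1.img
> >     mount -o loop /mnt/gfs/vm1.img /mnt/vm1
> >
> > The mkfs and the mount itself succeed; it's writing into the loop
> > mount afterwards that throws the I/O errors and the GFS messages.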
> >
> >
> > On Apr 18, 2006, at 1:42 PM, Stephen Palmer wrote:
> >
> > > I've done exactly this (with iSCSI instead of FC), but I did take
> > > the extra step to configure GFS, as I intended each cluster node
> > > to run various DomU's (3 or 4 on each). The DomU VBD's are all
> > > stored on the same iSCSI LUN, so each node can read/write to the
> > > LUN simultaneously with GFS.
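> > >
> > > In case it's useful, the GFS side of it boils down to something
> > > like this (device name and journal count are just from memory, so
> > > don't take them literally):
> > >
> > >     gfs_mkfs -p lock_dlm -t example:my_lock -j 4 /dev/sdb1
> > >     mount -t gfs /dev/sdb1 /mnt/xen
> > >
> > > run once for the mkfs and then mounted in dom0 on every node, with
> > > cman and fenced running so the DLM locking actually works.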
> > >
> > > It took a lot of trial and error to get everything working - I
> > > got stuck trying to figure out why the LVM2-cluster package was
> > > missing in Fedora Core 5, and finally realized that it wasn't
> > > really necessary as long as I did all of the LVM administration
> > > from one node and used the pvscan/vgscan/lvscan tools on the
> > > other nodes to refresh the metadata.
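> > >
> > > Concretely that just means something along these lines (the
> > > volume group and lv names are made up):
> > >
> > >     # on whichever node you treat as the "admin" node
> > >     lvcreate -L 8G -n vm01 xenvg
> > >
> > >     # on the other nodes, to pick up the new metadata
> > >     pvscan; vgscan; lvscan
> > >     vgchange -ay xenvg
> > >
> > > rather than running clvmd from the missing LVM2-cluster package.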
> > >
> > > Stephen Palmer
> > > Gearbox Software
> > > CIO/Director of GDS
> > >
> > >> -----Original Message-----
> > >> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> > >> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of John Madden
> > >> Sent: Tuesday, April 18, 2006 3:31 PM
> > >> To: xen-users@xxxxxxxxxxxxxxxxxxx
> > >> Cc: Jim Klein
> > >> Subject: Re: [Xen-users] Xen and GFS
> > >>
> > >> On Tuesday 18 April 2006 16:17, Jim Klein wrote:
> > >>> The setup I have is three AMD64 DP server blades w/ 4GB RAM
> > >>> each, attached to an FC SAN. The thought was that I would
> > >>> create a GFS volume on the SAN, mount it under Xen dom0 on all
> > >>> 3 blades, create all the VBDs for my VMs on the SAN, and thus
> > >>> be able to easily migrate VMs from one blade to another,
> > >>> without any intermediary mounts and unmounts on the blades. I
> > >>> thought it made a lot of sense, but maybe my approach is wrong.
> > >>
> > >> Not necessarily wrong, but perhaps just an unnecessary layer. If
> > >> your intent is HA Xen, I would set it up like this:
> > >>
> > >> 1) Both machines connected to the SAN over FC
> > >> 2) Both machines having visibility to the same SAN LUN(s)
> > >> 3) Both machines running heartbeat with private interconnects
> > >> 4) LVM lv's (from dom0) on the LUN(s) for carving up the storage
> > >>    for the domU's
> > >> 5) In the event of a node failure, the failback machine starts
> > >>    with an "/etc/init.d/lvm start" or equivalent to prep the
> > >>    lv's for use. Then xend start, etc. (rough example below)
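> > >>
> > >> So each domU config would point straight at an lv with something
> > >> like (vg/lv names made up):
> > >>
> > >>     disk = [ 'phy:/dev/xenvg/domu1-root,xvda,w' ]
> > >>
> > >> and on failover the surviving node just needs the lv's activated
> > >> ("/etc/init.d/lvm start" or a "vgchange -ay xenvg"), then xend
> > >> start and xm create for each guest.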
> > >>
> > >> For migration, you'd be doing somewhat the same thing, only
> > >> you'd need a separate SAN LUN (still use LVM inside dom0) for
> > >> each VBD. My understanding is that writing is only done by one
> > >> Xen stack at once (node 0 before migration, node 1 after
> > >> migration, nothing in between), so all you have to do is make
> > >> that LUN available to the other Xen instance and you should be
> > >> set. A cluster filesystem should only be used when more than one
> > >> node must write to the same LUN at the same time.
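> > >>
> > >> At that point a live migration is just something along the lines
> > >> of:
> > >>
> > >>     xm migrate --live domu1 node2
> > >>
> > >> (domain and host names made up), assuming the xend relocation
> > >> server is enabled in xend-config.sxp on both dom0's, since the
> > >> target already sees the same LUN.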
> > >>
> > >> John
> > >>
> > >>
> > >>
> > >> --
> > >> John Madden
> > >> Sr. UNIX Systems Engineer
> > >> Ivy Tech Community College of Indiana
> > >> jmadden@xxxxxxxxxxx
> > >>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users