WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Re: [Xen-users] Which distributed file system for Xen

To: master@xxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Which distributed file system for Xen
From: Chris de Vidal <chris@xxxxxxxxxx>
Date: Tue, 26 Jul 2005 00:25:28 -0700 (PDT)
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 26 Jul 2005 07:24:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <50538.204.174.64.37.1122335514.squirrel@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: chris@xxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
--- master@xxxxxxxxxxxxxxx wrote:
> I think Chris and I are looking for something similar -- fault
> tolerant/high-availability over slow/fast links with ease of
> administration and (ideally) zero downtime. I'd like to virtualize my
> storage along with my Xen machines.

Me too.

Ideally I would like a couple dozen machines at both sites; each machine would
be dedicated to its task (web/POP/SMTP/etc.) and replicated real-time or
near-real-time as the service allows.

Until I can install dozens of real machines, creating a pair of host nodes with
many individual virtual machines lets me scale up almost effortlessly, because
the entire network is set up for thousands of users right from day one :-)
There's no need to update DNS, change IPs, or migrate data from machine to
machine; it'll all be ready to go. Just bring up a new box, install Xen, do a
live migration, and bingo, that resource-hungry app has its own dedicated
hardware :-)
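For anyone following along, that "live copy" step is Xen's live migration. A
rough sketch (the guest and host names are invented, and both hosts need xend's
relocation server enabled -- the exact config knobs vary by Xen version):

```shell
## On both hosts, enable the relocation server in xend's config
## (e.g. /etc/xen/xend-config.sxp on recent versions):
##   (xend-relocation-server yes)
##   (xend-relocation-port 8002)
## then restart xend.

## From the loaded host, push the running guest to the fresh box
## while it keeps serving:
xm migrate --live pop-server newbox.example.com

## Check that it arrived:
xm list
```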

Replication between sites causes me to examine solutions like GFS/GNBD.
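For reference, the GNBD+GFS setup I'm imagining would look roughly like this
(the device names, cluster name, and hostnames are made up, and this assumes
the Red Hat cluster suite with CCS and fencing already configured):

```shell
## On the storage server: start the GNBD server and export a block device
gnbd_serv
gnbd_export -d /dev/sdb1 -e web_store

## On each client node: import the server's exports
gnbd_import -i storage1.example.com

## Once, from any node: make a GFS filesystem with a journal per node
gfs_mkfs -p lock_dlm -t mycluster:web_store -j 2 /dev/gnbd/web_store

## On each node: mount it for shared read/write access
mount -t gfs /dev/gnbd/web_store /mnt/web
```

As far as I can tell, though, GNBD by itself only gives shared access to one
disk, not redundancy; mirroring would have to come from RAID underneath or
from volume-level mirroring on top.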


> After looking at AFS again, I was wrong about a couple of things. In the
> unstable AFS tree, there is no longer a 2GB file size limit and volumes
> can be much larger. AFS has many cool features, including local caching,
> online resizing, hot server add/remove, etc. Other than requiring hardware
> redundancy, what's wrong with AFS?  Doesn't look all that difficult to get
> working.

I'd read some things that scared me away, such as corruption (or was that with
Coda?).

And it seemed far more complex than necessary, although compared to GFS+GNBD
it's looking a lot simpler :-)  At the time I was checking out DRBD.
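Incidentally, what drew me to DRBD was how little configuration block-level
replication needs. A sketch of a drbd.conf resource (hostnames, devices, and
addresses are made up; protocol A is DRBD's asynchronous mode, which seems the
best fit for a slow inter-site link):

```
resource r0 {
  protocol A;            # asynchronous replication, tolerates a slow WAN link
  on site1 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on site2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The catch is that plain DRBD is two-node primary/secondary, so it replicates
the storage but doesn't give NFS-style multiple-client access by itself.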

According to Wikipedia, AFS "allows limited filesystem access in the event
of a server crash or a network outage."  That word "limited" scares me; I want
"guaranteed."  I need to read more.


I'm looking to see whether GFS+GNBD offers RAID-style data redundancy combined
with NFS-style multiple-client access.  If it had AFS's caching it would be ideal.


> (I've never used it, just read the docs). I've not come across
> AFS in corp. production environments. They all seem to use EMC storage
> accessed with NFS (at least the Solaris shops anyway).

http://www.openafs.org/success.html


This is a good list:
http://en.wikipedia.org/wiki/List_of_file_systems#Network_file_systems

CD

You have to face a Holy God on Judgment Day. He sees lust as adultery (Matt. 
5:28) and hatred as murder (1 John 3:15). Will you be guilty? 

Jesus took your punishment on the cross, and rose again defeating death, to 
save you from Hell. Repent (Luke 13:5) and trust in Him today.

NeedGod.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
