Re: Shared Devices with XFS

To: Greg Freemyer <freemyer-ml@xxxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Subject: Re: Shared Devices with XFS
From: Craig Tierney <ctierney@xxxxxxxx>
Date: Fri, 28 May 2004 17:14:32 -0600
In-reply-to: <1085774867.518.11.camel@david.internal.norcrossgroup.com>
References: <40B5D5CB.7020702@opticalart.de> <40B5D903.4050601@xfs.org> <40B5E594.1000605@opticalart.de> <1085774867.518.11.camel@david.internal.norcrossgroup.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Fri, 2004-05-28 at 14:07, Greg Freemyer wrote:
> On Thu, 2004-05-27 at 05:56, Frank Hellmann wrote:
> > Hi!
> > 
> > see below...
> > 
> > Steve Lord wrote:
> > > Frank Hellmann wrote:
> > > 
> 
> > > 
> > > Sharing a SAN filesystem between machines involves a lot more than
> > > just cabling up the storage ;-).
> > 
> > Right. Most of it has to do with loads of money... ;-)
> > 
> 
> OT: non-xfs below
(deleting descriptions of openGFS, GFS, ocfs, and Lustre)

There are a few other options as well.

StorNEXT (ADIC) - A heterogeneous shared filesystem.  It used
to be called Centravision Filesystem (CVFS).  All clients share
the same physical disk(s).  Works with Linux and other platforms,
and is similar to CXFS (at least to a first order).  There is a
single metadata server, but failover is supported.  StorNEXT is
powerful because it is more than just a shared filesystem: it has
integrated HSM features, so multiple clients can access mass store
caches directly.  If you have really big mass store systems, this
additional feature is quite handy.

Polyserve - Shared filesystem for Linux.  Metadata is striped over
all servers.  It also actively supports efficient export over NFS,
either to non-Linux clients or to large systems where you are not
going to have every client directly attached to the disks (SAN or
iSCSI).
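
As a generic illustration of that NFS path (not Polyserve's own
management or failover tooling), exporting an already-mounted
shared filesystem over NFS on Linux looks roughly like this; the
mount point and client network are made up:

    # Plain Linux NFS export of a mounted shared filesystem.
    # /mnt/psfs and 192.168.1.0/24 are hypothetical.
    echo '/mnt/psfs 192.168.1.0/24(rw,sync)' >> /etc/exports
    exportfs -ra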

Ibrix - Shared/distributed filesystem for Linux.  It is distributed
in that multiple, separate servers can be aggregated into one
namespace.  It also earns the 'shared' description because, if a
server shares a disk with the other servers (FC or iSCSI), the
clients can obtain locks from the other server and then read and/or
write the disk directly.  I know they support direct reads, but I
am not sure if direct writes are supported yet.

There is one more interesting thing here.  Ibrix provides a client
kernel module for access, so all of the nodes in your cluster or
infrastructure can use the client for efficient access to the
servers, skipping alternatives like NFS.

DataGRID (Terrascale) - This is an interesting project because it is
more like a parallel filesystem but uses shared-filesystem access
patterns.  It leverages much of the existing Linux infrastructure.
It is based on EXT2, and block devices are exported from each IO
server (target).  The clients (initiators) then use traditional
Linux tools to mount the filesystems.  If you want a parallel
filesystem, use MD or LVM to stripe across the exported block
devices (see the sketch below).  They have their own iSCSI-like
device that does all of the magic; it supports locking and
coordinates disk access, because all initiators access the same
set of disks from each target.
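
As a rough sketch of the striping step only (not Terrascale's
actual tooling, which uses its own iSCSI-like transport and
locking), this is what MD striping over imported block devices
looks like on an initiator; the device names and mount point are
made up:

    # /dev/sdb and /dev/sdc stand in for block devices imported
    # from two IO servers (targets); names are hypothetical.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # Put an ext2 filesystem on the striped device and mount it
    # with the usual tools.
    mkfs.ext2 /dev/md0
    mkdir -p /mnt/datagrid
    mount /dev/md0 /mnt/datagrid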

Since the original topic was shared devices, PVFS1/2 don't really
fit, because they are parallel filesystems.  Each server uses its
own disk, and a filesystem is striped across all servers.  This is
more similar to Lustre, but Lustre can be a distributed filesystem
(multiple servers, multiple disks, single namespace) as well as a
parallel filesystem.  Metadata services in PVFS2 are striped across
all servers.

Obvious guesses for the vendors' web site names work in all cases.

Craig


