On 2010-10-23 13:13, Peter Grandi wrote:
* JFS is good for almost everything, including largish filesystems
on somewhat largish systems with lots of processes accessing
lots of files; it works equally well on 32b and 64b, is very
stable, and has a couple of nice features. Its major downside is
that it takes less care over barriers than XFS. I think that it
can handle filesystems up to 10-15TB well, and perhaps beyond. It
should have been made the default for Linux for at least a decade
instead of 'ext3'.
Would comment here that JFS is indeed very good, but it does have a
problem when reaching/hitting the 32TB boundary. This appears to be a
user-space tool issue. It is the main reason I switched over to XFS, as
I was running into this problem too often.
* XFS is like JFS, but with somewhat higher scalability, both as to
sizes and as to internal parallelism in the case of multiple
processes accessing the same file, and it has a couple of nice
features (mostly barrier support, but also small blocks and large
inodes). Its major limitations are its internal complexity and that
it should only be used on 64b systems. It can support single
filesystems larger than 10-15TB, but that's stretching things.
Have used XFS up to 120TB myself on real media (i.e. not sparse files)
under Linux; will be building >128TB shortly. Have used larger XFS
filesystems under Irix in the past.
Generally I find that with most file systems and their tools there are
many bugs when you cross bit boundaries where they were not tested.
Whenever using or planning large systems, /always/ test first and have
good backups.
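
For what it's worth, below is a rough sketch of the kind of "test
first" check I mean: grow a sparse backing file just past the suspect
boundary (32TiB here), attach it to a loop device, and run
mkfs/mount/umount against it. Assumes root and Python 3; the paths,
the size and the mkfs.xfs choice are only placeholders, and a sparse
file obviously only exercises the user-space tools and on-disk layout,
not the real media.

#!/usr/bin/env python3
# Sketch of a size-boundary smoke test: sparse backing file just past
# 32TiB, loop device, mkfs, mount, tiny write, then clean up.
# Needs root and a host filesystem that allows >32TiB sparse files.

import os
import subprocess
import sys

BACKING = "/var/tmp/fs-boundary-test.img"   # placeholder path
MOUNTPOINT = "/mnt/fs-boundary-test"        # placeholder path
SIZE = 32 * 2**40 + 2**30                   # 1GiB past the 32TiB mark
MKFS = ["mkfs.xfs", "-f"]                   # or e.g. ["mkfs.jfs", "-q"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Sparse file: takes almost no real space, but presents the target
    # size to mkfs and friends.
    with open(BACKING, "wb") as f:
        f.truncate(SIZE)

    # Attach it to a free loop device and capture the device name.
    loopdev = subprocess.run(
        ["losetup", "--find", "--show", BACKING],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    try:
        run(MKFS + [loopdev])
        os.makedirs(MOUNTPOINT, exist_ok=True)
        run(["mount", loopdev, MOUNTPOINT])
        try:
            # Tiny write/read just to confirm the mounted fs is usable.
            probe = os.path.join(MOUNTPOINT, "probe")
            with open(probe, "w") as f:
                f.write("boundary test\n")
            print(open(probe).read().strip())
        finally:
            run(["umount", MOUNTPOINT])
    finally:
        run(["losetup", "-d", loopdev])
        os.remove(BACKING)

if __name__ == "__main__":
    if os.geteuid() != 0:
        sys.exit("run this as root")
    main()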