
Re: RHEL ES 4

To: linux-xfs@xxxxxxxxxxx
Subject: Re: RHEL ES 4
From: pg_xfs@xxxxxxxxxxxxxxxxxx (Peter Grandi)
Date: Fri, 18 Nov 2005 20:19:03 +0000
In-reply-to: <437E190B.7070901@xxxxxxx>
References: <32927.68.52.44.223.1132279914.squirrel@xxxxxxxxxxxxx> <437D6935.2090905@xxxxxxx> <1132326431.12165.9.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <437DFBD8.3070106@xxxxxxx> <437E0297.40807@xxxxxxx> <437E053E.8090704@xxxxxxx> <437E190B.7070901@xxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
>>> On Fri, 18 Nov 2005 12:10:19 -0600, Eric Sandeen
>>> <sandeen@xxxxxxx> said:

[ ... ]

>>> Oh, repair on a 300T filesystem will be painful anywhere,
>>> I think, unfortunately.

>> hm, painful yes, but hopefully not impossible? otherwise if
>> something goes wrong on a 300TB fs the only way to fix it
>> would be restoring from backup, no matter how tiny the
>> corruption might be (and xfs_repair could fix it easily if
>> only the volume was not so big)....

> The time & memory requirements for repair on a filesystem of
> this size are currently extremely large... there have been
> some rules of thumb for time/memory requirements on this list
> before, but I don't have them offhand...

As a handy pointer, I summarized the issue a bit, with links
to past discussions, in a recent posting (Nov. 10th):

  http://OSS.SGI.com/archives/linux-xfs/2005-11/msg00051.html

BTW, I was recently scanning the XFS mailing list archives (and
got depressed by the sheer percentage of fools trying rather
ambitious things), and the time/space taken by
'xfs_check'/'xfs_repair' is a common "surprise".

My impression is that the most common questions are:

* What about XFS and RH EL 3/4?

* What about files filled with zeroes?

* What about XFS, lots of other stuff, and 4K kernel stacks?

* How much RAM do 'xfs_check'/'xfs_repair' take?

* I have RAM and plenty of swap space; why do
  'xfs_check'/'xfs_repair' still fail on a 32-bit system on
  filesystems larger than 2TiB?

* What about XFS, small lowmem and lots and lots of cached
  pages with kernel 2.4?

* What about XFS and SCSI HAs that don't support READ/WRITE16?
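On the 32-bit question above: the reason extra swap does not help is
that a 32-bit process is capped by virtual address space (4GiB total,
and usually less usable), not by available RAM plus swap. Here is a
minimal back-of-the-envelope sketch of that ceiling; the
bytes-per-block bookkeeping constant is a purely illustrative
assumption, not a measured 'xfs_repair' figure:

```python
# Why swap does not rescue a 32-bit xfs_check/xfs_repair:
# the hard ceiling is the process's virtual address space.
# The 8 bytes-per-4KiB-block constant below is an ASSUMED,
# illustrative overhead, not a real xfs_repair measurement.

TIB = 2**40
GIB = 2**30

def addressable_limit_gib(bits=32):
    """Total virtual address space for a given word size, in GiB."""
    return 2**bits / GIB

def repair_workspace_gib(fs_bytes, bytes_per_4k_block=8):
    """Illustrative in-core bookkeeping if a repair tool kept a few
    bytes of state per 4KiB filesystem block (assumption)."""
    return (fs_bytes // 4096) * bytes_per_4k_block / GIB

print(addressable_limit_gib(32))      # 4.0 -- the whole address space
print(repair_workspace_gib(2 * TIB))  # 4.0 -- already at the ceiling
```

So at roughly 2TiB the hypothetical per-block state alone would fill
the entire 32-bit address space, no matter how much swap is
configured; a 64-bit machine sidesteps this particular wall.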

I suppose that someone could put these in a little web page and
then put the URL in the footer of every email sent out by the
mailing list processor.

