| To: | linux-xfs@xxxxxxxxxxx |
|---|---|
| Subject: | Re: xfs_check problems on 3.6TB fs |
| From: | Frank Hellmann <frank@xxxxxxxxxxxxx> |
| Date: | Wed, 27 Oct 2004 13:44:35 +0200 |
| In-reply-to: | <20041027090000.GA29337@astrouw.edu.pl> |
| Organization: | Optical Art Film- und Special-Effects GmbH |
| References: | <20041025150037.GA4665@astrouw.edu.pl> <417E216A.2090503@opticalart.de> <20041027090000.GA29337@astrouw.edu.pl> |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7) Gecko/20040616 |
Hi Michal!

Michal Szymanski wrote:
> On Tue, Oct 26, 2004 at 12:05:30PM +0200, Frank Hellmann wrote:
>> Remember that Large Block Device (2+ TB devices) support has only been working "correctly" since kernel 2.6.6, so there may be a few issues that have not shown up until recently.
>
> Indeed, I have several hardware RAID arrays sliced into sub-2TB partitions with an EXT3 FS, and it takes a few hours to complete a filesystem check after a crash. Well, at least it completes. Now XFS, advertised as a true 64-bit FS capable of dealing with petabytes, seems unable to check an almost empty partition and/or to repair a partition with several hundred thousand files.

We have two production servers running a similar setup. We work with film images that are usually 10MB-60MB in size, with a minimum of about 130,000 files. Except for that one issue (putting everything into the filesystem root) this works really well, has good performance, and is stable. We have far more trouble with the nvidia drivers than with anything else.

It seems to me that xfs_check is running into some wrap-around bug at the 2TB limit and just spits out the "out of memory" error. I don't have sufficient knowledge to fix this (IMHO there is a bigger chance of me breaking it completely).
I am guessing that they use IRIX, which has been known to support larger devices for quite some time. I expect the user-space tools will just catch up with the new kernel features soon. As I said, LBD has only been known to work for a _few_ months.
No. There will be times when you'll need it. A power loss is never going to give you predictable results.

> To sum up, could someone tell me whether I can safely put my data on a 3.6TB XFS filesystem or not? The machine it is currently attached to has 4GB RAM + 2GB swap. If this is not enough for XFS to do a check/repair, I would say it is not a solution for me.

As I said, only in one very unlikely case (about 500,000 files in the fs root, plus the Nvidia driver crashing the machine) is there an issue repairing it. In all other cases we have never had a problem with XFS and xfs_repair. And we have to run xfs_repair at least once a week due to the nvidia crashes and hard resets.
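For anyone hitting the same out-of-memory error, the workflow we use is roughly the following (a sketch only — the device name and mount point are placeholders, and the filesystem must be unmounted before any check or repair):

```shell
# Unmount first: xfs_check/xfs_repair must not run on a mounted
# filesystem. /mnt/film and /dev/sdb1 are placeholder names.
umount /mnt/film

# Read-only consistency check. xfs_repair -n ("no modify" mode)
# reports problems without writing to the disk, and in our experience
# copes better with large filesystems than xfs_check does.
xfs_repair -n /dev/sdb1

# Actual repair pass, run only after reviewing the dry-run output.
xfs_repair /dev/sdb1
```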
A bit outdated, I would say. But maybe someone else would like to comment on the LBD stability and sanity?

Cheers,
Frank...
--
--------------------------------------------------------------------------
Frank Hellmann Optical Art GmbH Waterloohain 7a
DI Supervisor http://www.opticalart.de 22769 Hamburg
frank@xxxxxxxxxxxxx Tel: ++49 40 5111051 Fax: ++49 40 43169199