
Re: xfs_check problems on 3.6TB fs

To: linux-xfs@xxxxxxxxxxx
Subject: Re: xfs_check problems on 3.6TB fs
From: Frank Hellmann <frank@xxxxxxxxxxxxx>
Date: Wed, 27 Oct 2004 13:50:45 +0200
In-reply-to: <20041027095434.GA30788@xxxxxxxxxxxxxx>
Organization: Optical Art Film- und Special-Effects GmbH
References: <20041025150037.GA4665@xxxxxxxxxxxxxx> <417E216A.2090503@xxxxxxxxxxxxx> <20041027090000.GA29337@xxxxxxxxxxxxxx> <20041027095434.GA30788@xxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7) Gecko/20040616
Hi Michal!

That would support my theory that there is a wrap-around bug somewhere in xfs_check. It is not in xfs_repair, so I'll give it a try and have a look.
                Cheers,
                        Frank...

Michal Szymanski wrote:
On Wed, Oct 27, 2004 at 11:00:00AM +0200, Michal Szymanski wrote:

To resume, could someone tell me whether I can safely put my data on a
3.6TB XFS filesystem or not? The machine it is currently attached to
has 4GB RAM + 2GB Swap. If this is not enough for XFS to do a
check/repair, I would say it is not a solution for me.

PS. I've just found, on http://oss.sgi.com/projects/xfs/irix-linux.html


Linux XFS filesystems are limited to 2 Terabytes in size due to
limitations in the Linux block device I/O layers.

Is it just an out-of-date page? Or is it, maybe, the true reason for our problems?


PS2. I have made the following test: I stopped the software RAID and
created two separate XFS filesystems on the two "slices", seen by the
system as /dev/sda1 and /dev/sdb1. One is just below 2TB, the other is
1.6TB. xfs_check runs silently on each such (clean) filesystem. So maybe
it really is the 2TB limit that makes the difference?

Michal.


--
--------------------------------------------------------------------------
Frank Hellmann          Optical Art GmbH           Waterloohain 7a
DI Supervisor           http://www.opticalart.de   22769 Hamburg
frank@xxxxxxxxxxxxx     Tel: ++49 40 5111051       Fax: ++49 40 43169199

