The 2.4 statfs interface is 32 bits; I think you're stuck until you
upgrade the server to 2.6, and (as Christoph tells me) you'll also need
a very recent glibc to take advantage of it.
On a large filesystem, xfs easily wraps around the ints in the statfs
structure.
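
For illustration, this is roughly what the truncation looks like (a
sketch of the idea, not the actual kernel code):

    #include <stdio.h>

    int main(void)
    {
            /* 2^64 - 1, the inode count XFS is reporting here */
            unsigned long long f_files = ~0ULL;

            /* what survives the old 32-bit statfs field */
            unsigned int truncated = (unsigned int)f_files;

            /* prints 4294967295, i.e. 2^32 - 1 -- exactly the figure
               df -i shows on the server below */
            printf("%u\n", truncated);
            return 0;
    }
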
-Eric
On Thu, 2004-02-26 at 07:49, James Pearson wrote:
> I have a problem with df reporting "Value too large for defined data
> type" for file systems on a particular server.
>
> The problem occurs when using an NFS client running on a 2.6.3 kernel
> that mounts XFS file systems over NFS from a server running a 2.4.21
> kernel with XFS 1.3.
>
> When I run df on these file systems using the 2.6.3 client I get:
>
> df: `/mnt/tmp': Value too large for defined data type
>
> However, running df from any other client running a 2.4.X kernel, or on
> the server itself, works OK. Also, running df on the 2.6.3 client
> against any other NFS-exported file system works OK.
>
> I initially thought this was an NFS issue, but it appears to be a
> problem with the number of inodes reported from the underlying XFS file
> system.
>
> My original posts to the NFS list can be seen via:
>
> http://marc.theaimsgroup.com/?t=107762984200002&r=1&w=2
>
> It appears the number of inodes that is being reported back to statfs()
> from this file system (from any client or on the server itself) is 2^64
> - 1. On 2.4.X clients, the kernel routine just takes the lower 32 bits
> and doesn't complain. But on the 2.6.X client, it checks for the
> overflow and returns EOVERFLOW to the statfs() caller - hence the error
> above.
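>
> Conceptually, the 2.6 compat path does something like the following
> (my sketch, with a made-up struct and helper name, not the actual
> kernel source):
>
>     #include <errno.h>
>
>     struct statfs32 { unsigned int f_files; /* ... */ };
>
>     static int put_statfs32(struct statfs32 *buf, unsigned long long files)
>     {
>             /* refuse to silently truncate a value that won't fit */
>             if (files != (unsigned int)files)
>                     return -EOVERFLOW;  /* df: "Value too large for
>                                            defined data type" */
>             buf->f_files = (unsigned int)files;
>             return 0;
>     }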
>
> When I run 'df -i /disk2' on the 2.4.21 server, I get:
>
> Filesystem            Inodes   IUsed      IFree IUse% Mounted on
> /dev/sdc1         4294967295       3 4294967292    1% /disk2
>
> df reports:
>
> Filesystem           1k-blocks      Used Available Use% Mounted on
> /dev/sdc1            976428116      3744 976424372   1% /disk2
>
>
> i.e. 2^32 - 1 inodes (which is in fact the lower 32 bits of 2^64 - 1)
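>
> (0xFFFFFFFFFFFFFFFF truncated to 32 bits is 0xFFFFFFFF = 4294967295,
> the figure df -i prints above.)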
>
> The server in question has an external 3.5TB RAID array, partitioned on
> the RAID into 4 separate volumes that are seen on the host server as
> separate LUNs. 3 of these devices are about 1TB; the fourth is approx
> 500GB. Each of the three 1TB devices has this problem, while the 500GB
> device reports a sensible inode count (much less than 2^32), and hence
> df works OK when statfs'ing that file system from a 2.6.3 client.
>
> The question is: why is this file system (and other similar file
> systems on the same server) reporting so many available inodes?
>
> All the other XFS file systems on other servers of about 1TB (or more)
> report the number of inodes at much less than 2^32.
>
> The XFS file systems were initially made using a very old version of
> xfsprogs, but I've just remade one file system using v2.5.6-0 - and it
> still shows the same behaviour.
>
> xfs_info reports:
>
> meta-data=/disk2               isize=256    agcount=233, agsize=1048576 blks
>          =                     sectsz=512
> data     =                     bsize=4096   blocks=244139797, imaxpct=25
>          =                     sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2            bsize=4096
> log      =internal             bsize=4096   blocks=32768, version=1
>          =                     sectsz=512   sunit=0 blks
> realtime =none                 extsz=65536  blocks=0, rtextents=0
>
>
> Any idea as to what is going on?
>
> Thanks
>
> James Pearson
--
Eric Sandeen        XFS for Linux       http://oss.sgi.com/projects/xfs
sandeen@xxxxxxx SGI, Inc. 651-683-3102