
To: linux-xfs@xxxxxxxxxxx
Subject: influencing reported free inodes?
From: Tim Flower <tim@xxxxxxxxx>
Date: Tue, 01 Feb 2005 12:04:16 -0700
Organization: SEAKR Engineering
Sender: linux-xfs-bounce@xxxxxxxxxxx

Greetings,

I have an odd question I'm trying to answer and was wondering if some
kind soul could help point me in the right direction.

I have a SUSE 8 server with an attached ~1.4 TB RAID device that I've
formatted as a single XFS partition.  I didn't use any unusual options,
just a straight 'mkfs.xfs /dev/sdb1'.

fileserver$ xfs_info /dev/sdb1
meta-data=/myshare               isize=256    agcount=351, agsize=1048576 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=367276014, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

fileserver$ dmesg|grep XFS
SGI XFS 1.3.1 with ACLs, no debug enabled
SGI XFS Quota Management subsystem

fileserver$ rpm -qa |grep xfs
xfsprogs-2.5.11-0
xfsprogs-devel-2.5.11-0

The partition is served out via NFS to a Solaris 9 box.  The Solaris box
mounts and accesses the partition just fine; no problems from an OS
standpoint.

The problem is with one of the apps that runs on the Solaris box.  I've
traced it to a statvfs() system call that fails with an overflow error
(EOVERFLOW), at which point the app bails out.  Digging further, I found
that statvfs() uses 32-bit fields and overflows on some of the large
values being returned.  For example, if I do a df on the partition from
the Solaris box (df uses the 64-bit-friendly statvfs64() call), I see
the following:

$ df /myshare
/myshare         (fileserver:/myshare):1700188000 blocks 18446744073706766974 files
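
If I'm reading that right, the reported file count is 2^64 - 2784642,
i.e. it looks like a small negative number being reinterpreted as an
unsigned 64-bit value.  Here's a minimal sketch of the failure as I
understand it (a hypothetical test program, not code from the app;
/myshare is the mount point from the df output above):

    /* Build as an ordinary 32-bit program on the Solaris 9 box; may
     * need -D_LARGEFILE64_SOURCE for the statvfs64() declaration. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/statvfs.h>

    int main(void)
    {
        struct statvfs sv;
        struct statvfs64 sv64;

        /* The 32-bit call: this is what the app does, and it fails
         * with EOVERFLOW because the file counts coming back from
         * the server don't fit in the 32-bit fields. */
        if (statvfs("/myshare", &sv) == -1)
            printf("statvfs: %s\n", strerror(errno));

        /* The 64-bit variant (what Solaris df uses) succeeds and
         * shows the suspicious near-2^64 free-files value. */
        if (statvfs64("/myshare", &sv64) == 0)
            printf("f_ffree = %llu\n", (unsigned long long)sv64.f_ffree);

        return 0;
    }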

Small XFS partitions (sorry, I don't have a good example handy; ~1-2 GB
total size, probably somewhat larger too) seem to avoid the problem,
presumably because their counts fit in 32 bits, but they defeat the
purpose of having the large direct-attached RAID since they're only a
few percent of the total size I want to work with.

My question is this: is there anything that can be done on the server
side to influence the free inode/file counts that XFS reports?
(Specifically, anything other than rebuilding the XFS partition
smaller.)
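
The only knob I've come across that even sounds related is the maximum
inode percentage (the imaxpct=25 in the xfs_info output above).  I
honestly don't know whether lowering it changes the counts that
statvfs sees over NFS, or whether this version of xfsprogs can change
it without re-making the filesystem, but for the record I mean
something like:

    fileserver$ mkfs.xfs -i maxpct=5 /dev/sdb1   # at mkfs time
    fileserver$ xfs_growfs -m 5 /myshare         # on a mounted fs, if supported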

On a related note, has anyone ever run into this sort of problem in the
past, and (if so) what did you do to work around it?

The app's vendor has been notified and has logged this as a bug, but
they consider it a "corner case" and don't seem inclined to update
their code, at least not quickly.  I've pointed out the many other apps
that run there without problems, but was unable to goad them into
patching it.

Sorry about the length.  Many thanks in advance to anyone helping me
out!

tim

