Correct usage of inode64/running out of inodes
Adam Donald
Adam.Donald at gencopharma.com
Mon Jun 29 15:35:41 CDT 2009
From: Eric Sandeen <sandeen at sandeen.net>
To: Adam Donald <Adam.Donald at gencopharma.com>
Cc: xfs at oss.sgi.com
Date: 06/29/2009 02:39 PM
Subject: Re: Correct usage of inode64/running out of inodes
Adam Donald wrote:
>
> Hello
>
> In short, I believe that I have used the inode64 option correctly in
> mounting my XFS device on my CentOS 5.2 system; however, I seem to only
> have 59 free inodes available and 7.5TB of free space. I would
> appreciate any insight as to the best approach to fix this situation.
> In case it is helpful, I have included output from various
> commands/files below; the XFS device in question is
> /dev/mapper/VolGroup01-DATA01. Thank you in advance for your assistance!
It all looks sane to me; what are the actual symptoms of the problem?
You can create 59 files and then get -ENOSPC? Any kernel messages?
Maybe this is a bug in the old XFS code in the CentOS module... though I
don't remember such a bug right now.
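
One way to confirm the symptom would be a test along these lines (a
sketch only; /DATA01/scratch and the count of 100 are illustrative
placeholders, not paths from the original report):

    # try to create more files than the reported free inode count;
    # the loop should stop with ENOSPC once inode allocation fails
    mkdir -p /DATA01/scratch
    for i in $(seq 1 100); do
        touch /DATA01/scratch/test_$i || break
    done
    dmesg | tail        # look for any related kernel messages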
-Eric
> uname:
> Linux NAS01 2.6.18-92.1.6.el5 #1 SMP Wed Jun 25 13:45:47 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> df -h:
> Filesystem                      Size  Used Avail Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00  71G   12G   55G  18% /
> /dev/sda1                        99M   25M   70M  26% /boot
> tmpfs                           3.9G     0  3.9G   0% /dev/shm
> /dev/mapper/VolGroup01-DATA01    18T  9.9T  7.5T  57% /DATA01
>
> df -ih:
> Filesystem                      Inodes IUsed IFree IUse% Mounted on
> /dev/mapper/VolGroup00-LogVol00    19M  123K   19M    1% /
> /dev/sda1                          26K    44   26K    1% /boot
> tmpfs                             999K     1  999K    1% /dev/shm
> /dev/mapper/VolGroup01-DATA01      18G  297K   18G    1% /DATA01
>
> mount:
> ...
> /dev/mapper/VolGroup01-DATA01 on /DATA01 type xfs (rw,inode64)
> ...
>
> fstab:
> ...
> /dev/VolGroup01/DATA01  /DATA01  xfs  rw,suid,dev,exec,auto,nouser,async,inode64  1 0
> ...
Thank you for your response. To be honest, I have only run out of "space"
(inodes) on this volume once, a month or so ago, and I recall receiving an
ENOSPC-type error at that time. After receiving the out-of-space errors I
found the xfs_db command and have since been monitoring the ifree value,
deleting files whenever I felt it was dipping too low, since I was unable
to apply the inode64 option without first taking down various production
systems. When the time came this past weekend to apply the inode64 option,
I expected the ifree value to shoot up dramatically (by several hundred,
perhaps); instead, it remained unchanged, the same as when the volume is
mounted without the inode64 option.
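
For reference, the superblock inode counters can be read with a query along
these lines (a sketch; the exact invocation I run may differ slightly):

    # read icount/ifree from superblock 0 without modifying the filesystem
    xfs_db -r -c "sb 0" -c "print icount" -c "print ifree" /dev/mapper/VolGroup01-DATA01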
Given that I have this volume mounted with the inode64 option, have roughly
7.5TB free, and show ifree as a double-digit number (currently 30 on our
system), is there an inconsistency between the total amount of free space
available and the number of free inodes available?
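
For anyone watching the same counter, a minimal periodic check might look
something like this (a sketch; the threshold, device path, and alert
mechanism are placeholders, not our exact script):

    #!/bin/sh
    # warn when the free inode count on an XFS device drops below a threshold
    DEV=/dev/mapper/VolGroup01-DATA01    # device to check
    THRESHOLD=100                        # alert level (placeholder)
    IFREE=$(xfs_db -r -c "sb 0" -c "print ifree" "$DEV" | awk '/ifree/ {print $3}')
    if [ "$IFREE" -lt "$THRESHOLD" ]; then
        echo "WARNING: only $IFREE free inodes left on $DEV"
    fi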
Thanks again for the input, I appreciate it!
AD