On Thu, May 26, 2005 at 02:49:11PM +0900, Shinya Sakamoto wrote:
> Hello Dave,
>
> Thanks for your response.
> The output of `df -i` and `df -k` is listed below. We have four 2TB
> filesystems and one 0.9TB filesystem. The problem was only on
> /dev/pool/lvol2. The number of inodes seemed to be fine; the number of
> files/directories was almost 5000.
>
> # df -k
> Filesystem        1k-blocks       Used  Available Use% Mounted on
> /dev/pool/lvol1  2147287040 2146338596     948444 100% /shares/nas3_0
> /dev/pool/lvol2  2147287040 1214659232  932627808  57% /shares/nas3_1
> /dev/pool/lvol3  2147287040 1577714296  569572744  74% /shares/nas3_2
> /dev/pool/lvol4  2147287040 1651783800  495503240  77% /shares/nas3_3
> /dev/pool/lvol5   933511888  787869104  145642784  85% /shares/nas3_4
>
> # df -i
> Filesystem           Inodes IUsed      IFree IUse% Mounted on
> /dev/pool/lvol1     3820400 17414    3802986    1% /shares/nas3_0
> /dev/pool/lvol2  4294967295  5824 4294961471    1% /shares/nas3_1
> /dev/pool/lvol3  4294967295 31879 4294935416    1% /shares/nas3_2
The number of inodes looks wrong - 4294967295 = 2^32 - 1, which is what
-1 looks like when stuffed into an unsigned 32-bit field. If these
filesystems were all built with the same mkfs command, I'd expect them
all to report the same number here. What does an strace of the df -i
command show (the statfs calls in particular)?
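For example, something like:

  # strace -e trace=statfs df -i /shares/nas3_1

(the mount point is just one of the affected filesystems; substitute any
of them). df takes its inode totals from the f_files field of the statfs
result, so if f_files already comes back from the kernel as 2^32 - 1,
the problem is below df, not in df itself.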
> /dev/pool/lvol4  1982069792 16476 1982053316    1% /shares/nas3_3
> /dev/pool/lvol5   582608256 14384  582593872    1% /shares/nas3_4
>
> As you may guess, we have already given up on fixing it in place. We
> backed up the data, destroyed and recreated only lvol2, and then
> restored the data. Now lvol2 works fine; we can create files even
> though the number of files is greater than it used to be. So I would
> like to know what the cause was, and whether there was another
> solution.
IIRC, an extremely fragmented filesystem can cause this sort of
behaviour. Have you tried running xfs_bmap on some of the files
to determine if they are fragmented at all? Do you run xfs_fsr
at all on these filesystems?
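For example (the file path below is hypothetical; pick a few of the
larger files on the affected filesystem):

  # xfs_bmap -v /shares/nas3_1/some/large/file  # one line per extent
  # xfs_db -r -c frag /dev/pool/lvol2           # whole-fs fragmentation factor
  # xfs_fsr -v /shares/nas3_1                   # defragment files in place

Note that xfs_db run against a mounted filesystem can report slightly
stale numbers, so treat the fragmentation factor as indicative rather
than exact.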
Cheers,
Dave.
--
Dave Chinner
R&D Software Engineer
SGI Australian Software Group