
Re: msg00264; Cant create new files

To: Shinya Sakamoto <sakamoto@xxxxxxxxx>
Subject: Re: msg00264; Cant create new files
From: Dave Chinner <dgc@xxxxxxx>
Date: Fri, 27 May 2005 17:00:48 +1000
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <003c01c561b6$a43a3f30$170a0e0a@FD3S>; from sakamoto@kel.co.jp on Thu, May 26, 2005 at 02:49:11PM +0900
References: <06d001c5619d$35b6f670$170a0e0a@FD3S> <20050526130746.T19332@melbourne.sgi.com> <003c01c561b6$a43a3f30$170a0e0a@FD3S>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.2.5.1i
On Thu, May 26, 2005 at 02:49:11PM +0900, Shinya Sakamoto wrote:
> Hello Dave,
> 
> Thanks for your response.
> `df -i` and `df -k` output is listed below. We have four 2 TB filesystems 
> and one 0.9 TB. The problem was only on /dev/pool/lvol2. The number of 
> inodes seemed to be fine; the number of files/directories was almost 5000.
> 
> # df -k
> Filesystem           1k-blocks      Used Available Use% Mounted on
> /dev/pool/lvol1      2147287040 2146338596    948444 100% /shares/nas3_0
> /dev/pool/lvol2      2147287040 1214659232 932627808  57% /shares/nas3_1
> /dev/pool/lvol3      2147287040 1577714296 569572744  74% /shares/nas3_2
> /dev/pool/lvol4      2147287040 1651783800 495503240  77% /shares/nas3_3
> /dev/pool/lvol5      933511888 787869104 145642784  85% /shares/nas3_4
> 
> # df -i
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/pool/lvol1      3820400   17414 3802986    1% /shares/nas3_0
> /dev/pool/lvol2      4294967295    5824 4294961471    1% /shares/nas3_1
> /dev/pool/lvol3      4294967295   31879 4294935416    1% /shares/nas3_2

The number of inodes looks wrong - 4294967295 = 2^32 - 1, which is
what -1 looks like when printed as an unsigned 32-bit value.

If these filesystems were all built with the same mkfs command,
I'd expect them all to report the same number here. What does
an strace of the df -i command show (the statfs calls in particular)?
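For reference, here's a quick sketch showing that the suspicious count is
exactly a signed -1 reinterpreted as unsigned, and how you might trace
df's statfs() calls (strace may need to be installed):

```shell
# -1 stored in a 32-bit field, read back as unsigned, prints as 2^32 - 1:
printf '%u\n' "$(( 0xFFFFFFFF ))"    # prints 4294967295

# To see what the kernel itself reports in f_files, trace the
# statfs() syscalls df -i makes:
#   strace -e trace=statfs df -i
```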

> /dev/pool/lvol4      1982069792   16476 1982053316    1% /shares/nas3_3
> /dev/pool/lvol5      582608256   14384 582593872    1% /shares/nas3_4
> 
> As you may guess, we have already given up on fixing it. We backed up the 
> data, removed only lvol2, recreated it, and restored the data. Now lvol2 
> works fine; files can be created even though the number of files is 
> greater than it used to be. So I would like to know what the cause was, 
> and whether there was another solution.

IIRC, an extremely fragmented filesystem can cause this sort of
behaviour. Have you tried running xfs_bmap on some of the files
to determine if they are fragmented at all? Do you run xfs_fsr
at all on these filesystems?
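As a sketch, the checks might look like this (paths are taken from the
df output above; "somefile" is a placeholder for one of the affected
files, and the tool-availability guards are just so the script runs
cleanly where the XFS utilities aren't installed):

```shell
#!/bin/sh
# Sketch: fragmentation checks on an XFS filesystem.
if command -v xfs_bmap >/dev/null 2>&1; then
    # -v prints one line per extent; many short extents
    # indicate heavy fragmentation.
    xfs_bmap -v /shares/nas3_1/somefile
fi
if command -v xfs_db >/dev/null 2>&1; then
    # Read-only, whole-filesystem fragmentation summary.
    xfs_db -r -c frag /dev/pool/lvol2
fi
# xfs_fsr reorganises fragmented files in place, e.g.:
#   xfs_fsr -v /shares/nas3_1
```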

Cheers,

Dave.
-- 
Dave Chinner
R&D Software Engineer
SGI Australian Software Group

