On Fri, Jan 25, 2008 at 03:16:36PM +0800, lxh wrote:
> We have dozens of file servers, each with a 1.5 TB or 2.5 TB XFS
> filesystem on a RAID6 SATA array. Each volume contains about
> 10,000,000 files. The operating system is Debian GNU/Linux, kernel
> 2.6.18-5-amd64 #1 SMP. We got kernel oopses frequently last year.
> Here is the oops:
> Filesystem "cciss/c0d1": XFS internal error xfs_trans_cancel at line 1138
> of file fs/xfs/xfs_trans.c. Caller 0xffffffff881df006
> Call Trace:
> [<ffffffff881fed18>] :xfs:xfs_trans_cancel+0x5b/0xfe
> [<ffffffff88207006>] :xfs:xfs_create+0x58b/0x5dd
> [<ffffffff8820f496>] :xfs:xfs_vn_mknod+0x1bd/0x3c8
Are you running out of space in the filesystem?
The only vectors I've seen that can cause this are I/O errors
or ENOSPC during file create after we've already checked that
this cannot happen. Are there any I/O errors in the log?
A fix that went into 2.6.23 closed the last known cause of the ENOSPC
issue, so upgrading the kernel, or backporting that fix to the 2.6.18
kernel, may solve the problem if it is ENOSPC-related.
> Every time the error occurs, the volume cannot be accessed, so we have
> to unmount the volume, run xfs_repair, and then remount it. This
> problem has a serious impact on our service.
Anyway, next time it happens, can you please run xfs_check on the
filesystem first and post the output? If there is no output, then
the filesystem is fine and you don't need to run repair.
If it is not fine, can you also post the output of xfs_repair?
Once the filesystem has been fixed up, can you then post the
output of this command to tell us the space usage in the filesystems?
# xfs_db -r -c 'sb 0' -c p <dev>
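To see how close the filesystem is to full, the `dblocks` (total data blocks) and `fdblocks` (free data blocks) fields of that superblock dump are the ones to look at. A hedged sketch of extracting them with awk; the numeric values below are made up, not from your filesystem:

```shell
# Compute the percentage of free data blocks from an xfs_db superblock
# dump. Real input comes from:
#   xfs_db -r -c 'sb 0' -c p /dev/cciss/c0d1
# The sample values here are invented for illustration.
sb_dump='dblocks = 366284800
fdblocks = 1831424'
printf '%s\n' "$sb_dump" | awk '
  /^dblocks/  { total = $3 }   # total data blocks in the filesystem
  /^fdblocks/ { free = $3 }    # free data blocks
  END { printf "%.2f%% free\n", 100 * free / total }'
# prints: 0.50% free
```

A figure this low would point straight at the ENOSPC-at-create vector described above.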
SGI Australian Software Group