Input/Output error
Srinivasan T
srinivasan at storegrid.com
Wed Feb 23 00:33:12 CST 2011
Hi Eric,
There is enough disk space in the fs.
[root at domU-12-31-39-07-81-36 StoreGrid]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.9G  2.9G  6.6G  31% /
/dev/sdb              147G  188M  140G   1% /mnt
none                  854M     0  854M   0% /dev/shm
/dev/sdh              5.0G   25M  5.0G   1% /mymountpoint
Also, the output of modinfo is as follows:
[root at domU-12-31-39-07-81-36 StoreGrid]# modinfo xfs
filename: /lib/modules/2.6.21.7-2.fc8xen/kernel/fs/xfs/xfs.ko
license: GPL
description: SGI XFS with ACLs, security attributes, large block
numbers, no debug enabled
author: Silicon Graphics, Inc.
srcversion: C7114C18263E3067C64F2BC
depends:
vermagic: 2.6.21-2952.fc8xen SMP mod_unload 686 4KSTACKS
I installed the XFS user-space tools with yum, using the following command:
yum install xfsprogs
I think the xfs.ko in use is the one from the kernel rpm. Can you confirm
whether this is correct?
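For what it's worth, a quick way to double-check where the loaded xfs.ko comes
from would be something like the following (just an illustrative sketch; the
output depends on how the module was installed):

  # Print the path of the xfs module that modprobe would load
  modinfo -n xfs
  # Ask rpm which installed package owns that file; a kernel package means
  # the in-kernel module, a kmod-xfs/xfs-kmod package means a separate build
  rpm -qf "$(modinfo -n xfs)"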
Regards,
Srinivasan
On 02/23/2011 01:51 AM, Eric Sandeen wrote:
> On 2/22/11 9:44 AM, Srinivasan T wrote:
>> Hi,
>>
>> We are running a C++ application on an AWS EC2 instance (CentOS 5.4)
>> with an EBS volume mounted (say at /mymountpoint). Our application
>> performs many simultaneous writes to the EBS volume, but at some point
>> we get 'ERROR: Input/output error'. After this, even the 'ls -l
>> /mymountpoint' command fails with the I/O error. The filesystem we use
>> on the EBS volume is xfs.
>>
>> I unmounted the drive, ran xfs_check, and then mounted the drive again.
> xfs_check is read-only, FWIW, a task best handled these days
> by xfs_repair -n.
>
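(For reference, a read-only check along those lines might look roughly like
this; /dev/sdh and /mymountpoint are taken from the details quoted below,
and exact options may differ on older xfsprogs versions:)

  # xfs_repair must run on an unmounted filesystem
  umount /mymountpoint
  # -n = no-modify mode: report problems without changing anything
  xfs_repair -n /dev/sdh
  mount /dev/sdh /mymountpoint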
>> Now, everything seems to be working fine, but the issue still
>> persists every time we do simultaneous writes.
>>
>> I believe the following details will be useful,
>>
>> [root at domU-12-31-39-07-81-36 StoreGrid]# cat /etc/redhat-release
>> CentOS release 5.4 (Final)
> Is quota in use? Might you be running out of space on the fs?
>
> Sadly I'm not even sure which xfs you might be running; there were CentOS kmod
> rpms of an older xfs for a while, and more recent kernels have xfs.ko
> built in, because RHEL5 gained official support for XFS later on.
>
> Line 1138 of fs/xfs/xfs_trans.c does not line up with the current RHEL5 source.
>
> "modinfo xfs" should at least tell us where the xfs module was found,
> and from there we can work out where it came from.
>
> But, xfs support on RHEL is best handled by Red Hat, I'm afraid. And if this
> is an old xfs kmod (which *cough* I did for CentOS years ago) it's not going
> to be well supported by anyone.
>
> -if- you have xfs-kmod installed, and a kernel rpm which contains xfs.ko
> itself, I'd suggest trying again with the latter.
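(As a rough illustration of how to check what is actually installed; the
package names below are only guesses and may differ on this system:)

  # List any xfs-related packages that are installed (xfsprogs, kmod-xfs, ...)
  rpm -qa | grep -i xfs
  # See which xfs.ko files exist for the running kernel
  find /lib/modules/$(uname -r) -name 'xfs.ko*'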
>
> Barring all that, perhaps this is a known problem more obvious to another
> person on this list ...
>
> -Eric
>
>> [root at domU-12-31-39-07-81-36 StoreGrid]# df -lTi
>> Filesystem    Type      Inodes    IUsed     IFree IUse% Mounted on
>> /dev/sda1     ext3     1310720   107566   1203154    9% /
>> /dev/sdb      ext3    19546112       11  19546101    1% /mnt
>> none          tmpfs     186059        1    186058    1% /dev/shm
>> /dev/sdh      xfs      1934272   495857   1438415   26% /mymountpoint
>>
>> [root at domU-12-31-39-07-81-36 StoreGrid]# uname -a
>> Linux domU-12-31-39-07-81-36 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 i686 i386 GNU/Linux
>>
>> Output of dmesg :
>>
>> SGI XFS with ACLs, security attributes, large block numbers, no debug enabled
>> SGI XFS Quota Management subsystem
>> Filesystem "sdh": Disabling barriers, not supported by the underlying device
>> XFS mounting filesystem sdh
>> Ending clean XFS mount for filesystem: sdh
>> Filesystem "sdh": XFS internal error xfs_trans_cancel at line 1138 of file fs/xfs/xfs_trans.c. Caller 0xee201944
>> [<ee2032fe>] xfs_trans_cancel+0x59/0xe3 [xfs]
>> [<ee201944>] xfs_rename+0x8f8/0x954 [xfs]
>> [<ee201944>] xfs_rename+0x8f8/0x954 [xfs]
>> [<ee21458c>] xfs_vn_rename+0x30/0x70 [xfs]
>> [<c10bb5e3>] selinux_inode_rename+0x11f/0x16d
>> [<c1078d88>] vfs_rename+0x2c3/0x441
>> [<c107a77f>] sys_renameat+0x15a/0x1b4
>> [<c1074b7f>] sys_stat64+0xf/0x23
>> [<c1072d3b>] __fput+0x140/0x16a
>> [<c10841ee>] mntput_no_expire+0x11/0x6a
>> [<c107a800>] sys_rename+0x27/0x2b
>> [<c1005688>] syscall_call+0x7/0xb
>> =======================
>> xfs_force_shutdown(sdh,0x8) called from line 1139 of file fs/xfs/xfs_trans.c. Return address = 0xee217778
>> Filesystem "sdh": Corruption of in-memory data detected. Shutting down filesystem: sdh
>> Please umount the filesystem, and rectify the problem(s)
>> I/O error in filesystem ("sdh") meta-data dev sdh block 0x3c0001 ("xfs_trans_read_buf") error 5 buf count 512
>> I/O error in filesystem ("sdh") meta-data dev sdh block 0x780001 ("xfs_trans_read_buf") error 5 buf count 512
>> xfs_force_shutdown(sdh,0x1) called from line 423 of file fs/xfs/xfs_rw.c. Return address = 0xee217778
>> xfs_force_shutdown(sdh,0x1) called from line 423 of file fs/xfs/xfs_rw.c. Return address = 0xee217778
>> Filesystem "sdh": Disabling barriers, not supported by the underlying device
>> XFS mounting filesystem sdh
>> Starting XFS recovery on filesystem: sdh (logdev: internal)
>> Ending XFS recovery on filesystem: sdh (logdev: internal)
>>
>> The XFS utilities are at version 2.9.4.
>>
>> Any help would be appreciated.
>>
>> Regards,
>> Srinivasan
>>
>>
>>
>> _______________________________________________
>> xfs mailing list
>> xfs at oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs