xfs: xfs_io oops on _raw_spin_lock
Kamal Dasu
kdasu.kdev at gmail.com
Tue Jan 22 19:24:02 CST 2013
I am running a simple test with xfs_io on kernel 3.3, writing in
O_DIRECT mode, and I get the following oops:
# /rt.sh
++ DEV=/dev/sda2
++ RTDEV=/dev/sda3
++ MNT=/mnt/xfsmnt
++ mkdir /mnt/xfsmnt
++ mkfs.xfs -f /dev/sda2 -r extsize=2m,rtdev=/dev/sda3
meta-data=/dev/sda2              isize=256    agcount=4, agsize=983479 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=3933916, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =/dev/sda3              extsz=2097152 blocks=240089416, rtextents=468924
mkfs.xfs: sending ioctl 20001261 to a partition!
mkfs.xfs: sending ioctl 20001261 to a partition!
mkfs.xfs: sending ioctl 20001261 to a partition!
mkfs.xfs: sending ioctl 20001261 to a partition!
++ mount /dev/sda2 -o rtdev=/dev/sda3 /mnt/xfsmnt
UDF-fs: bad mount option "rtdev=/dev/sda3" or missing value
XFS (sda2): Mounting Filesystem
XFS (sda2): Ending clean mount
++ touch /mnt/xfsmnt/foo
++ xfs_io -c 'chattr +r' /mnt/xfsmnt/foo
++ for i in ''\''0'\''' ''\''1g'\''' ''\''2g'\'''
++ xfs_io -d -c 'pwrite 0 1g' /mnt/xfsmnt/foo
CPU 0 Unable to handle kernel paging request at virtual address
00000070, epc == 8050b82c, ra == 8021e71c
Oops[#1]:
Cpu 0
$ 0 : 00000000 00000001 00010000 00000001
$ 4 : 00000070 00001000 00000000 00000004
$ 8 : 00000000 0000123c 00001000 00000000
$12 : 00000001 00000000 bc667f38 22adedc1
$16 : 00000000 00000f01 00000000 c6e48000
$20 : 00001000 00000008 cee65620 ceabfc00
$24 : 00000007 80059f48
$28 : cee64000 cee65508 00000000 8021e71c
Hi : 0000fe48
Lo : 00001000
epc : 8050b82c _raw_spin_lock+0x4/0x30
Not tainted
ra : 8021e71c _xfs_buf_find+0xa4/0x364
Status: 30008703 KERNEL EXL IE
Cause : 00800008
BadVA : 00000070
PrId : 00025a11 (Broadcom BMIPS5000)
Modules linked in:
Process xfs_io (pid: 578, threadinfo=cee64000, task=cfd4a138, tls=77b13460)
Stack : 00000250 00000001 cee6557c cee65578 00001000 00000000 00000000 00028009
00028009 80e37240 cea75e80 00000007 00000008 8021ec4c cee65624 00000000
cf831ab0 8024ff48 00000008 00028009 00000000 ce968b80 00000000 cee6557c
cee65578 00028009 00000007 00008008 cea60df8 00000008 cea75e80 8021f078
00000000 00000000 0000000c 00000000 00000008 00028009 00000000 80e37240
...
Call Trace:
[<8050b82c>] _raw_spin_lock+0x4/0x30
[<8021e71c>] _xfs_buf_find+0xa4/0x364
[<8021ec4c>] xfs_buf_get+0x44/0x1ac
[<8021f078>] xfs_buf_read+0x28/0xf0
[<8028c2bc>] xfs_trans_read_buf+0x204/0x304
[<8028cb40>] xfs_rtbuf_get+0x140/0x160
[<8028cebc>] xfs_rtfind_forw+0x8c/0x554
[<8028e19c>] xfs_rtallocate_range+0xec/0x328
[<8028f030>] xfs_rtallocate_extent_block+0x34c/0x3f4
[<8028f1e0>] xfs_rtallocate_extent_size+0x108/0x3d4
[<802901b4>] xfs_rtallocate_extent+0x190/0x1fc
[<80249f0c>] xfs_bmap_rtalloc+0x1bc/0x3f8
[<8024d430>] xfs_bmapi_allocate+0xec/0x354
[<802508ac>] xfs_bmapi_write+0x264/0x82c
[<802299dc>] xfs_iomap_write_direct+0x248/0x544
[<8021ac8c>] __xfs_get_blocks+0x3d4/0x6f4
[<8021afd0>] xfs_get_blocks_direct+0x24/0x30
[<80104750>] __blockdev_direct_IO+0xf80/0x3fd8
[<8021a2b4>] xfs_vm_direct_IO+0xb8/0x158
[<80089c00>] generic_file_direct_write+0x198/0x2d0
[<80222d90>] xfs_file_dio_aio_write+0x180/0x258
[<80223220>] xfs_file_aio_write+0x23c/0x24c
[<800c582c>] do_sync_write+0xc4/0x13c
[<800c6598>] vfs_write+0xc4/0x16c
[<800c6a58>] sys_pwrite64+0x88/0xc0
[<8000db1c>] stack_done+0x20/0x40
Code: 03e00008 0002102b 3c020001 <c0830000> 00622821 e0850000
10a0fffc 00032c02 3063ffff
---[ end trace 355da1cf684cbaba ]---
/rt.sh: line 17: 578 Segmentation fault      xfs_io -d -c "pwrite $i 1g" ${MNT}/foo
++ for i in ''\''0'\''' ''\''1g'\''' ''\''2g'\'''
++ xfs_io -d -c 'pwrite 1g 1g' /mnt/xfsmnt/foo
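For reference, here is a reconstruction of the rt.sh reproducer, pieced together from the `set -x` trace above. The device names and xfs_io commands are taken from the trace; the loop body and its position at script line 17 are inferred, so treat this as a sketch rather than the exact script. Since mkfs.xfs reformats the devices, the destructive part is kept behind an environment-variable guard:

```shell
#!/bin/sh
# Reconstructed reproducer (hedged): triggers an oops in _xfs_buf_find
# when doing O_DIRECT pwrites to a realtime file on kernel 3.3.
DEV=/dev/sda2
RTDEV=/dev/sda3
MNT=/mnt/xfsmnt

run_test() {
    set -x
    mkdir -p "$MNT"
    # 2 MB realtime extent size, realtime subvolume on $RTDEV
    mkfs.xfs -f "$DEV" -r extsize=2m,rtdev="$RTDEV"
    mount "$DEV" -o rtdev="$RTDEV" "$MNT"
    touch "$MNT/foo"
    # chattr +r marks the (empty) file realtime, so allocations
    # come from the realtime device
    xfs_io -c 'chattr +r' "$MNT/foo"
    for i in 0 1g 2g; do
        # O_DIRECT write into the realtime extent -- this is the
        # pwrite that oopses in the trace above
        xfs_io -d -c "pwrite $i 1g" "$MNT/foo"
    done
}

# Guard: this reformats $DEV and $RTDEV, so require explicit opt-in.
if [ "${RUN_RT_TEST:-0}" = "1" ]; then
    run_test
else
    echo "dry run: set RUN_RT_TEST=1 to reformat $DEV and run the test"
fi
```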
Has anyone else seen this?
Thanks
Kamal