XFS - issues with writes using sync

Amit Sahrawat amit.sahrawat83 at gmail.com
Wed Jan 19 23:04:30 CST 2011


Hi,

I am facing an issue with XFS in a simple test case.
*Target:* ARM
*Kernel version:* 2.6.35.9

*Test case:*
mkfs.xfs -f /dev/sda2
mount -t xfs /dev/sda2 /mnt/usb/sda2
(Run the following script, which tries to fragment the XFS-formatted partition)
#!/bin/sh
index=0
while [ "$?" -eq 0 ]
do
    index=$((index+1))
    sync
    cp /mnt/usb/sda1/setupfile /mnt/usb/sda2/setupfile.$index
done

Size of the partition on which the files are created: 1 GB (I need to
fragment it first in order to run other test cases)
Size of *'setupfile'*: 16 KB
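
The loop above never terminates on its own and writes to fixed mount points.
For repeatable runs, the same write pattern can be sketched as a bounded,
strictly POSIX loop against scratch directories; SRC, DST, and the count of
100 below are assumptions for illustration, not part of the original setup:

```shell
#!/bin/sh
# Bounded sketch of the fragmentation workload. SRC and DST are
# hypothetical stand-ins for /mnt/usb/sda1 and /mnt/usb/sda2.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# 16 KB source file, matching the size of 'setupfile' above.
dd if=/dev/zero of="$SRC/setupfile" bs=1024 count=16 2>/dev/null

index=0
while [ "$index" -lt 100 ]; do
    index=$((index + 1))
    sync                                   # flush dirty data on every pass
    cp "$SRC/setupfile" "$DST/setupfile.$index" || break
done
echo "copies created: $index"
```

The sync-per-copy pattern is what forces many small, interleaved
allocations; dropping the bound reproduces the original unbounded loop.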

There were no such issues up to *2.6.34* (the last XFS version on which we
tried to create this setup). No reset is involved this time; simply running
the script caused the issue.

*Back Trace:*
#> ./createsetup.sh
kernel BUG at fs/buffer.c:396!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = c0004000
[00000000] *pgd=00000000
Internal error: Oops: 817 [#1] PREEMPT
last sysfs file:
/sys/devices/platform/ehci-sdp.1/usb2/2-1/2-1.3/2-1.3:1.0/host0/target0:0:0/0:0:0:0/model
Modules linked in:
CPU: 0    Not tainted  (2.6.35.9 #4)
PC is at __bug+0x24/0x30
LR is at walk_stackframe+0x24/0x40
pc : [<c04483e8>]    lr : [<c04481b8>]    psr: 60000013
sp : c35cfee0  ip : c35cfdd0  fp : c35cfeec
r10: c05c7b64  r9 : c35896a8  r8 : c35c07f0
r7 : c7856688  r6 : c78585e0  r5 : c402a960  r4 : c78566c8
r3 : 00000000  r2 : c35cfe30  r1 : c35cfe00  r0 : 00000025
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
Control: 10c53c7d  Table: 83c28059  DAC: 00000017
Process xfsdatad/0 (pid: 340, stack limit = 0xc35ce2e8)
Stack: (0xc35cfee0 to 0xc35d0000)
fee0: c35cff2c c35cfef0 c05126e4 c04483d0 00000004 c78584c0 c78585e0 c05c78b4
ff00: c35cff2c c35cff10 c05a51cc c05c0220 c78584c0 c35c07c8 c78584c0 c78585e0
ff20: c35cff4c c35cff30 c05c7a50 c05125fc c35c07c8 c35c07f0 c35c07f4 c3d264a0
ff40: c35cff74 c35cff50 c05c7c4c c05c7a28 c35cff74 c35cff60 c35ce000 c35896a0
ff60: c35c07f4 c3d264a0 c35cffc4 c35cff78 c0482874 c05c7b70 c35cff9c c35cff88
ff80: c06f7158 00000000 c3d264a0 c0486820 c35cff90 c35cff90 c06f2bb4 c3c1fec0
ffa0: c35cffcc c048268c c35896a0 00000000 00000000 00000000 c35cfff4 c35cffc8
ffc0: c0486380 c0482698 00000000 00000000 c35cffd0 c35cffd0 c3c1fec0 c04862fc
ffe0: c046eeb8 00000013 00000000 c35cfff8 c046eeb8 c0486308 00000000 00000000
Backtrace:
[<c04483c4>] (__bug+0x0/0x30) from [<c05126e4>]
(end_buffer_async_write+0xf4/0x1c8)
[<c05125f0>] (end_buffer_async_write+0x0/0x1c8) from [<c05c7a50>]
(xfs_destroy_ioend+0x34/0x84)
 r6:c78585e0 r5:c78584c0 r4:c35c07c8
[<c05c7a1c>] (xfs_destroy_ioend+0x0/0x84) from [<c05c7c4c>]
(xfs_end_io+0xe8/0xf0)
 r7:c3d264a0 r6:c35c07f4 r5:c35c07f0 r4:c35c07c8
[<c05c7b64>] (xfs_end_io+0x0/0xf0) from [<c0482874>]
(worker_thread+0x1e8/0x294)
 r7:c3d264a0 r6:c35c07f4 r5:c35896a0 r4:c35ce000
[<c048268c>] (worker_thread+0x0/0x294) from [<c0486380>] (kthread+0x84/0x8c)
[<c04862fc>] (kthread+0x0/0x8c) from [<c046eeb8>] (do_exit+0x0/0x6cc)
 r7:00000013 r6:c046eeb8 r5:c04862fc r4:c3c1fec0
Code: e59f0010 e1a01003 eb0aa878 e3a03000 (e5833000)
---[ end trace 016e72fe751b35ae ]---
^C^Z[1] + Stopped                    ./createsetup.sh

After this, I tried to unmount the XFS partition:

#umount /mnt/usb/sda2 (This command hangs and never returns)

Then I reset the target to check the state of the XFS partition on the
next mount.

*Back trace:*
#> mount /dev/sda2 /mnt/
XFS mounting filesystem sda2
Starting XFS recovery on filesystem: sda2 (logdev: internal)
Filesystem "sda2": XFS internal error xlog_valid_rec_header(1) at line 3431
of file fs/xfs/xfs_log_recover.c.  Caller 0xc05b95d8

Backtrace:
[<c04486ac>] (dump_backtrace+0x0/0x110) from [<c06f24e0>]
(dump_stack+0x18/0x1c)
 r6:c324e000 r5:00000000 r4:000012bb r3:c3629be0
[<c06f24c8>] (dump_stack+0x0/0x1c) from [<c05a07e8>]
(xfs_error_report+0x4c/0x5c)
[<c05a079c>] (xfs_error_report+0x0/0x5c) from [<c05b5870>]
(xlog_valid_rec_header+0xe4/0x10c)
[<c05b578c>] (xlog_valid_rec_header+0x0/0x10c) from [<c05b95d8>]
(xlog_do_recovery_pass+0x80/0x650)
 r7:00000000 r6:c324e000 r5:c36d2440 r4:c3044220
[<c05b9558>] (xlog_do_recovery_pass+0x0/0x650) from [<c05b9bf4>]
(xlog_do_log_recovery+0x4c/0x90)
[<c05b9ba8>] (xlog_do_log_recovery+0x0/0x90) from [<c05b9c58>]
(xlog_do_recover+0x20/0x120)
 r9:00000000 r8:0001e91e r7:00000000 r6:000012bb r5:00000000 r4:c3044220
[<c05b9c38>] (xlog_do_recover+0x0/0x120) from [<c05b9de0>]
(xlog_recover+0x88/0xa8)
 r9:00000000 r8:0001e91e r7:00000000 r6:000012bb r5:00000000 r4:c3044220
[<c05b9d58>] (xlog_recover+0x0/0xa8) from [<c05b2888>]
(xfs_log_mount+0xec/0x17c)
 r7:00000000 r6:00000000 r4:c300fc00
[<c05b279c>] (xfs_log_mount+0x0/0x17c) from [<c05bc6a4>]
(xfs_mountfs+0x310/0x674)
 r9:00000000 r8:0001e91e r7:000004b0 r6:00002580 r5:c05d4f84 r4:c300fc00
[<c05bc394>] (xfs_mountfs+0x0/0x674) from [<c05d4f84>]
(xfs_fs_fill_super+0x1f8/0x36c)
 r9:00000040 r8:00000400 r7:c05d4d8c r6:00000000 r5:c36fd600 r4:c300fc00
[<c05d4d8c>] (xfs_fs_fill_super+0x0/0x36c) from [<c04ee600>]
(get_sb_bdev+0x114/0x170)
[<c04ee4ec>] (get_sb_bdev+0x0/0x170) from [<c05d2f44>]
(xfs_fs_get_sb+0x24/0x30)
[<c05d2f20>] (xfs_fs_get_sb+0x0/0x30) from [<c04ed138>]
(vfs_kern_mount+0x64/0x114)
[<c04ed0d4>] (vfs_kern_mount+0x0/0x114) from [<c04ed244>]
(do_kern_mount+0x3c/0xe0)
 r8:00008000 r7:c31db500 r6:c32eb000 r5:c365bf20 r4:c07f6a6c
[<c04ed208>] (do_kern_mount+0x0/0xe0) from [<c0506594>]
(do_mount+0x700/0x77c)
 r8:00008000 r7:00000000 r6:00000000 r5:c31db500 r4:00000020 r3:c32eb000
[<c0505e94>] (do_mount+0x0/0x77c) from [<c050669c>] (sys_mount+0x8c/0xcc)
[<c0506610>] (sys_mount+0x0/0xcc) from [<c04449a0>]
(ret_fast_syscall+0x0/0x30)
 r7:00000015 r6:001854e0 r5:bee32780 r4:00186028
XFS: log mount/recovery failed: error 117
XFS: log mount failed
mount: mounting /dev/sda2 on /mnt/ failed: Structure needs cleaning


I then tried to use xfs_repair on the device:
#> xfs_repair /dev/sda2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

#> xfs_repair -L /dev/sda2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
bad hash ordering in block 8388617 of directory inode 128
imap claims a free inode 200248 is in use, correcting imap and clearing
inode
cleared inode 200248
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
entry "setupfile.3126" at block 22 offset 2576 in directory inode 128
references free inode 200248
        clearing inode number in entry at offset 2576...
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
rebuilding directory inode 128
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

Please let me know if there is anything I have missed. Also, is 2.6.35.9
good enough for a product?

Thanks & Regards,
Amit Sahrawat