<div>Hi,</div>
<div>I tried the same test case - just removed the 'sync' command from the script (see the quoted script below) to check the behaviour.</div>
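<div>For clarity, this is a sketch of the modified loop: the script from the quoted mail with the per-iteration 'sync' removed. The SRC/DSTDIR variables and the MAX_FILES cap are additions of this sketch (not in the original) so it can be pointed at other paths and terminates instead of running until the partition is full.</div>

```shell
#!/bin/sh
# Sketch of the modified fragmentation script: the original loop minus
# the per-iteration 'sync'. SRC, DSTDIR and MAX_FILES are hypothetical
# additions for this sketch; the reported run used the /mnt/usb paths
# and looped until cp failed.
SRC=${SRC:-/mnt/usb/sda1/setupfile}
DSTDIR=${DSTDIR:-/mnt/usb/sda2}
MAX_FILES=${MAX_FILES:-100000}

index=0
while [ "$index" -lt "$MAX_FILES" ]
do
    index=$((index+1))
    # stop as soon as a copy fails (e.g. the partition is full)
    cp "$SRC" "$DSTDIR/setupfile.$index" || break
done
echo "stopped after $index copies"
```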
<div> </div>
<div>Please find the logs:</div>
<div>#> ./createsetup.sh <br>------------[ cut here ]------------<br>WARNING: at lib/list_debug.c:30 __list_add+0x6c/0x90()<br>list_add corruption. prev->next should be next (c78699c0), but was c78299c0. (prev=c78699c0).<br>
Modules linked in:<br>Backtrace: <br>[<c04486ac>] (dump_backtrace+0x0/0x110) from [<c06ee0e0>] (dump_stack+0x18/0x1c)<br> r6:c07bc5e1 r5:0000001e r4:c7c01d38 r3:00000000<br>[<c06ee0c8>] (dump_stack+0x0/0x1c) from [<c046b5fc>] (warn_slowpath_common+0x54/0x6c)<br>
[<c046b5a8>] (warn_slowpath_common+0x0/0x6c) from [<c046b6b8>] (warn_slowpath_fmt+0x38/0x40)<br> r8:00000000 r7:c05cabd8 r6:c7c01d80 r5:c78699c0 r4:c78699c0<br>r3:00000009<br>[<c046b680>] (warn_slowpath_fmt+0x0/0x40) from [<c060b89c>] (__list_add+0x6c/0x90)<br>
r3:c78699c0 r2:c07bc6b3<br>[<c060b830>] (__list_add+0x0/0x90) from [<c06f0710>] (__down_write_nested+0xbc/0x10c)<br> r6:c78699bc r5:60000013 r4:c302d820<br>[<c06f0654>] (__down_write_nested+0x0/0x10c) from [<c06f0774>] (__down_write+0x14/0x18)<br>
r6:c31324a0 r5:00000005 r4:c78699bc<br>[<c06f0760>] (__down_write+0x0/0x18) from [<c06efd50>] (down_write+0x28/0x30)<br>[<c06efd28>] (down_write+0x0/0x30) from [<c05a21ac>] (xfs_ilock+0x28/0xe8)<br>
r4:c7869940 r3:00000000<br>[<c05a2184>] (xfs_ilock+0x0/0xe8) from [<c05cabd8>] (xfs_file_aio_write+0x1d4/0x8cc)<br> r7:00000001 r6:c31324a0 r5:00000001 r4:c7869940<br>[<c05caa04>] (xfs_file_aio_write+0x0/0x8cc) from [<c04eb404>] (do_sync_write+0xa0/0xe0)<br>
[<c04eb364>] (do_sync_write+0x0/0xe0) from [<c04ec028>] (vfs_write+0xbc/0x178)<br> r6:bee925a0 r5:c31324a0 r4:00000f38<br>[<c04ebf6c>] (vfs_write+0x0/0x178) from [<c04ec1ac>] (sys_write+0x44/0x70)<br>
r7:00000004 r6:00000f38 r5:bee925a0 r4:c31324a0<br>[<c04ec168>] (sys_write+0x0/0x70) from [<c04449a0>] (ret_fast_syscall+0x0/0x30)<br> r9:c7c00000 r8:c0444b48 r6:bee925a0 r5:00000f38 r4:001854e0<br>---[ end trace 8124d49a241e0763 ]---<br>
INFO: task cp:5445 blocked for more than 120 seconds.<br>"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.<br>cp D c06ee75c 0 5445 2173 0x00000000<br>Backtrace: <br>[<c06ee398>] (schedule+0x0/0x454) from [<c06f0748>] (__down_write_nested+0xf4/0x10c)<br>
r9:c7869a60 r8:00000000 r7:c05cabd8 r6:c78699bc r5:60000013<br>r4:c302d820<br>[<c06f0654>] (__down_write_nested+0x0/0x10c) from [<c06f0774>] (__down_write+0x14/0x18)<br> r6:c31324a0 r5:00000005 r4:c78699bc<br>
[<c06f0760>] (__down_write+0x0/0x18) from [<c06efd50>] (down_write+0x28/0x30)<br>[<c06efd28>] (down_write+0x0/0x30) from [<c05a21ac>] (xfs_ilock+0x28/0xe8)<br> r4:c7869940 r3:00000000<br>[<c05a2184>] (xfs_ilock+0x0/0xe8) from [<c05cabd8>] (xfs_file_aio_write+0x1d4/0x8cc)<br>
r7:00000001 r6:c31324a0 r5:00000001 r4:c7869940<br>[<c05caa04>] (xfs_file_aio_write+0x0/0x8cc) from [<c04eb404>] (do_sync_write+0xa0/0xe0)<br>[<c04eb364>] (do_sync_write+0x0/0xe0) from [<c04ec028>] (vfs_write+0xbc/0x178)<br>
r6:bee925a0 r5:c31324a0 r4:00000f38<br>[<c04ebf6c>] (vfs_write+0x0/0x178) from [<c04ec1ac>] (sys_write+0x44/0x70)<br> r7:00000004 r6:00000f38 r5:bee925a0 r4:c31324a0<br>[<c04ec168>] (sys_write+0x0/0x70) from [<c04449a0>] (ret_fast_syscall+0x0/0x30)<br>
r9:c7c00000 r8:c0444b48 r6:bee925a0 r5:00000f38 r4:001854e0 <br>^C^Z[1] + Stopped ./createsetup.sh<br></div>
<div>I really doubt the stability of 2.6.35.9; it is not passing our basic tests. I am checking a few more things before going through the patches introduced between 2.6.34 and 2.6.35.9 (around 102).</div>
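<div>For the bisect Dave suggested, a minimal session over the kernel tree would look roughly like this (the git commands are standard; building and booting each candidate kernel on the ARM target and running the reproducer is implied at every step):</div>

```shell
# Sketch of a git bisect run to narrow the ~102 candidate patches
# between the known-good and known-bad versions.
git bisect start
git bisect bad  v2.6.35.9      # first known bad version
git bisect good v2.6.34        # last known good version
# git now checks out a midpoint commit; build, boot, run the
# reproducer script, then report the result with one of:
git bisect good                # test passed on this commit
# git bisect bad               # test hit the list corruption
# Repeat until git prints "<sha1> is the first bad commit",
# then return to the original branch:
git bisect reset
```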
<div> </div>
<div>Thanks,</div>
<div>Amit Sahrawat<br></div>
<div class="gmail_quote">On Thu, Jan 20, 2011 at 11:37 AM, Amit Sahrawat <span dir="ltr"><<a href="mailto:amit.sahrawat83@gmail.com">amit.sahrawat83@gmail.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">
<div>Hi,</div>
<div> </div>
<div>I will try to find out the cause for this.</div>
<div>Meanwhile, just a small request/suggestion - in the past, test cases of this type have helped us find many problems in XFS. </div>
<div>Could something like this be added to xfstests? It might help. </div>
<div><br>Thanks,</div>
<div>Amit Sahrawat<font color="#888888"><br></font></div>
<div>
<div></div>
<div class="h5">
<div class="gmail_quote">On Thu, Jan 20, 2011 at 10:47 AM, Dave Chinner <span dir="ltr"><<a href="mailto:david@fromorbit.com" target="_blank">david@fromorbit.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">
<div>On Thu, Jan 20, 2011 at 10:34:30AM +0530, Amit Sahrawat wrote:<br>> Hi,<br>><br>> I am facing issues in XFS for a simple test case.<br>> *Target:* ARM<br>> *Kernel version:* 2.6.35.9<br>><br>> *Test case:*<br>
> mkfs.xfs -f /dev/sda2<br>> mount -t xfs /dev/sda2 /mnt/usb/sda2<br>> (Run script - trying to fragment the XFS formatted partition)<br>> #!/bin/sh<br>> index=0<br>> while [ "$?" == 0 ]<br>> do<br>
> index=$((index+1))<br>> sync<br>> cp /mnt/usb/sda1/setupfile /mnt/usb/sda2/setupfile.$index<br>> done<br>><br>> Partition Size on which files are being created - 1GB(I need to fragment<br>> this first to run other cases)<br>
> Size of *'setupfile'* - 16K<br>><br>> There used to be no such issues up to *2.6.34* (the last XFS version where we<br>> created this setup). There is no reset involved this time; simply running<br>> the script caused this issue.<br>
<br></div>You have a known good version, a known bad version and a<br>reproducible test case, i.e. everything you need to run a git bisect<br>and find the commit that introduced the regression. Can you do this and<br>tell us what that commit is?<br>
<br>Cheers,<br><br>Dave.<br><font color="#888888">--<br>Dave Chinner<br><a href="mailto:david@fromorbit.com" target="_blank">david@fromorbit.com</a><br></font></blockquote></div><br></div></div></blockquote></div><br>