
Fwd: Re: Re: XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590

To: xfs@xxxxxxxxxxx
Subject: Fwd: Re: Re: XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590
From: Dmitriy Yu Leonov <DLeonov@xxxxxxxxxx>
Date: Thu, 30 Jan 2014 08:08:54 +0400
Cc: david@xxxxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx

Good morning, dear developers.

Is there any conclusion on the information I submitted? Can you help me find a solution to my problem?

PS: I have also filed the problem in the XFS Bugzilla: http://oss.sgi.com/bugzilla/show_bug.cgi?id=1045


  Sincerely, Dmitry.

----- Forwarded by Dmitriy Yu Leonov/BeeLine on 30.01.2014 08:07 -----

From: Dmitriy Yu Leonov/BeeLine
To: xfs@xxxxxxxxxxx,
Cc: david@xxxxxxxxxxxxx
Date: 29.01.2014 08:22
Subject: Fwd: Re: Re: XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590




Reposting my message, because I accidentally replied to Dave instead of replying to all. Sorry.
I have also filed the problem in the XFS Bugzilla: http://oss.sgi.com/bugzilla/show_bug.cgi?id=1045

Good evening, Dave.

I installed xfsprogs version 3.1.11 and tried to repair the filesystem on the RAID disk, but the command xfs_repair -P /dev/sdb1 hung.
Then I decided to reboot into the old kernel, version 3.7.10 (I have several kernel versions installed). After rebooting the system, I ran the command again, and it completed successfully under the old kernel 3.7.10. The output of the commands is attached after the text of this letter.
From the command output it is clear that some log data has been lost. Now I need to restore the filesystem with minimal data loss. Is that possible? Which commands should I use for that?

PS: program output and system info are at the bottom, after my signature.



  Sincerely, Dmitry.


uname -a
Linux devastator 3.7.10-gentoo #2 SMP Wed Mar 27 13:28:00 MSK 2013 x86_64 Intel(R) Xeon(TM) CPU 3.00GHz GenuineIntel GNU/Linux


xfs_repair -P /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
       - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
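
The sequence that error message recommends can be sketched as a small shell helper (a sketch only: the device path /dev/sdb1 is from this thread, the mount point is a placeholder, and -L discards log contents, so it is a last resort):

```shell
#!/bin/sh
# Sketch of the log-replay-then-repair sequence from the xfs_repair
# error message above. DEV and MNT defaults are assumptions from this thread.
replay_and_repair() {
    DEV=${1:-/dev/sdb1}
    MNT=${2:-/mnt/recovery}

    # 1. Mount so the kernel replays the dirty log, then unmount cleanly.
    if mount -t xfs "$DEV" "$MNT"; then
        umount "$MNT"
        xfs_repair "$DEV"        # log is now clean; repair can proceed
    else
        # 2. Only if the mount itself fails: zero the log and repair.
        #    This may lose the metadata changes still held in the log.
        xfs_repair -L "$DEV"
    fi
}
```

A dry run with `xfs_repair -n` first (as done further down in this thread) reports what would change without modifying the filesystem.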


xfs_repair -n /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
       - scan filesystem freespace and inode maps...
block (3,1498933-1498933) multiply claimed by cnt space tree, state - 2
agf_freeblks 259940761, counted 259940776 in ag 3
agf_freeblks 255012362, counted 255012365 in ag 4
agf_freeblks 260627255, counted 260627372 in ag 5
agf_freeblks 255168644, counted 255168626 in ag 2
agf_freeblks 207044983, counted 207044984 in ag 6
agf_freeblks 243646150, counted 243646100 in ag 1
block (0,9288775-9288775) multiply claimed by cnt space tree, state - 2
block (0,9292880-9292880) multiply claimed by cnt space tree, state - 2
block (0,9311746-9311746) multiply claimed by cnt space tree, state - 2
block (0,9313774-9313774) multiply claimed by cnt space tree, state - 2
block (0,4010552-4010552) multiply claimed by cnt space tree, state - 2
block (0,7294010-7294010) multiply claimed by cnt space tree, state - 2
block (0,6907114-6907114) multiply claimed by cnt space tree, state - 2
block (0,4058360-4058360) multiply claimed by cnt space tree, state - 2
block (0,3891784-3891784) multiply claimed by cnt space tree, state - 2
block (0,9322824-9322824) multiply claimed by cnt space tree, state - 2
agf_freeblks 228242757, counted 228242913 in ag 0
sb_fdblocks 1709684933, counted 1709685157
       - found root inode chunk
Phase 3 - for each AG...
       - scan (but don't clear) agi unlinked lists...
       - process known inodes and perform inode discovery...
       - agno = 0
data fork in ino 16762966 claims free block 16654424
bad nblocks 256 for inode 16764668, would reset to 255
data fork in ino 16767882 claims free block 9317836
data fork in ino 16767882 claims free block 9317837
bad nblocks 530 for inode 16767882, would reset to 545
data fork in ino 16770934 claims free block 9309594
data fork in ino 16770934 claims free block 9309595
bad nblocks 2396 for inode 16772596, would reset to 2395
data fork in ino 16775619 claims free block 9319785
data fork in ino 16775619 claims free block 9319786
bad nblocks 6284 for inode 16775619, would reset to 6291
bad nblocks 103 for inode 16780498, would reset to 102
bad nextents 27 for inode 16780498, would reset to 26
data fork in ino 16781959 claims free block 7295214
data fork in ino 16781959 claims free block 7295215
bad nblocks 76 for inode 16781959, would reset to 81
bad key in bmbt root (is 1856, would reset to 1844) in inode 16782070 data fork
bad nblocks 3060 for inode 16782070, would reset to 3059
bad nextents 642 for inode 16782070, would reset to 641
data fork in ino 16783403 claims free block 1345779579
data fork in ino 16783403 claims free block 1345779580
bad nblocks 3029 for inode 16783403, would reset to 3043
bad nblocks 927 for inode 16783493, would reset to 926
bad nblocks 977 for inode 16783553, would reset to 971
data fork in ino 16786396 claims free block 8430572
bad nblocks 60 for inode 16786396, would reset to 65
data fork in ino 16786416 claims free block 9288774
data fork in ino 16786416 claims free block 9288775
bad nblocks 719 for inode 16786416, would reset to 721
data fork in ino 16786803 claims free block 9307090
data fork in ino 16786803 claims free block 9307091
bad nblocks 56 for inode 16786803, would reset to 65
bad nblocks 536 for inode 16787010, would reset to 535
data fork in ino 16792026 claims free block 9312758
data fork in ino 16792026 claims free block 9312759
bad nblocks 301 for inode 16792026, would reset to 305
bad nblocks 3059 for inode 16792057, would reset to 3045
bad nextents 580 for inode 16792057, would reset to 579
data fork in ino 16792827 claims free block 9317987
data fork in ino 16792827 claims free block 9317988
bad nblocks 88 for inode 16792827, would reset to 97
data fork in ino 16797309 claims free block 9316639
data fork in ino 16797309 claims free block 9316640
bad nblocks 1115 for inode 16797309, would reset to 1121
data fork in ino 16797369 claims free block 5187785
data fork in ino 16797369 claims free block 5187786
data fork in ino 16801363 claims free block 5195413
data fork in ino 16801363 claims free block 5195414
data fork in ino 16805149 claims free block 16857856
bad nblocks 3072 for inode 16805235, would reset to 3071
data fork in ino 16806242 claims free block 9318771
data fork in ino 16806242 claims free block 9318772
bad nblocks 3048 for inode 16806242, would reset to 3058
bad nblocks 1355 for inode 16809840, would reset to 1354
bad nblocks 2467 for inode 16812697, would reset to 2466
data fork in ino 16818259 claims free block 9305797
data fork in ino 16818259 claims free block 9305798
data fork in ino 16824269 claims free block 9319278
bad nblocks 767 for inode 16824269, would reset to 769
bad nblocks 275 for inode 16826120, would reset to 274
bad nextents 95 for inode 16826120, would reset to 94
data fork in ino 16826213 claims free block 272608246
data fork in ino 16828470 claims free block 9316767
data fork in ino 16828470 claims free block 9316768
bad nblocks 3069 for inode 16828470, would reset to 3075
bad nblocks 193 for inode 16828767, would reset to 184
data fork in ino 16829192 claims free block 539292365
bad nblocks 818 for inode 16829192, would reset to 833
data fork in ino 16829681 claims free block 5675633
data fork in ino 16831045 claims free block 6119618
data fork in ino 16833544 claims free block 1378633
bad nblocks 97 for inode 16833658, would reset to 91
bad nblocks 48 for inode 16836020, would reset to 49
data fork in ino 16837615 claims free block 9317968
data fork in ino 16837615 claims free block 9317969
bad nblocks 1237 for inode 16837615, would reset to 1249
bad nblocks 622 for inode 16843855, would reset to 621
data fork in ino 16851046 claims free block 9299867
data fork in ino 16851046 claims free block 9299868
bad nblocks 811 for inode 16851046, would reset to 817
bad nblocks 94 for inode 16852952, would reset to 93
data fork in ino 16858919 claims free block 1047326
bad nblocks 649 for inode 16858919, would reset to 657
bad nblocks 121 for inode 16861780, would reset to 120
bad nblocks 9585 for inode 16863095, would reset to 9457
bad nextents 235 for inode 16863095, would reset to 234
bad nblocks 433 for inode 16868691, would reset to 423
bad nextents 206 for inode 16868691, would reset to 205
bad nblocks 2721 for inode 16870801, would reset to 2720
data fork in ino 16870820 claims free block 9322255
bad nblocks 1025 for inode 16870900, would reset to 1015
data fork in ino 16871311 claims free block 9321968
bad nblocks 2371 for inode 16871311, would reset to 2402
data fork in ino 16871664 claims free block 272107090
data fork in ino 16871664 claims free block 272107091
data fork in ino 16871687 claims free block 272686198
data fork in ino 16872270 claims free block 9302219
data fork in ino 16872270 claims free block 9302220
bad nblocks 547 for inode 16873993, would reset to 561
data fork in ino 16876441 claims free block 9309470
bad nblocks 3071 for inode 16876441, would reset to 3073
bad nblocks 27 for inode 16876582, would reset to 26
bad nblocks 32 for inode 16889354, would reset to 33
data fork in ino 16892870 claims free block 273676067
data fork in ino 16896171 claims free block 9310630
data fork in ino 16896171 claims free block 9310631
bad nblocks 682 for inode 16896171, would reset to 689
data fork in ino 16896792 claims free block 9314447
bad nblocks 1617 for inode 16896792, would reset to 1618
data fork in ino 16906370 claims free block 9313860
data fork in ino 16906370 claims free block 9313861
bad nblocks 244 for inode 16906370, would reset to 257
data fork in ino 16908888 claims free block 9305813
data fork in ino 16908888 claims free block 9305814
bad nblocks 2417 for inode 16911368, would reset to 2416
bad nblocks 950 for inode 16912682, would reset to 949
data fork in ino 16916686 claims free block 5096396
data fork in ino 16916686 claims free block 5096397
data fork in ino 16922077 claims free block 9311859
data fork in ino 16922077 claims free block 9311860
data fork in ino 16923072 claims free block 1077183854
bad nblocks 2350 for inode 16923072, would reset to 2354
data fork in ino 16923549 claims free block 9304733
data fork in ino 16923549 claims free block 9304734
bad nblocks 1016 for inode 16923549, would reset to 1025
data fork in ino 16927417 claims free block 9321495
data fork in ino 16927417 claims free block 9321496
bad magic # 0x20313030 in inode 16927721 (data fork) bmbt block 9305534
bad data fork in inode 16927721
would have cleared inode 16927721
data fork in ino 16928450 claims free block 9318480
data fork in ino 16928450 claims free block 9318481
bad nblocks 241 for inode 16938363, would reset to 240
bad nblocks 289 for inode 16940400, would reset to 257
bad nextents 32 for inode 16940400, would reset to 31
data fork in ino 16942122 claims free block 9304143
data fork in ino 16942122 claims free block 9304144
bad nblocks 106 for inode 16942122, would reset to 113
data fork in ino 16946405 claims free block 9311443
data fork in ino 16946405 claims free block 9311444
bad nblocks 1053 for inode 16946405, would reset to 1057
data fork in ino 16948776 claims free block 9317665
data fork in ino 16948776 claims free block 9317666
bad nblocks 681 for inode 16948776, would reset to 689
bad nblocks 562 for inode 16949011, would reset to 561
bad nextents 200 for inode 16949011, would reset to 199
data fork in ino 16951530 claims free block 8418968
data fork in ino 16957296 claims free block 8435643
bad nblocks 142 for inode 16957296, would reset to 145
data fork in ino 16960362 claims free block 9320795
data fork in ino 16960362 claims free block 9320796
bad nblocks 51 for inode 16960362, would reset to 65
data fork in ino 16965029 claims free block 9304412
data fork in ino 16965029 claims free block 9304413
data fork in ino 16967072 claims free block 9322240
bad nblocks 898 for inode 16967072, would reset to 913
data fork in ino 16972513 claims free block 9322096
bad nblocks 354 for inode 16972513, would reset to 369
data fork in ino 16976981 claims free block 272642965
data fork in ino 16980431 claims free block 9305966
data fork in ino 16981023 claims free block 9313215
data fork in ino 16981023 claims free block 9313216
bad nblocks 508 for inode 16981023, would reset to 513
data fork in ino 16983271 claims free block 28015187
data fork in ino 16983271 claims free block 28015188
bad nblocks 216 for inode 16983271, would reset to 225
data fork in ino 16983280 claims free block 9321906
data fork in ino 16983280 claims free block 9321907
bad nblocks 83 for inode 16983280, would reset to 97
data fork in ino 16987049 claims free block 9314631
data fork in ino 16987049 claims free block 9314632
bad nblocks 422 for inode 16987049, would reset to 433
data fork in ino 16989722 claims free block 5014097
data fork in ino 16990238 claims free block 9318424
data fork in ino 16990238 claims free block 9318425
bad nblocks 1056 for inode 16990306, would reset to 1058
data fork in ino 16992687 claims free block 9318671
data fork in ino 16992687 claims free block 9318672
bad nblocks 180 for inode 16992687, would reset to 193
data fork in ino 16995116 claims free block 4010551
data fork in ino 16995161 claims free block 27301755
data fork in ino 16995239 claims free block 7039534
bad nblocks 39 for inode 16995239, would reset to 49
data fork in ino 16997344 claims free block 9316751
data fork in ino 16997344 claims free block 9316752
bad nblocks 3085 for inode 16997344, would reset to 3091
data fork in ino 17000640 claims free block 1076254748
bad nblocks 390 for inode 17000640, would reset to 401
data fork in ino 17004824 claims free block 9292879
data fork in ino 17004824 claims free block 9292880
bad nblocks 1345 for inode 17004824, would reset to 1346
bad nblocks 21 for inode 17005621, would reset to 33
data fork in ino 17005995 claims free block 5365367
data fork in ino 17005995 claims free block 5365368
data fork in ino 17018696 claims free block 9316278
data fork in ino 17018696 claims free block 9316279
data fork in ino 17019111 claims free block 9311403
data fork in ino 17019111 claims free block 9311404
bad nblocks 2320 for inode 17019111, would reset to 2323
       - agno = 1
       - agno = 2
       - agno = 3
       - agno = 4
       - agno = 5
       - agno = 6
       - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
       - setting up duplicate extent list...
       - check for inodes claiming duplicate blocks...
       - agno = 0
       - agno = 3
       - agno = 2
       - agno = 4
       - agno = 5
       - agno = 1
       - agno = 6
entry "10.6.114.148" at block 297 offset 496 in directory inode 19125 references free inode 16927721
       would clear inode number in entry at offset 496...
bad nblocks 256 for inode 16764668, would reset to 255
bad nblocks 530 for inode 16767882, would reset to 545
bad nblocks 2396 for inode 16772596, would reset to 2395
bad nblocks 6284 for inode 16775619, would reset to 6291
bad nblocks 103 for inode 16780498, would reset to 102
bad nextents 27 for inode 16780498, would reset to 26
bad nblocks 76 for inode 16781959, would reset to 81
bad key in bmbt root (is 1856, would reset to 1844) in inode 16782070 data fork
bad nblocks 3060 for inode 16782070, would reset to 3059
bad nextents 642 for inode 16782070, would reset to 641
bad nblocks 3029 for inode 16783403, would reset to 3043
bad nblocks 927 for inode 16783493, would reset to 926
bad nblocks 977 for inode 16783553, would reset to 971
bad nblocks 60 for inode 16786396, would reset to 65
bad nblocks 719 for inode 16786416, would reset to 721
bad nblocks 56 for inode 16786803, would reset to 65
bad nblocks 536 for inode 16787010, would reset to 535
bad nblocks 301 for inode 16792026, would reset to 305
bad nblocks 3059 for inode 16792057, would reset to 3045
bad nextents 580 for inode 16792057, would reset to 579
bad nblocks 88 for inode 16792827, would reset to 97
bad nblocks 1115 for inode 16797309, would reset to 1121
bad nblocks 3072 for inode 16805235, would reset to 3071
bad nblocks 3048 for inode 16806242, would reset to 3058
bad nblocks 1355 for inode 16809840, would reset to 1354
bad nblocks 2467 for inode 16812697, would reset to 2466
bad nblocks 767 for inode 16824269, would reset to 769
bad nblocks 275 for inode 16826120, would reset to 274
bad nextents 95 for inode 16826120, would reset to 94
bad nblocks 3069 for inode 16828470, would reset to 3075
bad nblocks 193 for inode 16828767, would reset to 184
bad nblocks 818 for inode 16829192, would reset to 833
bad nblocks 97 for inode 16833658, would reset to 91
bad nblocks 48 for inode 16836020, would reset to 49
bad nblocks 1237 for inode 16837615, would reset to 1249
bad nblocks 622 for inode 16843855, would reset to 621
bad nblocks 811 for inode 16851046, would reset to 817
bad nblocks 94 for inode 16852952, would reset to 93
bad nblocks 649 for inode 16858919, would reset to 657
bad nblocks 121 for inode 16861780, would reset to 120
bad nblocks 9585 for inode 16863095, would reset to 9457
bad nextents 235 for inode 16863095, would reset to 234
bad nblocks 433 for inode 16868691, would reset to 423
bad nextents 206 for inode 16868691, would reset to 205
bad nblocks 2721 for inode 16870801, would reset to 2720
bad nblocks 1025 for inode 16870900, would reset to 1015
bad nblocks 2371 for inode 16871311, would reset to 2402
bad nblocks 547 for inode 16873993, would reset to 561
bad nblocks 3071 for inode 16876441, would reset to 3073
bad nblocks 27 for inode 16876582, would reset to 26
bad nblocks 32 for inode 16889354, would reset to 33
bad nblocks 682 for inode 16896171, would reset to 689
bad nblocks 1617 for inode 16896792, would reset to 1618
bad nblocks 244 for inode 16906370, would reset to 257
bad nblocks 2417 for inode 16911368, would reset to 2416
bad nblocks 950 for inode 16912682, would reset to 949
bad nblocks 2350 for inode 16923072, would reset to 2354
bad nblocks 1016 for inode 16923549, would reset to 1025
bad magic # 0x20313030 in inode 16927721 (data fork) bmbt block 9305534
bad data fork in inode 16927721
would have cleared inode 16927721
bad nblocks 241 for inode 16938363, would reset to 240
bad nblocks 289 for inode 16940400, would reset to 257
bad nextents 32 for inode 16940400, would reset to 31
bad nblocks 106 for inode 16942122, would reset to 113
bad nblocks 1053 for inode 16946405, would reset to 1057
bad nblocks 681 for inode 16948776, would reset to 689
bad nblocks 562 for inode 16949011, would reset to 561
bad nextents 200 for inode 16949011, would reset to 199
bad nblocks 142 for inode 16957296, would reset to 145
bad nblocks 51 for inode 16960362, would reset to 65
bad nblocks 898 for inode 16967072, would reset to 913
bad nblocks 354 for inode 16972513, would reset to 369
bad nblocks 508 for inode 16981023, would reset to 513
bad nblocks 216 for inode 16983271, would reset to 225
bad nblocks 83 for inode 16983280, would reset to 97
bad nblocks 422 for inode 16987049, would reset to 433
bad nblocks 1056 for inode 16990306, would reset to 1058
bad nblocks 180 for inode 16992687, would reset to 193
bad nblocks 39 for inode 16995239, would reset to 49
bad nblocks 3085 for inode 16997344, would reset to 3091
bad nblocks 390 for inode 17000640, would reset to 401
bad nblocks 1345 for inode 17004824, would reset to 1346
bad nblocks 21 for inode 17005621, would reset to 33
bad nblocks 2320 for inode 17019111, would reset to 2323
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
       - traversing filesystem ...
entry "10.6.114.148" in directory inode 19125 points to free inode 16927721, would junk entry
       - traversal finished ...
       - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

The xfsprogs utilities work fine with kernel 3.7.10 (relevant kernel config parameters listed below):
/usr/src/linux-3.7.10-gentoo/.config
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
# CONFIG_XFS_DEBUG is not set

The xfsprogs utilities hang at startup with kernel 3.10.25 (relevant kernel config parameters listed below):
/usr/src/linux-3.10.25-gentoo/.config
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_DEBUG=y

I have uploaded the output of the log print command to a file-sharing site:
xfs_logprint -d -C ./xfs_log.dump
http://yadi.sk/d/Jxv-ItRSGt8vN



On 28.01.2014 at 8:46, "Dmitriy Yu Leonov" <
DLeonov@xxxxxxxxxx> wrote:

    Good morning, Dave.

    Thank you for your response to my message.


    Q: So something went wrong with the HW RAID, and then you found errors in the filesystem?
    A: Yes. Presumably the server had problems with its power supply. The HW RAID has battery backup installed, but when the server was powered off and on again, errors occurred that led to the current situation. The current status of the HW RAID controller is stable; there are no errors.


    Q: It failed to mount with the stack trace that you attached? If so, there's a corrupt freespace tree in the filesystem.
    A: Yes. The attached stack trace was captured after the incident. I looked at the source file referenced by the trace, but unfortunately I could not figure out how to fix the problem. Is there a way to fix a corrupt freespace tree in the filesystem?


    Q: First of all, I'd suggest updating to at least version 3.1.11 of xfsprogs. If it still hangs, then it's quite likely there something still wrong with your HW RAID. Your first step is to make sure your HW RAID is healthy before trying to repair or mount the filesystem....
    A: I will try to follow your recommendations. The server runs Gentoo. The current stable version of xfsprogs in the Gentoo portage tree is 3.1.10; 3.1.11-r1 is marked unstable, but I will install the latter to check, and I will report back with the results. I am also attaching a file with the HW RAID controller diagnostics to this message. The current status of the controller is stable.


    I hope you can help solve the problem.


    --
     Dmitry Leonov.


    (See attached file: Report_RAID_status_20140128.txt)




    From:
    Dave Chinner <david@xxxxxxxxxxxxx>
    To:
    Dmitriy Yu Leonov <DLeonov@xxxxxxxxxx>,
    Cc:
    xfs@xxxxxxxxxxx
    Date:
    28.01.2014 03:14
    Subject:
    Re: XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590




    On Mon, Jan 27, 2014 at 11:12:15AM +0400, Dmitriy Yu Leonov wrote:
    >
    > Hello, dear developers
    >
    > I have run into a problem using XFS. I have been using the XFS file system
    > on the server for three years without problems. Recently I discovered that
    > the disk (RAID array) with XFS is not available.
    >
    > The RAID controller logs showed: 2014-01-24 07:12:34 H/W Monitor Raid
    > Powered On

    So something went wrong with the HW RAID, and then you found errors
    in the filesystem?

    > When I restarted the server, I found that the RAID array does not mount at
    > the mount point /dev/sdb1 (filesystem XFS).

    It failed to mount with the stack trace that you attached? If so,
    there's a corrupt freespace tree in the filesystem.

    > When I run the utility xfs_repair -P /dev/sdb1, it hangs. When I run
    > mount /dev/sdb1, no errors are reported and the application also hangs.
    > The tasks cannot be terminated even with kill -9 <pid>.

    First of all, I'd suggest updating to at least version 3.1.11 of
    xfsprogs. If it still hangs, then it's quite likely there's something
    still wrong with your HW RAID.

    Your first step is to make sure your HW RAID is healthy before
    trying to repair or mount the filesystem....

    Cheers,

    Dave.
    --
    Dave Chinner

    david@xxxxxxxxxxxxx
    <Report_RAID_status_20140128.txt>
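
Dave's advice above (verify the HW RAID is healthy before trying to repair or mount) can be sketched with generic tools. The vendor-specific RAID CLI is not named in this thread, so dmesg and smartctl stand in as assumptions:

```shell
#!/bin/sh
# Sketch of pre-repair health checks. The actual RAID vendor CLI is not
# named in the thread; dmesg/smartctl here are generic stand-ins.
check_before_repair() {
    DISK=${1:-/dev/sdb}

    # Recent kernel messages mentioning the disk or I/O errors.
    dmesg | grep -iE "$(basename "$DISK")|raid|i/o error" | tail -n 20

    # SMART health, if the controller exposes it to the host.
    smartctl -H "$DISK" 2>/dev/null || echo "smartctl unavailable or device hidden by controller"

    # Dry run: report filesystem problems without modifying anything.
    xfs_repair -n "${DISK}1"
}
```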