xfs

Re: xfsrepair: rebuilding directory inode 128

To: Christian Guggenberger <Christian.Guggenberger@xxxxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: xfsrepair: rebuilding directory inode 128
From: Chris Wedgwood <cw@xxxxxxxx>
Date: Wed, 9 Apr 2003 14:36:41 -0700
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20030409120252.A29309@pc9391.uni-regensburg.de>
References: <20030409120252.A29309@pc9391.uni-regensburg.de>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.3.28i
On Wed, Apr 09, 2003 at 12:02:52PM +0200, Christian Guggenberger wrote:

> I've been bitten by Bug 230 (umount hangs).

Interestingly, I think I've found a way to reproduce this 100% of the
time.  If you have a few minutes and a non-root XFS filesystem you can
probably verify whether or not it happens for you.

(1) boot the system with init=/bin/sh

(2) mount /home (or whatever) --- must be XFS

(3) umount /home -- make sure this works sanely

(4) mount /home again

(5) du /home/foo/blem --- this creates dirty inodes in memory (atime
    updates) which need to get flushed

(6) 'sync' --- notice nothing happens

(7) wait about 30s or so

(8) umount /home --- this *should* work



now, the icky part is


(9) mount /home again

(10) du /home/foo/blem --- again, atime updates are required to be
     written to disk

(11) sync --- again, nothing happens, we just do this to rub it in

<don't delay>

(12) umount /home --- this causes lots of IO as the disk is updated
     heavily (depends on how many atime updates; I usually use a
     kernel tree)

     at some point, IO will cease and umount is stuck in 'D' with a
     stack backtrace not unlike:

      umount        D DF373DB0 4286333116 12397  12394          (NOTLB)
      Call Trace:
       [<c016a849>] bio_add_page+0x159/0x160
       [<c0265abd>] submit_bio+0x3d/0x70
       [<c0108073>] __down+0x113/0x2d0
       [<c011bff0>] default_wake_function+0x0/0x20
       [<c010a81c>] common_interrupt+0x18/0x20
       [<c01086a3>] __down_failed+0xb/0x14
       [<c0221e0b>] .text.lock.page_buf_locking+0xf/0x44
       [<c022198a>] pagebuf_delwri_flush+0x34a/0x400
       [<c0229d41>] XFS_bflush+0x21/0x30
       [<c0215d27>] xfs_unmount+0x167/0x1c0
       [<c021617a>] xfs_sync+0x2a/0x30
       [<c022b5e4>] vfs_unmount+0x34/0x40
       [<c022adec>] linvfs_put_super+0x4c/0x90
       [<c016bd26>] generic_shutdown_super+0x236/0x250
       [<c016d3fd>] kill_block_super+0x1d/0x50
       [<c016b4f6>] deactivate_super+0xb6/0x260
       [<c0187975>] __mntput+0x25/0x40
       [<c01884bc>] sys_umount+0x3c/0xa0
       [<c0188539>] sys_oldumount+0x19/0x20
       [<c0109eaf>] syscall_call+0x7/0xb
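The steps above can be sketched as a small script.  This is just a
convenience wrapper around the recipe -- the repro() helper and the
XFS_MNT/XFS_DIR variables are my naming, not from the original post.
Run it as root against a scratch, non-root XFS mount; XFS_DIR should
point at a reasonably large tree under that mount (a kernel tree works
well).

```shell
#!/bin/sh
# Hypothetical reproduction script for the umount hang described above.
# repro() mirrors steps (2)-(12); XFS_MNT/XFS_DIR are assumptions,
# substitute your own non-root XFS mount point and directory.

repro() {
    mnt=$1
    dir=$2

    mount "$mnt"                 # (2) mount the XFS filesystem
    umount "$mnt"                # (3) make sure a clean umount works
    mount "$mnt"                 # (4) mount again
    du "$mnt/$dir" >/dev/null    # (5) atime updates dirty inodes in memory
    sync                         # (6) notice nothing happens
    sleep 30                     # (7) wait about 30s
    umount "$mnt"                # (8) this should still work

    mount "$mnt"                 # (9) mount again
    du "$mnt/$dir" >/dev/null    # (10) more atime updates need flushing
    sync                         # (11) again, nothing happens
    umount "$mnt"                # (12) heavy IO, then umount wedges in 'D'
}

if [ -n "${XFS_MNT:-}" ]; then
    repro "$XFS_MNT" "${XFS_DIR:-.}"
else
    echo "set XFS_MNT (and optionally XFS_DIR) to run the reproduction"
fi
```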



Does something similar happen for you?


  --cw

