To: linux-xfs@xxxxxxxxxxx
Subject: Re: grab_cache_page deadlock | was Re: set_buffer_dirty_uptodate
From: Rajagopal Ananthanarayanan <ananth@xxxxxxx>
Date: Fri, 29 Dec 2000 10:43:00 -0800
Cc: andrea@xxxxxxx
References: <Pine.LNX.4.21.0012291301070.13063-100000@xxxxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Marcelo Tosatti wrote:

        [ ... ]
> 
> Basically what Andrea did in 2.2 was to add a "has_io_locks" flag to the
> task_struct structure which indicates if the current process has any fs
> lock held. (the flag is increased when any fs lock is taken, and decreased
> when any fs lock is unlocked)
> 
> With this scheme it's possible to not sleep on kswapd if we have
> "current->has_io_locks" > 0 and avoid the deadlock.
> 
> Ananth, what do you think about this fix?

Yes, that'll work ... currently, it is only the
XFS i[o]lock on the inode that exhibits this problem.
And it seems to only get triggered when
xfs_inactive_free_eofblocks() is called as part
of inode deactivation. So, for starters, we can
add this counter increment/decrement to xfs_ilock/xfs_iunlock.
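A minimal sketch of what that might look like, assuming the 2.2-style
has_io_locks counter on task_struct as Marcelo describes it; the hook
placement in xfs_ilock/xfs_iunlock and the reclaim-side check are only
illustrative, not the actual patch:

	/*
	 * Sketch only, not real 2.2/2.4 code.  Assumes task_struct
	 * grows a has_io_locks counter as described above.
	 */
	struct task_struct {
		/* ... existing fields ... */
		int has_io_locks;	/* number of fs locks held by this task */
	};

	/* Bump the counter whenever the XFS inode lock is taken. */
	void xfs_ilock(xfs_inode_t *ip, uint lock_flags)
	{
		/* ... acquire the iolock/ilock as before ... */
		current->has_io_locks++;
	}

	/* Drop the counter when the lock is released. */
	void xfs_iunlock(xfs_inode_t *ip, uint lock_flags)
	{
		current->has_io_locks--;
		/* ... release the iolock/ilock as before ... */
	}

	/*
	 * On the allocation/reclaim path: if the caller already holds an
	 * fs lock, do not sleep waiting for kswapd (which may need that
	 * same lock to make progress), avoiding the deadlock.
	 */
	static int can_sleep_on_kswapd(void)
	{
		return current->has_io_locks == 0;
	}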

For what it is worth, I tried to trigger the problem
with a smaller amount of memory. Previously the problem
surfaced after ~7 hours of heavy stress on a 2P 64M machine
running dbench. Running on the same machine, this time
with mem=32M, the same test is still running after ~18 hours!
So, it seems to be a fairly obscure deadlock for XFS ...

What is the likelihood of getting a current->has_io_locks
kind of thing into 2.4?


-- 
--------------------------------------------------------------------------
Rajagopal Ananthanarayanan ("ananth")
Member Technical Staff, SGI.
--------------------------------------------------------------------------
