
Re: inode/dcache shrink and nested locks

To: Andi Kleen <ak@xxxxxxx>
Subject: Re: inode/dcache shrink and nested locks
From: Steve Lord <lord@xxxxxxx>
Date: Tue, 22 May 2001 07:27:19 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: Message from Andi Kleen <ak@suse.de> of "Tue, 22 May 2001 09:42:19 +0200." <20010522094219.A24123@gruyere.muc.suse.de>
Sender: owner-linux-xfs@xxxxxxxxxxx

Andi Kleen wrote:
> 
> 
> xfs_ilock as far as I can see does not use a recursive lock for exclusive
> locking. xfs_release() gets an exclusive lock on the inode. It can be called
> from shrink_[di]cache_memory(), which can be called from most memory
> allocations inside the XFS, thus probably deadlocking. The normal linux
> VFS avoids this situation for the superblock lock by checking in 
> shrink_[di]cache_memory() for __GFP_IO and not doing anything when it is set.
> XFS usually sets GFP_PAGE_IO though, which is not checked this way and
> doesn't include __GFP_IO.
> 
> One easy way would be to just check for __GFP_PAGE_IO in the shrink_* 
> functions also, this just would have the drawback that the defragmentation
> effects of shrink_* could not be used inside XFS (kmem_shake_memory would
> be mostly useless). Another option would be to make xfs_ilock() recursive.
> 
> Comments?
> 
> 
> -Andi 
> 

Hi Andi,

Hmm, I don't think you mean xfs_release; that is called from fput and
the nfs server. The exclusive lock is, however, obtained in the
clear_inode path.

The only time this would be a problem is for memory allocations done from
a thread holding the ilock on an inode which has a zero i_count or d_count.
I would have to do a code audit to find these, but that is a small fraction
of the xfs code base; in fact it is probably just the xfs_iget path, since
we generally do not do anything with an inode unless we have a reference
to it.

I will take a look though.

Steve



