
Re: XFS Lock debugging noise or real problem?

To: "Linda A. Walsh" <xfs@xxxxxxxxx>
Subject: Re: XFS Lock debugging noise or real problem?
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 13 Aug 2008 10:58:52 +1000
Cc: xfs-oss <xfs@xxxxxxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, Eric Sandeen <sandeen@xxxxxxxxxxx>
In-reply-to: <48A20E9E.9090100@xxxxxxxxx>
Mail-followup-to: "Linda A. Walsh" <xfs@xxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, Eric Sandeen <sandeen@xxxxxxxxxxx>
References: <48A093A7.40606@xxxxxxxxx> <48A09CA9.9080705@xxxxxxxxxxx> <48A0F686.2090700@xxxxxxxxx> <48A0F9FC.1070805@xxxxxxxxxxx> <48A20E9E.9090100@xxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
On Tue, Aug 12, 2008 at 03:28:46PM -0700, Linda A. Walsh wrote:
> Eric Sandeen wrote:
>>   ...
> Is it also known (and is it the same bug) when you get the lock
> warnings when doing "xfs_restore" as well (dio_get_page and xfs_ilock)...

That's the mm code calling fput() with the mmap_sem held, which is a
problem in the VM code that XFS can do nothing about. The normal
I/O paths always lock the inode first and only then (if a page fault
occurs during copyin/copyout, or we need to lock down pages for
direct I/O) grab the mmap_sem. This one could deadlock if you mix
read/write with mmap on the same file in different threads of a
multithreaded app. Unlikely, but possible, and even then it would
only hang that app, not the rest of the machine.
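To make that concrete, here is a minimal userspace sketch of the kind
of workload described above (file name "testfile" and the sizes are
arbitrary; this just illustrates the mixed direct-I/O and mmap
pattern, it is not a known reproducer):

/*
 * Sketch of the workload in question: one thread doing direct I/O
 * writes to a file while a second thread faults on an mmap of the
 * same file.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SZ	(1 << 20)

static int fd;

static void *dio_writer(void *arg)
{
	/* I/O path: the filesystem takes the inode lock, then may need
	 * the mmap_sem to pin the user buffer for direct I/O. */
	void *buf;
	int i;

	if (posix_memalign(&buf, 4096, SZ))
		return NULL;
	memset(buf, 'x', SZ);
	for (i = 0; i < 1000; i++)
		pwrite(fd, buf, SZ, 0);
	free(buf);
	return NULL;
}

static void *mmap_toucher(void *arg)
{
	/* fault path: takes the mmap_sem first, then calls back into
	 * the filesystem, which wants the inode lock. */
	char *p;
	int i;

	p = mmap(NULL, SZ, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return NULL;
	for (i = 0; i < 1000; i++)
		p[(i * 4096) % SZ] = 'y';
	munmap(p, SZ);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	fd = open("testfile", O_RDWR | O_CREAT | O_DIRECT, 0644);
	if (fd < 0)
		return 1;
	ftruncate(fd, SZ);
	pthread_create(&t1, NULL, dio_writer, NULL);
	pthread_create(&t2, NULL, mmap_toucher, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	close(fd);
	return 0;
}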

> The bugs with 'sort', & imap were both with xfs_ilock and  
> shrink_icache_memory.

Once again, a problem with the generic code inverting the normal
lock order. This one cannot deadlock, though, because by definition
any inode on the unused list is, well, unused and hence we can't be
holding a reference to it...
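A contrived userspace analogue of why the report is noise (made-up
names, nothing to do with the actual kernel code paths): one thread
takes a per-object lock and then a global cache lock, while the
"shrinker" thread takes the cache lock and then the per-object lock
of an object with a zero refcount, which no other thread can be
holding. A checker that reasons about lock classes sees an AB/BA
inversion, but the same two lock instances are never taken in both
orders, so it cannot actually deadlock:

#include <pthread.h>
#include <stdio.h>

struct obj {
	pthread_mutex_t lock;
	int refcount;
};

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static struct obj in_use = { PTHREAD_MUTEX_INITIALIZER, 1 };
static struct obj unused = { PTHREAD_MUTEX_INITIALIZER, 0 };

static void *io_path(void *arg)
{
	/* normal order: object lock, then cache lock */
	pthread_mutex_lock(&in_use.lock);
	pthread_mutex_lock(&cache_lock);
	pthread_mutex_unlock(&cache_lock);
	pthread_mutex_unlock(&in_use.lock);
	return NULL;
}

static void *shrinker_path(void *arg)
{
	/* inverted order, but only for objects nobody references */
	pthread_mutex_lock(&cache_lock);
	if (unused.refcount == 0) {
		pthread_mutex_lock(&unused.lock);
		/* "evict" the unused object here */
		pthread_mutex_unlock(&unused.lock);
	}
	pthread_mutex_unlock(&cache_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, io_path, NULL);
	pthread_create(&t2, NULL, shrinker_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("done, no deadlock\n");
	return 0;
}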

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

