On Tue, Sep 02, 2008 at 03:58:58PM +1000, Lachlan McIlroy wrote:
> Dave Chinner wrote:
>> On Tue, Sep 02, 2008 at 02:48:49PM +1000, Lachlan McIlroy wrote:
>> This is supposed to catch all the inodes in memory and mark them
>> XFS_ISTALE to prevent them from being written back once the
>> transaction is committed. The question is - how are dirty inodes
>> slipping through this?
>>
>> If we are freeing the cluster buffer, then there can be no other
>> active references to any of the inodes, so if they are dirty it
>> has to be due to inactivation transactions and so should be in
>> the log and attached to the buffer due to removal from the
>> unlinked list.
>>
>> The question is - which bit of this is not working? i.e. what is the
>> race condition that is allowing dirty inodes to slip through the
>> locking here?
>>
>> Hmmm - I see that xfs_iflush() doesn't check for XFS_ISTALE when
>> flushing out inodes. Perhaps you could check to see if we are
>> writing an inode marked as such.....
>
> That's what I was suggesting.

I'm not suggesting that as a fix. I'm suggesting that you determine
whether the inode being flushed has that flag set or not. If it is
not set, then we need to determine how it slipped through
xfs_ifree_cluster() without being marked XFS_ISTALE, and if it is
set, why it was not marked clean by xfs_istale_done() when the
buffer callbacks are made and the flush lock dropped....
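
Something like the check below would tell us which case we're in.
This is only a sketch from memory - the exact names and the spot it
goes in may not match your tree, so treat it as a starting point:

	/*
	 * In xfs_iflush(), before the inode is written back: if this
	 * ever fires we are flushing an inode that _was_ marked stale,
	 * so the xfs_istale_done() side is suspect; if it never fires
	 * and we still hit the corruption, the inode got past
	 * xfs_ifree_cluster() without being marked XFS_ISTALE at all.
	 */
	ASSERT(!xfs_iflags_test(ip, XFS_ISTALE));

	/* or, on a kernel without CONFIG_XFS_DEBUG, warn instead: */
	if (xfs_iflags_test(ip, XFS_ISTALE))
		cmn_err(CE_WARN,
			"xfs_iflush: flushing stale inode 0x%llx",
			(unsigned long long)ip->i_ino);
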
> I'm just not sure about the assumption
> that if the flush lock cannot be acquired in xfs_ifree_cluster() then
> the inode must be in the process of being flushed. The flush could
> be aborted due to the inode being pinned, or for some other reason, and
> the inode never gets marked as stale.

Did that happen?

Basically I'm asking what the sequence of events is that leads up
to this problem - we need to identify the actual race condition
before speculating on potential fixes....
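
If you want to see whether the aborted-flush case ever fires, a quick
bit of instrumentation in xfs_ifree_cluster() where the flush lock
trylock fails would do it. Again, rough sketch only - I'm writing this
from memory, so check the names against your tree:

	/*
	 * In xfs_ifree_cluster(), in the branch where we fail to get
	 * the flush lock and so assume a flush is already in progress:
	 * record the inode number and whether it is already stale so
	 * it can be matched against the inode that later gets written
	 * back dirty.
	 */
	if (!xfs_iflock_nowait(ip)) {
		cmn_err(CE_NOTE,
			"xfs_ifree_cluster: flush lock busy, ino 0x%llx stale %d",
			(unsigned long long)ip->i_ino,
			xfs_iflags_test(ip, XFS_ISTALE) ? 1 : 0);
	}

If that shows up just before the bad write with stale == 0, then the
aborted-flush theory is worth chasing; if it never shows up, the race
is somewhere else.
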
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx