On Fri, Mar 02, 2012 at 09:04:26PM +1100, Dave Chinner wrote:
> On Fri, Mar 02, 2012 at 02:51:04AM -0500, Christoph Hellwig wrote:
> > Hmm, I don't like this complication all that much.
> Though it is a simple, self contained fix for the problem...
It just smells hacky. If the non-caching version doesn't go anywhere
I won't veto it, but it's definitely not my favourite.
> > Why would we even bother caching inodes during quotacheck? The bulkstat
> > is a 100% sequential read only workload going through all inodes in the
> > filesystem. I think we should simply not cache any inodes while in
> > quotacheck.
> I have tried that approach previously with inodes read through
> bulkstat, but I couldn't find a clean workable solution. It kept
> getting rather complex because all our caching and recycling is tied
> into VFS level triggers. That was a while back, so maybe there is a
> simpler solution that I missed in attempting to do this.
> I suspect for a quotacheck only solution we can hack a check into
> .drop_inode, but a generic coherent non-cached bulkstat lookup is
> somewhat more troublesome.
Right, the whole issue also applies to any bulkstat. But even for that
it doesn't seem that bad.
We add a new XFS_IGET_BULKSTAT flag for iget, which then sets an
XFS_INOTCACHE or similar flag on the inode. If ->drop_inode sees that
flag on a clean inode it returns true there, which takes care of the
VFS side.
For the XFS side we'd have to move the call to xfs_syncd_init earlier
during the mount process, which effectively reverts commit
2bcf6e970f5a88fa05dced5eeb0326e13d93c4a1. That should be fine now that
we never call into the quota code from the sync work items. If we want
to be entirely on the safe side we could only move starting the reclaim
work item earlier.