On Tue, Dec 21, 2010 at 08:20:46PM -0600, Alex Elder wrote:
> On Tue, 2010-12-21 at 18:29 +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> >
> > When inode buffer IO completes, usually all of the inodes are
> > removed from the AIL. This involves processing them one at a time,
> > taking the AIL lock once for every inode. When all CPUs are
> > processing inode IO completions, this causes an excessive amount
> > of contention on the AIL lock.
> >
> > Instead, change the way we process inode IO completion in the buffer
> > IO done callback. Allow the inode IO done callback to walk the list
> > of IO done callbacks and pull all the inodes off the buffer in one
> > go and then process them as a batch.
> >
> > Once all the inodes for removal are collected, take the AIL lock
> > once and do a bulk removal operation to minimise traffic on the AIL
> > lock.
> >
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > Reviewed-by: Christoph Hellwig <hch@xxxxxx>
>
> One question, below. -Alex
>
> . . .
>
> > @@ -861,28 +910,37 @@ xfs_iflush_done(
> > * the lock since it's cheaper, and then we recheck while
> > * holding the lock before removing the inode from the AIL.
> > */
> > - if (iip->ili_logged && lip->li_lsn == iip->ili_flush_lsn) {
> > + if (need_ail) {
> > + struct xfs_log_item *log_items[need_ail];
>
> What's the worst-case value of need_ail we might see here?
The number of inodes in a cluster. With the current 8k cluster size
and 256 byte inodes, that's 8192 / 256 = 32.
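
For illustration, here is a minimal, self-contained sketch of the
two-pass pattern (userspace C, with a pthread mutex standing in for
the AIL spin lock; the struct layout and every name below are invented
for the example, not the actual XFS code):

/*
 * Sketch only: a lock-free collection pass followed by a single
 * bulk removal under the lock, instead of one lock round trip
 * per inode.
 */
#include <pthread.h>

#define MAX_BATCH 32	/* worst case: inodes per 8k cluster */

struct log_item {
	struct log_item	*next;		/* buffer IO done callback list */
	int		 needs_removal;	/* stands in for the AIL/lsn checks */
};

static pthread_mutex_t ail_lock = PTHREAD_MUTEX_INITIALIZER;

/* One lock round trip removes the whole batch. */
static void ail_delete_bulk(struct log_item **items, int n)
{
	pthread_mutex_lock(&ail_lock);
	for (int i = 0; i < n; i++)
		items[i]->needs_removal = 0;
	pthread_mutex_unlock(&ail_lock);
}

void iflush_done_batched(struct log_item *head)
{
	struct log_item	*batch[MAX_BATCH];
	struct log_item	*lip;
	int		 n = 0;

	/*
	 * Pass 1: walk the callback list with no locks held and
	 * collect every inode log item that needs to come off the AIL.
	 */
	for (lip = head; lip && n < MAX_BATCH; lip = lip->next)
		if (lip->needs_removal)
			batch[n++] = lip;

	/* Pass 2: take the lock once and remove the whole batch. */
	if (n)
		ail_delete_bulk(batch, n);
}

The point is that the collection pass runs with no lock held at all,
so both the hold time and the acquisition count on the AIL lock drop
from one per inode to one per buffer.
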
Cheers,
Dave
--
Dave Chinner
david@xxxxxxxxxxxxx