No subject
Thu Oct 6 05:08:24 CDT 2011
in IO patterns and performance under heavy load here with this
patch set. It doesn't, however, reduce the buffer cache lookups all
that much on such workloads - about 10% at most - as most of the
lookups come from the directory and inode buffer
modifications. Here's a sample profile:
-  10.09%  [kernel]  [k] _xfs_buf_find
   - _xfs_buf_find
      - 99.57% xfs_buf_get
         - 99.35% xfs_buf_read
            - 99.87% xfs_trans_read_buf
               + 50.36% xfs_da_do_buf
               + 26.12% xfs_btree_read_buf_block.constprop.24
               + 12.36% xfs_imap_to_bp.isra.9
               + 10.73% xfs_read_agi
This shows that 50% of the lookups come from the directory code, 26%
from the inode btree lookups, 12% from mapping inodes to buffers, and
11% from reading the AGI buffer during inode allocation.
You know, I suspect that we could avoid almost all those AGI buffer
lookups by moving to an in-core log-and-flush technique similar to
the one the inodes use. We've already got all the information in the
struct xfs_perag - rearranging it to carry in-core copies of the
on-disk AGI, AGF and AGFL structures would make a lot of the "select
an AG" code much simpler than having to read and modify the AG
buffers directly. It might even be possible to do such a change
without needing to change the on-disk journal format for them...
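
Roughly, the sort of split I'm thinking of - purely a hypothetical
sketch with made-up names, no perag locking and no log item hooks:

#include <stdint.h>
#include <stdbool.h>
#include <endian.h>

/*
 * Hypothetical sketch only - not actual XFS code. The idea mirrors
 * the xfs_inode/xfs_dinode split: keep CPU-native working copies of
 * the AG header fields in the perag, modify those in memory, and
 * only format them back into the on-disk (big-endian) buffer when
 * the AG headers get flushed.
 */
struct xfs_agi_disk {			/* stand-in for the on-disk AGI */
	uint32_t	agi_count;	/* big-endian on disk */
	uint32_t	agi_freecount;
};

struct xfs_perag_hdrs {			/* hypothetical in-core copies */
	uint32_t	agi_count;	/* allocated inodes in this AG */
	uint32_t	agi_freecount;	/* free inodes in this AG */
	uint32_t	agf_freeblks;	/* free blocks in this AG */
	uint32_t	agf_flcount;	/* entries on the AGFL */
};

/*
 * "Select an AG" style checks could then look at the in-core copy
 * instead of doing a buffer cache lookup for the AGI.
 */
static bool
perag_has_free_inodes(const struct xfs_perag_hdrs *hdrs)
{
	return hdrs->agi_freecount > 0;
}

/*
 * At flush time, format the in-core values back into the buffer,
 * analogous to how inode flushing writes back the xfs_dinode.
 */
static void
perag_flush_agi(const struct xfs_perag_hdrs *hdrs,
		struct xfs_agi_disk *agi)
{
	agi->agi_count = htobe32(hdrs->agi_count);
	agi->agi_freecount = htobe32(hdrs->agi_freecount);
}

The win being that hot-path checks never touch the buffer cache at
all - _xfs_buf_find only gets hit when the headers actually get
flushed.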
I think I'll put that on my list of stuff to do - right next to
in-core unlinked inode lists....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com