On Wed, Jul 18, 2007 at 12:10:39PM -0700, Mike Montour wrote:
> David Chinner wrote:
> > The issue here is not the cluster size - that is purely an in-memory
> > arrangement for reading/writing multiple inodes at once. The issue
> > here is inode *chunks* (as Eric pointed out).
> >
> > [...]
> > The best you can do to try to avoid these sorts of problems is
> > use the "ikeep" option to keep empty inode chunks around. That way
> > if you remove a bunch of files then fragment free space you'll
> > still be able to create new files until you run out of pre-allocated
> > inodes....
> >
>
> What would it take to add an option to mkfs.xfs (or to create a
> dedicated tool) that would efficiently[1] pre-allocate a specified
> number of inode chunks when a filesystem is created?
Like an extension to mkfs.xfs's prototype file?
> This filesystem was created with "-i maxpct=0,size=2048", so a new chunk
> of 64 inodes would require an extent of 128 KiB (32 * 4KiB blocks).
i.e. worst case.
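(As a sanity check, the chunk-size arithmetic behind that worst case, using
only the numbers already quoted above:)

```python
# Worst-case extent needed for one new inode chunk, per the figures above.
inode_size = 2048        # bytes, from "-i size=2048"
inodes_per_chunk = 64    # XFS allocates inodes in chunks of 64
block_size = 4096        # 4 KiB filesystem blocks

chunk_bytes = inode_size * inodes_per_chunk
print(chunk_bytes // 1024, "KiB =", chunk_bytes // block_size, "blocks")
# -> 128 KiB = 32 blocks
```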
> 1. "efficiently" = significantly faster than a userspace script to
> 'touch' a few million files and then 'rm' them.
A "bulk create" option has long been considered to optimise filesystem
restore - precreating lots of inodes in an efficient manner is pretty
much a prereq for this.
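(For reference, the slow userspace fallback from the footnote - create a
pile of files to force inode chunk allocation, then unlink them - amounts
to something like the sketch below. The function name is made up for
illustration; this is exactly the approach an in-kernel bulk create would
replace:)

```python
import os

def precreate_inodes(directory, count):
    """Force inode chunk allocation by creating then removing files.

    Slow userspace workaround (one syscall per file, twice over);
    a real bulk-create option would do this work at mkfs/kernel level.
    """
    os.makedirs(directory, exist_ok=True)
    paths = [os.path.join(directory, "prealloc-%d" % i) for i in range(count)]
    for path in paths:
        open(path, "w").close()   # touch
    for path in paths:
        os.unlink(path)           # rm; with ikeep the chunks stay allocated
```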
The other option (and one that I prefer) is extending xfs_fsr to be
able to defragment free space. i.e. to compact space in each AG. To
do this efficiently, however, we really need a reverse map to determine
the owners of the blocks we want to move...
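(A reverse map in this sense is just a block-range -> owner index. A toy
sketch of the idea - a sorted extent list, nothing like whatever on-disk
form a real implementation would take:)

```python
import bisect

class ReverseMap:
    """Toy reverse map: sorted (start_block, length, owner) extents."""

    def __init__(self):
        self.extents = []  # kept sorted by start block

    def insert(self, start, length, owner):
        bisect.insort(self.extents, (start, length, owner))

    def owner_of(self, block):
        """Return the owner of the extent covering `block`, or None."""
        i = bisect.bisect_right(self.extents, (block, float("inf"), "")) - 1
        if i >= 0:
            start, length, owner = self.extents[i]
            if start <= block < start + length:
                return owner
        return None
```

Given such an index, free space compaction can look up who owns the blocks
at the end of an AG and relocate them, instead of walking every inode to
find out.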
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group