David Chinner wrote:
> The issue here is not the cluster size - that is purely an in-memory
> arrangement for reading/writing multiple inodes at once. The issue
> here is inode *chunks* (as Eric pointed out).
>
> [...]
> The best you can do to try to avoid these sorts of problems is
> use the "ikeep" option to keep empty inode chunks around. That way
> if you remove a bunch of files and then fragment free space, you'll
> still be able to create new files until you run out of pre-allocated
> inodes....
>
What would it take to add an option to mkfs.xfs (or to create a
dedicated tool) that would efficiently[1] pre-allocate a specified
number of inode chunks when a filesystem is created? I know that XFS's
dynamic inode allocation is usually considered a "feature" relative to
filesystems like ext3, but there are cases where it's important to know
that you will not run out of inodes due to free-space fragmentation.
Note that "df -i" will still report a large number of "free inodes" when
this happens, so it's hard for a userspace application to know why it
got an error:
linux:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/cciss/c0d1p1 28353238 1873216 26480022 7% /mnt
linux:~# df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/cciss/c0d1p1 429977152 377017108 52960044 88% /mnt
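From userspace the failure just looks like a generic create error; a minimal shell sketch of what an application could log (the `create_or_diagnose` helper name and the /tmp path are made up for illustration):

```shell
# Hypothetical diagnostic helper: attempt to create a file and, on
# failure, dump "df -i" for the containing filesystem -- which, as the
# output above shows, can still report millions of free inodes even
# though XFS cannot carve a new inode chunk out of fragmented free space.
create_or_diagnose() {
    dir=$(dirname "$1")
    if touch "$1" 2>/dev/null; then
        echo "created $1"
    else
        echo "create of $1 failed; df -i still reports free inodes:"
        df -i "$dir"
    fi
}

create_or_diagnose /tmp/inode-demo-file
rm -f /tmp/inode-demo-file
```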
gn1-a-1:~# xfs_db -r /dev/cciss/c0d1p1 -c "freesp -s"
from to extents blocks pct
1 1 128231 128231 0.97
2 3 223964 555531 4.20
4 7 400255 2113089 15.97
8 15 838820 10436529 78.86
16 31 8 128 0.00
total free extents 1591278
total free blocks 13233508
average free extent size 8.31628
This filesystem was created with "-i maxpct=0,size=2048", so a new chunk
of 64 inodes requires a contiguous extent of 128 KiB (64 * 2048 bytes =
32 * 4 KiB blocks).
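The arithmetic behind that number, as a quick sanity check:

```shell
# One inode chunk is 64 inodes; with "-i size=2048" each inode is 2048
# bytes, so a chunk needs a contiguous 128 KiB extent, i.e. 32 blocks
# of 4 KiB.
echo $(( 64 * 2048 ))          # bytes per inode chunk: 131072
echo $(( 64 * 2048 / 4096 ))   # 4 KiB blocks per chunk: 32
```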
1. "efficiently" = significantly faster than a userspace script to
'touch' a few million files and then 'rm' them.
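For reference, the slow userspace workaround the footnote is comparing against would look roughly like this (the file count and target directory are placeholders; a real run would create a few million files on the XFS mount, which would need to be mounted with "-o ikeep" so the emptied chunks survive the removal):

```shell
# Sketch of the slow workaround: force XFS to allocate inode chunks by
# creating many files, then remove them.  Without the ikeep mount
# option the emptied chunks are freed again, defeating the purpose.
N=1000                 # placeholder; the real case is a few million
TARGET=$(mktemp -d)    # placeholder for a directory on the XFS mount
i=0
while [ "$i" -lt "$N" ]; do
    touch "$TARGET/f$i"
    i=$((i + 1))
done
COUNT=$(ls "$TARGET" | wc -l | tr -d ' ')
echo "created $COUNT files"
rm -rf "$TARGET"
```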