xfs_fsr question for improvement
Dave Chinner
david at fromorbit.com
Mon May 3 07:17:16 CDT 2010
On Mon, May 03, 2010 at 08:49:43AM +0200, Michael Monnerie wrote:
> On Saturday, 17 April 2010, Dave Chinner wrote:
> > They have thousands of extents in them and they are all between
> > 8-10GB in size, and IO from my VMs is still capable of saturating
> > the disks backing these files. While I'd normally consider these
> > files fragmented and candidates for running fsr on them, the number
> > of extents is not actually a performance limiting factor and so
> > there's no point in defragmenting them. Especially as that requires
> > shutting down the VMs...
>
> I personally care less about file fragmentation than about
> metadata/inode/directory fragmentation. This server gets accessed by
> numerous people:
>
> # time find /mountpoint/ -inum 107901420
> /mountpoint/some/dir/ectory/path/x.iso
>
> real 7m50.732s
> user 0m0.152s
> sys 0m2.376s
>
> It took nearly 8 minutes to search through that mount point, which is
> 6TB in size on a RAID-5 array striped over seven 2TB disks, so search
> speed should be high.
Not necessarily, as your RAID array has shown.
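As an aside: with GNU find, a lookup by inode number can bail out at the
first match instead of walking the rest of the tree. That only halves the
expected walk on average and does nothing about the underlying random-IO
cost, and it will miss any additional hard links to the same inode, but
it's a cheap win for one-off lookups:

```shell
# Stop at the first matching path (GNU find's -quit action);
# the inode number here is the one from your example above.
find /mountpoint/ -inum 107901420 -print -quit
```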
>
> Especially as there are only 765,000 files on that disk:
> Filesystem Inodes IUsed IFree IUse%
> /mountpoint 1258291200 765659 1257525541 1%
>
> Wouldn't you say an 8-minute search over just 765,000 files is slow,
> even when only using 7x 2TB 7200rpm disks in RAID-5?
Depends on the directory structure and the number of IOs needed to
traverse it. If it's only a handful of files per directory, then you
get no internal directory readahead to hide read latency. That
results in a small random synchronous read workload that might
require a couple of hundred thousand IOs to complete.
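As a rough sanity check on that estimate (the per-IO latencies below are
assumptions, not measurements), dividing the observed elapsed time by a
plausible synchronous random read latency gives IO counts in that
ballpark:

```shell
# Back-of-envelope: observed wall time of the find run, divided by an
# assumed effective latency per synchronous random read.
elapsed_ms=470732                 # real 7m50.732s, in milliseconds
for lat_ms in 2 4 8; do           # assumed per-IO latencies
    echo "${lat_ms} ms/IO -> ~$((elapsed_ms / lat_ms)) IOs"
done
```

At around 2ms per IO (plausible with some cache hits and short seeks),
that works out to roughly 235,000 IOs.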
More information about the xfs mailing list