On Tue, Feb 07, 2012 at 05:41:10PM +0000, Tom Crane wrote:
> Eric Sandeen wrote:
> >On 2/6/12 5:19 AM, Tom Crane wrote:
> >>Eric Sandeen wrote:
> >>>Newer tools are fine to use on older filesystems, there should be no
> >>>issue there.
> >>>running fsr can cause an awful lot of IO, and a lot of file reorganization.
> >>>(meaning, they will get moved to new locations on disk, etc).
> >>>How bad is it, really? How did you arrive at the 40% number? Unless
> >>xfs_db -c frag -r <block device>
> >which does:
> > answer = (double)(extcount_actual - extcount_ideal) * 100.0 /
> > (double)extcount_actual;
> >If you work it out, if every file was split into only 2 extents, you'd have
> >"50%" - and really, that's not bad. 40% is even less bad.
> Here is a list of some of the more fragmented files, produced using:
> xfs_db -r /dev/mapper/vg0-lvol0 -c "frag -v" | head -1000000 | sort -k4,4 -g | tail -100
> >inode 1323681 actual 12496 ideal 2
> >inode 1324463 actual 12633 ideal 2
> >inode 1320625 actual 20579 ideal 2
> >inode 1335016 actual 22701 ideal 2
> >inode 753185 actual 33483 ideal 2
> >inode 64515 actual 37764 ideal 2
> >inode 76068 actual 41394 ideal 2
> >inode 76069 actual 65898 ideal 2
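To make the quoted frag arithmetic concrete, here is a minimal sketch of the same calculation in Python (the helper name `frag_percent` is mine, not from xfs_db; the second call uses the actual/ideal extent counts for inode 76069 from the listing above):

```python
def frag_percent(extcount_actual, extcount_ideal):
    # Same arithmetic as the C snippet from xfs_db's "frag" command:
    # the share of extents beyond the ideal count, relative to the
    # actual count.
    return (extcount_actual - extcount_ideal) * 100.0 / extcount_actual

# Every file split into exactly 2 extents when 1 would do -> "50%".
print(frag_percent(2, 1))        # 50.0

# inode 76069 above: 65898 actual extents vs an ideal of 2,
# i.e. essentially 100% by this metric.
print(frag_percent(65898, 2))
```

This illustrates why the filesystem-wide "40%" figure alone is misleading: a modest average can hide individual files that are severely fragmented.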
Ok, so that looks like you have a fragmentation problem here. What
is the workload that is generating these files?