On 2/13/2012 3:16 PM, Christoph Hellwig wrote:
> I'd have to look into it in more detail. IIRC you said you're using
> RAID6 which can be fairly nasty for small reads. Did you use the
> inode64 mount option on the filesystem?
On 2/13/2012 10:57 AM, Richard Ems wrote:
> # mount | grep xfs
> /dev/sda1 on /backup/IFT type xfs
With a 16TB+ XFS filesystem (20TB here), isn't inode64 the default allocator?
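If you'd rather verify than guess, here's one quick check, a sketch assuming GNU find and the /backup/IFT mount point from your earlier mail. An inode number above 2^32 - 1 can only have been allocated by the inode64 allocator, so finding a single 64-bit inode settles it:

```shell
#!/bin/sh
# Sketch: one inode number above 2^32 - 1 proves inode64 is in effect.
is_64bit_inode() {
    [ "$1" -gt 4294967295 ]    # 2^32 - 1
}

# To get the largest inode on the filesystem (GNU find; /backup/IFT is
# the mount point from the earlier mail):
#   find /backup/IFT -xdev -printf '%i\n' | sort -n | tail -1
# 8589934592 below is just an illustrative value, not from your system.
if is_64bit_inode 8589934592; then
    echo "needs inode64"
fi
```

/proc/mounts should also show inode64 explicitly if it was passed as a mount option; the absence of 64-bit inodes doesn't strictly prove inode32, but on a full 20TB FS with inode64 you'd almost certainly see some.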
> 20 TB XFS
> partition is 100% full
Does the fact that the FS is 100% full make any difference here?
> The case is connected through SCSI.
Do you mean iSCSI? Does the host on which you're running your "find
dir" command have a 1GbE or 10GbE connection to the InforTrend unit?
More than one connection using bonding or multipath? Directly connected
or through one or more switches? What brand are the switches, and are
they under heavy load?
If it's a single direct 1GbE connection, it's possible you're running out
of host pipe bandwidth, which is only ~100MB/s in each direction. Check
iotop/iostat while running your command to see if you're maxing out the
interface with either read or write bytes. If either is at or above
100MB/s then your host pipe is full, and that's likely a significant part
of your long run time.
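For reference, the 1GbE arithmetic as a quick sketch. The read/write figures below are made-up sample readings, not from your system; substitute the rMB/s and wMB/s columns you see in `iostat -xm 2`:

```shell
#!/bin/sh
# Sketch: flag a saturated single 1GbE link from iostat-style MB/s
# figures. READ_MBPS/WRITE_MBPS are illustrative sample values only.
LINE_MBPS=$(awk 'BEGIN { print 1000 / 8 }')   # 1 Gb/s raw = 125 MB/s
READ_MBPS=105.0
WRITE_MBPS=3.2
echo "1GbE raw line rate: ${LINE_MBPS} MB/s"
awk -v r="$READ_MBPS" -v w="$WRITE_MBPS" 'BEGIN {
    # TCP/IP and Ethernet framing overhead eat the raw 125 MB/s down
    # to roughly 100-118 MB/s of payload, hence the ~100 threshold.
    if (r >= 100 || w >= 100)
        print "at or near link saturation: host pipe is the bottleneck"
    else
        print "link not saturated"
}'
```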
Also check the performance data in the management interface on the
InforTrend unit to see if you're hitting any limits there (if it has
such a feature). RAID6 is numerically intensive and that particular
controller may not have the ASIC horsepower to keep up with the IOPS
workload you're throwing at it.
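To put rough numbers on that, a back-of-envelope sketch; the spindle count and per-disk IOPS below are assumptions for illustration, not specs of your InforTrend array:

```shell
#!/bin/sh
# Back-of-envelope sketch of what the RAID controller must sustain.
# 16 spindles at ~150 random IOPS each are assumed figures.
DISKS=16
PER_DISK_IOPS=150
awk -v n="$DISKS" -v d="$PER_DISK_IOPS" 'BEGIN {
    raw = n * d
    printf "raw array IOPS (random reads): %d\n", raw
    # A small RAID6 write costs ~6 disk ops: read data, read P, read Q,
    # then write all three back -- hence the divide-by-6 write penalty.
    printf "small-write IOPS ceiling:      %d\n", raw / 6
}'
```

A metadata-heavy find is mostly small random reads, so the parity math matters less there, but any concurrent write traffic pays the full penalty and competes for the same spindles.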
Lastly, please paste the exact command or script you refer to as "find
dir", which is generating the workload in question.