On Wed, Apr 04, 2007 at 03:05:35PM +0200, Thomas Kaehn wrote:
> I've got a strange problem on one machine using XFS. Deleting large
> directories (containing about 100000 files, 20k each) using "rm -rf"
> lasts nearly as long as creating the files using a bash loop.
> RAM: 4 GB
> RAID10: 4x 320 GB disks connected to 3ware 9550SXU-8LP
> (Firmware Version = FE9X 3.08.00.004)
> The XFS was first created using default options and later on with
> "-d su=64k,sw=2 -l su=64k" which improved overall performance
> but not delete performance.
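fwiw, the stripe-aligned mkfs you're describing would be something like
this (device name is a placeholder, and su/sw here assume 2 data disks
with 64k stripe units on the RAID10):

```shell
# Hypothetical device; align data and log to the hw raid stripe
mkfs.xfs -d su=64k,sw=2 -l su=64k /dev/sdX
```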
have you tried w/o using the hw raid?
> Has anyone realized similar effects? On a different server (Dell
> 6850) the directory can be deleted within seconds. What could be the
> reason for the huge difference in delete performance?
a lot of log updates; does the other server have a battery-backed
write-cache like many cards do these days?
> | # time for i in `seq 1 100000`; do dd if=/dev/zero of=$i bs=1k count=20 >/dev/null 2>&1; done
> | real 6m6.814s
> | user 0m30.290s
> | sys 2m42.562s
that's about the same as my quick single-spindle cheap-desktop test
> | # time rm -rf y
> | real 5m18.034s
> | user 0m0.036s
> | sys 0m8.169s
v2 logs? what logbufs & logbsize are used?
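i.e. something like this at mount time (device and mountpoint are
placeholders; logbsize values above 32k need a v2 log):

```shell
# Hypothetical device/mountpoint; more and larger in-memory log buffers
# can batch up the metadata updates an unlink-heavy workload generates
mount -o logbufs=8,logbsize=256k /dev/sdX /mnt/test
```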
testing with my cheap crappy single-disk desktop workstation I get
"1m25.004s" for the delete