On Wed, 4 Apr 2007, Justin Piszcz wrote:
On Wed, 4 Apr 2007, Thomas Kaehn wrote:
Hi Justin,
On Wed, Apr 04, 2007 at 09:29:46AM -0400, Justin Piszcz wrote:
Please see below for "time" output.
| # time for i in `seq 1 100000`; do dd if=/dev/zero of=$i bs=1k count=20 > /dev/null 2>&1; done
|
| real 6m6.814s
| user 0m30.290s
| sys 2m42.562s
| # time rm -rf y
|
| real 5m18.034s
| user 0m0.036s
| sys 0m8.169s
Deletes are one area where XFS is a little slower than other filesystems.
You can increase the log size when creating the filesystem and also
raise logbufs to 8; that might help.
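As a rough sketch of that tuning (the device name, mount point, and log size below are purely illustrative, not from this thread):

```shell
# Create the filesystem with a larger log (128 MB here, an arbitrary example)
mkfs.xfs -l size=128m /dev/md3

# Mount with 8 in-memory log buffers and a larger log buffer size
mount -o logbufs=8,logbsize=262144 /dev/md3 /mnt/scratch
```

Both knobs trade memory for fewer synchronous log writes, which is what small-file create/delete workloads hammer.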
Thanks for your suggestions.
I also tried increasing the log size and the logbufs mount option. That
improves create and delete times to the values above (with default options
both are around 9-10 minutes).
The strange thing is that on a similar Dell machine, also using XFS,
deletes take only ten seconds, which would match the user and system time.
More than five minutes to delete 100000 files, where ext3 needs
3 seconds on the same machine, is more than a little bit slower:
to my mind there must be something wrong. JFS needs around 18 seconds.
However, I am not sure whether the problem is hardware- or software-related.
I've also tried the newest 3ware firmware, but this did not lead
to an improvement.
Ciao,
Thomas
--
Thomas Kähn WESTEND GmbH | Internet-Business-Provider
Technik CISCO Systems Partner - Authorized Reseller
Im Süsterfeld 6 Tel 0241/701333-18
tk@xxxxxxxxxxx D-52072 Aachen Fax 0241/911879
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The company is registered in the commercial register of Aachen under HRB 7608
Managing directors: Thomas Neugebauer, Thomas Heller, Michael Kolb
The benchmark:
$ time for i in `seq 1 100000`; do dd if=/dev/zero of=$i bs=1k count=20 > /dev/null 2>&1; done
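For reference, the one-liner above can be written as a small self-contained script; this version is scaled down to 1000 files so it finishes quickly, and the scratch directory is a placeholder, not a path from the thread:

```shell
#!/bin/bash
# Sketch of the create/delete benchmark from this thread, scaled down.
# DIR is a throwaway scratch directory, not from the original post.
DIR=$(mktemp -d)
cd "$DIR" || exit 1

N=1000
# Create N small files (20 KB each from /dev/zero), as in the original loop.
time for i in $(seq 1 "$N"); do
    dd if=/dev/zero of="$i" bs=1k count=20 >/dev/null 2>&1
done

# Delete them, matching the "time rm -rf" measurement.
cd /
time rm -rf "$DIR"
```

Running it on different filesystems and RAID layouts gives directly comparable real/user/sys numbers.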
1. Six 400GB SATA drives using SW RAID5:
real 6m24.411s
user 0m43.097s
sys 2m17.350s
2. Four Raptor 150 ADFD drives using SW RAID5:
real 3m16.962s
user 0m42.899s
sys 2m15.420s
3. Two Raptor 74GB *GD drives using SW RAID1:
real 3m19.241s
user 0m41.731s
sys 2m15.873s
I used the DEFAULT create options for XFS, as I find it highly optimizes
itself (at least with SW RAID). The exception is the root FS; I optimized
that a while ago and kept it:
/dev/md2 / xfs
logbufs=8,logbsize=262144,biosize=16,noatime,nodiratime,nobarrier 0 1
For my regular RAID5s though I use defaults,noatime.
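To verify which options a mounted XFS filesystem is actually using, the active mount table can be checked; this is a generic sketch, not a command from the original post:

```shell
# List active XFS mounts and their options; logbufs/logbsize appear here
# when set. `|| true` keeps the command from failing if no XFS is mounted.
grep ' xfs ' /proc/mounts || true
```

This shows the options actually in effect, which can differ from what /etc/fstab requests.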
Justin.