hi,
Bernhard R. Erdmann wrote:
we've been running XFS on the data disks of our HPC Linux cluster for a
while now. we are quite happy with xfs, thx guys for your work!
the setup is:
- dual >=1GHz box, 4GB mem
- lvm 0.9beta7, phys. volume size ~140 GB, logical vol for xfs: 100GB
- no additional mount options or options for mkfs.xfs were given
- kernel 2.4.8pre4-xfs, highly patched SuSE 7.0 (not that it should matter)
a user complained that an "rm -rf of 400MB" takes ~10 minutes (!) until
the command returns, whereas on the reiserfs systems we also have, it
takes seconds.
Some very important data is missing:
- what's the I/O performance of the disk subsystem?
why is that relevant? with reiserfs it takes seconds, so the
disks/controller cannot be the bottleneck.
but to answer your question: we don't have hamster-cage-style disks
hooked up. in this case the lvm containing XFS is a concatenated volume
over two 10krpm 72GB IBM SCSI disks on a single channel of a plain aic
controller.
which disk subsystem do you think would take 10 minutes for the task?
- what was the system doing during the observed 10 min?
don't know, since i wasn't the one doing it. but my best guess is
nothing else. also, even when the system is really in use, we don't see
high i/o read numbers.
CPU power doesn't count as much as disk I/O performance because
unlink(2) on XFS is a synchronous operation.
:-( why is that?
i don't know much about the shape of the data; my guess is that some
files are small (~100k) and others are big (a few hundred MB). i read in
the FAQ that XFS isn't particularly good at rm-rf'ing files, which isn't
really the issue for us, because 99.9% of the time data is being read
from the volume and not removed via rm -rf.
So, three files à 100 MB and 1,024 files à 100 KB are 400 MB in sum and
even a busy system shouldn't take 10 min for deleting 1,027 files. I
guess your estimate of the file sizes is slightly wrong.
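for the record, the sums in the quote above do add up (a quick
back-of-the-envelope check, decimal units assumed):

```python
# sanity-check the quoted figures: 3 files of 100 MB plus 1,024 files of 100 KB
big = 3 * 100 * 1000**2           # 300,000,000 bytes
small = 1024 * 100 * 1000         # 102,400,000 bytes
total_mb = (big + small) / 1000**2
files = 3 + 1024
print(files, total_mb)            # → 1027 402.4
```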
as i said above, the number of 10 minutes is what i was told. i am not
working with our HPC cluster myself, i only set it up. concerning the
estimate of the file sizes, this is also info i got. and since the
dataset is a couple of tens of GB, i am not really into headcounting
here. please consider it an estimate and pick a lower number if it
doesn't make sense.
so, again the question: are there mkfs options or mount options i should
set, without throwing the filesystem out of balance?
i'd love for my users not to experience this dent, since they likely
won't accept it and, other technical reasons notwithstanding, might vote
against using xfs.
cheers,
~dirkw
______________________________
Dirk Wetter @ Renaissance Techn.
mailto:<dirkw at rentec dot com>