On Mon, Oct 18, 2010 at 05:07:33PM +0200, Markus Roth wrote:
> I'm running a Centos 5.4 x64 Server with Raid0 HDDs and XFS.
What's the layout of your storage? How many disks, what size, etc?
How much RAM, number of CPUs in your server?
Also, what's the output of 'xfs_info <mntpt>'?
> I did some performance tweaking according to  as performance
> was not good enough with std. options.
Using information from a 7 year old web page about how someone
optimised their filesystem for a specific bonnie++ workload
is not really a good place to start when it comes to real-world
workloads.
> Extracting a tar archive with 6.1 million files (avg. size just
> below 2KiB) is blazingly fast after the fs has been generated. But
> after some time while doing deletes/moves (need to sort those
> files by their contents) the fs performance degrades quite
> badly (from 8k file writes/sec to about 200).
You need to understand why the performance falls through the floor
first, then work out what needs optimising. It may be that your sort
algorithm has exponential complexity, or you are running out of RAM,
or something else. It may not have anything at all to do with the
filesystem.
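For instance, a common way a content sort goes non-linear is re-reading
file data on every comparison. A hypothetical sketch (the approach and
names are mine, not the poster's) that reads each file exactly once by
sorting on a content digest instead:

```python
import hashlib

def sort_files_by_content(paths):
    """Return paths ordered by the SHA-256 digest of their contents.

    Each file is read exactly once; the sort then works on small
    in-memory digests, so no file is re-read during comparisons.
    """
    digests = []
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large files don't need to fit in RAM.
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        digests.append((h.digest(), path))
    digests.sort()
    return [path for _, path in digests]
```

This also groups files with identical contents next to each other, which
is usually what "sort by contents" is after.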
The symptoms you describe sound like the difference between
operation in cache vs out of cache (i.e. RAM speed vs disk speed),
and if so then no amount of filesystem tweaking will fix it.
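A quick back-of-the-envelope check using the numbers you posted (the
per-file in-memory metadata overhead below is an assumption for
illustration, not a measured figure):

```python
# Rough working-set estimate for 6.1M files of ~2KiB each.
files = 6_100_000
avg_data = 2 * 1024      # ~2 KiB of file data (from the report above)
meta_overhead = 1024     # assumed per-file inode/dentry cache cost, bytes

working_set = files * (avg_data + meta_overhead)
print(f"approximate working set: {working_set / 2**30:.1f} GiB")
```

If that figure is larger than your RAM, the fast phase is simply the
period before the cache fills, and the slowdown is the transition to
disk-bound operation.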
However, if you describe your system and the algorithms you are using
in more detail, we might be able to help identify the cause...