> Hello, I'm running a Centos 5.4 x64 Server with Raid0 HDDs and
> Extracting a tar archive with 6.1 million files (avg. Size
> just below 2KiB)
Ah, one of those "the file system is an optimal small-record
DBMS" sort of delusions.
> is blazingly fast after the fs has been generated. But after
> some time while doing deletes/moves (need to sort those files
> by their contents) the fs performance degenerates quite badly
Very funny: the same "the file system is an optimal small-record
DBMS" sort of delusion, only squared/cubed.
It may be amusing to hear what "sort those files by their
contents" is actually thought to mean here, as I have the
peculiar feeling that "sort" here really means "sort the
directory entries" (as in the "deletes/moves") rather than the
inodes or the file data, as if they were the same thing.
> 8k file writes/sec to about 200).
Why is this surprising? With millions of ~2KiB files the
workload is dominated by metadata updates and disk seeks, not by
data transfer, and RAID0 does essentially nothing for random
small-record IOPS.
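To make that concrete, here is a minimal micro-benchmark sketch
(hypothetical, not from the original thread; assumes Python 3)
contrasting the per-file cost of many tiny files with writing the
same bytes sequentially into one file:

```python
import os
import tempfile
import time

def make_tiny_files(directory, n, size=2048):
    """Create n files of `size` bytes each; every one costs a
    separate inode allocation plus a directory-entry update."""
    payload = b"x" * size
    for i in range(n):
        with open(os.path.join(directory, f"f{i:05d}"), "wb") as f:
            f.write(payload)

def make_one_big_file(path, n, size=2048):
    """Same total bytes: one inode, sequential appends."""
    payload = b"x" * size
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(payload)

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    make_tiny_files(d, 2000)
    tiny_secs = time.perf_counter() - t0

    t0 = time.perf_counter()
    make_one_big_file(os.path.join(d, "big"), 2000)
    big_secs = time.perf_counter() - t0

    print(f"2000 tiny files: {tiny_secs:.3f}s; "
          f"one big file: {big_secs:.3f}s")
```

On a disk-backed filesystem the tiny-file loop pays the metadata
tax on every iteration, which is exactly the overhead that also
dominates the poster's deletes and moves.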
> Is there any way I can find out what the issue is and how I
> can help it?
Some entry-level tutorial on storage systems? Some introductory
book on DBMSes?
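Since the whole complaint rests on the "file system as
small-record DBMS" idea, here is a minimal sketch of the
alternative (assuming Python with the bundled sqlite3 module;
the `records` table and the names/payloads are hypothetical):
millions of ~2KiB records as rows in one database file, instead
of millions of inodes.

```python
import sqlite3

# One database file instead of millions of tiny files:
# a B-tree keyed by name, with the ~2KiB payload stored inline.
conn = sqlite3.connect(":memory:")  # use an on-disk path in practice
conn.execute("CREATE TABLE records (name TEXT PRIMARY KEY, body BLOB)")

# Bulk load in one transaction, not one metadata update per file.
with conn:
    conn.executemany(
        "INSERT INTO records (name, body) VALUES (?, ?)",
        (
            (f"file{i:07d}", f"record {i}".encode().ljust(2048, b"\0"))
            for i in range(10_000)
        ),
    )

# "Sorting files by their contents" becomes a query,
# not a storm of renames and directory-entry rewrites.
rows = conn.execute(
    "SELECT name FROM records ORDER BY body LIMIT 3"
).fetchall()
print(rows)
```

Deletes and reorderings then touch B-tree pages in batched
transactions rather than scattering seeks across a huge
directory tree, which is precisely the difference the original
poster is tripping over.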
What most amuses or depresses me is that this question, or very
close variants of it, gets asked quite regularly on this (and
other filesystem-oriented) mailing lists.