LWN.net article: creating 1 billion files -> XFS loses
Christoph Hellwig
hch at infradead.org
Thu Aug 19 07:05:12 CDT 2010
On Thu, Aug 19, 2010 at 01:12:45PM +0200, Michael Monnerie wrote:
> The subject is a bit harsh, but overall the article says:
> XFS is slowest on creating and deleting a billion files
> XFS fsck needs 30GB RAM to fsck that 100TB filesystem.
>
> http://lwn.net/SubscriberLink/400629/3fb4bc34d6223b32/
The creation and deletion performance is a known issue, and to a large
extent fixed by the new delaylog code.  We're not quite as fast as ext4
yet, but it's getting close.
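(For context, a minimal sketch of the kind of metadata-heavy workload being
discussed: create a large number of empty files, then unlink them, timing each
phase.  This is not the benchmark used in the LWN article; the directory path,
file count, and naming scheme below are purely illustrative.  On kernels of
that era the delayed logging code was enabled with the "delaylog" mount
option.)

#!/usr/bin/env python3
# Illustrative metadata microbenchmark: create and unlink many empty files.
# Zero-byte files mean nearly all the cost is inode and directory metadata
# updates, which is the part that delayed logging batches in the journal.
import os
import time

TARGET_DIR = "/mnt/xfs-test/files"   # assumed XFS mount point, not from the article
NUM_FILES = 1_000_000                # far smaller than the 1-billion-file test

def create_files(directory, count):
    os.makedirs(directory, exist_ok=True)
    start = time.time()
    for i in range(count):
        with open(os.path.join(directory, "f%09d" % i), "w"):
            pass
    return time.time() - start

def remove_files(directory, count):
    start = time.time()
    for i in range(count):
        os.unlink(os.path.join(directory, "f%09d" % i))
    return time.time() - start

if __name__ == "__main__":
    print("create: %.1fs" % create_files(TARGET_DIR, NUM_FILES))
    print("unlink: %.1fs" % remove_files(TARGET_DIR, NUM_FILES))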
The repair result looks a lot like the pre-3.1.0 xfsprogs repair.