LWN.net article: creating 1 billion files -> XFS loses
Emmanuel Florac
eflorac at intellique.com
Tue Sep 7 01:46:43 CDT 2010
On Tue, 7 Sep 2010 08:04:10 +1000, you wrote:
> Oh, that's larger than I've ever run before ;)
Excellent :) It still works fine afterwards; mount, umount, etc. work
flawlessly. Memory consumption, though, is huge :)
>
> Try using:
>
> # mkfs.xfs -n size=64k
>
> Will speed up large directory operations by at least an order of
> magnitude.
OK, we'll try that too :)
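
For anyone following along, a minimal sketch of trying that option without
touching a real disk: format a file-backed image instead. This assumes
xfsprogs is installed; the image path is just a placeholder.

```shell
# Create a 1 GiB sparse image and format it with 64k directory blocks
# (-n sets the naming/directory block size; -f forces mkfs on a file).
truncate -s 1G /tmp/xfs-test.img
mkfs.xfs -f -n size=64k /tmp/xfs-test.img
# Mounting it requires root and loop-device support:
#   mount -o loop /tmp/xfs-test.img /mnt/test
```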
> > Now we're starting afresh with 1000 directories with 1 million files
> > each :)
>
> Which is exactly the test that was used to generate the numbers that
> were published.
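
The shape of that test can be sketched in a few lines of shell; the counts
here are tiny stand-ins for the real 1000 x 1,000,000 run, and the paths are
hypothetical.

```shell
# Hedged sketch of the workload: N directories with M empty files each.
N=3; M=5
root=$(mktemp -d)
for d in $(seq 1 "$N"); do
    mkdir "$root/dir$d"
    for f in $(seq 1 "$M"); do
        : > "$root/dir$d/file$f"   # create an empty file
    done
done
# Count the files created (N*M = 15 in this toy run):
find "$root" -type f | wc -l
```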
>
> > (Kernel version used : vanilla 2.6.32.11 x86_64 smp)
>
> Not much point in testing that kernel - delayed logging is where the
> future is for this sort of workload, which is what I'm testing.
I'll compile a 2.6.36rc for comparison.
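
For reference, delayed logging in kernels of that era was enabled at mount
time; a sketch, where the device and mount point are placeholders:

```shell
# Hedged sketch: the delaylog mount option (merged around 2.6.35 and still
# experimental at the time) turns on XFS delayed logging.
mount -o delaylog /dev/sdX /mnt/test
```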
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac at intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
More information about the xfs mailing list