On Thu, 19 Aug 2010 13:12:45 +0200,
Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx> wrote:
> The subject is a bit harsh, but overall the article says:
> XFS is slowest on creating and deleting a billion files
> XFS fsck needs 30GB RAM to fsck that 100TB filesystem.
>
> http://lwn.net/SubscriberLink/400629/3fb4bc34d6223b32/
So we ran a test with 1 KB files (space, space...) on a production
kernel, 2.6.32.11 (yes, I know, 2.6.38 should be faster, but we
upgrade our production kernels cautiously :).
mk1BFiles will create and delete 1000000000 files with 32 threads
Version: v0.2.4-10-gf6decd3, build: Sep 7 2010 13:39:34
Creating 1000000000 files, started at 2010-09-07 13:45:16...
Done, time spent: 89:35:12.262
Doing `ls -R`, started at 2010-09-11 07:20:28...
Stat: ls (pid: 18844) status: ok, returned value: 0
Cpu usage: user: 1:27:47.242, system: 20:18:21.689
Max rss: 229.01 MBytes, page fault: major: 4, minor: 58694
Compute size used by 1000000000 files, started at 2010-09-12 09:30:52...
Size used by files: 11.1759 TBytes
Size used by directory: 32.897 GBytes
Size used (total): 11.2080 TBytes
Done, time spent: 25:50:32.355
Deleting 1000000000 files, started at 2010-09-13 11:21:24...
Done, time spent: 68:37:38.117
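A quick back-of-the-envelope conversion of those wall-clock times into
per-file rates (my arithmetic, not output from the tool):

```python
# Convert the h:m:s figures above into rough files-per-second rates.
create_s = 89 * 3600 + 35 * 60 + 12   # 89:35:12 -> 322512 s
delete_s = 68 * 3600 + 37 * 60 + 38   # 68:37:38 -> 247058 s
n_files = 1_000_000_000

print(round(n_files / create_s))      # ~3101 creates/s
print(round(n_files / delete_s))      # ~4048 deletes/s
```

So creation averaged around 3100 files/s and deletion around 4000
files/s over the whole run, across all 32 threads.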
Test run on a dual Opteron quad core, 16 GB RAM, kernel 2.6.32.11
x86_64...
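For anyone curious about the shape of such a test: I don't have the
mk1BFiles source handy, but a scaled-down sketch of the same idea
(N threads each creating their share of small files, then deleting
everything) might look like this. The counts, layout, and helper names
are illustrative only, not the actual tool:

```python
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

N_THREADS = 32        # as in the test above
N_FILES = 1000        # the real run used 1_000_000_000
FILE_SIZE = 1024      # 1 KB files, as in the test

def create_files(root, start, count):
    """Create `count` small files, spread over subdirectories
    so that no single directory grows too large."""
    payload = b"x" * FILE_SIZE
    for i in range(start, start + count):
        d = os.path.join(root, str(i % 100))
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "f%d" % i), "wb") as f:
            f.write(payload)

root = tempfile.mkdtemp()
per_thread = N_FILES // N_THREADS
with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    for t in range(N_THREADS):
        start = t * per_thread
        # last thread picks up the remainder
        count = per_thread if t < N_THREADS - 1 else N_FILES - start
        pool.submit(create_files, root, start, count)

created = sum(len(files) for _, _, files in os.walk(root))
print("created:", created)
shutil.rmtree(root)   # the delete phase
```

Timing the create, walk, and delete phases separately would give the
same three numbers the log above reports.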
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@xxxxxxxxxxxxxx>
| +33 1 78 94 84 02
------------------------------------------------------------------------