On 15 May 2008, at 06:14, Barry Naujok wrote:
On Thu, 15 May 2008 14:57:00 +1000, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
On Wed, May 14, 2008 at 05:55:45PM +1000, Barry Naujok wrote:
Not quite sure if this is the right test, but I did 1000 creates on
a brand new filesystem with and without ci on my SATA drive, both
sustained almost 600 creates per second.
I believe creates would be the worst-case scenario for not adding
negative dentries.
No, negative dentries shouldn't have any effect on that. Negative
dentries help to optimize away lookups. E.g. think of the PATH search,
and say your shell is not in the first directory listed there.
A negative dentry for it means that you don't have to do a lookup in
the first directories every time someone wants to use the shell.
Ah, that makes more sense. I did a test of a million lookups of a
non-existent file in a short-form directory (dual 1.6G Opteron):
CI = 4.6s
non-CI = 3.7s
And a directory with 10000 files:
CI = 10.3s
non-CI = 3.9s
Yes, and you can get the performance back if you allow negative
dentries to be created. You just have to make sure that every time a
directory entry is created in directory X, all negative dentries which
are children of directory X are thrown away.
Failure to do so will result in lookups returning ENOENT even though a
file now exists that matches case-insensitively. This happens because
the VFS will find the negative dentry and return ENOENT without
calling the file system's lookup method, so the file system never
gets a chance to discover the new matching directory entry...
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/