Eric Sandeen wrote:
> This is for bug #850,
> http://oss.sgi.com/bugzilla/show_bug.cgi?id=850
> XFS file system segfaults, repeatedly and 100% reproducible in 2.6.30,
> 2.6.31
Grr, well, this slowed things down a little on about 200,000 entries in a
~10MB directory on a single SATA spindle.
stracing /bin/ls (no color/stats, output to /dev/null) 4x in a row with
cache drops in between shows:
stock:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.241378         414       583           getdents
100.00    0.231012         396       583           getdents
100.00    0.244977         420       583           getdents
100.00    0.258624         444       583           getdents
patched:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.285928         769       372           getdents
100.00    0.273747         736       372           getdents
100.00    0.271060         729       372           getdents
100.00    0.251360         676       372           getdents
So that's slowed down a bit. Odd that the stock code, with more getdents
calls, came out faster overall...?
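One way to take glibc's buffering out of the picture and see how the call
count scales with the buffer size is to drive getdents64 directly. A quick
sketch (mine, not part of the patch; the struct is declared by hand the
same way the getdents(2) man page does it):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

/* No glibc wrapper for getdents64 here, so declare the record layout. */
struct linux_dirent64 {
	uint64_t	d_ino;
	int64_t		d_off;
	unsigned short	d_reclen;
	unsigned char	d_type;
	char		d_name[];
};

int main(int argc, char **argv)
{
	const char *dir = argc > 1 ? argv[1] : ".";
	size_t bufsize = argc > 2 ? (size_t)atoi(argv[2]) : 32768;
	char *buf = malloc(bufsize);
	int fd = open(dir, O_RDONLY | O_DIRECTORY);
	long nread, calls = 0, entries = 0;

	if (fd < 0 || !buf) {
		perror("setup");
		return 1;
	}

	/* One full pass over the directory, counting getdents64 calls. */
	while ((nread = syscall(SYS_getdents64, fd, buf, bufsize)) > 0) {
		calls++;
		for (long pos = 0; pos < nread; ) {
			struct linux_dirent64 *d =
				(struct linux_dirent64 *)(buf + pos);
			entries++;
			pos += d->d_reclen;
		}
	}

	printf("%ld entries in %ld getdents64 calls (bufsize %zu)\n",
	       entries, calls, bufsize);
	close(fd);
	free(buf);
	return nread < 0;
}

(Run it against the test directory with e.g. 4096 vs 32768, cache drop
before each pass; the path and sizes there are just placeholders.)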
But one thing I noticed is that we choose readahead based on a guess at
the readdir buffer size, and glibc's readdir, at least, sizes its buffer
like this:
const size_t default_allocation =
        (4 * BUFSIZ < sizeof (struct dirent64) ?
         sizeof (struct dirent64) : 4 * BUFSIZ);
where BUFSIZ is a magical 8192.
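Spelling that arithmetic out with a throwaway check (mine, not glibc's),
since the 32k further down falls straight out of it:

#define _GNU_SOURCE
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	/* Same expression glibc uses to size its readdir buffer. */
	const size_t default_allocation =
		(4 * BUFSIZ < sizeof (struct dirent64) ?
		 sizeof (struct dirent64) : 4 * BUFSIZ);

	/* With BUFSIZ at 8192 this prints 32768, i.e. 32k per getdents. */
	printf("default_allocation = %zu\n", default_allocation);
	return 0;
}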
But we cap that guess at PAGE_SIZE, which gives us almost no readahead ...
So after bumping our "bufsize" up to 32k, things speed up nicely. I wonder
if the stock, broken bufsize method led to more inadvertent readahead....
32k:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.176826         475       372           getdents
100.00    0.177491         477       372           getdents
100.00    0.176548         475       372           getdents
100.00    0.139812         376       372           getdents
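Roughly, the change amounts to swapping the PAGE_SIZE cap in the readdir
path for glibc's 32k default; a sketch of the idea only, not the exact
diff (the variable and field names here are illustrative):

	/*
	 * Sketch only: cap the bufsize guess at glibc's default readdir
	 * allocation (4 * BUFSIZ == 32k) instead of PAGE_SIZE, so the
	 * dir2 leaf code computes a sensible readahead window.
	 */
	bufsize = (size_t)min_t(loff_t, 32768, inode->i_size);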
Think it's worth it?
-Eric