Hi All.
We are running dbench on a machine with dual Xeons (Hyper-Threading turned off),
1 GB of RAM, and a 4-drive software RAID5. The kernel is 2.4.29 with XFS 1.2,
and we are using LVM.
The throughput is fine until the number of clients reaches around 16. At that
point the throughput plummets by about 40%. We are trying to avoid that and see
how we could get consistent throughput, perhaps by sacrificing some peak
performance.
One could argue that at the point where the performance drops we are actually
beginning to see I/O requests missing the page cache and going to disk. That is
our current guess.
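
A rough way to check that guess is to sample /proc/meminfo while dbench runs
and watch for the point where Buffers and Cached stop absorbing the writes.
A minimal sketch (field names as they appear in 2.4's /proc/meminfo):

    import time

    FIELDS = ("MemFree:", "Buffers:", "Cached:")

    def sample():
        vals = {}
        for line in open("/proc/meminfo"):
            parts = line.split()
            if parts and parts[0] in FIELDS:
                vals[parts[0].rstrip(":")] = int(parts[1])  # value in kB
        return vals

    # Print one line per second; run alongside dbench and look for the
    # point where Buffers/Cached flatten out and MemFree bottoms out.
    # Ctrl-C to stop.
    while True:
        v = sample()
        print("%(MemFree)d kB free  %(Buffers)d kB buffers  %(Cached)d kB cached" % v)
        time.sleep(1)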
We tried tuning the buffer age and the interval at which the buffer-flushing
daemon (bdflush) runs, roughly as sketched below, but it had no effect on the
curve. So transactional metadata operations do not seem to be causing the
bottleneck.
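
For reference, the knob-turning looked something like this; the slot positions
follow 2.4's Documentation/sysctl/vm.txt and the values are only examples of
what we varied, so treat both as assumptions to verify against your kernel:

    BDFLUSH = "/proc/sys/vm/bdflush"

    def read_params():
        return [int(x) for x in open(BDFLUSH).read().split()]

    def write_params(params):
        open(BDFLUSH, "w").write(" ".join(str(x) for x in params) + "\n")

    params = read_params()
    print("before: %s" % params)
    # Assumed slots for 2.4: index 4 is the bdflush wakeup interval and
    # index 5 the maximum buffer age, both in jiffies (HZ=100 here).
    params[4] = 100    # wake bdflush every ~1 s instead of ~5 s
    params[5] = 600    # flush buffers after ~6 s instead of ~30 s
    write_params(params)
    print("after:  %s" % read_params())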
Are there any VM patches that smooth out the page cache dirty-page writeout,
so that instead of this sudden drop in throughput we get a smoother transition?
I ran a kernel profiler during the test and I don't see any dentry cache or
inode cache flushes.
A similar test using ext2 on a plain disk partition (no md or LVM) shows
similar performance.
However, for ext2, when we tweak the bdflush parameters in /proc (specifically
the maximum age of metadata buffers) we can push the point where throughput
falls out to around 40 clients.
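
That tweak was essentially raising the maximum buffer age well past its
30-second default, something along these lines (slot 5 is the assumed
age_buffer position as above, and the value is only an example):

    BDFLUSH = "/proc/sys/vm/bdflush"
    params = open(BDFLUSH).read().split()
    params[5] = "12000"   # ~120 s at HZ=100; example value only
    open(BDFLUSH, "w").write(" ".join(params) + "\n")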
But we are more interested in finding out ways to solve the XFS case.
Any insights on this?
Thanx
Ravi