I ripped out enough of the non-XFS code to get it working on our MediaGX
machine. I now think the remaining problem is in that etherboot stuff and
not the kernel debugger. Hopefully Monday or Tuesday I can put some effort
into tracking that down.
Now I have another question...
I'm not sure how much of this is secret and how much isn't, but the scene
looks something like this: we are building an embedded machine that has
a huge harddrive in it. Our current requirement is to stream three
streams of data at 2.5 MBytes/sec each: two writers and one reader.
With Ext2 we can shoulder that load with our MediaGX. We have a test
harness that measures it. With standard system I/O we can sustain that
data rate with 13% CPU load. With mapped I/O we can do it with 9% CPU
load. The drive is tricked out with full UDMA.
So I hacked the XFS kernel up so it would run on the MediaGX, and the
numbers were stunning. With standard I/O it couldn't sustain that data
rate with our default test settings: it used ~70% CPU and missed the mark
by about 30%. (We can adjust a few parameters, but since the 5th and
final game of the western conference semis between the Detroit DeadWings
and the Colorado Avs is on in about an hour, I'm going to do that another
time and not today.) With mapped I/O it really couldn't sustain the data
rate: it used ~81% CPU and took over twice as long as it should have.
Our test isn't scientific yet; I just hacked a kernel and ran it. I
also haven't tuned anything. So: what kinds of performance work remain
to be done on XFS? I understand that it's still a long way from release,
but is there any sort of estimate of how it should perform and how much
CPU it should use? I would expect XFS to outperform Ext2 in throughput,
but will that cost a lot of CPU cycles?
Thanks, and have a great weekend.