We've completed much of the endian conversion. Not yet converted are
log recovery (except in the degenerate clean-log case), extended
attributes, and quotas.
We've conducted a number of experiments on a range of IA32 systems,
falling into two categories:
(a) small (1p or 2p, 64-128 MB)
(b) not small (2p, 1 GB)
The experimental method was to make a new native (little-endian)
XFS filesystem and run the benchmark until a set of statistically
stable results was produced (not strictly possible in some cases,
because the benchmarks are so poorly engineered or their measurement
methodology is so bad) ... then do it all again for a new
MIPS (big-endian) XFS filesystem.
For I/O-bound tests, we compared the user+system CPU times (since
the endian conversions consume CPU time, not I/O time) ... this gives a
pessimistic comparison, as the slow-down (if any) would be less dramatic
if elapsed times were compared.
For CPU-bound tests (yes, many of these so-called "filesystem" tests do
not do much I/O!) we used either elapsed or CPU times, whichever was
more easily available (some tests report throughput which is inversely
proportional to elapsed time for fixed amounts of work).
The tests and analysis are as follows:
1. AIM Suite IX (or at least the FS intensive parts, creat-clo, disk_*,
link_test, sync_disk_*)
No significant differences.
2. lmbench (or at least the *delete, *create and mmaplat tests) ...
you need to run this one 20+ times to get numbers that are even
close to useful (others please note ... variance analysis is your
friend).
No significant differences.
3. Compiling parts of XFS after make clean. There was trouble
getting this to complete, so on the small system the build was confined
to the user commands (including the sim library), and on the not-small
system the build covered the kernel, modules and XFS user commands.
1% degradation for the big-endian case.
4. find-corruption. A single-threaded test that starts with 50+ files
with sizes in the range 1 byte to 8+ Mbytes (roughly a logarithmic
distribution), then treats each file as a chain: choose a chain at
random, copy the last file on the chain to make a new file on the
end of the chain ... repeat this 300 times ... compare the last file
in the chain with the head of the chain ... remove all but the head
of each chain ... repeat the whole process 5 times.
1% degradation for the big-endian case.
5. dbench. Abandoned as we could not get close to reproducible numbers.
We have one last test to try: a simulated multi-user workload for a
mail server. But I do not expect this to show a different trend, so we
are proposing that ...
+--------------------------------------------------------------+
| XFS become the Irix MIPS format, i.e. BIG endian, everywhere |
+--------------------------------------------------------------+
We've already demonstrated that the Linux big endian format is
interchangeable with Irix by migrating filesystems between an Irix
box and an IA32 Linux box and back again.
Note, when we make the necessary changes in the XFS open source tree to
turn this on as the default and remove the little-endian support ...
all existing XFS filesystems on Linux will become unmountable, so
everyone will have to recreate their XFS filesystems ... which is why
we want this to happen as soon as possible.
Feedback welcome.