I re-ran the bonnie++ over NFS and dd over NFS tests with the fixed
kernel and the results are good. The system is as before: uni-processor,
128M ram, no RAID:
time dd if=/dev/zero of=bigFile bs=1024k count=??

fs     write size (MByte)   throughput   time
       (same as count=)     (MByte/s)
xfs     50                   8.6          5.8s
xfs    100                   6.8         14.8s
xfs    200                   7.7         26s
xfs    400                   7.8         51s
xfs    800                   7.5         1min 47s
ext2   800                   8.6         1min 33s
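For reference, a run like the rows above can be scripted roughly as follows (a sketch only; the output path would really sit on the NFS-mounted filesystem under test, and the throughput column is just size divided by elapsed time):

```shell
#!/bin/sh
# Sketch of the write-throughput test above. "bigFile" would live on
# the NFS mount in the real test; here it's just the current directory.
out=bigFile
count=50                                  # MB to write, as in count=
start=$(date +%s)
dd if=/dev/zero of="$out" bs=1024k count=$count 2>/dev/null
sync                                      # flush so timing covers the write
end=$(date +%s)
secs=$((end - start))
[ "$secs" -gt 0 ] || secs=1               # avoid divide-by-zero on fast runs
echo "$count MB in ${secs}s = $((count / secs)) MB/s"
rm -f "$out"
```

Note the `sync`: without it, dd can return while data is still in the page cache, which overstates the throughput for small write sizes.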
So that's pretty close to the ext2 speeds, and massively better than before.
And the bonnie++ results:
Version 1.00h        ------Sequential Output------ --Sequential Input- --Random-
                     -Per Chr- --Block-- -Rewrite-  -Per Chr- --Block-- --Seeks--
Machine         Size K/sec %CP K/sec %CP K/sec %CP  K/sec %CP  K/sec %CP  /sec %CP
NFS to xfs       64M  3277  72  4518  27  2518  19   5046  99  +++++ +++ 1254.9  28
NFS to ext2      64M  3190  69  4989  28  2471  21   5048  99  +++++ +++ 1255.6  28
NFS to xfs      256M  3272  72  4660  26  8078  35   4983  99 104856  98 1222.9   3
NFS to ext2     256M  2682  59  4952  28  8177  36   4978  99 103544  98 1224.1  31

                     ------Sequential Create------ --------Random Create--------
              files  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
NFS to xfs       16     78  76   213   1    95  53    25  91  1472  26    37  92
NFS to ext2      16     61  58   163   3    88  54    22  84    30  97    40  90
Looks fine.
The NFS block read results are obviously skewed because the O2k box I
was testing from had 4G of RAM, so the reads were served from its cache.
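For completeness, the runs above correspond to an invocation along these lines (mount point hypothetical; `-s` is the test file size in MB and `-n` the number of create-test files in multiples of 1024, so 16 matches the "files 16" column):

```shell
# Hypothetical mount point; sketch of the 256M rows above.
# -d test directory, -s file size (MB), -n create-test files (x1024).
cmd="bonnie++ -d /mnt/nfs-scratch -s 256 -n 16"
echo "would run: $cmd"
```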
NFS + xfs over RAID0 with an SMP kernel also now works just as well as
ext2. 512M of RAM in the machine also works without a slowdown.
The one downside is that whilst testing these out I got a few weird
messages logged, like this one, seen sometimes when shutting down NFS
exports on the xfs box:
...
Mar 7 22:14:01 jersey portmap: portmap shutdown succeeded
Mar 7 22:14:08 jersey kernel: xfs_unmount: xfs_ibusy says error/16
Mar 7 22:14:08 jersey kernel: XFS unmount got error 16
Mar 7 22:14:08 jersey kernel: linvfs_put_super: vfsp/0xc1289320 left dangling!
Mar 7 22:14:08 jersey kernel: VFS: Busy inodes after unmount. Self-destruct in 5 seconds. Have a nice day...
And at seemingly random times I got this twice:
Mar 7 22:30:00 jersey kernel: kmem_cache_destroy: Can't free all objects c12f0890
Mar 7 22:30:00 jersey kernel: kmem_cache_destroy: Can't free all objects c12f0c14
Mar 7 22:30:00 jersey kernel: kmem_cache_destroy: Can't free all objects c12f0c78
And twice I got a 'kernel BUG' message when trying to mount an xfs
partition that had mounted fine just before:
# mount /scratch
mount: fs type xfs not supported by kernel
# lsmod
Module Size Used by
xfs 439408 1 (initializing)
xfs_support 9232 0 (autoclean) [xfs]
pagebuf 28352 1 (autoclean) [xfs]
3c59x 23712 1 (autoclean)
agpgart 22560 0 (unused)
and I had to reboot before I could mount an xfs disk again. Apart from
that, the machine apparently still worked OK. The message logged was:
Mar 7 20:43:27 jersey kernel: kernel BUG at slab.c:804!
Mar 7 20:43:27 jersey kernel: invalid operand: 0000
Mar 7 20:43:27 jersey kernel: CPU: 0
Mar 7 20:43:27 jersey kernel: EIP: 0010:[kmem_cache_create+889/960]
Mar 7 20:43:27 jersey kernel: EIP: 0010:[<c01288f1>]
Mar 7 20:43:27 jersey kernel: EFLAGS: 00010296
Mar 7 20:43:27 jersey kernel: eax: 0000001a ebx: c12f08ec ecx: c77e2000 edx: 00000001
Mar 7 20:43:27 jersey kernel: esi: c12f08e2 edi: c88a9e3f ebp: c12f0ae8 esp: c170de48
Mar 7 20:43:27 jersey kernel: ds: 0018 es: 0018 ss: 0018
Mar 7 20:43:27 jersey kernel: Process modprobe (pid: 1043, stackpage=c170d000)
Mar 7 20:43:27 jersey kernel: Stack: c01f3f77 c01f4017 00000324 c8842000 c88a3d78 00000002 c88ad478 00000002
Mar 7 20:43:27 jersey kernel:        00000118 00000000 00000158 c170de80 c12f0afc 00000004 00000118 c880069b
Mar 7 20:43:27 jersey kernel:        c88a9e35 00000218 00000020 00000000 00000000 00000000 c889168b 00000218
Mar 7 20:43:27 jersey kernel: Call Trace:
[3c59x:__insmod_3c59x_O/lib/modules/2.4.2-XFS/kernel/drivers/net/3+-1114112/ ...
...
Once the Call Trace was related to the 3c59x module, and the other time
it was for some serial module that I don't even use. Both BUG listings
mentioned 'Process modprobe' and '[kmem_cache_create+889/960]'; the
register and Stack parts differed a bit. Let me know if you'd like to
see the full thing.
cheers,
robin