I've searched all I can, but can find no real solution to the problem of the
Linux raid driver constantly outputting a stream of:
Jun 20 10:15:38 red kernel: raid5: switching cache buffer size, 512 --> 4096
Jun 20 10:15:38 red kernel: raid5: switching cache buffer size, 4096 --> 512
Jun 20 10:15:38 red kernel: raid5: switching cache buffer size, 512 --> 4096
Jun 20 10:15:38 red kernel: raid5: switching cache buffer size, 4096 --> 512
Jun 20 10:15:38 red kernel: raid5: switching cache buffer size, 512 --> 4096
Jun 20 10:15:39 red kernel: raid5: switching cache buffer size, 0 --> 4096
Jun 20 10:15:39 red kernel: raid5: switching cache buffer size, 4096 --> 512
etc. messages whenever I do anything with a mounted XFS filesystem. I've
tried putting the log on a separate raid1, and on a separate raw IDE
partition too, and I still see these. Sure, I can turn them off, but that
doesn't help the performance much. Is there a magic rune for the mkfs.xfs
command that I've overlooked?
Reading the archives etc. seems to suggest that giving the log a 4K block
size would fix it, and/or putting it on a separate device, but from what
I've already tried it seems the log already has a 4K block size anyway.
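If the suggestion means forcing the log I/O itself up to 4K, the nearest
thing I can see is the log stripe unit; something like the line below is
what I'd guess at (sunit is given in 512-byte blocks, so 8 should mean 4K
log writes - I haven't confirmed that xfsprogs-2.3.5 accepts -l sunit on an
external log, so treat this as untested):

# mkfs.xfs -f -b size=4096 -l logdev=/dev/hdm5,version=2,sunit=8 /dev/md5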
Last one I tried was:
# mkfs.xfs -f -b size=4096 -l logdev=/dev/hdm5,version=2 /dev/md5
That puts the log on a separate partition on the IDE drive; another
partition on that drive is part of the raid5 array, but I don't think that
should affect it (should it?)
Obviously I'd ideally like the log to be on a mirror or raid partition, but
I thought I'd try the simplest solution first to get to the bottom of this.
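For the record, the mirrored-log setup I have in mind would presumably look
something like this (md6 and the hde/hdg partitions are made up for
illustration, I haven't built it yet):

# mdadm --create /dev/md6 --level=1 --raid-devices=2 /dev/hde5 /dev/hdg5
# mkfs.xfs -f -b size=4096 -l logdev=/dev/md6,version=2 /dev/md5
# mount -o logdev=/dev/md6 /dev/md5 /mnt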
Anyway, the output from that last mkfs.xfs run was:

meta-data=/dev/md5             isize=256    agcount=38, agsize=1048568 blks
data     =                     bsize=4096   blocks=39688448, imaxpct=25
         =                     sunit=8      swidth=32 blks, unwritten=0
naming   =version 2            bsize=4096
log      =/dev/hdm5            bsize=4096   blocks=8024, version=2
         =                     sunit=8 blks
realtime =none                 extsz=131072 blocks=0, rtextents=0
# mount -ologdev=/dev/hdm5 /dev/md5 /mnt
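If I keep this layout, presumably the matching fstab entry would be
something like:

/dev/md5   /mnt   xfs   logdev=/dev/hdm5   0 0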
The system is
Linux red 2.4.21-ac1 #3 SMP Wed Jun 18 20:31:28 BST 2003 i686 unknown
with xfsprogs-2.3.5
Any clues would be appreciated, even if it's just to comment out the
printk in the raid5 code - performance isn't a real issue, but being up
and running immediately, rather than waiting an hour for an ext2 fsck
(this box will have 4 x 150GB partitions), is an issue should it ever
crash.
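If commenting out the printk really is the answer, I assume it's just a
matter of finding the message in the 2.4 source and rebuilding the md
driver; something like this should locate it (path as in a stock 2.4 tree,
I haven't checked the -ac diff):

# grep -n "switching cache buffer size" drivers/md/raid5.c

and then wrapping or removing that printk before recompiling.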
Gordon