I've just set up a system with one software RAID5 of 7 disks and another of
6 disks (all SCSI), made an LVM stripe over these, and put XFS on top.
Full set of commands run:
mdadm --create /dev/md0 --level=raid5 --raid-devices=7 /dev/sda1 /dev/sdb1 \
    /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm --create /dev/md1 --level=raid5 --raid-devices=6 /dev/sdh1 /dev/sdi1 \
    /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1
pvcreate /dev/md0
pvcreate /dev/md1
vgcreate --physicalextentsize=8M sstudvg /dev/md0 /dev/md1
lvcreate --stripes 2 --size 339G --name sstudlv sstudvg
lvcreate --size 34G --name sparelv sstudvg
mkfs.xfs /dev/sstudvg/sstudlv
mkfs.xfs /dev/sstudvg/sparelv
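For reference, here is roughly how the geometry of each layer can be
double-checked (the exact output depends on the mdadm/LVM/xfsprogs versions,
and /mnt/sstud below is just an example mount point):
# check RAID level, chunk size and member disks of each array
mdadm --detail /dev/md0
mdadm --detail /dev/md1
# check the PVs, the VG and the striped LV
pvdisplay /dev/md0 /dev/md1
vgdisplay sstudvg
lvdisplay /dev/sstudvg/sstudlv
# check the XFS geometry (sector size, block size, stripe unit/width)
# once the filesystem is mounted
xfs_info /mnt/sstud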
When I mounted this filesystem and started using it, I got a flood of
messages like:
raid5: switching cache buffer size, 0 --> 512
raid5: switching cache buffer size, 512 --> 4096
raid5: switching cache buffer size, 0 --> 512
Then I checked xfs_info and noticed that the only thing that was 512 bytes
was the sector size. I recreated the filesystems with a 4K sector size
(mkfs.xfs -s size=4096; the exact invocation is shown after the boot messages
below), and the problem seems to have gone away. I still get a couple of
these messages during boot when the filesystems are mounted:
SGI XFS 1.3.3 with ACLs, large block numbers, no debug enabled
SGI XFS Quota Management subsystem
raid5: switching cache buffer size, 1024 --> 512
raid5: switching cache buffer size, 1024 --> 4096
XFS mounting filesystem lvm(58,0)
raid5: switching cache buffer size, 512 --> 4096
Ending clean XFS mount for filesystem: lvm(58,0)
raid5: switching cache buffer size, 4096 --> 512
XFS mounting filesystem lvm(58,1)
raid5: switching cache buffer size, 512 --> 4096
Ending clean XFS mount for filesystem: lvm(58,1)
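For completeness, the recreate with the larger sector size was along these
lines (the block size stays at the 4k default; /mnt/sstud is again just an
example mount point):
# recreate each filesystem with a 4k sector size (-f overwrites the old fs)
mkfs.xfs -f -s size=4096 /dev/sstudvg/sstudlv
mkfs.xfs -f -s size=4096 /dev/sstudvg/sparelv
# after mounting, sectsz should now show 4096
xfs_info /mnt/sstud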
So I am a bit concerned that I might run into the same problem as Charles
Steinkuehler if I ever need to run xfs_repair. Charles, did you find a
solution for your problem?
And what consequences does the increased sector size have on the filesystem?
BTW: I'm running the 2.4.21-15.EL.sgi3smp kernel.
-jf