Chris Wedgwood wrote:
> On Tue, Jun 22, 2004 at 01:40:45PM -0500, Charles Steinkuehler wrote:
>> Notes:
>> - I created the xfs filesystems on the LVM using the -s size=4k
>>   option, but I still see notices about the RAID5: cachebuffer
>>   switching sizes (between 0, 512, and 4096), mainly when running
>>   xfs_repair. I'm running kernel 2.4.26, and had thought the md
>>   problems with the cachebuffer size were fixed back around 2.4.18?!?
>
> just to make sure, you don't ever run repair (even read-only) or
> anything else like dd and/or a snapshot over the device when it's
> mounted do you?
No...the cachebuffer size switching occurred when running xfs_repair on
an unmounted LVM logical volume, i.e.:
$ umount /home
$ xfs_repair /dev/mapper/vg00-home
<lots of RAID5: cachebuffer notices along with xfs_repair output>
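
FWIW, the sector size the filesystem actually ended up with can be
double-checked read-only with xfs_db; this is just a generic invocation
(not pasted from my session), using the same device name as above:

$ xfs_db -r -c 'sb 0' -c 'p sectsize' /dev/mapper/vg00-home

It should report sectsize = 4096 if the -s size=4k option took effect.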
The volume was one of several on the same RAID PV, however, and the
other LVs *WERE* mounted (if that matters). My current setup has 5
logical volumes in a single Volume Group (all on one RAID5 Physical
Volume). All filesystems are XFS (except the swap partition :), and
pretty much everything was mounted at the time except for home, which is
the largest partition by far (400G vs. <4G for any of the others). Root
and /boot are on separate RAID1 partitions, and are not part of the big
VG on the RAID5 partition.
VG: vg00
LV0: swap
LV1: tmp
LV2: var
LV3: usr
LV4: home
PV: md2 (RAID5)
<root>: md1 (RAID1)
/boot: md1 (RAID1)
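
In case it helps anyone reproduce a similar layout, the above is roughly
what you get from the standard LVM2 tools along these lines (a sketch
only, not my actual command history; sizes other than the ~400G home are
placeholders):

$ pvcreate /dev/md2
$ vgcreate vg00 /dev/md2
$ lvcreate -L 1G -n swap vg00      # placeholder size
$ lvcreate -L 2G -n tmp vg00       # placeholder size
$ lvcreate -L 4G -n var vg00       # placeholder size
$ lvcreate -L 4G -n usr vg00       # placeholder size
$ lvcreate -L 400G -n home vg00
$ mkswap /dev/vg00/swap
$ mkfs.xfs -s size=4k /dev/vg00/tmp    # likewise for var, usr, and home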
--
Charles Steinkuehler
charles@xxxxxxxxxxxxxxxx