
To: Charles Steinkuehler <charles@xxxxxxxxxxxxxxxx>
Subject: Re: XFS + LVM + Software RAID5 on Debian testing
From: Chris Wedgwood <cw@xxxxxxxx>
Date: Tue, 22 Jun 2004 19:57:01 -0700
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <40D8EE38.6070200@xxxxxxxxxxxxxxxx>
References: <40D87D2D.9060803@xxxxxxxxxxxxxxxx> <20040623021127.GA23321@xxxxxxxxxxxxxxxxxxxxx> <40D8EE38.6070200@xxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Tue, Jun 22, 2004 at 09:43:04PM -0500, Charles Steinkuehler wrote:

> $ xfs_repair /dev/mapper/vg00-home
>  <lots of RAID5: cachebuffer notices along with xfs_repair output>

Odd, I wouldn't have expected that.  I wonder why xfs_repair needs to
do this?

A quick eyeball of the code doesn't show why this might be going on.

Checking xfs_repair locally I can see it doing:

    23713 ioctl(4, BLKBSZSET, 0xbfffe7b8)   = 0
    23713 fstat64(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(3, 65), ...}) = 0
    23713 ioctl(4, BLKGETSIZE64, 0xbfffe7d0) = 0
    23713 ioctl(4, BLKSSZGET, 0x80d4b30)    = 0

at the start, and nothing afterwards that would change it again.  Is
there something in the LVM layer that might want to change the
blocksize internally?
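
For reference, here's a rough toy sketch (my own code, not anything
lifted from xfs_repair) of what those three ioctls amount to on Linux:
BLKBSZSET changes the soft blocksize the kernel's buffer cache uses
for the device, BLKGETSIZE64 reads back the device size in bytes, and
BLKSSZGET the logical sector size.  It needs root, and BLKBSZSET
really does change the device's blocksize, so only point it at
something you don't care about:

    /*
     * Toy probe: set the soft blocksize to 512 and read back the
     * device size and sector size, roughly mirroring the ioctls in
     * the strace above.  Build with: gcc -o blkprobe blkprobe.c
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
        int fd, bsz = 512, ssz = 0;
        unsigned long long bytes = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
            return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        if (ioctl(fd, BLKBSZSET, &bsz) < 0)     /* needs CAP_SYS_ADMIN */
            perror("BLKBSZSET");
        if (ioctl(fd, BLKGETSIZE64, &bytes) < 0)
            perror("BLKGETSIZE64");
        if (ioctl(fd, BLKSSZGET, &ssz) < 0)
            perror("BLKSSZGET");

        printf("%s: %llu bytes, sector size %d\n", argv[1], bytes, ssz);
        close(fd);
        return 0;
    }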

> The volume was one of several on the same RAID PV, however, and the
> other LV's *WERE* mounted (if that matters).

I think that should be safe; the buffers used by the filesystem and
by repair won't overlap.

I guess open a bug if you haven't done so already; nothing obvious
springs to mind, and I don't know much about LVs (I just assumed they
were simple enough to work as expected).


  --cw

