> I'm building an ia32 server which will be running RAID 5 over three 40
> GB disks. We're using 4 RAID partitions (/var, /home, /usr, and /),
> all using XFS. We booted using an NFS root, initialized the RAID
> devices, made an XFS filesystem on all of them, mounted them, then
> rsynced over our NFS root image to the target filesystems.
>
> This all worked fine. However, after rebooting, when we tried to
> mount one of the arrays (which was /var), we got:
>
> Jul 28 01:03:27 debian kernel: Start mounting filesystem: md(9,1)
> Jul 28 01:03:31 debian kernel: Starting XFS recovery on filesystem: md(9,1) (
> dev: 9/1)
> Jul 28 01:03:31 debian kernel: XFS: xlog_recover_process_data: bad clientid
> Jul 28 01:03:31 debian kernel: XFS: log mount/recovery failed
> Jul 28 01:03:31 debian kernel: XFS: log mount failed
>
> The other partitions mounted correctly; I am confused as to why just
> one of the partitions would fail. We ran xfs_repair /dev/md2, and it
> didn't report any errors. After that, mounting the partition worked
> just fine:
>
> Jul 28 01:03:49 debian kernel: Start mounting filesystem: md(9,1)
> Jul 28 01:04:04 debian kernel: Ending clean XFS mount for filesystem: md(9,1)
>
> Any ideas?
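For reference, the setup and workaround described above can be sketched as shell commands. Only the device name /dev/md2 is taken from the report; the other device names, mount points, and the raidtools-style invocations are illustrative assumptions from that era:

```shell
# Sketch of the reported procedure, run from an NFS-root boot.
# /dev/md2 is from the report; other names are assumptions.

# Initialize a RAID 5 array defined in /etc/raidtab (raidtools era).
mkraid /dev/md2

# Make an XFS filesystem on it and copy the NFS root image over.
mkfs.xfs /dev/md2
mount /dev/md2 /mnt/var
rsync -a /nfsroot/var/ /mnt/var/

# After reboot, mounting fails with "bad clientid" during log recovery.
# Repairing the filesystem off-line clears it; the mount then succeeds.
xfs_repair /dev/md2
mount /dev/md2 /var
```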
My guess is you are running a CVS or patch-based kernel from sometime after the
1.0.1 release. There was a period of a couple of weeks where raid5 had this
problem. If you update to the latest CVS (2.4.8-pre2 now) or apply the latest
2.4.7 patch from the ftp site, this should be fixed:
ftp://oss.sgi.com/projects/xfs/download/patches/patch-2.4.7-xfs-2001-07-27.bz2
What you did was the correct workaround for the kernel you have.
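Applying the linked patch would go roughly like this (the kernel source path is an assumption; the patch file name is from the URL above, and the build steps follow the standard 2.4-series procedure):

```shell
cd /usr/src/linux    # assumed location of the 2.4.7 source tree
bzcat patch-2.4.7-xfs-2001-07-27.bz2 | patch -p1
make oldconfig && make dep bzImage modules modules_install
```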
Steve