XFS_REPAIR on LVM partition
Rafael Weingartner
rafaelweingartner at gmail.com
Mon Dec 16 02:52:23 CST 2013
What's a "RAID NAS controller"? Details, please, or we can't help
> you.
Maybe I am not expressing myself clearly. This is what I meant:
http://www.starline.de/produkte/raid-systeme/infortrend-raid-systeme/eonstor/es-a08u-g2421/
It is a piece of hardware that we use to apply RAID (normally RAID 1 or
RAID 5) to physical disks, instead of plugging the disks into the storage
server and applying RAID in software. It exports the volumes over a SCSI
channel, and the devices appear on the server as ordinary sd* devices, as
if they were plain physical disks.
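To make the setup concrete, this is roughly how we check what the server
sees (a minimal sketch; the device and VG names below are hypothetical
placeholders, not our real ones):

    lsblk -o NAME,TYPE,SIZE   # the exported LUN shows up as a plain disk, e.g. /dev/sdb
    pvs                       # confirms which sd* device is the LVM PV
    lvs vg_data               # lists the logical volumes carved out of it

So from the host's point of view there is nothing special about the
device; LVM just sits on top of whatever the controller exports.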
> So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
> a bad way after rebuild?
That is exactly what happened: the RAID5 array lost a drive, and after we
replaced the drive and the rebuild finished, the filesystem would no
longer mount. Theoretically, this should not have affected the
filesystem, since the RAID5 would have recovered any lost information.
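For the record, in case it helps someone who hits the same situation,
this is roughly the order we would try things in next time, before
reaching for the big hammer (a sketch only; /dev/vg_data/lv_data is a
placeholder for the affected logical volume):

    # Try a read-only mount that skips log replay, just to see the data:
    mount -o ro,norecovery /dev/vg_data/lv_data /mnt

    # Take a metadata backup before any repair attempt (-g shows progress):
    xfs_metadump -g /dev/vg_data/lv_data /root/lv_data.metadump

    # Dry run: report what xfs_repair would do without changing anything:
    xfs_repair -n /dev/vg_data/lv_data

    # Only as a last resort, zero the dirty log and repair for real:
    xfs_repair -L /dev/vg_data/lv_data

The -L step throws away whatever was still in the log, which is exactly
why it should come last.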
2013/12/16 Dave Chinner <david at fromorbit.com>
> On Sun, Dec 15, 2013 at 10:34:43PM -0200, Rafael Weingartner wrote:
> > So, sadly, I went for the big hammer option; I thought there were no
> > other options ;).
> >
> > I'm guessing it can't find or validate the primary superblock, so
> > > it's looking for a secondary superblock. Please post the output of
> > > the running repair so we can see exactly what it is doing.
> >
> > That is exactly what seems to be happening.
> >
> > *dmesg errors:*
> >
> > > [ 81.927888] Pid: 878, comm: mount Not tainted 3.5.0-44-generic #67~precise1-Ubuntu
> > > [ 81.927891] Call Trace:
> > > [ 81.927941]  [<ffffffffa01d460f>] xfs_error_report+0x3f/0x50 [xfs]
> > > [ 81.927972]  [<ffffffffa01ecd66>] ? xfs_free_extent+0xe6/0x130 [xfs]
> > > [ 81.927990]  [<ffffffffa01ea318>] xfs_free_ag_extent+0x528/0x730 [xfs]
> > > [ 81.928007]  [<ffffffffa01e8e07>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> > > [ 81.928033]  [<ffffffffa01ecd66>] xfs_free_extent+0xe6/0x130 [xfs]
> > > [ 81.928055]  [<ffffffffa021bb10>] xlog_recover_process_efi+0x170/0x1b0 [xfs]
> > > [ 81.928075]  [<ffffffffa021cd56>] xlog_recover_process_efis.isra.8+0x76/0xd0 [xfs]
> > > [ 81.928097]  [<ffffffffa0220a17>] xlog_recover_finish+0x27/0xd0 [xfs]
> > > [ 81.928119]  [<ffffffffa022812c>] xfs_log_mount_finish+0x2c/0x30 [xfs]
> > > [ 81.928140]  [<ffffffffa0223620>] xfs_mountfs+0x420/0x6b0 [xfs]
> > > [ 81.928156]  [<ffffffffa01e2ffd>] xfs_fs_fill_super+0x21d/0x2b0 [xfs]
> > > [ 81.928163]  [<ffffffff8118b716>] mount_bdev+0x1c6/0x210
> > > [ 81.928179]  [<ffffffffa01e2de0>] ? xfs_parseargs+0xb80/0xb80 [xfs]
> > > [ 81.928194]  [<ffffffffa01e10a5>] xfs_fs_mount+0x15/0x20 [xfs]
> > > [ 81.928198]  [<ffffffff8118c563>] mount_fs+0x43/0x1b0
> > > [ 81.928202]  [<ffffffff811a5ee3>] ? find_filesystem+0x63/0x80
> > > [ 81.928206]  [<ffffffff811a7246>] vfs_kern_mount+0x76/0x120
> > > [ 81.928209]  [<ffffffff811a7c34>] do_kern_mount+0x54/0x110
> > > [ 81.928212]  [<ffffffff811a9934>] do_mount+0x1a4/0x260
> > > [ 81.928215]  [<ffffffff811a9e10>] sys_mount+0x90/0xe0
> > > [ 81.928220]  [<ffffffff816a7729>] system_call_fastpath+0x16/0x1b
> > > [ 81.928229] XFS (dm-0): Failed to recover EFIs
> > > [ 81.928232] XFS (dm-0): log mount finish failed
> > > [ 81.972741] XFS (dm-1): Mounting Filesystem
> > > [ 82.195661] XFS (dm-1): Ending clean mount
> > > [ 82.203627] XFS (dm-2): Mounting Filesystem
> > > [ 82.479044] XFS (dm-2): Ending clean mount
> >
> > Actually, the problem was a little bit more complicated. This LVM2
> > partition was using a physical device (PV) that is exported by a RAID
> > NAS controller.
>
> What's a "RAID NAS controller"? Details, please, or we can't help
> you.
>
> > This volume exported by the controller was created using RAID 5;
> > there was a hardware failure in one of the HDs of the array and the
> > volume became unavailable until we replaced the bad drive with a new
> > one and the array rebuild finished.
>
> So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
> a bad way after rebuild?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
>
--
Rafael Weingärtner