Re: Unexpected XFS SB number 0x00000000

To: Chris <hsvchris@xxxxxx>
Subject: Re: Unexpected XFS SB number 0x00000000
From: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Date: Mon, 10 Dec 2007 17:55:17 -0500 (EST)
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <002a01c83b74$52060330$f6120990$@de>
References: <002a01c83b74$52060330$f6120990$@de>
Sender: xfs-bounce@xxxxxxxxxxx

On Mon, 10 Dec 2007, Chris wrote:


I'm running a home file server with Debian GNU/Linux 4.0r1 (etch,
2.6.18-5-amd64 kernel) and an Areca ARC-1220 hardware RAID controller.
I used to have 4 750GB HDDs connected and set up as RAID 5 array, single
volume, single XFS partition (set up during the installation of Debian). No
problems so far.

Now I added another 750GB HDD to the array, online capacity/volume expansion
by the controller finished just fine.
My plan was to add the extra space to the above mentioned XFS partition. So
I unmounted the partition, started cfdisk, removed the partition table and
wrote a new one that included the new free space.
After rebooting the partition wasn't mounted, so I couldn't use xfs_growfs
to expand the filesystem.

xfs_check: unexpected XFS SB magic number 0x00000000

xfs_repair -n:
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock.......[...].............found
candidate secondary superblock...unable to verify superblock,
.......Sorry, could not find valid secondary superblock
Exiting now.

I realize the magic number 0x00000000 is probably not a good sign, and maybe
using cfdisk to write a new partition table was not the way to do it?
Any suggestions on restoring the old partition table / magic number or how
to proceed? Is there an easy fix or is this a serious problem?

Kind Regards,

When I grew mine, I used mdadm/RAID 5: I expanded the array, then ran xfs_growfs on the mounted filesystem, and it worked.

Wiping out the partition table is not the way to do it; if the new partition does not start at exactly the same sector as the old one, the XFS superblock is no longer at the offset the tools expect, which is why xfs_check reads a magic number of 0x00000000.
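As a rough sketch of the grow sequence described above (device names, array layout, and the /data mount point are placeholders, not taken from the original setup; on a hardware controller like the ARC-1220 the array-expansion step is done in the controller's own tool instead of mdadm):

```shell
# 1. Expand the underlying array first. With mdadm software RAID that
#    would look something like (placeholder devices):
#      mdadm --add /dev/md0 /dev/sde1
#      mdadm --grow /dev/md0 --raid-devices=5
#    On a hardware RAID controller, use its online capacity expansion.

# 2. Do NOT touch the partition table. With the filesystem still
#    MOUNTED, grow XFS into the new space:
xfs_growfs /data        # argument is the mount point, not the device

# 3. Verify the new size:
df -h /data
```

Note that xfs_growfs deliberately operates on a mounted filesystem; XFS cannot be grown offline.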

