
Re: xfs_repair segfaults

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: xfs_repair segfaults
From: Ole Tange <tange@xxxxxxxxxx>
Date: Fri, 1 Mar 2013 13:24:36 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20130301111701.GB23616@dastard>
References: <CANU9nTnvJS50vdQv2K0gKHZPvzzH5EY1qpizJNsqUobrr2juDA@xxxxxxxxxxxxxx> <20130301111701.GB23616@dastard>
Sender: ole.tange.work@xxxxxxxxx
On Fri, Mar 1, 2013 at 12:17 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Thu, Feb 28, 2013 at 04:22:08PM +0100, Ole Tange wrote:
:
>> I forced a RAID online. I have done that before and xfs_repair
>> normally removes the last hour of data or so, but saves everything
>> else.
>
> Why did you need to force it online?

More than 2 hard disks went offline. We have seen this before, and it is
not due to bad disks; it may be due to driver/timing/controller issues.

The alternative to forcing the array online would be to restore from
backup. Since we are talking about 100 TB of data, restoring can take a
week and would set us back to the last backup (which is more than a day
old). So it is preferable to force the last failing hard disk online,
even though that causes us to lose a few hours of work.

>> Today that did not work:
>>
>> /usr/local/src/xfsprogs-3.1.10/repair# ./xfs_repair -n /dev/md5p1
>> Phase 1 - find and verify superblock...
>> Phase 2 - using internal log
>>         - scan filesystem freespace and inode maps...
>> flfirst 232 in agf 91 too large (max = 128)
>
> Can you run:
>
> # xfs_db -c "agf 91" -c p /dev/md5p1
>
> And post the output?

# xfs_db -c "agf 91" -c p /dev/md5p1
xfs_db: cannot init perag data (117)
magicnum = 0x58414746
versionnum = 1
seqno = 91
length = 268435200
bnoroot = 295199
cntroot = 13451007
bnolevel = 2
cntlevel = 2
flfirst = 232
fllast = 32
flcount = 191
freeblks = 184285136
longest = 84709383
btreeblks = 24
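
As a side note on the error above: flfirst is an index into the AGFL,
which on this filesystem holds at most 128 entries, so 232 is clearly
out of range. A minimal sketch of the sanity check xfs_repair is doing,
parsing xfs_db-style output (the sample values and the 128-entry limit
are taken from this thread, not derived from the filesystem geometry):

```shell
# Hypothetical bounds check on flfirst, fed with sample xfs_db output.
agf_output='flfirst = 232
fllast = 32
flcount = 191'

max=128  # AGFL entry count on this filesystem (from the error message)
first=$(printf '%s\n' "$agf_output" | awk '/^flfirst/ {print $3}')
if [ "$first" -gt "$max" ]; then
    echo "flfirst $first exceeds AGFL size $max"
fi
```

Running it against the values above prints
"flfirst 232 exceeds AGFL size 128", which matches the complaint from
xfs_repair phase 2.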

The partition has previously been mounted with -o inode64.

/Ole
