
Re: XFS shutdown with 1.3.0

To: "Nathan Scott" <nathans@xxxxxxx>
Subject: Re: XFS shutdown with 1.3.0
From: "Simon Matter" <simon.matter@xxxxxxxxxxxxxxxx>
Date: Fri, 5 Sep 2003 11:22:13 +0200 (CEST)
Cc: linux-xfs@xxxxxxxxxxx
Importance: Normal
In-reply-to: <20030905052032.GD1126@frodo>
References: <41782.213.173.165.140.1062330069.squirrel@imap01.ch.sauter-bc.com> <20030902071613.GB1378@frodo> <43946.213.173.165.140.1062501263.squirrel@imap01.ch.sauter-bc.com> <20030905052032.GD1126@frodo>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: SquirrelMail/1.4.1
>
> This error is key I think - from your stack trace, it looks like
> xfs_repair is dumping core related to this (it _should_ be thinking
> the root inode# is 128 for your fs, not 512).
>
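(For reference: assuming the filesystem is unmounted, the on-disk root
inode number can be queried read-only with xfs_db; the device name
below is only a placeholder:

  # open read-only, select superblock 0, print the root inode number
  xfs_db -r -c "sb 0" -c "p rootino" /dev/sdXN

If the superblock is sane, this should print rootino = 128 per your
calculation above.)
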
>> > Weird.  Is this a 4K blocksize filesystem?  Do you have xfs_info
>> > output handy?  taa.
>>
>> [root@xxl root]# xfs_info /home
>> meta-data=/home                  isize=256    agcount=160, agsize=262144 blks
>>          =                       sectsz=512
>> data     =                       bsize=4096   blocks=41889120, imaxpct=25
>>          =                       sunit=32     swidth=96 blks, unwritten=0
>> naming   =version 2              bsize=4096
>> log      =external               bsize=4096   blocks=25600, version=1
>>          =                       sectsz=512   sunit=0 blks
>> realtime =none                   extsz=393216 blocks=0, rtextents=0
>>
>> IIRC I created the FS with some XFS 1.0.x version. The size was ~20G, I
>> have since grown it to ~170G.
>
> Oh.  There was a growfs bug for a long time, fixed only recently,
> where the kernel growfs code would often corrupt the new last AG of
> the grown filesystem.  I wonder if that may have started this off
> for you.

Maybe. Unfortunately I don't remember which filesystems have been grown
and which have not. I have just tried xfs_repair on /tmp and /opt, and
they both have the same problem. Running xfs_repair results in lost
files, with others moved to lost+found. Not good.
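In hindsight I should have done a dry run first; xfs_repair has a
no-modify mode that only reports what it would change (the device name
below is just a placeholder):

  # check only, write nothing to the filesystem
  xfs_repair -n /dev/sdXN

That would at least have shown the damage before anything was moved to
lost+found.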

>
> Probably the best bet at this stage will be to re-mkfs and restore
> your backup.  Be good to know what went wrong in repair though.
> You will probably get a better ondisk layout that way too (you
> should see a smaller agcount when you re-mkfs, and bigger AG size).
>

That's probably the only way to fix the box now. rpm -Va shows that the
files on those filesystems are still okay and no corruption has
occurred. It seems that running xfs_repair is a bad idea at this point,
so I'll now move the whole system to newly created filesystems.
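Roughly, the plan is (device names and the dump file are placeholders,
and I'm assuming xfsdump/xfsrestore for the copy):

  # level 0 dump of the old filesystem to a scratch file
  xfsdump -L home -M dumpmedia -l 0 -f /backup/home.dump /home
  umount /home
  # remake the filesystem (this one used an external log)
  mkfs.xfs -f -l logdev=/dev/sdYN /dev/sdXN
  mount /home
  # restore the dump into the fresh filesystem
  xfsrestore -f /backup/home.dump /home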

Unfortunately, the problem looks like a time bomb to me. Is there a way
to find out whether a filesystem has ever been grown? That would help
me determine whether growing was the culprit here.
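
The only rough check I can think of (a heuristic, not a definitive
test): compare the AG geometry in the superblock with what mkfs.xfs
would choose today for a filesystem of the same size, something like:

  # read-only look at superblock 0 (device name is a placeholder)
  xfs_db -r -c "sb 0" -c "p agcount agblocks dblocks" /dev/sdXN

If agcount is much larger (and agblocks much smaller) than a fresh mkfs
of the same size would pick, as per your remark above, the filesystem
has most likely been grown.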

Simon

