
Re: XFS shutdown with 1.3.0

To: "Nathan Scott" <nathans@xxxxxxx>
Subject: Re: XFS shutdown with 1.3.0
From: "Simon Matter" <simon.matter@xxxxxxxxxxxxxxxx>
Date: Tue, 9 Sep 2003 16:28:30 +0200 (CEST)
Cc: linux-xfs@xxxxxxxxxxx
Importance: Normal
In-reply-to: <20030907232446.GC818@frodo>
References: <41782.213.173.165.140.1062330069.squirrel@xxxxxxxxxxxxxxxxxxxxxxx> <20030902071613.GB1378@frodo> <43946.213.173.165.140.1062501263.squirrel@xxxxxxxxxxxxxxxxxxxxxxx> <20030905052032.GD1126@frodo> <2588.10.1.200.117.1062753733.squirrel@xxxxxxxxxxxxxxxxxxxxxxx> <20030907232446.GC818@frodo>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: SquirrelMail/1.4.1
I have been forced to investigate this a bit more, and I keep finding more
servers in my environment with similar problems. The affected systems have
the following in common:

The filesystem:
- resides on Software RAID
- was created with an early Linux/XFS version (IIRC 1.0.x)

I don't know when the problem was introduced. It occurs regardless of
whether the filesystem has ever been grown. This happens with XFS 1.3.0.

When I run xfs_repair, it usually moves all content to lost+found/128/, and
one file is usually missing. After xfs_repair I have to mount and unmount
the filesystem; a second xfs_repair run then completes without any errors.
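The repair/remount cycle described above would look roughly like this
(device and mountpoint are hypothetical placeholders; xfs_repair must be
run on an unmounted filesystem):

```shell
# /dev/md0 and /home are placeholders, not the actual paths from this report
umount /home
xfs_repair /dev/md0            # first pass: contents may land in lost+found/128/
mount /home && umount /home    # mount cycle lets XFS replay its log
xfs_repair /dev/md0            # second pass: should now report no errors
```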

Any ideas what could have happened?

Simon

> On Fri, Sep 05, 2003 at 11:22:13AM +0200, Simon Matter wrote:
>> >> [root@xxl root]# xfs_info /home
>> >> meta-data=/home                  isize=256    agcount=160, agsize=262144 blks
>> >>          =                       sectsz=512
>> >> data     =                       bsize=4096   blocks=41889120, imaxpct=25
>> >>          =                       sunit=32     swidth=96 blks, unwritten=0
>> >> naming   =version 2              bsize=4096
>> >> log      =external               bsize=4096   blocks=25600, version=1
>> >>          =                       sectsz=512   sunit=0 blks
>> >> realtime =none                   extsz=393216 blocks=0, rtextents=0
>> >>
>> ...
>> Unfortunately the problem looks like a timebomb to me. Is there a way to
>> find out whether a filesystem has ever been grown? This would help me to
>> find out whether the growing was the culprit here.
>
> The above filesystem has almost certainly been grown.  You can
> tell because the agsize is fairly small & the agcount is quite
> large (the only case where this may not have been grown is if
> that agsize/agcount was explicitly requested at mkfs time, and
> that's unlikely I think).
>
> To contradict Eric - ;) - there is a cute trick you can use to
> tell:  if you run "mkfs.xfs -N /dev/XXX", this will just print
> the geometry that mkfs _would_ have used (-N means "don't") so
> if that doesn't match up to the actual filesystem geometry (in
> particular, the agcount= field), then it has likely been grown.
>
> Both this method and Eric's method can be fooled if unusual mkfs
> options were used when the filesystem was created, however.
>
> cheers.
>
> --
> Nathan
>
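Nathan's -N trick above can be sketched as follows. The agcount values
here are illustrative: the live value is taken from the xfs_info output
quoted earlier, while the fresh-mkfs value is a hypothetical default:

```shell
# mkfs.xfs -N /dev/XXX prints the geometry mkfs *would* use, writing nothing.
# Compare its agcount= field against the live filesystem's xfs_info output;
# a much larger live agcount suggests the filesystem has been grown.
live_agcount=160   # from: xfs_info /home
mkfs_agcount=4     # from: mkfs.xfs -N /dev/XXX (hypothetical fresh default)
if [ "$live_agcount" -gt "$mkfs_agcount" ]; then
    echo "likely grown"
fi
```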


