To: "'Eric Sandeen'" <sandeen@xxxxxxx>
Subject: RE: XFS corruption on 2.4.28
From: "Renaat Dumon" <renaat.dumon@xxxxxxxxxx>
Date: Wed, 2 Nov 2005 14:50:13 +0100
Cc: <linux-xfs@xxxxxxxxxxx>
In-reply-to: <Pine.LNX.4.44.0510312216420.26145-100000@penguin.americas.sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Thread-index: AcXemxon+DTwCBFeQgq6VmfLEgptkABGIQFA
Hi Eric,

I did the tests (I added one more "for" loop, because my layout is
[0-9a-f]/[0-9a-f]/[0-9a-f]/somereallylongfilename.somenumber.db), roughly as
in the sketch below. I did not observe the behaviour then :(
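A minimal sketch of that adapted loop, assuming single-hex-digit directory
names and a placeholder file name (not my exact data):

# three levels of hex-named directories, one long-named file in each;
# "file" is a small test file created beforehand, as in Eric's test
for a in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
  for b in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
    for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
      mkdir -p $a/$b/$c
      cp file $a/$b/$c/somereallylongfilename.1.db
    done
  done
done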


I have in the meantime gotten a chance to remount the filesystem on this
particular box (which is the worst box I have for the phenomenon, presumably
because of the amount of data that is sitting on it). New backups will run
tonight, so I'll know pretty soon whether or not the geometry options have
anything to do with it.

One question though: suppose I create an XFS filesystem using a 2.6
bootdisk, untar a 2.4 system backup onto the disk, and then boot from that
disk (so a 2.4 kernel). Could that interfere?

That's what I did originally, but I have in the meantime recreated the
filesystem under the running 2.4 kernel, so I guess that shouldn't be an
issue.
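If it would help, I could compare the superblock feature bits of the
2.6-created and 2.4-created filesystems with xfs_db, something like the
following (/dev/sdb1 is just a placeholder for the real device):

# read-only dump of superblock 0; compare versionnum between the two
xfs_db -r -c sb -c p /dev/sdb1 | grep versionnum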


Kind regards,

Renaat


-----Original Message-----
From: Eric Sandeen [mailto:sandeen@xxxxxxx] 
Sent: 01 November 2005 05:17
To: Renaat Dumon
Subject: RE: XFS corruption on 2.4.28

Renaat Dumon wrote:
> Hi Eric,
> 
> Thanks for taking the time to look at this.
> 
> bacardi root # xfs_info /Storage 
> meta-data=/Storage               isize=256    agcount=56, agsize=1048576 blks
>          =                       sectsz=512  
> data     =                       bsize=4096   blocks=58663328, imaxpct=25
>          =                       sunit=0      swidth=0 blks, unwritten=0
> naming   =version 2              bsize=4096  
> log      =internal               bsize=4096   blocks=7161, version=1
>          =                       sectsz=512   sunit=0 blks
> realtime =none                   extsz=65536  blocks=0, rtextents=0

FWIW I tried this test with stock 2.4.28:

[root@penguin5 src2]# mkfs.xfs -f -bsize=4096 -dfile,name=testfs,agsize=1048576b,size=58663328b,unwritten=0
meta-data=testfs                 isize=256    agcount=56, agsize=1048576 blks
         =                       sectsz=512  
data     =                       bsize=4096   blocks=58663328, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=28644, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
[root@penguin5 src2]# mount -o loop,noatime,sunit=128,swidth=256 testfs /mnt/test/
[root@penguin5 src2]# cd /mnt/test/
[root@penguin5 test]# ls
[root@penguin5 test]# echo abcdefghijklmnopqrstuvwxyza > file
[root@penguin5 test]# ls -l file
-rw-r--r--    1 root     root           28 Oct 31 22:03 file
[root@penguin5 test]# for a in `seq 1 3`; do for b in `seq 1 3`; do for c in `seq 1 10000`; do mkdir -p $a/$b; cp file $a/$b/00005d697a5a05795f53cb7b081f242d.$c.db; done; done; done
[root@penguin5 test]# find . | xargs du -sk | grep -v ^4
366124  .
122040  ./1
122040  ./2
122040  ./3

So that did not trip it. Perhaps you could try a similar test with your
kernel... either on loopback like this, or on your real filesystem?
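A per-file variant of the same check might also catch a single bad file more
directly; this one assumes GNU find's -printf, so treat it as a sketch:

# print allocated 512-byte blocks, apparent size, and name for every file,
# then show only files not occupying exactly one 4k block (8 x 512 bytes)
find . -type f -printf '%b %s %p\n' | awk '$1 != 8 { print }'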

Does the above tree structure / file naming more or less match your real
application?

-Eric


