Here are my mount settings:
/dev/sdc1 /backup xfs rw,nobarrier,logbufs=8,logbsize=256k,noquota 0 0
This is my current setting, but the problem also happened before I
changed the settings. Before, I had this:
/dev/sdc1 /backup xfs rw,noquota 0 0
The problem occurred independently of the settings I changed.
Shall I try setting ikeep/noikeep (what is the default for that?).
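To see what is actually in effect I can dump /proc/self/mounts, as suggested further down in this thread. A minimal sketch (the /backup mount point is from my fstab above; the fallback echo is just so the command also runs on a machine without that mount):

```shell
# Dump the effective mount options for the backup filesystem;
# on a 2.6.28 kernel this line shows ikeep or noikeep explicitly.
grep ' /backup ' /proc/self/mounts || echo "/backup not mounted here"
```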
At the moment I have no time to create a minimal script to
reproduce, but essentially I do the following:
- I have a tree with about 2 million files in it called daily.1
- I create a new tree daily.0 with rsync --link-dest=daily.1,
  so that most of those files (the unchanged ones) just get
  hardlinked to the ones in daily.1 and only the changed ones
  are created anew.
- Every day daily.1 gets renamed to daily.2 and daily.0 gets
  renamed to daily.1 (currently I rotate up to daily.14).
  The oldest daily.X folder gets removed by "rm -rf", which
  is where the oops sometimes happens (not every time, but
  often enough to reproduce).
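The rotation above can be sketched as follows. This is a toy version in a temp directory: cp -al stands in for the rsync --link-dest step (for unchanged files the hardlinking effect is the same), and the file names are illustrative, not my real layout:

```shell
#!/bin/sh
# Sketch of the nightly rotation; the real trees live under /backup
# and hold ~2 million files each.
set -e
work=$(mktemp -d)
cd "$work"

# Day 1: an initial tree, standing in for the first rsync run.
mkdir daily.1
echo "payload" > daily.1/file

# Day 2: cp -al hardlinks every file, as rsync --link-dest does
# for the unchanged ones.
cp -al daily.1 daily.0

# Rotate: daily.1 -> daily.2, daily.0 -> daily.1.
mv daily.1 daily.2
mv daily.0 daily.1

# Expire the oldest tree. This only drops one link per inode;
# the data stays reachable through daily.1.
rm -rf daily.2
cat daily.1/file          # still prints "payload"

cd /
rm -rf "$work"
```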
So the setting is: I have about 2 million files, and most of
them are multiply hardlinked, so I have more than 20 million inodes
on this system. Every night about 2 million of those inodes
get removed, most of them pointing to files which have other
hardlinks and are therefore not really removed.
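For illustration, this is how the shared inodes can be spotted; a tiny self-contained sketch (on the real system I would run the find/stat part directly in /backup):

```shell
#!/bin/sh
# Sketch: finding multiply-hardlinked files and their link counts.
set -e
work=$(mktemp -d) && cd "$work"
mkdir daily.1 daily.0
echo a > daily.1/f
ln daily.1/f daily.0/f          # second name for the same inode

# Every file with more than one link is shared between snapshots:
find . -type f -links +1

# Per-file link count (first column):
stat -c '%h %n' daily.1/f       # prints "2 daily.1/f"

cd / && rm -rf "$work"
```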
> How long does it take to reproduce the problem?
On my system I just need to do a new rsync and remove
a few million files/hardlinks, but it takes some hours until it happens.
Sometimes it even runs through successfully ...
As I said before, an xfs_check/xfs_repair does not detect any
inconsistencies after the problem has happened. (But the rm process
hangs and the filesystem cannot be unmounted any more.)
I need to see if the problem is tied to the massive hardlinking,
or if it can also be reproduced by creating 20 million files
and removing them in one sweep ... I will check it when I have time.
> On Thu, Feb 05, 2009 at 06:38:47AM +0100, Ralf Liebenow wrote:
> > Hello !
> > Finally I found the time to compile and test the latest stable
> > 126.96.36.199 kernel, but I can reproduce it:
> > Hmmm ... can I do something to help you find the problem ? I can
> > reproduce it by creating some million hardlinks to files and then
> > removing some million hardlinks with one "rm -rf".
> Interesting. Sounds like a race between writing back the inode and
> it being freed. How long does it take to reproduce the problem?
> Do you have a script that you could share?
> Next question - what is the setting of ikeep/noikeep in your mount
> options? If you dump /proc/self/mounts on 2.6.28 it will tell us
> if inode clusters are being deleted or not....
> Dave Chinner
> xfs mailing list
HRB 78053, Amtsgericht Charlottenbg
Vorstand: Ralf Liebenow, Michael Oesterreich, Peter Witzel
Aufsichtsratsvorsitzender: Wolf von Jaduczynski
Oranienstr. 10-11, 10997 Berlin
fon +49 30 617 897-0 fax -10