[PATCH] shared: new test to use up free inodes
Eryu Guan
eguan at redhat.com
Wed Mar 19 23:53:47 CDT 2014
On Thu, Mar 20, 2014 at 12:05:25PM +0800, Eryu Guan wrote:
> On Thu, Mar 20, 2014 at 11:14:29AM +1100, Dave Chinner wrote:
> > On Wed, Mar 19, 2014 at 05:27:49PM +0800, Eryu Guan wrote:
[snip]
> >
> > On XFS, that will create at least 500 threads creating 1000 inodes each
> > all in the same directory. This doesn't give you any extra
> > parallelism at all over just creating $free_inode files in a single
> > directory with a single thread. Indeed, it will probably be slower
> > due to the contention on the directory mutex.
> >
> > If you want to scale this in terms of parallelism to keep the
> > creation time down, each loop needs to write into a different
> > directory, i.e. something like:
> >
> >
> > echo "Create $((loop * 1000)) files in $SCRATCH_MNT/testdir" >>$seqres.full
> > while [ $i -lt $loop ]; do
> > mkdir -p $SCRATCH_MNT/testdir/$i
> > create_file $SCRATCH_MNT/testdir/$i 1000 $i >>$seqres.full 2>&1 &
> > let i=$i+1
> > done
> > wait
It turns out that creating the files in different directories cannot
reproduce the bug fixed by commit d586858 ("xfs_repair: fix sibling
pointer tests in verify_dir2_path()"), so I'll keep all the files in
one testdir.
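
FWIW, here's a rough sketch of the single-testdir version I have in
mind. create_file is the test's local helper (dir, count, tag), and
the free inode count taken from "df -i" field 4 plus the round-up to
a multiple of 1000 are my assumptions, not final code:

# Figure out how many inodes are still free on the scratch fs;
# df -i prints IFree in the 4th field of the data line.
free_inodes=`df -i $SCRATCH_MNT | tail -1 | awk '{print $4}'`
loop=$((free_inodes / 1000 + 1))

mkdir -p $SCRATCH_MNT/testdir
echo "Create $((loop * 1000)) files in $SCRATCH_MNT/testdir" >>$seqres.full

# All writers target the same directory on purpose, so the directory
# btree grows deep enough to hit the sibling pointer checks that
# commit d586858 fixed in xfs_repair.
i=0
while [ $i -lt $loop ]; do
    create_file $SCRATCH_MNT/testdir 1000 $i >>$seqres.full 2>&1 &
    let i=$i+1
done
wait

This will be slower than the per-subdir variant due to the directory
lock contention you pointed out, but that seems to be the price for
reproducing the original xfs_repair failure.
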
Thanks,
Eryu
> >
> > And even then I'd suggest that you'd be much better off with 10,000
> > files to a sub-directory....
>
> Will do.
>
> >
> > > +# log inode status in $seqres.full for debugging purposes
> > > +echo "Inode status after taking all inodes" >>$seqres.full
> > > +df -i $SCRATCH_MNT >>$seqres.full
> > > +
> > > +_check_scratch_fs
> > > +
> > > +# Check again after removing all the files
> > > +rm -rf $SCRATCH_MNT/testdir
> >
> > That can be parallelised as well when you have multiple subdirs:
> >
> > for d in $SCRATCH_MNT/testdir/*; do
> >     rm -rf $d &
> > done
> > wait
>
> Will do.
>
> Thanks for the detailed review (as always)!
>
> Eryu
> >
> > Cheers,
> >
> > Dave.
> > --
> > Dave Chinner
> > david at fromorbit.com
>