
Re: [PATCH] shared: new test to use up free inodes

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH] shared: new test to use up free inodes
From: Eryu Guan <eguan@xxxxxxxxxx>
Date: Thu, 20 Mar 2014 12:53:47 +0800
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20140320040525.GX8312@xxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <1395221269-11085-1-git-send-email-eguan@xxxxxxxxxx> <20140320001429.GI7072@dastard> <20140320040525.GX8312@xxxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Mar 20, 2014 at 12:05:25PM +0800, Eryu Guan wrote:
> On Thu, Mar 20, 2014 at 11:14:29AM +1100, Dave Chinner wrote:
> > On Wed, Mar 19, 2014 at 05:27:49PM +0800, Eryu Guan wrote:
[snip]
> > 
> > On XFS, that will create at least 500 threads creating 1000 inodes each
> > all in the same directory. This doesn't give you any extra
> > parallelism at all over just creating $free_inode files in a single
> > directory with a single thread. Indeed, it will probably be slower
> > due to the contention on the directory mutex.
> > 
> > If you want to scale this in terms of parallelism to keep the
> > creation time down, each loop needs to write into a different
> > directory. i.e. something like:
> > 
> > 
> > i=0
> > echo "Create $((loop * 1000)) files in $SCRATCH_MNT/testdir" >>$seqres.full
> > while [ $i -lt $loop ]; do
> >     mkdir -p $SCRATCH_MNT/testdir/$i
> >     create_file $SCRATCH_MNT/testdir/$i 1000 $i >>$seqres.full 2>&1 &
> >     let i=$i+1
> > done
> > wait

It turns out that creating files in different directories fails to
reproduce the bug that commit

d586858 xfs_repair: fix sibling pointer tests in verify_dir2_path()

fixed. I'll keep all the files in one testdir.
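For reference, a minimal sketch of the single-directory variant I have
in mind (the `create_file` helper here is a hypothetical stand-in for
the one defined in the patch, and the loop count is scaled down for
illustration; the real test derives it from the free inode count):

```shell
#!/bin/bash

# Hypothetical stand-in for the test's create_file helper: create $2
# empty files named $3.N inside directory $1. The real helper in the
# patch may differ.
create_file() {
    local dir=$1 nr=$2 prefix=$3
    local n=0
    while [ $n -lt $nr ]; do
        touch "$dir/$prefix.$n"
        let n=$n+1
    done
}

# Single-directory variant: all workers create files in the same
# testdir, so the directory itself grows large enough to exercise
# the multi-block directory paths that verify_dir2_path() checks.
testdir=$(mktemp -d)
loop=4
i=0
while [ $i -lt $loop ]; do
    create_file "$testdir" 1000 $i &
    let i=$i+1
done
wait

# All workers wrote unique names (prefix.N), so we expect loop * 1000
# entries in the one directory.
count=$(ls "$testdir" | wc -l)
echo "created $count files in one directory"
rm -rf "$testdir"
```

The prefixes keep the names unique across workers, so contention is
only on the directory lock, which is exactly the single-directory
growth this test wants.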

Thanks,
Eryu

> > 
> > And even then I'd suggest that you'd be much better off with 10,000
> > files to a sub-directory....
> 
> Will do.
> 
> > 
> > > +# log inode status in $seqres.full for debug purpose
> > > +echo "Inode status after taking all inodes" >>$seqres.full
> > > +df -i $SCRATCH_MNT >>$seqres.full
> > > +
> > > +_check_scratch_fs
> > > +
> > > +# Check again after removing all the files
> > > +rm -rf $SCRATCH_MNT/testdir
> > 
> > That can be parallelised as well when you have multiple subdirs:
> > 
> > for d in $SCRATCH_MNT/testdir/*; do
> >     rm -rf $d &
> > done
> > wait
> 
> Will do.
> 
> Thanks for the detailed review (as always)!
> 
> Eryu
> > 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@xxxxxxxxxxxxx
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
