bad performance on touch/cp file on XFS system

Zhang Qiang zhangqiang.buaa at gmail.com
Mon Aug 25 05:31:10 CDT 2014


2014-08-25 17:08 GMT+08:00 Dave Chinner <david at fromorbit.com>:

> On Mon, Aug 25, 2014 at 04:47:39PM +0800, Zhang Qiang wrote:
> > I have checked icount and ifree, and found about 11.8 percent of
> > inodes free, so free inodes should not be too scarce.
> >
> > Here's the detail log, any new clue?
> >
> > # mount /dev/sda4 /data1/
> > # xfs_info /data1/
> > meta-data=/dev/sda4              isize=256    agcount=4, agsize=142272384
>
> 4 AGs
>
Yes.

>
> > icount = 220619904
> > ifree = 26202919
>
> And 220 million inodes. There's your problem - that's an average
> of 55 million inodes per AGI btree assuming you are using inode64.
> If you are using inode32, then the inodes will be in 2 btrees, or
> maybe even only one.
>

You are right, all inodes stay in one AG.
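Dave's 55-million figure follows directly from the counters quoted above (with inode32 concentrating everything in fewer AGs, the per-btree count is even worse):

```shell
# Reproduce Dave's per-AG average from the xfs_db counters quoted above
icount=220619904   # allocated inodes (icount)
agcount=4          # allocation groups, from xfs_info
echo $((icount / agcount))   # prints 55154976, ~55 million inodes per AG
```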

BTW, why do all inodes stay in one AG with inode32, when I allocated 4 AGs?
Sorry, I am not familiar with XFS yet.
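The reason (a sketch of the arithmetic, assuming the isize=256 from the xfs_info output above): a 32-bit inode number can only address inode locations in roughly the first 1 TB of the filesystem, so on this ~2TB, 4-AG filesystem new inodes pile into the low AG(s). On kernels that support it, remounting with the real XFS mount option `inode64` lets new inodes spread across all AGs.

```shell
# With inode32, the largest 32-bit inode number maps to a byte offset of
# about 2^32 * isize. For 256-byte inodes that is exactly 1 TB, so only
# the AGs inside the first terabyte can hold inodes.
isize=256                                      # inode size from xfs_info
echo $((4294967296 * isize / 1099511627776))   # prints 1 (TB of reach)
# mount -o remount,inode64 /data1   # lets new inodes use all AGs
```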


>
> Anyway you look at it, searching btrees with tens of millions of
> entries is going to consume a *lot* of CPU time. So, really, the
> state your fs is in is probably unfixable without mkfs. And really,
> that's probably pushing the boundaries of what xfsdump and
> xfsrestore can support - it's going to take a long time to dump and
> restore that data....
>

That sounds reasonable, thanks.



> With that many inodes, I'd be considering moving to 32 or 64 AGs to
> keep the btree size down to a more manageable size. The free inode
> btree would also help, but, really, 220M inodes in a 2TB filesystem
> is really pushing the boundaries of sanity.....
>

So a more manageable inode count per AG is around 5M (220M spread over
32-64 AGs)? Is there any documentation on these options where I can learn
more?
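For reference, the dump/mkfs/restore cycle Dave describes could be sketched like this. The backup path is a hypothetical name, the agcount value is just one of Dave's suggestions, and the real commands need root and a scratch target; the free inode btree would be enabled at mkfs time with `-m finobt=1` on sufficiently new xfsprogs/kernels. The plan is echoed rather than executed, since the real commands are destructive:

```shell
#!/bin/sh
# Sketch of the re-make cycle; DUMP is a hypothetical backup location,
# /dev/sda4 and /data1 are the device and mountpoint from this thread.
DEV=/dev/sda4
DUMP=/backup/data1.dump
echo "xfsdump -l 0 -f $DUMP /data1"     # level-0 (full) dump
echo "mkfs.xfs -f -d agcount=32 $DEV"   # re-make with 32 AGs
echo "xfsrestore -f $DUMP /data1"       # restore into the new fs
```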

I will spend more time learning how to use XFS and its internals, and try
to contribute code.

Thanks for your help.



> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
>
> _______________________________________________
> xfs mailing list
> xfs at oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>