<div dir="ltr">Great, thank you.<div><br></div><div>From my xfs_db debug, I found I have icount and ifree as follow:</div><div><br></div><div><div>icount = 220619904</div><div>ifree = 26202919</div><div><br></div><div>So the number of free inode take about 10%, so that's not so few.</div>
So, are you still sure the patches can fix this issue?

Here's the detailed xfs_db info:

# mount /dev/sda4 /data1/
# xfs_info /data1/
meta-data=/dev/sda4              isize=256    agcount=4, agsize=142272384 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=569089536, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=277875, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# umount /dev/sda4
# xfs_db /dev/sda4
xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 569089536
rblocks = 0
rextents = 0
uuid = 13ecf47b-52cf-4944-9a71-885bddc5e008
logstart = 536870916
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 142272384
agcount = 4
rbmblocks = 0
logblocks = 277875
versionnum = 0xb4a4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 0
imax_pct = 5
icount = 220619904
ifree = 26202919
fdblocks = 147805479
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa
xfs_db> sb 1
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 569089536
rblocks = 0
rextents = 0
uuid = 13ecf47b-52cf-4944-9a71-885bddc5e008
logstart = 536870916
rootino = 128
rbmino = null
rsumino = null
rextsize = 1
agblocks = 142272384
agcount = 4
rbmblocks = 0
logblocks = 277875
versionnum = 0xb4a4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 1
imax_pct = 5
icount = 0
ifree = 0
fdblocks = 568811645
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa
xfs_db> sb 2
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 569089536
rblocks = 0
rextents = 0
uuid = 13ecf47b-52cf-4944-9a71-885bddc5e008
logstart = 536870916
rootino = null
rbmino = null
rsumino = null
rextsize = 1
agblocks = 142272384
agcount = 4
rbmblocks = 0
logblocks = 277875
versionnum = 0xb4a4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 1
imax_pct = 5
icount = 0
ifree = 0
fdblocks = 568811645
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa
xfs_db> sb 3
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 569089536
rblocks = 0
rextents = 0
uuid = 13ecf47b-52cf-4944-9a71-885bddc5e008
logstart = 536870916
rootino = 128
rbmino = null
rsumino = null
rextsize = 1
agblocks = 142272384
agcount = 4
rbmblocks = 0
logblocks = 277875
versionnum = 0xb4a4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 1
imax_pct = 5
icount = 0
ifree = 0
fdblocks = 568811645
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa

Thanks
Qiang
<div class="gmail_quote">2014-08-25 16:56 GMT+08:00 Dave Chinner <span dir="ltr"><<a href="mailto:david@fromorbit.com" target="_blank">david@fromorbit.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">On Mon, Aug 25, 2014 at 04:09:05PM +0800, Zhang Qiang wrote:<br>
> Thanks for your quick and clear response. Some comments below:
>
>
> 2014-08-25 13:18 GMT+08:00 Dave Chinner <david@fromorbit.com>:
>
> > On Mon, Aug 25, 2014 at 11:34:34AM +0800, Zhang Qiang wrote:
> > > Dear XFS community & developers,
> > >
> > > I am using CentOS 6.3 and xfs as base file system and use RAID5 as
> > > hardware
> > > storage.
> > >
> > > Detail environment as follow:
> > > OS: CentOS 6.3
> > > Kernel: kernel-2.6.32-279.el6.x86_64
> > > XFS option info(df output): /dev/sdb1 on /data type xfs
> > > (rw,noatime,nodiratime,nobarrier)
....
<div class=""><br>
> > > It's very greatly appreciated if you can give constructive suggestion<br>
> > about<br>
> > > this issue, as It's really hard to reproduce from another system and it's<br>
> > > not possible to do upgrade on that online machine.<br>
> >
> > You've got very few free inodes, widely distributed in the allocated
> > inode btree. The CPU time above is the btree search for the next
> > free inode.
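[Qiang] If it helps, I can also dump the per-AG counters; I believe something along these lines should print each AG's allocated and free inode counts from the AGI headers (just a sketch, the field names may differ between xfsprogs versions):

# xfs_db -r -c "agi 0" -c "p count freecount" /dev/sda4
# xfs_db -r -c "agi 1" -c "p count freecount" /dev/sda4
# xfs_db -r -c "agi 2" -c "p count freecount" /dev/sda4
# xfs_db -r -c "agi 3" -c "p count freecount" /dev/sda4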
> >
> > This is the issue solved by this series of recent commits to add a
> > new on-disk free inode btree index:
> >
> [Qiang] This means that if I want to fix this issue, I have to apply the
> following patches and build my own kernel.

Yes. Good luck, even I wouldn't attempt to do that.

And then use xfsprogs 3.2.1, and make a new filesystem that enables
metadata CRCs and the free inode btree feature.
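Roughly speaking, that means a mkfs invocation along these lines once 3.2.1
is installed (a sketch; check the mkfs.xfs man page for the exact option
spelling in your version, and obviously only after the data is backed up):

# mkfs.xfs -m crc=1,finobt=1 /dev/sda4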
<div class=""><br>
> As the on-disk structure has been changed, so should I also re-create xfs<br>
> filesystem again?<br>
<br>
</div>Yes, you need to download the latest xfsprogs (3.2.1) to be able to<br>
make it with the necessary feature bits set.<br>
<div class=""><br>
> is there any user space tools to convert old disk<br>
> filesystem to new one, and don't need to backup and restore currently data?<br>

No, we don't write utilities to mangle on-disk formats. dump, mkfs
and restore are far more reliable than any "in-place conversion" code
we could write. It will probably be faster, too.
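A minimal sketch of that dump/mkfs/restore cycle with xfsdump/xfsrestore
(the dump file path is just a placeholder, and the dump target needs enough
space to hold all the data):

# xfsdump -l 0 -f /backup/data1.dump /data1
# umount /data1
# mkfs.xfs -m crc=1,finobt=1 /dev/sda4
# mount /dev/sda4 /data1
# xfsrestore -f /backup/data1.dump /data1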
<div class=""><br>
> > Which is of no help to you, however, because it's not available in<br>
> > any CentOS kernel.<br>
> ><br>
> [Qiang] Do you think if it's possible to just backport these patches to<br>
> kernel 6.2.32 (CentOS 6.3) to fix this issue?<br>
><br>
> Or it's better to backport to 3.10 kernel, used in CentOS 7.0?<br>

You can try, but if you break it you get to keep all the pieces
yourself. Eventually someone who maintains the RHEL code will do a
backport that will trickle down to CentOS. If you need it any
sooner, then you'll need to do it yourself, or upgrade to RHEL
and ask your support contact for it to be included in RHEL 7.1....
<div class=""><br>
> > There's really not much you can do to avoid the problem once you've<br>
> > punched random freespace holes in the allocated inode btree. IT<br>
> > generally doesn't affect many people; those that it does affect are<br>
> > normally using XFS as an object store indexed by a hard link farm<br>
> > (e.g. various backup programs do this).<br>
> >
> OK, I see.
>
> Could you please guide me on how to reproduce this issue easily? I have
> tried with a 500G xfs partition and used about 98% of the space, but still
> can't reproduce it. Is there any easy way you can think of?

Search the archives for the test cases that were used for the patch
set. There's a performance test case documented in the review
discussions.
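The general shape of those reproducers is a hard link farm style workload;
as a rough, untested sketch (the file count and deletion stride here are
made up - scale them until free inodes end up scattered across many
allocated inode chunks):

# mkdir /data1/farm
# for i in $(seq 1 2000000); do touch /data1/farm/f$i; done
# for i in $(seq 1 61 2000000); do rm /data1/farm/f$i; done
# time touch /data1/farm/newfile

Deleting a small, scattered subset like that leaves a few free inodes in a
large number of allocated inode chunks, so each subsequent inode allocation
has to search a long way through the inode btree to find one.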
<div class=""><br>
Cheers,<br>
<br>
Dave.<br>
--<br>
Dave Chinner<br>
<a href="mailto:david@fromorbit.com">david@fromorbit.com</a><br>