
Results:

References: [ subject: "xfs_fsr, sunit, and swidth" (including Re:/Fwd: prefixes): 25 ]

Total 25 documents matching your query.

1. xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Wed, 13 Mar 2013 14:11:19 -0400
Does xfs_fsr react in any way to the sunit and swidth attributes of the file system? In other words, with an XFS filesystem set up directly on a hardware RAID, it is recommended that the mount command
/archives/xfs/2013-03/msg00353.html (7,241 bytes)

2. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 14 Mar 2013 10:57:03 +1100
Not directly. The mount option does nothing if sunit/swidth weren't specified at mkfs time. sunit/swidth affect the initial layout of the filesystem, and that cannot be altered after the fact. Hence
/archives/xfs/2013-03/msg00377.html (8,618 bytes)
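
(A note on result 2: sunit and swidth can also be given as mount options, in units of 512-byte sectors, but per this reply they have no effect on a filesystem that was not created with stripe geometry. A minimal sketch assuming a 128 KB chunk and 14 data disks; the device and mount point are placeholders.)

    # 128 KB chunk = 256 sectors; 14 data disks -> swidth = 14 * 256 = 3584
    mount -o sunit=256,swidth=3584 /dev/md0 /backup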

3. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 13 Mar 2013 19:03:18 -0500
No, manually remounting with new stripe alignment and then running xfs_fsr is not going to magically reorganize your filesystem. This recommendation (as well as most things storage related) is worklo
/archives/xfs/2013-03/msg00378.html (10,825 bytes)

4. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Thu, 14 Mar 2013 07:26:36 -0500
No need, I'm CC'ing the list address. Read this entirely before hitting reply. So your RAID6 stripe width is 14 * 128KB = 1,792KB. So you've got a metadata heavy workload with lots of links being cre
/archives/xfs/2013-03/msg00386.html (16,922 bytes)
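
(Working out the geometry quoted in result 4: a 14-data-disk RAID6 with a 128 KB chunk gives a 14 * 128 KB = 1,792 KB stripe width. A hedged mkfs sketch for that layout; the device name is hypothetical.)

    # su = RAID chunk size, sw = number of data disks
    mkfs.xfs -d su=128k,sw=14 /dev/md0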

5. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Thu, 14 Mar 2013 07:55:29 -0500
Quick note below, need one more bit of info. ~$ uname -a
/archives/xfs/2013-03/msg00387.html (13,206 bytes)

6. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Thu, 14 Mar 2013 10:59:27 -0400
Dave Hall Binghamton University kdhall@xxxxxxxxxxxxxx 607-760-2328 (Cell) 607-777-4641 (Office) On 03/14/2013 08:55 AM, Stan Hoeppner wrote: Yes, please provide the output of the following commands:
/archives/xfs/2013-03/msg00414.html (17,742 bytes)

7. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Thu, 14 Mar 2013 19:07:25 +0100
I notice that XFS in general will report less % wa than ext4, although it exercises the disks a bit more when traversing a large directory tree, for example. But with 64 cores, you will see at most
/archives/xfs/2013-03/msg00418.html (9,583 bytes)

8. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 15 Mar 2013 00:14:40 -0500
Ok, so you're already on a recent kernel with delaylog. XFS uses relatime by default, so noatime/nodiratime are useless, though not part of the problem. inode64 is good as your files and metadata hav
/archives/xfs/2013-03/msg00443.html (16,235 bytes)
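
(For reference against the options discussed in result 8, a hypothetical /etc/fstab entry; on 2013-era kernels inode64 still had to be requested explicitly, while noatime adds little over XFS's relatime default.)

    /dev/md0  /backup  xfs  defaults,inode64  0  0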

9. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 15 Mar 2013 22:45:38 +1100
Don't be so sure ;) FWIW, you can't really tell how bad the freespace fragmentation is from the global output like this. All of the large contiguous free space might be in one or two AGs, and the oth
/archives/xfs/2013-03/msg00451.html (12,260 bytes)

10. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 15 Mar 2013 23:47:08 -0500
The only thing I'm sure of is that I'll always be learning something new about XFS and how to troubleshoot it. ;) True. What would be representative of 26 AGs? First, middle, last? So Mr. Hall would e
/archives/xfs/2013-03/msg00514.html (15,408 bytes)

11. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sat, 16 Mar 2013 18:21:26 +1100
Yup, though I normally just run something like: for i in `seq 0 1 <agcount - 1>`; do To look at them all quickly... Ok, so what size blocks are the metadata held in? 1-4 filesystem block extents.
/archives/xfs/2013-03/msg00516.html (18,063 bytes)
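
(A sketch of the per-AG free space survey referred to in results 11 and 12; the AG count comes from xfs_info (26 in this thread) and the device is a placeholder. As result 12 notes, the xfs_db command must be quoted when it takes arguments.)

    AGCOUNT=26                                     # from xfs_info
    for ag in $(seq 0 $((AGCOUNT - 1))); do
        xfs_db -r -c "freesp -s -a $ag" /dev/md0   # free space histogram for one AG
    done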

12. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 16 Mar 2013 06:45:06 -0500
Ahh, you have to put the xfs_db command in quotes if it has args. I kept getting an error when using -a in my command line. Thanks. Your command line will give histograms for all 26 AGs. This isn't s
/archives/xfs/2013-03/msg00522.html (21,596 bytes)

13. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Mon, 25 Mar 2013 13:00:51 -0400
Dave Hall Binghamton University kdhall@xxxxxxxxxxxxxx 607-760-2328 (Cell) 607-777-4641 (Office) Dave, which perf command(s) would you like me to run. (I'm familiar with the concept behind this kind o
/archives/xfs/2013-03/msg00738.html (10,416 bytes)

14. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 27 Mar 2013 16:16:37 -0500
I'll let Dave answer this one. A pastebin link should be fine. Only a couple of people will be looking at it. I don't see value in free space maps of 26 AGs being archived. FWIW, it's probably best t
/archives/xfs/2013-03/msg00782.html (11,379 bytes)

15. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 28 Mar 2013 12:38:41 +1100
Just run 'perf top -U' for 10s while the problem is occurring and pastebin the output.... Cheers, Dave. -- Dave Chinner david@xxxxxxxxxxxxx
/archives/xfs/2013-03/msg00792.html (10,603 bytes)
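
(The command suggested in result 15, with --stdio added here as an assumption so the ~10 seconds of output can be captured for pastebin rather than viewed in the interactive TUI.)

    perf top -U --stdio    # -U hides user-space symbols, leaving kernel/XFS hits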

16. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Fri, 29 Mar 2013 15:59:46 -0400
Dave, Stan, Here is the link for perf top -U: http://pastebin.com/JYLXYWki. The ag report is at http://pastebin.com/VzziSa4L. Interestingly, the backups ran fast a couple times this week. Once under
/archives/xfs/2013-03/msg00833.html (12,550 bytes)

17. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sun, 31 Mar 2013 12:22:31 +1100
12.38% [xfs] [k] xfs_btree_get_rec 11.65% [xfs] [k] _xfs_buf_find 11.29% [xfs] [k] xfs_btree_increment 7.88% [xfs] [k] xfs_inobt_get_rec 5.40% [kernel] [k] intel_idle 4.13% [xfs] [k] xfs_btree_get_bl
/archives/xfs/2013-03/msg00842.html (11,556 bytes)

18. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Hans-Peter Jansen <hpj@xxxxxxxxx>
Date: Tue, 02 Apr 2013 12:34:53 +0200
Hmm, unfortunately, this access pattern is pretty common; at least all "cp -al & rsync" based backup solutions will suffer from it after a while. I noticed that the "removing old backups" part is al
/archives/xfs/2013-04/msg00019.html (10,341 bytes)
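
(For context on the access pattern described in result 18, a minimal hard-link rotation of the kind such backup scripts typically use; all paths are hypothetical and only two generations are kept.)

    rm -rf /backup/daily.1                        # expire the older snapshot
    cp -al /backup/daily.0 /backup/daily.1        # hard-link yesterday's tree
    rsync -a --delete /source/ /backup/daily.0/   # refresh the newest copy in place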

19. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Wed, 03 Apr 2013 10:25:55 -0400
So, assuming entropy has reached critical mass and that there is no easy fix for this physical file system, what would happen if I replicated this data to a new disk array? When I say 'replicate', I'
/archives/xfs/2013-04/msg00062.html (11,842 bytes)

20. Re: xfs_fsr, sunit, and swidth (score: 1)
Author: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Date: Fri, 12 Apr 2013 13:25:22 -0400
Stan, Did this post get lost in the shuffle? Looking at it, I think it could have been a bit unclear. What I need to do anyway is have a second, off-site copy of my backup data. So I'm going to be b
/archives/xfs/2013-04/msg00293.html (13,369 bytes)

