| To: | Stewart Smith <stewart@xxxxxxxxx> |
|---|---|
| Subject: | Re: XFS_IOC_RESVSP64 versus XFS_IOC_ALLOCSP64 with multiple threads |
| From: | Sam Vaughan <sjv@xxxxxxx> |
| Date: | Tue, 14 Nov 2006 11:04:17 +1100 |
| Cc: | xfs@xxxxxxxxxxx |
| In-reply-to: | <1163395250.14517.38.camel@localhost.localdomain> |
| References: | <1163381602.11914.10.camel@localhost.localdomain> <965ECEF2-971D-46A1-B3F2-C6C1860C9ED8@sgi.com> <1163390942.14517.12.camel@localhost.localdomain> <12275452-56ED-4921-899F-EFF1C05B251A@sgi.com> <1163395250.14517.38.camel@localhost.localdomain> |
| Sender: | xfs-bounce@xxxxxxxxxxx |
On 13/11/2006, at 4:20 PM, Stewart Smith wrote:

> On Mon, 2006-11-13 at 15:53 +1100, Sam Vaughan wrote:
>> Just to be clear, are we talking about intra-file fragmentation,
>> i.e. file data laid out discontiguously on disk, or inter-file
>> fragmentation where each file is contiguous on disk but the files
>> from different processes are getting interleaved? Also, are there
>> just a couple of user data files, each of them potentially much
>> larger than the size of an AG, or do you split the data up into
>> many files, e.g. datafile01.dat ... datafile99.dat ...?

Those extents are curiously uniform, all 32kB in size. The fact that both files' extents are in AG 8 suggests that the two directories ndb_1_fs and ndb_2_fs filled their original AGs and spilled out into other ones, which is when the interference would have started.

Looking at the directory hierarchy in your last email, you might be better off adding another directory for the datafiles and undofiles to live in, so they don't end up sharing their AG with other stuff in their parent directory.

> on this fs:

OK, so you've got 32 2GB AGs, and the filesystem is much too small for the inode32 rotor to be involved.

> (somewhere between 5-15GB free from this create IIRC)

So your data file is half the size of an AG. That shouldn't be a problem, but it'd be best to keep it to one or two of these files per directory if there's going to be much other concurrent allocation activity.

> we currently don't do any automatic extending.

I'd assumed that these files were being continually grown. If all this is happening at creation time then it shouldn't be too hard to make sure the files are cleanly allocated with just one extent each. Does the following not work on your file system?

```
$ touch a b
$ for file in a b; do
>     xfs_io -c 'allocsp 1G 0' $file &
> done; wait
[1] 12312
[2] 12313
[1]-  Done                    xfs_io -c 'allocsp 1G 0' $file
[2]+  Done                    xfs_io -c 'allocsp 1G 0' $file
$ xfs_bmap -v a b
a:
 EXT: FILE-OFFSET      BLOCK-RANGE            AG AG-OFFSET             TOTAL
   0: [0..2097151]:    231732008..233829159    6 (11968856..14066007)  2097152
b:
 EXT: FILE-OFFSET      BLOCK-RANGE            AG AG-OFFSET             TOTAL
   0: [0..2097151]:    233829160..235926311    6 (14066008..16163159)  2097152
$
```

Now in your case you're using different directories, so your files are probably OK at the start of day. Once the AGs they start in fill up, though, the files for both processes will start getting allocated from the next available AG. At that point, allocations that started out looking like the first test above will end up looking like the second.

> That's handy.

All in all it sounds like your requirements are very file system friendly in terms of getting optimum allocation. I'm not sure what could be causing all those 32kB extents.

Sam
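For reference, the ioctls named in the subject line can be driven directly from C rather than through xfs_io. Below is a minimal sketch, not taken from the thread itself: it assumes the xfsprogs `<xfs/xfs.h>` header is installed and simply mirrors the two-file, 1GB test above, preallocating from two concurrent threads with XFS_IOC_RESVSP64.

```c
/*
 * Minimal sketch (illustrative, not from the original thread):
 * preallocate 1GB in each of two files from concurrent threads
 * using XFS_IOC_RESVSP64, mirroring the xfs_io test above.
 * Assumes the xfsprogs <xfs/xfs.h> header for xfs_flock64_t and
 * the ioctl numbers.  Build with -lpthread.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xfs/xfs.h>            /* xfs_flock64_t, XFS_IOC_RESVSP64 */

static void *prealloc(void *arg)
{
    const char *path = arg;
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror(path);
        return NULL;
    }

    /* Reserve 1GB starting at offset 0.  RESVSP64 reserves space
     * without changing the file size; ALLOCSP64 instead allocates
     * out to l_start and sets the file size to it, which is what
     * xfs_io's 'allocsp 1G 0' does. */
    xfs_flock64_t fl = { 0 };
    fl.l_whence = SEEK_SET;     /* l_start is an absolute offset */
    fl.l_start  = 0;
    fl.l_len    = 1LL << 30;

    if (ioctl(fd, XFS_IOC_RESVSP64, &fl) < 0)
        perror("XFS_IOC_RESVSP64");

    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t[2];

    pthread_create(&t[0], NULL, prealloc, (void *)"a");
    pthread_create(&t[1], NULL, prealloc, (void *)"b");
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    return 0;
}
```

Afterwards, `xfs_bmap -v a b` should show one extent per file, as in the session above, provided the AG each file lands in still has a gigabyte of contiguous free space.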
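The AG figures quoted above (32 AGs of 2GB each) can also be read back programmatically instead of parsing xfs_info output. A minimal sketch, again assuming the xfsprogs headers; XFS_IOC_FSGEOMETRY accepts a file descriptor for any file on the filesystem in question.

```c
/*
 * Minimal sketch (illustrative): report the allocation group count
 * and size for the filesystem containing the given path, via the
 * XFS_IOC_FSGEOMETRY ioctl.  Assumes the xfsprogs <xfs/xfs.h>
 * header for xfs_fsop_geom_t and the ioctl number.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xfs/xfs.h>            /* xfs_fsop_geom_t, XFS_IOC_FSGEOMETRY */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    xfs_fsop_geom_t geo;
    int fd = open(path, O_RDONLY);

    if (fd < 0 || ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) < 0) {
        perror(path);
        return 1;
    }

    /* agblocks is in filesystem blocks, so scale by blocksize. */
    printf("%u AGs of %llu bytes each\n", geo.agcount,
           (unsigned long long)geo.agblocks * geo.blocksize);

    close(fd);
    return 0;
}
```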