| To: | Nathan Scott <nathans@xxxxxxx> |
|---|---|
| Subject: | Re: tuning for large files in xfs |
| From: | fitzboy <fitzboy@xxxxxxxxxxxxxx> |
| Date: | Tue, 23 May 2006 18:41:36 -0700 |
| Cc: | linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx |
| In-reply-to: | <20060523115938.A242207@wobbly.melbourne.sgi.com> |
| References: | <447209A8.2040704@iparadigms.com> <20060523085116.B239136@wobbly.melbourne.sgi.com> <44725C27.90601@iparadigms.com> <20060523115938.A242207@wobbly.melbourne.sgi.com> |
| Sender: | xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla Thunderbird 0.9 (X11/20041124) |
Nathan Scott wrote:
Hi Tim, I read online in multiple places that the largest an allocation group should get is 4g, so I made mine 2g. Having said that, I did test with different allocation group sizes and the effect was not that dramatic; I will retest, though, just to verify. I was also thinking that the more AGs the better, since I do a lot of parallel reads/writes... granted, the filesystem doesn't change all that much (the file only grows, or existing blocks get modified), so I am not sure whether the number of AGs matters, does it?

>> meta-data=/mnt/array/disk1  isize=2048  agcount=410, agsize=524288 blks
>>          =                  sectsz=512
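As a sanity check on the geometry quoted above, the reported agsize (in filesystem blocks) times the 4096-byte block size should come out to the 2g figure mentioned; a quick shell arithmetic sketch:

```shell
# agsize is reported by xfs_info in filesystem blocks; this fs uses 4096-byte blocks.
agsize_blocks=524288
blocksize=4096
# Size of one allocation group in GiB; prints 2.
echo $(( agsize_blocks * blocksize / 1024 / 1024 / 1024 ))
```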
Sorry, I meant that moving the inode size to 2k (up from 256 bytes) gave me a sizeable increase in performance... I assume that is because the extent map can be smaller now (since blocks are much larger, there are fewer blocks to keep track of). Of course, ideal would be a large inode size and a 32k block size... but I hit the limits on both.

> I thought you said you had a 2TB file? The filesystem above is
> 4096 * 214670562 blocks, i.e. 818GB. Perhaps it's a sparse file?
> I guess I could look closer at the bmap and figure that out for
> myself. ;)

On my production servers the file is 2TB, but in this testing environment the file is only 767G on an 819G partition... This is sufficient to tell, because performance is already hindered a lot even at 767G; going to 2TB just makes it worse... I made the file by copying it over via dd from another machine onto a clean partition... then from that point we just append to the end of it, or modify existing data... I set it by hand. I rebuilt the partition and am now copying over the file again to see the results...
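The arithmetic in the quoted message, and a generic way to check for sparseness, can be sketched as follows (the file path in the comments is only a placeholder, not from the original mail):

```shell
# 214670562 blocks * 4096 bytes/block, expressed in whole GiB; prints 818.
echo $(( 214670562 * 4096 / 1024 / 1024 / 1024 ))

# To test whether a file is sparse, compare allocated vs. apparent size:
#   du -h /path/to/file                   # space actually allocated
#   du -h --apparent-size /path/to/file   # logical file size
# A sparse file shows far less allocated space than its apparent size.
```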
I tried this a couple of times, but it seemed to wedge the machine... I would:
1) touch a file (just to create it)
2) run the above command, which would then show its effect in du, though the file size was still 0
3) open that file (without O_TRUNC or O_APPEND) and start writing to it
It would work fine for a few minutes, but after about 5 or 7GB the machine would freeze... nothing in syslog, only a brief message on the console about some CPU state being bad...
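Assuming the "above command" was an xfs_io space reservation (which matches the behaviour described: du grows but the reported file size stays 0), the sequence might look like this; the path and size are placeholders, not from the original mail:

```shell
# Step 1: create an empty file.
touch /mnt/array/bigfile
# Step 2: reserve space without changing st_size (XFS-specific;
# du grows, but ls -l still reports 0 bytes).
xfs_io -c 'resvsp 0 700g' /mnt/array/bigfile
# Step 3: open the file without O_TRUNC/O_APPEND and write normally.
```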
The stripe unit is 64k, and the array is a RAID5 with 14 disks, so I set sw=13 (one disk is parity). I set this when I made the array, though it doesn't seem to matter much either.
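Spelled out, that geometry would be passed to mkfs as stripe unit and stripe width; the device path below is a placeholder, and the full-stripe figure is just su * sw:

```shell
# 14-disk RAID5 = 13 data disks + 1 parity; 64 KiB stripe unit.
# At mkfs time this would be expressed as (placeholder device):
#   mkfs.xfs -f -d su=64k,sw=13 /dev/md0
# Full data stripe in KiB (su * sw); prints 832.
echo $(( 64 * 13 ))
```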
We have plenty of memory on the machines, so that shouldn't be an issue... I am a little cautious about moving to a new kernel, though...