| To: | Federico Sevilla III <jijo@xxxxxxxxxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: Playing around with NFS+XFS |
| From: | Seth Mos <knuffie@xxxxxxxxx> |
| Date: | Fri, 31 Aug 2001 13:23:17 +0200 |
| Cc: | Linux XFS Mailing List <linux-xfs@xxxxxxxxxxx> |
| In-reply-to: | <Pine.LNX.4.33.0108311645200.32367-100000@gusi.leathercollection.ph> |
| References: | <Pine.BSI.4.10.10108302206040.17576-100000@xs4.xs4all.nl> |
| Sender: | owner-linux-xfs@xxxxxxxxxxx |
At 17:00 31-8-2001 +0800, you wrote:
> On Thu, 30 Aug 2001 at 22:37, Seth Mos wrote:
> > I use 16384 for 100Mbit at work, which seems to be a decent size vs
> > response ratio.
> > I have a few standard bonnie results of linux -> linux tests on my
> > homepage http://iserv.nl/
> > Server was a PIII 450 with 256MB of RAM, a 3c905B NIC and a 40GB
> > IDE disk in UDMA33 mode.

Your write speeds will be lower, but not more than a single disk if the
card you are using is beefy enough. RAID5 involves a lot more overhead
during writes; we once noted a Windows NT system going faster when one
of the hard disks in the RAID5 set failed. After that we converted it to
RAID10.

> This is starting to get a little off-topic in that it's not
> XFS-specific anymore. I hope everyone else will pardon it:
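The 16384 figure discussed above is the NFS transfer size set with the
rsize/wsize mount options. A minimal sketch of using it on the client side;
the server name and export path are placeholders, not taken from this thread:

```shell
# Mount an NFS export with 16KB read/write transfer sizes
# (server:/export and /mnt/nfs are hypothetical examples).
mount -t nfs -o rsize=16384,wsize=16384 server:/export /mnt/nfs

# Or persistently, as an /etc/fstab entry:
# server:/export  /mnt/nfs  nfs  rsize=16384,wsize=16384  0  0
```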
> and enlarged the buffer size from 64KB to 256KB.
>
> echo 262144 > /proc/sys/net/core/rmem_default
> echo 262144 > /proc/sys/net/core/rmem_max

Note: the buffer space is shared between the nfsd processes, so the
default 64KB works out to an 8KB buffer per daemon with 8 daemons; if you
run 16 daemons you only get 4KB per process. Change accordingly.

> I read somewhere that this can lead to some not-so-nice situations when
> left like this instead of the default. Would you be able to qualify
> this?

This is noted in the NFS HOWTO, but they do not state what the adverse
results are, or under which version of the Linux kernel. 2.4.9 might have
this fixed, but I just don't know. I have not run into funny behaviour
yet, and I am thinking of changing this option on the internet gateway
server as well, since that one has 3 network cards and simultaneous
activity on all of them.

> I do remember reading somewhere that the recommended action will be to
> set buffer sizes to 256KB, then start the NFS server, then revert to
> 64KB. I was wondering if you'd have any information on that.

That is what they say, yes; I am just taking that risk. I have not
experienced anomalies on the other systems yet. I can imagine that if you
are routing or packet filtering this might lead to problems.

> Now for something that I _think_ is a little more on-topic (but still
> not quite XFS specific):
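The set-then-revert procedure mentioned above can be sketched as follows.
The idea is that sockets opened while the larger default is in effect keep
their buffer size, so only nfsd benefits while the rest of the system goes
back to the normal default. The init-script path is an assumption; on a
2.4-era distribution it may be named differently:

```shell
# Enlarge the default socket receive buffers to 256KB (requires root).
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max

# (Re)start the NFS server so its sockets pick up the 256KB buffers.
# The path /etc/init.d/nfs is a placeholder for your distribution's script.
/etc/init.d/nfs restart

# Revert to the stock 64KB default for everything started afterwards.
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max
```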
Cheers

--
Seth
Every program has two purposes: one for which it was written and another
for which it wasn't. I use the last kind.