| To: | Paul Schutte <paul@xxxxxxxxxxx> |
|---|---|
| Subject: | Re: Kernel hangs with mongo.pl benchmark. |
| From: | Seth Mos <knuffie@xxxxxxxxx> |
| Date: | Fri, 31 Aug 2001 12:58:56 +0200 |
| Cc: | XFS mailing list <linux-xfs@xxxxxxxxxxx> |
| In-reply-to: | <3B8EBD0B.3ECF8150@it.up.ac.za> |
| References: | <Pine.BSI.4.10.10108302244090.17576-100000@xs4.xs4all.nl> |
| Sender: | owner-linux-xfs@xxxxxxxxxxx |
At 00:24 31-8-2001 +0200, Paul Schutte wrote:
> Seth Mos wrote:
>
> kernel-2.4.10-pre2 and egcs-1.1.2 are used here.

This can happen if you have too little memory. I couldn't mount my disk with logbufs=8 on a 128MB machine.

> 2) if I do mkfs.xfs with -i maxpct=90,size=2048 then it also dies irrespective of the logbufs setting.

I don't know how large your raid array is, but you only need larger inodes when passing the 1TB barrier, and the maxpct option says how much percent of the disk space can be used for inodes; the default is 25%. I did not make a fs with more inodes, and it survived normally.
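To make the inode options concrete, the difference is roughly this (the device name is only an example, substitute your raid array):

    mkfs.xfs -f /dev/sda5
        (defaults: 256 byte inodes, at most 25% of the space used for inodes)
    mkfs.xfs -f -i maxpct=90,size=2048 /dev/sda5
        (2048 byte inodes, up to 90% of the space allowed for inodes)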
> 3) mkfs.xfs -f -l size=8192b -i maxpct=90,size=2048 and mount -o logbufs=8 is a lethal combination.
>
> 4) mkfs.xfs -f -l size=8192b and mount -o logbufs=2 is stable.

Those are the defaults for making and mounting an xfs fs: if logbufs is not given, 2 is used, and the log size of the fs is based on the amount of space.
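In other words, these two mounts should behave the same (the device and mount point are just examples):

    mount -t xfs /dev/sda5 /mnt/test
    mount -t xfs -o logbufs=2 /dev/sda5 /mnt/test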
> These observations hold true for the Dell 4400 in my test environment.
>
> I am going to rebuild the kernel using the old style driver to see if it helps.
> When should one use logbufs=8 and when not?

If you have enough memory and a very busy filesystem you can use this to speed it up. But like the others said, this will also make the amount of loss higher, because more stuff is floating through the void at any given time.

> I messed around with the -i option because ext2 ran out of inodes on the same test.

Unless you really need more than 25% of your disk converted to inodes, you can stick with the defaults.

> Can somebody comment on how to see if you are running out of inodes? Will a message be logged to syslog when you have reached this limit?
>
> I have 4 mailservers running XFS, on Adaptec hardware, in production, mounted with logbufs=8.

Better make /var use 2 instead, which reduces the amount of damage that can be done, and if you are paranoid mount it with wsync.
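Something along these lines is what I mean (again, the device and mount point are only examples, adjust for your spool):

    mount -t xfs -o logbufs=2 /dev/sda6 /var
    mount -t xfs -o logbufs=2,wsync /dev/sda6 /var

The wsync one is the paranoid version; it trades speed for safety by doing the metadata operations synchronously.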
> They are running for 2 months now without any problems. I think I should remount them with logbufs=2 just in case. Will I lose anything by doing that?
>
> For those interested

I really like XFS, I am very impressed with the quality, and I appreciate the effort.
Cheers

--
Seth
Every program has two purposes: one for which it was written and another for which it wasn't. I use the last kind.