[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: Kernel hangs with mongo.pl benchmark.
At 00:24 31-8-2001 +0200, Paul Schutte wrote:
>Seth Mos wrote:
>
> > On Thu, 30 Aug 2001, Paul Schutte wrote:
> > Ok, I am running the tests here too on a spare 4.3GB disk that I had
> > around. It's not screaming fast but it works.
> >
> > I have the mongo.pl running now on my home box with 6 processes for over
> > an hour and the machine has not crashed yet.
> >
> > The home machine is a AMD Athlon 1.4Ghz with 256MB DDR ram and a AMD760
> > chipset with a VIA IDE UDMA100 controller. The disk is operating in UDMA33
> > mode. The process is still running but it looks like we need adaptec
> > hardware to confirm. Eric was testing this IIRC.
> >
>
>I have checked straight away and the driver that I am using is still the
>latest.
>
>I managed to complete the creation part on the 4400. It is still
>percolating away on
>the copy ...
It takes quite some time.
>kernel-2.4.10-pre2 and egcs-1.1.2 is used here.
>
>1) if I mount with logbufs=8 then it dies.
This can happen if you have too little memory. I couldn't mount my disk
with logbufs=8 on a 128MB machine.
>2) if I do mkfs.xfs with -i maxpct=90,size=2048 then it also dies
>irrespective of the
>logbufs settings.
I don't know how large your RAID array is, but you only need larger inodes
when passing the 1TB barrier. The maxpct option sets what percentage of the
disk space may be used for inodes; the default is 25%. I did not make a fs
with more inodes, and mine survived normally.
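For reference, a sketch of the mkfs.xfs invocation being discussed (these
commands are destructive; /dev/sdb1 is a placeholder for a scratch device,
not a device from the original mails):

```shell
# -i size=2048 makes each inode 2048 bytes instead of the 256-byte default;
# -i maxpct=90 lets inodes consume up to 90% of the disk (default is 25%).
# -f forces mkfs even if an old filesystem signature is present.
mkfs.xfs -f -i maxpct=90,size=2048 /dev/sdb1
```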
>3) mkfs.xfs -f -l size=8192b -i maxpct=90,size=2048 and mount -o logbufs=8
>is a lethal
>combo ;-)
It might be; maybe one of the crew can answer this one.
>4) mkfs.xfs -f -l size=8192b and mount -o logbufs=2 is stable.
Which are the defaults for making and mounting an XFS filesystem. If logbufs
is not given, 2 is used, and the log size of the fs is based on the amount
of space.
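Spelled out, the stable combination from 4) above looks like this (again,
/dev/sdb1 and /mnt/test are placeholders, not paths from the original mails):

```shell
# -l size=8192b gives the log 8192 filesystem blocks.
mkfs.xfs -f -l size=8192b /dev/sdb1
# logbufs=2 is the mount default, so this is equivalent to a plain mount.
mount -o logbufs=2 /dev/sdb1 /mnt/test
```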
>These observations holds true for the Dell 4400 in my test environment.
>
>The 1400C just died on me using the setting in 4) above.
That should not happen.
>I am going to rebuild the kernel using the old style driver to see if it
>helps.
define "old style"
>When should one use logbufs=8 and when not.
If you have enough memory and a very busy filesystem you can use this to
speed it up. But, as the others said, it also increases the potential
loss, because more data is floating through the void at any given time.
>I messed around with the -i option because ext2 ran out of inodes on the
>same test.
Unless you really need more than 25% of your disk converted to inodes, you
can use this. Can somebody comment on how to see if you are running out of
inodes? Will a message be logged to syslog when that limit is reached?
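For what it's worth, the standard way to watch inode usage (not an answer
from the list, just the usual tool) is df with -i:

```shell
# df -i reports inode totals per filesystem; an IUse% near 100%
# means the filesystem is about to run out of inodes.
df -i /
```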
>I have 4 mailservers running XFS, on Adaptec hardware ,in production,
>mounted with
>logbufs=8.
>(Butterflies are eating chunks out of my stomach as I am typing ...)
Better mount /var with logbufs=2 instead, which reduces the amount of
damage that can be done, and if you are paranoid mount it with wsync.
>They are running for 2 months now without any problems.
>I think I should remount them with logbufs=2 just in case.
>Will I lose anything by doing that ?
Only speed. But I guess that your mail is worth more.
>For those interested
>The backup speed of these servers increased from about 50Mb/min to about
>85Mb/min on
>full backups when I switched from ext2 to XFS.
I see similar behaviour when starting up machines, which tends to be a tad
faster.
>I really like XFS and am very impressed with the quality and appreciate
>the effort
>put into making it happen.
So do we :-)
Cheers
--
Seth
Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.