Simon Matter wrote:
>
> Can someone confirm what Adrian Head and I discussed in a previous
> thread?
>
> I'm using the following script to reproduce crashes on 2.4.3-XFS-1.0.1
> and on 2.4.10-pre2:
>
> #!/bin/sh
>
> max=$1
>
> cnt=1
> # count
> while [ $cnt -le $max ]; do
>     # fill number
>     if [ $cnt -ge 100 ]; then
>         strcnt=$cnt
>         fill=""
>     elif [ $cnt -ge 10 ]; then
>         strcnt=0$cnt
>         fill=" "
>     else
>         strcnt=00$cnt
>         fill=" "
>     fi
>     # do something
>     cp -r src $strcnt &
>     # tar cf $strcnt.tar src &
>     # increment
>     cnt=$((cnt + 1))
> done
>
> I invoke it with './mkstress 60' to get 60 parallel cp processes. The
> directory 'src' has 746 dirs with 30070 files; 'du -sm src' reports
> 229M. File sizes range from ~100 bytes to 50k, and the directory depth
> is at most 4 levels. I had src on XFS on Softraid5 and on a plain
> partition, with no difference: it just crashes sooner or later.
>
> -Simon
Well, since nobody seems interested, I'm replying to myself. I didn't
realize that my 'stress test' created a filesystem with more than 1.8
million files. Is that dangerous? I mean, if I hit some limit, the
machine should not crash or hang; shouldn't it just report something
like 'cannot allocate ...'?
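
Before blaming a hard limit, it is probably worth checking how close the
filesystem actually gets to one. A minimal sketch (the mount point
/mnt/test is just a placeholder for wherever the test tree lives):

#!/bin/sh
FS=/mnt/test
# free vs. used inodes on the filesystem (IUsed/IFree columns)
df -i $FS
# number of files and directories the stress test created
find $FS -xdev | wc -l
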
Well, I tried XFS on Softraid and without RAID, and I tried the same
with ext2: there I got no kernel crash, but 58 hanging cp processes.
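
If anyone wants to see what such hung processes are blocked on,
something like this should do (a sketch; the wchan column width is
arbitrary):

# list processes in uninterruptible sleep (state D), i.e. stuck in
# the kernel, together with the kernel function they are waiting in
ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
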
I now tried the same on my laptop with kernel 2.2.19 on ReiserFS, and
I was surprised: the copy went quite fast and everything seemed fine.
Then I tried 'diff -r src 065' and got 3 differing files; other dirs
showed the same (checking all copies at once can be scripted, see the
sketch at the end of this mail). Okay, so something goes wrong here as
well, but next I wanted to see the ReiserFS deletion speed: 'rm -rf 0*'
should be fast, but - surprise - it took more than 30 minutes!

At the moment I'm quite confused. I have tried the same test on
different hardware, different kernels, different disks and different
filesystems, and _all_ tests failed!
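
For reference, verifying all the copies against the original can be
done like this (a sketch; it assumes the numbered copies 001..060 sit
next to src in the current directory):

#!/bin/sh
# compare every numbered copy with the original tree and
# print the names of the ones that differ
for d in 0*; do
    diff -r src "$d" > /dev/null 2>&1 || echo "$d differs"
done
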
-Simon