Hello.
On Thu, 8 Dec 2005 18:08:41 +1100
David Chinner <dgc@xxxxxxx> wrote:
> > I have trouble with some strange behavior on an xfs filesystem.
> > When I did "chmod -R 755 ." on a deep directory, the system slowed
> > down and the OOM killer started after a while.
>
> How many files in the directory structure and how deep is it?
>
> What machine are you running this test on (CPU, RAM,
> etc.)?
The directory structure is like this:
A/B/C/D/E/F.jpg
A: from "1" to "14"
B: from "0" to "16"
C: "00"
D: from "0" to "6"
E: from "0" to "255"
F: from "0" to "255"
The total number of files should be around 100 million.
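Taking the ranges above as inclusive (14 × 17 × 1 × 7 × 256 × 256 leaf files), the layout works out to roughly 109 million files, consistent with the "around 100 million" estimate:

```shell
# Leaf count implied by the directory layout, assuming inclusive ranges:
# A=14 values, B=17, C=1, D=7, E=256, F=256
echo $((14 * 17 * 1 * 7 * 256 * 256))   # prints 109182976
```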
Machine spec:
CPU: Pentium4 3.0GHz (512KB cache), HT enabled
MEM: 512MB (+ 1GB swap)
SCSI HBA: Adaptec AHA-3960D
DISK: external RAID unit (10TB)
Filesystem: xfs on lvm2
> > At that time, slabtop showed that the number of xfs_ili, xfs_inode,
> > and linvfs_icache objects was growing very large.
> >
> > My kernel version is 2.6.13.4.
>
> Can you send the output of /proc/meminfo, /proc/slabinfo
> and the OOM killer output at the time of the problem?
I can't send this info from the moment the OOM killer started.
Instead, I'll send info from a similar situation in which swap I/O occurred,
because I think that is also strange and is likely what triggers the OOM killer.
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
0 0 47472 13236 322668 18812 0 0 0 0 271 36 0 0 100 0
0 0 47472 13236 322668 18812 0 0 0 8 253 24 0 0 100 0
(start chmod -R)
0 1 47472 14004 312156 22304 0 0 3476 2048 821 1365 1 11 84 3
1 0 47472 14432 294144 27588 0 0 6588 3704 1294 2770 1 25 66 8
1 0 47472 13524 277672 33400 0 0 6092 3264 1219 2303 1 22 72 4
0 0 47472 13396 261832 38840 0 0 5584 3168 1133 2271 1 21 74 3
....
3 1 64796 10452 166856 8136 0 0 4860 3096 1484 4041 0 12 29 58
2 3 64796 10568 166840 7372 0 0 880 648 505 972 0 6 5 88
1 5 64816 10592 166860 6544 472 572 5948 3304 1812 4025 0 14 6 80
0 5 64812 10284 166900 8240 1196 304 13196 8416 2940 8380 0 23 6 70
(around here is where I captured the attached meminfo and slabinfo)
0 6 64812 10284 166864 6844 432 0 1456 604 574 1176 0 6 1 94
0 8 64812 10388 166852 6588 68 52 1244 916 673 1414 0 10 2 88
0 6 64816 10936 166872 6720 1032 368 7948 4568 1921 5235 0 30 2 68
0 10 64816 10368 166872 6644 396 0 2364 1016 625 1475 0 10 2 88
0 9 64812 10244 166880 6936 1300 236 9992 5308 2045 5798 1 25 0 74
The attached OOM killer output is from a previous run.
> > A similar report is found at
> > http://oss.sgi.com/archives/linux-xfs/2003-03/msg00018.html
> >
> >
> > Is this expected behavior?
>
> No.
Ok. Thanks.
> > Now I use "find -exec" instead of "chmod -R".
> > The slab memory usage with "find" stays calm and does not trigger
> > the OOM killer.
>
> The output of the meminfo and slabinfo files under this test
> would also be interesting....
I attached those as well.
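For reference, the find-based approach mentioned above might look like the following minimal sketch. The exact command used is not shown in this thread, and `stat -c` here assumes GNU coreutils; the idea is that find dispatches chmod on paths incrementally rather than dirtying the whole tree in one recursive pass, which may be why its slab usage stays calm:

```shell
#!/bin/sh
# Hypothetical sketch of the find(1)-based workaround; paths and the
# 755 mode mirror the "chmod -R 755 ." test from the original report.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/1/0/00/0/0"
touch "$tmp/1/0/00/0/0/F.jpg"
# "{} +" batches many paths into each chmod invocation, like xargs:
find "$tmp" -exec chmod 755 {} +
stat -c %a "$tmp/1/0/00/0/0/F.jpg"   # prints 755
rm -rf "$tmp"
```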
Best regards.
Thanks.
--
CHIKAMA Masaki @ NICT
Attachments:
meminfo.chmod.txt
meminfo.find.txt
slabinfo.chmod.txt
slabinfo.find.txt
oomoutput.txt