
Re: deep chmod|chown -R begin to start OOMkiller

To: CHIKAMA masaki <masaki-c@xxxxxxxxxx>
Subject: Re: deep chmod|chown -R begin to start OOMkiller
From: David Chinner <dgc@xxxxxxx>
Date: Mon, 12 Dec 2005 12:46:33 +1100
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20051209104148.346f2ff5.masaki-c@nict.go.jp>
References: <20051207183531.5c13e8c5.masaki-c@nict.go.jp> <20051208070841.GJ501696@melbourne.sgi.com> <20051209104148.346f2ff5.masaki-c@nict.go.jp>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Fri, Dec 09, 2005 at 10:41:48AM +0900, CHIKAMA masaki wrote:
> The number of files should be around 100 millions.

Lots of files. 

> Machine spec.
> 
> CPU : Pentium4 3.0G (512KB cache) HT enabled
> MEM : 512MB (+ 1GB swap)
> SCSI HA: Adaptec AHA-3960D
> DISK: External RAID unit (10TB)
> filesystem: xfs on lvm2

Large filesystem, comparatively little RAM to speak of.

> > > At that time, slabtop showed that the number of xfs_ili, xfs_inode, 
> > > and linvfs_icache objects are becoming very large.

It looks to me like you don't have enough memory to hold all the
active log items when chmod -R runs, so you run out of memory
before tail pushing occurs and the inode log items are released.

Because there is no memory available (it's all in slab and
unreclaimable(?) page cache), XFS may not be able to flush and free
the dirty inodes: flushing can require page cache allocation if the
backing pages for the inode were reclaimed before the tail was
pushed....

There are two immediate solutions that I can see to your problem:

        1. Buy more RAM. If you can afford 10TB of disk, then you can
           afford to buy at least a couple of GB of RAM to go with it.

        2. Remake your filesystem with a smaller log so that
           it can't hold as many active items.
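For option 2, the log size is set at mkfs time with the -l option. A
minimal sketch of how that might look (device name, mount point, and
the 32m size are illustrative assumptions, not values from this thread;
note that mkfs.xfs destroys all existing data on the device):

```shell
# Inspect the current log size first ("log ... blocks=N" in the output).
xfs_info /mnt/data

# Remaking the filesystem wipes it -- back up the data first.
umount /mnt/data

# -l size=32m creates a smaller log, limiting how many active log
# items can accumulate before tail pushing kicks in. -f forces the
# overwrite of the existing filesystem. Device name is hypothetical.
mkfs.xfs -f -l size=32m /dev/mapper/vg0-data

mount /dev/mapper/vg0-data /mnt/data
```

The trade-off is that a smaller log caps metadata-update throughput, so
expect the recursive chmod/chown to run slower in exchange for bounded
memory use.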

Cheers,

Dave.
-- 
Dave Chinner
R&D Software Engineer
SGI Australian Software Group

