
Re: deep chmod|chown -R begin to start OOMkiller

To: David Chinner <dgc@xxxxxxx>
Subject: Re: deep chmod|chown -R begin to start OOMkiller
From: Peter Broadwell <peter@xxxxxxxx>
Date: Mon, 15 May 2006 20:12:06 -0700
Cc: Anders Saaby <as@xxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <20060516013408.GB1390195@melbourne.sgi.com>
References: <4464E3B5.8020602@wink.com> <20060515132936.GN1331387@melbourne.sgi.com> <4468F967.6090202@wink.com> <4464E3B5.8020602@wink.com> <200605151159.34802.as@cohaesio.com> <4468F30E.3030405@wink.com> <20060516013408.GB1390195@melbourne.sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 1.5.0.2 (X11/20060420)
David Chinner wrote:
> On Mon, May 15, 2006 at 02:30:54PM -0700, Peter Broadwell wrote:
>> My chown did finally finish, some 63 hrs later, at about 75 chowns/sec.
>> This is running on a system with 4 SATA 7200 RPM drives configured with
>> software RAID 10, so it is essentially 2 spindles and we are seeing
>> about 1/3 of the theoretical maximum.

> If you call a SWAG a "theoretical maximum". All your results indicate
> is that my guess was in the same ballpark as reality.

Such a well informed SWAG is an indication of good breeding ;-)
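(For scale: 63 hours at ~75 chowns/sec works out to roughly 17 million
inodes touched.)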

>> In looking around I did see an ioctl, XFS_IOC_FSBULKSTAT, that seemed
>> like it might give a different approach to doing this, but it looked
>> like it was read-only (and lots of work to get anything going with
>> it...). Is this a worthwhile avenue to look at more deeply?

> Read only, and does not follow any directory structure - it just reads
> the inodes off disk in ascending block order....

Well, *if* I had to do this often, I would think a write version of this
ioctl might reduce the number of disk writes by at least 1/3, no? It also
seems funny that I could copy the whole disk in less time than it took me
to chown files that are filling up less than 1/2 of it...
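For the archives, the read-only walk itself doesn't look too bad. Here is
an untested sketch, assuming the xfsprogs development headers are
installed (field names taken from xfs/xfs_fs.h), so take the details with
a grain of salt -- the real work would be in a hypothetical write side:

/*
 * Walk every inode in an XFS filesystem via XFS_IOC_FSBULKSTAT and
 * print its inode number and owner.  No directory traversal at all.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>            /* XFS_IOC_FSBULKSTAT, struct xfs_bstat */

int main(int argc, char *argv[])
{
    struct xfs_fsop_bulkreq req;
    struct xfs_bstat stats[1024];
    __u64 lastino = 0;          /* resume cookie, 0 = start of fs */
    __s32 count = 0;
    int fd, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror("open");
        return 1;
    }

    req.lastip = &lastino;      /* updated by the kernel each call */
    req.icount = 1024;          /* max entries to return per call */
    req.ubuffer = stats;
    req.ocount = &count;

    /* Each call resumes after the last inode returned. */
    while (ioctl(fd, XFS_IOC_FSBULKSTAT, &req) == 0 && count > 0)
        for (i = 0; i < count; i++)
            printf("ino %llu uid %u\n",
                   (unsigned long long)stats[i].bs_ino,
                   (unsigned)stats[i].bs_uid);
    close(fd);
    return 0;
}

Each call resumes from the lastip cookie, which is what gives the
ascending inode order Dave described.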

Fortunately I don't expect to have to do this again, and if I do,
I'll know it will be a long-running process.

Thanks again for your help in understanding what is probably happening.

;;peter


> On Mon, May 15, 2006 at 02:57:59PM -0700, Peter Broadwell wrote:
>> As for load, the chown process would garner only 3-5% of the CPU
>> according to top, but the load average would increase by 1 to 2,
>> bringing it up to ~7.

> A single process being I/O bound like this will contribute 1 to the
> load average.

>> Trying to re-run a small subset of the chowns (to the same user) just
>> now showed similar behavior, but when I ran it a second time it was
>> *very* fast. ;-)

> My guess would be that the first time it ran it needed to read all the
> inodes in off disk. The second time they were in cache, and the subset
> probably fit in the log so the only I/O would be log I/O. Hence the
> second run would be very fast....

>> As for the version of the log, can I upgrade to version 2 on a
>> running system?

> I know there is one on Irix (xfs_chver), which is a perl script wrapper
> for xfs_db, but I'm not sure if there is an equivalent shipped on
> Linux. Nathan?
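(If xfs_db on Linux has picked up the same version command, I'd guess the
conversion would be a one-liner against the unmounted device, something
like:

    # umount /raid
    # xfs_db -x -c "version log2" /dev/md0

-- the device and mountpoint names are just placeholders, and I haven't
verified that the Linux xfs_db accepts "log2" here.)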

> Cheers,
>
> Dave.

