
Re: deep chmod|chown -R begin to start OOMkiller

To: David Chinner <dgc@xxxxxxx>
Subject: Re: deep chmod|chown -R begin to start OOMkiller
From: Peter Broadwell <peter@xxxxxxxx>
Date: Mon, 15 May 2006 14:57:59 -0700
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20060515132936.GN1331387@melbourne.sgi.com>
References: <4464E3B5.8020602@wink.com> <20060515132936.GN1331387@melbourne.sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 1.5.0.2 (X11/20060420)
David -

Thanks first off for your reply as well. It was your old postings
that inspired me to even ask my question...

You're right that there is no OOM killer on my system, so the problem is
perhaps unrelated - I'm not sure how the OOM killer would make it different,
but I don't really know what the OOM killer does at a low level.


As for load, the chown process would garner only 3-5% of the CPU according to top, but the load average would increase by 1 to 2, bringing it up to ~7. Trying to re-run a small subset of the chowns (to the same user) just now showed similar behavior, but when I ran it a second time it was *very* fast. ;-)

As for the log version, can I upgrade to version 2 on a running system?

;;peter



David Chinner wrote:
On Fri, May 12, 2006 at 12:36:21PM -0700, Peter Broadwell wrote:
I seem to be having the same problem CHIKAMA Masaki reported on December 7, 2005,
namely "chown -R" running very slowly when hitting lots of files (~17 million in my case).

The problem is different because there's no OOM killer being invoked, right? All you see is a slowdown? How much CPU time is the chmod consuming?

I'm most interested in anything to (safely) speed this up on a live file system as it
has been running for nearly 24 hours so far... not hung or corrupted anything as far
as I can tell.

Well, doing a chmod on a single file requires an inode read, a log write, and eventually an inode write.

xfs_chashlist   205900 385952   32 112 1 : tunables 120 60 8
xfs_ili         273754 273760  192  20 1 : tunables 120 60 8
xfs_inode       275317 275317  528   7 1 : tunables  54 27 8
xfs_vnode       275316 275316  632   6 1 : tunables  54 27 8
dentry_cache    252909 252909  224  17 1 : tunables 120 60 8
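(Those counters appear to come from /proc/slabinfo; on a 2.6 kernel something like

  grep -E 'xfs_|dentry_cache' /proc/slabinfo

pulls out the relevant rows - the exact slab names vary a little between kernel versions.)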

From the inode to cluster ratio (xfs_inode/xfs_chashlist = 275317/205900, only ~1.3 cached inodes per cluster), you've got very sparse inode clusters, so each inode read and write will do a disk I/O. So, two I/Os per file chmod() plus a log write every few files plus directory reads. That makes it roughly 40 million I/Os to do your recursive chmod.

On a single disk sustaining 200 I/Os per second, I'd expect it to
take more than a couple of days to complete the recursive chmod. Your
filesystem is going to be slow while this is going on as well.
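A rough back-of-the-envelope version of that estimate (file count and I/O rate
as discussed above; the two-I/Os-per-file figure is the assumption that follows
from the sparse inode clusters):

  files=17000000        # ~17 million files in the tree
  ios_per_file=2        # inode read + inode write per file (sparse clusters)
  iops=200              # assumed sustained random I/O rate of a single disk
  echo "I/Os:    $(( files * ios_per_file ))"          # 34,000,000, before log and directory I/O
  echo "seconds: $(( files * ios_per_file / iops ))"   # 170,000 s, i.e. about two days

Add the log writes and directory reads on top of that and you're past the
couple-of-days mark.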

peter@cl1 /data $ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md/1            449447808 338816792 110631016  76% /data

peter@cl1 /data $ xfs_info /data
....
data     =                       bsize=4096   blocks=112394720, imaxpct=25
         =                       sunit=16     swidth=64 blks, unwritten=1

So a 64k stripe unit (sunit=16 x 4k blocks) and a 4-unit wide stripe (swidth=64 blocks).
What RAID level are you using for your stripe? What's the spindle speed of the disks?

log      =internal               bsize=4096   blocks=32768, version=1

With a 128MB (32768 x 4k blocks) version 1 log.

If you were using version 2 logs, I'd suggest using a larger
log buffer size to reduce the number of log writes. That would
help quite a bit. Other than that, I can't think of much you
could tune to help here. When you need to do that many I/Os,
the only thing that speeds it up is to have lots of spindles...
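For reference, the kind of mount options in question would look something like
this (values are only illustrative; logbufs/logbsize take effect at mount time,
and buffer sizes above 32k need a version 2 log):

  # illustrative: more and larger in-core log buffers, version 2 log only
  mount -o logbufs=8,logbsize=262144 /dev/md/1 /data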

Cheers,

Dave.

