
Re: Performance question

To: Eric Sandeen <sandeen@xxxxxxx>
Subject: Re: Performance question
From: Joshua Baker-LePain <jlb17@xxxxxxxx>
Date: Wed, 18 Feb 2004 16:04:28 -0500 (EST)
Cc: Nathan Scott <nathans@xxxxxxx>, Linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
In-reply-to: <1077137507.14414.18.camel@xxxxxxxxxxxxxxxxxxxxxx>
References: <Pine.LNX.4.58.0402181131210.25541@xxxxxxxxxxxxxxxxxx> <20040219071800.F244261@xxxxxxxxxxxxxxxxxxxxxxxx> <1077137507.14414.18.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Wed, 18 Feb 2004 at 2:51pm, Eric Sandeen wrote

> > Sounds like Eric's area of expertise. :)  Could be another
> > case of inodes not being reclaimed aggressively enough, and
> > OOM follows...?
> Ah... sure... :)

There's nothing like confidence, I always say... ;)

> Can you watch /proc/slabinfo as this happens, is any particular slab
> cache growing extremely large?  Where is the memory going?

I'll schedule some time to reboot into the "newer" kernel to try this.  
I reverted to XFS 1.2 after that behavior (obviously), and can't reboot 
ATM as we're in the midst of archiving the Visible Human data.
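For when I do get a window to watch it, here's a rough sketch of how one
might spot a runaway cache, assuming the 2.4-era /proc/slabinfo column
layout (name, active objs, total objs, object size, ...). The sample file
and its numbers below are made up for illustration; on the live box you'd
point the awk at /proc/slabinfo itself:

```shell
# Hypothetical 2.4-style slabinfo snippet (made-up numbers):
# name active_objs total_objs objsize active_slabs total_slabs pages_per_slab
cat > /tmp/slabinfo.sample <<'EOF'
slabinfo - version: 1.1
xfs_inode 500000 512000 384 51200 51200 1
dentry_cache 20000 21000 128 700 700 1
buffer_head 1000 1024 96 26 26 1
EOF

# Approximate memory per cache (total_objs * objsize, in KB), biggest first
awk 'NR>1 { printf "%s %d\n", $1, $3*$4/1024 }' /tmp/slabinfo.sample \
    | sort -k2 -rn | head
```

If xfs_inode (or dentry_cache/inode_cache) dominates and keeps growing,
that would fit the "inodes not reclaimed aggressively enough" theory.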

> Glen suggested that perhaps your directory with all the inodes is
> terribly fragmented, can you try 
> # xfs_bmap /path/to/big/dir
> and see how many extents it has...

        0: [0..7]: 1417836144..1417836151
        1: [8..15]: 1417835896..1417835903
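For the record, a quick way to summarize xfs_bmap output like the above
(the offsets inside the brackets are in 512-byte basic blocks); the sample
file here is just the two lines pasted above:

```shell
# The two extents captured from the xfs_bmap run above
cat > /tmp/bmap.sample <<'EOF'
        0: [0..7]: 1417836144..1417836151
        1: [8..15]: 1417835896..1417835903
EOF

# Count extents and sum their lengths; the [start..end] ranges are
# file offsets in 512-byte basic blocks
awk -F'[][]' 'NF > 1 { split($2, r, /\.\./); total += r[2] - r[1] + 1; n++ }
    END { printf "%d extents, %d blocks\n", n, total }' /tmp/bmap.sample
# -> 2 extents, 16 blocks
```

i.e. only two extents, so that directory itself doesn't look fragmented.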

I feel like I should clarify that those 3.3M files are spread out in a 
deeply branching tree of subdirectories under 
/data/vcc/vccMuscles_testfiles.  There are ~29K subdirectories in total, 
and they go several levels deep.

Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
