I'm seeing similar things here as well.
On this system most everything looks fine except for inode_cache and
xfs_inode, which are orders of magnitude larger than the other slabs. I
*am* using logbufs=8; I assume that's part of it, or will that only
affect fs_cache and files_cache?
inode_cache        112628 112752 480 14094 14094 1 : 124  62
xfs_chashlist        4040   4040  16    20    20 1 : 252 126
xfs_ili               868    868 136    31    31 1 : 252 126
xfs_ifork               0      0  56     0     0 1 : 252 126
xfs_efi_item          180    180 260    12    12 1 : 124  62
xfs_efd_item          180    180 260    12    12 1 : 124  62
xfs_buf_item          364    364 148    14    14 1 : 252 126
xfs_dabuf             404    404  16     2     2 1 : 252 126
xfs_da_state           44     44 340     4     4 1 : 124  62
xfs_trans             272    396 320    30    33 1 : 124  62
xfs_inode          112360 112360 468 14045 14045 1 : 124  62
xfs_btree_cur         112    112 140     4     4 1 : 252 126
xfs_bmap_free_item    404    404  16     2     2 1 : 252 126
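
If it helps, one quick way to eyeball which caches are pinning the most
memory is to multiply the total-slabs column by pages-per-slab in the
2.4-style slabinfo output (4K pages assumed; the one-liner is just a
rough sketch, nothing XFS-specific):

  awk '$2 ~ /^[0-9]+$/ { printf "%-20s %.1f MB\n", $1, $6*$7*4096/1048576 }' \
      /proc/slabinfo | sort -rn -k2 | head

By that arithmetic, inode_cache and xfs_inode above each come out to
roughly 55MB (about 14,000 single-page slabs apiece), while everything
else is down in the kilobytes.
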
On Mon, 2002-02-25 at 11:56, Sebastian Kun wrote:
> Hi,
>
> I've been doing some SPEC SFS97 (http://www.spec.org/osg/sfs97r1/)
> testing for my company. SFS is a benchmark for evaluating NFS
> performance, measuring both throughput (NFS ops/sec) and latency (ORT,
> overall response time). In our case, the benchmark created a 50GB
> fileset, consisting of 2 million files and 65000 directories. During
> the test, approximately 10% of the fileset was accessed (read, write,
> getattr, lookup, etc.).
>
> I have some questions about some unusual behaviour I've noticed under
> XFS. There seems to be a problem with freeing up memory used for the
> inode cache. During the test run, I periodically checked the amount of
> free memory using the top command:
>
> Mem: 2063220K av, 2050000K used, 13220K free, 0K shrd, 640K buff
>      138420K cached
>
> There's a lot of memory unaccounted for (even taking into account
> userspace apps). I ran 'cat /proc/slabinfo' which came up with the
> following entries of interest:
>
> xfs_ili      505795  505848 136  18066  18066
> xfs_inode   1433589 1501336 468 187667 187667
> inode_cache 1188597 1279404 512 182772 182772
>
> (The format of these entries is [name] [active_obj] [total_obj]
> [obj_size] [active_pages] [total_pages])
>
> As you can see, the xfs_inode cache takes up over 180,000 pages, or
> around 750MB of memory. The inode_cache takes up another 700MB. Even
> after doing several gigabytes of I/O to another filesystem (reiserfs),
> the memory used by the inode caches was still around 1GB. This memory
> is only freed when the filesystem is unmounted.
>
> Question 1: Is this behaviour normal for XFS?
> Question 2: Is there any way to limit the amount of memory used by the
> inode cache?
>
> System stats are:
>
> Running on a dual 1GHz Intel STL2, 2GB RAM, 14-disk 1TB RAID0 array.
> Kernel version: 2.4.16 SMP, 4GB Highmem, 2GB/2GB split
> XFS patch: xfs-2.4.16-all-i386 (December 3), compiled into the kernel
> mkfs.xfs -f -d agsize=1g,sunit=128,swidth=1792 /dev/rze1
> mount /dev/rze1 /sfs -o defaults,noatime,sunit=128,swidth=1792,logbufs=8
>
> Thank you,
>
> Sebastian Kun
> Consensys Corp.
> http://www.consensys.com/
>
--
Austin Gonyou
Systems Architect, CCNA
Coremetrics, Inc.
Phone: 512-698-7250
email: austin@xxxxxxxxxxxxxxx
"It is the part of a good shepherd to shear his flock, not to skin it."
Latin Proverb