
RE: 2.6.11-rc3: 80-95% systime when serving files via NFS

To: "Dave Chinner" <dgc@xxxxxxx>
Subject: RE: 2.6.11-rc3: 80-95% systime when serving files via NFS
From: "Anders Saaby" <as@xxxxxxxxxxxx>
Date: Tue, 8 Feb 2005 01:37:16 +0100
Cc: <linux-xfs@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx

Hi,

On Tue, Feb 08, 2005 at 12:14:26AM +0100, Dave Chinner wrote: 
>How many inodes are cached? Can you dump out the xfs slabs from
>/proc/slabinfo when this is occurring? i.e.
>
>#  egrep "^(xfs|linvfs|dentry)" /proc/slabinfo

Here is my slabinfo after heavy load:

<SNIP>
grep "^(xfs|linvfs|dentry)" /proc/slabinfo

xfs_chashlist     178433 199325     32  119    1 : tunables  120   60    0 : slabdata   1675   1675      0
xfs_ili              269    280    192   20    1 : tunables  120   60    0 : slabdata     14     14      0
xfs_ifork              0      0     64   61    1 : tunables  120   60    0 : slabdata      0      0      0
xfs_efi_item           0      0    352   11    1 : tunables   54   27    0 : slabdata      0      0      0
xfs_efd_item           0      0    360   11    1 : tunables   54   27    0 : slabdata      0      0      0
xfs_buf_item           8     21    184   21    1 : tunables  120   60    0 : slabdata      1      1      0
xfs_dabuf             64    156     24  156    1 : tunables  120   60    0 : slabdata      1      1      0
xfs_da_state           8      8    488    8    1 : tunables   54   27    0 : slabdata      1      1      0
xfs_trans             14     18    864    9    2 : tunables   54   27    0 : slabdata      2      2      0
xfs_inode         417324 423464    512    8    1 : tunables   54   27    0 : slabdata  52933  52933      0
xfs_btree_cur          5     20    192   20    1 : tunables  120   60    0 : slabdata      1      1      0
xfs_bmap_free_item      0      0     24  156    1 : tunables  120   60    0 : slabdata      0      0      0
xfs_buf_t             52    120    384   10    1 : tunables   54   27    0 : slabdata     12     12      0
linvfs_icache     417324 421421    544    7    1 : tunables   54   27    0 : slabdata  60203  60203      0
dentry_cache      345650 401004    216   18    1 : tunables  120   60    0 : slabdata  22278  22278      0
</SNIP>
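
For reference, a rough way to estimate how much memory those slabs pin
(just a back-of-the-envelope sketch, assuming the slabinfo 2.x layout where
the third field is num_objs and the fourth is objsize):

  egrep "^(xfs|linvfs|dentry)" /proc/slabinfo | \
      awk '{ sum += $3 * $4 } END { printf "%.0f MiB\n", sum / 1048576 }'

On the numbers above that works out to roughly 500 MiB for xfs_inode,
linvfs_icache and dentry_cache alone, which accounts for most of the ~700MiB
you estimate below.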

- xfs_inode, linvfs_icache and dentry_cache get even higher after a longer
test run.
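
In case it is useful, I could log the growth during the next run with a
simple loop along these lines (just an ad-hoc sketch, log file name is
arbitrary):

  while true; do
      # Record the three biggest caches once a minute so their growth can
      # be lined up against the climb in system time.
      date
      egrep "^(xfs_inode|linvfs_icache|dentry_cache)" /proc/slabinfo
      sleep 60
  done >> slab-growth.log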

>
>If you have a significant number of cached inodes, and you are
>repeatedly missing the cache, then we will spend a significant
>amount of time searching the cache...

I have quite a lot:
df -i:
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb1            716043376 19405588 696637788    3% /mnt/xfs_test

And part of my test is designed to miss the cache relatively often.

>
>Judging by the amount of memory used and not in the page cache
>(~700MiB from your top output), I'd say there are quite a few
>cached inodes.

Yes.

>
>> NOTE: This behavior does not show unless most of the memory is used for cache
>> (Right after reboot, systime is ~5%).
>
>Probably because there aren't many cached inodes. Can you dump the
>slabinfo at this time as well?

That is correct. The inode cache was very small at that point.

- Does this mean that this is expected behavior?

- If that is the case, do you have any ideas on how I can get my system to
perform better? (Performance gets very poor after approx. two hours of heavy
load.)

Thanks in advance!

//Anders Saaby

