> Here is a little more info from my machine. I suspect the behavior will be
> more pronounced either this evening or tomorrow morning, as my machine will
> have been up longer. The machine has been mostly idle and no processes
> appear to be leaking, yet more and more memory ends up being pushed into
> swap space. After another day or two, the "free" memory listed in
> /proc/meminfo will drop to around 4M and the "cached" memory will drop to
> around 40M or so. The majority of process memory ends up being moved into
> swap space and the machine starts to slowly grind. I'll see if I can leave
> this up another day or two and post some more numbers then.
There is nothing wrong with the numbers you sent. Here are mine, from a
very responsive box (I am typing on it now):
kmem_cache 102 102 232 6 6 1 : 252 126
nfs_read_data 150 150 384 15 15 1 : 124 62
nfs_write_data 128 190 384 14 19 1 : 124 62
nfs_page 274 400 96 7 10 1 : 252 126
tcp_tw_bucket 80 80 96 2 2 1 : 252 126
tcp_bind_bucket 226 226 32 2 2 1 : 252 126
tcp_open_request 59 59 64 1 1 1 : 252 126
inet_peer_cache 177 177 64 3 3 1 : 252 126
ip_fib_hash 10 226 32 2 2 1 : 252 126
ip_dst_cache 240 240 160 10 10 1 : 252 126
arp_cache 90 90 128 3 3 1 : 252 126
blkdev_requests 1536 1560 96 39 39 1 : 252 126
xfs_chashlist 3851 6464 16 32 32 1 : 252 126
xfs_ili 2839 3164 136 113 113 1 : 252 126
xfs_ifork 0 0 56 0 0 1 : 252 126
xfs_efi_item 75 75 260 5 5 1 : 124 62
xfs_efd_item 90 90 260 6 6 1 : 124 62
xfs_buf_item 208 208 148 8 8 1 : 252 126
xfs_dabuf 202 202 16 1 1 1 : 252 126
xfs_da_state 22 22 340 2 2 1 : 124 62
xfs_trans 134 252 320 16 21 1 : 124 62
xfs_inode 18405 46432 468 4764 5804 1 : 124 62
xfs_btree_cur 56 56 140 2 2 1 : 252 126
xfs_bmap_free_item 185 202 16 1 1 1 : 252 126
page_buf_t 299 540 192 26 27 1 : 252 126
page_buf_reg_t 3 40 96 1 1 1 : 252 126
avl_object_t 3 113 32 1 1 1 : 252 126
avl_entry_t 285 1130 32 6 10 1 : 252 126
dnotify cache 0 0 20 0 0 1 : 252 126
file lock cache 168 168 92 4 4 1 : 252 126
fasync cache 1 202 16 1 1 1 : 252 126
uid_cache 5 226 32 2 2 1 : 252 126
skbuff_head_cache 482 528 160 21 22 1 : 252 126
sock 225 225 832 25 25 2 : 124 62
sigqueue 261 261 132 9 9 1 : 252 126
cdev_cache 2810 3068 64 52 52 1 : 252 126
bdev_cache 9469 9735 64 164 165 1 : 252 126
mnt_cache 16 160 96 3 4 1 : 252 126
inode_cache 19838 38400 480 4800 4800 1 : 124 62
dentry_cache 9727 32580 128 1086 1086 1 : 252 126
dquot 0 0 96 0 0 1 : 252 126
filp 2329 2400 96 60 60 1 : 252 126
names_cache 33 33 4096 33 33 1 : 60 30
buffer_head 19064 28640 96 477 716 1 : 252 126
mm_struct 330 330 128 11 11 1 : 252 126
vm_area_struct 2523 3186 64 54 54 1 : 252 126
fs_cache 230 354 64 6 6 1 : 252 126
files_cache 207 207 416 23 23 1 : 124 62
signal_act 150 150 1312 50 50 1 : 60 30
size-131072(DMA) 0 0 131072 0 0 32 : 0 0
size-131072 0 0 131072 0 0 32 : 0 0
size-65536(DMA) 0 0 65536 0 0 16 : 0 0
size-65536 8 8 65536 8 8 16 : 0 0
size-32768(DMA) 0 0 32768 0 0 8 : 0 0
size-32768 0 3 32768 0 3 8 : 0 0
size-16384(DMA) 0 0 16384 0 0 4 : 0 0
size-16384 11 16 16384 11 16 4 : 0 0
size-8192(DMA) 0 0 8192 0 0 2 : 0 0
size-8192 3 6 8192 3 6 2 : 0 0
size-4096(DMA) 0 0 4096 0 0 1 : 60 30
size-4096 94 124 4096 94 124 1 : 60 30
size-2048(DMA) 0 0 2048 0 0 1 : 60 30
size-2048 122 152 2048 72 76 1 : 60 30
size-1024(DMA) 0 0 1024 0 0 1 : 124 62
size-1024 252 252 1024 63 63 1 : 124 62
size-512(DMA) 0 0 512 0 0 1 : 124 62
size-512 236 360 512 35 45 1 : 124 62
size-256(DMA) 0 0 256 0 0 1 : 252 126
size-256 723 975 256 65 65 1 : 252 126
size-128(DMA) 0 0 128 0 0 1 : 252 126
size-128 3219 3750 128 125 125 1 : 252 126
size-64(DMA) 0 0 64 0 0 1 : 252 126
size-64 1926 2714 64 46 46 1 : 252 126
size-32(DMA) 0 0 32 0 0 1 : 252 126
size-32 2184 9605 32 84 85 1 : 252 126
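If you want to see how much memory each of these caches is actually
pinning, here is a rough sketch (my own, not anything shipped with XFS)
that totals slabs x pages-per-slab x page size from /proc/slabinfo. It
assumes the 2.4 "slabinfo - version: 1.1" column layout and 4 kB pages,
which matches both of our i686 boxes:

    #!/usr/bin/env python
    # Rough sketch: how much memory is each slab cache pinning?
    # Assumes the 2.4 "slabinfo - version: 1.1" column layout
    # (active-objs num-objs objsize active-slabs num-slabs pages-per-slab)
    # and 4 kB pages.
    PAGE_SIZE = 4096

    caches = []
    for line in open("/proc/slabinfo"):
        fields = line.split(":")[0].split()  # drop the SMP limit/batch tail
        if len(fields) < 7:
            continue                         # version header or short line
        name = " ".join(fields[:-6])         # cache names can contain spaces
        try:
            num_slabs = int(fields[-2])
            pages_per_slab = int(fields[-1])
        except ValueError:
            continue
        caches.append((num_slabs * pages_per_slab * PAGE_SIZE, name))

    # Print the ten biggest caches in kilobytes.
    for nbytes, name in sorted(caches, reverse=True)[:10]:
        print("%8d kB  %s" % (nbytes // 1024, name))

On the dump you posted that puts xfs_inode (4673 slabs, one page each,
about 18 MB) and inode_cache (roughly the same) at the top by a wide
margin.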
There are almost certainly cron jobs which run overnight and scan the
whole disk looking for stuff; these push a lot of entries into the
inode cache and, in your case, a lot of data into the filesystem cache.
All of this should be reclaimable memory as soon as something else
wants it.
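If you want to confirm that theory, a hypothetical way to reproduce the
overnight effect by hand is to lstat every inode under some big tree
(the path below is just a placeholder) and watch inode_cache and
xfs_inode in /proc/slabinfo grow while it runs:

    #!/usr/bin/env python
    # Hypothetical reproduction of the overnight cron-scan effect:
    # lstat every inode under a tree, the way an updatedb-style scan
    # does, then diff /proc/slabinfo before and after.
    import os, sys

    root = sys.argv[1] if len(sys.argv) > 1 else "/usr"  # placeholder path
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                os.lstat(os.path.join(dirpath, name))  # pulls the inode in
                count += 1
            except OSError:
                pass                                   # file vanished, skip
    print("stat'ed %d inodes under %s" % (count, root))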
What concerns me is that this is forcing the system into swap instead
of reclaiming some of that memory. I suspect this is a Redhat-ism, so
to speak; it is not behaviour I normally see. Usually it is hard to get
things onto the swap device unless you really pound on the system.
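Rather than single snapshots, it would help to see the numbers over
time. A rough logger along these lines, run from cron every few
minutes with its output appended to a file, would capture the drift;
the field names are the ones from the /proc/meminfo you posted:

    #!/usr/bin/env python
    # Rough logger for the drift described above -- run from cron every
    # few minutes and eyeball the columns after a day or two. Field
    # names are the 2.4 ones ("MemFree", "Cached", "SwapFree").
    import time

    WANT = ("MemFree", "Cached", "SwapFree")
    vals = {}
    for line in open("/proc/meminfo"):
        fields = line.split()
        if fields and fields[0].rstrip(":") in WANT:
            vals[fields[0].rstrip(":")] = fields[1]    # value in kB

    print("%s %s" % (time.strftime("%Y-%m-%d %H:%M:%S"),
                     " ".join(["%s=%s kB" % (k, vals.get(k, "?"))
                               for k in WANT])))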
Could you try the 2.4.5-based RPM instead and see whether it exhibits
the same behaviour?
Thanks
Steve
>
> % uname -a
> Linux nevermore.toe.doomcom.org 2.4.3-SGI_XFS_1.0.1smp #1 SMP Mon Jul 9 14:03:40 CDT 2001 i686 unknown
>
> % uptime
> 8:34am up 1 day, 11:15, 3 users, load average: 0.04, 0.05, 0.01
>
> % cat /proc/meminfo
> total: used: free: shared: buffers: cached:
> Mem: 392212480 348160000 44052480 897024 0 109670400
> Swap: 266256384 24064000 242192384
> MemTotal: 383020 kB
> MemFree: 43020 kB
> MemShared: 876 kB
> Buffers: 0 kB
> Cached: 107100 kB
> Active: 84196 kB
> Inact_dirty: 20888 kB
> Inact_clean: 2892 kB
> Inact_target: 6408 kB
> HighTotal: 0 kB
> HighFree: 0 kB
> LowTotal: 383020 kB
> LowFree: 43020 kB
> SwapTotal: 260016 kB
> SwapFree: 236516 kB
>
> % cat /proc/slabinfo
> slabinfo - version: 1.1 (SMP)
> kmem_cache 102 102 232 6 6 1 : 252 126
> ip_conntrack 50 50 384 5 5 1 : 124 62
> ip_fib_hash 22 226 32 2 2 1 : 252 126
> clip_arp_cache 0 0 128 0 0 1 : 252 126
> ip_mrt_cache 0 0 96 0 0 1 : 252 126
> tcp_tw_bucket 0 0 128 0 0 1 : 252 126
> tcp_bind_bucket 15 226 32 2 2 1 : 252 126
> tcp_open_request 0 0 96 0 0 1 : 252 126
> inet_peer_cache 1 59 64 1 1 1 : 252 126
> ip_dst_cache 37 140 192 7 7 1 : 252 126
> arp_cache 7 30 128 1 1 1 : 252 126
> xfs_chashlist 3368 8282 16 41 41 1 : 252 126
> xfs_ili 324 588 136 21 21 1 : 252 126
> xfs_ifork 2 134 56 2 2 1 : 252 126
> xfs_efi_item 15 15 260 1 1 1 : 124 62
> xfs_efd_item 15 15 260 1 1 1 : 124 62
> xfs_buf_item 70 78 148 3 3 1 : 252 126
> xfs_dabuf 202 202 16 1 1 1 : 252 126
> xfs_da_state 0 11 340 0 1 1 : 124 62
> xfs_trans 21 48 320 3 4 1 : 124 62
> xfs_inode 32958 37384 468 4673 4673 1 : 124 62
> xfs_btree_cur 28 28 140 1 1 1 : 252 126
> xfs_bmap_free_item 126 202 16 1 1 1 : 252 126
> page_buf_t 139 320 192 16 16 1 : 252 126
> page_buf_reg_t 3 80 96 2 2 1 : 252 126
> avl_object_t 3 226 32 2 2 1 : 252 126
> avl_entry_t 127 452 32 4 4 1 : 252 126
> blkdev_requests 3840 3880 96 97 97 1 : 252 126
> dnotify cache 0 0 20 0 0 1 : 252 126
> file lock cache 126 126 92 3 3 1 : 252 126
> fasync cache 1 202 16 1 1 1 : 252 126
> uid_cache 10 226 32 2 2 1 : 252 126
> skbuff_head_cache 177 480 160 20 20 1 : 252 126
> sock 209 270 1280 90 90 1 : 60 30
> inode_cache 33171 37160 480 4645 4645 1 : 124 62
> bdev_cache 23 118 64 2 2 1 : 252 126
> sigqueue 261 261 132 9 9 1 : 252 126
> kiobuf 0 0 128 0 0 1 : 252 126
> dentry_cache 16941 27840 128 928 928 1 : 252 126
> dquot 0 0 128 0 0 1 : 252 126
> filp 2068 2080 96 52 52 1 : 252 126
> names_cache 2 2 4096 2 2 1 : 60 30
> buffer_head 11400 11400 96 285 285 1 : 252 126
> mm_struct 191 210 128 7 7 1 : 252 126
> vm_area_struct 2649 8319 64 102 141 1 : 252 126
> fs_cache 191 236 64 4 4 1 : 252 126
> files_cache 126 126 416 14 14 1 : 124 62
> signal_act 99 99 1312 33 33 1 : 60 30
> size-131072(DMA) 0 0 131072 0 0 32 : 0 0
> size-131072 0 0 131072 0 0 32 : 0 0
> size-65536(DMA) 0 0 65536 0 0 16 : 0 0
> size-65536 7 7 65536 7 7 16 : 0 0
> size-32768(DMA) 0 0 32768 0 0 8 : 0 0
> size-32768 5 5 32768 5 5 8 : 0 0
> size-16384(DMA) 0 0 16384 0 0 4 : 0 0
> size-16384 4 5 16384 4 5 4 : 0 0
> size-8192(DMA) 0 0 8192 0 0 2 : 0 0
> size-8192 9 9 8192 9 9 2 : 0 0
> size-4096(DMA) 0 0 4096 0 0 1 : 60 30
> size-4096 54 54 4096 54 54 1 : 60 30
> size-2048(DMA) 1 2 2048 1 1 1 : 60 30
> size-2048 68 68 2048 34 34 1 : 60 30
> size-1024(DMA) 0 0 1024 0 0 1 : 124 62
> size-1024 268 268 1024 67 67 1 : 124 62
> size-512(DMA) 0 0 512 0 0 1 : 124 62
> size-512 216 216 512 27 27 1 : 124 62
> size-256(DMA) 0 0 256 0 0 1 : 252 126
> size-256 424 495 256 33 33 1 : 252 126
> size-128(DMA) 0 0 128 0 0 1 : 252 126
> size-128 1342 2160 128 72 72 1 : 252 126
> size-64(DMA) 0 0 64 0 0 1 : 252 126
> size-64 6485 6490 64 110 110 1 : 252 126
> size-32(DMA) 0 0 32 0 0 1 : 252 126
> size-32 1534 6102 32 54 54 1 : 252 126