http://oss.sgi.com/bugzilla/show_bug.cgi?id=209
------- Additional Comments From tdickson@xxxxxxxxxxx 2003-10-12 14:40 PDT -------
OK. I'm going to try on this system:
a PIII with 256 MB of RAM (to make it happen faster).
2.4.24pre1 still fails; I'll get details soon. I'll record the entire
slabinfo page, along with trying this on ext3 and reiserfs.
(Below, xfs_inode seems to be about double the inode_cache.)
I'll submit the scripts and reports hopefully today.
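The recording loop could be as simple as the sketch below. This is my own
shape for it, not the actual scripts mentioned above; the function name is
hypothetical, and $SLABINFO is an override I added so it can be exercised
against a saved copy instead of the live /proc file:

```shell
#!/bin/sh
# snapshot_slabinfo LOGFILE NSAMPLES
#   Appends NSAMPLES timestamped copies of the slab listing to LOGFILE,
#   one per minute, so growth of caches like xfs_inode can be graphed
#   afterwards.  Reads $SLABINFO if set, else /proc/slabinfo.
snapshot_slabinfo() {
    out=$1                            # log file to append to
    n=$2                              # number of samples to take
    src=${SLABINFO:-/proc/slabinfo}
    i=0
    while [ "$i" -lt "$n" ]; do
        printf '=== %s ===\n' "$(date '+%s')" >> "$out"  # epoch stamp
        cat "$src" >> "$out"
        i=$((i + 1))
        if [ "$i" -lt "$n" ]; then
            sleep 60                  # one sample per minute
        fi
    done
}

# e.g.  snapshot_slabinfo /var/tmp/slab.log 1440    # about a day's worth
```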
Here's a teaser for now (from the production system; we upped it to 2 GB of RAM
and it seems to be running...):
bash-2.05a# more /proc/slabinfo
slabinfo - version: 1.1
kmem_cache 76 108 108 3 3 1
xfs_dqtrx 0 0 192 0 0 1
xfs_dquots 98 108 324 9 9 1
xfs_acl 0 0 304 0 0 1
xfs_chashlist 25514 37572 16 186 186 1
xfs_ili 1182 1204 140 43 43 1
xfs_ifork 0 0 56 0 0 1
xfs_efi_item 0 0 260 0 0 1
xfs_efd_item 0 0 260 0 0 1
xfs_buf_item 0 0 148 0 0 1
xfs_dabuf 0 202 16 0 1 1
xfs_da_state 0 0 336 0 0 1
xfs_trans 0 0 588 0 0 2
xfs_inode 147529 313130 368 31313 31313 1
xfs_btree_cur 0 0 132 0 0 1
xfs_bmap_free_item 0 0 12 0 0 1
page_buf_t 42 60 192 3 3 1
linvfs_icache 147529 290224 480 36278 36278 1
tcp_tw_bucket 0 0 96 0 0 1
tcp_bind_bucket 17 113 32 1 1 1
tcp_open_request 0 0 64 0 0 1
inet_peer_cache 8 59 64 1 1 1
ip_fib_hash 14 113 32 1 1 1
ip_dst_cache 82 96 160 4 4 1
arp_cache 21 40 96 1 1 1
blkdev_requests 384 400 96 10 10 1
nfs_write_data 0 0 352 0 0 1
nfs_read_data 0 0 320 0 0 1
nfs_page 0 0 96 0 0 1
dnotify_cache 0 0 20 0 0 1
file_lock_cache 44 84 92 2 2 1
fasync_cache 0 0 16 0 0 1
uid_cache 3 113 32 1 1 1
skbuff_head_cache 291 360 160 15 15 1
sock 122 125 768 25 25 1
sigqueue 0 0 132 0 0 1
kiobuf 2 59 64 1 1 1
cdev_cache 11 118 64 2 2 1
bdev_cache 9 59 64 1 1 1
mnt_cache 20 59 64 1 1 1
inode_cache 776 1160 480 145 145 1
dentry_cache 70294 281970 128 9399 9399 1
dquot 0 0 128 0 0 1
filp 959 960 128 32 32 1
names_cache 0 2 4096 0 2 1
buffer_head 394215 396320 96 9908 9908 1
mm_struct 55 60 128 2 2 1
vm_area_struct 2755 2800 96 69 70 1
fs_cache 54 113 32 1 1 1
files_cache 54 54 416 6 6 1
signal_act 71 72 1312 24 24 1
size-131072(DMA) 0 0 131072 0 0 32
size-131072 0 0 131072 0 0 32
size-65536(DMA) 0 0 65536 0 0 16
size-65536 0 0 65536 0 0 16
size-32768(DMA) 0 0 32768 0 0 8
size-32768 4 4 32768 4 4 8
size-16384(DMA) 1 1 16384 1 1 4
size-16384 8 9 16384 8 9 4
size-8192(DMA) 0 0 8192 0 0 2
size-8192 8 8 8192 8 8 2
size-4096(DMA) 0 0 4096 0 0 1
size-4096 257 271 4096 257 271 1
size-2048(DMA) 0 0 2048 0 0 1
size-2048 24 50 2048 12 25 1
size-1024(DMA) 0 0 1024 0 0 1
size-1024 265 268 1024 67 67 1
size-512(DMA) 0 0 512 0 0 1
size-512 367 368 512 46 46 1
size-256(DMA) 0 0 256 0 0 1
size-256 3882 6825 256 455 455 1
size-128(DMA) 0 0 128 0 0 1
size-128 19565 34950 128 1165 1165 1
size-64(DMA) 0 0 64 0 0 1
size-64 17524 31683 64 537 537 1
size-32(DMA) 0 0 32 0 0 1
size-32 22466 112661 32 997 997 1
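Reading the listing above: if the 1.1 columns are, as I read them, name,
active objects, total objects, object size, active slabs, total slabs, and
pages per slab, then the memory a cache pins is roughly total slabs times
pages per slab times the 4 KB page size. That makes xfs_inode here about
31313 x 4 KB = 122 MB and linvfs_icache about 36278 x 4 KB = 142 MB; the two
together already exceed the 256 MB test box. A rough "what's eating memory"
summary under those assumptions (the function name is mine):

```shell
#!/bin/sh
# slabmem [FILE] -- print each cache's pinned memory in KB, largest
# first, assuming the 1.1 column order and 4 KB pages.
slabmem() {
    # $1: slabinfo file (defaults to /proc/slabinfo)
    awk 'NF == 7 && $2 ~ /^[0-9]+$/ {
             kb = $6 * $7 * 4        # total slabs * pages/slab * 4 KB
             printf "%-20s %8d KB\n", $1, kb
         }' "${1:-/proc/slabinfo}" | sort -k2 -rn | head
}
```

Run against the numbers above, this puts linvfs_icache (~142 MB), xfs_inode
(~122 MB), buffer_head (~39 MB), and dentry_cache (~37 MB) at the top.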
Note that right after the tree run on this system, the xfs_inode line was
around 750000, with 533 MB of RAM used.
I'll have more details soon.
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.