
Re: Memory and quota issues.

To: Steve Lord <lord@xxxxxxx>
Subject: Re: Memory and quota issues.
From: <marchuk@xxxxxxxxxxxxxxxxx>
Date: Tue, 15 May 2001 14:37:10 -0700 (PDT)
Cc: Dana Soward <dragon@xxxxxxxxxxx>, Joshua Baker-LePain <jlb17@xxxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <200105152133.f4FLXGD08163@jen.americas.sgi.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
But my kernel config does not have debug options turned on.  I never turn
on debug options.
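
(A quick way to confirm that is to grep the build's .config for the XFS debug/trace
symbols; a minimal sketch, assuming the option names in the SGI CVS tree are
CONFIG_XFS_DEBUG and CONFIG_XFS_VNODE_TRACING, which is worth verifying against the
tree's own Config.in:)

   # Assumed symbol names; check your tree's Config.in if these don't match.
   grep -iE 'XFS.*(DEBUG|TRACE)' /usr/src/linux/.config
   # Any line ending in =y means the option was compiled in.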

*****************************
Walter Marchuk
Senior Computer Specialist
University of Washington
Electrical Engineering
Room: 307g
206-221-5421
marchuk@xxxxxxxxxxxxxxxxx
*****************************

On Tue, 15 May 2001, Steve Lord wrote:

> > Here you go. FYI, it has only taken about 3.5 days to get to this
> > level of memory loss.  Thanks for the help.
> 
> Ah ha!
> 
> The quick fix is to rebuild your kernel with the xfsdebug and vnode tracing
> options turned off. These are development options; I do not think we should
> have made these available externally.
> 
> Steve
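
(For reference, a minimal sketch of what that rebuild looks like on a 2.4 tree;
the exact menu entries and CONFIG_XFS_* names are an assumption and may differ in
your checkout:)

   # Disable the XFS debug and vnode tracing options, then rebuild (2.4-era sequence).
   cd /usr/src/linux
   make menuconfig        # under the XFS filesystem entries, turn off debug/vnode tracing
   make dep bzImage modules modules_install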
> 
> > 
> > slabinfo - version: 1.1
> > kmem_cache            80    117    100    3    3    1
> > ip_fib_hash           10    113     32    1    1    1
> > dqtrx                  1     20    192    1    1    1
> > dquots                39     44    356    4    4    1
> > ktrace_ent        126277 126280   1024 31570 31570    1
> > ktrace_hdr        126277 126412     20  748  748    1
> > xfs_chashlist       4538   6084     20   36   36    1
> > xfs_ili              700   5778    144  213  214    1
> > xfs_ifork              0      0     56    0    0    1
> > xfs_efi_item           0     15    260    0    1    1
> > xfs_efd_item           0     15    260    0    1    1
> > xfs_buf_item           4     26    152    1    1    1
> > xfs_dabuf              0    202     16    0    1    1
> > xfs_da_state           0     11    340    0    1    1
> > xfs_trans              1    156    320    1   13    1
> > xfs_inode         126273 126280    492 15785 15785    1
> > xfs_btree_cur          0     28    140    0    1    1
> > xfs_bmap_free_item      0      0     16    0    0    1
> > page_buf_t            19    360    160    1   15    1
> > page_buf_reg_t         4    113     32    1    1    1
> > avl_object_t           5    113     32    1    1    1
> > avl_entry_t            7    339     32    1    3    1
> > urb_priv               0      0     32    0    0    1
> > uhci_desc           1038   1062     64   18   18    1
> > ip_mrt_cache           0      0     96    0    0    1
> > tcp_tw_bucket          0     30    128    0    1    1
> > tcp_bind_bucket       13    113     32    1    1    1
> > tcp_open_request       0     40     96    0    1    1
> > inet_peer_cache        0      0     64    0    0    1
> > ip_dst_cache           5     20    192    1    1    1
> > arp_cache              2     30    128    1    1    1
> > nfs_read_data          0      0    384    0    0    1
> > nfs_write_data         0      0    384    0    0    1
> > nfs_page               0      0     96    0    0    1
> > blkdev_requests     2304   2320     96   58   58    1
> > dnotify cache          0      0     20    0    0    1
> > file lock cache        1     42     92    1    1    1
> > fasync cache           0      0     16    0    0    1
> > uid_cache              7    113     32    1    1    1
> > skbuff_head_cache    160    360    192   18   18    1
> > sock                  39     48    928   11   12    1
> > inode_cache       171286 209160    480 26144 26145    1
> > bdev_cache          3290   3304     64   56   56    1
> > sigqueue               0     29    132    0    1    1
> > kiobuf                19    343   1152    9   49    2
> > dentry_cache       89099 160770    128 5359 5359    1
> > dquot                  0      0     96    0    0    1
> > filp                 551    560     96   14   14    1
> > names_cache            0      2   4096    0    2    1
> > buffer_head         5252  45040     96  380 1126    1
> > mm_struct             45     60    128    2    2    1
> > vm_area_struct      1778   2006     64   33   34    1
> > fs_cache              44     59     64    1    1    1
> > files_cache           44     54    416    6    6    1
> > signal_act            48     54   1312   18   18    1
> > size-131072(DMA)       0      0 131072    0    0   32
> > size-131072            0      0 131072    0    0   32
> > size-65536(DMA)        0      0  65536    0    0   16
> > size-65536             9      9  65536    9    9   16
> > size-32768(DMA)        0      0  32768    0    0    8
> > size-32768             0      0  32768    0    0    8
> > size-16384(DMA)        0      0  16384    0    0    4
> > size-16384             3      4  16384    3    4    4
> > size-8192(DMA)         0      0   8192    0    0    2
> > size-8192              1      1   8192    1    1    2
> > size-4096(DMA)         0      0   4096    0    0    1
> > size-4096             47     47   4096   47   47    1
> > size-2048(DMA)         0      0   2048    0    0    1
> > size-2048             42     70   2048   23   35    1
> > size-1024(DMA)         0      0   1024    0    0    1
> > size-1024             44     48   1024   12   12    1
> > size-512(DMA)          0      0    512    0    0    1
> > size-512             314    320    512   40   40    1
> > size-256(DMA)          0      0    256    0    0    1
> > size-256            1020   1230    256   75   82    1
> > size-128(DMA)          0      0    128    0    0    1
> > size-128            4021   4800    128  159  160    1
> > size-64(DMA)           0      0     64    0    0    1
> > size-64            41097  45548     64  772  772    1
> > size-32(DMA)           0      0     32    0    0    1
> > size-32            26240  45313     32  356  401    1
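
(To see at a glance where the memory in a listing like the one above has gone, rank
the caches by total objects times object size; a rough sketch, assuming the slabinfo
1.1 column order of name, active objects, total objects, object size, ...:)

   # Approximate per-cache footprint in KB, largest first (slabinfo 1.1 layout assumed;
   # multi-word cache names are truncated to their first word).
   awk 'NR > 1 { printf "%8.0f KB  %s\n", $(NF-4) * $(NF-3) / 1024, $1 }' /proc/slabinfo | sort -rn | head

On the numbers above that puts ktrace_ent at roughly 123 MB, which is the tracing
overhead Steve's fix is aimed at.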
> > 
> > On Tue, 15 May 2001, Steve Lord wrote:
> > 
> > > 
> > > Could you send the output of 
> > > 
> > >   cat /proc/slabinfo
> > > 
> > > This will tell us where the memory might be if it is in the kernel.
> > > 
> > > Steve
> > > 
> > > 
> > > > This is the output of free.  It IS using it; it's not cached.
> > > > 
> > > > fudd:/home# free -m
> > > >              total       used       free     shared    buffers     cached
> > > > Mem:           374        371          3          0          0         28
> > > > -/+ buffers/cache:        342         32
> > > > Swap:          384          4        380
> > > > 
> > > > This is my partition layout as well:
> > > > 
> > > > fudd:/home# df -h
> > > > Filesystem            Size  Used Avail Use% Mounted on
> > > > /dev/ide/host2/bus0/target0/lun0/part5
> > > >                       1.4G  157M  1.2G  12% /
> > > > /dev/ide/host2/bus0/target0/lun0/part1
> > > >                        15M  7.7M  7.7M  50% /boot
> > > > /dev/ide/host2/bus0/target0/lun0/part6
> > > >                       2.4G  1.3G  1.1G  52% /usr
> > > > /dev/ide/host2/bus0/target0/lun0/part7
> > > >                       1.9G   98M  1.8G   6% /var
> > > > /dev/ide/host2/bus0/target0/lun0/part9
> > > >                        27G   21G  6.2G  78% /home
> > > > 
> > > > All but / are xfs partitions.
> > > > 
> > > > Dana
> > > > 
> > > > On Tue, 15 May 2001, Joshua Baker-LePain wrote:
> > > > 
> > > > > On Tue, 15 May 2001 at 1:47pm, Dana Soward wrote
> > > > > 
> > > > > > Is anyone else having memory problems with the CVS kernel?  I've got a
> > > > > > server here with 384MB, and it's using 325 of it right now.  It should be
> > > > > > using about 30, tops.  It *might* be something to do with Debian woody,
> > > > > > but I want to make sure no one else is having XFS issues.  Also, I can't see m
> > > > > 
> > > > > Are you sure that all that memory is being used?  The 2.4 kernel is very
> > > > > aggressive when it comes to caching (which is a good thing).  What does
> > > > > the output of 'free' say?
> > > > > 
> > > > > In general, you *want* all your memory used up.  You just don't want
> > > > > running processes to be the ones using it all.
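
(A one-liner for pulling out the number Joshua is describing, i.e. memory still in
use once buffers and cache are discounted; a small sketch against the procps
'free -m' layout quoted elsewhere in this thread:)

   # Memory actually held by programs vs. still available, in MB.
   free -m | awk '/buffers\/cache/ { print "used by programs: " $3 " MB,  available: " $4 " MB" }'

Against Dana's output that still reports about 342 MB in use after discounting
cache, which is why the slab numbers matter here.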
> > > > > 
> > > > > -- 
> > > > > Joshua Baker-LePain
> > > > > Department of Biomedical Engineering
> > > > > Duke University
> > > > > 
> > > 
> > > 
> 
> 

