
Re: 10GB memorys occupied by XFS

To: daiguochao <dx-wl@xxxxxxx>, xfs@xxxxxxxxxxx
Subject: Re: 10GB memorys occupied by XFS
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 11 Apr 2014 16:35:45 -0500
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <1397184044761-35016.post@xxxxxxxxxxxxx>
References: <1396596386220-35015.post@xxxxxxxxxxxxx> <1397184044761-35016.post@xxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:24.0) Gecko/20100101 Thunderbird/24.4.0
On 4/10/2014 9:40 PM, daiguochao wrote:
> Dear Stan, I can't send email to you directly, so I am leaving a message
> here. I hope this does not bother you.
> Thank you for your kind assistance.

I received all of the ones you sent to the list and that should always
be the case.  One that you sent directly to me was rejected but I think
I've fixed that now.  And I think my delayed reply made things seem
worse than they are.

Anyway, Dave replied while I was typing my last response.  He'll be much
more able to assist you.  Your problem seems beyond the edge of my
knowledge.

Cheers,

Stan



> In accordance with your suggestion, we executed "echo 3 >
> /proc/sys/vm/drop_caches" to try to release the VFS dentries and inodes,
> and indeed our lost memory came back. But as we understand it, the memory
> for VFS dentries and inodes is allocated from the slab caches. Our
> /proc/meminfo shows "Slab: 509708 kB", so the slab accounts for only
> about 500 MB, of which xfs_buf takes up roughly 450 MB. Meanwhile
> /proc/meminfo indicates that our system memory is anomalous: about 10 GB
> is missing from the statistics. We want to know how we can observe the
> memory used by VFS dentries and inodes through a system interface. If
> that memory usage is not reflected in /proc/meminfo and we cannot find
> it in any statistics, then we suspect a bug in XFS.
> 
> The vm.vfs_cache_pressure on our Linux system is 100. We expected the
> kernel to reclaim this memory proactively when memory runs low, rather
> than the oom-killer killing our worker processes. The /proc/meminfo
> output captured while the problem was occurring is below:
> 130> cat /proc/meminfo 
> MemTotal:       12173268 kB 
> MemFree:          223044 kB 
> Buffers:             244 kB 
> Cached:             4540 kB 
> SwapCached:            0 kB 
> Active:             1700 kB 
> Inactive:           5312 kB 
> Active(anon):       1616 kB 
> Inactive(anon):     1128 kB 
> Active(file):         84 kB 
> Inactive(file):     4184 kB 
> Unevictable:           0 kB 
> Mlocked:               0 kB 
> SwapTotal:             0 kB 
> SwapFree:              0 kB 
> Dirty:                 0 kB 
> Writeback:             0 kB 
> AnonPages:          2556 kB 
> Mapped:             1088 kB 
> Shmem:               196 kB 
> Slab:             509708 kB 
> SReclaimable:       7596 kB 
> SUnreclaim:       502112 kB 
> KernelStack:        1096 kB 
> PageTables:          748 kB 
> NFS_Unstable:          0 kB 
> Bounce:                0 kB 
> WritebackTmp:          0 kB 
> CommitLimit:     6086632 kB 
> Committed_AS:       9440 kB 
> VmallocTotal:   34359738367 kB 
> VmallocUsed:      303488 kB 
> VmallocChunk:   34359426132 kB 
> HardwareCorrupted:     0 kB 
> AnonHugePages:         0 kB 
> HugePages_Total:       0 
> HugePages_Free:        0 
> HugePages_Rsvd:        0 
> HugePages_Surp:        0 
> Hugepagesize:       2048 kB 
> DirectMap4k:        6152 kB 
> DirectMap2M:     2070528 kB 
> DirectMap1G:    10485760 kB
> 
> Best Regards,
> 
> Guochao
> 
> 
> 
> --
> View this message in context: 
> http://xfs.9218.n7.nabble.com/10GB-memorys-occupied-by-XFS-tp35015p35016.html
> Sent from the Xfs - General mailing list archive at Nabble.com.
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
> 
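For reference, the per-cache numbers the reporter is after (dentry, xfs_inode, xfs_buf) can be read from /proc/slabinfo, where memory per cache is approximately num_objs * objsize. The sketch below assumes slabinfo's standard column layout (name, active_objs, num_objs, objsize, ...); the sample rows are invented for illustration and are not the reporter's actual data.

```python
# Sketch: estimating per-cache slab memory from a /proc/slabinfo-style
# listing. Standard columns: name active_objs num_objs objsize
# objperslab pagesperslab ...
# The rows below are ILLUSTRATIVE sample data, not real output.
SAMPLE_SLABINFO = """\
xfs_inode   460800  460800  1024  4  1
xfs_ili     300000  300000   152 26  1
dentry      510000  510000   192 21  1
xfs_buf    1150000 1150000   384 10  1
"""

def slab_usage_kb(slabinfo_text):
    """Return {cache_name: approximate kB} using num_objs * objsize."""
    usage = {}
    for line in slabinfo_text.strip().splitlines():
        fields = line.split()
        name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
        usage[name] = num_objs * objsize // 1024
    return usage

for name, kb in slab_usage_kb(SAMPLE_SLABINFO).items():
    print(f"{name:10s} ~{kb} kB")
```

On a live system the same arithmetic applies to the real /proc/slabinfo (reading it may require root), and slabtop presents the same data interactively. Note also that "echo 2 > /proc/sys/vm/drop_caches" frees only the reclaimable dentries and inodes, without dropping the page cache as "echo 3" does.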

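A quick cross-check of the snapshot quoted above: summing the meminfo fields that account for where memory has gone and comparing against MemTotal reproduces the roughly 10 GB gap the reporter describes. The field selection is an approximation, since /proc/meminfo categories overlap and do not cover every kernel allocation.

```python
# Rough accounting of the /proc/meminfo snapshot quoted above: sum the
# fields that describe in-use or free memory and compare against
# MemTotal. Values are copied verbatim from the reporter's output.
meminfo_kb = {
    "MemTotal": 12173268,
    "MemFree": 223044,
    "Buffers": 244,
    "Cached": 4540,
    "AnonPages": 2556,
    "Slab": 509708,
    "KernelStack": 1096,
    "PageTables": 748,
}

accounted = sum(v for k, v in meminfo_kb.items() if k != "MemTotal")
unaccounted = meminfo_kb["MemTotal"] - accounted

print(f"accounted:   {accounted} kB")
print(f"unaccounted: {unaccounted} kB (~{unaccounted / 1024 / 1024:.1f} GB)")
```

This yields about 10.9 GB that no meminfo field explains, which matches the "about 10GB out of the statistics" in the report and is why the per-cache slab figures have to be inspected directly.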