
Re: 10GB memory occupied by XFS

To: "stan@xxxxxxxxxxxxxxxxx" <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: 10GB memory occupied by XFS
From: <guochao3@xxxxxxxxxxxxxxxxx>
Date: Thu, 10 Apr 2014 01:41:38 +0000
Accept-language: zh-CN, en-US
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <76016fc7.13c84.14546bad411.Coremail.dx-wl@xxxxxxx>
References: <1396596386220-35015.post@xxxxxxxxxxxxx> <533F14EC.6040705@xxxxxxxxxxxxxxxxx> <76016fc7.13c84.14546bad411.Coremail.dx-wl@xxxxxxx>
Thread-index: AQEpP5CHAA2OAZlN0hpbo9hH3sU+DwGUulZYAVD0NdCcPw61kA==
Thread-topic: Re: 10GB memory occupied by XFS
Dear Stan,
Thank you for your kind assistance.

Following your suggestion, we executed "echo 3 > /proc/sys/vm/drop_caches" to try
to release the VFS dentries and inodes, and indeed our lost memory came back. But
we understand that memory for VFS dentries and inodes is allocated from the slab,
and /proc/meminfo on our system reports only "Slab: 509708 kB" -- about 500MB,
of which xfs_buf alone accounts for roughly 450MB. So /proc/meminfo looks
anomalous: about 10GB is missing from its statistics. We would like to know how
to observe the amount of memory used by VFS dentries and inodes through a system
interface. If this memory usage is not reflected in /proc/meminfo, so that we
cannot find it in any statistics, then we would consider it a bug in XFS.
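For reference, these are the interfaces we have found so far for watching dentry
and inode cache usage (only a sketch of what we tried; the slab cache names such
as xfs_inode depend on the filesystem, and reading /proc/slabinfo needs root on
our kernel):

```shell
# Global dentry counters: nr_dentry and nr_unused are the first two fields.
cat /proc/sys/fs/dentry-state

# Global inode counters: nr_inodes and nr_free_inodes.
cat /proc/sys/fs/inode-state

# Per-cache slab usage (requires root on recent kernels).
grep -E '^(dentry|xfs_inode|xfs_buf)' /proc/slabinfo 2>/dev/null \
    || echo "(need root to read /proc/slabinfo)"
```

None of these seem to account for the missing 10GB, which is why we suspect the
memory is held somewhere outside both the slab and the page cache.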

Our vm.vfs_cache_pressure is the default 100. We expected the system to reclaim
this memory proactively when memory runs low, rather than having the oom-killer
kill our worker processes. Our /proc/meminfo data, captured while the problem
was occurring, is below:
130> cat /proc/meminfo
MemTotal:       12173268 kB
MemFree:          223044 kB
Buffers:             244 kB
Cached:             4540 kB
SwapCached:            0 kB
Active:             1700 kB
Inactive:           5312 kB
Active(anon):       1616 kB
Inactive(anon):     1128 kB
Active(file):         84 kB
Inactive(file):     4184 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:          2556 kB
Mapped:             1088 kB
Shmem:               196 kB
Slab:             509708 kB
SReclaimable:       7596 kB
SUnreclaim:       502112 kB
KernelStack:        1096 kB
PageTables:          748 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6086632 kB
Committed_AS:       9440 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      303488 kB
VmallocChunk:   34359426132 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6152 kB
DirectMap2M:     2070528 kB
DirectMap1G:    10485760 kB
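To quantify the gap, here is a rough sketch of our arithmetic over the snapshot
above (only the major categories; we know the fields overlap somewhat and
VmallocUsed includes non-RAM mappings, so this is indicative only):

```shell
# Back-of-the-envelope check of the /proc/meminfo snapshot above:
# summing the major accounted categories and subtracting from MemTotal
# leaves roughly 10.6 GB that no meminfo field explains.
result=$(awk '
  /^(MemFree|Buffers|Cached|AnonPages|Slab|KernelStack|PageTables|VmallocUsed):/ { acct += $2 }
  /^MemTotal:/ { total = $2 }
  END { printf "accounted: %d kB, unaccounted: %d kB", acct, total - acct }
' <<'EOF'
MemTotal:       12173268 kB
MemFree:          223044 kB
Buffers:             244 kB
Cached:             4540 kB
AnonPages:          2556 kB
Slab:             509708 kB
KernelStack:        1096 kB
PageTables:          748 kB
VmallocUsed:      303488 kB
EOF
)
echo "$result"   # accounted: 1045424 kB, unaccounted: 11127844 kB
```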

I look forward to hearing from you, and thank you very much for your kind
assistance.

Best Regards,

