
Re: 10 GB of memory occupied by XFS

To: Guochao Dai <dx-wl@xxxxxxx>
Subject: Re: 10 GB of memory occupied by XFS
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 11 Apr 2014 00:09:46 -0500
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <76016fc7.13c84.14546bad411.Coremail.dx-wl@xxxxxxx>
References: <1396596386220-35015.post@xxxxxxxxxxxxx> <533F14EC.6040705@xxxxxxxxxxxxxxxxx> <76016fc7.13c84.14546bad411.Coremail.dx-wl@xxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:24.0) Gecko/20100101 Thunderbird/24.4.0
On 4/9/2014 8:43 AM, Guochao Dai wrote:
> Dear Stan,
> Thank you for your kind assistance.
> 
> In accordance with your suggestion, we executed "echo 3 >
> /proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and
> indeed our lost memory came back. But as we understand it, the memory for
> VFS dentries and inodes is allocated from slab. Please look at "Slab:
> 509708 kB" in our /proc/meminfo below: slab appears to hold only about
> 500 MB in total, of which xfs_buf accounts for about 450 MB.

To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
        echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
        echo 3 > /proc/sys/vm/drop_caches
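
For a clean measurement, it helps to flush dirty pages first, since dirty
pages cannot be dropped; a minimal sequence (run as root) would be:

        sync
        echo 3 > /proc/sys/vm/drop_caches
        grep -E '^(Cached|Slab)' /proc/meminfo

The grep at the end simply confirms what was actually freed.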

> Also, /proc/meminfo indicates that our system memory is anomalous: about
> 10 GB is missing from its statistics. We want to know how to observe the
> memory used by VFS dentries and inodes through a system interface. If
> that memory usage is not reflected in /proc/meminfo and we cannot find it
> in any other statistics, we would consider it a bug in XFS.

It seems much of this 10 GB of memory is being consumed by pagecache,
not dentries and inodes.  So the question is:

Why is pagecache not being reclaimed without manual intervention?
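
As for observing dentry and inode usage through a system interface, the
standard places to look are /proc/slabinfo and slabtop (both available on
RHEL 6; reading slabinfo may require root):

        grep -E '^(dentry|xfs_inode)' /proc/slabinfo
        slabtop -o -s c | head -n 15

The first shows object counts and sizes for the dentry and XFS inode
caches; the second prints the largest slab caches once, sorted by size.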

> The vm.vfs_cache_pressure on our Linux system is 100. We expect the
> system to proactively reclaim this memory when memory runs low, rather
> than have the oom-killer kill our worker processes. Our /proc/meminfo
> data, captured while the problem was occurring, is below:

Except that most of your slab is reported as unreclaimable; see below.

> 130> cat /proc/meminfo 
> MemTotal:       12173268 kB 
> MemFree:          223044 kB 
> Buffers:             244 kB 
> Cached:             4540 kB  <------

Only 4.5 MB is reported as used by the pagecache here.  Yet some 10 GB is
apparently being consumed by the page cache and is not reported.

> SwapCached:            0 kB 
> Active:             1700 kB 
> Inactive:           5312 kB 
> Active(anon):       1616 kB 
> Inactive(anon):     1128 kB 
> Active(file):         84 kB 
> Inactive(file):     4184 kB 
> Unevictable:           0 kB 
> Mlocked:               0 kB 
> SwapTotal:             0 kB 
> SwapFree:              0 kB 
> Dirty:                 0 kB 
> Writeback:             0 kB 
> AnonPages:          2556 kB 
> Mapped:             1088 kB 
> Shmem:               196 kB 
> Slab:             509708 kB  <------
> SReclaimable:       7596 kB  <------
> SUnreclaim:       502112 kB  <------

This indicates that your slab is not being reclaimed, but not why.
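
To see which caches account for that unreclaimable slab, you can rank
them by approximate footprint (a rough sketch using num_objs * objsize
from /proc/slabinfo, which ignores per-slab overhead):

        awk 'NR > 2 { printf "%12d %s\n", $3 * $4, $1 }' /proc/slabinfo | sort -rn | head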

> KernelStack:        1096 kB 
> PageTables:          748 kB 
> NFS_Unstable:          0 kB 
> Bounce:                0 kB 
> WritebackTmp:          0 kB 
> CommitLimit:     6086632 kB 
> Committed_AS:       9440 kB 
> VmallocTotal:   34359738367 kB 
> VmallocUsed:      303488 kB 
> VmallocChunk:   34359426132 kB 
> HardwareCorrupted:     0 kB 
> AnonHugePages:         0 kB 
> HugePages_Total:       0 
> HugePages_Free:        0 
> HugePages_Rsvd:        0 
> HugePages_Surp:        0 
> Hugepagesize:       2048 kB 
> DirectMap4k:        6152 kB 
> DirectMap2M:     2070528 kB 
> DirectMap1G:    10485760 kB 
> 
> I look forward to hearing from you and thank you very much for your kind 
> assistance.

Unfortunately I don't have solid answers for you at this point, nor a
solution.  This is beyond my expertise.  Maybe someone else with more
knowledge/experience will jump in.  I suspect your application may be
doing something a bit unusual.

Cheers,

Stan



> Best Regards,
> 
> Guochao
> 
> 
> At 2014-04-05 04:24:12, "Stan Hoeppner" <stan@xxxxxxxxxxxxxxxxx> wrote:
>> On 4/4/2014 2:26 AM, daiguochao wrote:
>>>  Hello folks,
>>
>> Hello,
>>
>> Note that your problems are not XFS specific, but can occur with any
>> Linux filesystem.
>>
>>>  We use the XFS filesystem on kernel-2.6.32-220.13.1.el6.x86_64 to
>>>  store pictures. After about 100 days of uptime, system memory goes
>>>  missing and some nginx processes are killed by the oom-killer. So I
>>>  looked at /proc/meminfo and found that memory was lost. Finally, I
>>>  tried unmounting the XFS filesystem and 10 GB of memory came back. I
>>>  searched the XFS bugzilla and found no such bug. I have no idea what
>>>  is causing it.
>>>  
>>>  Cheers,
>>>  
>>>  Guochao.
>>>  
>>>  Some memory info:
>>>  
>>>  0> free -m
>>>               total       used       free     shared    buffers     cached
>>>  Mem:         11887      11668        219          0          0          2
>>>  -/+ buffers/cache:      11665        222
>>>  Swap:            0          0          0
>>
>>
>> First problem:  no swap
>> Second problem: cache is not being reclaimed
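>>
>> For the no-swap problem, a swap file is a quick fix (the size and path
>> below are only examples):
>>
>>   dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4 GB, fully allocated
>>   chmod 600 /swapfile
>>   mkswap /swapfile
>>   swapon /swapfile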
>>
>> Read vfs_cache_pressure at:
>> https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>>
>> You've likely set this value to zero.  Changing it to 200 should prompt
>> the kernel to reclaim dentries and inodes aggressively, preventing the
>> oom-killer from kicking in.
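>>
>> A minimal way to apply and persist that change (the value 200 is
>> illustrative; tune it for your workload):
>>
>>   sysctl vm.vfs_cache_pressure            # inspect current value (default 100)
>>   sysctl -w vm.vfs_cache_pressure=200     # reclaim dentries/inodes more aggressively
>>   echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf   # persist across reboots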
>>
>> Cheers,
>>
>> Stan
