
Exponential memory usage?

To: xfs@xxxxxxxxxxx
Subject: Exponential memory usage?
From: Tap <tap@xxxxxxxxxxxxxxxx>
Date: Fri, 28 Mar 2014 10:53:54 -0700
Delivered-to: xfs@xxxxxxxxxxx
I have a Linux CentOS-based (6.5) system, running Linux 3.10.29 under Xen
4.2.3-26, with a RAID array as follows:

        Version : 1.2
  Creation Time : Mon Apr 29 22:50:41 2013
     Raid Level : raid6
     Array Size : 14650667520 (13971.97 GiB 15002.28 GB)
  Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
   Raid Devices : 7
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Fri Mar 28 10:31:08 2014
          State : active 
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : s2013:0
           UUID : ec5acf05:2c840b70:166cde66:5e21e5c7
         Events : 386

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       7       8      113        1      active sync   /dev/sdh1
       8       8       81        2      active sync   /dev/sdf1
       3       8       97        3      active sync   /dev/sdg1
       4       8       17        4      active sync   /dev/sdb1
       5       8       49        5      active sync   /dev/sdd1
       6       8      129        6      active sync   /dev/sdi1

       9       8       65        -      spare   /dev/sde1

xfs_info shows:

meta-data=/dev/md127             isize=256    agcount=32, agsize=114458368 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=3662666880, imaxpct=5
         =                       sunit=128    swidth=640 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


The system currently has 32 GB of RAM.

I was hoping to use this as the main data store for various (small) Xen 
machines, one of which was going to be a ZoneMinder system.  ZoneMinder makes 
lots of "small" files (JPGs) from the various HD IP cameras that are connected.

Anyway, at some point the system became unusable.  Something as simple as:

  find  /raid -type f
-- or --
  ls -lR /raid

would run the entire system out of RAM.  Looking at slabtop, this appears to be 
due mostly to xfs_inode memory usage.  Note that since these problems began 
I have stopped running all subordinate domains and am now running only dom0.  In 
fact I've allocated all 32 GB to that domain, and the memory problems still persist.
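
For reference, the numbers slabtop shows come straight from /proc/slabinfo; a 
quick way to watch the xfs_inode cache grow during one of these tree walks 
(needs root; column 2 is active objects, column 4 is object size in bytes):

  watch -n 5 'grep xfs_inode /proc/slabinfo'

  # rough total pinned by xfs_inode objects, in GiB:
  awk '/^xfs_inode / { printf "%.1f GiB\n", $2 * $4 / 2^30 }' /proc/slabinfo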

At 83% filesystem utilization I decided I had to do something to get out of this 
unusable state, so I started removing ( rm -rf targetDir ) all of the 
files that the ZoneMinder system had generated.  Even this, after a fresh 
reboot with nothing else running, will run the system out of RAM (all 32 GB of 
it).  The delete of this area is still in progress, as I have to periodically 
restart the machine to get the RAM back (as I compose this email it is down to 
66% space used).
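
One workaround I'm considering, instead of a bare rm -rf, is deleting in 
bounded batches and forcing the reclaimable slab to be dropped between rounds.  
A rough sketch (the /raid/zm path is hypothetical, and it assumes filenames 
without whitespace):

  cd /raid/zm       # hypothetical path to the zoneminder data
  while [ -n "$(find . -type f -print -quit)" ]; do
      find . -type f | head -n 100000 | xargs -r rm -f
      sync
      echo 2 > /proc/sys/vm/drop_caches   # drop reclaimable dentries/inodes
  done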

I've googled and can't really find anything that describes these kinds of 
problems.  I've tried the few limited tunable values available (XFS and VM) and 
nothing seems to have any positive impact on this runaway memory usage.
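
For concreteness, the kinds of knobs I mean are along these lines:

  # VM side: bias reclaim toward the dentry/inode caches (default 100)
  sysctl -w vm.vfs_cache_pressure=200

  # XFS side: shorten the periodic sync/flush interval (default 3000 = 30 s)
  sysctl -w fs.xfs.xfssyncd_centisecs=500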

My questions are:

1.  Is this expected?

2.  Are there any XFS memory calculators that would have shown me, up front, 
that this was going to be a problem?  (A crude attempt of my own is sketched below.)
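
For what it's worth, here is that crude attempt: assuming roughly 1 KiB of slab 
per cached inode (about the xfs_inode objsize I see, plus its xfs_ili log item) 
and a hypothetical 20 million JPGs:

  # 20,000,000 inodes * ~1 KiB each, in GiB (the file count is hypothetical)
  echo $(( 20000000 * 1024 / 2**30 )) GiB    # -> 19 GiB

At that rate, roughly 33 million cached inodes would exhaust 32 GB if the 
kernel never reclaims them, which seems consistent with what I'm seeing.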


Given that it burns through 32 GB of memory, I can't be sure that upgrading to 
64 or 128 GB will *ever* help this situation.

Note: FWIW, unmounting the filesystem also recovers the memory.


Thanks.
