
Re: all your slabs are belong to ram ?

To: "krautus@xxxxxxxxx" <krautus@xxxxxxxxx>
Subject: Re: all your slabs are belong to ram ?
From: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
Date: Tue, 6 Oct 2015 07:50:38 -0700
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20151006162443.42f48ee7@linux>
References: <20151006162443.42f48ee7@linux>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Oct 06, 2015 at 04:24:43PM +0200, krautus@xxxxxxxxx wrote:
> Dear List, nice to meet you :)
> 
> First post, straight to the point:
> I've been wrestling with a problem for a few weeks: roundcube (a webmail 
> client) takes too long to open a dovecot (POP/IMAP server) mailbox with many 
> emails (files).
> When a user has more than 10K emails, it takes around 1 minute to open the 
> mailbox.
> Meanwhile, I/O %util goes to 100% and bottlenecks the whole system.
> 
> I've tried memcache(d) integration with roundcube but it doesn't eliminate 
> the problem.
> 
> OS is Debian Wheezy 32-bit with 16GB of ECC RAM; storage is a simple hardware 
> RAID-1 of a couple of SATA2 hard disks.
> I just ran mkfs.xfs with no tuning, and mount with no options (in fstab).
> 
> It looks like the problem is slow access to dentries and inodes, so I've set 
> vfs_cache_pressure to 1,
> warmed the cache with a few "find /var/mail > /dev/null" runs,
> and have been running like this for around 4 days.
> It didn't help: the slabs still get flushed and opening folders is as slow as 
> before.
> 
> Current slabtop usage shows:
> 235352K used by xfs_inode
> and
> 49708K used by dentry
> while I would expect at least 1 GB of xfs_inode and at least 200 MB of 
> dentry.
> 
> So I'm asking you:
> 1. is there a way to force dentries and inodes to stay in ram ?
> 2. can I perhaps move dentries and inodes to a dedicated SSD ?
> 
> I'm open to all possibilities: perhaps increasing RAM?
> Upgrading to Debian Jessie and 64-bit?

ISTR that kernel data such as slabs cannot live in highmem, which means that
dentries and inodes cannot live there either.  A 32-bit kernel sets up ~900M of
low memory and ~15G of highmem, which is probably why the kernel has to evict
things and why you see such problems.

A 64bit kernel sets up all the memory as lowmem, so the kernel can use all the
memory for stuff like that.  I'd give that a try first.
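For reference, a quick way to see the lowmem/highmem split and the current slab
footprint on a running system -- a minimal sketch, assuming a Linux
/proc/meminfo (LowTotal/HighTotal appear only on 32-bit kernels built with
highmem support):

```shell
# Show total RAM and, on a 32-bit highmem kernel, the lowmem/highmem
# split (LowTotal is roughly 900M, HighTotal holds the rest).
# On a 64-bit kernel these lines are absent because all RAM is lowmem.
grep -E 'MemTotal|LowTotal|HighTotal' /proc/meminfo

# Slab caches (including xfs_inode and dentry) must fit in lowmem;
# these counters show their total size and reclaimable/unreclaimable split.
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
```

If Slab is already bumping against LowTotal, that's consistent with the
eviction behaviour described above.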

--D

> 
> Let me know if I can provide more info.
> 
> 
> Thank you very much!
> Mike
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
