
Re: Premature "No Space left on device" on XFS

To: Dave Chinner <david@xxxxxxxxxxxxx>, Bernhard Schmidt <berni@xxxxxxxxxxxxx>
Subject: Re: Premature "No Space left on device" on XFS
From: Bryan J Smith <b.j.smith@xxxxxxxx>
Date: Fri, 7 Oct 2011 06:58:53 -0700 (PDT)
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
In-reply-to: <20111007013711.GW3159@dastard>
References: <4E8E079B.4040103@xxxxxxxxxxxxx> <20111007013711.GW3159@dastard>
Reply-to: Bryan J Smith <b.j.smith@xxxxxxxx>
[ Not really adding any technical meat, just some past experience with XFS 
and Ext3 ]

I remember running into this a long time ago when I was first playing with XFS 
for /tmp and /var (I was still a Linux/XFS noob at the time, not that I'm an 
expert today).  I hit the same case: both free blocks and inodes were still 
available (though similarly well utilized), and the median file size was around 
1KiB.  It also involved many small files being written out in a short period.

In my case, I didn't use the XFS debugger to dig into the extent allocation (I 
would have if I hadn't been such a noob; it's a good, discrete command to know, 
thanx!).

Extents are outstanding for data and similar directories, ordering and placing 
large and small files to mitigate fragmentation.  But in this case, and correct 
me if I'm wrong, the extents approach is really just wasteful, as the files 
typically fit in one or two data blocks.  I can still see some benefit in how 
inodes are allocated, but it seems small compared to the overhead.  Then add in 
deletion: the files are not going to be retained in the user's use case here 
(this is a spool, correct?), and I'm not seeing XFS make sense.  The fact that 
the services "fell behind" does not surprise me, although that's just a 
subjective feel (if anyone knows how to back that up with good tools and 
metrics, I'm all ears).

I never got around to benchmarking it against Ext3 in such a use case, but I 
quickly adopted a complementary Ext3+XFS volume approach.

I've used Ext3 with around 8 million files with a median size well under 4KiB 
(under 32GiB total).  It works "well enough."  I'm curious how Ext4 would do, 
though.  I think Ric Wheeler's team (at Red Hat) has done some benchmarks on 7+ 
figure file counts on Ext3 and Ext4.  I remember a few tidbits going back and 
forth on performance expectations when I was doing some Ext3 (with GFS and 
GFS2) testing.

Although I can't say I've had 2 million files in a single directory, so YMMV.  
Then again, if it's extent overhead, the count may never reach 2M to begin with.
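As a starting point for the kind of metrics mentioned above, here's a rough 
shell sketch to get the file count and median file size in a directory (assumes 
GNU find and awk; DIR is just a placeholder for the spool directory in 
question):

```shell
# Rough sketch: count regular files in one directory and report the
# median file size in bytes.  DIR is a placeholder; point it at the spool.
DIR=${DIR:-.}
find "$DIR" -maxdepth 1 -type f -printf '%s\n' | sort -n |
  awk '{ sizes[NR] = $1 }
       END { if (NR) printf "files=%d median=%d\n", NR, sizes[int((NR + 1) / 2)] }'
```

For an even file count this takes the lower of the two middle sizes, which is 
close enough for a back-of-envelope check.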




----- Original Message -----
From: Dave Chinner <david@xxxxxxxxxxxxx>
Sent: Thursday, October 6, 2011 9:37 PM

On Thu, Oct 06, 2011 at 09:55:07PM +0200, Bernhard Schmidt wrote:
> ...
> It was  bombarded with mails faster than it could send them on,
> which eventually led to almost 2 million files of ~1.5kB in one
> directory.  Suddenly, this started to happen
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> touch: cannot touch `a': No space left on device
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   7471160   3004360  72% /var/spool/postfix-bulk

So you have a 10GB filesystem, with about 3GB of free space.

> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk

And with 1.7 million inodes in it. That's a lot for a tiny
filesystem, and not really a use case that XFS is well suited to.
XFS will work, but it won't age gracefully under these conditions...

As it is, your problem is most likely fragmented free space (an
aging problem). Inodes are allocated in chunks of 64, so require an
-aligned- contiguous 16k extent for the default 256 byte inode size.
If you have no aligned contiguous 16k extents free then inode
allocation will fail.
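A quick sketch of that arithmetic (assuming the default 256-byte inode size 
mentioned above):

```shell
# XFS allocates inodes in chunks of 64; with the default 256-byte inode
# size, one chunk needs a single aligned contiguous extent of:
inode_size=256
inodes_per_chunk=64
echo $((inode_size * inodes_per_chunk))          # bytes per inode chunk
echo $((inode_size * inodes_per_chunk / 1024))k  # i.e. one aligned 16k extent
```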

Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a
histogram of free space extents in the filesystem, which will tell
us if you are hitting this problem.
