
Re: ENOSPC at 90% with plenty of inodes

To: James Braid <jamesb@xxxxxxxxxxxx>
Subject: Re: ENOSPC at 90% with plenty of inodes
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 12 Oct 2010 09:35:07 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <AANLkTikLyEuSoFRVwirKLws97Vemz4ScNHOqu2Jc5sWR@xxxxxxxxxxxxxx>
References: <AANLkTinwnOm0V=9WwbXiUGFnyjCwT3GUHo0hiQahQNUV@xxxxxxxxxxxxxx> <20101008225146.GJ4681@dastard> <AANLkTikLyEuSoFRVwirKLws97Vemz4ScNHOqu2Jc5sWR@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Mon, Oct 11, 2010 at 03:03:28PM +0100, James Braid wrote:
> On Fri, Oct 8, 2010 at 23:51, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > Sounds like fragmented free space. What is the output of:
> >
> > # xfs_db -r -c "freesp -s" <device>
> 
> # xfs_db -r -c "freesp -s" /dev/sdb
>    from      to extents  blocks    pct
>       1       1 2298052 2298052  40.52
>       2       3 1568338 3337017  58.84
>       4       7    8432   35716   0.63
>       8      15      50     423   0.01
> total free extents 3874872
> total free blocks 5671208
> average free extent size 1.46359
> 
> Which seems to say there are a few tiny pieces of free space
> available? The files that were failing to be written were a few
> hundred bytes in size.

The error has nothing to do with the size of the files, but
everything to do with being able to allocate more inodes. Inode
allocation requires 4 contiguous blocks (for 256-byte inodes; more
for larger inodes) with alignment constraints. That means once you
run out of free extents of 8 blocks or larger, inode allocation will
start failing and you'll get ENOSPC reported.
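
To see where that 4-block requirement comes from, here is a rough
sketch (/data is a placeholder mount point; 256/4096 are the common
isize/bsize defaults reported by xfs_info):

# xfs_info /data | grep -E 'isize|bsize'   # /data = your mount point
#
# XFS allocates inodes in chunks of 64:
#   64 inodes x 256 bytes = 16384 bytes = 4 x 4096-byte blocks
# With the alignment constraint, the small free extents that dominate
# the histogram above (1-7 blocks) generally cannot hold an aligned
# 4-block chunk, leaving only the 50 extents in the 8-15 bucket as
# candidates.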

> We haven't seen any errors so far today, but xfs_fsr ran over the
> weekend, so perhaps it has reorganized the filesystem.

Only a little. xfs_fsr will not improve fragmented free space
conditions (indeed, it normally fragments free space more). The only
way to reduce the fragmentation of free space is to remove a
significant amount of data and inodes from the filesystem...
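
Once enough has been deleted, re-running the same check should show
free extents coming back in the 8-block-and-larger buckets:

# xfs_db -r -c "freesp -s" /dev/sdb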

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
