ENOSPC and filesystem shutdowns

Bernard Chan bernard at goanimate.com
Thu Sep 8 06:09:55 CDT 2011


Hi Christoph and everybody else,

Thanks so much for your response.

We are running this on a customized CentOS image on AWS, hence the different
kernel version. The image vendor told us the images are based on CentOS, and
they do appear to be, although we do not know the exact CentOS versions they
were built from.

We switched to another image with a newer kernel (2.6.35.11) and re-mounted
the LVM volumes carrying XFS. Beforehand we ran xfs_repair, which found no
problems, and fragmentation was low. We then retried the mkdir operations that
had previously returned ENOSPC and shut down XFS. With the new kernel there
are no more XFS shutdowns and the same operations succeed. We are curious why,
since we have not enabled inode64 in the new setup and remain on a 32-bit
architecture; essentially all we changed was the kernel, via a slightly
different 32-bit image.
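For reference, the checks described above can be sketched roughly as follows
(the device and mount point names are placeholders, not the actual ones from
our setup):

```shell
# Placeholder names; xfs_repair must run against an unmounted device.
umount /mnt/xfsvol
xfs_repair -n /dev/vg0/xfslv        # -n: check only, report without modifying
xfs_db -r -c frag /dev/vg0/xfslv    # read-only fragmentation report
mount /dev/vg0/xfslv /mnt/xfsvol
mkdir /mnt/xfsvol/newdir            # retry the operation that hit ENOSPC
```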

So should we still bother with inode64 and 64-bit servers for NFSv4, and
should we anticipate any other issues running this setup with a 4TB volume
without enabling inode64?
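One way we might check whether inode64 already matters for us is to see if any
inode numbers on the volume exceed 32 bits — a rough sketch (the mount point
/srv/xfs is a placeholder, not our actual path):

```shell
# Sketch only: /srv/xfs is a placeholder mount point.

# Succeeds (exit 0) when an inode number still fits in 32 bits.
fits_in_32_bits() {
    [ "$1" -lt 4294967296 ]    # 4294967296 = 2^32
}

# Largest inode number currently in use under the mount point.
max_ino=$(find /srv/xfs -xdev -printf '%i\n' | sort -n | tail -1)

if fits_in_32_bits "$max_ino"; then
    echo "all inode numbers still fit in 32 bits"
else
    echo "inode numbers exceed 32 bits; check client compatibility"
fi
```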

Thanks so much for any possible insights.


On Mon, Sep 5, 2011 at 3:47 PM, Christoph Hellwig <hch at infradead.org> wrote:

> On Sun, Sep 04, 2011 at 02:09:49PM +0800, Bernard Chan wrote:
> > We have an XFS filesystem (on LVM, probably doesn't matter anyway) that is
> > 4TB running on CentOS kernel 2.6.21.7,
>
> Isn't Centos based on RHEL and thus running either 2.6.9, 2.6.18 or
> 2.6.32-ish kernels?
>
> > We searched and found this list, and a few patches around kernel
> > 2.6.26-2.6.27 that seem to match our scenario. We were able to log the
> > specific mkdir command that failed and confirmed it consistently fails
> > with "no space left on device", while we could not reproduce the same
> > issue with mkdir in other directories with large inode numbers. We
> > haven't tried patching or upgrading the kernel yet, but we will do that
> > later.
> >
> > As the root cause of that patch points to a bug triggered by ENOSPC, we
> > checked the inode numbers created for some directories and files with
> > "ls -li", and some of them are pretty close to 2^32.
> >
> > So, we would like to ascertain whether that is the cause of ENOSPC in our
> > case, and does that mean 32-bit inodes are no longer adequate for us and
> > we should switch to 64-bit inodes? Will switching avoid this kind of
> > shutdown on future writes?
> >
> > And is it true that we don't need a 64-bit OS for 64-bit inodes? How can
> > we tell if our system supports 64-bit inodes?
>
> It doesn't.  On Linux XFS only supports inode64 on 32-bit systems since
> Linux 3.0.
>
> > Finally, although we all know that "df -i" is sort of nonsense on XFS,
> > how can we get an output of 5% inode usage while having inode numbers
> > that are close to 2^32? So what does that 5% exactly mean, or was I
> > looking at inodes the wrong way?
>
> It's based on the available space given that XFS can theoretically use
> any inode block for data.
>
> > Thanks in advance for any insights anyone may shed on this one.
>
> I'd move off a 4.5-year-old unsupported kernel.  The real RHEL/Centos
> kernels have fairly good XFS support these days if you want a backporting
> option.  Even RHEL5 might have inode64 on 32-bit systems, as it has a lot
> of XFS updates backported, but if in doubt I would recommend moving to
> a RHEL6/Centos6 kernel at least.

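On the "df -i" point: as Christoph explains, XFS allocates inodes dynamically,
so the totals "df -i" reports are derived from available space rather than a
fixed inode table — which is how a low IUse% can coexist with inode numbers
near 2^32. A toy illustration of the percentage itself (the numbers are made
up, and df's own rounding may differ):

```shell
# Toy numbers, not from our system: used inodes over the space-derived total.
iuse_percent() {
    echo $(( $1 * 100 / $2 ))    # integer floor; df may round differently
}

iuse_percent 13107200 262144000    # prints 5
```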

-- 

Regards,
Bernard Chan.
GoAnimate.

