To: Bernhard Schmidt <berni@xxxxxxxxxxxxx>, Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Premature "No Space left on device" on XFS
From: Gim Leong Chin <chingimleong@xxxxxxxxxxxx>
Date: Fri, 7 Oct 2011 16:40:13 +0800 (SGT)
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20111007013711.GW3159@dastard>
Hi Dave,

> As it is, your problem is most likely fragmented free space (an
> aging problem). Inodes are allocated in chunks of 64, so require an
> -aligned- contiguous 16k extent for the default 256 byte inode size.
> If you have no aligned contiguous 16k extents free then inode
> allocation will fail.
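
For my own understanding, the arithmetic seems to be: a chunk of 64
inodes at the default 256 bytes each needs 64 * 256 = 16384 bytes, i.e.
one aligned, contiguous 16 KiB free extent (32 KiB for 512 byte inodes,
and so on). If I understand the tools correctly, the free space layout
can be inspected read-only with xfs_db, for example

    xfs_db -r -c "freesp -s" /dev/sdXN

(/dev/sdXN being a placeholder for the device holding the filesystem),
which prints a histogram of free extents by size, so a shortage of
extents of 16 KiB and larger should show up there.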

I understand from the mkfs.xfs man page that "The XFS inode contains a
fixed-size part and a variable-size part."

1) Do you mean 64 inodes are allocated in one go?
2) What is the size of the fixed-size part?
3) Are the fixed-size parts of inodes also allocated 64 at a time?
4) Where are the fixed-size parts located? On special extents, just like
the variable-size part?
5) What about the locality of the variable-size and fixed-size parts of
an inode? Can they be any distance apart?


