
Re: XFS filesystem reports as full though it isn't

To: "Emmanuel Florac" <eflorac@xxxxxxxxxxxxxx>
Subject: Re: XFS filesystem reports as full though it isn't
From: "Christian Røsnes" <christian.rosnes@xxxxxxxxx>
Date: Mon, 19 May 2008 13:48:01 +0200
Cc: xfs@xxxxxxxxxxx
In-reply-to: <1a4a774c0805190431j31a182bdu61030bbfcc80f41@mail.gmail.com>
References: <20080516222755.3e557c00@galadriel.home> <1a4a774c0805190431j31a182bdu61030bbfcc80f41@mail.gmail.com>
Sender: xfs-bounce@xxxxxxxxxxx
On Mon, May 19, 2008 at 1:31 PM, Christian Røsnes
<christian.rosnes@xxxxxxxxx> wrote:
> On Fri, May 16, 2008 at 10:27 PM, Emmanuel Florac
> <eflorac@xxxxxxxxxxxxxx> wrote:
>>
>> I have a 64 bits (x86_64) machine running Linux 2.6.22.19 with a 24TB
>> XFS filesystem. There are some 15TB of data on it. All is well, no
>> error except that I can't create a single file (touch foo : no space
>> left on device). I don't understand what can be going wrong...
>>
>> History : this filesystem was extended (xfs_growfs) from 16TB to 24.
>>
>>
>> Here is the output from xfs_info /dev/vg0/lv0
>>
>> meta-data=/dev/vg0/lv0           isize=256    agcount=47, agsize=137245616 blks
>>          =                       sectsz=512   attr=0
>> data     =                       bsize=4096   blocks=6344964096, imaxpct=25
>>          =                       sunit=16     swidth=32 blks, unwritten=1
>> naming   =version 2              bsize=4096
>> log      =internal               bsize=4096   blocks=32768, version=1
>>          =                       sectsz=512   sunit=0 blks
>> realtime =none                   extsz=131072 blocks=0, rtextents=0
>>
>> I fail to see anything special there, however.
>>
>> The only significant thing I see is that the FS is really close to
>> 16TB of allocated data (15.7TB). I tried mounting it with the
>> "inode64" option, but with no luck.
>>
>
> On my system I get "no space left on device" when I reach 99% full,
> with about 20GB of free space left on 2TB partitions.
> I also use sunit and swidth for the data section of the XFS
> filesystem; could it be that XFS cannot allocate space according to
> these parameters? You seem to be right around the original 16TB
> limit, so maybe it is still trying to allocate from the original
> disk layout? (I'm no XFS expert, so please take my "theories" with
> a grain of salt.)
>
> I recently had two "identical" partitions: A and B, where A was the
> master and B was the rsync copy.

Both these 2TB partitions (A and B) were 99% full.
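
As a rough sanity check on those numbers (a sketch only, taking 2TB as
2048GiB), about 20GB free on a 2TB partition does indeed work out to
roughly 99% used:

```shell
# Rough check: 20 GiB free on a 2 TiB (2048 GiB) partition.
total_gib=2048
free_gib=20
used_pct=$(( (total_gib - free_gib) * 100 / total_gib ))
echo "${used_pct}% used"   # integer arithmetic, rounds down -> 99% used
```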

> These had been
> written to for several years, and during that period each has at
> some point served as the master, so I suppose the on-disk usage
> layout differed between them. All of a sudden partition B reported
> "no space left on device", even though partition A held the same
> data without any error. To get B to accept the missing data from
> partition A, I temporarily moved some data off partition B, ran
> xfs_fsr on partition B, then moved the data back.
> That brought the fragmentation down on partition B, and I could
> then copy the missing data to partition B.
>
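
The workaround above can be sketched as a few shell commands (the
paths here are hypothetical placeholders; xfs_fsr ships with
xfsprogs and needs root):

```shell
# Hypothetical mount points -- adjust to your own layout.
FULL_FS=/mnt/partB          # the filesystem reporting ENOSPC
SCRATCH=/mnt/scratch        # temporary space on another filesystem

# 1. Move some data off the full filesystem to free up space.
mv "$FULL_FS/some-directory" "$SCRATCH/"

# 2. Defragment the now-less-full filesystem.
xfs_fsr -v "$FULL_FS"

# 3. Move the data back.
mv "$SCRATCH/some-directory" "$FULL_FS/"
```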

Christian

