
Re: df bigger than ls?

To: Brian Candler <B.Candler@xxxxxxxxx>
Subject: Re: df bigger than ls?
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 8 Mar 2012 20:28:50 +1100
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20120308091033.GC23992@xxxxxxxx>
References: <20120307155439.GA23360@xxxxxxxx> <20120307171619.GA23557@xxxxxxxx> <4F57A32A.5010704@xxxxxxxxxxx> <20120308021054.GM3592@dastard> <4F5816D6.80801@xxxxxxxxxxx> <20120308091033.GC23992@xxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Mar 08, 2012 at 09:10:33AM +0000, Brian Candler wrote:
> On Wed, Mar 07, 2012 at 08:17:58PM -0600, Eric Sandeen wrote:
> > It seems worth thinking about.  I guess I'm still a little concerned
> > about the ENOSPC case; it could lead to some confusion - I could imagine
> > several hundreds of gigs under preallocation, with a reasonable-sized
> > filesystem returning ENOSPC quite early.
> And presumably df on the filesystem would also show it approaching 100%
> utilisation?


> I'm used to this where a large file has been unlinked but is still open. 
> The preallocation case is a new one to me though.
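The unlinked-but-open case Brian mentions can be demonstrated directly. A minimal sketch (assuming Python on a POSIX system; the file name and size are illustrative, not from this thread):

```python
import os
import tempfile

# Classic df-vs-ls gap: an unlinked file keeps its blocks allocated for
# as long as any process holds it open, so df still counts the space
# while ls no longer shows the name.
dirpath = tempfile.mkdtemp()
path = os.path.join(dirpath, "big")
with open(path, "wb") as f:
    f.write(b"x" * (1 << 20))     # 1 MiB of data
    f.flush()
    os.unlink(path)               # the name is gone: ls won't list it
    st = os.fstat(f.fileno())     # but the inode and its blocks remain
    print(st.st_size)             # 1048576 until the last fd closes
os.rmdir(dirpath)
```

Once the last file descriptor closes, the blocks are actually freed and df drops back down.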
> How about if the total of all preallocations were limited to some small
> percentage of the total filesystem size?  If you reach this limit and want
> to preallocate some space for another file you'd have to either drop or
> shrink an older preallocation.

There is no separate accounting for preallocation - it is accounted
as used space, so this can't currently be done even if there were a
method for tracking and trimming speculatively preallocated space.

Realistically, if you aren't running out of space there is no reason
to limit speculative preallocation. Indeed, if we didn't add
delalloc blocks to the block count in stat(2) output so they showed
up in df, almost no-one would even know that it happens at all.

Dave Chinner
