
Re: Strange df output

To: Steve Lord <stephen.lord@xxxxxxxx>
Subject: Re: Strange df output
From: "Joshua M. Schmidlkofer" <menion@xxxxxxxxxxxxxx>
Date: Thu, 08 Jan 2004 23:34:15 -0800
Cc: Eric Sandeen <sandeen@xxxxxxx>, Jarrod Johnson <jbj-ksylph@xxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <3FFDD450.6070804@xxxxxxxx>
References: <20040108154940.7dd28591.jbj-ksylph@xxxxxxxx> <1073595887.27384.260.camel@xxxxxxxxxxxxxxxxxxxxxx> <1073598397.18902.7.camel@xxxxxxxxxxxxxxxxxxx> <3FFDD450.6070804@xxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Thu, 2004-01-08 at 14:06, Steve Lord wrote:
> Joshua Schmidlkofer wrote:
> 
> >[Nvidia + 2.6.1-rc1-mm2 + XFS]
> >
> >I saw a similar case recently (but with no provable metrics).  I
> >deleted about 10 gig of files from my NWN Saves directory.  When I
> >df'd afterwards, I had gone from 5GB free to 33GB free.
> >
> >I did not think to report it; I just thought that I had made a
> >mistake.  But after this report, I thought I should mention it.
> >I had tons of hard links, across something like 5 directories from
> >various patch versions, and a lot of links were released.  I don't
> >have an explanation.
> >
> >js
> 
> This could all be related to delayed allocation.  During write system
> calls, xfs does not actually allocate real disk blocks; it reserves
> all the potential blocks needed from the superblock counters, and this
> is reflected in the df output.  The number of potential blocks needed
> is a worst-case estimate: all the space needed for the data, plus the
> worst-case estimate of the metadata needed to point at it, which is
> when xfs ends up using a separate extent for each block in the file.
> When the data is actually flushed out to disk, all the prereserved
> space which was not actually used is put back into the superblock
> counters.
> 
> When the filesystem is nearly full, a space allocation from a write
> can fail; xfs then attempts to reclaim space by flushing out delayed
> allocate data.  So writing a 300M file probably did consume 300M, but
> the space was reclaimed by flushing other delayed allocate data.
> 
> No guarantees that this is what is happening, but it should go some
> way toward explaining fluctuations in the free space on a nearly full
> filesystem.
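
For anyone reading along, here is a rough sketch in plain C of the
bookkeeping described above.  The structure and function names
(fs_counters, reserve_delalloc, flush_delalloc) are made up for
illustration and are not the actual XFS code:

    /* Illustration only: write() takes a worst-case reservation out of
     * the free counter that df reads, and flush time gives back whatever
     * the real allocation did not need. */

    #include <stdio.h>

    struct fs_counters {
        long long free_blocks;          /* the counter df reports */
    };

    /* Worst case: every data block could land in its own extent, so
     * reserve metadata space on that assumption. */
    static long long worst_case_reservation(long long data_blocks)
    {
        long long metadata_blocks = data_blocks;   /* one extent per block */
        return data_blocks + metadata_blocks;
    }

    /* Buffered write: no real blocks allocated yet, but the pessimistic
     * reservation comes straight out of the free counter. */
    static long long reserve_delalloc(struct fs_counters *fs,
                                      long long data_blocks)
    {
        long long reserved = worst_case_reservation(data_blocks);
        fs->free_blocks -= reserved;    /* df shrinks by the worst case */
        return reserved;
    }

    /* Writeback: real allocation happens; the unused part of the
     * reservation goes back into the counter, so df grows again. */
    static void flush_delalloc(struct fs_counters *fs, long long reserved,
                               long long actually_used)
    {
        fs->free_blocks += reserved - actually_used;
    }

    int main(void)
    {
        struct fs_counters fs = { .free_blocks = 1000000 };

        /* Write 1000 blocks of data: df immediately shows 2000 fewer
         * free blocks even though nothing has hit the disk yet. */
        long long reserved = reserve_delalloc(&fs, 1000);
        printf("after write: free = %lld\n", fs.free_blocks);

        /* The flush lays the file out in one extent, so almost all of
         * the metadata reservation was never needed. */
        flush_delalloc(&fs, reserved, 1001);
        printf("after flush: free = %lld\n", fs.free_blocks);
        return 0;
    }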

Just FYI - these files accumulated over the course of about 10 months;
then, two days ago, I pruned them for the first time.

I will watch for this in the future and see what I get.
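
One way to watch for it next time - a minimal sketch that just compares
the statvfs() free-block count before and after a sync(); the path
handling and the interpretation are assumptions, not a tested procedure:

    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";  /* filesystem to watch */
        struct statvfs before, after;

        if (statvfs(path, &before) != 0) {
            perror("statvfs");
            return 1;
        }

        sync();         /* push out delayed allocate data */

        if (statvfs(path, &after) != 0) {
            perror("statvfs");
            return 1;
        }

        /* If a large worst-case reservation was outstanding, the free
         * count can jump up once the data is actually on disk. */
        printf("free blocks before sync: %llu\n",
               (unsigned long long)before.f_bfree);
        printf("free blocks after  sync: %llu\n",
               (unsigned long long)after.f_bfree);
        return 0;
    }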

js




