Joshua Schmidlkofer wrote:
[Nvidia + 2.6.1-rc1-mm2 + XFS]
I saw a similar case recently, though with no provable metrics. I deleted
about 10 GB of files from my NWN Saves directory. When I df'd
afterwards, I had gone from 5 GB free to 33 GB free.
I did not think to report it at the time; I just assumed I had made a
mistake. But after this report, I thought I should mention it.
I had tons of hard links, spread across about five directories from
various patch versions, and a lot of those links were released. I don't
have an explanation.
js
This could all be related to delayed allocation. During write system
calls xfs does not actually allocate real disk blocks; it reserves all
the potential blocks needed from the superblock counters, and that
reservation is reflected in the df output. The potential blocks needed
is a worst-case estimate: all the space needed for the data, plus a
worst-case estimate of the metadata needed to point at it, which is
what you get when xfs ends up using a separate extent for each block in
the file. When the data is actually flushed out to disk, all the
prereserved space which was not actually used is put back into the
superblock counters.
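
To make that reservation visible from user space, here is a minimal
sketch (not from the original report; it assumes Linux's statfs(2), and
the file name delalloc-test.dat is made up) that snapshots the
free-block counters around a buffered write and an fsync:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/statfs.h>

/* Free space in KiB, as df would report it from the superblock counters. */
static long long free_kb(const char *path)
{
    struct statfs s;
    if (statfs(path, &s) != 0) { perror("statfs"); exit(1); }
    return (long long)s.f_bfree * s.f_bsize / 1024;
}

int main(void)
{
    static char buf[1 << 20];               /* 1 MiB of payload */
    memset(buf, 'x', sizeof buf);

    printf("free before write:         %lld KiB\n", free_kb("."));

    int fd = open("delalloc-test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    for (int i = 0; i < 300; i++)           /* 300 MiB, buffered only */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write"); return 1;
        }

    /* Data is still delayed-allocate here; the counters hold the
       worst-case reservation (data plus per-block extent metadata). */
    printf("free after buffered write: %lld KiB\n", free_kb("."));

    fsync(fd);                              /* force real allocation */
    close(fd);

    /* Unused reservation has been returned to the superblock counters. */
    printf("free after fsync:          %lld KiB\n", free_kb("."));

    unlink("delalloc-test.dat");
    return 0;
}

If the explanation above is right, the middle figure should show
measurably less free space than the last one; the exact gap depends on
the kernel and xfs version, and background writeback may flush the data
before you get a chance to look.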
When the filesystem is nearly full, a space allocation from write can
fail; xfs then attempts to reclaim space by flushing out delayed
allocate data. So writing a 300M file probably did consume 300M, but
the space was reclaimed by flushing other delayed allocate data.
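
In the same hedged spirit, a sketch of the reclaim side: dirty a batch
of files without syncing them, then call sync(2) and watch the counters
recover. The file names and sizes here are arbitrary, and this only
imitates from user space what xfs does internally when an allocation
fails:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/statfs.h>

static long long free_kb(const char *path)
{
    struct statfs s;
    if (statfs(path, &s) != 0) { perror("statfs"); exit(1); }
    return (long long)s.f_bfree * s.f_bsize / 1024;
}

int main(void)
{
    static char buf[64 << 10];              /* 64 KiB per file */
    memset(buf, 'y', sizeof buf);

    /* Each dirty file carries its own worst-case reservation on top of
       the data blocks until it is flushed. */
    for (int i = 0; i < 100; i++) {
        char name[32];
        snprintf(name, sizeof name, "dirty-%03d.dat", i);
        int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write"); return 1;
        }
        close(fd);                          /* close does not flush delalloc */
    }
    printf("free with dirty data: %lld KiB\n", free_kb("."));

    sync();                                 /* flush; reservations return */
    printf("free after sync:      %lld KiB\n", free_kb("."));
    return 0;
}

On a nearly full filesystem xfs triggers this kind of flush itself when
a reservation cannot be satisfied, which is why the 300M write could
succeed while df moved in the other direction.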
No guarantees that this is what is happening, but it should go some way
to explaining fluctuations in the free space on a nearly full
filesystem.
Steve
--
Steve Lord