On Wednesday, 5 January 2011, Dave Chinner wrote:
> No state or additional on-disk
> structures are needed for xfs_fsr to do its work....
That's not exactly the same - once you've defragmented a file, you know
it's done and can skip it next time. But you don't know whether the
(free) space between blocks 0 and 20 on disk has been rewritten since
the last trim run, or never used at all, so you'd have to trim it all
again.
> The background trim is intended to enable even the slowest of
> devices to be trimmed over time, while introducing as little runtime
> overhead and complexity as possible. Hence adding complexity and
> runtime overhead to optimise background trimming tends to defeat the
> primary design goal....
It would be interesting to have real-world numbers to see what's "best".
I'd imagine a typical file or web server stores tons of files that are
mostly read-only, while 5% of them are used heavily, plus lots of temp
files. For that workload, knowing what has actually been written would
be a big win.
Also, I'm thinking of a NetApp storage system that has been set up to
run deduplication on Sundays. In that case it's best to run trim on
Saturday, and it should finish before Sunday. For big storage arrays
that might not be easy to achieve if all free disk space has to be
trimmed explicitly on every pass.
And wouldn't it still be cheaper to keep a "written bitmap" than to
scan the full free space of a (big) disk every time? I'd say it
depends on the workload.
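To make the idea concrete, here is a minimal sketch of such a "written bitmap": one dirty bit per block, set on write, so a background trimmer only discards free blocks that were actually rewritten since the last pass. This is purely illustrative - the class and method names are invented for this example and are not part of XFS or any real discard implementation.

```python
# Hypothetical "written bitmap" sketch: track blocks written since the
# last trim pass, so background trim skips space that is still clean.
class WrittenBitmap:
    def __init__(self, nblocks):
        # One dirty bit per block.
        self.dirty = bytearray((nblocks + 7) // 8)

    def mark_written(self, block):
        # Called from the write path when a block is (re)written.
        self.dirty[block // 8] |= 1 << (block % 8)

    def is_dirty(self, block):
        return bool(self.dirty[block // 8] & (1 << (block % 8)))

    def take_trim_candidates(self, free_blocks):
        """Return only the free blocks written since the last trim,
        clearing their dirty bits (they are clean again after discard)."""
        todo = [b for b in free_blocks if self.is_dirty(b)]
        for b in todo:
            self.dirty[b // 8] &= (~(1 << (b % 8))) & 0xFF
        return todo

bm = WrittenBitmap(64)
bm.mark_written(3)
bm.mark_written(10)
# Blocks 0-20 are free, but only 3 and 10 were rewritten since last trim:
print(bm.take_trim_candidates(range(21)))  # → [3, 10]
```

The trade-off is exactly the one above: the bitmap costs memory and a bit of write-path overhead, but a trim pass then touches only the dirty blocks instead of walking the entire free space.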
Kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531
// ****** Radio interview on the topic of spam ******
// House for sale: http://zmi.at/langegg/