On Thu 27-05-10 14:33:41, Andrew Morton wrote:
> On Tue, 25 May 2010 20:54:12 +1000
> Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > sync can currently take a really long time if a concurrent writer is
> > extending a file. The problem is that the dirty pages on the address
> > space grow in the same direction as write_cache_pages scans, so if
> > the writer keeps ahead of writeback, the writeback will not
> > terminate until the writer stops adding dirty pages.
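The livelock can be modeled in a few lines. The following is a hypothetical sketch, not kernel code: page indices are plain integers, a Python set stands in for the radix tree, and "writing" a page just removes it from the dirty set while a concurrent writer keeps appending new dirty pages past the scan point.

```python
def naive_writeback(dirty, writer_steps, max_iterations):
    """Scan dirty pages in ascending index order; the writer extends the
    file by `writer_steps` new dirty pages for every page cleaned.
    Returns the number of pages written, or None if the scan never
    terminated within max_iterations (the livelock)."""
    written = 0
    next_index = max(dirty) + 1
    for _ in range(max_iterations):
        if not dirty:
            return written          # scan finally caught up
        page = min(dirty)           # next dirty page in scan order
        dirty.remove(page)          # "write" it
        written += 1
        for _ in range(writer_steps):
            dirty.add(next_index)   # writer stays ahead of writeback
            next_index += 1
    return None                     # writer never stopped; sync never returns

# A writer that dirties one new page per page written keeps the scan
# from ever terminating within any fixed budget; with no writer the
# scan finishes after exactly the initially-dirty pages.
print(naive_writeback(set(range(10)), writer_steps=1, max_iterations=10_000))  # None
print(naive_writeback(set(range(10)), writer_steps=0, max_iterations=10_000))  # 10
```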
> <looks at Jens>
> That really was a pretty basic bug. It's writeback 101 to test that case :(
The code has had this livelock since Nick fixed data integrity issues in
write_cache_pages, which was (digging) commit 05fe478d ("mm:
write_cache_pages integrity fix") in January 2009. Jens just kept the code
as it was...
> That being said, I think the patch is insufficient. If I create an
> enormous (possibly sparse) file with a 16TB hole (or a run of clean
> pages) in the middle and then start busily writing into that hole (run
> of clean pages), the problem will still occur.
> One obvious fix for that (a) would be to add another radix-tree tag and
> do two passes across the radix-tree.
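Fix (a) can be sketched with a toy model (a hypothetical illustration with integer page indices and Python sets, not the actual radix-tree code): pass one tags everything that is dirty at sync time, pass two writes only tagged pages, so pages dirtied after sync starts are skipped and the scan is bounded. This is essentially the tag-and-sweep scheme that later landed upstream as tag_pages_for_writeback()/PAGECACHE_TAG_TOWRITE.

```python
def tagged_writeback(dirty, writer_steps):
    """Two-pass sync: pass 1 tags the currently-dirty pages; pass 2
    writes only tagged pages, in index order. The concurrent writer may
    dirty new pages during pass 2, but they are untagged and skipped."""
    to_write = set(dirty)           # pass 1: tag what is dirty right now
    next_index = max(dirty) + 1
    written = 0
    while to_write:
        page = min(to_write)        # pass 2: scan tagged pages in order
        to_write.remove(page)
        dirty.discard(page)         # "write" the page
        written += 1
        for _ in range(writer_steps):
            dirty.add(next_index)   # new dirt is untagged; sync skips it
            next_index += 1
    return written

dirty = set(range(10))
print(tagged_writeback(dirty, writer_steps=1))  # 10: only the snapshot is written
print(len(dirty))                               # 10 newly-dirtied pages remain
```

The extra tag costs one more radix-tree pass per sync, which is the expense concern discussed below.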
> Another fix (b) would be to track the number of dirty pages per
> address_space, and only write that number of pages.
> Another fix would be to work out how the code handled this situation
> before we broke it, and restore that in some fashion. I guess fix (b)
> above kinda does that.
(b) does not work for data integrity sync (see the changelog of the
above-mentioned commit). I sent a patch doing (a) in February, but you in
particular raised concerns about whether it was too expensive... Since it
does have some cost (although I was not able to measure any performance
impact) and I didn't know a better solution, I just postponed the patches.
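Why a page-count quota breaks data integrity can also be shown with a toy model (hypothetical, integer page indices, a set for the dirty pages): if the writer dirties a page that lies in the scan path during the sync, that page consumes quota that an originally-dirty page needed, and sync returns with that page still dirty.

```python
def counted_writeback(dirty, quota, new_dirt):
    """Fix (b): write at most `quota` pages (the dirty count sampled at
    sync time), scanning in index order. `new_dirt` maps "after writing
    page p" -> a page the concurrent writer dirties at that moment."""
    written = []
    while dirty and quota > 0:
        page = min(dirty)
        dirty.remove(page)
        written.append(page)
        quota -= 1
        if page in new_dirt:
            dirty.add(new_dirt[page])   # writer dirties a page mid-scan
    return written

# Pages 0 and 100 are dirty when sync starts, so the quota is 2.
# After page 0 is written, the writer dirties page 50, which sits in the
# scan path ahead of page 100 and eats the remaining quota.
dirty = {0, 100}
print(counted_writeback(dirty, quota=2, new_dirt={0: 50}))  # [0, 50]
print(dirty)  # {100}: dirty when sync was called, yet never written
```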
But I guess it's time to revive the series and maybe we'll get further with
it this time.
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR