On Tue, Jul 16, 2013 at 02:02:12PM -0700, Linus Torvalds wrote:
> On Tue, Jul 16, 2013 at 1:43 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > Yes - IO is serialised based on the ip->i_iolock, not i_mutex. We
> > don't use i_mutex for many things IO related, and so internal
> > locking is needed to serialise against stuff like truncate, hole
> > punching, etc, that are run through non-vfs interfaces.
> Umm. But the page IO isn't serialized by i_mutex *either*. You don't
> hold it across page faults. In fact you don't even take it at all
> across page faults.

Right, and that's one of the biggest problems page based IO has - we
can't serialise it against other IO and other page cache
manipulation functions like hole punching. What happens when a
splice read or mmap page fault races with a hole punch? You get
stale data being left in the page cache because we can't serialise
the page read with the page cache invalidation and underlying extent
removal.

Indeed, why do you think we've been talking about VFS-level IO range
locking for the past year or more, and had a discussion session at
LSF/MM this year on it? i.e. this:

So forget about this "we don't need no steenkin' IO serialisation"
concept - it's fundamentally broken.

FWIW, hole punching in XFS takes the i_iolock in exclusive
mode, and hence serialises correctly against splice. IOWs, there is
a whole class of splice read data corruption race conditions that
XFS is not susceptible to, but....

> *Every* other local filesystem uses generic_file_splice_read() with
> just a single
> .splice_read = generic_file_splice_read,
... and so they all are broken in a nasty, subtle way....