
Re: splice vs execve lockdep trace.

To: Dave Jones <davej@xxxxxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>, Linux Kernel <linux-kernel@xxxxxxxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>, Oleg Nesterov <oleg@xxxxxxxxxx>, Ben Myers <bpm@xxxxxxx>
Subject: Re: splice vs execve lockdep trace.
From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 15 Jul 2013 20:25:14 -0700
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20130716023847.GA31481@xxxxxxxxxx>
References: <20130716015305.GB30569@xxxxxxxxxx> <CA+55aFyLbqJp0-=7=HOF9sKGOHwsa7A7-V76b8tbsnra8Z2=-w@xxxxxxxxxxxxxx> <20130716023847.GA31481@xxxxxxxxxx>
Sender: linus971@xxxxxxxxx
On Mon, Jul 15, 2013 at 7:38 PM, Dave Jones <davej@xxxxxxxxxx> wrote:
>   The recent trinity changes shouldn't have really made
> any notable difference here.

Hmm. I'm not aware of anything that has changed in this area since
3.10 - neither in execve, XFS, nor in splice. Not even since 3.9.

But I may certainly have missed something.

> Interestingly, the 'soft lockups' I was
> seeing all the time on that box seem to have gone into hiding.

Honestly, I'm somewhat inclined to blame the whole perf situation, and
saying that we hopefully got that fixed. In between the silly do_div()
buglets and all the indications that the time was spent in nmi
handlers, I'd be willing to just ignore them as false positives
brought on by the whole switch to the perf irq..

>  > Or is the XFS i_iolock required for this thing to happen at all?
>  > Adding Ben Myers to the cc just for luck/completeness.
> It is only happening (so far) on the XFS test box, but I don't have
> enough data to say that's definite yet.

.. so there's been a number of xfs changes, and I don't know the code,
but none of them seem at all relevant to this.

The "pipe -> cred_guard_mutex" lock chain is pretty direct, and can be
clearly attributed to splicing into /proc. Now, whether that is a
*good* idea or not is clearly debatable, and I do think that maybe we
should just not splice to/from proc files, but that doesn't seem to be
new, and I don't think it's necessarily *broken* per se, it's just
that splicing into /proc seems somewhat unnecessary, and various proc
files do end up taking locks that can be "interesting".

At the other end of the spectrum, the "cred_guard_mutex -> FS locks"
thing from execve() is also pretty clear, and probably not fixable or
necessarily something we'd even want to fix.

But the "FS locks -> pipe" part is a bit questionable. Honestly, I'd
be much happier if XFS used generic_file_splice_read/write().

And looking more at that, I'm actually starting to think this is an
XFS locking problem. XFS really should not call back to splice while
holding the inode lock.

But that XFS code doesn't seem new either. Is XFS a new thing for you
to test with?

Ben? Comments? I added the xfs list too now that I'm starting to
possibly blame XFS more actively..

