Re: xfs_file_splice_read: possible circular locking dependency detected

To: CAI Qian <caiqian@xxxxxxxxxx>
Subject: Re: xfs_file_splice_read: possible circular locking dependency detected
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 7 Sep 2016 09:34:18 +1000
Cc: linux-xfs <linux-xfs@xxxxxxxxxxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>, Al Viro <viro@xxxxxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <58974432.234567.1473198839605.JavaMail.zimbra@xxxxxxxxxx>
References: <723420070.1340881.1472835555274.JavaMail.zimbra@xxxxxxxxxx> <1832555471.1341372.1472835736236.JavaMail.zimbra@xxxxxxxxxx> <20160903003919.GI30056@dastard> <58974432.234567.1473198839605.JavaMail.zimbra@xxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Sep 06, 2016 at 05:53:59PM -0400, CAI Qian wrote:
> 
> 
> ----- Original Message -----
> > Fundamentally a splice infrastructure problem. If we let splice race
> > with hole punch and other fallocate()-based extent manipulations to
> > avoid this lockdep warning, we allow the potential for reads or
> > writes to regions of the file that have been freed. We can live with
> > having lockdep complain about this potential deadlock, as it is
> > unlikely to ever occur in practice. The other option is simply not
> > an acceptable solution....
> The problem with living with this lockdep complaint is that once it
> fires, it prevents other complaints from showing up. For example, I had
> to apply commit dc3a04d first to fix an early RCU lockdep splat during
> bisecting.

Not my problem.

My primary responsibility is to maintain filesystem integrity and
data safety for the hundreds of thousands (millions?) of XFS users:
it's their data, and I will always err on the side of safety and
integrity. As such, I really don't care if there's collateral damage
to developer debugging tools; user data integrity requirements always
come first...
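
For reference, the lockdep report at the root of this thread has the
shape of a classic AB-BA lock-order inversion: the splice read path
takes the file lock and then the pipe lock, while another path ends up
taking the same pair in the opposite order. The sketch below is only a
userspace analogue of that shape (the lock names file_lock and
pipe_lock are hypothetical stand-ins, and the real kernel cycle runs
through more locks than two), but it shows the pattern lockdep warns
about:

/*
 * Userspace sketch of an AB-BA lock inversion.  The lock names are
 * hypothetical; the real report involves the XFS iolock, the pipe
 * mutex and friends, but the shape of the cycle is the same.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t file_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t pipe_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the splice-read side: file lock first, then pipe lock. */
static void *path_a(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&file_lock);
        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&pipe_lock);
        puts("path_a: got both locks");
        pthread_mutex_unlock(&pipe_lock);
        pthread_mutex_unlock(&file_lock);
        return NULL;
}

/* Models a path that takes the same locks in the opposite order. */
static void *path_b(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&pipe_lock);
        usleep(1000);
        pthread_mutex_lock(&file_lock);
        puts("path_b: got both locks");
        pthread_mutex_unlock(&file_lock);
        pthread_mutex_unlock(&pipe_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, path_a, NULL);
        pthread_create(&b, NULL, path_b, NULL);
        /* With both paths racing, this will usually deadlock: A holds
         * file_lock waiting for pipe_lock, B holds pipe_lock waiting
         * for file_lock. */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Build with "gcc -pthread" and run it a few times; the two threads will
usually wedge against each other. Lockdep exists to flag exactly this
ordering before a deadlock actually happens, which is why the warning
fires even though the real race is unlikely to occur in practice.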

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
