On Tue, 15 Apr 2014 22:13:46 -0700 Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> On Wed, 2014-04-16 at 14:03 +1000, NeilBrown wrote:
> > sk_lock can be taken while reclaiming memory (in nfsd for loop-back
> > NFS mounts, and presumably in nfs), and memory can be allocated while
> > holding sk_lock, at least via:
> > inet_listen -> inet_csk_listen_start ->reqsk_queue_alloc
> > So to avoid deadlocks, always set PF_FSTRANS while holding sk_lock.
> > This deadlock was found by lockdep.
> Wow, this is adding expensive stuff in fast path, only for nfsd :(
Yes, this was probably one part that I was least comfortable about.
> BTW, why should the current->flags should be saved on a socket field,
> and not a current->save_flags. This really looks a thread property, not
> a socket one.
> Why nfsd could not have PF_FSTRANS in its current->flags ?
nfsd does have PF_FSTRANS set in its current->flags. But some other processes
do not. If any process takes sk_lock, allocates memory, and then blocks in
reclaim, it could be waiting for nfsd. If nfsd then waits for that same
sk_lock, it would cause a deadlock.
Thinking a bit more carefully .... I suspect that any socket that nfsd
created would only ever be locked by nfsd. If that is the case then the
problem can be resolved entirely within nfsd. We would need to tell lockdep
that there are two sorts of sk_locks, those which nfsd uses and all the
rest. That might get a little messy, but wouldn't impact performance.
Is it justified to assume that sockets created by nfsd threads would only
ever be locked by nfsd threads (and interrupts, which won't be allocating
memory so don't matter), or might they be locked by other threads - e.g. for
'netstat -a' etc.?
> For applications handling millions of sockets, this makes a difference.