
Re: [PATCH 4/3] xfs: xfs_qm_dqrele mostly doesn't need locking

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH 4/3] xfs: xfs_qm_dqrele mostly doesn't need locking
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Mon, 16 Dec 2013 10:21:52 -0800
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131213213006.GU10988@dastard>
References: <1386841258-22183-1-git-send-email-david@xxxxxxxxxxxxx> <20131212102507.GX10988@dastard> <20131213132807.GB13689@xxxxxxxxxxxxx> <20131213213006.GU10988@dastard>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sat, Dec 14, 2013 at 08:30:06AM +1100, Dave Chinner wrote:
> >     if (atomic_dec_and_test(&dqp->q_nrefs)) {
> >             if (list_lru_add(&mp->m_quotainfo->qi_lru, &dqp->q_lru))
> >                     XFS_STATS_INC(xs_qm_dquot_unused);
> >     }
> > 
> > given that the only locking we need is the internal lru lock?
> 
> Yes, I think it is.
> 
> However, that involves changing all the callers of dqput to not hold
> the dqlock when they call, which is a bigger change than was
> necessary to avoid the lock contention problem. i.e. it doesn't seem
> to be in a fast path that needed immediate fixing, so I didn't touch
> it.

Given that the lru list lock nests inside dqlock we can just turn
dqput into:

void
xfs_qm_dqput(
        struct xfs_dquot        *dqp)
{
        ASSERT(dqp->q_nrefs > 0);
        ASSERT(XFS_DQ_IS_LOCKED(dqp));

        trace_xfs_dqput(dqp);

        xfs_qm_dqrele(dqp);
        xfs_dqunlock(dqp);
}

But with my other patch we can probably replace most callers
with xfs_qm_dqrele, and easily convert the remaining ones to an
open-coded version that drops the lock first.
