
Re: [PATCH v2] Use atomic_t and wait_event to track dquot pincount

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [PATCH v2] Use atomic_t and wait_event to track dquot pincount
From: Lachlan McIlroy <lachlan@xxxxxxx>
Date: Mon, 29 Sep 2008 13:08:19 +1000
Cc: Peter Leckie <pleckie@xxxxxxx>, xfs@xxxxxxxxxxx, xfs-dev@xxxxxxx
In-reply-to: <20080926112833.GB3287@xxxxxxxxxxxxx>
References: <48D9C1DD.6030607@xxxxxxx> <48D9EB8F.1070104@xxxxxxx> <48D9EF6E.8010505@xxxxxxx> <20080924074604.GK5448@disturbed> <48D9F718.4010905@xxxxxxx> <20080925010318.GB27997@disturbed> <48DB4F3F.8040307@xxxxxxx> <48DC3682.2030602@xxxxxxx> <20080926112833.GB3287@xxxxxxxxxxxxx>
Reply-to: lachlan@xxxxxxx
User-agent: Thunderbird (X11/20080914)
Christoph Hellwig wrote:
> On Fri, Sep 26, 2008 at 11:10:26AM +1000, Lachlan McIlroy wrote:
>> Good work Pete.  We should also consider replacing all calls to
>> wake_up_process() with wake_up() and a wait queue so we don't go
>> waking up threads when we shouldn't be.
>
> No.  The daemons should not block anyway in these places, and using
> a waitqueue just causes additional locking overhead.

The daemons shouldn't block any more in the code we are going to fix,
but what about somewhere else?  Maybe in a memory allocation, semaphore,
mutex, etc... ?  Can you guarantee that no other code mishandles being
woken up prematurely?

Just as it is prudent to be defensive and add a loop around the sv_wait(),
we should also be prudent and avoid triggering this same problem in some
other buggy code elsewhere.  Using wait queues may add some locking
overhead, but if we are waking up threads that shouldn't be woken up then
we're already wasting cycles on unnecessary context switches.

Our customers won't notice if they lose a couple of cycles here or there,
but they will notice deadlocks, corruption or panics.  And I would feel at
ease knowing this problem won't happen again.
