
Re: [pcp] libpcp multithreading - next steps

To: pcp@xxxxxxxxxxx
Subject: Re: [pcp] libpcp multithreading - next steps
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Tue, 19 Jul 2016 09:08:46 +1000
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <578D1AE1.6060307@xxxxxxxxxx>
References: <20160603155039.GB26460@xxxxxxxxxx> <578D1AE1.6060307@xxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0
On 19/07/16 04:07, Dave Brolley wrote:
...
Let's see if we can get this moving again.
...
What I proposed is to back up a bit. Let's isolate the original
loosening of the __pmLock_libpcp lock, measure the performance
improvement for the scenario that inspired it, and create some QA to
verify it. If some of this has already been done, then let's re-review
those results.

I support this approach.

Additionally, I'd like to see for each group of changes ...

(a) what's the problem this is trying to address? (I suspect the development, review and QA parties have not been in agreement on this in the past)

(b) what's the evidence for this being a real problem? ... here "real" means a realistic probability of badness happening in normal use cases, as there is sometimes a temptation to over-engineer the solution, introducing complexity that may not be warranted given the likely consequences of the problem being addressed. A hypothetical example would be finer-grained locking to increase parallelism in a section of the code that is rarely traversed in practice.

(c) some verifiable before-and-after evidence to show the problem has been addressed without side-effects.

But before we start, I think we need a discussion and consensus on how/if the create/delete context race should be addressed ... this relates to the current lock ordering and this snippet from __pmHandleToPtr():

        PM_UNLOCK(__pmLock_libpcp);
        PM_LOCK(sts->c_lock);

I think this is a "how" not "if" question, but the best solution is not obvious (at least to me).
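
To make the race concrete for anyone joining the thread, here is a minimal stand-alone sketch of the window that unlock/lock sequence opens. It uses plain pthreads and mock names (ctx_t, ctx_tab, lib_lock, handle_to_ptr, destroy_ctx) rather than the real libpcp structures and PM_LOCK/PM_UNLOCK macros, so treat it as an illustration of the problem, not as libpcp code:

/*
 * Sketch of the create/delete context race.  Mock types and names,
 * NOT the real libpcp structures or locking macros.
 */
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    pthread_mutex_t c_lock;    /* per-context lock, like __pmContext.c_lock */
    int             valid;
} ctx_t;

static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;  /* like __pmLock_libpcp */
static ctx_t *ctx_tab[8];      /* like the contexts[] table */

/* Analogue of __pmHandleToPtr(): return the context with its c_lock held */
static ctx_t *
handle_to_ptr(int handle)
{
    ctx_t *sts;

    pthread_mutex_lock(&lib_lock);
    sts = ctx_tab[handle];
    pthread_mutex_unlock(&lib_lock);
    /*
     * RACE WINDOW: between the unlock above and the lock below, another
     * thread may run destroy_ctx(handle) and free(sts), leaving us about
     * to lock (and later dereference) freed memory.
     */
    if (sts != NULL)
        pthread_mutex_lock(&sts->c_lock);
    return sts;
}

/* Analogue of pmDestroyContext(): remove from the table and free */
static void
destroy_ctx(int handle)
{
    ctx_t *sts;

    pthread_mutex_lock(&lib_lock);
    sts = ctx_tab[handle];
    ctx_tab[handle] = NULL;
    pthread_mutex_unlock(&lib_lock);
    if (sts != NULL) {
        pthread_mutex_destroy(&sts->c_lock);
        free(sts);             /* use-after-free if the other thread lost the race */
    }
}

Any "how" (e.g. taking the per-context lock before dropping the global lock, which changes the lock ordering, or deferring destruction with some form of reference counting) brings its own trade-offs, which is presumably the discussion we need to have.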
