
Re: [pcp] pcp+graphite, take 2

To: pcp@xxxxxxxxxxx
Subject: Re: [pcp] pcp+graphite, take 2
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Thu, 19 Jun 2014 12:10:24 +1000
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <y0msin1vo48.fsf@xxxxxxxx>
References: <20140616214301.GB6693@xxxxxxxxxx> <53A181C4.5050402@xxxxxxxxxx> <y0m61jyw965.fsf@xxxxxxxx> <y0msin1vo48.fsf@xxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0
On 19/06/14 08:39, Frank Ch. Eigler wrote:
> ...
> It doesn't look too serious; will look into it tomorrow.

Glad you think so ...

I had a quick look at libpcp and this appears to be the locking story ...

pmFetch gets the ctxp c_lock from __pmHandleToPtr
... and calls __pmLogFetch
... which, assuming this is INTERP mode, immediately calls __pmLogFetchInterp
... which may take the global lock, get PCP_COUNTER_WRAP from the
    environment, and then release the global lock
... c_lock is held until the return from __pmLogFetch, just before the
    return from pmFetch

And c_lock is a recursive mutex.
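For reference, a rough pseudo-C sketch of that sequence as I read it (simplified and not the actual libpcp source; error paths omitted, and the lock macro is illustrative):

/*
 * Sketch of the pmFetch locking sequence described above.
 */
int
pmFetch(int numpmid, pmID *pmidlist, pmResult **result)
{
    /* __pmHandleToPtr() returns with ctxp->c_lock (a recursive mutex) held */
    __pmContext *ctxp = __pmHandleToPtr(pmWhichContext());

    /*
     * In interpolation mode __pmLogFetch() immediately calls
     * __pmLogFetchInterp(), which may take the global lock just long
     * enough to read PCP_COUNTER_WRAP from the environment, then
     * releases the global lock again.
     */
    int sts = __pmLogFetch(ctxp, numpmid, pmidlist, result);

    /* c_lock is held until here, just before pmFetch returns */
    PM_UNLOCK(ctxp->c_lock);
    return sts;
}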

So an invalid read below pmFetch() for a pmResult component previously freed below pmFetch() looks to be "a bit of a mystery, Beryl".

Unless you had my commit without the follow-up commit that fixes the thread-unsafe code I added when cleaning the read cache ... which is certainly implicated in the freeing part of your valgrind output.
