
pcp updates

To: pcp@xxxxxxxxxxx
Subject: pcp updates
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Fri, 16 Dec 2011 06:54:51 +1100
These two commits should be pulled into the main tree the next time anyone
is planning a serious PCP release ... the interp.c bug is particularly
nasty if the planets align and it chooses to bite.

Changes committed to git://oss.sgi.com/kenj/pcp.git dev

 src/libpcp/src/interp.c |   16 ++++++++++------
 src/libpcp/src/pdubuf.c |    5 +++++
 2 files changed, 15 insertions(+), 6 deletions(-)

commit 3534fbb41da84ee94ee1ebc63f67fc4cae64b974
Author: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date:   Fri Dec 16 06:47:46 2011 +1100

    libpcp - PDU buffer asserts
    
    As a defence mechanism against the sort of problems found in the
    previous interp.c fix, asserts have been added to __pmPinPDUBuf() and
    __pmUnpinPDUBuf() to ensure their arguments are at least word (int)
    aligned as they should always be for valid use.
    
    This commit and the previous one pass QA on LinuxMint 12 with no failures.
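
For context, a minimal sketch of the kind of alignment assert being
described (illustrative only; the real checks live inside
__pmPinPDUBuf() and __pmUnpinPDUBuf() in src/libpcp/src/pdubuf.c, and
the helper name here is invented):

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical helper: every valid PDU buffer address is at least
     * int (word) aligned, so a misaligned pointer can only be one of
     * the rogue "addresses" the interp.c fix was chasing. */
    static void
    check_pdubuf_alignment(const void *pdubuf)
    {
        assert(((uintptr_t)pdubuf % sizeof(int)) == 0);
    }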

commit e1f007d6d0d789f84385510b34afc366e9912970
Author: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date:   Fri Dec 16 06:37:26 2011 +1100

    libpcp/interp.c - rogue PDU buffer unpinning fix
    
    Customer problem reported from SGI by Arthur Kepner.
    Original report here:
        http://oss.sgi.com/archives/pcp/2011-05/msg00054.html
    Follow-up here:
        http://oss.sgi.com/archives/pcp/2011-12/msg00006.html
    
    The problem was eventually tracked down to a day-one oversight in
    libpcp, in an obscure corner case in interp.c involving "mark" records
    in the archive and 32-bit metric values ... bogus "addresses" were
    being used as arguments for PDU buffer unpinning, and if one of these
    happened to fall within the address range of a valid PDU buffer that
    was not free, and that PDU buffer was then re-used, it _might_ cause
    heap corruption as seen in pmlogreduce.
    
    The fix has been confirmed by Arthur with a reproducible test case.
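
To make the failure mode concrete, here is an illustrative sketch (the
struct and function names are invented, not the actual interp.c data
structures): for 32-bit metric values the interpolation code can hold a
copy of the value inline rather than a pointer into a pinned PDU buffer,
so the stored "address" must only be handed to __pmUnpinPDUBuf() when it
really does reference a pinned buffer:

    #include <stddef.h>

    /* real libpcp routine (src/libpcp/src/pdubuf.c) */
    extern int __pmUnpinPDUBuf(void *pdubuf);

    /* hypothetical value reference, illustrating the guard */
    typedef struct {
        int   inline_value;   /* 1 => value stored inline, nothing pinned */
        void  *vbuf;          /* else points into a pinned PDU buffer */
    } value_ref_t;

    static void
    release_value(value_ref_t *vp)
    {
        /* unpinning an inline value's bit pattern as if it were an
         * address is exactly the rogue unpin described above */
        if (!vp->inline_value)
            __pmUnpinPDUBuf(vp->vbuf);
    }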


