On 04/03/15 09:19, Frank Ch. Eigler wrote:
> ...
> My guess is that the rounding-up was not for this purpose, but for the
> hypothetical easier reuse of the PDUbufs after unpinning &
> free-listing - i.e., trying to avoid fragmentation.
My memory was wrong and you're correct. Rounding up to the next
1024-byte boundary increases the chance we can reuse the buffer for a
slightly different size, and so avoid another allocation.
> ...
> A steady state between active requests is all-zeroes :-). Will see
> about getting a mid-run peak set of numbers.
Not so. The original buffer pool code _never_ frees an allocated buffer
... it maintains its own free list, so the sum of the allocated and free
buffers represents a high-water mark of the allocation footprint (to mix
the metaphors obscenely).
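A minimal sketch of that never-free, free-list pool (names and the fixed buffer size are illustrative, not PCP's actual API): released buffers go back on a free list rather than to `free()`, so `nalloc` records the pool's high-water mark:

```c
#include <stdlib.h>

struct buf {
    struct buf *next;     /* free-list link */
    char        data[1024];
};

static struct buf *freelist;  /* buffers released for reuse */
static int         nalloc;    /* total buffers ever malloc'd: high-water mark */

static struct buf *
pool_get(void)
{
    struct buf *b = freelist;
    if (b != NULL)
        freelist = b->next;        /* reuse: no new allocation */
    else {
        b = malloc(sizeof(*b));    /* grow the pool */
        if (b != NULL)
            nalloc++;
    }
    return b;
}

static void
pool_put(struct buf *b)
{
    b->next = freelist;            /* never free(): keep for reuse */
    freelist = b;
}
```

Under this scheme the allocated count plus the free-list length only ever grows, which is exactly why the two together read as a footprint high-water mark.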
> Similar here, with the new code:
>     pmcd.buf.alloc
>         inst [12 or "0012"] value 2
>         inst [20 or "0020"] value 2
>         inst [1024 or "1024"] value 2
>
> and all zeroes elsewhere. ...
So you're freeing buffers when the pin count goes to zero?
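The free-on-unpin behaviour being asked about could look roughly like this; the names and layout are hypothetical, not PCP's `__pmPinPDUBuf`/`__pmUnpinPDUBuf` implementation:

```c
#include <stdlib.h>

/* Sketch only: each buffer carries a pin count, and the buffer is
 * returned to the allocator when the last pin is dropped, instead of
 * going onto a free list. */
struct pdubuf {
    int  pincnt;
    char data[1024];
};

static struct pdubuf *
pdubuf_pin(struct pdubuf *b)
{
    b->pincnt++;
    return b;
}

/* Returns 1 if the buffer was freed, 0 if pins remain. */
static int
pdubuf_unpin(struct pdubuf *b)
{
    if (--b->pincnt > 0)
        return 0;     /* still in use elsewhere */
    free(b);          /* last reference gone: give memory back */
    return 1;
}
```

The trade-off relative to the free-list pool is memory footprint versus allocation churn: freeing on the last unpin keeps the footprint tight but pays `malloc`/`free` on every buffer lifetime.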
> ... But that's in nearly-idle state. The
> pdubufs get much busier mid-archive-processing.
Eh? This is pmcd's buffer pool, which has nothing to do with archive
processing.
> Sure, we could microbenchmark, but it may be even better to designate
> some big pdubuf-intensive realistic workload (some tiny job? a big
> pmlogextract? pmwebd-graphite gigaquery?), and compare those.
OK. But you've missed my "important" ones ... pmcd, pmlogger and pmie.
pmlogextract is unlikely to be interesting as it bypasses many of the
code paths that other clients use, and in particular it never uses the
buffer pool we're talking about here.