On 06/09/2015 12:00 PM, Frank Ch. Eigler wrote:
The patch itself looks OK to me, but as I read it, this is not just an
error-handling issue. If the cache has stale entries, that suggests the
root cause of this issue is in the refresh functionality - shouldn't it
invalidate all entries and then re-activate only those still current
(plus any new instances)?
Correcting that latent bug (if it is one) is likely possible as a
follow-on. In the present case, a dynamic data source could have
wildly fluctuating sets of instances available from fetch to fetch
(e.g., snapshots of recent traffic between source-host network-address
pairs, where A-B traffic might appear then disappear then later
reappear). What pmdaCacheOp sequence would you recommend?
Well, something like this:
pmdaCacheOp(indom, PMDA_CACHE_INACTIVE);
refresh indom ...
for each name in the refreshed instance domain
pmdaCacheStore(indom, PMDA_CACHE_ADD, name, ...)
but I guess if it's wildly fluctuating like that, then the cache is
going to end up pretty large eventually - without bound - and that's
probably worse than the proc indom (where new pids come along all the
time and then exit, but never return, at least not until the pid space
wraps). In this case, however, instances can reappear, so we can't just
cull them from the cache when they disappear.
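To make that concrete, here's a minimal sketch of that sequence as a
refresh helper. The refresh_indom() name, the names[]/numnames
parameters and the NULL private pointer are just stand-ins for however
the PMDA enumerates its current instances - this isn't part of the
patch itself:

    #include <pcp/pmapi.h>
    #include <pcp/pmda.h>

    static int
    refresh_indom(pmInDom indom, char **names, int numnames)
    {
        int i, sts;

        /* mark every existing cache entry inactive */
        if ((sts = pmdaCacheOp(indom, PMDA_CACHE_INACTIVE)) < 0)
            return sts;

        /* re-activate (or add) only the instances seen on this fetch */
        for (i = 0; i < numnames; i++) {
            if ((sts = pmdaCacheStore(indom, PMDA_CACHE_ADD, names[i], NULL)) < 0)
                return sts;
        }
        return 0;
    }

Entries for instances that didn't show up this time stay in the cache
(so they keep their instance numbers if they reappear later) but are
reported inactive.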
Also, some qa to demonstrate the issue and the fix would be
appropriate, especially at this stage of the release.
The problem showed up with dramatic slowdowns and lots of diagnostic
I/O traffic into /var/tmp and the pmda .log file, not as differences
at the pmapi client level (other than sloth). How would one qa that?
Maybe capture an archive with a lot of instances coming and going in
a 'wildly fluctuating' manner as above, e.g. a script that generates
JSON data from /dev/random modulo 1000000, with churn of, say, half of
them reappearing or disappearing between fetches (a rough sketch of
such a generator is below). Then check that the log doesn't grow too
much, that we get PM_ERR_INST when expected, etc.
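Something along these lines could serve as the generator - a sketch
only: the JSON layout and the "inst-NNNNNN" naming are made up for
illustration and would need to match whatever the PMDA actually
consumes, and the fixed candidate pool (rather than raw /dev/random)
is what lets names disappear and later reappear:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define POOL 100    /* fixed pool of candidate names, so they can reappear */

    int
    main(void)
    {
        int id, first = 1;

        srand((unsigned)time(NULL));
        printf("{ \"values\": [");
        for (id = 0; id < POOL; id++) {
            if (rand() % 2)             /* ~half present on any given fetch */
                continue;
            printf("%s{ \"name\": \"inst-%06d\", \"value\": %d }",
                   first ? " " : ", ", (id * 9973) % 1000000, rand() % 100);
            first = 0;
        }
        printf(" ] }\n");
        return 0;
    }

Run once per fetch interval, that gives roughly half of a fixed
instance set coming and going between fetches, which should exercise
the reappearing case as well as the disappearing one.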
Anyway - it seems to me the patch is an improvement, and we should
just pull it in if nobody disagrees.
Thanks
-- Mark