Hi,
On 2016-05-16 09:59, Nathan Scott wrote:
> ----- Original Message -----
>
>> # time pminfo -f oracle > /dev/null
>>
>> real 0m6.583s
>> user 0m0.026s
>> sys 0m0.010s
>
> Yeah, OK, hmm (those times will certainly be the cause of the ./Install
> failure).
>
>> Then the most relevant part: for most clusters the response times are
>> somewhere between 0.03 and 0.3 sec, but these two stand out:
>
> Those seem like good-to-middling times, but this...
>
>> - oracle.file takes ~1.3s with ~1k rows
>> - oracle.object_cache takes ~3.2s with ~225k rows
>
> is horrendous. oracle.file is the same cluster we had trouble with earlier
> when testing with the Intel folk FWIW.
>
> I wonder if the best we can do here is something like:
> - disable these two clusters by default
> - add oracle.control metrics for each
> - add pmstore support to allow people to opt in to these clusters.
But if opting in to these (presumably via pmstore, along the lines of
the sketch below) means the timeout is pretty much guaranteed to be
hit, I'm not sure what the point would be. Fetching oracle.file alone
might work initially, but with both enabled it seems guaranteed to fail.
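
For the record, I assume the opt-in would end up looking roughly like
this; the oracle.control metric names are made up here since nothing is
implemented yet:

# pmstore oracle.control.file 1           # hypothetical metric name
# pmstore oracle.control.object_cache 1   # hypothetical metric name
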
> It's not ideal but I don't think there's much else we're going to be able to
> do to improve things on our end of the connection, and this would stabilize
> things for you at least. Thoughts?
I checked with some local DB folks - they haven't used the object_cache
metrics anywhere, so for them those fall into the nice-to-have
category. The file metrics, however, are important.
The above timings are from an almost completely unloaded DB instance,
so I'm not sure what they would look like under extreme load; I
wouldn't be surprised if they were higher. But that's exactly when the
metrics are needed the most, to see what is going on.
So we're back to the initial question of the thread: can we, for
example, adjust the 5 second timeout for the Oracle PMDA to be more
forgiving, or come up with some other approach here? It seems we can't
affect how long Oracle takes to respond, and on the PMDA side the
actual SELECT query already seems to be as efficient as it can be.
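
If the limit we're hitting is pmcd's standard PDU timeout for PMDAs (an
assumption on my part; the default is 5 seconds), then raising it might
already be enough, with no code changes:

# pmstore pmcd.control.timeout 10   # runtime only, reverts on pmcd restart

or persistently, by adding "-t 10" to pmcd's options file (typically
/etc/pcp/pmcd/pmcd.options).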
Thanks,
--
Marko Myllynen