On 04/08/2014, at 9:31 PM, Ken McDonell wrote:
> On 04/08/14 11:16, Nathan Scott wrote:
>> I was having a (very!) detailed discussion with someone after
>> the PyCon.AU talk yesterday, who was quite interested in the
>> event tracing support in PCP. One issue he raised was our use
>> of microsecond-resolution timestamps (over nanoseconds, which
>> he'd found valuable in the past with other tools).

Hi all,

That'd be me.
I thought I'd elaborate a little, because I can appreciate that it might seem
like supporting nanosecond resolution is unnecessary.
The application domain is capital markets: exchanges, brokers, and traders
using electronic trading protocols to monitor markets and manage orders. It's
a fiercely competitive area, and one where very small differences in
application performance can mean the difference between making and losing money.
A common metric is the "tick-to-trade" latency: the time taken to react to a
report of a prior trade from an exchange with a change to your own orders in
the market. Competitive tick-to-trade latencies are in the sub-20us range (and
down to sub-5us for the very serious).
Given these timescales, microsecond resolution becomes a real limitation: at
that granularity, the timings of many hundreds of events are conflated into a
single interval.
Hardware time-stamping units in network cards return nanosecond quantities,
although their effective resolution is often coarser (eg. the Napatech NT40E2
has a 4ns resolution limit). When dealing with 10 and 40Gb networks, and modern CPUs
executing many thousands of instructions per microsecond, the ability to
correlate at the sub-microsecond level is very useful.
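To make the truncation effect concrete, here's a small sketch (not PCP code, just an illustration using Python's standard `time` module on a POSIX system): any two events landing inside the same 1000ns window become indistinguishable once timestamps are rounded to microseconds.

```python
import time

# Nanosecond-resolution wall-clock timestamp (Python 3.7+).
ns = time.time_ns()

# Truncating to microseconds discards the low three decimal digits:
# every event within the same 1000 ns window gets the same timestamp,
# which at sub-20us tick-to-trade latencies conflates distinct events.
us = ns // 1000
print(f"nanosecond timestamp:  {ns}")
print(f"truncated to usec:     {us}000  (low three digits lost)")

# The kernel's advertised resolution for the realtime clock, in seconds
# (POSIX-only attribute of the time module):
print(f"CLOCK_REALTIME resolution: {time.clock_getres(time.CLOCK_REALTIME)}")
```

On typical Linux kernels `clock_getres()` reports 1ns, so the limiting factor is the metric format carrying the timestamp, not the clock source.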
Obviously not something that's urgent, but perhaps of more medium-term interest
for at least one small group of potential PCP users.
Thanks,
d