
Re: PCP Network Latency PMDA

To: William Cohen <wcohen@xxxxxxxxxx>
Subject: Re: PCP Network Latency PMDA
From: "Frank Ch. Eigler" <fche@xxxxxxxxxx>
Date: Wed, 25 Jun 2014 15:01:43 -0400
Cc: pcp@xxxxxxxxxxx
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <53AB1ADA.7040207@xxxxxxxxxx>
References: <53A34A47.3060008@xxxxxxxxxx> <53A9E126.2040000@xxxxxxxxxx> <y0mlhsl3rd9.fsf@xxxxxxxx> <53AB1ADA.7040207@xxxxxxxxxx>
User-agent: Mutt/1.4.2.2i
Hi, Will -

> > OK, that confirms the suspicion that a sampled-metric type of pmda
> > approach suits this better than a timestamped-line-of-trace-data one.

> Note that the number above was with really light traffic.  It is
> quite possible that the number of packets would be hundreds of
> thousands per second.  The netdev-times perf script records a huge
> amount of trace data and does post-processing on it.

Right, it's likely that in-situ statistics aggregation a la your stap
script is a winning approach here.
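
For the record, a minimal sketch of the sort of thing I have in mind
(the probe points and names below are my guesses, not taken from your
actual script):

    # Aggregate per-packet queueing latency in kernel space and only
    # emit summaries, instead of one timestamped trace line per packet.
    global queued          # skb address -> timestamp when queued
    global qlat_us         # queue-to-driver-hand-off latency (us)

    probe kernel.trace("net_dev_queue") {
        queued[$skb] = gettimeofday_us()
    }

    probe kernel.trace("net_dev_xmit") {
        t = queued[$skb]
        if (t) {
            qlat_us <<< gettimeofday_us() - t
            delete queued[$skb]
        }
    }

    # Report every 10 seconds; a PMDA would instead export these
    # aggregates as sampled metrics on demand.
    probe timer.s(10) {
        if (@count(qlat_us))
            printf("pkts=%d min=%dus avg=%dus max=%dus\n",
                   @count(qlat_us), @min(qlat_us),
                   @avg(qlat_us), @max(qlat_us))
        delete qlat_us
        delete queued   # drop stale entries so the map stays bounded
    }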


> The syscall can return before the packet is sent (or freed), so it
> is not clear what skb-free-to-sys_exit would show.

Good point.


> [...]  My understanding is that some of the networking hardware uses
> DMA, so the kernel is just handing the hardware a list of pointers
> and doesn't know exactly when a particular packet has been sent.
> The kernel knows the data has been sent because the skb can be
> freed. [...]

Right, if we have zero-copy hardware, that makes sense.
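
Extending the sketch above, the skb free could then serve as the
"transmit complete" signal (again assuming the stock tracepoints;
dropped packets go through kfree_skb and are ignored here):

    global handed_off      # skb address -> timestamp at driver hand-off
    global done_us         # hand-off-to-skb-free latency (us)

    probe kernel.trace("net_dev_xmit") {
        handed_off[$skb] = gettimeofday_us()
    }

    # consume_skb fires when a successfully transmitted skb is freed,
    # i.e. once the hardware is done with the DMA'd buffer.
    probe kernel.trace("consume_skb") {
        t = handed_off[$skb]
        if (t) {
            done_us <<< gettimeofday_us() - t
            delete handed_off[$skb]
        }
    }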


- FChE
