
Re: [PATCH]snmp6 64-bit counter support in proc.c

To: "David S. Miller" <davem@xxxxxxxxxx>
Subject: Re: [PATCH]snmp6 64-bit counter support in proc.c
From: Krishna Kumar <kumarkr@xxxxxxxxxx>
Date: Thu, 22 Jan 2004 18:45:18 -0800
Cc: kuznet@xxxxxxxxxxxxx, mashirle@xxxxxxxxxx, netdev@xxxxxxxxxxx, Shirley Ma <xma@xxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx

Hi Dave,

> > Do you really care about this situation? It only happens as a race within
> > one instruction in 4 billion. It will slow things down every time if we do it this way.

> correctness > performance

> If MRTG hits this case and my net usage graphs have weird spikes
> in them as a result, can I assign the bugzilla entry to you? :-)


If so, do you think the solution that I proposed earlier would work? No doubt it is quite ugly to behold :-)
The one issue with it is one extra cache-line load in 99.999999999% of cases (two instead of one) and two
extra loads in the remaining cases, but this path is executed rarely and the user can always wait for the
data instead of penalizing the kernel-side writers. (synchronize_kernel() is also a little heavy, so maybe
there is a more lightweight mechanism to make sure that the writer has finished.)

thanks,

- KK

__u64 get_sync_data(void *mib[], int nr, int cpu)
{
	__u64 res1, res2;
	__u64 res3;

	res1 = *((__u64 *) (((void *) per_cpu_ptr(mib[0], cpu)) + sizeof (__u64) * nr));
	synchronize_kernel();
	res2 = *((__u64 *) (((void *) per_cpu_ptr(mib[0], cpu)) + sizeof (__u64) * nr));
	if (res2 < res1) {
		/*
		 * Overflow, sync and re-read; the next read is guaranteed to be
		 * greater than res1.
		 */
		synchronize_kernel();
		res2 = *((__u64 *) (((void *) per_cpu_ptr(mib[0], cpu)) + sizeof (__u64) * nr));
	}
	res3 = res2;

	/* similar code for mib[1]; add that result into res3 as well */

	return res3;
}
#endif

static __u64
fold_field(void *mib[], int nr)
{
	...
	res += get_sync_data(mib, nr, i);	/* i: per-CPU index from the elided loop */
	...
}
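
For what it's worth, the elided per-CPU loop in fold_field() could look something like the sketch
below; this assumes the usual NR_CPUS / cpu_possible() iteration that the existing fold_field() in
net/ipv6/proc.c uses, so take it as illustrative rather than as part of the patch.

/*
 * Hypothetical full fold_field(), assuming the per-CPU loop elided above
 * follows the NR_CPUS / cpu_possible() pattern of the existing code.
 */
static __u64
fold_field(void *mib[], int nr)
{
	__u64 res = 0;
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		if (!cpu_possible(i))
			continue;
		/* 64-bit value of field nr on this CPU, summed over mib[0]/mib[1] */
		res += get_sync_data(mib, nr, i);
	}
	return res;
}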
