In message <19991228135006.A1007@xxxxxxxxxxx> you write:
> Please use a flag bit in the common part at least, that can be tested
> without fetching the new cache line on destruction (so that hot path
> code without firewalling does not pay the price)
Not sure I understand (common part of what?).
I assume you are worried about __kfree_skb... I can have a global
destructor count if you want, or better a max_destructed_field_offset:
	for (i = 0; i < max_destructed_field_offset; i += sizeof(long))
This will be zero iterations for the no-destructor case.
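A minimal userspace sketch of that idea, assuming a reserved per-skb area scanned a word at a time (the names `reserved`, `max_destructed_field_offset` and `needs_destruct` are illustrative, not the actual kernel API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical reserved area on each skb, scanned word by word. */
#define RESERVED_BYTES 16

struct sk_buff_sketch {
	unsigned long reserved[RESERVED_BYTES / sizeof(long)];
};

/* Highest offset any registered destructor cares about; 0 means
 * no destructors exist, so the scan below does zero iterations. */
static unsigned int max_destructed_field_offset;

/* Return nonzero if any word in the registered range is set. */
static int needs_destruct(const struct sk_buff_sketch *skb)
{
	unsigned int i;

	for (i = 0; i < max_destructed_field_offset; i += sizeof(long))
		if (*(const unsigned long *)((const char *)skb->reserved + i))
			return 1;
	return 0;
}
```

With `max_destructed_field_offset` at zero the hot path never touches the reserved cache line at all.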
> Also the linked list is cache unfriendly. Because the reserve buffer
> is limited anyways I think it is ok to use a fixed size array for the
> destructor pointers.
I don't want the destructor called unless some bit is set in the
reserved area (otherwise we get a function call to ip connection
tracking for every AF_UNIX skb: messy *and* slow).
That means we have an array of:
	struct {
		u_int16_t start;
		u_int16_t size;
		void (*destruct)(struct sk_buff *skb);
	}
which is not that much better than a linked list.
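A sketch of that fixed-array alternative, again with illustrative names (`destruct_entry`, `run_destructors`, the sizes) rather than real kernel identifiers:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RESERVED_BYTES 16
#define MAX_DESTRUCTORS 4

struct sk_buff_sketch {
	unsigned char reserved[RESERVED_BYTES];
};

/* Each entry maps a region of the reserved area to a destructor,
 * which is only called if that region contains nonzero bytes. */
struct destruct_entry {
	uint16_t start;                               /* offset into reserved */
	uint16_t size;                                /* bytes owned */
	void (*destruct)(struct sk_buff_sketch *skb);
};

static struct destruct_entry destructors[MAX_DESTRUCTORS];

static int region_used(const struct sk_buff_sketch *skb,
		       const struct destruct_entry *e)
{
	uint16_t i;

	for (i = 0; i < e->size; i++)
		if (skb->reserved[e->start + i])
			return 1;
	return 0;
}

/* Walk the fixed array on free: still one test per registered
 * destructor, which is why it is not much better than a list. */
static void run_destructors(struct sk_buff_sketch *skb)
{
	unsigned int i;

	for (i = 0; i < MAX_DESTRUCTORS; i++)
		if (destructors[i].destruct && region_used(skb, &destructors[i]))
			destructors[i].destruct(skb);
}

/* Test helper: counts how often it is invoked. */
static int destruct_calls;
static void count_destruct(struct sk_buff_sketch *skb)
{
	(void)skb;
	destruct_calls++;
}
```

The point of the sketch: whether the entries live in an array or a list, every free still walks all of them, so the layout only changes cache behaviour, not the number of tests.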
Rusty.
--
Hacking time.