ip_build_xmit() skb_reserve() problem

To: netdev@xxxxxxxxxxx
Subject: ip_build_xmit() skb_reserve() problem
From: nn@xxxxxxxxxxxx (Neal Nuckolls)
Date: Tue, 22 Aug 2000 23:30:16 -0700
Sender: owner-netdev@xxxxxxxxxxx
I have a Linux ethernet driver which must encapsulate standard ethernet
frames, adding and stripping a 12-byte media-specific protocol header.
It calls init_etherdev() and otherwise acts like a vanilla ethernet driver.
TCP allocates its skbs and reserves headroom based on MAX_HEADER, so
TCP packets tend to always have at least 12 bytes of skb_headroom
when the driver's start routine is called.
But ip_build_xmit() allocates and reserves skb headroom based on
dev->hard_header_len, rounded up to the next multiple of 16.
For ethernet (dev->hard_header_len == 14) this means non-TCP packets
tend to arrive at the driver's xmit start routine with only 2 bytes
of headroom, forcing my driver to call skb_realloc_headroom(), which
copies the entire packet.

If I increment dev->hard_header_len by 12 as a workaround,
this forces me to write my own hard_header() and rebuild_header() routines,
since code in eth.c and dev.c breaks otherwise -- OK, I can do that --
but it also means I cannot use the hardware header cache
(hard_header_cache/header_cache_update/hard_header_parse),
since hh_data[] supports at most 16 bytes.
This is a bummer, because now I'm adding a 12-byte pad in front of
each ether header constructed by my hard_header() routine and
immediately pulling it off again in my start_xmit routine -- it's there
just to coax ip_build_xmit() into allocating/reserving enough headroom,
as sketched below -- and I have to disable the header cache.
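Concretely, the pad trick looks something like this (again only a
sketch against 2.4-era interfaces, assuming dev->hard_header_len has
been set to ETH_HLEN + 12 == 26 at init time; the mydrv_ name is made
up, and the comment about eth_header()'s return value is my reading of
eth.c):

        static int mydrv_hard_header(struct sk_buff *skb, struct net_device *dev,
                                     unsigned short type, void *daddr,
                                     void *saddr, unsigned len)
        {
                /* build the real 14-byte ethernet header first */
                int ret = eth_header(skb, dev, type, daddr, saddr, len);

                /* then push a 12-byte dummy pad in front of it, so the
                 * total pushed matches hard_header_len and ip_build_xmit()
                 * reserves (26 + 15) & ~15 == 32 bytes of headroom */
                memset(skb_push(skb, 12), 0, 12);

                /* eth_header() returns +/- dev->hard_header_len to signal
                 * whether the destination was resolved; since
                 * hard_header_len already counts our 12 pad bytes, pass
                 * its result straight through */
                return ret;
        }

My start_xmit routine then begins with skb_pull(skb, 12) to throw the
pad away before pushing the real media header in its place.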

Could the following line in ip_build_xmit():

        int hh_len = (rt->u.dst.dev->hard_header_len + 15)&~15;

perhaps be changed to:

        int hh_len = (MAX_HEADER + 15) & ~15;

in the 2.4.0-pre timeframe?
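(Arithmetic check: with ethernet's 14-byte header the current
expression reserves (14 + 15) & ~15 == 16 bytes, i.e. 2 bytes to spare
once the header is built, whereas MAX_HEADER -- 32 at minimum, if I'm
reading linux/netdevice.h correctly -- would round to 32 and leave 18,
plenty for a 12-byte encapsulation.)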

I don't have the luxury of shipping kernel mods to my customers.
Please send any replies directly to me -- I'm not on the alias.

thanks.

neal
nn@xxxxxxxxxx
