On Sat, 15 Jan 2005 09:47:45 +1100
Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> wrote:
> However, it's not all that difficult to fix up either. We can adjust
> truesize to a more reasonable value in tcp_write_xmit(). Something
> like this will do.
>
> Actually it happens in the case of sendmsg() too. Unfortunately in
> that case we can't do a thing about it since the memory is allocated
> between skb->head and skb->tail.
Doing the adjustments at tcp_write_xmit() time also runs into the
problem you mentioned where we're just blindly subtracting from
sk_forward_alloc.
I really still feel that the best way is to "adjust as we add data".
sendmsg() does that already; I tried to add it to sendpage() in the
simplest way, but it needs some extra checks and wait_for_memory logic.
Here's a patch against current BK that tries to do that. See any
holes? :-)
===== net/ipv4/tcp.c 1.89 vs edited =====
--- 1.89/net/ipv4/tcp.c 2005-01-13 19:57:57 -08:00
+++ edited/net/ipv4/tcp.c 2005-01-14 20:22:27 -08:00
@@ -655,7 +655,7 @@
 	while (psize > 0) {
 		struct sk_buff *skb = sk->sk_write_queue.prev;
 		struct page *page = pages[poffset / PAGE_SIZE];
-		int copy, i;
+		int copy, i, can_coalesce;
 		int offset = poffset % PAGE_SIZE;
 		int size = min_t(size_t, psize, PAGE_SIZE - offset);
 
@@ -677,14 +677,20 @@
 			copy = size;
 
 		i = skb_shinfo(skb)->nr_frags;
-		if (skb_can_coalesce(skb, i, page, offset)) {
+		can_coalesce = skb_can_coalesce(skb, i, page, offset);
+		if (!can_coalesce && i >= MAX_SKB_FRAGS) {
+			tcp_mark_push(tp, skb);
+			goto new_segment;
+		}
+		if (sk->sk_forward_alloc < copy &&
+		    !sk_stream_mem_schedule(sk, copy, 0))
+			goto wait_for_memory;
+
+		if (can_coalesce) {
 			skb_shinfo(skb)->frags[i - 1].size += copy;
-		} else if (i < MAX_SKB_FRAGS) {
+		} else {
 			get_page(page);
 			skb_fill_page_desc(skb, i, page, offset, copy);
-		} else {
-			tcp_mark_push(tp, skb);
-			goto new_segment;
 		}
 
 		skb->len += copy;
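To make the accounting rule the patch enforces a bit more concrete, here is
a toy userspace model (not kernel code; the helper names, pool size and the
numbers below are made up for illustration): bytes get scheduled against
sk_forward_alloc in page-sized chunks before any page data is queued, so the
later sk_forward_alloc -= copy can never drive the counter negative.

/*
 * Toy userspace model of "adjust as we add data" socket memory
 * accounting.  Illustrative only; names and limits are invented.
 */
#include <stdio.h>
#include <stdbool.h>

#define TOY_PAGE_SIZE	4096
#define TOY_POOL_PAGES	8		/* stand-in for the global memory limit */

static int pool_pages_used;		/* stand-in for the global allocated count */

struct toy_sock {
	int forward_alloc;		/* bytes already reserved for this socket */
};

/* Rough analogue of scheduling memory for a socket: reserve whole pages. */
static bool toy_mem_schedule(struct toy_sock *sk, int size)
{
	int pages = (size + TOY_PAGE_SIZE - 1) / TOY_PAGE_SIZE;

	if (pool_pages_used + pages > TOY_POOL_PAGES)
		return false;		/* caller would goto wait_for_memory */

	pool_pages_used += pages;
	sk->forward_alloc += pages * TOY_PAGE_SIZE;
	return true;
}

/* The "adjust as we add data" step: charge copy bytes as they are queued. */
static bool toy_charge(struct toy_sock *sk, int copy)
{
	if (sk->forward_alloc < copy && !toy_mem_schedule(sk, copy))
		return false;

	sk->forward_alloc -= copy;	/* guaranteed not to go negative */
	return true;
}

int main(void)
{
	struct toy_sock sk = { 0 };
	int chunks[] = { 1000, 3000, 5000, 30000 };

	for (unsigned i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
		bool ok = toy_charge(&sk, chunks[i]);

		printf("charge %5d bytes: %s (forward_alloc=%d, pool=%d/%d pages)\n",
		       chunks[i], ok ? "ok" : "would wait_for_memory",
		       sk.forward_alloc, pool_pages_used, TOY_POOL_PAGES);
	}
	return 0;
}

Compiled as an ordinary C program, the last oversized charge trips the
"would wait_for_memory" path once the toy pool is exhausted, which is the
same point at which the patched sendpage path would sleep instead of
letting sk_forward_alloc go bogus.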