netdev

Re: SOMAXCONN too low

To: davidm@xxxxxxxxxx
Subject: Re: SOMAXCONN too low
From: "David S. Miller" <davem@xxxxxxxxxx>
Date: Wed, 29 Oct 2003 10:43:50 -0800
Cc: davidm@xxxxxxxxxxxxxxxxx, ak@xxxxxxx, netdev@xxxxxxxxxxx
In-reply-to: <16288.537.258222.601897@napali.hpl.hp.com>
References: <200310290658.h9T6w04k015302@napali.hpl.hp.com> <20031029133315.5638f842.ak@suse.de> <16287.62792.721035.910762@napali.hpl.hp.com> <20031029092220.12518b68.davem@redhat.com> <16288.537.258222.601897@napali.hpl.hp.com>
Sender: netdev-bounce@xxxxxxxxxxx
On Wed, 29 Oct 2003 10:08:25 -0800
David Mosberger <davidm@xxxxxxxxxxxxxxxxx> wrote:

> We noticed this problem with a server that uses one thread per CPU
> (pinned).  Why don't you run tux with the "backlog" parameter set to
> 128 and see what happens under heavy load?

Then TuX could be improved too, what can I say?  If the thread taking
in new connections does anything more involved than:

        while (1) {
                fd = accept(listen_fd);          /* pull the next connection off the listen backlog */
                thr = pick_service_thread();     /* pick a worker to own this connection */
                spin_lock(new_conn_queue[thr]);
                append(fd, new_conn_queue[thr]); /* hand the fd to that worker's queue */
                spin_unlock(new_conn_queue[thr]);
                wake(thr);                       /* and wake the worker up */
        }

it's broken.  I severely doubt that anyone can show that, when using
the above scheme, their multi-GHz CPU cannot handle whatever
connection load is put on the system.
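
For concreteness, here is a minimal userspace sketch of that loop,
assuming POSIX threads and a per-worker queue guarded by a mutex; the
names (struct worker, pick_worker(), NWORKERS, QLEN) are illustrative
and not taken from TuX:

        #include <pthread.h>
        #include <sys/socket.h>

        #define NWORKERS 4
        #define QLEN     1024

        struct worker {
                int             fds[QLEN];      /* pending connection fds */
                int             head, tail;     /* worker consumes from head */
                pthread_mutex_t lock;
                pthread_cond_t  wake;
        };

        static struct worker workers[NWORKERS];

        static void init_workers(void)
        {
                int i;

                for (i = 0; i < NWORKERS; i++) {
                        pthread_mutex_init(&workers[i].lock, NULL);
                        pthread_cond_init(&workers[i].wake, NULL);
                }
        }

        /* trivial round-robin choice of service thread */
        static int pick_worker(void)
        {
                static int next;

                return next++ % NWORKERS;
        }

        static void acceptor_loop(int listen_fd)
        {
                for (;;) {
                        int fd = accept(listen_fd, NULL, NULL);
                        struct worker *w;

                        if (fd < 0)
                                continue;

                        w = &workers[pick_worker()];
                        pthread_mutex_lock(&w->lock);
                        w->fds[w->tail] = fd;           /* append(fd, queue) */
                        w->tail = (w->tail + 1) % QLEN; /* overflow handling omitted */
                        pthread_mutex_unlock(&w->lock);
                        pthread_cond_signal(&w->wake);  /* wake(thr) */
                }
        }

Each worker would sit in pthread_cond_wait() on its own "wake"
condition and drain fds from the head of its queue; the acceptor never
does more per connection than one lock/unlock and a signal.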

The fact that people have written web servers that outperform
TuX and handle the load better is something else to think about.
They exist within the SOMAXCONN limits.
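
As an aside on how the limit bites: the backlog an application passes
to listen() is silently clamped to the SOMAXCONN cap, so asking for
more does nothing by itself.  A sketch of the call site, assuming a
listen_fd that is already bound:

        /* the kernel clamps the requested backlog to the SOMAXCONN cap,
         * so the effective queue length is min(1024, cap); on kernels
         * where the cap is the net.core.somaxconn sysctl it can be
         * raised at runtime instead of recompiling */
        listen(listen_fd, 1024);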
