
Re: PPPOE Was (Re: >=pre5 OOPS on boot failure to open /dev/console

To: hadi@xxxxxxxxxx
Subject: Re: PPPOE Was (Re: >=pre5 OOPS on boot failure to open /dev/console
From: Henner Eisen <eis@xxxxxxxxxxxxx>
Date: Tue, 18 Apr 2000 22:50:10 +0200
Cc: linux-kernel@xxxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx
In-reply-to: <Pine.GSO.4.20.0004172229160.4874-100000@xxxxxxxxxxxxxxxx> (message from jamal on Mon, 17 Apr 2000 22:48:22 -0400 (EDT))
References: <Pine.GSO.4.20.0004172229160.4874-100000@xxxxxxxxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx
>>>>> "jamal" == jamal  <hadi@xxxxxxxxxx> writes:

    jamal> If i understand you correctly you are just tunneling data,
    jamal> it just happens to be ppp, right?  What about
    jamal> call/connection setup, negotiation etc? If this is
    jamal> irrelevant then i agree with you that it doesnt matter what
    jamal> you use.

Call/connection setup will be done by the protocol's standard mechanisms:
e.g. a user space process will do a connect() or accept() on a socket
and then issue a PPPIOCATTACH ioctl in order to attach the data path to
a ppp channel (or something similar in order to attach it to a tunnel device).

    jamal> What about connection setup/teardown/general control? We
    jamal> already have pppd which suffices for the ppp
    jamal> negotiations. Most of this protocols have their own
    jamal> negotiations before they start ppp setup.

Yes, the framework exactly follows that paradigm. It only provides
the functionality needed to attach the connected socket's data path
to a ppp channel. All other work is left to the existing ppp_generic
module (with its co-worker pppd). I have not dived into the details of
the new pppd plugin mechanism yet, but I hope that a plugin which
does a connect() on a socket when pppd wants to open a ppp connection on
top of the `carrier' protocol is feasible.

    >> Existing network protocol stacks differ in various areas
    >> (e.g. which parts of the protocol processing need process
    >> context, how can ppp_channel flow control be interfaced to the
    >> carrier protocol's flow control mechanism).

    jamal> Flow control/setup is the slow path of the whole
    jamal> transaction. Naturaly it makes a lot of sense to move this
    jamal> part out of the kernel because it tends to be rich, adds
    jamal> tons of code to the kernel and might be subject to frequent
    jamal> changes. The interfacing to the "carrier protocol's" flow
    jamal> control mechanism is done outside (in user space). The
    jamal> connect() and disconnect() pppd hooks for example tie to
    jamal> the "carrier protocol's" connection setup and teardown.

I'm not sure what you mean by 'flow control'; it seems that we
have different things in mind when talking about it. Of course,
the end user's process, which has e.g. an open tcp connection that just
happens to be routed over the ppp connection, will be flow controlled by means
of the standard kernel mechanisms (the ppp / tunnel layer is not even
aware of this).

What I was thinking about was the low (device)-layer flow, which
is controlled by netif_{start,stop,wake}_queue() for linux
network devices or ppp_output_wakeup() for generic ppp_channels.
E.g. X.25 (the same holds for most connection oriented sockets) uses
a sliding window mechanism. If the send window is full, then we are
not allowed to send further frames to the peer. Thus, we should do
a netif_stop_queue() for a network device tunnel interface or return 'busy'
from our ppp_channel's ppp_start_xmit() method. And likewise, we want
to do a netif_wake_queue() or a ppp_output_wakeup() when there is space
in the send window again. It's that kind of flow control which I want to
interface to the carrier protocol.

Of course we could also just discard any tx packet while the send window
is full. But this will likely result in worse performance. It's
probably better to flow control the upper layer (net_device tunnel or
ppp_generic channel), because those upper layers can be much smarter
about what to do with a packet which we temporarily cannot accept for
transmission.

I don't see the framework as a competitor to the AF_PPPOX project.
The latter is appropriate for implementing special ppp encapsulations
in a very efficient/straightforward manner. My intended framework primarily
focuses on existing, mainly connection oriented, protocol families which
are usually used directly by user space processes (accessed via a socket
interface). If the protocol maintainer intends to use such a protocol as a
carrier for ppp frames (or to directly tunnel ip over it), then the
framework aims at making the implementation easier and at sharing some
common code among different protocol families.

