
Re: in-driver QoS

To: Vladimir Kondratiev <vkondra@xxxxxxx>
Subject: Re: in-driver QoS
From: jamal <hadi@xxxxxxxxxx>
Date: 09 Jun 2004 21:59:01 -0400
Cc: netdev@xxxxxxxxxxx, jt@xxxxxxxxxx
In-reply-to: <200406092127.28477.vkondra@xxxxxxx>
Organization: jamalopolis
References: <20040608184831.GA18462@xxxxxxxxxxxxxxxxxx> <200406090851.40691.vkondra@xxxxxxx> <1086780010.1051.106.camel@xxxxxxxxxxxxxxxx> <200406092127.28477.vkondra@xxxxxxx>
Reply-to: hadi@xxxxxxxxxx
Sender: netdev-bounce@xxxxxxxxxxx
On Wed, 2004-06-09 at 14:27, Vladimir Kondratiev wrote:

> Sure. I know when each DMA queue has space to accept new packets. W.r.t. the Tx
> discipline, it is really like 4 independent devices (taking TSPEC into account, see
> my mail about TGe: a minimum of 5 for a STA and 6 for an AP; I did not mention
> power-save buffering).

Vladimir - do you have one of these cards? Jean is putting some
doubts in my mind about their designs. Do they have separate DMA rings?

> I see you got the idea. Question is, how to implement it.

As suggested earlier (a rough sketch in code follows the list):
- introduce an id and id_state per ring.
- use an skb tag to select the id.
- if a ring is full, requeue to the qdisc with the same id.
- the qdiscs above must have semantics that map to the strict-priority
scheme (e.g. you could use CBQ, which does both priorities and bandwidth
allocation, or the simple prio or strict-prio qdiscs).
- netif stopping and starting is done per id/ring.
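
Something along these lines - purely illustrative, the my_* names,
ring_full(), and the use of skb->priority as the tag are made up, not
an existing API:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define MY_NUM_RINGS 4

struct my_ring {
	int id;			/* ring id, matched against the skb tag */
	int stopped;		/* id_state: set while the ring is full */
	/* DMA descriptors, head/tail indices, etc. */
};

struct my_priv {
	struct my_ring rings[MY_NUM_RINGS];
};

/* pick a ring id from the skb tag; here skb->priority is used as
 * the tag, set by the qdisc/classifier above */
static int my_select_ring(struct sk_buff *skb)
{
	return skb->priority & (MY_NUM_RINGS - 1);
}

static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_priv *priv = dev->priv;
	struct my_ring *ring = &priv->rings[my_select_ring(skb)];

	if (ring_full(ring)) {
		/* this id is full: mark it stopped and return nonzero
		 * so the qdisc requeues the skb against the same id */
		ring->stopped = 1;
		return 1;
	}

	/* post skb on this ring's DMA descriptors here */
	return 0;
}

/* on tx-completion interrupt for a ring: reclaim descriptors, then
 * restart just that id */
static void my_tx_done(struct my_priv *priv, int id)
{
	struct my_ring *ring = &priv->rings[id];

	/* reclaim completed descriptors ... */
	if (ring->stopped && !ring_full(ring)) {
		ring->stopped = 0;
		/* wake the queue for this id only; today's
		 * netif_wake_queue() is per-device, so this is the
		 * piece that needs new plumbing in the core */
	}
}

On the qdisc side, a prio qdisc with one band per ring id (or CBQ
classes with matching priorities) would give you the strict-priority
semantics above.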

Did I miss something?

Do you wanna try this? I could give you a hand but don't have much time
to code at the moment. I could point you to the different pieces of code
that need mods and suggest the changes.

cheers,
jamal



