Re: Queue and SMP locking discussion (was Re: 3c59x.c)

To: becker@xxxxxxxxx (Donald Becker)
Subject: Re: Queue and SMP locking discussion (was Re: 3c59x.c)
From: Werner Almesberger <almesber@xxxxxxxxxxx>
Date: Fri, 31 Mar 2000 12:51:30 +0200 (MET DST)
Cc: netdev@xxxxxxxxxxx, kuznet@xxxxxxxxxxxxx, andrewm@xxxxxxxxxx
In-reply-to: <Pine.LNX.4.10.10003301359200.2499-100000@xxxxxxxxxxxxx> from "Donald Becker" at Mar 30, 2000 02:55:26 PM
Sender: owner-netdev@xxxxxxxxxxx

Donald Becker wrote:
> We should lead a chant.. "Space.c must die. Space.c must die.."

Or maybe we could adapt "War" (the Edwin Starr song, as covered by
Bruce Springsteen):
"Space.c! UH! What is it good for? Absolutely nothin'!"

> Historical note: The dev->tbusy flag turned into a lock because of a bug.

Considering all the hairy locking issues and that there seems to be a
general trend of moving things into tasklets, I wonder if we wouldn't
be better off with a tasklet-based structure.

Basically, my assumptions are that (1) most work should actually be
done in a tasklet, and (2) giving up and retrying later (i.e. when the
lock becomes available) is a better strategy than spin-locking
non-trivial portions of code.

It should be straightforward to implement an efficient non-blocking
"run >= once" (*) mutex with tasklets, so all that's needed on the
infrastructure side is that (1) the important services (e.g. timers)
either come as "well-known" tasklets, or at least don't incur too much
overhead for starting a tasklet to do the work, and (2) any entry into
a driver can be retried by scheduling some tasklet.

(*) "run >= once", because it seems that in most cases, we have some
     while (there_is_work) do_work();
     construct anyway, so the occasional extra invocation doesn't
     hurt that much. An efficient "run once" implementation would be
     better, of course.

This doesn't help the existing drivers, but may make life easier for
future drivers (and in cases where the locking just gets dreadful
enough that a conversion is the lesser evil ;-)

Example:

static struct tasklet_table tbl;

... foo_init(...)
{
        tbl.nr = 3;
        tbl.task[0] = &foo_work_task;           /* driver's own work tasklet */
        tbl.task[1] = TIMER_TASKLET(timer);     /* "well-known" timer tasklet */
        tbl.task[2] = well_known_tx_tasklet;    /* "well-known" tx tasklet */
}

... foo_timer(...)  /* run by TIMER_TASKLET(timer) */
{
        ...
        if (tasklet_begin_mutex(&tbl, 1)) return;  /* 1 == our slot in tbl.task[] */
        ...
        tasklet_end_mutex(&tbl);
        ...
}

... foo_int(...)
{
        /* enqueue work */
        tasklet_schedule(&foo_work_task);       /* defer the real work */
        ...
}

etc.
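
To make the "run >= once" point from (*) above concrete, the work
tasklet itself would be something like this (again, the foo_* names
are only for illustration):

static void foo_work(unsigned long data)       /* body of foo_work_task */
{
        struct foo_private *fp = (struct foo_private *) data;

        if (tasklet_begin_mutex(&tbl, 0)) return;
                /* somebody else is in the driver - they will re-schedule
                   us when they leave, so an extra pass costs little */
        while (foo_there_is_work(fp))
                foo_do_work(fp);
        tasklet_end_mutex(&tbl);
}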

The explicit initialization could of course be some set of functions
that add all the tasklets related to a given service, e.g. if tx may
be invoked by multiple tasklets of the "core", all of them would have
to be added. (Then we probably should have a calling_tasklet argument,
too.)
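
For example, the whole thing could end up looking roughly like this
(pure sketch - none of these helpers exist yet, the names are made up):

... foo_init(...)
{
        tasklet_table_init(&tbl);
        tasklet_table_add(&tbl, &foo_work_task);
        tasklet_table_add_timer(&tbl, &dev->timer);
        tasklet_table_add_tx(&tbl, dev);        /* adds _all_ core tasklets
                                                   that may call our tx path */
}

and the driver entry points would then pass the calling tasklet (or
the index returned by the add function) as the second argument of
tasklet_begin_mutex.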

Example code for the mutex (untested - a quick and probably buggy
brain dump) is below. Adding the usual performance boosters, a la
tasklet_begin_mutex_spin, __tasklet_end_mutex (no re-scheduling),
etc., would be trivial; a rough sketch of those follows the mutex
code as well.

Opinions?

- Werner

---------------------------------- cut here -----------------------------------

#include <linux/interrupt.h>    /* tasklet_struct, tasklet_schedule */
#include <asm/bitops.h>         /* set_bit and friends */
#include <asm/system.h>         /* mb() */


struct tasklet_table {
        unsigned long busy;             /* bit 0: mutex is held */
        unsigned long need_to_run;      /* one bit per entry in task[] */
        int nr;
        struct tasklet_struct *task[sizeof(unsigned long)*8];
                /* or use dynamic allocation */
};


int tasklet_begin_mutex(struct tasklet_table *tbl, int me)
{
        /* record that we want to run _before_ testing the lock, so the
           current owner cannot miss us when it exits */
        set_bit(me, &tbl->need_to_run);
        mb();
        if (test_and_set_bit(0, &tbl->busy)) return -EBUSY;
        /* got the mutex - no need to be re-scheduled anymore */
        clear_bit(me, &tbl->need_to_run);
        return 0;
}


void tasklet_end_mutex(struct tasklet_table *tbl)
{
        int i;

        clear_bit(0, &tbl->busy);
        mb();   /* make the release visible before we look at need_to_run,
                   so we can't miss a tasklet that just failed to get in */
        for (i = 0; i < tbl->nr; i++)
                if (test_bit(i, &tbl->need_to_run))
                        tasklet_schedule(tbl->task[i]);
}
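

/*
 * Rough sketch of the "performance boosters" mentioned above (just as
 * untested as the rest): tasklet_begin_mutex_spin busy-waits instead of
 * giving up, for callers that cannot easily be retried, and
 * __tasklet_end_mutex releases the mutex without re-scheduling anybody,
 * for callers that know nothing can be waiting (or that will call the
 * full tasklet_end_mutex shortly anyway).
 */

void tasklet_begin_mutex_spin(struct tasklet_table *tbl, int me)
{
        while (test_and_set_bit(0, &tbl->busy))
                while (test_bit(0, &tbl->busy));        /* spin */
        clear_bit(me, &tbl->need_to_run);
}


void __tasklet_end_mutex(struct tasklet_table *tbl)
{
        clear_bit(0, &tbl->busy);       /* no re-scheduling */
}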

-- 
  _________________________________________________________________________
 / Werner Almesberger, ICA, EPFL, CH       werner.almesberger@xxxxxxxxxxx /
/_IN_N_032__Tel_+41_21_693_6621__Fax_+41_21_693_6610_____________________/
