
Re: tx_timeout and timer serialisation

To: andrewm@xxxxxxxxxx (Andrew Morton)
Subject: Re: tx_timeout and timer serialisation
From: kuznet@xxxxxxxxxxxxx
Date: Tue, 2 May 2000 19:49:00 +0400 (MSK DST)
Cc: netdev@xxxxxxxxxxx
In-reply-to: <390EE5BB.2AF2F1CD@xxxxxxxxxx> from "Andrew Morton" at May 3, 0 00:27:07 am
Sender: owner-netdev@xxxxxxxxxxx

> I haven't studied it closely; it's a _very_ differently structured
> driver from the norm.  I would guess that it has been ported from
> another OS.  They ifdef all the spinlocks out of existence if !__SMP__. 
> Interesting...

Indeed 8) Where can I find it?

> But eepro100 uses del_timer_sync() in tx_timeout() (not under any
> spinlock)

Do not forget, it is under dev->xmit_lock!

If the media timer grabs it too -> deadlock.
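To make the deadlock concrete, a sketch of the two paths (illustrative names, not the literal eepro100 code):

	/* CPU 0: tx watchdog; caller already holds dev->xmit_lock */
	tx_timeout(dev)
		del_timer_sync(&lp->timer);	/* spins until the handler exits */

	/* CPU 1: media timer handler, already running */
	media_timer(data)
		spin_lock(&dev->xmit_lock);	/* spins forever: CPU 0 holds it */

	/* Neither CPU can proceed: each waits for what the other holds. */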

> never be cleared, and del_timer_sync() will wedge in
> timer_synchronize().


> It would be rather nice to document these requirements in
> kernel/timer.c!

Well, if this function were _correct_... 8)

Essentially, it is a replacement for the combination:

	del_timer(&timer);
	synchronize_bh();

used in 2.2. I.e. delete and wait for all pending BHs to complete.

del_timer_sync() deletes and waits only for _this_ timer to complete.
It can be used from any context, provided the user is sure that there
are no deadlocks.
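I.e. the safety condition is purely local (a sketch; lp->lock and the
handler are illustrative):

	/* OK: no lock held here that the handler also takes */
	del_timer_sync(&lp->timer);

	/* NOT OK, if the handler does spin_lock(&lp->lock): */
	spin_lock(&lp->lock);
	del_timer_sync(&lp->timer);	/* may spin on the handler forever */
	spin_unlock(&lp->lock);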

Alas, it has a fatal bug. Namely, the timer handler _code_ can be released
in between timer_exit() and the return from the handler. It is utterly
unlikely, but the bug is fatal. 8) I do not know how to repair
this without refcounts.
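Roughly the window (a sketch of the mechanism, not the exact
kernel/timer.c code):

	static void handler(unsigned long data)	/* module text */
	{
		...
		/* "running" mark cleared here (timer_exit) */
	}	/* <- the return sequence is still module text.
		 *    del_timer_sync() on another CPU may already have
		 *    returned, the module may be unloaded, and these
		 *    last instructions execute in freed memory. */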

> Many drivers call del_timer() in their close() methods. It is
> conceivable (but rather unlikely) that the timer routine could get
> confused trying to manage the media interface of a device which has been
> fully or partially closed.

It is not so unlikely, actually. If someone reopens the device
persistently, it will be hit very soon. At least, the first version
of softnetted TCP died within an hour on an almost idle machine due to
a similar race condition. 8)
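The pattern in question (illustrative names; "lp" is the driver's
private state):

	static int dev_close(struct net_device *dev)
	{
		del_timer(&lp->timer);	/* does NOT wait for a running handler */
		/* ... free or reinitialize lp state ... */
	}

	/* If the handler was already entered on another CPU, it keeps
	 * touching lp after close(). A tight open()/close() loop from
	 * user space hits the window quickly. */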

> Is this what you mean?


> > That's why open()/close()/ioctl()
> > are called by top level without any locks held, though there are
> > not so much of drivers really sleeping there. Lots of operations
> > like to be done in normal process context. [ Including cli(), which
> > does not synchronize to timers, when called from under spinlock. ]
> I don't understand your point about cli().  Do you meant that timer
> handlers can run on other CPUs (presumably via ret_from_syscall()) while
> the global IRQ lock is held???

I mean that if you make cli() inside hard_start_xmit(), it does not
synchronize to BHs. And if at this time some timer runs on another CPU,
it runs in parallel with the cli()ed code. See?

cli() synchronizes to irqs only if it is not inside an IRQ handler,
and it synchronizes to BHs only if it is not inside a BH.
Otherwise, you would get a deadlock without any possibility to avoid it,
because these locks are global. This ingenious trick worked superbly
in 2.2; callers never ever noticed that cli() is not so mighty.
They would notice it, e.g., if they tried to protect against irq 3 from
the handler for irq 4, and fortunately this was never necessary.
Well, now we have the same situation with networking, and it is common.
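A concrete instance of the trap (a sketch of the SMP semantics
described above):

	/* CPU 0: inside hard_start_xmit(), i.e. inside net BH context */
	cli();	/* does NOT synchronize to BHs here: we ARE in a BH,
		 * so waiting on the global BH lock would self-deadlock */
	/* ... section that is NOT actually protected from timers ... */
	sti();

	/* CPU 1: the timer BH fires and runs a media timer in parallel
	 * with the "protected" section above. */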

