Subject: Re: Interrupts as threads
To: Andrew Doran <ad@netbsd.org>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 12/02/2006 10:25:42
On Fri, Dec 01, 2006 at 11:31:19PM +0000, Andrew Doran wrote:
> 
> o There is no easy solution to the lock order problem with the kernel_lock
>   when using spin locks.

My brief perusal of the biglock code didn't make it clear how it works when
an IRQ comes in while the 2nd cpu holds the biglock...

> o Using spin locks we will have to keep the SPL above IPL_NONE for longer
>   than before, or accept (in non-trivial cases) the undesirable cost of
>   having both interrupt and process context locks around some objects.

SMP code does tend to need mutex protection for longer than the non-SMP
code required IPL protection anyway...

> o Raising and lowering the SPL is expensive, especially on machines that
>   need to talk with the hardware on SPL operation. The spin lock path also
>   has more test+branch pairs / conditional moves and memory references
>   involved than process locks. For a process context lock, the minimum we
>   can get away with on entry and exit is one test+branch and two cache line
>   references.

Can we do deferred SPL changes on all archs?
ie don't frob the hardware, but assume it won't raise an IRQ. If an IRQ
does occur, mask it then and return, re-enabling it on the splx call.
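Roughly what I have in mind, as a user-space sketch. Everything here is
invented for illustration (splraise/splx/irq_arrives are stand-ins, a
single IRQ line, 0 == IPL_NONE); a real implementation would keep this
state per-CPU and actually program the interrupt controller where noted:

```c
#include <assert.h>

static int cur_spl;            /* current software IPL; 0 == IPL_NONE */
static int pending_level = -1; /* IRQ deferred while masked, or -1 */
static int handler_runs;       /* how often the real handler ran */

static void real_handler(void) { handler_runs++; }

static int splraise(int level)
{
    int old = cur_spl;
    if (level > cur_spl)
        cur_spl = level;       /* software only: hardware untouched */
    return old;
}

/* Low-level stub, called only if the line actually fires. */
static void irq_arrives(int level)
{
    if (level <= cur_spl) {    /* logically masked: defer it */
        pending_level = level; /* mask this line in hardware *now* */
        return;
    }
    real_handler();
}

static void splx(int level)
{
    cur_spl = level;
    if (pending_level >= 0 && pending_level > cur_spl) {
        pending_level = -1;    /* unmask the line in hardware */
        real_handler();        /* replay the deferred interrupt */
    }
}
```

The win is that the common case (raise/lower with no interrupt arriving)
never touches the hardware at all; the hardware mask is only frobbed on
the rare path where an IRQ actually fires while logically blocked.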

> o Every spin lock / unlock pair denotes a critical section where threads
>   running in the kernel can not be preempted. That's not currently an issue
>   but if we move to support real time threads it could become one; I'm not
>   sure.

Once we have a proper SMP kernel, making processes pre-emptable in the kernel
(while not holding a lock) becomes ~free - whereas the non-SMP kernel code
will make assumptions that prevent pre-emption.
However you probably want to be able to disable pre-emption without
holding a mutex.
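ie something like a per-CPU nesting counter, separate from any lock. A
sketch (names invented; the counter and flag would be per-CPU fields and
the "context switch" a real call into the scheduler):

```c
#include <assert.h>
#include <stdbool.h>

static int preempt_count;    /* per-CPU nesting depth in a real kernel */
static bool resched_wanted;  /* set by the scheduler/timer interrupt */
static int switches;         /* pre-emptions that actually happened */

static void preempt_disable(void) { preempt_count++; }

static void maybe_preempt(void)  /* called at pre-emption points */
{
    if (resched_wanted && preempt_count == 0) {
        resched_wanted = false;
        switches++;              /* stand-in for a context switch */
    }
}

static void preempt_enable(void)
{
    if (--preempt_count == 0)
        maybe_preempt();         /* honour a deferred reschedule */
}
```

A reschedule requested while the counter is non-zero is simply remembered
and acted on when the outermost preempt_enable() drops it back to zero.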

> o We are doing too much work from interrupt context.

Possibly true...  but the cpu cycles have to be spent at some point, and
deferring the work to a different context just adds time.

> The cleanest way to deal with these issues that I can see is to use
> lightweight threads to handle interrupts

That just makes it worse! Every hardware ISR would have to disable the
IRQ itself, then the 'low level' ISR would need to re-enable it.
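To show what I mean, here is the extra masking dance every threaded
interrupt forces, as a sequential user-space sketch (hw_stub/isr_thread
are invented names; in reality the stub runs at interrupt time and the
thread runs later, via the scheduler):

```c
#include <assert.h>
#include <stdbool.h>

static bool line_masked;      /* is the IRQ line masked in hardware? */
static bool thread_runnable;  /* has the handler thread been woken? */
static int serviced;          /* interrupts fully handled so far */

static void hw_stub(void)     /* runs at interrupt time */
{
    line_masked = true;       /* can't ack the device here... */
    thread_runnable = true;   /* ...so park the IRQ and wake the thread */
}

static void isr_thread(void)  /* runs later, from the scheduler */
{
    if (!thread_runnable)
        return;
    thread_runnable = false;
    serviced++;               /* talk to the device and ack it */
    line_masked = false;      /* only now re-enable the line */
}
```

So between hw_stub() and isr_thread() the line sits masked - an extra
pair of hardware accesses per interrupt that the traditional model
doesn't pay.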

> My initial thought is to have one
> thread per level, per CPU. These would be able to preempt already running
> threads, and would hold preempted threads in-situ until the interrupt thread
> returns or switches away. In most cases, SPL operations would be replaced by
> locks.

You'd have to look very closely at priority inversion problems....
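The hazard in miniature, as an invented sketch (priorities and the lock
structure are illustrative only): a high-priority interrupt thread that
needs a lock held by a low-priority one effectively runs at the owner's
priority unless the owner gets boosted:

```c
#include <assert.h>
#include <stdbool.h>

struct lock { int owner_prio; bool held; };

static int effective_prio(const struct lock *l, int waiter_prio)
{
    /* Without inheritance the waiter makes progress only at the
     * owner's pace, so any middle-priority thread starves it. */
    if (l->held && l->owner_prio < waiter_prio)
        return l->owner_prio;  /* inversion: high waits on low */
    return waiter_prio;
}

static void inherit(struct lock *l, int waiter_prio)
{
    if (l->held && l->owner_prio < waiter_prio)
        l->owner_prio = waiter_prio;   /* boost the lock owner */
}
```

Some form of priority inheritance (or careful lock/priority assignment)
would be needed for the per-level interrupt threads.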

> Blocking would no longer be prohibited, but strongly discouraged - so
> doing something like pool_get(foo, PR_WAITOK) should likely trigger an
> assertion.

If you allow blocking, then people will use it because it 'appears to work'.
Then you find that one of your ISR threads is busy - not a problem until
several interrupt routines/drivers block at the same time and you run out
of threads to do the wakeup.

> Assuming you subscribe to handling interrupts with threads, it raises the
> question: where to draw the line between threaded and 'traditional'. It
> certainly makes sense to run soft interrupts this way, and I would draw the
> line at higher priority ISRs like network, audio, serial and clock.

It may make sense to use kernel threads (running through the scheduler)
for some driver activity, and quite probably some of the code scheduled
via 'softint' falls into this category.  But IMHO this really needs to be
code that isn't really related to 'interrupt processing'.

	David

-- 
David Laight: david@l8s.co.uk