Subject: Re: i386 isa interrupt latency
To: Charles Hannum <>
From: Greg A. Woods <>
List: port-i386
Date: 07/06/1995 14:47:33
I've some general comments about tty device drivers that might fit into
this discussion.

I must admit that I've never really looked at the actual code for the
NetBSD drivers -- I've been way too busy on other projects.

However, in the past I've implemented tty drivers for a number of
different operating systems, including Xenix and SysVr3.2 on i286 and
i386 platforms.

[ On Thu, July  6, 1995 at 12:32:08 (-0400), Charles Hannum wrote: ]
> Subject: Re: i386 isa interrupt latency
> 1) Add an extra layer of buffering, to shorten the path inside the
> interrupt handler.  This has been done.

Provided that this doesn't entail having to actually copy any bytes
about, this is a good idea, and I'm happy to see it finally done.

> 2) Give tty interrupts a higher priority.  I was planning to do this
> soon.  You could go further and (almost) never allow the lower half of
> the interrupt handler to be blocked.  This would give you close to the
> minimum possible latency.

With a tty driver, if you don't want to lose characters
(i.e. interrupts), this is important.

However, it's not always necessary to degrade the performance of a
general purpose O/S just to do some complex I/O operation that could
wait to run outside of the interrupt context.

In an ideal hardware design the UART would automatically control the
hardware flow control lines whenever its hardware buffer fills to the
high-water mark, and empties to the low-water mark.

However, it seems that popular UARTs like the NS16550 don't do this, so
it has to be done in the interrupt routine.  What this means is that in
order to "correctly" implement hardware flow control, and thus implement
a design that will *not* drop characters, the interrupt service routine
must use hardware flow control to stop the flow of characters when it
"thinks" it won't be around in time for the next high-water mark event.

Indeed using hardware flow control in such a manner will affect
throughput, but isn't that the idea?  I.e. force the throughput down to
a level where the CPU can keep up.

In a trivial dumb async driver I wrote for Xenix once, I just
(logically) dropped the CTS line at the beginning of every asyintr()
call and set a wakeup for the asyread() routine.  Then in the asyread()
routine I read the byte from the UART and raised the CTS line again.
Throughput was reduced to exactly the rate the CPU could handle, but
there were *never* any dropped characters with devices that also
correctly implemented RTS/CTS flow control.

BTW, as far as I've been able to tell, this is also the way that most
"smart" I/O boards, such as the EPORTS card in an AT&T 3B2, work.  Their
aggregate throughput is reduced to the limit of the on-board CPU as more
ports start receiving characters.

							Greg A. Woods

+1 416 443-1734			VE3TCP			robohack!woods
Planix, Inc. <>; Secrets Of The Weird <>