Subject: Re: Machine-independent device drivers
To: Charles M. Hannum <mycroft@ai.mit.edu>
From: Terry Lambert <terry@cs.weber.edu>
List: tech-kern
Date: 04/13/1995 22:20:56
Well, I assume since I got a copy of this, I'm either on the list
or it was specifically mailed to me for comment based on the recent
disagreement about how to handle the PCI code that I had with
Garrett Wollman on the FreeBSD lists.

Either way, for what it's worth, here are the thoughts I've been
saving up on this topic.  8-).


[ ... split bus specific code from device specific code ... ]

> * Different machines map device registers and memory differently.

This is a job for wrappers, as you suggest (less preferable), or for
inline functions (more preferable).  Of course, a binary that must
boot on multiple bus structures, or which must support multiple bus
structures concurrently (ISA/EISA, EISA/PCI, etc.), would need
wrapping in any case.
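
As a rough sketch of the inline approach (the names and layout here
are hypothetical, not from any existing tree): a kernel built for a
single bus could boil the access down to one instruction, while a
multi-bus kernel would eat the test or fall back to wrappers.

    /*
     * Hypothetical sketch only: per-bus register access behind an inline.
     */
    struct bus_handle {
        volatile unsigned char *bh_mem;    /* memory-mapped base, or NULL */
        unsigned short          bh_iobase; /* I/O port base (ISA/EISA)    */
    };

    extern unsigned char inb(unsigned short port);  /* MD port I/O, assumed */

    static __inline unsigned char
    bus_read_1(struct bus_handle *bh, int off)
    {
        if (bh->bh_mem != NULL)
            return (bh->bh_mem[off]);        /* memory-mapped access */
        return (inb(bh->bh_iobase + off));   /* port I/O access */
    }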

> * In some cases, multiple mappings are possible on the same machine.

I think this is largely irrelevant; the correct behaviour in this
case is "pick one".  The exception would be for devices that can be
configured to multiple settings.  This is the travelling salesman
problem in a different guise, if what is intended is to use the
least common denominator to fix the mappings for as much relocatable
hardware as possible.

> * In some weirder cases, you might even have one device mapped in I/O
> space, and another one of the same type mapped in memory space, on the
> same machine.

I can't see this, unless you are defining a device by the driver it
uses rather than the actual hardware itself.  If that is the case,
it's quite possible that a single driver might have support for
several types of physical devices of the same class with different
operating characteristics.

I think that in this case, it would clean up the model a lot to have
the concept of controller and pseudo-controller code... this is the
same paradigm currently used by the SCSI devices.

A specific example might be a general device class of "serial" with
multiple interface instances.

In other words, split up the device model for that type of device so
that the statement about the type being the same when the access is
not is no longer true.

> One answer is to make all register accesses to the device into calls
> through function pointers, and implement simple conversion functions
> in a machine-dependent module.  In some cases (e.g. the LANCE), this
> may even be sufficient; most of the access after initialization is
> through memory.

I think in large part this is what will have to be done.
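
As a minimal sketch of that (hypothetical names, not a proposal for
an exact interface): the machine-dependent attach code fills in an
ops vector, and the driver indirects every register access through
it.

    /*
     * Hypothetical sketch: all register access via function pointers
     * supplied by the machine- (and bus-) dependent attach code.
     */
    struct bus_ops {
        unsigned char (*bo_read_1)(void *cookie, int reg);
        void          (*bo_write_1)(void *cookie, int reg, unsigned char val);
    };

    struct dev_softc {
        const struct bus_ops *sc_ops;     /* filled in at attach time */
        void                 *sc_cookie;  /* MD mapping state */
    };

    #define DEV_READ(sc, r)     ((sc)->sc_ops->bo_read_1((sc)->sc_cookie, (r)))
    #define DEV_WRITE(sc, r, v) \
        ((sc)->sc_ops->bo_write_1((sc)->sc_cookie, (r), (v)))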

> However, if you look at the cases of the DP8390 and the NS16550, this
> approach clearly becomes intractable.  The overhead of the function
> calls would simply kill performance.

This depends.  You can save the function call overhead by pushing it
up a layer.  This would mean inlining (and probably duplicating) what
small glue code there was that distinguished that driver from any
other driver in the first place.  This is basically what the stackable
file system code will do for NULL op layers in intervening layers...
for instance, for a compression layer on top of a UFS, the dirops
"bleed through" and don't cause additional call overhead if
implemented correctly.

The biggie in this type of implementation is how to "take a device
away" after an existing attach has occurred.  I think I can make a
good case for this as a real problem.

> Another idea I just had is to implement a sort of `millicode' using
> function pointers.  If you look at the DP8390 and NS16550 drivers, for
> example, it's fairly clear that the register accesses are clustered
> in small areas of high concentration.  Those areas could each be
> separated out into machine- (and bus- and mapping-of-that-bus-)
> specific routines.

Yes, this is almost the picture I was thinking of in the statement above.
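
To make the picture concrete with a hedged sketch (the routine names
are invented for illustration): each cluster of register accesses
becomes one machine- and bus-specific routine, so the indirection
cost is paid once per cluster instead of once per access.

    /*
     * Hypothetical 'millicode' sketch for an NS16550-style driver; the
     * MD/bus layer supplies one routine per access cluster.
     */
    struct com_millicode {
        /* program the divisor latch and line parameters in one shot */
        void (*cm_set_line)(void *cookie, int divisor, int lcr);
        /* drain the receive FIFO into buf, return number of bytes read */
        int  (*cm_rxfill)(void *cookie, unsigned char *buf, int len);
        /* stuff up to len bytes into the transmit FIFO, return count */
        int  (*cm_txload)(void *cookie, const unsigned char *buf, int len);
    };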


> Does anyone have comments about this, or should I `just do it', as a
> proof of concept?


I think it's important to include the idea of hot registration and
deregistration up front, for things like PCMCIA cards.

I think it's also important to consider the concept of install-time
destructive probing.
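
The shape I have in mind is roughly the following (a sketch only;
the names are not a proposal): the bus owns attach and detach entry
points, so "taking a device away" after an attach is just the bus
driving the detach path instead of the device driver.

    /*
     * Hypothetical sketch: hot attach/detach entry points, driven by the
     * bus (e.g. a PCMCIA socket driver) on insertion and removal.
     */
    struct hotplug_ops {
        int (*hp_attach)(void *bus_cookie);  /* card inserted / powered up */
        int (*hp_detach)(void *bus_cookie);  /* card pulled; revoke all I/O */
    };

    extern int bus_register(const struct hotplug_ops *ops, void *bus_cookie);
    extern int bus_deregister(void *bus_cookie);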

I think each bus interface can be considered in terms of a bus attach;
for the devices on the motherboard, and for class drivers (like SCSI),
the attach can be further partitioned.


In Windows 95, both hot plugging and one-time destructive probing
are covered.  During the install, each possible device for which a
destructive probe exists is scanned for, with logging of the entry
and exit of each probe state, and a record of the fact that the
process has started.

The major work of probing is done as part of the install process,
with a note to the user that this is what is happening, a "percent
complete" indicator bar, and a statement to the effect of "if this
takes too long, reset your machine".

If a reset is called for because of a destructive probe, then the
software knows what has and hasn't probed prior to that point, and
it can skip the failing probe and go on to the next item.
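
The bookkeeping needed for that is small; something like the
following (pure illustration, with an invented log file and format)
is enough to skip a probe that hung the machine on the previous
attempt.

    /*
     * Hypothetical sketch: decide whether a destructive probe hung the
     * machine last time, based on a simple "start"/"done" log.
     */
    #include <stdio.h>
    #include <string.h>

    #define PROBELOG "/var/db/probe.log"    /* invented path */

    int
    probe_hung_last_time(const char *name)
    {
        char line[128];
        int started = 0, done = 0;
        FILE *fp = fopen(PROBELOG, "r");

        if (fp == NULL)
            return (0);                 /* no log: nothing has hung yet */
        while (fgets(line, sizeof(line), fp) != NULL) {
            if (strncmp(line, "start ", 6) == 0 &&
                strncmp(line + 6, name, strlen(name)) == 0)
                started = 1;
            else if (strncmp(line, "done ", 5) == 0 &&
                strncmp(line + 5, name, strlen(name)) == 0)
                done = 1;
        }
        fclose(fp);
        return (started && !done);      /* started but never finished */
    }

The install loop writes the "start" line, syncs it to disk, runs the
probe, and only then writes the "done" line; anything left started
but not done after a reset gets skipped.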

By doing the destructive probes at the start, the kernel can be much
less generic and much more statically configured (using a data-driven
mechanism to cause the loading and configuration of particular drivers),
with a great advantage in startup time.


The plug-and-play is handled by a driver called "the volume tracking
driver", which is in reality a registration callback facility that
is itself called into by buses which support unplug -- typically this
will be PCMCIA and external devices (SCSI/parallel port/tape) which
can be hot plugged, or which can be independently powered up and down
separately from the rest of the system.

The volume tracking driver is therefore responsible for things like
modem and network PCMCIA cards and so forth as well, although these
are not strictly speaking volumes (Win95 uses the term "resources").
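
An analogous facility for us would be little more than a callback
list that the unplug-capable buses call into on insert, remove, and
power events; a sketch (the names are mine, not Win95's):

    /*
     * Hypothetical sketch of a media/resource tracking facility:
     * consumers register a watcher, buses report events.
     */
    enum media_event { MEDIA_ARRIVED, MEDIA_DEPARTED, MEDIA_POWER_CHANGE };

    struct media_watcher {
        void (*mw_notify)(void *arg, enum media_event ev, void *ident);
        void *mw_arg;
        struct media_watcher *mw_next;   /* kept on an internal list */
    };

    extern void media_watch(struct media_watcher *mw);          /* consumer */
    extern void media_event(enum media_event ev, void *ident);  /* bus side */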


In terms of bus interface, the bus attach mechanism must be separated
from the machine-specific access methods.  A good example of why this
is so is obtained by looking at the EISA and EISA/PCI buses on DEC
Alpha motherboards.  Both IBM and Apple have further promised that
there will be PPC machines with at least PCI interfaces (and probably
either ISA or EISA at the same time, if they bow to market pressure).

The largest example in the PC hardware market is MCA, which is
independent of the fact that the machines using it are still Intel
machines (or RS/6000 or PPC).

Clearly, the concept of a driver for a particular chip must be
separated from the bus structure by which it is attached to the
machine.  Thus there must be separate per-bus drivers.

In addition to the separate per-bus drivers, there must be interface
abstraction drivers.  For the SCSI subsystem specifically, this
already exists: a common bus-independent API which the OS can call
into to perform SCSI disk, tape, and (if the controller supports it)
target mode operations, as well as media jukebox "changers" and
CDROM audio controls.

Probably the same thing is needed for the serial interfaces as well;
there is some work in this direction regarding canonical processing
modules and other artifacts of the termios system, but this is
largely ungeneralized for things like "smart cards", which download
major portions of the canonical processing and flow control mechanisms
into the cards themselves using a card-specific driver (bipartite
drivers for in-box cards and "multi-drop" drivers for fan-out units
on the other end of RS422 interfaces).
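
One way to generalize that (a sketch, under the assumption that
offload is all-or-nothing per feature): give each interface a
capability mask, and have the generic tty code skip the software
canonical processing and flow control when the card has claimed
them.

    /*
     * Hypothetical sketch: per-interface offload of termios work.
     */
    #define SER_OFF_CANON 0x01   /* card does canonical (line) editing */
    #define SER_OFF_FLOW  0x02   /* card does flow control */

    struct serial_if {
        int   si_offload;   /* SER_OFF_* mask claimed by the card driver */
        int (*si_read)(void *cookie, char *buf, int len);
        int (*si_write)(void *cookie, const char *buf, int len);
        int (*si_param)(void *cookie, long ispeed, long ospeed, int flags);
    };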

I would further argue for abstracting into service support routines
the concepts of bus-mastering DMA, device register manipulation, and
similar things for which setup time is not crucial, since once
enacted, the interfaces do not require ongoing "poking" in those
areas (a good example is the handling of the GPL'ed "download code"
for the Adaptec AIC 7xxx SCSI sequencer chips, or, alternatively,
a potentially non-disclosure based binary-only Adaptec driver).
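
A sketch of what such a service routine's interface might look like
(hypothetical and deliberately minimal): the driver hands over a
kernel-virtual buffer and gets back the physical segments to program
into the bus master, without caring how the translation was done on
that particular machine or bus.

    /*
     * Hypothetical sketch: bus-mastering DMA setup as a support service.
     */
    #include <stddef.h>

    struct dma_seg {
        unsigned long ds_addr;   /* bus/physical address of the segment */
        unsigned long ds_len;    /* length of the segment in bytes */
    };

    /*
     * Translate a kernel-virtual buffer into at most maxsegs segments
     * suitable for the bus master; returns segments used, or -1.
     */
    extern int dma_map_buffer(void *bus_cookie, void *buf, size_t len,
        struct dma_seg *segs, int maxsegs);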


One proposition that hasn't been mentioned is the need to expand
the block I/O subsystem to allow callbacks as a result of media
that has "disappeared", so that the OS may request the user
reinstall media for which outstanding dirty buffers exist.  This
would primarily be Magneto-Optical and SyQuest/Bernoulli type
devices, although externally powered DASD and hot-pluggable PCMCIA
based devices (or SCSI interfaces, as the case may be) would also
fall into this category, as long as the file systems themselves
were not, like FAT, guaranteed to have no cached write data
outstanding when they went offline.
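
In interface terms this is one more callback pair the block I/O code
would have to grow; a sketch (names invented): the driver reports
the device gone along with the count of dirty buffers still held,
and is told later whether the user put the media back or the buffers
should be thrown away.

    /*
     * Hypothetical sketch: hooks for removable media that disappears
     * while dirty buffers are still outstanding.
     */
    struct media_gone_ops {
        /* driver -> block layer: media left with ndirty dirty buffers */
        int (*mg_gone)(void *devcookie, int ndirty);
        /* block layer -> driver: user reinstalled it (or gave up) */
        int (*mg_restored)(void *devcookie, int success);
    };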


Finally, I'd like to advocate a "fallback" bus type for PC class
hardware, one which would use VM86() calls in order to implement
its drivers.  In this fashion, while it may not be the speediest,
ALL HARDWARE THAT WORKS UNDER DOS WOULD WORK.  There are some
additional issues related to this type of driver, namely the
preloading of DOS drivers prior to the protected mode OS being run
so that the drivers are accessible (MSCDEX and ASPI are notable
reasons, for CDROM install), but they can be dealt with at the same
time as the "Boot the OS from DOS" issue is dealt with.  I don't
expect network drivers to fall into this category, since it is
unlikely that a VM86() driver could deal with the issue of calling
the INT 28 DOS idle interrupt frequently enough to allow them to
run as expected.  If network drivers are considered to be an issue,
then the way to resolve it is by loading the Novell NetWare server
ODI drivers and using them (like UnixWare 2.0 does) instead.


					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.