Subject: Re: "esp" driver reorg proposal
To: None <tech-kern@NetBSD.ORG>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 01/28/1997 17:14:34
 "Chris G. Demetriou" <cgd@CS.cmu.edu> writes:

>> The only one I can think of is the case where you have two different
>> types of busses in the same computer. By using a function pointer, you
>> test once, when you set the pointer's value. With a header-included
>> solution, you have to test each time.
>> 
>> On the flip side, there could well be times (when you have only one
>> bus type) where the header solution would be better, as you do all
>> the resolution at compile time. You don't need to use a pointer to
>> tell you what to do as you already know. (so actually this is a vote
>> against function pointers :-)

>... of course, compile time checks are useless and broken if you have
>proper loadable drivers (i.e. loadable drivers that can use actual
>hardware 8-).

And, to flog a dead horse,  autoconfig machinery to support said
proper loadable drivers. :-)


>and it's not just 'two different types of busses', it's 'two different
>types of foo', where foo could be chip accesses, DMA setup, etc.

Not that my opinion means much, but I agree vehemently with Chris.

The `bus' on which a device attaches is, in several ways, *irrelevant*
to an MI driver.  A bus is a connector standard and a set of signals
(a bus protocol) for doing I/O.  What's really at issue for an MI
driver is managing the *entire* communication path between a given
device and memory -- or the CPU, for programmed I/O devices.

Here's a concrete example: consider a unibus device like a DZ-11.  The
device could actually be on a Unibus or on a Q-bus: one driver should
handle both.  Let's assume a Unibus.  I can think, *immediately*, of
more than *ten* I/O topologies that could connect a CPU to a Unibus
device. (details on request.)   The issue here is different
implementations of a bus -- different host adaptors -- combined with
cascaded bus adaptors.
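
In code terms, none of that topology should show through to the dz
driver itself: the driver gets an opaque tag and handle at attach
time and uses only those.  A rough sketch, using the bus_space names
from the current <bus.h> drafts (the softc layout and the register
offset are only illustrative):

    #include <sys/device.h>
    #include <machine/bus.h>

    struct dz_softc {
            struct device           sc_dev;
            bus_space_tag_t         sc_iot; /* which bus implementation */
            bus_space_handle_t      sc_ioh; /* where the registers got mapped */
    };

    /*
     * Read a (hypothetical) 8-bit status register at offset 0.
     * Nothing here knows whether the chip sits on a Unibus, a Q-bus,
     * or something reached through a cascade of bus adaptors; the
     * tag and handle carry all of that.
     */
    static u_int8_t
    dz_get_status(struct dz_softc *sc)
    {
            return (bus_space_read_1(sc->sc_iot, sc->sc_ioh, 0));
    }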

My understanding of <bus.h> is that it's meant to cope with cases like
the above.    Perhaps most driver writers think of a `bus' as
specifying not just a connector and a protocol for talking to
devices on the bus, but also the host-side bus adaptor.
This is a good model of how, say, ISA buses usually work, where the
`bus spec' more-or-less includes  a specific interrupt controller,
DMA chip, etc.

It's not a good model of PCI busses: there are several different
chipsets for different host CPUs, and all of them implement a ``PCI
bus''.  Those PCI busses are indistinguishable from the perspective of
a given PCI card.  Driver operations that affect card-internal state,
or that request bus resources *managed* *by* *the* *bus* *protocol*,
will work with any implementation of the bus.
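
For instance, something like the following works no matter which
chipset implements the PCI bus, because config-space access is defined
by the PCI protocol itself.  pci_conf_read()/pci_conf_write() and the
chipset tag come straight out of the existing MI PCI code; the little
function wrapping them is of course invented:

    #include <dev/pci/pcireg.h>
    #include <dev/pci/pcivar.h>

    /*
     * Turn on bus mastering for a PCI device.  Config space is part
     * of the PCI protocol, so this is the same whether the chipset is
     * an i386 one set up by the BIOS or one of the Alpha flavours.
     */
    void
    foo_enable_busmaster(pci_chipset_tag_t pc, pcitag_t tag)
    {
            pcireg_t csr;

            csr = pci_conf_read(pc, tag, PCI_COMMAND_STATUS_REG);
            csr |= PCI_COMMAND_MASTER_ENABLE;
            pci_conf_write(pc, tag, PCI_COMMAND_STATUS_REG, csr);
    }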

Equally clearly, driver operations that need to deal directly with the
host-side implementation of a given bus will *not* work with every
implementation of that bus.  Such host-side operations need to be
dispatched to code that does the right thing for the host-side bus
adaptor (chipset, or LSI board, or whatever) that's actually present.
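
The obvious way to arrange that dispatch is for the tag itself to
carry the host-side methods, with each chipset or adaptor filling in
its own, and for the MI-visible accessors to indirect through it.  The
structure below is purely hypothetical -- it shows the shape of the
thing, not what any particular port's <machine/bus.h> actually looks
like:

    /*
     * Hypothetical machine-dependent tag: one per host-side bus
     * implementation (chipset, LSI board, or whatever).
     */
    struct my_bus_space {
            void    *bs_cookie;     /* implementation-private data */
            u_int8_t (*bs_read_1)(void *, bus_space_handle_t, bus_size_t);
            void     (*bs_write_1)(void *, bus_space_handle_t,
                        bus_size_t, u_int8_t);
            /* ... map, unmap, barrier, wider accesses, ... */
    };
    typedef const struct my_bus_space *bus_space_tag_t;

    /* The MI-visible accessors just indirect through the tag. */
    #define bus_space_read_1(t, h, o) \
            ((*(t)->bs_read_1)((t)->bs_cookie, (h), (o)))
    #define bus_space_write_1(t, h, o, v) \
            ((*(t)->bs_write_1)((t)->bs_cookie, (h), (o), (v)))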

Chris's example of PCI is a good one.  The different flavours of Intel
PCI chipset are mostly set up by a BIOS and need little software
intervention.  NetBSD/Alpha supports at least three PCI chipsets with
very different interfaces; and newer machines (Alphastation 433a,
500a, 550a) have yet another chipset.  The differences (e.g.,
necessary address swizzling for subword access?) are non-trivial.
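
To give a flavour of what ``non-trivial'' means: on a sparse-space
chipset a byte read isn't just a byte load at the obvious address; the
chipset's read routine has to fold the transfer size and byte lane
into the address it actually issues, then pick the wanted byte out of
the word that comes back.  Very roughly -- the shifts and encodings
below are made up for illustration, not any real chipset's:

    /* Hypothetical sparse-space byte read for one particular chipset. */
    u_int8_t
    chip_sparse_read_1(void *cookie, bus_space_handle_t h, bus_size_t off)
    {
            volatile u_int32_t *addr;
            u_int32_t word;

            /* Fold the access size into the low address bits
             * (SPARSE_SHIFT and SPARSE_SIZE_BYTE are invented here). */
            addr = (volatile u_int32_t *)(h + (off << SPARSE_SHIFT) +
                SPARSE_SIZE_BYTE);
            word = *addr;

            /* Extract the byte lane we actually asked for. */
            return ((word >> ((off & 3) * 8)) & 0xff);
    }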

Another case is VME devices.  The ``right'' way to write VME
device-drivers with <bus.h> would result in a driver that worked on
*all* supported machines with a VME bus, irrespective of whether the
VMEbus is a mainbus (as in VME sun3s), or hanging off any number of
VME bus adaptors -- SBus, PCI, TurboChannel, or what-have-you.
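
Concretely, the attach routine for such a driver would take its tag,
address, and vector from whatever VME bus (or bus adaptor) happened to
find it, and never look behind them.  Something along these lines,
where the attach-args structure and XX_REGSIZE are notional rather
than a spec:

    /*
     * Notional attach args handed down by whichever VME bus driver
     * found us: mainbus VME, or a VME adaptor on SBus, PCI, TC, ...
     */
    struct vme_attach_args {
            bus_space_tag_t va_iot;
            bus_addr_t      va_addr;
            int             va_vec;         /* interrupt vector */
    };

    struct xx_softc {
            struct device           sc_dev;
            bus_space_tag_t         sc_iot;
            bus_space_handle_t      sc_ioh;
    };

    void
    xx_attach(struct device *parent, struct device *self, void *aux)
    {
            struct vme_attach_args *va = aux;
            struct xx_softc *sc = (struct xx_softc *)self;

            sc->sc_iot = va->va_iot;
            if (bus_space_map(sc->sc_iot, va->va_addr, XX_REGSIZE,
                0, &sc->sc_ioh)) {
                    printf(": can't map registers\n");
                    return;
            }
            /* from here on, nothing but bus_space_*() on sc_iot/sc_ioh */
    }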


Of course, it's nice to code both drivers and <bus.h> so that kernels
configured for only one particular instance of a bus can eliminate the
run-time dispatch.  But we *need* to support the run-time dispatch.
If nothing else, it's necessary for building installation kernels that
run on a full range of hardware.
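
That is, a port's <machine/bus.h> can do something like the following,
so a kernel configured for exactly one bus implementation compiles the
accessors down to direct loads, while a GENERIC or installation kernel
(or one with loadable drivers) pays for the indirection.  The option
name here is invented; the point is only that both resolutions can
live behind the same interface:

    #ifdef SINGLE_BUS_IMPLEMENTATION        /* hypothetical kernel option */
    /* Only one way to reach the bus: resolve the access at compile time. */
    #define bus_space_read_1(t, h, o) \
            (*(volatile u_int8_t *)((h) + (o)))
    #else
    /* Several possible host adaptors, or loadable drivers: dispatch
     * through the per-implementation function pointers in the tag. */
    #define bus_space_read_1(t, h, o) \
            ((*(t)->bs_read_1)((t)->bs_cookie, (h), (o)))
    #endif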