Subject: Re: PCI device recognition
To: Nathan J. Williams <nathanw@wasabisystems.com>
From: Bill Studenmund <wrstuden@netbsd.org>
List: tech-kern
Date: 11/16/2004 16:48:09

On Mon, Nov 15, 2004 at 12:29:38PM -0500, Nathan J. Williams wrote:
> Darren Reed <avalon@cairo.anu.edu.au> writes:
>
> > How hard would it be to make it possible to add PCI vendor/device
> > IDs to the list a device recognises, at run time ?
>
> It would be a fair bit of work, but mostly mechanical. I think it's
> been desirable for a while to move from the model we have now for
> direct-config buses, where each driver checks its suitability for each
> device on its bus-type, to something more table-driven, where a
> device's ID information is looked up and mapped to a driver. The
> details are tricky, of course:
>
>  * Would the table live only in the kernel? That's probably the
>    simplest thing, but one can imagine keeping only a minimal set of
>    entries in the kernel for booting, and calling out to some userland
>    app or socket or something for the contents of the rest of the
>    table; or having the kernel pass all the identifying information
>    out and putting all of the table and rules in userland.

I think we should have two tables. The one that matters is the one in the
kernel. The other one would be a userland one.

The kernel table covers all the drivers installed in the kernel. The
userland one includes all the drivers we have LKMs for.
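
To make that concrete, here's roughly the sort of entry I'm picturing
for the kernel table. All of the names below are invented; the point is
just that an entry maps ID information to a candidate driver, it doesn't
replace the driver's match routine:

struct pci_id_entry {
	SLIST_ENTRY(pci_id_entry) e_next;	/* table linkage */
	pci_vendor_id_t	 e_vendor;	/* PCI_VENDOR() value to match */
	pci_product_id_t e_product;	/* PCI_PRODUCT() value to match */
	struct cfdata	*e_cfdata;	/* candidate driver's config data */
};

The userland table could hold the same information in a file, naming the
LKM to load instead of pointing at config data.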

>  * What would be the mechanism for updating the table?

My thought is that it'd get updated by loading a driver into the kernel.

This idea's like calling out to userland except it's more "userland goes
looking" driven. The main difference is that the kernel doesn't block
waiting for userland to do something.
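
In code terms, I'd picture the LKM's load and unload hooks doing
something like the following. pci_id_register()/pci_id_unregister() and
all of the foo_* names are made up; this is just the shape of the
update path:

static struct pci_id_entry foo_entries[] = {
	{ .e_vendor = PCI_VENDOR_FOO, .e_product = PCI_PRODUCT_FOO_BAR,
	  .e_cfdata = &foo_cfdata[0] },
};

/* in the load hook */
pci_id_register(foo_entries, sizeof(foo_entries) / sizeof(foo_entries[0]));

/* in the unload hook */
pci_id_unregister(foo_entries, sizeof(foo_entries) / sizeof(foo_entries[0]));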

>  * Which identifiers would be handled? (On PCI, for example, this could
>    include vendor, device, subsystem vendor, subsystem device, class,
>    subclass, interface, and revision).

I'm not sure. The ones we match on now would be a good starting point.
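
For reference, what a PCI match routine gets today is the
pci_attach_args handed over by the bus. The driver name and the
particular test below are made up, but the fields and macros are the
existing ones from pcireg.h and pcivar.h:

#include <sys/param.h>
#include <sys/device.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcidevs.h>

static int
xyz_match(struct device *parent, struct cfdata *cf, void *aux)
{
	struct pci_attach_args *pa = aux;

	/* vendor/product come from the ID register... */
	if (PCI_VENDOR(pa->pa_id) != PCI_VENDOR_INTEL)
		return 0;

	/* ...class/subclass (and revision) from the class register */
	if (PCI_CLASS(pa->pa_class) != PCI_CLASS_NETWORK ||
	    PCI_SUBCLASS(pa->pa_class) != PCI_SUBCLASS_NETWORK_ETHERNET)
		return 0;

	return 1;
}

Subsystem vendor/device would take an extra pci_conf_read() of
PCI_SUBSYS_ID_REG; revision is already in pa_class via PCI_REVISION().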

>  * What rules would be used for matching and for resolving conflicts
>    (say one rule matches by PCI class and subclass and another matches
>    by vendor and device)?

I'd expect we want these tables to really list candidate drivers. So I'd
think we'd still want to call each candidate driver's match routine.
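
The bus-side lookup might then look something like the sketch below.
pci_id_lookup(), the list types, and candidate_match() are all stand-ins
rather than existing interfaces; the point is that the table only
narrows the candidate set, each candidate's match routine still runs,
and the usual best-match-wins behaviour is kept:

static struct cfdata *
pci_id_best_match(struct device *parent, struct pci_attach_args *pa)
{
	struct pci_id_entry *e;
	struct cfdata *best = NULL;
	int m, bestmatch = 0;

	/* pci_id_lookup() would return the entries for this device's IDs */
	SLIST_FOREACH(e, pci_id_lookup(pa), e_next) {
		/*
		 * candidate_match() stands in for whatever autoconf hook
		 * ends up calling the candidate driver's match routine;
		 * a bigger return value means a better match.
		 */
		m = candidate_match(parent, e->e_cfdata, pa);
		if (m > bestmatch) {
			bestmatch = m;
			best = e->e_cfdata;
		}
	}
	return best;
}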

>  * How would the quirk/chipset-variant options be handled? In the
>    drivers as now, or as a parameter in the table passed to the
>    driver?
>
>  * Would drivers still be given an opportunity to accept or reject a
>    match from the table?

I'd expect yes. I think this bit's still important. Among other things,
given our config methodology, we also get a measure of the quality of a
match.

A big win we'd still get is that, for any given device, we only bother
calling match routines that have a chance of working.

> This work could apply to all the direct-config buses, and so would
> probably fit in better with the new drvctl(4,8) interface than
> something PCI-specific (I'll note that we already have rescan
> implemented for PCI, as "drvctl -r").

Take care,

Bill
