Subject: bus shims (Re: lack of pciide transfer alignment checking causes crash)
To: Jachym Holecek <freza@liberouter.org>
From: Daniel Carosone <dan@geek.com.au>
List: tech-kern
Date: 06/29/2005 12:23:32

On Wed, Jun 29, 2005 at 03:13:03AM +0200, Jachym Holecek wrote:
> > bouncepci is a (MD?) bus, which provides implementations of the normal
> > bus_dma interface that include internal bounce buffers.
>
> And bouncecardbus0 for cardbus and bounce${foo}0 for ${foo} even
> if the bouncing constraint (and the routine that implements it) is the
> very same?

Ideally, no.  As I went on to say, ideally there would be a generic
bouncebus pseudo-driver implemented as a shim, expressed purely in
terms of the bus_dma interface and simply proxying most of the
routines and calls straight through.  I'm not sure we can presently
meet that ideal, though, so I started with the more-specific example.
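To make the shim idea concrete, here's a minimal sketch of the
pass-through structure I have in mind.  All names are illustrative
(NetBSD's real bus_dma_tag_t is machine-dependent and richer than
this); the point is just that the shim's default behaviour is to
forward every operation to its parent's tag unchanged:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a bus_dma-style tag: a table of operations.
 * Hypothetical names -- not the real NetBSD bus_dma interface. */
struct dma_ops {
	int   (*dmamap_load)(void *ctx, void *buf, size_t len);
	void   *ctx;
};

/* A pass-through shim: what the child driver sees is a dma_ops
 * table, and by default every call is proxied straight through to
 * the parent bus.  Bouncing logic would be layered in only where a
 * constraint actually applies. */
struct bounce_shim {
	struct dma_ops	 ops;		/* handed to the child driver */
	struct dma_ops	*parent;	/* the real bus underneath */
};

static int
shim_load(void *ctx, void *buf, size_t len)
{
	struct bounce_shim *sh = ctx;

	/* No constraint violated here: proxy directly to the parent. */
	return sh->parent->dmamap_load(sh->parent->ctx, buf, len);
}

static void
bounce_shim_init(struct bounce_shim *sh, struct dma_ops *parent)
{
	sh->parent = parent;
	sh->ops.dmamap_load = shim_load;
	sh->ops.ctx = sh;
}
```

The child driver never learns it is talking to a shim; it just uses
the tag it was handed, exactly as drivers use bus_dma today.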

> Why, if you can just bounce inside the driver, possibly using
> some bus_dma convenience routine?

None at all, other than that you may have to repeat the same thing in
several drivers.  Certainly moving that common code into a convenience
routine is another way to go - and is arguably almost exactly the
same, except for the config semantics and code plumbing.

> Also note that the should-bounce-flag may be dependent on chip
> revision or such, so the driver itself has to make the decision
> anyway.

Again, as I went on to say, this would be implemented in the
driver-to-bouncebus attachment glue, where such constraints can be
embodied (and leaving the core driver free of any particular need to
do anything other than use bus-independent bus_dma, as now).
Certainly, the examples given at the top of this thread seemed like
cases where the constraints were largely a property of the attachment.
(This device needs to be attached to a bus that looks almost, but not
quite, like real pci).
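Here's a rough sketch of how a revision-dependent decision could live
in the attachment glue rather than the core driver.  Everything here
is hypothetical (the function names, the erratum, the tag layout);
it only illustrates that the choice is made once, at attach time:

```c
#include <stdbool.h>

/* Hypothetical tags: opaque to the core driver. */
struct dma_tag { bool is_shim; };

static struct dma_tag parent_tag = { false };	/* direct DMA */
static struct dma_tag shim_tag   = { true };	/* stands in for a bouncebus shim */

static bool
rev_needs_bounce(int chip_rev)
{
	/* Made-up erratum for illustration: early revisions can't
	 * address all of memory. */
	return chip_rev < 2;
}

/* Attachment glue: pick the tag once.  The core driver just uses
 * whatever bus_dma tag it is handed, as drivers do today. */
static struct dma_tag *
foo_attach_pick_tag(int chip_rev)
{
	return rev_needs_bounce(chip_rev) ? &shim_tag : &parent_tag;
}
```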

I'm not sure that making bus_dma itself more complex is the right
approach, especially if it is already sufficient to support the needs
of such a shim.  If it isn't, and changes are to be made to it,
perhaps they're best made in this direction.

In the bce case, a generic bouncebus shim asked (by the bce
attachment) to provide bouncebuffer support for dma transfers above
1Gb, would just directly provide the tags and function pointers of its
parent on a host with <=1Gb of RAM. Thus there's zero per-transfer
overhead on such machines.
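The zero-overhead path for bce might look something like this sketch
(again with made-up names; the real decision would live in the shim's
attach code and hand back the parent's actual tags and function
pointers, not a toy struct):

```c
#include <stdint.h>

#define BCE_DMA_LIMIT	(1ULL << 30)	/* bce can only DMA below 1Gb */

/* Toy tags standing in for the parent's real bus_dma tag and a
 * bouncing replacement.  Hypothetical, for illustration only. */
struct dma_tag { int uses_bounce; };

static struct dma_tag parent_tag = { 0 };	/* real bus: direct DMA */
static struct dma_tag bounce_tag = { 1 };	/* shim: bounce buffers */

/* If every physical address is reachable by the device, hand the
 * child the parent's tag unchanged -- zero per-transfer overhead.
 * Otherwise hand out the bouncing tag. */
static struct dma_tag *
bouncebus_choose_tag(uint64_t phys_mem_top)
{
	return (phys_mem_top <= BCE_DMA_LIMIT) ? &parent_tag : &bounce_tag;
}
```

On a machine with 512Mb of RAM the driver gets the parent's tag and
never pays for the shim; only hosts with memory above the limit take
the bouncing path.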

At least, that's how I imagined it.  One of the reasons I was thinking
this way was because it seemed to fit nicely with some earlier
discussion of potential bus shims, proposed by dyoung, for other
purposes.

--
Dan.