Subject: Revamping the MI SCSI layer
To: Charles M. Hannum <>
From: Justin T. Gibbs <>
List: port-i386
Date: 08/28/1996 20:29:11
I hope Charles isn't the only one looking at this code.  The reason
I've been putting these diffs up is so that they can be reviewed.
I'm sure that there are others who can add to the valuable input
Charles has given.

I do wonder if port-i386 is the right place for this though.  The
code isn't i386 specific (nor is it NetBSD specific) so perhaps we
should find a better forum.

>"Justin T. Gibbs" <> writes:
>> >1) What is the point of passing down the scsi_queue to the *_get_ccb()
>> >and *_free_ccb() routines?  What would they use it for?
>> A single controller could be in charge of several different queues.  This
>> is geared toward multi-channel/RAID devices.  It is true that it's not
>> used for anything yet.
>> >2) Why isn't the scsi_queue allocated by the bus, rather than by the
>> >controller (i.e. by scsi_alloc_bus(), in your configuration `model')?
>> >(Presumably the function pointers would be passed down in the *_switch
>> >structure.)
>> Because some controllers share a queue across multiple busses.  The 2742T
>> is one of these.
>Uh, so?  In my little world, *all* of this knowledge would be in the
>controller driver.  I'd say `Please make sure you have resources for N
>requests available.', and the driver would tell me `Okay; I have
>resources for M requests.  Shoot.'  The controller driver would
>internally contain all the knowledge about where these are, and
>whether or not they're shared between channels.  There's no reason to
>expose any of this outside.

I don't know what you feel is being exposed here.  When I request
a resource, I tell you what queue to take it from and when I release
it, I also tell you what queue it came from so you never have to
determine this at the controller level.  Since the controller tells
the upper level which queue to use for what, the controller driver
still maintains full flexibility.  You'd rather have every resource
object hold a direct reference to its queue even though the upper
level code already has this information?
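To make the shape of the interface concrete, here is roughly what I have in mind (the struct layouts and function names here are invented for illustration; only scsi_queue and the *_get_ccb()/*_free_ccb() naming come from the actual discussion):

```c
#include <stddef.h>

/* Invented, minimal layouts for illustration only. */
struct ccb {
	struct ccb *next;
};

struct scsi_queue {
	struct ccb *free_list;		/* resources owned by this queue */
};

/*
 * The caller names the queue on both get and free, so the driver
 * never has to map a resource back to the pool it came from.
 */
static struct ccb *
example_get_ccb(struct scsi_queue *sq)
{
	struct ccb *ccb = sq->free_list;

	if (ccb != NULL)
		sq->free_list = ccb->next;
	return ccb;
}

static void
example_free_ccb(struct scsi_queue *sq, struct ccb *ccb)
{
	ccb->next = sq->free_list;
	sq->free_list = ccb;
}
```

Since the controller hands out the queue pointers in the first place, nothing about its internal organization leaks upward.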

I also wonder why you'd ever have to request more than one opening at a time.

By not allowing the drivers to allocate the queues (and in this
way use a queue for multiple busses), you lose one of the major
benefits of having the queues: resource fairness.  If I'm out of
resources and a transaction completes, I can immediately start any
device on the queue regardless of what bus it's on, because they are
linked by shared resources and a shared queue.  You also get
round-robin service across all consumers of the same resources.  The
controller drivers get all of this for free, AND we can change the
queueing behavior to handle things like rtprio scheduling all in
one central location without modifying a single driver.
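In skeleton form, the fairness property falls out of the shared run queue like this (again, all names and layouts here are invented; this is a model of the idea, not the actual code):

```c
#include <stddef.h>

/* A single run queue shared by devices on two busses. */
struct scsi_device {
	int bus;			/* bus the device lives on */
	struct scsi_device *next;
};

struct scsi_queue {
	struct scsi_device *head, *tail;
	int openings;			/* shared resource count */
};

static void
queue_device(struct scsi_queue *sq, struct scsi_device *dev)
{
	dev->next = NULL;
	if (sq->tail != NULL)
		sq->tail->next = dev;
	else
		sq->head = dev;
	sq->tail = dev;
}

/*
 * When a transaction completes, an opening frees up and whoever is
 * at the head runs next, whichever bus it sits on.  That is the
 * round-robin fairness across every consumer of the pool.
 */
static struct scsi_device *
next_to_run(struct scsi_queue *sq)
{
	struct scsi_device *dev;

	if (sq->openings == 0 || sq->head == NULL)
		return NULL;
	sq->openings--;
	dev = sq->head;
	sq->head = dev->next;
	if (sq->head == NULL)
		sq->tail = NULL;
	return dev;
}
```

Swapping in an rtprio-aware pick here would change the policy for every driver at once.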

One other key benefit of strongly associating resources with
requests is that it facilitates shared error handling.  I expect
that the driver will handle the timeout, determine the proper xs
to pass up to the generic layer (it's not always the one that caused
the timeout to occur), and that the generic layer will decide what
action to take on the xs.  That action will most likely mean passing
the xs down to the controller driver's error recovery entry points,
where it can simply pull the associated controller resource out
of the xs instead of having to do some sort of search.
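The association I mean is nothing more than this (member names invented; the real xs field would of course be opaque to the generic layer):

```c
#include <stddef.h>

/* Invented stand-ins for the controller resource and the xs. */
struct controller_ccb {
	int tag;
};

struct scsi_xfer {
	void *ctlr_res;			/* opaque to the generic layer */
};

static void
bind_resource(struct scsi_xfer *xs, struct controller_ccb *ccb)
{
	xs->ctlr_res = ccb;
}

/*
 * Hypothetical recovery entry point: the resource comes straight
 * out of the xs -- no lookup table, no list walk.
 */
static struct controller_ccb *
recover(struct scsi_xfer *xs)
{
	return (struct controller_ccb *)xs->ctlr_res;
}
```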

>> This is a case of not being able to have your cake and eat it too.  I
>> wanted to centralize the handling of errors to the scsi_done routine
>> but I couldn't prevent deadlock unless the lock was released by the
>> controller at the proper time since there are cases when scsi_done is
>> called before the *_scsi_cmd call completes.  I suppose I could record
>> the xs that holds the lock on the target, but this seemed a little excessive.
>> The current approach is actually more efficient anyway since the target is
>> only locked as long as it absolutely has to be.
>Where do you see a deadlock?  If there was an error, the command might
>get tossed back onto the queue.  In any case, since you're already in
>the process of dispatching commands, you shouldn't dispatch any from
>the error handler, so the lock shouldn't even be touched.  (If you
>*did* dispatch commands from the error handler in this case, you could
>recurse indefinitely and overflow your stack!)

I could have sworn that there was a problem in the polling case, but I don't
see it now.  I'll release the lock in scsi_run_queue.

>> >8) The use of `_scsi_flags' to deal with autoconfiguration is highly
>> >questionable.  The old method of passing around the flags worked, and
>> >didn't create a hidden interface.
>> This is because you don't know of my hidden agenda.  I'd like to see polling
>> go away entirely or at least only happen for dumps.
>This is an *entirely* separate issue from the way queueing is handled,
>and should be addressed at a later time.

I never said that my cleanup was constrained to fixing the way queueing is
handled.  The old scheme left dumps vulnerable and forced polling when
re-probing busses even when it isn't needed, and I won't use a different
scheme for this until I can get the same kinds of results.

It's funny for you to tell me to do this later and in another breath infer
that you won't use the code. 8-)

>> >controller-specific data in the xs is correct.  As I proposed this
>> >change, the extra calldowns would merely reserve space on the
>> >controller.  I believe this is better because it doesn't put *any*
>> >knowledge of how the controller works internally into the xs.
>> I won't even touch on who proposed what...  Anyway, the XS has no
>> knowledge of what the controller put in the cdb field.
>But it knows that the controller *has* such a piece of memory, and
>that the pointer to it is representable in a `void *'.  Why should it
>even know this much, when it's completely trivial to have it not do

As I said above, the tight association is useful for the controller
driver more than it is for the generic SCSI layer.  The generic
layer doesn't care what's stored there (if anything).  Perhaps the
routine should return "boolean" and pass the xs down instead, with
the field being an opaque void *, so that this is truly the case.  I could
imagine a driver using indexes instead of pointers.
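Something like the following, say -- all names here are made up, and the index-stashing driver is purely hypothetical, but it shows why the field should be opaque: one driver stashes a pointer, another an index, and the generic layer never looks inside either way.

```c
#include <stddef.h>
#include <stdbool.h>
#include <stdint.h>

struct scsi_xfer {
	void *ctlr_priv;		/* opaque to the generic layer */
};

/*
 * A hypothetical driver that tracks openings with a bare count and
 * records a slot index, rather than a pointer, in the opaque field.
 */
static bool
index_driver_reserve(struct scsi_xfer *xs, int *free_count)
{
	if (*free_count == 0)
		return false;		/* no openings; requeue the request */
	(*free_count)--;
	xs->ctlr_priv = (void *)(intptr_t)*free_count;	/* slot index */
	return true;
}
```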

>> Why complicate
>> the controller driver's job by forcing it to keep track of what is
>> reserved and what is not when this strategy is so simple and offers
>> a tight association of the resources?
>Complicate its job by keeping track of how many it has free, rather
>than exposing knowledge that it shouldn't have, and forcing me to
>reserve one at a time?  My method is actually faster, in addition to
>being more architecturally sound.

I still don't see when you would need to reserve more than one at a time;
the transactions are going to be dequeued one at a time regardless.
I also don't see how your method is "actually faster", since a controller
driver can still allocate in pools if it feels the need to do so.  The
association between xs and controller resource has to happen at some level,
and I'd rather see the generic layer provide this than have each driver
"roll their own".

BTW, on the issue:
>>5) Passing around spl cookies as function arguments is just wrong.
>In looking at this again, I'm not even sure why I did it this way.
>I think the ordering is still maintained if the *start routines are
>called without spl protection and do their own spl manipulation.

Now that I'm home and can look at the code without second-long lags, I've
remembered why this was done this way.  A transaction completing with an
error can clear a device's queue, and lowering the pl opens the window for
this to happen.  While you may not like the practice of passing "spl cookies",
it does reduce the amount of time spent at splbio().  If this is a block to this
code being accepted by NetBSD, I will simply let the *start() routines run
their course at splbio even though very little of what these routines
do requires protection.
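For the record, the pattern itself is just this (splbio()/splx() stubbed with a global so it can be shown outside the kernel; all names invented): the caller raises the priority, the cookie travels down as an argument, and the callee drops the pl itself the moment it is safe to do so.

```c
/* Stubbed spl machinery -- a toy model, not the kernel interface. */
static int cur_ipl;			/* 0 = open, 1 = "splbio" */

static int  splbio_stub(void) { int s = cur_ipl; cur_ipl = 1; return s; }
static void splx_stub(int s)  { cur_ipl = s; }

static int started;

/*
 * The start routine finishes its critical work and then releases the
 * caller's protection itself, keeping the splbio window as small as
 * possible.
 */
static void
start_stub(int s)
{
	started++;			/* work that needs protection */
	splx_stub(s);			/* safe point reached: drop the pl */
}

static void
dispatch(void)
{
	int s = splbio_stub();		/* raise */
	start_stub(s);			/* cookie travels as an argument */
}
```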

Justin T. Gibbs
  FreeBSD: Turning PCs into workstations