Subject: Re: DSSI update
To: None <port-vax@netbsd.org>
From: Chuck McManis <cmcmanis@mcmanis.com>
List: port-vax
Date: 02/06/2001 20:09:10
>I seem to get the idea that you are perhaps not understanding what MSCP
>is. I'm truly sorry if I'm wrong, but...

Not to worry, I do know what MSCP is. My issue is with device naming.

>MSCP is a rather high-level protocol for talking with disks. The
>controller can then use any hardware solution below to implement this
>protocol. Thus you have MSCP controllers that use SDI (UDA-50 and KDA-50
>for instance), SCSI (most bastard SCSI controllers, along with the RQZX1)
>and MFM (RQDX1-3). You also obviously have some that implement MSCP using
>DSSI.

Yes, MSCP is a generic "mass storage control protocol."

>Another technology that DEC developed was the CI bus. This is a network
>bus, which I'm sure you know more of than I do. My point, however, is that
>DEC also implemented the MSCP protocol on this bus.

Got that.

>If you have access to an Ultrix distribution, you'll also notice that all
>MSCP disks are called ra*, and this is also true even if they are attached
>through a DSSI controller.

So they got it wrong :-)

>So, the "right" way is that if the protocol used to talk with the disk is
>MSCP, then you should say MSCP, no matter what underlying protocol is
>used. You don't call the disks attached to a CMD CQD-220 scsi-disks, even
>though they physically are. You talk to them using the MSCP protocol, so
>they are MSCP disks.

DSSI also has hot swap capability. Do you want ra2 to vanish and re-appear 
as ra6?
(I wonder how that works on the SCSI busses that support that as well.)

>Now, as others have pointed out, there are similarities between the CI
>bus, and DSSI, which means that if you were to implement this the
>"right" way, we would also get very close to functioning CI controllers,
>which would really be nice.

If I read the SHAC docs correctly, CI is just another protocol; the chip 
lets you set up mailboxes and can interrupt you when things get posted 
there. Kind of like a bit of shared memory amongst cluster members.
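
Purely to make that mental model concrete, here is a notional C sketch of 
the mailbox idea. It is not the SHAC register layout or any real driver 
interface, just the "shared memory plus doorbell" pattern, with no bounds 
checking since it is only a sketch:

        /*
         * Notional sketch only: a made-up mailbox and post routine, not
         * the SHAC register layout or anything in the tree.
         */
        #include <string.h>

        struct mailbox {
                volatile unsigned int mbx_full;     /* set by sender, cleared by receiver */
                unsigned int          mbx_len;      /* valid bytes in mbx_data */
                unsigned char         mbx_data[64]; /* message payload */
        };

        /*
         * Sender side: fill a free mailbox and mark it full; in the real
         * hardware this is where the peer node takes an interrupt.
         */
        static void
        mbx_post(struct mailbox *mbx, const void *msg, unsigned int len)
        {
                memcpy(mbx->mbx_data, msg, len);
                mbx->mbx_len = len;
                mbx->mbx_full = 1;
        }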

>Having said this, I must say that in the end, you're the one doing the
>work, and not me. So take what I say for what it is, just my views and
>opinions. Any more hardware support is by definition a good thing, so
>don't skip this project, no matter how you go at it.

Thanks, whether or not anyone uses it is of course academic :-) I want to 
use the DSSI drives on the 4000/500 and the MV3400.

>The Q-bus on the other hand is a very generic bus. You can have a
>controller on the Q-bus which actually looks like a massbus
>controller. Where does that leave you? Are those disks hp(n) or
>ra(n)? Plain incorrect view. An MSCP controller on the Q-bus will have
>ra(n) disks, an RLV11 will have rl(n) disks, and if you would happen to
>own a pure SCSI controller for Q-bus, you could have sd(n) disks on the
>Q-bus.

Why? Not to be a pain in the ass (I know, too late!) but since MSCP can be 
used as the protocol abstraction for all these disks, why not use it? It 
would sure make writing disk drivers a lot easier! Let's take the MSCPBUS 
driver and hack it so that it uses as much of the hardware as possible and 
then soft-implements the rest. This is sort of what Direct3D tries to do 
(sorry, paradigm shift). Then you need only implement the "MSCP to RL" 
mapping and the disk driver stuff would just fall out. Then *EVERY* disk in 
the system could be an ra(n) disk!
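
To be concrete, here is a rough C sketch of the split I mean. Every name in 
it (mscp_backend, mb_submit, and so on) is hypothetical, nothing in the 
tree; the point is just that the generic MSCP machinery calls through an 
ops vector, and each controller, or each soft implementation like the RL 
mapping, fills it in:

        /*
         * Hypothetical sketch; none of these names exist in the tree.
         * The generic MSCP layer deals only in command/response packets;
         * each backend either hands them to real MSCP hardware or fakes
         * the protocol in software for dumber controllers.
         */
        struct mscp;                    /* the MSCP command/response packet */

        struct mscp_backend {
                const char *mb_name;
                void       *mb_priv;                            /* controller soft state */
                int        (*mb_submit)(void *, struct mscp *); /* queue a command packet */
                void       (*mb_poll)(void *);                  /* collect responses */
        };

        /*
         * A real MSCP controller (UDA/KDA, SII, SHAC) just passes packets
         * through, while an RL backend would translate MSCP READ/WRITE
         * into RL register pokes; either way the disk shows up as ra(n).
         */
        extern const struct mscp_backend uda_backend;
        extern const struct mscp_backend rl_backend;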

My point is that some folks want it both ways here when it comes to MSCP. 
Either it *IS* the disk abstraction, or it isn't. If it is, then 
build the kernel around it. If it isn't, then pick some other abstraction.

>To quote from the man-pages of Ultrix:

Let's not and say we did. I'm not interested in re-creating Ultrix on the 
VAX in NetBSD clothing.

>         adapter msi0 at nexus?
>         controller dssc0 at msi0 msinode 0
>         disk ra0 at dssc0 drive 3

Interestingly, this is exactly what I did, so this works against the 
argument. Mine is:
         sii0    at ibus      (there are no nexi on NetBSD)
         dssibus at sii0      (the "controller" here uses the dssibus driver)
         dd*     at dssibus?  (the "disk" attaches to the DSSI bus)

The only difference is that I called the disk 'dd' rather than 'ra'. 
Unfortunately, if you call it 'ra' it wants to use major number 9, and 
that plops you into the mscp driver, which won't have a clue.
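
For the curious, the collision looks roughly like the fragment below. It is 
illustrative rather than verbatim conf.c (the macro names just follow the 
usual conf.c idiom), and the 'dd' entry and its major number are of course 
my own invention:

        /*
         * Illustrative, not verbatim: the shape of the block device switch
         * on the VAX port.  Major 9 is already wired to the MSCP 'ra' entry
         * points, so a DSSI driver that wants its own code path needs its
         * own name ('dd') and an unused major of its own.
         */
        bdev_decl(ra);
        bdev_decl(dd);                          /* hypothetical DSSI disk driver */

        struct bdevsw bdevsw[] = {
                /* ... */
                bdev_disk_init(NRA, ra),        /*  9: MSCP disks */
                /* ... */
                bdev_disk_init(NDD, dd),        /* nn: DSSI disks via sii/shac */
        };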

>And it's *not* SCSI on Q-bus that shows as ra(n), it's *MSCP* at Q-bus (or
>any other bus) that shows as ra(n). If you were to have a SCSI-controller
>on your Q-bus, you'd have your plain normal sd(n) disks...

I'll try one more time and then I'm going to stop, promise:

There are three issues here:
         1) Device naming - what does a device name "mean" (if anything)?
         2) Kernel device architecture - How do you hook up devices in the 
kernel?
         3) Efficiency/Quality - How do you make a reliable product?

Device Naming:
   Why aren't all ttys tty0 - tty999 ?
   Why aren't all disks disk0 - disk999 ?
   Why aren't all network interfaces ether0 - ether999 ?
   Why aren't all framebuffers fb0 - fb999 ?
   Why aren't all tape devices mt0 - mt999?

Does the user care? Should the user care? At Sun we decided, "Nope, the 
user shouldn't care," and notice how all the disk devices in Solaris are 
/dev/dsk/c0t0d0s0 (that is, "controller 0", "target 0", "disk 0", "slice 0"). 
Nobody cares whether the disk is fibre channel, SCSI, SMD, etc. It's a disk. 
And it has a /dev/rdsk flavor as well.

One can make that argument (I supported it at one time, sitting on the 
"committee" that decreed such things) and apply it ruthlessly. NetBSD 
doesn't seem to do this, and I couldn't say whether or not they want to.

Alternatively you can embed semantic information into the disk device names 
(not pretty, but it's pretty common) and call them Xd(n), like SCSI disk (sd), 
memory disk (md), virtual disk (vd^Hnd :-), etc.

This discussion has little to do with MSCP and everything to do with disk 
naming.

Kernel Device Architecture
     This is a topic that is nearly as rancorous as C coding styles. I read 
the papers on config and device configuration and think Chris Torek got a 
lot of things right. It's a bit on the kernel-hacker side, exposing more of 
the internals to the outside world than, say, Windows, but it's consistent 
and quite usable. There is a delightfully tasty section on busses connecting 
to busses and devices connecting to busses. I rather liked the whole concept. 
But there was, to my mind, sort of an implicit naming schema. Presumably 
one could layer the MSCP bus driver on the dssibus driver on the sii chip, 
but that kind of crud makes for really poor performance on machines like 
the MicroVAX 3400. Further, since it doesn't add anything to the mix (hang 
on, read the next paragraph) it isn't particularly justified.

Kernel Efficiency/Quality
     One thing I am really motivated by is the efficiency and quality of 
the kernel. From this spring many blessings, not the least of which is 
reliable operation.

     To that end I strive _not_ to duplicate code or effort, and to re-use 
code wherever possible. That buys you two things: first, the kernel is 
smaller; and second, it is generally more reliable, since code that is 
already working is worth much more than elegant code merely conceived.

>For this discussion you should ignore the VMS device designations, since
>VMS are a totally different ballpark. :-)

I find it interesting that one can argue both "follow DEC" with Ultrix and 
"ignore DEC" with VMS. If it were up to me, the device name would be 'di*' 
and would exactly match the name the PROM shows. It sure would clear up 
some folks' confusion.

>This argument is bogus. Just as we aren't interested in the physical
>characteristics of MSCP disks, we aren't interested in the physical
>characteristics of the ethernet. We are interested in the logical
>protocol.

So why don't you argue for *ALL* disks to be based on an MSCP model?

>All ethernet controllers send and receive frames, but the actual
>programming protocol to make them do this differs between different
>controllers.

Trust me, the code to talk MSCP through the SII chip is very different from 
the code to talk MSCP through the Q-bus. But again, if you feel strongly 
about this, why not argue it like the ifnet layer on network devices? Create 
a virtual disk (like the virtual network interface) and simplify one's life.
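
By analogy with struct ifnet, the disk side would grow a small per-unit 
vector that every controller fills in. The sketch below is entirely 
hypothetical (nothing like it exists in the tree); it only shows the shape 
of the idea:

        /*
         * Hypothetical "ifnet for disks"; nothing like this exists, it only
         * illustrates the analogy with the network interface layer.
         */
        struct buf;

        struct vdisk {
                const char *vd_name;    /* "ra0", "sd0", "dd0", ... */
                void       *vd_priv;    /* controller/unit soft state */
                int        (*vd_strategy)(struct vdisk *, struct buf *);
                int        (*vd_ioctl)(struct vdisk *, unsigned long, void *);
        };

        /*
         * Each controller driver (MSCP, SII/DSSI, plain SCSI, RL, ...)
         * registers one of these per unit; the common code then handles
         * naming, partitions and buffer queueing, much as ifnet handles
         * queueing and if_output for every network driver.
         */
        int vdisk_attach(struct vdisk *);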

>Whichever way you decide to implement this, I hope you get it running,
>many will thank you.

That will of course be the fun part :-)
--Chuck