Subject: [long] Re: RFC: merge chap-midi branch
To: Alexandre Ratchov <alex@caoua.org>
From: Chapman Flack <nblists@anastigmatix.net>
List: tech-kern
Date: 06/26/2006 23:39:42
Alexandre Ratchov wrote:
> i've some naive questions about your midi driver. I've looked at sources,
> and i see that raw devices don't provide the input raw midi stream. instead
> the driver "normalizes" the input stream.
What "raw" means for any given device is a matter of definition.
You can open a raw disk or raw tape, and what this means is that you
are skipping the block buffers; it does not mean you get to write
random ECCs on the medium. You can open a tty and put it into "raw"
mode (with an ioctl), and what that means is that you are not having
certain input and output conversions applied, but it doesn't
mean that you are in control of the framing bits. In fact you probably
don't want to be, because that tty is just as likely to be a pty with
data being carried over some networking protocol as it is to be
connected to a UART, so you know nothing about what framing you would
see beneath the protocol layer that a "tty" shows you.
So "raw" for any device is always relative to a specified protocol
layer. If you tried to wrap MIDI around the OSI layering model, it would
work out that the rmidi device here puts you in at about layer 3. Below
that layer, what you are seeing varies greatly between a UART,
class-compliant USB, Midiman USB, 1394, RTP, and so on.
If I understand you correctly, you are using "raw" to mean something
rather different, more like access to the stream of bytes seen at a UART
(or what would have to be, for other types of link layer, some emulation
of the stream of bytes seen at a UART). I considered that definition for
"raw" and rejected it for the following reasons.
1. Some of the links we may support (e.g. class-compliant USB) are in
fact MIDI-message oriented, not byte-stream oriented. RTPMIDI, which we don't
support yet, further provides for atomicity of parameter reads and
writes. The interface between midi(4) and link drivers no longer
assumes they all look like UARTs, and that simplifies implementation
of links that don't.
Consider a couple of other possible definitions of "raw":
a. "real" raw: This would mean your application sees the
exact form of data needed by the link driver under rmidi2, and the
possibly different form for the link driver under rmidi3. Yuck.
b. emulated-UART-raw: in this case all of the link drivers that are
not UARTs are responsible for pretending they are. This is an
emulation of dumb over smart, and to accomplish it, duplicate
implementations of the MIDI state machine sprout up in the
link drivers. That was the status quo ante, and a large source
of bugs in the kernel code.
2. The looks-like-a-UART definition of raw requires applications--as
you point out yourself below--to contain code for decoding byte
streams to MIDI even when the data are coming over a message-
oriented link, and that extra work is also a source of bugs, no
less in applications than in the kernel. On the other hand, a
raw-_MIDI_ definition of raw, without requiring such applications to
be rewritten, can improve their stability anyway; more on that
below.
> If there are no errors in the input stream, an application reading
> /dev/rmidiN will receive something equivalent to the raw input stream.
> Is this correct?
Certainly.
> However, if there is an error in the raw input stream, the driver will try
> to correct it and the application reading from /dev/rmidiN will not see the
Depends on the layer of the error. Whether a sequence of MIDI messages
makes any sense is up to you, your application, and your equipment,
but whether a sequence of 1, 2, or 3 bytes can form a MIDI message at
all is midi(4)'s department. There is no attempt to "correct" anything
at that level; an error's an error, and the error count will increment
and the MIDI rules will be followed to sync at the next valid message.
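For concreteness, here is roughly the rule being enforced at that
level. This is a sketch of the MIDI framing rule in C, not the
actual midi(4) source:

    #include <stdint.h>

    /*
     * Data bytes expected after a channel-voice status byte (caller
     * has already checked status & 0x80).  Channel messages have a
     * length fixed entirely by the status byte; a data byte arriving
     * where a status byte is required is exactly the kind of error
     * midi(4) counts and then resyncs past.
     */
    static int
    midi_msg_datalen(uint8_t status)
    {
        switch (status >> 4) {
        case 0x8:           /* note off */
        case 0x9:           /* note on */
        case 0xa:           /* polyphonic key pressure */
        case 0xb:           /* control change */
        case 0xe:           /* pitch bend */
            return 2;
        case 0xc:           /* program change */
        case 0xd:           /* channel pressure */
            return 1;
        default:            /* 0xf0-0xff: system messages, */
            return -1;      /* handled separately */
        }
    }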
> error. So, (for instance) it cannot tell to the user "there is an error" or
> invalidate the track being recorded or take whatever "intelligent" action to
You have a worthy point in that for now it is easier for the user
to view the error count than for your application to retrieve it
for sophisticated uses. I believe the appropriate way to address that
is by adding a way for an aware application to get notice that an
error has been detected, and not by being tied to a model where
"raw MIDI" really means "raw bytes" and depends completely on the
application to recognize anything as erroneous at all. The latter
disadvantages the _user_, who may have to use different applications
of varying quality, and gets no useful feedback from the common mass
of applications that are less careful about error detection than yours
is.
I am open to suggestions for what the error reporting mechanism to the
application should look like, but I think it should be enabled by an
ioctl() or some other explicit opt-in so that it will not interfere with
more naive applications that simply read the port.
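To make that concrete, the opt-in might look something like the
following. Every name and number here is hypothetical; nothing of the
sort exists in the tree today:

    #include <sys/ioccom.h>

    /* hypothetical error-reporting interface, for illustration only */
    struct midi_errinfo {
        unsigned long rcv_badbytes;    /* framing errors seen on input */
    };

    #define MIDIIOC_ERRORS_ON   _IO('m', 100)   /* explicit opt-in */
    #define MIDIIOC_GETERRORS   _IOR('m', 101, struct midi_errinfo)

An application that never issues the opt-in would see exactly the
behavior it sees today.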
> Also, since midi(4) devices are raw devices, any application using such a
> device will decode the MIDI byte stream. Thus the job is done twice: once in
A major point of this work is to reduce the number of places where that
job is done. It was being done in several places inside the kernel--as
mentioned above, modern link types tend to require it--each implemented
with its own set of mistakes, which were the chief source of
interoperability problems we had. That has been reduced inside the
kernel to exactly one place, which I dare not boast is error-free but
I certainly put effort in that direction, and if any state-machine
mistake remains there is exactly one place to look for it.
Without being in control of applications, I am not in a position to
remove redundant bytestream-to-MIDI decoding from them too, though
if you felt very much like #ifdef-ing it out, you certainly could.
But I would not think it worth the effort, and here's why.
The cost of leaving such code in an application is not likely to
exceed a comparison or two per byte received. What will be different on
NetBSD is simply that the fast path will always be taken.
Now, remember my point that implementations of the MIDI state machine
are notorious sources of bugs. I have high hopes for the one in your
application (though I haven't seen it yet), but otherwise I'm not sure
I have yet seen an application without bugs in that part of the code.
But those bugs are in the code paths that will not be followed when
running on NetBSD. The fact that the work is redundant does not much
concern me, because it amounts to perhaps a couple of highly predictable
branches in the application. But the result is that an application can
be designed to run on raw-bytestream systems and be run just fine on
NetBSD's raw-MIDI as well, though perhaps failing less often on NetBSD.
That doesn't bother me either. :)
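To put that in code: a portable decode loop keeps its resync branch
and simply never takes it here. A sketch, using the midi_msg_datalen()
helper sketched above, with running status and system messages omitted
for brevity:

    #include <stdint.h>
    #include <unistd.h>

    extern int midi_msg_datalen(uint8_t);
    extern void handle_message(const uint8_t *, int);

    void
    decode_loop(int fd)
    {
        uint8_t buf[128], msg[3];
        int have = 0, need = 0;
        ssize_t i, n;

        while ((n = read(fd, buf, sizeof buf)) > 0) {
            for (i = 0; i < n; i++) {
                if (buf[i] & 0x80) {            /* status byte */
                    msg[0] = buf[i];
                    have = 1;
                    need = midi_msg_datalen(buf[i]);
                } else if (need > 0) {          /* expected data byte */
                    msg[have++] = buf[i];
                    need--;
                } else
                    continue;   /* resync: never taken on NetBSD rmidi */
                if (need == 0 && have > 0) {
                    handle_message(msg, have);  /* complete message */
                    have = 0;
                }
            }
        }
    }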
Is there a tradeoff? Sure there is, as you rightly point out:
> of view, sometimes it is good to be able to test your midi application on
> the "real" byte stream
Right - if your application includes bytestream-to-MIDI decoding, you
will not get good test coverage of that code by simply reading from
NetBSD rmidi devices. For development testing, you will be better off
writing a proper test harness that generates invalid byte streams in
predictable ways to ensure you are covering your code. But I chose this
trade for a reason: the MIDI support will be used much more heavily by
people who want to _do MIDI stuff_ than by people who want to _develop
new MIDI apps_.
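By "proper test harness" I mean no more than this sort of thing (a
sketch; decode_bytes() stands in for whatever entry point your decoder
actually has):

    #include <stdint.h>
    #include <stdlib.h>

    extern void decode_bytes(const uint8_t *, size_t);

    int
    main(void)
    {
        uint8_t buf[64];
        size_t i;
        int run;

        srandom(12345);     /* fixed seed: failures are reproducible */
        for (run = 0; run < 1000; run++) {
            for (i = 0; i < sizeof buf; i++)
                buf[i] = random() & 0xff;   /* valid MIDI or not */
            decode_bytes(buf, sizeof buf);
        }
        return 0;
    }

That exercises your resync logic deterministically, which reading a
NetBSD rmidi device never will.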
The people who want to do stuff are disadvantaged by having to use
different apps that may detect and report comm problems differently,
or even not at all. For the case where you are trying to do stuff
and it isn't working and the application isn't telling you why, you
are well served by OS reporting, independently of any app, that you've
got communication errors clocking up on rmidi3. You have a single way
to get that information regardless of the application you may be using.
> and to see what your hardware does when it receives
> your "real" byte stream.
I am not sure here whether you are talking about testing hardware
or testing the output of software.
If the concern is for the output of software, this is a good moment
to remember that the only kind of error detected by midi(4) is in
fact violation of the lowest MIDI message layer: bytes that are not
MIDI at all. By definition your only purpose in finding out whether
your application produces that kind of byte stream is to be able to
fix it so it does not, and for that purpose the EPROTO you get back
from write(2) will tell you what you need to know in less time than
diagnosing the misbehavior of the device you are sending to.
All errors above that layer are still going to pass through and you
will test for those as you would in any case.
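The check amounts to no more than this (a minimal sketch; short writes
are ignored for brevity):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* let write(2) report the malformed stream, rather than trying
     * to diagnose a confused device at the far end */
    static int
    send_midi(int fd, const unsigned char *buf, size_t len)
    {
        if (write(fd, buf, len) < 0) {
            if (errno == EPROTO)
                fprintf(stderr, "not a valid MIDI byte stream\n");
            return -1;
        }
        return 0;
    }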
If you are a hardware designer, you presumably have test equipment at
your disposal that will allow you to test many more things than you
could by writing junk out a NetBSD midi port, such as bit rate and
rise time tolerances and all that jazz.
But you have given me one idea for a possible enhancement, which I
will take up below.
> I think it is a very good thing to sanitize the input of an event
> orientated device, like the sequencer; i still don't understand why it
> is also good for the raw devices? Isn't the purpose of a "raw" device
> to provide the "raw" stream ?
This looks like a good place to summarize to this point.
- "raw" means something, but what it means has to be defined and is
always relative to some protocol level
  - defined here as "stream of untimed MIDI messages with minimal
    delivery delay"
  - and /not/ as "stream of bytes that represent MIDI messages on
    one type of underlying link"
- because we support different types of underlying link with different
requirements
- and because presenting the data at the level of one link's bytestream
format requires emulation of that format for newer links, which is
not "raw" at all, and adds complexity and error potential
Previously, we had much more blurring of protocol layers in the
kernel. midi(4) knew something about assembling MIDI messages, but
so did umidi, and midisyn, and the sequencer. Now midi(4) IS that
layer; messages are messages there, and no other component above or
below needs to duplicate that work. The sequencer's job is to time
and dispatch messages; midisyn's job is to render messages into sound,
and the link drivers' job is to transport messages--some as byte
streams, some not, and midi(4) supports both types with their natural
interfaces. Applications only need to deal with messages, but existing
applications that contain code to assemble them out of byte streams
continue to work with negligible performance impact.
To define "raw" as a lower level than MIDI messages would threaten
many of those architectural improvements, and it really doesn't make
much sense when the common links these days (unless you are using the
MPU on your game port) don't naturally look like bytestreams anyway.
But a mechanism TBD for an application to be notified if the stream
it is reading has seen a message error is probably not a bad
idea. It needs to be designed in such a way that it will not break
naive programs that can't handle such notification.
And perhaps a similar mechanism could be implemented if you really
wanted to produce an invalid data stream for testing hardware. This
would have to be designed rather carefully because for some types of
link it will make no sense at all. Over a class-compliant USB link,
if you could relax midi(4) to let non-MIDI-message bytes get
transmitted, you would only be violating conditions of the link
driver and trying to crash USB, not testing your device. But perhaps
for this very specialized application an ioctl could be added that
would allow transmission of arbitrary bytes *if and only if the
underlying link is a UART driver or otherwise capable of doing it*.
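A sketch of the shape that might take; every name here is invented,
and the only point of the example is the capability check:

    #include <sys/ioccom.h>

    /* hypothetical: enable transmission of arbitrary bytes */
    #define MIDIIOC_RAWOUT  _IOW('m', 102, int)

    /*
     * In the driver, roughly:
     *
     *      if ((hw_props & MIDI_PROP_CAN_RAWOUT) == 0)
     *              return EOPNOTSUPP;      (refuse on non-UART links)
     */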
> I'm asking this since i'm writing a user-land midi application that
> uses the raw device (as opposed to the sequencer device) because it
> needs to do the error handling.
I don't know that I yet understand your application's requirements
well enough to see what types of error handling you anticipate will
be important and that you need to do. If I better understood what
functionality you want to provide, I could say more.
I do see a potentially worrisome trade-off that you are making,
though. Presumably you would like to include NetBSD among platforms
your application supports (or you wouldn't be writing here ;), and
you may know that NetBSD does not (yet) have real-time user processes
(though there is current Summer-of-Code work in that direction).
And I gather that your application involves receiving input.
As things stand, the sequencer device (particularly after the recent
timecounters import) can do a very nice job of timestamping your
input, but the timing quality you get by reading rmidi and stamping
in your user process may be awful in comparison. And that will be a
constant effect on the performance of your application in normal use,
whereas the incidence of comm errors you are troubling to detect
should be low to nil.
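To be explicit about the pattern in question (a sketch): stamping in
the reading process means each stamp includes whatever scheduling
delay the process just suffered, and without real-time priority that
delay is unbounded:

    #include <time.h>
    #include <unistd.h>

    void
    stamp_input(int fd)
    {
        unsigned char buf[16];
        struct timespec ts;
        ssize_t n;

        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* taken only after the scheduler runs this process */
            clock_gettime(CLOCK_MONOTONIC, &ts);
            /* ... attach ts to the message bytes in buf ... */
        }
    }

The sequencer, stamping in the kernel, does not suffer that delay,
which is why it can do so much better.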
> So my question is (from the user's
> point of view) how should i do the error handling; is there something
> i missed? Should i change my application?
Well, I think that last point is important: the event you're concerned
about, a comm error on input, should really be rare, and if it happens
at all, there's not much sense doing anything about it other than
fixing the problem. MIDI is not a protocol that gives you much to go
on for reconstructing what data may have been lost. I do not know
the volume and rate of input you intend to handle, but you should
know that I have not been able, with the hardware I have, to make any
error counter go above zero since I increased the umidi packet size.
We can continue to talk about adding some mechanism your application
can use to be notified when an input error is detected. But until that
exists, I think you can safely assume that your application will run
without trouble, and advise your user on NetBSD to view the error counts
if anything seems wrong, and if they're not zero, fix the loose cable or
dodgy hardware until they stay zero, and that should be that.
I see no great need to change your application, unless the timing
quality is important enough to think again about the sequencer.
Thank you for the thoughtful questions,
-Chap