Subject: Re: [long] Re: RFC: merge chap-midi branch
To: Chapman Flack <nblists@anastigmatix.net>
From: Alexandre Ratchov <alex@caoua.org>
List: tech-kern
Date: 06/27/2006 23:26:59
On Mon, Jun 26, 2006 at 11:39:42PM -0400, Chapman Flack wrote:
> 
> If I understand you correctly, you are using "raw" to mean something
> rather different, more like access to the stream of bytes seen at a UART
> (or what would have to be, for other types of link layer, some emulation
> of the stream of bytes seen at a UART). 

Yes, that's exactly what I mean by "raw": a byte stream that conforms
to the MIDI 1.0 specification.

> I considered that definition for
> "raw" and rejected it for the following reasons.
> 
> 1. Some of the links we may support (e.g. class-compliant USB) are in
>   fact MIDI message, not byte-stream, oriented. RTPMIDI, which we don't
>   support yet, further provides for atomicity of parameter reads and
>   writes. The interface between midi(4) and link drivers no longer
>   assumes they all look like UARTs, and that simplifies implementation
>   of links that don't.
> 
>   Consider a couple of other possible definitions of "raw":
>   
>   a. "real" raw: This would mean your application sees the
>      exact form of data needed by the link driver under rmidi2, and the
>      possibly different form for the link driver under rmidi3. Yuck.
> 
>   b. emulated-UART-raw: in this case all of the link drivers that are
>      not UARTs are responsible for pretending they are. This is an
>      emulation of dumb over smart, and to accomplish it, duplicate
>      implementations of the MIDI state machine sprout up in the
>      link drivers. That was the status quo ante, and a large source
>      of bugs in the kernel code.
> 
> 2. The looks-like-a-UART definition of raw requires applications--as
>   you point out yourself below--to contain code for decoding byte
>   streams to MIDI even when the data are coming over a message-
>   oriented link, and that extra work is also a source of bugs, no
>   less in applications than in the kernel. On the other hand, a
>   raw-_MIDI_ definition of raw, without requiring such applications to
>   be rewritten, can improve their stability anyway; more on that
>   below.
> 
> >If there are no errors in the input stream, an application reading
> >/dev/rmidiN will receive something equivalent to the raw input stream. 
> >Is this correct?
> 
> Certainly.
> 
> >However, if there is an error in the raw input stream, the driver will try
> >to correct it and the application reading from /dev/rmidiN will not see the
> 
> Depends on the layer of the error. Whether a sequence of MIDI messages
> makes any sense is up to you, your application, and your equipment,
> but whether a sequence of 1, 2, or 3 bytes can form a MIDI message at
> all is midi(4)'s department. There is no attempt to "correct" anything
> at that level; an error's an error, and the error count will increment
> and the MIDI rules will be followed to sync at the next valid message.
> 

OK, I haven't been very clear; I'd classify errors into 3 categories:

	1) hardware/cable disconnects:
		- missing MIDI ACK after 300 ms of silence on the wire
		  (the optional active-sensing mechanism)

	2) protocol strangeness:
		- aborted messages (example: a sysex message without
		  the stop byte, voice messages aborted by a new
		  status byte)
		- data bytes when running status is zero

	3) software bugs (degrading the music):
		- note-on without note-off, bad key aftertouch,
		  bender floods, duplicate controllers
		- too fast or too slow tick rate, start/stop events
		  issued by a slave device, etc.

The MIDI spec says how to deal with (1) and (2): stop sound if the
MIDI ACK is missing, ignore aborted messages, and ignore data when
running status is zero. Is this correct?

In that sense, for instance, incoming data when running status is zero
is a useless but valid byte stream, and so is an aborted sysex
message. For me these aren't really protocol errors, because the spec
doesn't say "this is not allowed to happen"; instead, it says how to
deal with them.
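
To make the category (2) cases concrete, here is roughly the handling
I have in mind on the application side; this is only an illustrative
sketch with made-up names, not code from my filter (sysex contents are
ignored for brevity):

/* length of a voice message, from its status byte */
#define MIDI_LEN(st)	(((st) & 0xf0) == 0xc0 || ((st) & 0xf0) == 0xd0 ? 2 : 3)

struct mparse {
	unsigned char	buf[3];		/* message being assembled */
	unsigned	used;		/* bytes collected so far */
	unsigned	status;		/* running status, 0 = none */
};

/* feed one byte; returns 1 when a complete voice message is in p->buf */
static int
mparse_byte(struct mparse *p, unsigned char c)
{
	if (c >= 0xf8)			/* real-time bytes never abort anything */
		return 0;
	if (c & 0x80) {			/* status byte */
		p->used = 0;		/* an aborted sysex/voice message is dropped */
		if (c >= 0xf0) {	/* common/sysex: cancels running status */
			p->status = 0;
			return 0;
		}
		p->status = c;
		p->buf[p->used++] = c;
		return 0;
	}
	if (p->status == 0)		/* data with no running status: useless but valid */
		return 0;
	if (p->used == 0)		/* running status: re-insert the status byte */
		p->buf[p->used++] = p->status;
	p->buf[p->used++] = c;
	if (p->used == MIDI_LEN(p->status)) {
		p->used = 0;		/* message complete, running status stays armed */
		return 1;
	}
	return 0;
}

The point is simply that none of these paths needs to count as an
"error" from the application's point of view.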

> >error. So, (for instance) it cannot tell to the user "there is an error" or
> >invalidate the track being recorded or take whatever "intelligent" action 
> >to
> 
> You have a worthy point in that for now it is easier for the user
> to view the error count than for your application to retrieve it
> for sophisticated uses.  I believe the appropriate way to address that
> is by adding a way for an aware application to get notice that an
> error has been detected, and not by being tied to a model where
> "raw MIDI" really means "raw bytes" and depends completely on the
> application to recognize anything as erroneous at all. The latter
> disadvantages the _user_, who may have to use different applications
> of varying quality, and gets no useful feedback from the common mass
> of applications that are less careful about error detection than yours
> is.
> 
> I am open to suggestions for what the error reporting mechanism to the
> application should look like, but I think it should be enabled by an
> ioctl() or some other explicit opt-in so that it will not interfere with
> more naive applications that simply read the port.
> 
> >Also, since midi(4) devices are raw devices, any application using such a
> >device will decode the MIDI byte stream. Thus the job is done twice: once 
> >in
> 
> A major point of this work is to reduce the number of places where that
> job is done. It was being done several places inside the kernel--as
> mentioned above, modern link types tend to require it--each implemented
> with its own set of mistakes, which were the chief source of
> interoperability problems we had. That has been reduced inside the
> kernel to exactly one place, which I dare not boast is error-free but
> I certainly put effort in that direction, and if any state-machine
> mistake remains there is exactly one place to look for it.
> 
> Without being in control of applications, I am not in a position to
> remove redundant bytestream-to-MIDI decoding from them too, though
> if you felt very much like #ifdef-ing it out, you certainly could.
> But I would not think it worth the effort, and here's why.
> 
> The cost of leaving such code in an application is not likely to
> exceed a comparison or two per byte received. What will be different on
> NetBSD is simply that the fast path will always be taken.
> 
> Now, remember my point that implementations of the MIDI state machine
> are notorious sources of bugs. I have high hopes for the one in your
> application (though I haven't seen it yet), but otherwise I'm not sure
> I've seen yet an application without bugs in that part of the code.
> 
> But those bugs are in the code paths that will not be followed when
> running on NetBSD. The fact that the work is redundant does not much
> concern me, because it amounts to perhaps a couple of highly predictable
> branches in the application. But the result is that an application can
> be designed to run on raw-bytestream systems and be run just fine on
> NetBSD's raw-MIDI as well, though perhaps failing less often on NetBSD.
> That doesn't bother me either. :)
> 
> Is there a tradeoff? Sure there is: as you rightly point out:
> 
> >of view, sometimes it is good to be able to test your midi application on
> >the "real" byte stream
> 
> Right - if your application includes bytestream-to-MIDI decoding, you
> will not get good test coverage of that code by simply reading from
> NetBSD rmidi devices. For development testing, you will be better off
> writing a proper test harness that generates invalid byte streams in
> predictable ways to ensure you are covering your code. But I chose this
> trade for a reason: the MIDI support will be used much more heavily by
> people who want to _do MIDI stuff_ than by people who want to _develop
> new MIDI apps_.
> 
> The people who want to do stuff are disadvantaged by having to use
> different apps that may detect and report comm problems differently,
> or even not at all. For the case where you are trying to do stuff
> and it isn't working and the application isn't telling you why, you
> are well served by OS reporting, independently of any app, that you've
> got communication errors clocking up on rmidi3. You have a single way
> to get that information regardless of the application you may be using.
> 
> >and to see what your hardware does when it receives
> >your "real" byte stream.
> 
> I am not sure here whether you are talking about testing hardware
> or testing the output of software.
> 
> If the concern is for the output of software, this is a good moment
> to remember that the only kind of error detected by midi(4) is in
> fact violation of the lowest MIDI message layer: bytes that are not
> MIDI at all. By definition your only purpose in finding out whether
> your application produces that kind of byte stream is to be able to
> fix it so it does not, and for that purpose the EPROTO you get back
> from write(2) will tell you what you need to know in less time than
> diagnosing the misbehavior of the device you are sending to.
> 
> All errors above that layer are still going to pass through and you
> will test for those as you would in any case.
> 
> If you are a hardware designer, you presumably have test equipment at
> your disposal that will allow you to test many more things than you
> could by writing junk out a NetBSD midi port, such as bit rate and
> rise time tolerances and all that jazz.
> 
> But you have given me one idea for a possible enhancement, that I 
> will take up below.
> 
> >I think it is a very good thing to sanitize the input of an event
> >orientated device, like the sequencer; i still don't understand why it
> >is also good for the raw devices? Isn't the purpose of a "raw" device
> >to provide the "raw" stream ?
> 
> This looks like a good place to summarize to this point.
> 
> - "raw" means something, but what it means has to be defined and is
>  always relative to some protocol level
> 
> - defined here as "stream of untimed MIDI messages with minimal
>  delivery delay"
> 
> - and /not/ as "stream of bytes that represent MIDI messages on
>  one type of underlying link"
> 
> - because we support different types of underlying link with different
>  requirements
> 
> - and because presenting the data at the level of one link's bytestream
>  format requires emulation of that format for newer links, which is
>  not "raw" at all, and adds complexity and error potential
> 

OK! I think I now understand your point of view; I didn't before.

For you, a midi(4) device provides a stream of _MIDI messages_ that
are passed to the application through the read(2) syscall (as opposed
to a _MIDI byte stream_ that the application has to decode itself).

It just happens that the format of the provided messages is a subset
of the MIDI protocol (no running status, no active sensing, no stray
data bytes). Is this correct?
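
If so, an application reading the new midi(4) can size each message
from its first byte alone, since running status never appears. The
sketch below assumes that guarantee holds; handle_msg() is a
hypothetical callback, and system-common/sysex handling is omitted:

#include <stddef.h>

static size_t
voice_msg_len(unsigned char status)
{
	switch (status & 0xf0) {
	case 0xc0:	/* program change */
	case 0xd0:	/* channel aftertouch */
		return 2;
	default:	/* note on/off, poly aftertouch, controller, bender */
		return 3;
	}
}

static void
dispatch(const unsigned char *buf, size_t n)
{
	size_t i = 0, len;

	while (i < n) {
		if (buf[i] >= 0xf8)		/* single-byte real-time message */
			len = 1;
		else if (buf[i] >= 0xf0)	/* sysex/common: not covered here */
			break;
		else if (buf[i] < 0x80)		/* shouldn't happen if the assumption holds */
			break;
		else
			len = voice_msg_len(buf[i]);
		if (i + len > n)		/* message split across reads: not handled here */
			break;
		/* handle_msg(buf + i, len);	-- hypothetical callback */
		i += len;
	}
}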

> Previously, we had much more blurring of protocol layers in the
> kernel. midi(4) knew something about assembling MIDI messages, but
> so did umidi, and midisyn, and the sequencer. Now midi(4) IS that
> layer; messages are messages there, and no other component above or
> below needs to duplicate that work. The sequencer's job is to time
> and dispatch messages; midisyn's job is to render messages into sound,
> and the link drivers' job is to transport messages--some as byte
> streams, some not, and midi(4) supports both types with their natural
> interfaces. Applications only need to deal with messages, but existing
> applications that contain code to assemble them out of byte streams
> continue to work with negligible performance impact.
> 
> To define "raw" as a lower level than MIDI messages would threaten
> many of those architectural improvements, and it really doesn't make
> much sense when the common links these days, unless you are using the
> MPU on your game port, don't naturally look like bytestreams anyway.
> 
> But a mechanism TBD for an application to be notified if the stream
> it is reading has seen a message error is probably not a bad
> idea. It needs to be designed in such a way that it will not break
> naive programs that can't handle such notification.
> 
> And perhaps a similar mechanism could be implemented if you really
> wanted to produce an invalid data stream for testing hardware. This
> would have to be designed rather carefully because for some types of
> link it will make no sense at all. Over a class-compliant USB link,
> if you could relax midi(4) to let non-MIDI-message bytes get
> transmitted, you would only be violating conditions of the link
> driver and trying to crash USB, not testing your device. But perhaps
> for this very specialized application an ioctl could be added that
> would allow transmission of arbitrary bytes *if and only if the
> underlying link is a UART driver or otherwise capable of doing it*.
> 
> >I'm asking this since i'm writing an user-land midi application that
> >uses the raw device (as opposed to the sequencer device) because it
> >needs to do the error handling.
> 
> I don't know that I yet understand your application's requirements
> well enough to see what types of error handling you anticipate will
> be important and that you need to do. If I better understood what
> functionality you want to provide, I could say more.
>

It's very basic. I have a MIDI filter that keeps state for notes and
for some controllers. I want that state to be flushed and freed if the
MIDI device that created it is disconnected, for instance.

Currently I just do the following (roughly as in the sketch below):

	- check the return values of the read() and write() calls; if
	  there is an error, trigger the "error event" and mark the
	  device as broken (it is no longer used after that)

	- watch for MIDI ACKs; if there is no ACK for more than 300 ms,
	  trigger the "error event"
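
In code this boils down to something like the following; dev_broken()
and the other names are hypothetical stand-ins, only the two checks
and the 300 ms figure come from what I described above:

#include <poll.h>
#include <unistd.h>

#define ACK_TIMEOUT_MS	300	/* max silence once active sensing has been seen */

/* hypothetical: flush the notes/controllers this device created, drop it */
static void
dev_broken(int fd)
{
	close(fd);
}

static void
watch_device(int fd)
{
	struct pollfd pfd;
	unsigned char buf[128];
	ssize_t n, i;
	int sensing = 0;	/* has the device ever sent 0xfe? */

	pfd.fd = fd;
	pfd.events = POLLIN;

	for (;;) {
		/* any traffic resets the watchdog, since we re-poll each time */
		if (poll(&pfd, 1, sensing ? ACK_TIMEOUT_MS : -1) == 0) {
			dev_broken(fd);		/* silence too long: cable unplugged? */
			return;
		}
		n = read(fd, buf, sizeof buf);
		if (n <= 0) {			/* read error: mark the device broken */
			dev_broken(fd);
			return;
		}
		for (i = 0; i < n; i++) {
			if (buf[i] == 0xfe)	/* active sensing: arm the watchdog */
				sensing = 1;
			/* else: feed buf[i] to the filter (not shown) */
		}
	}
}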

> I do see a potentially worrisome trade-off that you are making,
> though. Presumably you would like to include NetBSD among platforms
> your application supports (or you wouldn't be writing here ;), and
> you may know that NetBSD does not (yet) have real-time user processes
> (though there is current Summer-of-Code work in that direction).
> And I gather that your application involves receiving input.
> 
> As things stand, the sequencer device (particularly after the recent
> timecounters import) can do a very nice job of timestamping your
> input, but the timing quality you get by reading rmidi and stamping
> in your user process may be awful in comparison. And that will be a
> constant effect on the performance of your application in normal use,
> whereas the incidence of comm errors you are troubling to detect
> should be low to nil.
> 

That's interesting, but currently I don't need that much accuracy. For
MIDI-only apps around 3 ms precision is good enough, so poll(2) and
gettimeofday(2) do the job very well. For now I'm trying to keep
things as simple as possible, in order to avoid bugs.
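
(For completeness, the stamping I mean is nothing more than reading
the clock when poll(2) wakes up, roughly like this sketch:)

#include <sys/time.h>
#include <stddef.h>

/* current time in milliseconds since an arbitrary origin */
static unsigned long
now_ms(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (unsigned long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}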

> >So my question is (from the user's
> >point of view) how should i do the error handling; is there something
> >i missed? Should i change my application?
> 
> Well, I think that last point is important: the event you're concerned
> about, a comm error on input, should really be rare, and if it happens
> at all, there's not much sense doing anything about it other than
> fixing the problem. MIDI is not a protocol that gives you much to go
> on for reconstructing what data may have been lost.  I do not know
> the volume and rate of input you intend to be handling, but you should
> know that I have not been able with the hardware I have to make any
> error counter go above zero since I increased the umidi packet size.
> 

Same for me; in more than 10 years of MIDI tweaking I've never seen a
comm error. The only errors I've seen were bugs or unplugged cables.

So, the only error I want to handle properly is a MIDI cable
disconnect, which is likely to happen during a real-time performance.
Basically, by "handling properly" I mean: reset all notes/controllers
triggered by the disconnected device, without disturbing the rest of
the performance.
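
By "reset" I essentially mean sending the standard channel-mode
messages on the channels the dead device was feeding, along these
lines (a sketch; midi_out() is a hypothetical helper):

#include <unistd.h>

/* hypothetical helper: write one complete message to the output device */
static void
midi_out(int fd, const unsigned char *msg, size_t len)
{
	(void)write(fd, msg, len);	/* error handling omitted in this sketch */
}

static void
channel_panic(int outfd, unsigned chan)
{
	unsigned char msg[3];

	msg[0] = 0xb0 | (chan & 0x0f);	/* control change on this channel */

	msg[1] = 121; msg[2] = 0;	/* reset all controllers */
	midi_out(outfd, msg, 3);

	msg[1] = 123; msg[2] = 0;	/* all notes off */
	midi_out(outfd, msg, 3);
}

Since some devices ignore "all notes off", walking the note state
table and sending explicit note-offs is probably the more robust
variant, but the idea is the same.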

> We can continue to talk about adding some mechanism you can use to get
> app notification if there is a detected input error. But until that
> exists, I think you can safely assume that your application will run
> without trouble, and advise your user on NetBSD to view the error counts
> if anything seems wrong, and if they're not zero, fix the loose cable or
> dodgy hardware until they stay zero, and that should be that.
> 
> I see no great need to change your application, unless the timing
> quality is important enough to think again about the sequencer.
> 

Thank you for your long reply, you've given me new ideas
for my app ;-)

cheers,

-- 
Alexandre