On Jun 23, 2011, at 4:36:25 AM, Robert Elz wrote:
Date: Wed, 22 Jun 2011 19:30:55 -0400 (EDT)
From: der Mouse <mouse%Rodents-Montreal.ORG@localhost>
| But the interface is much older than that, and, even if it's not
| codified, there's a lot of history behind the notion that userland
| alignment of write() buffers affects, at most, performance, to the
| point where I consider it part of the interface.
Not on access to raw devices it isn't, and never was - what Erik Fair
said was 100% correct - if you're using a raw device, it is up to the
application to meet whatever the requirements of that particular device
are, because one of the properties of raw devices is that they don't
do any kind of rebuffering of data (and the driver must not - that is
part of the interface contract).
The rules vary from device to device; if you don't like this,
don't use raw devices. If you want code that works on a large subset
(possibly all) of raw devices, you need to make it extremely
pessimistic about what it can do: align buffers to a 4 KB (or so)
boundary, and use transfer sizes that are a multiple of 512 bytes
and no bigger than 64 KB.
For fun, I looked at the (online) man pages from 6th Edition Unix,
which is circa 1976. Without exception, the raw disk (hp, hs, rf, rk,
rp), and tape devices (tm only; raw I/O didn't work on ht) required
buffers to be on word boundaries; for the former, the count had to be
a multiple of 512 bytes, and for the tm tape driver the count had to
be even.
In other words, Erik is right, at least if we're talking historically.
Of course, at least there it's documented. (I took a quick glance
at the code, too -- it did appear to check for erroneous parameters,
though I think it just truncated the count in some drivers.)