Subject: Re: Various NetBSD kernel questions to help with port of FreeBSD "zaptel" drivers.
To: David Young <firstname.lastname@example.org>
From: Thor Lancelot Simon <email@example.com>
Date: 11/08/2004 13:45:32
On Mon, Nov 08, 2004 at 12:25:14PM -0600, David Young wrote:
> You seem to advocate splitting policy and mechanism between userland
> (Asterisk routes calls) and the kernel (merely copies audio frames from
> line card to line card, or from line card to VoIP, or VoIP to VoIP)
> so that audio is not needlessly copied between userland and the kernel.
> Is that about right? Can you give a positive account of what the kernel
> architecture will look like?
That's heading in the right direction, but it's not all the way in the
right direction. Telephony cards usually have a private interconnect
(both intra-card and inter-card, e.g. on a TDM bus that runs on cables
between cards in one chassis or even in several chassis which may house
*different* "host CPUs") for switching audio data. You don't just want
to avoid sending the audio to userland -- you don't want it in host RAM
at all, whether in kernel or user space, and if you can manage it you
don't want it on the host bus (if it's fast/wide PCI, maybe the host
bus can cope -- *maybe*. If it's ISA, however, forget it.)
I've been out of the telephony business for some years; I left just as
VoIP heated up, and I know that that's changed things. But I think it
is instructive to look at what Dialogic (now part of Intel) did here:
you need to provide an API that _works as if_ the audio is always
switched on a separate bus, even if sometimes it may have to traverse
the host bus (e.g. a card-to-card transfer across the PCI bus) or even
host memory (e.g. a card-to-card transfer from a card with a SCBUS
or PEM voice fabric, but PIO ISA host interface, to a PCI-only
voice recognition engine). And *that* means that you need to make your
API treat a user buffer in host memory as just one of many possible
endpoints; the fundamental operation is not audio extraction but audio
*switching*, and "data has arrived in your buffer" is just one event out
of many possible events, not very different from "A token from the voice
recognizer has arrived in your buffer" or "a call announcement has arrived
in your buffer" or "I've patched channel 7 to channel 3".
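To make that concrete, here is a minimal sketch in C of what such a
switching-centric API might look like. Every name here (sw_connect,
the endpoint and event types, the crosspoint table) is invented for
illustration and is not Dialogic's actual API; the point is only that
endpoints of all kinds are named uniformly and the primitive operation
is connecting two of them, not reading audio into host memory:

```c
/* Hypothetical sketch: endpoints of every kind -- TDM timeslot, user
 * buffer, VoIP stream -- are named uniformly, and the fundamental
 * operation is *connecting* two endpoints, not extracting audio.
 * All identifiers are invented for illustration. */
#include <assert.h>

enum ep_kind { EP_TDM_SLOT, EP_USER_BUF, EP_VOIP_STREAM };

struct endpoint {
        enum ep_kind kind;
        int          id;        /* timeslot, buffer handle, or stream id */
};

/* One event stream covers "data arrived", "call announced", and
 * "channels patched" alike, as argued above. */
enum ev_type { EV_DATA_READY, EV_CALL_ANNOUNCE, EV_PATCHED };

struct sw_event {
        enum ev_type    type;
        struct endpoint src, dst;
};

#define MAX_XPT 16
static struct { struct endpoint a, b; int live; } xpt[MAX_XPT];

/* Connect two endpoints.  A real driver would program the card's TDM
 * crosspoint when both ends live on the voice bus, and fall back to a
 * host-bus or host-memory copy path only when it must; here we just
 * record the connection and report it as an event. */
static int
sw_connect(struct endpoint a, struct endpoint b, struct sw_event *ev)
{
        int i;

        for (i = 0; i < MAX_XPT; i++) {
                if (!xpt[i].live) {
                        xpt[i].a = a;
                        xpt[i].b = b;
                        xpt[i].live = 1;
                        ev->type = EV_PATCHED;
                        ev->src = a;
                        ev->dst = b;
                        return 0;
                }
        }
        return -1;      /* switch fabric full */
}
```

Patching channel 7 to channel 3 and delivering audio to a user buffer
then go through the same call, and "I've patched channel 7 to channel 3"
arrives as an EV_PATCHED event rather than as data in host RAM.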
> I would ask of the original poster, does the Asterisk architecture
> observe this policy/mechanism, userland/kernel split?
I know that the Dialogic drivers for Linux work -- or worked -- generally
like those for other operating systems, and those worked as I described
above. And I am pretty sure Asterisk works with the Dialogic drivers
under Linux, though it also works with many other kinds of telephony
hardware.
Coincidentally, since leaving the telephony industry I've worked with
both one of the principal developers of Asterisk and some of the folks
at Dialogic who developed and maintained their drivers and API for
a long time. Maybe I'll see if I can get them together for beers and
try to learn some Grand Unified Theory of all this.