Subject: Re: NIC driver interface to kernel.
To: None <tech-kern@NetBSD.org>
From: Jochen Kunz <jkunz@unixag-kl.fh-kl.de>
List: tech-kern
Date: 12/14/2003 20:07:39
On Sat, 13 Dec 2003 13:46:09 -0800
Matt Thomas <matt@3am-software.com> wrote:

> if_output takes a media-independent payload (like an IP packet) and
> does media-specific things to it (like adding an Ethernet header), for
> real drivers, placing the packet in the if_snd queue.
Now I understand:
./pdq_ifsubr.c:    ifp->if_output = fddi_output;
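
If I understand correctly, the tail of such a media-specific output
routine ends up looking roughly like this (a sketch only, not the actual
fddi_output code; variable names are illustrative): the media header has
already been prepended, so the packet is in wire format when it is queued
and the driver is kicked via if_start:

    s = splnet();
    IF_ENQUEUE(&ifp->if_snd, m);            /* already in wire format */
    if ((ifp->if_flags & IFF_OACTIVE) == 0)
        (*ifp->if_start)(ifp);              /* "kick" the driver */
    splx(s);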

> if_start is called by the if_output routine to "kick" the driver to
> actually send the packets to the hardware.  IFF_OACTIVE in if_flags
> is used as a primitive flow control.
Ahh, _that_ is the purpose of IFF_OACTIVE.
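
So a driver's if_start would look roughly like this (a sketch with
made-up xx_* names, not real code): drain if_snd into the hardware
transmit ring and set IFF_OACTIVE when the ring is full, so the stack
stops kicking the driver until descriptors are freed again:

    void
    xx_start(struct ifnet *ifp)
    {
        struct xx_softc *sc = ifp->if_softc;
        struct mbuf *m;

        if ((ifp->if_flags & (IFF_RUNNING | IFF_OACTIVE)) != IFF_RUNNING)
            return;

        for (;;) {
            IF_DEQUEUE(&ifp->if_snd, m);
            if (m == NULL)
                break;
            if (xx_encap(sc, m) != 0) {
                /* transmit ring full: put it back, mark us busy */
                IF_PREPEND(&ifp->if_snd, m);
                ifp->if_flags |= IFF_OACTIVE;
                break;
            }
        }
    }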

> By the time the driver gets it out of if_snd, it is in wire format.
> But it may be less than the minimum packet length and the driver/hardware
> is responsible for padding it out.
The hardware can be configured to do padding.
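
Should I ever have to do it in software, I assume the usual fallback for
the Ethernet case would be something like this (sketch only; sc_txbuf is
a hypothetical DMA buffer the mbuf chain has already been copied into):

    len = m->m_pkthdr.len;
    if (len < ETHER_MIN_LEN - ETHER_CRC_LEN) {
        memset(sc->sc_txbuf + len, 0,
            ETHER_MIN_LEN - ETHER_CRC_LEN - len);
        len = ETHER_MIN_LEN - ETHER_CRC_LEN;
    }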

> No.  Allocate one cluster mbuf per received frame.  Since cluster
> mbufs are 2KB each can hold one full-size ethernet packet without a
> problem.
Must I use cluster mbufs, or is that just the way most drivers do it
because it is easy to implement?
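
Just so I have the pattern right, the per-frame cluster allocation you
describe would be the usual MGETHDR/MCLGET sequence, roughly:

    MGETHDR(m, M_DONTWAIT, MT_DATA);
    if (m == NULL)
        return ENOBUFS;
    MCLGET(m, M_DONTWAIT);
    if ((m->m_flags & M_EXT) == 0) {
        m_freem(m);
        return ENOBUFS;
    }
    m->m_len = m->m_pkthdr.len = MCLBYTES;  /* 2 kB, fits a full frame */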

The hardware has a single list of all free receive data buffer
descriptors. The chip takes as many buffers from that list as it needs
to store the frame it is currently receiving. Once the frame is received
completely, the used buffers are removed from the free list and assigned
to the receive frame descriptor by the hardware. Using mbuf chains for
this list would reduce memory consumption: a single small packet would
only use a single mbuf, whereas with an mbuf cluster a lot of memory in
the cluster would be wasted on small packets.
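
What I have in mind is roughly this (a sketch; xx_add_rxbuf() and the
counters are made up): keep the chip's free list filled with plain,
non-cluster mbufs, so each buffer offers only MHLEN bytes and a small
packet occupies a single small buffer instead of a whole cluster:

    while (sc->sc_nfreebufs < XX_NRXBUFS) {
        MGETHDR(m, M_DONTWAIT, MT_DATA);
        if (m == NULL)
            break;                  /* refill again later */
        m->m_len = MHLEN;
        xx_add_rxbuf(sc, m);        /* hand the buffer to the chip */
        sc->sc_nfreebufs++;
    }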

My concern is: a short receive frame descriptor queue can overrun when
small packets come in fast, while a long receive queue wastes memory when
every frame descriptor has an associated mbuf cluster of 2 kB. It is
possible to have a long receive frame descriptor queue together with a
long queue of _small_ receive data buffer descriptors. An overrun would
only occur when the driver runs out of data buffer descriptors. The
length of the data buffer descriptor queue, and thus the amount of memory
used, can be tuned to the interface speed and interrupt latency. That way
it is possible to handle both small and big packets arriving at high
rates in a memory-efficient way.
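
To put rough numbers on it (illustration only): at 100 Mbit/s about
12.5 kB can arrive during 1 ms of interrupt latency. A burst of roughly
190 minimum-size 64-byte frames would then tie up about 380 kB if every
frame gets its own 2 kB cluster, but only about 24 kB with 128-byte data
buffers, and large frames are still covered because they simply chain
several of the small buffers.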
--


tschüß,
       Jochen

Homepage: http://www.unixag-kl.fh-kl.de/~jkunz/