Subject: Re: NIC driver interface to kernel.
To: Jochen Kunz <firstname.lastname@example.org>
From: None <email@example.com>
Date: 12/14/2003 22:39:51
> The hardware has a single list of all free receive data buffer
> descriptors. The chip takes as many buffers from that list as it needs
> to store the frame it is receiving at the moment. Once the frame is
> received completely, the used buffers are removed from the free list and
> assigned to the receive frame descriptor by the hardware. Using mbuf
> chains for this list would reduce memory consumption. A single small
> packet would only use a single mbuf. With a mbuf cluster there would be
> a lot of wasted memory in the cluster for small packets.
If you see that as a problem, you could always copy the data out of the
cluster where you just received the packet and into a single small mbuf.
But today that is normally not a problem.
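To make the idea concrete, here is a toy model of that copy-out step. All
the names (receive_finish, CLUSTER_SIZE, SMALL_THRESHOLD) are invented for
illustration; a real driver would use the mbuf API rather than malloc, but
the decision logic is the same: small frames get copied into a tight
allocation so the 2 kB cluster can be recycled at once, large frames keep
the cluster to avoid the copy.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define CLUSTER_SIZE    2048   /* one mbuf cluster */
#define SMALL_THRESHOLD  128   /* fits in a plain mbuf's own storage */

/*
 * Hypothetical sketch: hand a just-received frame to the stack.
 * Small packets are copied out and the cluster freed immediately;
 * large packets keep the cluster.  *kept_cluster reports the choice.
 */
unsigned char *
receive_finish(unsigned char *cluster, size_t len, int *kept_cluster)
{
	if (len <= SMALL_THRESHOLD) {
		unsigned char *m = malloc(len);
		memcpy(m, cluster, len);
		free(cluster);          /* cluster back to the free pool */
		*kept_cluster = 0;
		return m;
	}
	*kept_cluster = 1;              /* skip the copy for big frames */
	return cluster;
}
```

The trade-off is one memcpy per small packet against holding a 2 kB
cluster for the lifetime of a 64-byte frame; on modern machines the copy
is cheap enough that most drivers do not bother.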
> My concern is: A short receive frame descriptor queue can overrun when
> small packets come fast. A long receive queue wastes memory when every
> frame descriptor has an associated mbuf cluster of 2 kB. It is possible
> to have a long receive descriptor queue with a long queue of _small_
> receive data buffer descriptors. An overrun would occur when the driver
The DELQA-PLUS driver in 2BSD does something clever in this area:
the card has a fixed number of receive descriptors (32 of them),
and allocating a full-size buffer for every one would take too much
memory on a PDP11. So Steven Schultz did an implementation where
only a small number of buffers were in use at a time, and when a packet
was received its buffer was re-added some descriptor entries ahead.
There was always a bunch of receive descriptors unavailable.
But, this is usually a non-problem, even on machines like vax, taking
up 20k (for 10 receive descriptors, or something) is practically free.
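A minimal sketch of that trick, with invented names (ring_init,
ring_harvest, NDESC, NBUF) rather than anything from the actual 2BSD
driver: the ring has NDESC descriptors but only NBUF real buffers, and
when the frame in slot i is harvested (the driver copies the data out
into mbufs first), the buffer is re-armed NBUF slots ahead, so a sliding
window of NBUF descriptors is always available to the chip.

```c
#include <assert.h>
#include <stddef.h>

#define NDESC 32   /* hardware ring size, fixed by the card */
#define NBUF   8   /* buffers actually committed */

struct desc {
	void *buf;   /* NULL means slot not armed */
};

static struct desc ring[NDESC];
static char bufs[NBUF][2048];

void
ring_init(void)
{
	for (int i = 0; i < NDESC; i++)
		ring[i].buf = NULL;
	for (int i = 0; i < NBUF; i++)   /* arm only the first NBUF slots */
		ring[i].buf = bufs[i];
}

/*
 * Slot i has a completed frame: the caller copies the data out, then
 * the same buffer is handed to the slot NBUF entries ahead, keeping
 * the armed window one step in front of the hardware.
 */
void *
ring_harvest(int i)
{
	void *b = ring[i].buf;
	ring[i].buf = NULL;
	ring[(i + NBUF) % NDESC].buf = b;
	return b;
}
```

The memory bound is NBUF buffers regardless of the 32-entry ring, at the
cost that a burst longer than NBUF back-to-back frames will hit an
unarmed descriptor and drop.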