Re: Mbufs stored in RX queues
My understanding is that the initial buffer allocations are there to
avoid buffer allocation in the data path. The packet should be read
into a buffer taken from the pool and returned to the pool when the
upper layer is done with it. Only in the worst case, when the pool is
empty, should we allocate a new buffer. But is that a realistic
situation? Can't we handle it by sizing the pool so that the chance of
running out of buffers becomes very small?
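The pool scheme described above can be sketched as a simple LIFO free
list. This is a minimal userland stand-in, not real mbuf or pool(9)
code: `buf_pool`, `pool_get`, and `pool_put` are hypothetical names, and
plain `malloc()` stands in for the slow-path allocator.

```c
#include <assert.h>
#include <stdlib.h>

#define POOL_SIZE 4      /* assumed pool depth, tuned per driver */
#define BUF_SIZE  2048

/* Hypothetical buffer pool: a LIFO free list, preallocated so the
 * data path normally never has to call the allocator. */
struct buf_pool {
	char *free[POOL_SIZE];
	int nfree;
};

static void
pool_init(struct buf_pool *p)
{
	p->nfree = 0;
	for (int i = 0; i < POOL_SIZE; i++)
		p->free[p->nfree++] = malloc(BUF_SIZE);
}

/* Take a buffer from the pool; fall back to malloc() only when the
 * pool is empty -- the worst case discussed above. */
static char *
pool_get(struct buf_pool *p)
{
	if (p->nfree > 0)
		return p->free[--p->nfree];
	return malloc(BUF_SIZE);
}

/* The upper layer returns the buffer: keep it if there is room,
 * otherwise release it to the system. */
static void
pool_put(struct buf_pool *p, char *buf)
{
	if (p->nfree < POOL_SIZE)
		p->free[p->nfree++] = buf;
	else
		free(buf);
}
```

Making the allocation-failure path rare is then purely a matter of
choosing POOL_SIZE large enough for the expected burst of in-flight
packets.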
On Wed, Dec 21, 2011 at 8:03 PM, <vincent%labri.fr@localhost> wrote:
> I'm currently rewriting a MAC driver, and was wondering why in many
> drivers there is a queue of mbufs allocated during init for RX which
> will be in "external storage" mode.
> (Of course, if the DMA engine is smart enough to fill mbufs internally,
> they have to be in the queue beforehand, but it's not my case.)
> The RX interrupt will usually proceed like this:
> 1. allocate new mbuf; if impossible, drop packet
> 2. fill "old" mbuf in rxq
> 3. put new mbuf in rxq
> 4. pass old mbuf to upper layer (which frees it)
> (with bus_dma incantations interspersed)
> It looks to me that holding a number of mbufs in the rxq is useless,
> since we will fail to keep the incoming packet anyway if we can't
> allocate a new mbuf when the packet arrives.
> Could anyone please help me see when this could be useful? From my
> perspective, it looks like passing up a freshly allocated mbuf would
> make us spend exactly the same amount of time in the interrupt handler,
> waste less memory, and simplify the code.
> I could buy that switching between two mbufs reduces latency to the
> upper layer a bit, because one operation can be done to prepare the new
> mbuf after the old mbuf has been passed up.
> You can look for example at nfe_rxeof() in dev/pci/if_nfe.c, but I picked
> it randomly among a few other drivers and it seemed like a common habit.
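The four-step RX-completion pattern quoted above can be sketched in a
self-contained way. This is a hedged stand-in, not driver code:
`rx_slot`, `rx_complete`, and `buf_alloc` are hypothetical names, plain
buffers stand in for mbufs, and the bus_dma load/unload steps are
omitted.

```c
#include <assert.h>
#include <stdlib.h>

#define BUF_SIZE 2048

/* One ring slot: the buffer currently posted for the next packet. */
struct rx_slot {
	char *buf;
};

/* Simulated allocator; setting fail_alloc forces the "allocation
 * failed" branch so the drop path can be exercised. */
static int fail_alloc = 0;

static char *
buf_alloc(void)
{
	return fail_alloc ? NULL : malloc(BUF_SIZE);
}

/*
 * One RX completion, following the steps in the mail:
 *  1. allocate a new buffer; if that fails, drop the packet (the old
 *     buffer stays in the ring slot so reception can continue),
 *  2./3. swap the new buffer into the slot,
 *  4. return the old buffer, which holds the received packet, so the
 *     caller (the "upper layer") can consume and free it.
 * Returns NULL when the packet had to be dropped.
 */
static char *
rx_complete(struct rx_slot *slot)
{
	char *new_buf = buf_alloc();
	if (new_buf == NULL)
		return NULL;            /* drop; slot keeps its old buffer */
	char *old_buf = slot->buf;  /* the packet just received */
	slot->buf = new_buf;        /* slot is ready for the next packet */
	return old_buf;
}
```

Note that the drop path never leaves the slot without a buffer, which
is exactly why the old mbuf is only handed up after a replacement has
been secured.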