tech-net archive


Re: rbuf starvation in the iwn driver



On Mon, Apr 05, 2010 at 08:38:41AM -0600, Sverre Froyen wrote:
> Hi,
> 
> I have noticed that the current iwn driver will sometimes lock up completely.
> When this occurs, the error count (as reported by netstat -i) keeps
> increasing and no packets are received.
> 
> Here is what appears(*) to happen (amd64 / current):
> 
> The driver is using rbufs to store received packets. It allocates one rbuf
> per RX ring plus 32 extra. The extra buffers are used by iwn_rx_done as
> shown in this code fragment:
> 
>         rbuf = iwn_alloc_rbuf(sc);
>         /* Attach RX buffer to mbuf header. */
>         MEXTADD(m1, rbuf->vaddr, IWN_RBUF_SIZE, 0, iwn_free_rbuf,
>             rbuf);
>         m1->m_flags |= M_EXT_RW;
> 
> If there are available rbufs, iwn_alloc_rbuf returns one rbuf and decrements 
> the number-of-free-rbufs counter. Otherwise, it returns null. iwn_free_rbuf 
> returns the rbuf to the free list and increments the free counter. It is 
> called automatically by the network stack.
> 
> Monitoring the number-of-free-rbufs counter during network traffic, I find
> that it normally stays at 32, occasionally dropping into the twenties.
> Sometimes, however, the count will abruptly jump to zero. At this point,
> the free count does not recover but remains at zero for a *long* time. The
> interface does not receive any packets as long as the driver has no free
> rbufs. After about ten minutes, I see a flurry of calls to iwn_free_rbuf
> and the free count returns to 32. At this point the interface is working
> properly again.
> 
> What to do about this?
> 
> Can the mbuf code be modified not to hold on to the rbufs for as long as
> it does? (I do not know whether the received data sitting in the rbufs has
> been transferred to the userland code yet, but it seems likely that it
> has.)

Not necessarily; it can sit in some socket buffer.
What this driver is doing looks somewhat wrong to me.
It can use its rbufs as external mbuf storage and pass them to the network
stack as long as there are enough free rbufs. When the number of free rbufs
drops below a limit, it should stop giving them to the network stack,
and copy the received data into an ordinary mbuf cluster instead.
This is what most drivers using private storage for receive do.
See for example if_nfe.c:
                        if ((jbuf = nfe_jalloc(sc, i)) == NULL) {
                                if (len > MCLBYTES) {
                                        m_freem(mnew);
                                        ifp->if_ierrors++;
                                        goto skip1;
                                }
                                MCLGET(mnew, M_DONTWAIT);
                                if ((mnew->m_flags & M_EXT) == 0) {     
                                        m_freem(mnew);
                                        ifp->if_ierrors++;
                                        goto skip1;
                                }
 
                                (void)memcpy(mtod(mnew, void *),
                                    mtod(data->m, const void *), len);  
                                m = mnew;
                                goto mbufcopied;
                        } else {
                                MEXTADD(mnew, jbuf->buf, NFE_JBYTES, 0, nfe_jfre
[...]
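To make the policy concrete, here is a minimal user-space sketch of that
low-watermark logic. The names (NRBUFS, RBUF_LOW_MARK, the helper
functions) are hypothetical, not the actual iwn identifiers; it only
models the counter behaviour, not real mbuf handling:

	#include <stdio.h>

	/* Illustrative numbers only. */
	#define NRBUFS        32   /* spare rbufs beyond the RX ring        */
	#define RBUF_LOW_MARK  8   /* below this, copy instead of attaching */

	static int rbuf_free_count = NRBUFS;

	/*
	 * Returns 1 if the driver may hand an rbuf to the stack as
	 * external mbuf storage, 0 if it should copy the frame into an
	 * ordinary cluster so the rbuf stays with the driver.
	 */
	static int
	rx_use_external_storage(void)
	{
		if (rbuf_free_count > RBUF_LOW_MARK) {
			rbuf_free_count--;	/* rbuf now owned by stack */
			return 1;
		}
		return 0;			/* copy path: rbuf kept */
	}

	/* Called (eventually) by the stack when the mbuf is freed. */
	static void
	rx_return_rbuf(void)
	{
		rbuf_free_count++;
	}

	int
	main(void)
	{
		int attached = 0, copied = 0;

		/* Simulate 32 received frames that the stack holds on to. */
		for (int i = 0; i < 32; i++) {
			if (rx_use_external_storage())
				attached++;
			else
				copied++;	/* interface keeps receiving */
		}
		printf("attached=%d copied=%d free=%d\n",
		    attached, copied, rbuf_free_count);
		return 0;
	}

With these numbers the first 24 frames take the attach path, after which
the copy path kicks in and the free count never reaches zero, so the
interface keeps receiving even while the stack sits on the buffers.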


-- 
Manuel Bouyer <bouyer%antioche.eu.org@localhost>
     NetBSD: 26 years of experience will always make the difference
--

