Question about differences between ethernet drivers and their start() routines
Hello. While looking into an issue with a USB driver I'm working on,
based on the cdce(4) driver, I ran into a question. In drivers like
usb/if_cdce.c, usb/if_kue.c, usb/if_rum.c, etc., the start() routine
dequeues a single mbuf, sends it on its way, and calls it good.
In drivers like pci/if_bge.c, pci/if_wm.c, ic/i82557.c, etc. the start()
routine loops until all packets are dequeued.
In my test driver, what I'm seeing is that if I do a large send which
fills up the send buffers, the transfer stalls, but packets from other
streams on the system still flow out of the interface.
In looking at net/if.c, it looks like the enqueue() routine expects the
start routine to drain the send queue, or at least to make a good effort to
drain it on each call.
How is it that drivers that pick just one mbuf chain off the send queue
each time their start routine is called avoid the problem of having large
TCP transfers stall in the middle of the transmission?
I'm sure I'm missing some piece of knowledge here, as these drivers that
apparently work were written years ago, and others surely would have
noticed a problem like this by now.
Can anyone shed light on how stalled transmissions get kicked back to
life, and why this difference in the behavior of the drivers doesn't
cause problems?