tech-net archive


Re: if_txtimer API to replace (*if_watchdog)()



> On Jan 28, 2020, at 8:29 AM, Taylor R Campbell <campbell+netbsd-tech-net%mumble.net@localhost> wrote:
> 
> You say if_txtimer is logically associated with a tx queue, but _part_
> of the if_txtimer API is never associated with tx queue -- the callout
> is, as far as I can see, only ever one-per-driver.  This model strikes
> me as confusing: half of what if_txtimer does is per-txqueue, and half
> of what it does is per-driver, but the API presents it as one thing --
> if_txtimer.

This is because the simplified variant of the API is there specifically to support simple drivers / devices that have only a single hardware-managed transmit queue and that don't already use a periodic timer.  (There are a bunch of these.)

>> Perhaps I'm engaged in premature optimization, but I wanted to avoid
>> having to use *any* additional locks other than what the driver
>> would already be using internally.
> 
> I think this is premature optimization because the drivers we're
> talking about changing already rely on the kernel lock!

No, we're not.  There are other drivers in the tree that DO NOT use the kernel lock and WOULD STILL USE the core functionality of if_txtimer.  I am trying to make a single API that works well for both the old-and-slow and the new-and-fast use cases.  It seems stupid to just make everything ad hoc.

>>> Which drivers (a) are for hardware devices that can handle >>1GB/sec,
>>> and (b) use if_watchdog -- i.e., which high-throughput drivers would
>>> take advantage of the callout-based mechanism inside your proposed
>>> if_txtimer?
>> 
>> dge(4).
> 
> Does dge(4) actually attain 10GB/sec using only a single tx queue with
> the kernel lock as is?

You'll have to ask ragge@.  I certainly seem to recall he posted impressive performance numbers with it back when it was introduced.

>> My proposal keeps wm(4) at 1 (its own "tick" callout) (because it
>> simply encapsulates the wm(4) approach into an API).  Using bare
>> callouts to implement the tx watchdog timer would mean wm(4) could
>> have up to 17 ("tick" + one callout per hardware tx queue).
> 
> Perhaps I wasn't clear -- I didn't mean using one callout per tx
> queue; I meant using one `tick' callout per driver.  In other words,
> why don't we just convert the drivers that use if_watchdog now to set
> up their own tick callout?

Because for a bunch of drivers that would mean adding duplicate boilerplate code that serves no other purpose.  I was originally going to do exactly that, but figured I'd get pushback for duplicating the same boilerplate across so many drivers, so I decided to provide helper routines that all of those cases could use.  I never imagined I'd have to defend that decision to this extent.

> No change needed to wm(4), except perhaps the time_uptime expiry
> expressions (which as you noted should really use 32-bit atomic
> load/store instead of the racy 64-bit time_uptime).
> 
> For single-queue drivers currently relying on the kernel lock I
> imagine it's perfectly fine to replace ifp->if_timer = 5 by
> callout_schedule(sc->sc_ch, 5*hz).  The only other differences from
> the status quo are that you would say callout_init(...) instead of
> ifp->if_watchdog = ... and you have to call callout_stop/halt in
> detach.  Drivers needing to avoid the callout lock, or multi-queue
> drivers, can just make the callout tick once per second like wm(4)
> does.
> 
> What I see in the if_txtimer proposal is that it's supposed to be used
> per-queue, EXCEPT for single-queue drivers, in which case it also
> serves as another way to spell a tick callout.

IF you want to add a bunch of duplicate, easily-refactored boilerplate code to a bunch of drivers, be my guest.


-- thorpej


