Subject: Re: NetBSD and large pps
To: None <tls@rek.tjls.com>
From: Jonathan Stone <jonathan@dsg.stanford.edu>
List: tech-net
Date: 12/03/2004 16:29:41
A small technical quibble here:
In message <20041203150350.GA2746@panix.com>,
Thor Lancelot Simon writes:


>Polling just ignores network interrupts completely, and enforces a strict
>latency/throughput trade-off by reading from the network device according
>to a timer.  This avoids interrupt-service overhead, at the expense of
>significant software complexity and of always making the _worst-case_
>latency decision, rather than treating the increased latency as an upper
>bound.

Mmm, no, not necessarily always the _worst-case_ latency decision. If
you do pure polling, then arrived packets are served in a
metronome-like fashion, whenever the polling timer fires.  A given
received packet could have arrived just after the last "metronome"
tick; or *just* ahead of the following tick; or anywhere in between.
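To make that concrete, here's a toy model (plain Python, nothing to do
with actual NetBSD code; the 1 kHz period is a made-up number): under
pure polling, a packet's latency is simply the time until the next
tick, so it lands anywhere in (0, T] depending on where in the polling
interval the packet happened to arrive.

```python
# Toy model of pure polling: packets are only serviced at metronome
# ticks, every `period` seconds.  The period is a made-up number.
POLL_PERIOD = 0.001  # hypothetical 1 kHz polling timer

def polling_latency(arrival, period=POLL_PERIOD):
    """Time from a packet's arrival to the next polling tick."""
    return period - (arrival % period)

# Arrive just after a tick: wait out nearly the whole period.
late = polling_latency(1e-7)
# Arrive just ahead of the next tick: serviced almost immediately.
early = polling_latency(0.0009999)
assert early < late <= POLL_PERIOD
```

So the worst case is the full period T, but the *typical* latency is
spread across the whole interval, roughly T/2 on average.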

In contrast, if you turn on really aggressive interrupt deferral, you
might wait tens or hundreds of small-packet times (in the hope of
receiving more packets) after getting one solitary packet. With
multiple bge NICs, interrupt sharing, and _really_ heavy-handed
interrupt deferral, I've seen aggregate interrupt rates lower than the
rate at which I'd have configured pure polling.  You can figure out
what that means for which scheme has the worse worst-case latency :-/.
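For comparison, here's the same kind of toy model for timer-based
interrupt deferral (hypothetical parameters; the real bge moderation
knobs differ): the interrupt fires when either a frame threshold trips
or a timeout expires, whichever comes first, so a solitary packet eats
the whole timeout.

```python
# Toy model of interrupt coalescing, not actual bge driver logic: the
# interrupt fires when `max_frames` packets have accumulated, or when
# `timeout` seconds have passed since the first unserviced packet,
# whichever comes first.  Both parameters are made-up numbers.
def interrupt_time(arrivals, max_frames=8, timeout=0.0005):
    """When the interrupt fires for a burst starting at arrivals[0]."""
    deadline = arrivals[0] + timeout
    if len(arrivals) >= max_frames:
        # Frame threshold trips on the max_frames-th arrival, if sooner.
        return min(arrivals[max_frames - 1], deadline)
    return deadline

# A dense burst is serviced as soon as the frame threshold trips...
burst = [i * 1e-6 for i in range(8)]   # 8 packets, 1 us apart
# ...but a solitary packet waits out the entire timeout, which a
# heavy-handed setting can push past a sane polling period.
assert interrupt_time(burst) < interrupt_time([0.0]) == 0.0005
```

Crank the timeout past your polling period and the deferral scheme's
worst case is strictly worse than the metronome's.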

I do see what (I think) you mean, but I think your wording isn't quite
right. It wouldn't (for example) quite get a 100% score from me as an
answer to a PhD-quals exam. Tho' maybe we don't need to be that
precise here. (No dig at you; Finals season is upon some of us.)