Subject: Re: packet capturing
To: Steve Bellovin <>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 01/13/2004 13:44:20
> gives some interesting insights into 
>packet capture architectures.  I knew that stock systems didn't do very 
>well; I'm astonished at how poorly they do at monitoring a network.

Doesn't that rather depend on whether the problem really is in
"packet capture architectures", or in badly designed experiments?

The bpf kernel packet filters I used were double-buffered.  With a
non-degenerate NIC, say 2114x or Intel Pro/100, and increasing the
default bpf packet size through several doublings, I had no trouble at
all keeping up with the (then) fastest networks I could find -- and
this with a Pentium running at 120 or 133 MHz.  This was fairly
well-known at the time amongst practitioners: Vern Paxson knew it, I
knew it, ...

With more modern GbE NICs it is trivial to enable very heavy interrupt
deferral (one interrupt per 50-odd frames, or even more).
In that case, keeping up with runts doesn't require polling.
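(For concreteness, here's the sort of knob-twiddling I mean. The exact
interface names, sysctl names, and values vary by driver and OS, so
treat these as illustrative, not gospel:)

```shell
# Linux: ask the driver to coalesce receive interrupts, firing roughly
# one interrupt per 50 received frames (support varies by driver).
ethtool -C eth0 rx-frames 50

# FreeBSD em(4): per-device receive interrupt delay sysctl
# (name and units are driver-defined; check your own tree).
sysctl dev.em.0.rx_int_delay
```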

The paper doesn't even compare the reported work to the (trivial!)
exercise of upping the bpf buffer size to a more reasonable value
for serious packet capture. On FreeBSD:
	`sysctl -w debug.bpf_bufsize=1048576'.

(FWIW, the max sysctl'able bpf buffer size on my desktop is 4 Mbyte.)
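(For those following along on FreeBSD, the relevant knobs look like
this -- sysctl names as found on my box, so verify against your own
tree before pasting:)

```shell
# Default per-descriptor bpf buffer size, and the ceiling a process
# may request via the BIOCSBLEN ioctl:
sysctl debug.bpf_bufsize debug.bpf_maxbufsize

# Raise both so tcpdump and friends get roomy double buffers:
sysctl -w debug.bpf_bufsize=1048576
sysctl -w debug.bpf_maxbufsize=4194304
```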

For packet capture on nets with lots of tinygrams, there may be merit
(even considerable merit) in making the receive-side NIC driver
polling-based rather than interrupt-driven. But this paper gives no
basis whatever to draw conclusions about the actual merits of the
`stock' tools -- especially given that technology may well have
overtaken much of the claimed benefits.