Subject: Re: packet capturing
To: Darren Reed <darrenr@mail.netbsd.org>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 01/22/2004 11:17:32
In message <Pine.NEB.4.58.0401220457370.26031@mail.netbsd.org>,
Darren Reed writes:
>On Wed, 21 Jan 2004, Jonathan Stone wrote:
>
>> I still don't see where the paper has _anything_ to say about
>> *BPF* performance, above and beyond Table 1. (Unless you count Figure
>> 1, which is grounds to reject the paper, all on its own).
>
>What about table 4? Although it's hard to tell if the circular
>buffer has made any real measurable difference (99.5% -> 99.9%).
Sure, maybe. I took pains to acknowledge the ringbuffer as a
possibly-worthwhile idea elsewhere. But the ringbuffer should be
compared to a ``typical skilled use'' bpf (i.e., something like the
configuration -current has now), rather than the FreeBSD compiled-in
defaults. Whether you cite mjr's experience or Vern Paxson's, it's
well-known that the default bpf buffers aren't adequate for sustained
capture.
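By ``skilled use'' I mean something like the following: ask for a
big buffer with BIOCSBLEN before attaching the descriptor to the
interface. An untested sketch (the device path and sizes are just
illustrative, and error handling is abbreviated):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    open_bpf_big(const char *ifname)
    {
            struct ifreq ifr;
            u_int bufsize = 1024 * 1024;    /* 1 Mbyte */
            int fd;

            if ((fd = open("/dev/bpf0", O_RDONLY)) < 0)
                    return -1;

            /* BIOCSBLEN must precede BIOCSETIF; the kernel bounds
             * the request by its compiled-in or sysctl'ed maximum. */
            (void)ioctl(fd, BIOCSBLEN, &bufsize);

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
            if (ioctl(fd, BIOCSETIF, &ifr) < 0) {
                    close(fd);
                    return -1;
            }

            /* Read back what we actually got. */
            (void)ioctl(fd, BIOCGBLEN, &bufsize);
            fprintf(stderr, "bpf buffer: %u bytes\n", bufsize);
            return fd;
    }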
I do apologize, sincerely, if I appeared to be flaming. Equally
sincerely, I didn't (and don't) see why this paper got so much
attention here: the data it does present on bpf is, to my eyes, pretty
much a strawman. (It's good that the strawman does so well, but I
still consider it a flaw in the paper: I think the paper would be much
stronger if it quantified `best practice' bpf as well as `FreeBSD
default'.)
>FWIW, my personal interest in this is I'm working on a project where we
>need 99.999%, at least, of all packets and preferably 100%, guaranteed,
>on a box with 4x100BT connections running at full speed.
Personally, I would recommend two dual-port gig-e cards with
aggressive interrupt deferral. (Probably cheaper these days than a quad
10/100 card.) Or look at the Intel Pro/1000 four-port card.
>I don't read source-changes (please, no flames) so unless I see
>something mentioned here (or elsewhere) about what people are
>doing, I'm often in the dark (just so you know.)
Fair enough.
>Anyway, in a later email you confused me. You said:
>> For the record: I've committed changes to libpcap that will automatically
>> probe for, and use, bpf buffers up to 4 Mbytes without recompilation.
>> I've also increased the default limit for bpf BIOCSBLEN to 1 Mbyte,
>> and confirmed that libpcap uses a 1 Mbyte bpf buffer.
>
>Shouldn't BIOCSBLEN also accept up to 4MB here, then?
>Otherwise I'm confused by this comment...?
The compiled-in default limit in -current is now 1 Mbyte. That limit
is sysctl'able up to 16 Mbytes, which I chose deliberately as ...
excessive. (The only reason I saw for a limit at all is to avoid
exhausting kva space or mbuf cluster space. There's a consensus that
the default limit should be computed dynamically from parameters like
physmem; I would add the others, like kva and mbuf-cluster limits.)
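For illustration, a program could raise that ceiling itself; untested,
and the node name net.bpf.maxbufsize is from memory (check the actual
sysctl tree), and writing it needs root:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int newmax = 4 * 1024 * 1024;   /* anything up to 16 Mbytes */
            int oldmax;
            size_t len = sizeof(oldmax);

            if (sysctlbyname("net.bpf.maxbufsize", &oldmax, &len,
                &newmax, sizeof(newmax)) < 0) {
                    perror("sysctlbyname");
                    return 1;
            }
            printf("bpf maxbufsize: %d -> %d\n", oldmax, newmax);
            return 0;
    }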
If you want to bump the libpcap limit up to 16 Mbytes, I'd say that
was justifiable, given the sysctl limit on BIOCSBLEN.
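The probe-without-recompilation change I described is, in rough
outline, the loop below. This is the shape of the thing from memory,
not the exact libpcap code, and the PROBE_CEILING/PROBE_FLOOR values
are illustrative:

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <string.h>

    #define PROBE_CEILING   (4 * 1024 * 1024)       /* 4 Mbytes */
    #define PROBE_FLOOR     (32 * 1024)

    int
    attach_with_probe(int fd, const char *ifname)
    {
            struct ifreq ifr;
            u_int v;

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));

            for (v = PROBE_CEILING; v >= PROBE_FLOOR; v >>= 1) {
                    /* The kernel may clamp or reject an oversized
                     * request; either way, a successful BIOCSETIF
                     * means the buffer we ended up with is usable. */
                    (void)ioctl(fd, BIOCSBLEN, &v);
                    if (ioctl(fd, BIOCSETIF, &ifr) >= 0)
                            return 0;       /* attached */
            }
            return -1;
    }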
>> If we want to change that, I think we'd have to change our libpcap and
>> the tcpdump.org version, and wait for third-party apps to catch up.
This is something I ran into a good 5 or 6 years back... I'll have to
see if I can find the libpcap I reworked for the packet capture in my
doctoral dissertation. ISTR that error reporting, and resizing the
snaplen, were the other big gotchas. I'll see what I can find.