Subject: bpf performance suckage
To: None <tech-net@netbsd.org>
From: Darren Reed <darrenr@reed.wattle.id.au>
List: tech-net
Date: 06/18/2000 05:00:02
Trying to get bpf to work with ipfilter, I was having problems with bpf
not returning the right results (it being an undocumented interface
inside the kernel made it trial and error).  The problem turns out to be
with how fucking bpf_filter() is called.  Upon closer examination, it
is a dead set whacked function:
/*
* Execute the filter program starting at pc on the packet p
* wirelen is the length of the original packet
* buflen is the amount of data present
*/
u_int
bpf_filter(pc, p, wirelen, buflen)
	register struct bpf_insn *pc;
	register u_char *p;
	u_int wirelen;
	register u_int buflen;
{
What this really means is that if "buflen" is 0, "p" is an mbuf pointer,
and if it is not 0, "p" points to a data buffer.  Clear, huh?
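To make that concrete, here is roughly what the two call forms end up
looking like (the variable names here are mine, purely for illustration):

	/* Kernel path: "p" is really an mbuf chain, signalled by buflen == 0. */
	slen = bpf_filter(fcode, (u_char *)m, pktlen, 0);

	/* Userland path (e.g. reading a savefile): "p" is a flat data buffer. */
	slen = bpf_filter(fcode, pkt_data, pktlen, caplen);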
The consequence of this is that if you look at bpf_filter(), there are
blocks of code like this:
		k = pc->k;
		if (k + sizeof(int32_t) > buflen) {
#ifdef _KERNEL
			int merr;

			if (buflen != 0)
				return 0;
			A = m_xword((struct mbuf *)p, k, &merr);
			if (merr != 0)
				return 0;
			continue;
#else
			return 0;
#endif
		}
		A = EXTRACT_LONG(&p[k]);
		continue;
Guess which fucking way it goes for every packet, given that the call
from each network card driver will result in "buflen = 0".  I presume
this hack is there so that the same code works both in drivers and when
you do "tcpdump -r".
Anyway, now that I've grumbled, I'll probably set about fixing this up
in one way or another.  For starters, I'd like to see different damn
prototypes for kernel/non-kernel, and then try to optimize it for the
case where the data is going to be in the first mbuf (like 99% of the time).
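As an illustration of the sort of thing I mean, a hypothetical
kernel-only wrapper (the name is made up, this is not a patch):

	#include <sys/param.h>
	#include <sys/mbuf.h>
	#include <net/bpf.h>

	/*
	 * If everything the filter could look at is contiguous in the
	 * first mbuf, run the ordinary buffer-based fast path over it;
	 * only fall back to the mbuf-aware path when the data is split
	 * across mbufs.
	 */
	static u_int
	bpf_mbuf_filter(struct bpf_insn *pc, struct mbuf *m, u_int wirelen)
	{
		if (m->m_next == NULL || m->m_len >= wirelen)
			return bpf_filter(pc, mtod(m, u_char *), wirelen,
			    m->m_len);

		/* Data spans mbufs: use the existing buflen == 0 hack. */
		return bpf_filter(pc, (u_char *)m, wirelen, 0);
	}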
(Sorry for being obscene; it's almost 5am and this is just sick.)
Darren