Subject: RE: ipfilter performance with 'count' rules on NetBSD-1.4/i386
To: Martin Husemann <firstname.lastname@example.org>
From: Erik Rungi <email@example.com>
Date: 09/15/1999 10:14:47
On Wed, 15 Sep 1999, Martin Husemann wrote:
> > So you just need a trickier one. You need some kind of meta term like
> > ``... count 192.67/16'', that will recursively expand it bit by bit,
> > creating the obvious binary tree.
> But if it's that regular you should be able to process this whole set of
> rules in O(1), not even wasting space for a binary tree. So the question is:
> is this just the first time or a corner case, not likely to appear again?
I think the problem is there for anyone who wants to add up individual counts
on every IP in a large address range. I'd consider anything more than a few
consecutive class C networks "large". The current practical solution would
seem to be building a giant tree-based configuration file. Did I mention that
my kernel panics when trying to load really big configuration files, and that
load time seems to increase more than linearly with the number of rules?
An O(1) processing-time approach seems like a spiffy solution to me. If you
wanted to count each IP in a large block of addresses, a statement that said
"count each ip in this range" would be extremely simple to deal with in the
configuration file, and internally I suppose you could have a big hash or
something to store the counts (I don't really know if this makes sense, I'm
no kernel programmer eh). It wouldn't be the most efficient on memory, but it
would be both the fastest and simplest way of doing it.
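A minimal userland sketch of that idea (Python, not kernel code; the range and packet list are made up): one hash table keyed by source address replaces the entire rule tree, so each packet costs a single lookup regardless of how big the range is.

```python
import ipaddress
from collections import Counter

# Hypothetical "count each ip in this range" implemented as a hash:
RANGE = ipaddress.ip_network("192.67.0.0/16")
counts = Counter()

def count_packet(src_ip):
    """O(1) per packet: bump the hash entry if the source is in RANGE."""
    addr = ipaddress.ip_address(src_ip)
    if addr in RANGE:
        counts[addr] += 1

# Feed it a few example source addresses:
for pkt in ["192.67.1.1", "192.67.1.1", "192.67.200.9", "10.0.0.1"]:
    count_packet(pkt)

print(counts[ipaddress.ip_address("192.67.1.1")])  # 2 hits
```

Memory grows only with the number of addresses actually seen, not with the size of the range, which is the trade-off described above.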