Subject: Re: IPSEC in GENERIC
To: Robert Elz <kre@munnari.OZ.AU>
From: Jonathan Stone <jonathan@Pescadero.dsg.stanford.edu>
List: tech-kern
Date: 02/21/2006 09:34:43
In message <16707.1140515640@munnari.OZ.AU>, Robert Elz writes:


>I can easily believe this, and this is the kind of response I was
>looking for in my original query (I was not looking for starting a
>debate on LKMs - I really had not imagined LKMs for IPSEC purposes).
>
>[Aside: I know you're not the only, or first, person to have made
>this comment, I chose your message to reply to because of other
>comments you made.]
>
>What I'd prefer to know though is what "measureable" means, and how
>that affects GENERIC.   That is, I don't really care much if a GENERIC
>kernel isn't suitable for your research - and I really doubt that
>you do either.   If you want IPSEC (or the KAME IPSEC) disabled, you
>know how to make that happen...

Late last year, I privately sent Thor and a couple of others the same
idea for measuring IPsec overhead.  At the time I believe I quoted
about a 50% penalty, on the machines and network I had, for a
configuration with IPsec enabled and a handful of SAs active.

In other words, my recollection is that peak UDP receive rates, as
measured by this quick-and-dirty metric (but one passed on by
word-of-mouth in the research community), roughly halved.

I didn't keep careful records; I found the results too depressing.
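For reference, the quick-and-dirty metric amounts to something like
the sketch below: blast fixed-size UDP datagrams at a receiver and
count how many arrive in a fixed interval, once with IPsec disabled
and once with SAs configured.  This is my rough reconstruction, not
the actual tool passed around; the loopback endpoint, payload size,
and duration are arbitrary stand-in values.

```python
import socket
import threading
import time

PAYLOAD = b"x" * 1024   # 1 KiB datagrams (arbitrary size)
DURATION = 1.0          # seconds to blast (arbitrary)

# Receiver: bind to a free loopback port and count arriving datagrams
# until told to stop.
rsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rsock.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = rsock.getsockname()
rsock.settimeout(0.2)
counts = [0]
stop = threading.Event()

def receive():
    while not stop.is_set():
        try:
            rsock.recv(2048)
            counts[0] += 1
        except socket.timeout:
            pass

t = threading.Thread(target=receive)
t.start()

# Sender: blast datagrams as fast as the stack will take them.
ssock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
deadline = time.time() + DURATION
sent = 0
while time.time() < deadline:
    ssock.sendto(PAYLOAD, addr)
    sent += 1

time.sleep(0.3)    # let in-flight datagrams drain
stop.set()
t.join()
ssock.close()
rsock.close()
print(f"sent {sent}, received {counts[0]} in {DURATION:.0f}s")
```

Over loopback the receive count mostly reflects socket-buffer drops;
the interesting comparison is running it across a real NIC and switch
with and without IPsec in the path.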

Someone with more time to spend on this might come up with enough
datapoints to make an interesting paper, by tweaking various points in
the design space: measure overhead (peak send rates, on a suitable
machine/NIC/switch combo) without IPsec, then with IPsec both with and
without the PCB cache for IPsec state.

(TCP mandates one ACK for every two data segments, and the PCB cache
only helps for send processing, and only on connected TCP sockets, so
receive processing will incur IPsec overhead (if IPsec is enabled)
whatever you do.)

Maybe I'll try to salvage a couple of P3 servers to run the tests. :/


Christos... I'm also wondering about Thor's comment about packet
forwarding.  I'm assuming Thor's comment is independent of any of my
ad-hoc measurements.  My, er, nasty suspicious mind is wondering if
Thor's results are from a low-end or embedded machine with a small
I-cache (say, 16k or less).

If so, then just calling into the IPsec codepath (even if the IPsec
code path checks for no active SADB/SPD entries, which I recall it
does) might trigger enough additional I-cache capacity misses to
cause a noticeable penalty in peak packet-rate forwarding, for a
CPU-limited (or I-cache-limited) packet forwarder.  OTOH, I'm doubtful
about how many of us would actually run GENERIC kernels on such a
packet-forwarder.


Now I'm running late; I'll add a couple more comments this evening,
local time, once I get back home.