Subject: Re: IPSEC in GENERIC
To: None <tls@rek.tjls.com>
From: None <jonathan@dsg.stanford.edu>
List: tech-kern
Date: 02/20/2006 10:53:00
In message <20060220160305.GA19342@panix.com>,
Thor Lancelot Simon writes:

>On Mon, Feb 20, 2006 at 07:50:22AM -0800, Garrett D'Amore wrote:
>> joerg@britannica.bec.de wrote:
>> 
>> > But back to the original question -- this doesn't affect IPSec at all,
>> > since it can't be made a module without a lot of effort in any case.
>> >   
>> true, perhaps.  but if so, then why?  it seems a lot of ipsec at least
>> could be -- e.g. encryption and hash routines, etc.
>
>Except that those routines are almost always in the kernel anyway.
>
>IPsec hooks in all over the network code -- it is anything _but_ a "bump
>in the stack" implementation.  That makes it useful for more than toy
>VPN applications (unlike many BITS implementations) but also means that
>it is extremely difficult to cleanly separate out into a module, _and_
>that just including it in the kernel causes a measurable decrease in
>forwarding performance.  Which is why it's not in the kernel by default.

We don't want to turn on IPsec for simple reasons: the KAME code
performs poorly, does not scale, and (worst of all for some uses) its
inline calls to crypto transforms impose head-of-line blocking on all
non-IPsec traffic.

But even if you don't use IPsec *at all*, turning on IPsec imposes a
significant packet-classification cost on all traffic (except for
outbound TCP traffic on connected sockets).  Even inbound TCP incurs a
significant penalty, which is easily measurable in network benchmarks.

There's a simple test to gauge the impact of IPsec, which I think I've
described privately to Thor and others. Networking researchers have
used ttcp-over-UDP *receive* rates for decades as a quick,
rule-of-thumb estimate of the packet-processing ability of a given
interface/software/machine combination.  One can use this
ttcp-over-UDP estimate as a quick measure of IPsec overhead:


0. Find two suitable machines, connected via a network link
   with which the machines can, preferably, _just_ keep up.

1. Build two kernels, with and without IPsec enabled,
   but otherwise identical.  (A sketch of the config delta
   follows this list.)

2. Find a machine which can run ttcp -u -t fast enough to fill a wire
   with UDP traffic.

3a. Boot the non-IPsec kernel on a receiving machine.
    Aim the ttcp -u sender from step 2 at this machine.
    Record the processed packet rate reported by ttcp -u -r.
    (In a well-designed experiment, the receiver will not quite keep
    up with the offered packet rate.)  Sample command lines for both
    sides also follow this list.

3b. Boot the IPsec kernel on the same receiving hardware.
    Repeat the measurement in 3a.  Compare and contrast.
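
For step 1, the point is that the two configs differ in the IPsec
options and in nothing else.  Assuming a GENERIC-derived config, the
delta would look something like the stock (commented-out) lines in
GENERIC, i.e. the IPsec kernel adds:

	options 	IPSEC		# IP security
	options 	IPSEC_ESP	# IP security (encryption part)

(Option names as in NetBSD's GENERIC; hold everything else -- drivers,
INET6, and so on -- constant, so the only variable you measure is
IPsec.)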
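
Concretely, for steps 2-3, the invocations look something like this
(the host name and the -l/-n values are illustrative only; start the
receiver first):

	# on the receiving machine (the kernel under test):
	ttcp -r -u -s

	# on the sending machine, aimed at the receiver:
	ttcp -t -u -s -l 1024 -n 100000 recv-host

The number to record is the receive-side rate that ttcp -r -u prints
on the machine under test, compared across the two kernels.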


For a more "real-world" but less simple test, one may configure
NFS-over-UDP traffic as the test workload.  I'd suggest setting up a
ramdisk (mfs, or tmpfs in -current) on a "server", exporting the
"ramdisk" via NFS, and using a second machine as a client.  On the
client, measure the peak write rate for dd's from a local file to the
NFS ramdisk.  Repeat the measurements with and without IPsec on the
server (or better yet, with and without IPsec on both machines).
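
A minimal sketch of that setup, assuming a NetBSD server and client
(sizes, paths, and host names below are placeholders, and the export
and mountd details may need adjusting for your configuration):

	# on the server: a 64 MB mfs ramdisk, exported over NFS
	mount_mfs -s 131072 swap /export/scratch
	echo '/export/scratch -maproot=root client' >> /etc/exports
	kill -HUP `cat /var/run/mountd.pid`	# make mountd reread exports

	# on the client: mount it (NFS-over-UDP is the default)
	# and time a large sequential write
	mount -t nfs server:/export/scratch /mnt
	dd if=/dev/zero of=/mnt/junk bs=32k count=2048

dd's reported transfer rate on the client is the figure to compare
across the IPsec and non-IPsec kernels.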

Either way, I find the results extremely discouraging.


>The other outstanding issue is that the code selected by options
>FAST_IPSEC needs to grow v6 support, and the code selected by options
>IPSEC needs to die.  I'd encourage anyone thinking of doing significant
>work on our IPsec code to _not_ put it into something like modularizing
>the KAME code, at this point!

Quite.  But speaking just personally, I'm more interested in adding
/etc/rc.d hooks to not enable utterly useless protocols[*] like
IPv6 on my machines, than I am in adding IPv6 support to FAST_IPSEC :-/.

[*] That is, IPv6 is utterly useless on *my* machines.