Subject: Re: kernel ip_randomid() and libc randomid(3) still "broken"
To: Jonathan Stone <jonathan@DSG.Stanford.EDU>
From: Dennis Ferguson <dennis@juniper.net>
List: tech-net
Date: 11/26/2003 13:51:32
> In message <20031126175928.817288B@coconut.itojun.org>Jun-ichiro itojun Hagino 
> writes
> >>   | 	so we can either:
> >>   | 	- stop skipping random number of ids (n=0)
> >>   | 	- reduce numbers on the manpage to 1/3
> >>   | 	and then we are happy.
> >> 
> >> The problem with all of this is that in order to make it a bit more
> >> difficult to suffer from a (fairly unlikely) DoS type attack, you're
> >> proposing breaking IP.
> >
> >	i don't. [...]
> 
> But you *are* breaking IP. For purposes of IP reassembly counts, TTL
> *IS* still a time-to-live, not a hopcount.
[...]
> Your scheme means the ip_id can wrap in about 5 seconds over gigabit
> Ethernet.  That's just *not acceptable*.

If this is truly "*not acceptable*" then we're doomed even with the
full 16-bit ID space.  The bandwidths of most things Internet-related
have fairly reliably doubled every year or so for as long as there has
been something one could call an Internet (and this still seems to be
continuing), so if an ID space a factor of 6 smaller than the full
16-bit space is quantitatively "*not acceptable*" now, then the full
16-bit space will be "*not acceptable*" by the same quantitative
measures in less than 3 years.  I'll bet, however, that 3 years from
now you'll have forgotten this argument and will be continuing to use
the then-"*not acceptable*" 16-bit-ID IPv4, without having a lot of
difficulty with it in practice.
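
Back-of-the-envelope, and purely illustrative (the ~12k figure is the
one quoted earlier in the thread): with annual doubling, the full
16-bit space is being consumed as quickly as a 12k space is today
after log2(65536/12000) years, which a trivial program confirms is
well under 3:

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double full = 65536.0;		/* full 16-bit ID space */
	double reduced = 12000.0;	/* roughly 1/6 of it, per the thread */

	/*
	 * Years until the full space is consumed as fast as the reduced
	 * space is today, assuming bandwidth doubles every year.
	 */
	printf("%.2f years\n", log2(full / reduced));
	return 0;
}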

> Then consider that 10Gbit Ethernet is already on the market, and set
> to fall in price once 10GbE-CX4 becomes available.

Exactly.  If a 12k ID space is "*not acceptable*" at 1 Gbps then we're
already doomed at 10 Gbps even with the full 16-bit space available.
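
To put rough numbers on the wrap times (illustrative only; the result
depends entirely on the assumed average packet size and on how much of
the link is actually carrying traffic), the time to exhaust an ID
space is just the size of the space divided by the packet rate:

#include <stdio.h>

static double
wrap_seconds(double ids, double bits_per_sec, double pkt_bytes)
{
	return ids / (bits_per_sec / (pkt_bytes * 8.0));
}

int
main(void)
{
	const double rates[2] = { 1e9, 10e9 };		/* 1G, 10G */
	const double sizes[3] = { 64.0, 576.0, 1500.0 };/* assumed avg sizes */
	const double spaces[2] = { 12000.0, 65536.0 };	/* reduced vs full */
	int r, p, s;

	for (r = 0; r < 2; r++)
		for (p = 0; p < 2; p++)
			for (s = 0; s < 3; s++)
				printf("%2.0f Gb/s, %4.0f-byte packets,"
				    " %5.0f IDs: wrap in %.3f s\n",
				    rates[r] / 1e9, sizes[s], spaces[p],
				    wrap_seconds(spaces[p], rates[r],
				    sizes[s]));
	return 0;
}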

Pragmatically speaking, I don't think there is a theoretical argument
that proves a 12k ID space insufficient which couldn't equally be used
to prove a 64k ID space insufficient, yet I'll bet either space would
continue to work acceptably well in real life.  The real issue here is
more of a qualitative, fuzzy cost-versus-benefit tradeoff which one
could argue about endlessly.

Given that we're operating solely in the realm of opinions, then, I'll
offer the opinion that, despite the above, a scheme which reduces the
size of the ID space by a factor of 6 and is quite expensive to compute
is not worth the benefit it provides.  If, on the other hand, there
were a scheme which reduced the size of the ID space by a factor of 1.5
and was cheap to compute, I might change my mind.

I personally think there are ways to generate non-repeating IDs which
are much cheaper to compute, which impair the sequence space less and
which produce sequences which are almost as hard to guess as the current
scheme.  Until someone invents one like this, however, I'd rather not
bother.
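
For what it's worth, one hypothetical shape such a scheme could take
(a sketch only, with a placeholder round function; not something in
the tree and not a concrete proposal): run an ordinary counter through
a small keyed 16-bit permutation, so the IDs never repeat within a
65536-packet window yet aren't the bare counter:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint16_t id_counter;
static uint8_t id_key[4];	/* round keys; re-seed as often as you like */

static uint8_t
round_fn(uint8_t x, uint8_t k)
{
	/* placeholder mixing; any fixed function works for the sketch */
	x = (uint8_t)((x + k) * 167 + 13);
	return (uint8_t)((x << 3) | (x >> 5));
}

static uint16_t
permute16(uint16_t v)
{
	/*
	 * Four-round Feistel network over the two bytes.  A Feistel
	 * construction is a permutation regardless of the round function,
	 * so distinct counter values always yield distinct IDs.
	 */
	uint8_t l = v >> 8, r = v & 0xff;
	int i;

	for (i = 0; i < 4; i++) {
		uint8_t t = r;

		r = l ^ round_fn(r, id_key[i]);
		l = t;
	}
	return (uint16_t)((l << 8) | r);
}

uint16_t
cheap_randomid(void)
{
	return permute16(id_counter++);
}

int
main(void)
{
	int i;

	/* toy seeding for the demo; a real generator would use a CSPRNG */
	for (i = 0; i < 4; i++)
		id_key[i] = (uint8_t)(random() >> (i * 7));

	for (i = 0; i < 8; i++)
		printf("%04x\n", (unsigned)cheap_randomid());
	return 0;
}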

Dennis Ferguson