Subject: Re: random ip_id must be configurable
To: Jun-ichiro itojun Hagino <firstname.lastname@example.org>
From: Robert Elz <kre@munnari.OZ.AU>
Date: 09/17/2003 17:30:25
Date: Tue, 16 Sep 2003 10:38:42 +0900 (JST)
From: email@example.com (Jun-ichiro itojun Hagino)

| if dns clients are different process, they would use the different UDP
| source port for #1 and #2, so there's no problem.

There's no problem for the DNS stub resolver anyway - the ID could be
the same on every query and it would work, there's no need for uniqueness
(there's one very oddball case where it might just matter, but no-one
is ever going to care).

The only requirement is that the server copy the ID back into the reply.
That we assume is being done (it has nothing to do with the mods
suggested) - as long as that happens, the resolver code will do the right
thing.

That isn't the issue here (for the DNS resolver in libc) - what matters
there is whether the effort of generating good random numbers is worth the
benefit that it gains. I don't believe it is. By all means use
cheap pseudo-random numbers - but we absolutely do not need quality random
numbers in this situation - anything that is a bit hard to guess without
being able to observe the packets will do just fine (and "hard" is all
that is needed, "impossible" is not).
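For concreteness, here is the sort of cheap generator that would be
perfectly adequate for query IDs - a sketch only, with arbitrary
constants and seeding, not the actual libc resolver code:

```c
#include <stdint.h>

/*
 * Illustrative sketch only, not the libc resolver's code: a plain
 * 32-bit linear congruential generator, using the high 16 bits as
 * the DNS query ID.  Hard enough to guess without seeing the
 * packets, but in no sense cryptographic quality - which is all
 * that is needed here.
 */
static uint32_t qid_state = 0x12345u;	/* seed once, e.g. time ^ pid */

static uint16_t cheap_query_id(void)
{
	qid_state = qid_state * 1103515245u + 12345u;	/* cheap LCG step */
	return (uint16_t)(qid_state >> 16);
}
```

Anyone who can observe the packets sees the ID regardless, so nothing
stronger buys the stub resolver anything.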
The issues for the IP id field are totally different - there uniqueness
is a requirement (for a source,dest,protocol tuple, for the period
from when the packet is transmitted until the TTL that was sent with it
expires, on all packets without DF set).

Unpredictability there is a minor advantage: if the ID can't be guessed, one
avenue for mounting a DoS attack is removed (an unimportant DoS attack, as
it only works in the presence of fragmented packets, which we should all be
striving to avoid anyway).

Uniqueness is a requirement however - and it is uniqueness for a fixed
time (that is, N seconds) regardless of how many packets are generated in
that time.

In practice however, the only way to achieve this is to slow down
connections - if a node is sending with TTL==64 (common these days)
then it is limited in absolute bandwidth (if it isn't using DF) to
no more than about 1.1MB/sec (9Mbps) of TCP (or UDP) traffic to any other
destination (and that assumes that it is sending full sized IP packets
that are being fragmented; if the IP packet size were 1500, the limit
would be about 25KBps (200 Kbps)).

(Note this is not per connection, but shared amongst all connections
using the same transport protocol).
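The arithmetic behind those figures can be sketched as follows; the
window length and packet size are free parameters here, not values
taken from any implementation:

```c
/*
 * With only 65536 distinct IP id values available, a sender that
 * must keep each id unique for a window of window_secs seconds (per
 * source,dest,protocol tuple) can emit at most 65536/window_secs
 * packets per second, and hence at most that many times the packet
 * size in bytes per second.  Both parameters are illustrative.
 */
static double id_limited_bytes_per_sec(double window_secs, double pkt_bytes)
{
	return 65536.0 / window_secs * pkt_bytes;
}
```

For example, a 64 second window with 1500 byte packets caps throughput
at 65536/64 * 1500 = 1.5MB/sec; the figures quoted above correspond to
a considerably longer uniqueness window, and the ceiling falls in
proportion as the window grows.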
That's obviously unacceptable - the best that we can do, since we can't
keep the ID unique for as long as it should be, is to keep it unique
for as long as possible. That means using every one of the 64K possible
ID values before re-using any. There are plenty of methods via which
that can be achieved, but the current method (ip_randomid() - even without
the "fix" that seems to break the algorithm) doesn't achieve that, and
I believe it is far more important to be as correct as possible than it
is to protect against an unlikely, and minor, DoS attack - the rationale for
which seems to be no better than to be able to say "we protect ourselves
against that attack" (there is really little gain from DoS protection unless
you can protect yourself from all reasonable DoS attacks).
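One sketch of such a method (not ip_randomid() itself; the constants
are arbitrary textbook choices): a full-period 16-bit linear
congruential step.  By the Hull-Dobell theorem - increment odd,
multiplier congruent to 1 mod 4 - it visits every one of the 64K
values exactly once before any repeat, while stepping in a less
obvious order than a bare counter:

```c
#include <stdint.h>

/*
 * Sketch of a full-period 16-bit sequence (not the ip_randomid()
 * under discussion).  With c odd and (a - 1) divisible by 4, the
 * Hull-Dobell theorem guarantees the recurrence below cycles through
 * all 65536 values before any value repeats.  The constants are
 * arbitrary illustrative choices; seed at boot.
 */
static uint16_t ip_id_state;

static uint16_t next_ip_id(void)
{
	ip_id_state = (uint16_t)(ip_id_state * 25173u + 13849u);
	return ip_id_state;
}
```

This gives exactly the property argued for above - no reuse until the
whole space is exhausted - though the sequence is still predictable to
anyone who knows the constants; unpredictability and uniqueness pull
in different directions.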

So, I believe the use of the randomid() in the IP header (id field) is
ill-conceived, even ignoring the cost of generating the random numbers.