Subject: Re: kernel ip_randomid() and libc randomid(3) still "broken"
To: Robert Elz <kre@munnari.OZ.AU>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-net
Date: 11/26/2003 13:18:37
In message <550.1069841238@munnari.OZ.AU>, Robert Elz writes:

>In practice, this bothers none of us (except possibly Jonathan) as most
>of us don't approach anything like those rates.

Oh, it definitely bothers me.  And with the retail price of gigabit
NICs below $40 for a medium-quality NIC (under $20 for a noname) and
per-port switch prices at about the same level, I'd expect it would
bother a lot more people if they thought about it.


[...]

>But you're proposing to divide that in half, and perhaps to just 20%.
>2.5 Mbit/sec is definitely way too low - at that rate we're going to
>start seeing reassembly botches - in theory these should be detected
>by the transport checksums, but ...  (the IP checksum algorithm is not
>very robust against some kinds of packet errors).

Oh wait, that's me again. 

For the studies you're thinking of, we examined a very similar class
of error: splicing in data from nearby in the file, with files drawn
from real production filesystems.  As many as 1 in 4000 such errors
can go uncaught, when the `splice' is from adjacent packets of a
single file.

For very large files, that's the number I'd use for
back-of-the-envelope estimates.
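
To illustrate why splices can evade detection: the Internet checksum
(RFC 1071) is just a one's-complement sum of 16-bit words, so it is
completely blind to any error that merely reorders those words -- e.g.
a splice that substitutes the same words from an adjacent packet of
the same file in a different order.  A minimal sketch (the function
name and sample data are mine, for illustration only):

```python
def inet_checksum(data: bytes) -> int:
    # RFC 1071 Internet checksum: one's-complement sum of 16-bit
    # big-endian words, folded back into 16 bits, then complemented.
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)   # fold carry back in
    return (~s) & 0xFFFF

original = b"ABCDEFGH"
# Swap the first two 16-bit words -- a reordering "splice" error.
spliced = b"CDABEFGH"
assert original != spliced
# The checksum cannot distinguish the two:
assert inet_checksum(original) == inet_checksum(spliced)
```

Reassembly botches that splice in nearby data with a similar word
population are exactly the kind of corruption this sum is weakest
against, which is why the uncaught-error rate can be as high as the
figure above.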