tech-net archive


Re: Dealing with M_HASFCS for protocols that do not do ethernet crc



    Date:        Tue, 09 Aug 2022 13:51:52 +0100
    From:        Robert Swindells <rjs%fdy2.co.uk@localhost>
    Message-ID:  <x7zggda8bb.fsf%ren.fdy2.co.uk@localhost>

  | I, and I'm guessing martin@, do use AppleTalk over Ethernet without any
  | problems. I haven't tried using it to a device with a dwc_gmac
  | controller though, what is the failure mode without the recent change?

I cannot help but believe that this discussion has veered into lala land.

Whether or not there was once a version of ethernet which routinely allowed
frames with checksum errors to be received (would have had to be the 3Mbps
version, with 16 bit addressing, or earlier) is not really relevant to
anything any more. I cannot imagine anyone using that.   If there ever was
such a thing, it was before my time - and that means it was a very long
time ago indeed.

Modern ethernet requires checksums, they're not optional, and cannot be
ignored.   It really is that simple.

Most reasonable ethernet cards don't even receive packets with checksum
errors (they're simply ignored - beyond perhaps being counted), since when
a packet has an invalid FCS there's no way to tell which bits of the
packet were corrupted - it might easily have been the destination addr
field, and the packet might never have been intended to be received by
this node.   We simply do not, and cannot, know.

The suggestion that there are protocols in use today that send packets
without a frame level checksum is preposterous.   Most certainly, Ethertalk
(ie: Appletalk over Ethernet, as distinct from localtalk, which I doubt
anyone has seen in decades) definitely uses checksums, and I cannot imagine
how anyone can come to the conclusion that it doesn't.

Where there's variation between ethernet adaptors is with what they do
with packets with checksum errors, and what they do with the checksum
word (final 32 bits of data in the packet).

Ethernet devices designed for end-station use generally simply drop
bad packets (usually counting them).   Ethernet devices designed for
hubs/switches - particularly high performance switches which can start
sending a packet towards the destination before it has fully arrived
at the switch (that is, before the switch can possibly know that the
frame has been corrupted) - will generally deliver the frame, and mark it
as broken - and leave it up to the software to drop it (or simply forward
it on, broken checksum and all - which is all that can be done if the
frame was half (or more) transmitted before the checksum error was detected).

Part of the problem we see is that chip vendors don't want to cut themselves
out of the "super switch" market - they want to at least pretend that the
big switch producers might someday decide to buy millions of their chips
and use them in their products (despite those people generally designing
their own silicon, not using anyone's off the shelf chips in the devices
where this matters).   So they build their chips to allow packets with
checksum errors to be received (still marked as bad) if the driver sets
some magic bit - or, worse, always - which builds the illusion that there
is some purpose in all of that for normal systems.   There isn't.

As for the final frame word, most modern adaptors/chipsets at least have an
option to retain the FCS in the packet received - which, when enabled,
requires the receiver to discard those final 4 bytes.   There are some applications
which use them, and I can believe some protocols might exist which expect to
receive that data - though as best I remember it (I did a lot of work with
ethertalk, long long ago) ethertalk is not such a protocol (it was, after
all, originally simply a method to send localtalk frames over an ethernet
link layer, though I suspect it has evolved slightly since then.)

The relevant mbuf bit, as I remember it, simply allows the driver to
inform the stack that the FCS is still there in the frame, and hasn't
been removed, either by the hardware, or by the driver.   That I believe
is largely what Joerg suggested it means.

Certainly, we do not want to do anything like:
	"Frame may have FCS included at the end if protocol uses a FCS",
as that's nonsense.

And there's no "may" about it, the flag indicates the FCS is included at
the end of the frame, and those last 4 bytes are not protocol data.

kre

ps: please do not continue piling on half remembered anecdotes that you
once heard someone say about ... if you believe there is a protocol, any
protocol, which requires end stations or routers (as distinct from hubs
or switches, sometimes) to process packets that have frame level CRC errors,
please provide documentary proof - a pointer to the specification of that
protocol at the very least.



