tech-net archive

Re: IPv6: what is required of lower layers?



>> Perhaps I should try [POINTOPOINT].  It's probably a holdover from
>> the past; I tend to think of POINTOPOINT interfaces as being
>> inherently /32 (or /128 for v6) on each end.
> Sort of.  For IPv4, it's "two /32s".

Well, even for IPv4, there are three sockaddrs: address,
broadcast/destination address, and mask.  Perhaps the implementation
ignores the netmask for IPv4 POINTOPOINT, but it's there.

> For IPv6, it's "a network with a netmask"; this "two distinct A and B
> addresses" seems to have never been implemented anywhere.

So...you can't use a POINTOPOINT interface to represent a point-to-point
link between two IPv6 hosts with completely unrelated addresses?  That
sounds...I'm having trouble coming up with anything weaker than
"stupid", and I know the NetBSD people weren't, so I must be missing
something.  (I disagree with some of their tradeoff choices, yes, but
stupid they were/are not.)

> From the rest of the thread, it seems POINTOPOINT won't be of much
> use to your use case, if you make use of the TUNSLMODE destination
> address - which is something OpenVPN never did; it does an "internal
> routing lookup" on the in-header destination IP address (on the
> multipoint server, the client is dumb and sends everything coming in
> from the tun interface onwards to the server).

My software is basically a VPN, but without the client/server viewpoint
distinction: each node can be configured to connect out, accept
connections, or both.  (Or neither, actually, but that's not very
useful.)  The software presents the illusion of a direct link between
host A and host B regardless of which hosts A and B are and whether
they can communicate directly over the underlying medium, establishing
direct connections where it can and routing packets internally as
necessary.  For v4, I set up the tun as IFF_BROADCAST and it all Just
Works.  I'm not sure whether the trouble I'm having indicates bad
design of v6, bad design of the v6 stack(s) I'm using, bugs in the
code, PEBKAC, or what.
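
(For reference, the v4 setup amounts to something like the following -
a from-memory sketch of the tun(4) ioctls, with the device path and
error handling purely illustrative, not my actual code:)

/*
 * Sketch: open a NetBSD tun(4) device and put it into broadcast mode
 * (the v4 setup described above).  TUNSIFMODE takes the interface
 * flags you want (IFF_BROADCAST or IFF_POINTOPOINT); TUNSLMODE, the
 * ioctl mentioned in the quoted text, makes each read() return the
 * packet prefixed with the destination sockaddr chosen by the kernel's
 * routing lookup - the "envelope" address.
 */
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_tun.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int open_tun_broadcast(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    int mode = IFF_BROADCAST;           /* the v4 case that Just Works */
    if (ioctl(fd, TUNSIFMODE, &mode) < 0)
        perror("TUNSIFMODE");

    int lmode = 1;                      /* prepend the envelope sockaddr */
    if (ioctl(fd, TUNSLMODE, &lmode) < 0)
        perror("TUNSLMODE");

    return fd;
}

int main(void)
{
    int fd = open_tun_broadcast("/dev/tun0");   /* illustrative path */
    if (fd >= 0)
        close(fd);
    return fd < 0;
}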

The OpenVPN way sounds a bit broken to me, in that it makes it
impossible to use the VPN to route between other network pieces the way
one would a normal network, because the in-header destination won't be
a VPN address at all.  That is...for concreteness, let's say the VPN
is using 10.0.0.0/24 and we have VPN host 10.0.0.1 which has another
interface on 172.16.0.0/24 and VPN host 10.0.0.2 which has another
interface on 192.168.0.0/24.  If the VPN were an Ethernet, then
10.0.0.1 could "route add -net 192.168.0.0/24 10.0.0.2" and 10.0.0.2
could "route add -net 172.16.0.0/24 10.0.0.1" and it would all just
work.  But that won't - can't - work if the VPN routes based on
packet-header addresses rather than envelope addresses, unless you
actually teach the VPN itself about the extra netblocks.
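
(In code terms - peer structure, field names, and both lookup routines
invented here for illustration; this is neither OpenVPN's code nor
mine - the difference is:)

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* One VPN peer, plus (optionally) one extra netblock the VPN has been
 * explicitly taught lives behind that peer.  All invented for this
 * example; real code would use proper prefix lists. */
struct vpn_peer {
    const char *name;
    uint32_t vpn_addr;     /* peer's address inside the VPN (host order) */
    uint32_t extra_net;    /* extra netblock behind the peer, if any     */
    uint32_t extra_mask;   /* its mask; 0 means "none configured"        */
};

/* Envelope-based forwarding: the kernel's ordinary routing table already
 * picked the VPN next hop (via "route add -net 192.168.0.0/24 10.0.0.2"),
 * so the VPN only has to match that next hop against its peers. */
struct vpn_peer *lookup_by_envelope(struct vpn_peer *p, size_t n, uint32_t nexthop)
{
    for (size_t i = 0; i < n; i++)
        if (p[i].vpn_addr == nexthop)
            return &p[i];
    return NULL;
}

/* Header-based forwarding: the in-header destination (192.168.0.7, say)
 * is not a VPN address at all, so this only works if the VPN itself has
 * been taught the extra netblock. */
struct vpn_peer *lookup_by_header(struct vpn_peer *p, size_t n, uint32_t dst)
{
    for (size_t i = 0; i < n; i++) {
        if (p[i].vpn_addr == dst)
            return &p[i];
        if (p[i].extra_mask && (dst & p[i].extra_mask) == p[i].extra_net)
            return &p[i];
    }
    return NULL;
}

int main(void)
{
    struct vpn_peer peers[] = {
        /* 10.0.0.2, which also fronts 192.168.0.0/24 */
        { "peer2", 0x0a000002, 0xc0a80000, 0xffffff00 },
    };
    uint32_t dst = 0xc0a80007;          /* 192.168.0.7, behind 10.0.0.2 */

    /* Envelope case: routing already resolved 192.168.0.0/24 -> 10.0.0.2. */
    printf("envelope: %s\n", lookup_by_envelope(peers, 1, 0x0a000002)->name);

    /* Header case: works only because extra_net/extra_mask were configured;
     * zero them out and the packet has nowhere to go. */
    struct vpn_peer *p = lookup_by_header(peers, 1, dst);
    printf("header:   %s\n", p ? p->name : "(no route inside the VPN)");
    return 0;
}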

>>> [...multicast for address resolution...]
>> [...]
> The problem here is that it adds quite a bit of complexity to
> something that needs to be very simple, or vendors will get it wrong.

Well, history indicates that vendors will generally manage to get it
wrong _anyway_.  But that's still a good point.

> My first IPv6-induced network explosion was on an Intel E100 card
> which got stuck when the multicast filter was programmed, spewing out
> garbled packets at wire speed, killing *other* hosts on the network
> due to interrupt load...

I'd hardly blame v6 for that.  Sounds to me like a hardware bug that v6
just happened to render manifest.  Something else that used Ethernet
multicast could have provoked it too, I suspect.  (I had a related
issue, once, when working with a system that used Ethernet in an
unusual way; since traffic wasn't bidirectional from an Ethernet point
of view, switches were flooding all the packets.  Since it was pushing
some 12-15 megabits, this would redline a 10Mb interface, even one not
involved beyond being in the same broadcast domain.)

> later, I've been hit by "Big Name" ethernet switch vendors getting
> multicast forwarding wrong again and again, breaking IPv6 ND (and,
> incidentally, IPv4 EIGRP hellos).

I'm not sure how I feel about that.  On the one hand, I can understand
the resulting pain.  But, on the other, buggy switches need to be
rendered obvious, so they can be fixed or avoided.

> Of course all of this is easy to say in hindsight, while the design
> comes from the times of Yellow Cables with slow CPUs attached to it
> where "less IRQ" mattered.

The RFCs I've been using as references for this - 4291 and 4861 - come
from 2006 and 2007.  That's not _that_ long ago.

>> I am not fond of the use of ICMP6 for neighbour discovery.  [...]
> I can see why this was done, so they could avoid specifying a
> medium-dependent neighbour resolution protocol - like ARPv6 on
> Ethernet, "something" for SDH, ATM, ...

Complexity can't be hidden, only pushed around.  You have to specify
_some_ sort of medium-dependent something, or there's no way to get
packets moving.  You can specify a medium-dependent unicast resolution
method, or you can specify a higher-layer use of multicast to implement
unicast resolution plus a medium-dependent multicast resolution method.
Someone must have lost sight of the overall complexity when thinking
about the complexity of one small piece.
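
(To be concrete about what those two layers are on Ethernet - the
following is a from-memory sketch of RFC 4291's solicited-node group
construction and RFC 2464's Ethernet mapping, not lifted from any
stack; that mapping is all the "medium-dependent multicast resolution"
amounts to there:)

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Build the solicited-node multicast group for a target address:
 * ff02::1:ff00:0/104 plus the low 24 bits of the target (RFC 4291). */
static void solicited_node(const uint8_t target[16], uint8_t group[16])
{
    static const uint8_t prefix[13] =
        { 0xff, 0x02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x01, 0xff };

    memcpy(group, prefix, 13);
    memcpy(group + 13, target + 13, 3);   /* low 24 bits of the target */
}

/* Map an IPv6 multicast group onto an Ethernet multicast MAC:
 * 33:33 plus the low 32 bits of the group (RFC 2464). */
static void mcast_mac(const uint8_t group[16], uint8_t mac[6])
{
    mac[0] = 0x33;
    mac[1] = 0x33;
    memcpy(mac + 2, group + 12, 4);       /* low 32 bits of the group */
}

int main(void)
{
    /* Example target 2001:db8::1:2:3 - the exact value doesn't matter,
     * only its low 24 bits do. */
    uint8_t target[16] = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 0,
                           0, 0, 0, 0x01, 0, 0x02, 0, 0x03 };
    uint8_t group[16], mac[6];

    solicited_node(target, group);
    mcast_mac(group, mac);

    printf("solicited-node group ends in ff%02x:%02x%02x, "
           "Ethernet dst %02x:%02x:%02x:%02x:%02x:%02x\n",
           group[13], group[14], group[15],
           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    return 0;
}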

> OTOH, basing this on a routable protocol - ICMPv6 - was not a very
> security-conscious decision.  [...not checking ND hoplimits...]  Or,
> Big Name vendors forwarding packets with fe80:: sources, [...]

Good gods.  That deserves to go in the "return it for a refund" pile.
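
(For what it's worth, the checks being skipped are cheap.  From memory
of RFC 4861, with invented field names rather than any real stack's
structures, they amount to roughly:)

/*
 * Sketch of the RFC 4861 sanity checks on received Neighbor Discovery
 * packets; field names are invented, not any real stack's structures.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct nd_input {
    uint8_t hop_limit;        /* from the IPv6 header   */
    uint8_t icmp6_code;       /* from the ICMPv6 header */
    uint8_t src_addr[16];     /* IPv6 source address    */
};

static bool nd_packet_acceptable(const struct nd_input *p)
{
    /* Hop limit must still be 255: any router that forwarded the packet
     * would have decremented it, so 255 proves it originated on-link. */
    if (p->hop_limit != 255)
        return false;

    /* RFC 4861 also requires the ICMPv6 code to be 0. */
    if (p->icmp6_code != 0)
        return false;

    return true;
}

/* Separately: a packet with a link-local (fe80::/10) source must never
 * be forwarded off-link at all - the "Big Name" failure quoted above. */
static bool src_is_link_local(const struct nd_input *p)
{
    return p->src_addr[0] == 0xfe && (p->src_addr[1] & 0xc0) == 0x80;
}

int main(void)
{
    struct nd_input onlink = { .hop_limit = 255, .src_addr = { 0xfe, 0x80 } };
    struct nd_input routed = { .hop_limit = 254, .src_addr = { 0xfe, 0x80 } };

    printf("on-link ND accepted: %d, routed ND accepted: %d, "
           "source is link-local: %d\n",
           nd_packet_acceptable(&onlink), nd_packet_acceptable(&routed),
           src_is_link_local(&routed));
    return 0;
}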

But I have little doubt there were analogous issues with v4, back
before it had a few more decades of burn-in.  I even ran into something
of the sort myself, back in the day, though it was more Ethernet than
IPv4 - we had two hosts that disagreed over whether the minimum
Ethernet packet size was 60 or 64, with packets of lengths 60..63
getting completely dropped under some circumstances.  This broke NFS
(over UDP, that being the way NFS was at the time) between them quite
badly; attempts to read files of certain sizes would hang, until I
tracked it down and started using rsize=1024 on the mount.

/~\ The ASCII				  Mouse
\ / Ribbon Campaign
 X  Against HTML		mouse@rodents-montreal.org
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B

