tech-net archive


Re: shutdown(2)'ing a bound UDP socket



Hi Erik,

On Mon, Jul 20, 2020 at 10:20:20AM -0700, Erik Fair wrote:
> Unless I have misunderstood (which is certainly possible), the question turns into: "is a shutdown(sock, SHUT_RD) local to that particular descriptor, or global to all descriptors which reference a particular host address+protocol+port number tuple?"

The shutdown'ed descriptor is global, I think.  It is able to bind only
because of the SO_REUSEPORT setsockopt.  Usually that option is used to give
forked children an opportunity to receive a packet according to some
algorithm (I'm guessing it's round-robin or similar).  By shutting down one
of these descriptors I hope to tell the kernel to exclude it from that
algorithm and let another descriptor sharing the "tuple" receive the packet.

> 
> Which is to say, you want to declare on one descriptor that you're never going to read from it again, but read from another descriptor at the same network address+protocol+port number tuple, i.e., that a shutdown(sock, SHUT_RD) should be local to a given specific descriptor, as opposed to global to all descriptors which reference a given IP address, protocol, port number tuple.

Yes.  It wasn't a planned decision; I felt my way through the OpenBSD network
stack in this regard and it did what I had hoped.  Unfortunately, in my tests
Linux required a BPF filter and a different order of setting up these
sockets, because this wasn't possible out of the box there.  With FreeBSD I
didn't have to shutdown the descriptor at all; it seemed to detect which
descriptor in the shared tuple I was selecting on and directed the packet
there, but that is just a conclusion I drew after three or four hours at
most.  After adding some ifdef's, FreeBSD seemed not to drop any packets.
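
A rough sketch of the kind of BPF steering meant here, assuming the
Linux-specific SO_ATTACH_REUSEPORT_CBPF option (the exact filter used is not
shown in this mail); the program below simply sends every packet to the first
socket in the reuseport group:

/*
 * Assumed sketch of the Linux BPF workaround: a classic BPF program
 * attached with SO_ATTACH_REUSEPORT_CBPF that always returns index 0,
 * so every packet goes to the first socket in the reuseport group.
 * The socket must already be bound with SO_REUSEPORT.
 */
#include <sys/socket.h>
#include <linux/filter.h>
#include <err.h>

static void
steer_to_first_socket(int s)
{
    struct sock_filter code[] = {
        BPF_STMT(BPF_RET | BPF_K, 0),   /* deliver to group index 0 */
    };
    struct sock_fprog prog = {
        .len = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    if (setsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
        &prog, sizeof(prog)) == -1)
        err(1, "SO_ATTACH_REUSEPORT_CBPF");
}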

> 
> Why are you doing this dance of multiple descriptors? What behavior are you trying to achieve, or condition you're trying to avoid, in your server code?

It is unorthodox, but it made some sense in that I hoped to be able to write
to the shared "tuple" from processes other than the one that receives.
Another choice I considered was writing to a raw socket, but that would mean
a lot of overhead and hard work when it comes to fragmentation.  Lastly, I
found that I can set up shared memory to hand the packet back to the process
that received it; it's entirely possible to do so, at the cost of writing the
extra code.  This last choice is what I'll have to do if NetBSD can't help me.
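
A minimal sketch of such a shared-memory fallback, assuming an anonymous
MAP_SHARED region created before fork() (layout and names are illustrative,
not delphinusdnsd's actual code):

/*
 * Illustrative sketch of a shared-memory fallback: an anonymous
 * MAP_SHARED region created before fork(), into which one process
 * writes a reply for the process that owns the socket to send.
 * Synchronization (pipe, semaphore, imsg) is left out.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>
#include <err.h>

#define REPLY_MAX 65535     /* illustrative maximum UDP payload */

struct reply_slot {
    size_t        len;
    unsigned char buf[REPLY_MAX];
};

int
main(void)
{
    struct reply_slot *slot;

    slot = mmap(NULL, sizeof(*slot), PROT_READ | PROT_WRITE,
        MAP_SHARED | MAP_ANON, -1, 0);
    if (slot == MAP_FAILED)
        err(1, "mmap");

    switch (fork()) {
    case -1:
        err(1, "fork");
    case 0:
        /* child: compose a reply into slot->buf, set slot->len ... */
        _exit(0);
    default:
        /* parent: wait, then sendto() slot->buf on the bound socket */
        break;
    }
    return (0);
}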

And then you probably mean: why am I spreading this functionality across
processes?  It has to do with OpenBSD and its sandboxing mechanism,
pledge(2).  The benefit of handing packets via imsg(3) and shared memory to
a process that is pledged "stdio sendfd recvfd" in order to parse a DNS
message is very attractive.
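
The pledge(2) restriction referred to here looks roughly like the following
sketch; only the "stdio sendfd recvfd" promise string comes from this mail,
the surrounding loop and parser are placeholders:

/*
 * Sketch of the pledge(2) call for the parsing process; everything
 * apart from the promise string is a placeholder.
 */
#include <unistd.h>
#include <err.h>

int
main(void)
{
#ifdef __OpenBSD__
    /* stdio plus descriptor passing; no new sockets, no filesystem */
    if (pledge("stdio sendfd recvfd", NULL) == -1)
        err(1, "pledge");
#endif
    /* loop: receive a packet via imsg(3)/shared memory, parse, reply */
    return (0);
}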

If someone somehow managed to overflow a buffer, they'd be trapped in a very
restricted sandbox.  They can't open a file descriptor or a network socket;
the kernel would kill the process if they tried.  I use and develop this
daemon of mine on OpenBSD, but I also want to make it available to other
OSes with the sandbox mechanism disabled.  It's a trade-off for those people
who absolutely want to use delphinusdnsd but don't have OpenBSD available.

The other process that does the actual work of forwarding is busy enough
that I shy away from putting it into the UDP-receiving process.  I write all
my programs single-threaded and fork instead, mainly because I don't
understand threads all that well.

It comes down to "what to do with shutdown(2)", and it's as much a political
as a technical decision.  I think the way OpenBSD allows this makes sense,
and it accommodated my plan.  FreeBSD and Linux are less accommodating: on
Linux I had to override the natural algorithm with a BPF filter, and on
FreeBSD I got lucky... (but was not able to shutdown at all; it throws an
error).

> 	curious,
> 
> 	Erik Fair

Best Regards,
-peter

