Subject: RE: Realistic NMBCLUSTERS limit?
To: None <port-alpha@netbsd.org>
From: Tom Haapanen <tomh@metrics.com>
List: port-alpha
Date: 03/10/2000 22:52:03
>> Is it not feasible for the kernel to recover from this, and get the
>> network running again?  Is this something inherent in the [Net]BSD kernel
>> architecture that cannot be cured?

Jason Thorpe wrote:
> Well, the kernel *DOES* recover... when it runs out, it attempts to free
> any that are laying around in reassembly queues, etc. (for protocols
> which will retransmit their data, etc.)

> You don't, however, just want to grow the limit automatically... can you
> say "denial of service"?

But that's essentially what we have now -- if you can trigger NMBCLUSTERS
exhaustion (I don't know how, which is just fine...), the machine falls off
the network and, as far as I can tell, does not recover.

What would be the ideal behaviour?  I don't know for certain.  Maybe have an
initial NMBCLUSTERS limit and a MAXNMBCLUSTERS, and have the kernel
increase it dynamically as needed?
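For reference, the static limit today is baked in at build time through the
kernel configuration file, along the lines of (the value here is just an
example, not a recommendation):

```
options         NMBCLUSTERS=4096
```

A dynamic scheme would presumably keep something like this as the initial
value and grow toward a separate hard ceiling.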

Or, at the very least, have a way for it to recover and get back on the
network once the heavy traffic subsides.  But in the case of our web server,
it was off the air for 5 hours ... no automatic recovery there.

Now, as a backup, I could set up a daemon process to watch for the
NMBCLUSTERS syslog messages, and then restart the machine, increase
NMBCLUSTERS, or ifconfig the interface down/up (if that does the trick)
when they appear ... but while that may fix my particular situation, it's
not really a generic solution.
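Such a watchdog might look roughly like the sketch below.  Assumptions not
confirmed by anything above: that the kernel's warning contains the string
"mclpool limit reached" (the exact text may vary by release), that "de0"
stands in for the real interface name, and that bouncing the interface
actually recovers the stack -- which is precisely the open question.

```shell
#!/bin/sh
# Hedged sketch of an NMBCLUSTERS watchdog; interface name is hypothetical.
IF=${IF:-de0}

# Return success if a syslog line looks like mbuf-cluster exhaustion.
is_nmbclusters_warning() {
    case $1 in
        *"mclpool limit reached"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Bounce the interface -- assumes (unverified) that this gets the
# machine back on the network once the traffic subsides.
bounce_interface() {
    ifconfig "$IF" down
    ifconfig "$IF" up
}

# Feed this from the syslog, e.g.:
#   tail -f /var/log/messages | watch_for_exhaustion
watch_for_exhaustion() {
    while read -r line; do
        if is_nmbclusters_warning "$line"; then
            bounce_interface
        fi
    done
}
```

This only papers over one symptom on one machine; it does nothing about the
underlying limit, which is the generic problem.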

Tom Haapanen
tomh@motorsport.com