Re: proposal: inetd improvements.
On Jun 3, 2010, at 6:16:32 AM, elric%imrryr.org@localhost wrote:
> On 1275557382 seconds since the Beginning of the UNIX epoch
> der Mouse wrote:
>>> I can't help but think that this talkd fat finger example is quite an
>>> edge case but the ``solution'' is quite a problem. If you exceed the
>>> connexions per minute, inetd will _stop_ the service for ten minutes.
>>> It's just not reasonable to stop a service for ten minutes because a
>>> certain threshold has been exceeded.
>> It certainly is. Maybe not in your environment, but definitely in
>> others. Indeed, for some, such as my own, the idea is good but ten
>> minutes isn't long enough.
>>> The numbers appear to be straight from the eighties, from a
>>> University environment.
>> So there you are: you even found such an environment (one where it's
>> reasonable) for yourself.
> Actually, that was more of a joke. Sorry for not putting a smiley
> face after it. I thought that the reference to the eighties was
> enough to give it away.
> This kind of behaviour is basically never desirable. The problem
> that it solves is an edge case and the constraints that it imposes
> on other daemons that are configured in inetd are arbitrary and
> generally unacceptable. You almost never want to set up a service
> where exceeding a particular threshold that has little to do with
> actual system performance will cause complete and utter systemic failure.
In fact, when this happened -- and I, like others, have seen it first-hand
-- it did have a serious effect on total system performance.
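[For readers without the syntax in front of them: on NetBSD's inetd, the limit under discussion is the optional suffix on the wait/nowait field -- a dot followed by the maximum number of invocations permitted per 60 seconds, defaulting to 40 when omitted; exceeding it disables the service for ten minutes. A sketch (the numbers are illustrative, not recommendations):]

```
# /etc/inetd.conf fragment -- the .max suffix caps invocations per minute
ntalk    dgram   udp  wait.50     root  /usr/libexec/ntalkd  ntalkd
daytime  stream  tcp  nowait.200  root  internal
```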
> Worse, if you've got something like a load balanced
> service, the failure of one component for exceeding the threshold
> almost guarantees that you will have a cascade failure that will
> take down the entire service as the problematic clients start
> failing over to other servers which are now more likely to exceed
> their thresholds.
> All of this excitement and the best use case that I've seen so far
> is that you've supplied the wrong options to a wait service which
> would then exit quickly and need to be respawned because it did
> not service the outstanding request on the queue. Surely, this
> very unusual problem could be solved in a different way? Maybe,
> even, a way that does not affect nowait services?
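[To make that failure mode concrete: for a datagram "wait" service, inetd hands the daemon the listening socket and does not listen on it again until the daemon exits. If a fat-fingered flag makes the daemon exit immediately without consuming the pending datagram, the socket remains readable and inetd forks the daemon again at once -- a tight respawn loop, which is what the rate limit guards against. A hypothetical illustration (the -x flag is made up):]

```
# /etc/inetd.conf -- a bad flag turns this into a respawn loop:
# ntalkd rejects -x and exits; the datagram is never read, so inetd
# immediately forks ntalkd again, indefinitely (absent the rate limit).
ntalk  dgram  udp  wait  root  /usr/libexec/ntalkd  ntalkd -x
```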
> Anyway, I was not suggesting that we completely excise the
> functionality from inetd. I was suggesting that we add some
> reasonable limitations in. Limits on the maximum number of
> outstanding children or the like. This sort of thing actually
> represents much more closely the load on the system that you are
> willing to allow the service to consume and does not have the same
> kinds of systemic failure cases as what we currently have.
> As for the current limits, I would suggest that we consider disabling
> them by default. And most certainly we should carefully document in
> the man page why you never want to use them for anything that actually matters.
>> If you're going to complain about others imposing their environments'
>> appropriate solutions on you, it would behoove you to avoid doing the
>> same in the other direction.
> I am not complaining about others imposing their environments'
> appropriate solutions on me. I am pointing out that this particular
> behaviour doesn't make sense in any environment but might be
> considered to be acceptable in a hobbyist/university setting. This
> does not mean that it is desirable either; it just means that its
> failure cases are not as critical in those settings.
It's not a "university setting" issue -- I used to wish for this, way back
when, when I was running production machines at Bell Labs. It really
does solve a very real problem.
The question is whether or not there's a better solution to this problem
among others. I won't argue if you say that changes in technology make
this approach less desirable than it once was -- but don't forget that the
new solution has to solve this problem, too.
--Steve Bellovin, http://www.cs.columbia.edu/~smb