tech-userlevel archive


Re: proposal: inetd improvements.



On 1275655445 seconds since the Beginning of the UNIX epoch
Manuel Bouyer wrote:
>
>On Thu, Jun 03, 2010 at 11:18:30PM +0100, elric%imrryr.org@localhost wrote:
>> In the e-mail to which you are replying, I specifically stated that
>> I would leave the feature in but strongly discourage its use.
>
>I'm not sure why you want to "strongly discourage" it. It's not an
>edge case, it's a useful feature to keep a multi-service system
>running when one of the hosted services is under DDoS (intentional
>or not). 

I would strongly discourage its use because it is actually not a very
good way of curtailing load.

Connexions per minute with a ten-minute backoff period has the
following properties, which make its use rather suboptimal in almost
every case:

        1.  throttling per minute makes it quite difficult or well
            nigh impossible to achieve a reasonable configuration
            setting where you can both:

                i.   achieve an acceptable average throughput, and

                ii.  prevent overloading.

            Let's just take a quick example.  Let's say you have
            a service which typically takes 50ms to run but for
            some requests the processing time might be as long as
            2s.  This is not terribly atypical for, e.g. a web
            server serving a combination of static pages and CGI
            scripts.  Now, let's say that I find that my machine
            starts to bog down when the load average reaches 50.
            How exactly can I use ``connexions per minute'' to
            prevent this from happening?  How can I get a reasonable
            configuration?  How can I actually make any assurance
            _at_ _all_ about the load on the system?

            Answer: well, I can't do anything reasonable with
            connexions per minute.  I mean, I could say that the
            average request takes 200ms, which means that on
            average I should be able to serve 5 req/s, and so
            300/minute would be reasonable.  But the problem is
            that if the workload shifts to be more heavily focussed
            on CGI scripts, nothing stops all 300 permitted
            connexions from arriving nearly at once, each holding
            a process for 2s, and my load average could very easily
            spike up to 300, which is exactly what I'm trying to
            prevent.  (See the worked numbers after this list.)

            If I set the number lower, then I am not achieving the
            throughput that I set out to achieve.

            And hence, we discover that we have a rather blunt
            knife with which to try to prevent overloading.  Either
            we set the number way too low or we risk falling over.

            Throttling per second makes this problem less obvious
            but doesn't solve it entirely as you can see by thinking
            through the above example.

        2.  the ten-minute backoff creates a further complication:
            you're basically handing people the ability to DoS
            your services without having to try very hard; they'll
            do it for you.  In general, if you are setting up a
            service where there's some importance in keeping it
            up, you do not want to stop servicing requests entirely
            for a long period of time.  You want to configure the
            service such that it achieves a steady state at a
            reasonable and acceptable maximal load, not a
            configuration which allows for incredibly peaky load
            followed by apparently arbitrary cutoffs.
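
To put numbers on the first point (same figures as the example
above; the max-children value is hypothetical, chosen to match the
stated load target of 50):

        average service time:    200ms -> 5 req/s -> 300/minute
        worst-case service time: 2s (an all-CGI workload)

        300 connexions/minute:   all 300 may arrive in the first
                                 second, each holding a process
                                 for 2s, so peak concurrency (and
                                 hence roughly the load average)
                                 is ~300.

        max children = 50:       peak concurrency is 50 by
                                 construction; throughput adjusts
                                 itself between 50/2s = 25 req/s
                                 (all CGI) and 50/50ms = 1000 req/s
                                 (all static pages).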

So, my proposal is simply to add another mechanism for controlling
the load, i.e. a maximum number of outstanding children; to
encourage people who are considering using inetd for serious
purposes to use it; and to discourage use of the existing mechanism
by providing an expanded version of the above reasoning in the
documentation.
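
To make that concrete, here is a minimal sketch (not the actual
patch) of the sort of mechanism I mean: fork a child per connexion,
count outstanding children, reap them in a SIGCHLD handler, and
simply stop accepting while at the limit, so that pending connexions
wait in the kernel's listen backlog instead of being refused for
ten minutes.  The port, the service path and the limit of 50 are
placeholders.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

#define MAX_CHILDREN    50      /* matches the load target above */

static volatile sig_atomic_t nchildren;

static void
reap(int signo)
{

        (void)signo;
        /* Collect every exited child and release its slot. */
        while (waitpid(-1, NULL, WNOHANG) > 0)
                nchildren--;
}

int
main(void)
{
        struct sockaddr_in sin;
        struct sigaction sa;
        sigset_t chld, empty;
        int s, fd;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = reap;
        sa.sa_flags = SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, NULL);

        sigemptyset(&chld);
        sigaddset(&chld, SIGCHLD);
        sigemptyset(&empty);

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);             /* placeholder port */
        sin.sin_addr.s_addr = htonl(INADDR_ANY);

        s = socket(AF_INET, SOCK_STREAM, 0);
        bind(s, (struct sockaddr *)&sin, sizeof(sin));
        listen(s, 128);

        for (;;) {
                /*
                 * At the limit, stop accepting: pending connexions
                 * wait in the listen backlog instead of being
                 * refused.  SIGCHLD is blocked around the test so
                 * a wakeup cannot be lost between the check and
                 * the sigsuspend().
                 */
                sigprocmask(SIG_BLOCK, &chld, NULL);
                while (nchildren >= MAX_CHILDREN)
                        sigsuspend(&empty);
                sigprocmask(SIG_UNBLOCK, &chld, NULL);

                fd = accept(s, NULL, NULL);
                if (fd < 0)
                        continue;               /* e.g. EINTR */

                /* Keep SIGCHLD blocked across fork()/increment. */
                sigprocmask(SIG_BLOCK, &chld, NULL);
                switch (fork()) {
                case 0:                         /* child */
                        sigprocmask(SIG_UNBLOCK, &chld, NULL);
                        close(s);
                        dup2(fd, STDIN_FILENO);
                        dup2(fd, STDOUT_FILENO);
                        execl("/usr/libexec/myservice", "myservice",
                            (char *)NULL);
                        _exit(1);
                case -1:                        /* fork failed; drop */
                        break;
                default:
                        nchildren++;
                }
                sigprocmask(SIG_UNBLOCK, &chld, NULL);
                close(fd);
        }
}

The point is that the cap bounds concurrency, and hence load,
directly; throughput then adapts to the actual mix of cheap and
expensive requests rather than being fixed in advance.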

--
    Roland Dowdeswell                      http://Imrryr.ORG/~elric/

