Subject: Re: CVS commit: syssrc/sys/kern
To: None <tech-kern@netbsd.org>
From: Christos Zoulas <christos@zoulas.com>
List: tech-kern
Date: 12/09/2002 03:17:02
In article <200212081912.gB8JCnm00669@s102-n054.tele2.cz>,
Jaromir Dolecek <jdolecek@netbsd.org> wrote:
>
>Yes, this is more or less my view.
> 
>Definitely a process running into its limits is more suitable for
>punishment than some random other process. I believe that the limits
>are supposed to be set so that the users/system are able to do
>their work, but still catch runaway cases.  So if the limit _is_
>reached, it's IMHO fine to use drastic measures.

But they should not be punished on a syscall-by-syscall basis as
you propose; they should be punished in a centralized and well-thought-out
fashion.

>There is some prior art, even. During an out-of-memory condition,
>processes asking for more memory are killed.  Processes not asking
>for more memory can continue running happily.  Similarly with CPU
>limits: a process is terminated if it reaches the limit.

Nonsense. You are comparing apples and oranges. The system overcommitted
memory allocation here, had a choice between deadlocking and killing a
process, and chose the latter. If the system had not overcommitted memory
allocation in the first place, it would never have needed to kill the process.
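
To make that concrete, here is a rough sketch (untested, and assuming a
64-bit host and an allocation larger than RAM plus swap): under
overcommit the malloc() "succeeds" and the process only gets shot later,
when it touches the pages; under strict accounting malloc() fails up
front and the program can recover gracefully.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	/* Assumed larger than physical memory plus swap. */
	size_t len = (size_t)64 * 1024 * 1024 * 1024;
	char *p = malloc(len);

	if (p == NULL) {
		/* Strict accounting: a clean, recoverable failure. */
		perror("malloc");
		return 1;
	}
	/*
	 * Overcommit: the allocation "succeeded", but faulting the
	 * pages in may force the kernel to kill this (or some other)
	 * process.
	 */
	memset(p, 1, len);
	printf("touched %zu bytes\n", len);
	free(p);
	return 0;
}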

>Process slots are not that scarce. The limit for the number of
>processes is reached very seldom. When it _is_ reached, it
>is very likely that Something isn't behaving properly; either
>there is some Unexpected Load, a DoS going on, a silly mistake
>by a local user, or something is misconfigured. In all these
>cases, the induced sleep helps the administrator get things under
>control more easily.

You are only preventing the silly-mistake style of fork() bomb. I bet
that I can make the system completely unusable without hitting any
of the resource limits that would put my process to sleep.

Ok, then start putting limits on other system resources. Hell, this
process has mmapped too many files, and it's constantly doing I/O to
them, thus making the system unbearably slow. Let's put it to sleep
for a while. Well, guess where this sleep belongs: in the I/O
syscall, or in the scheduler?
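
For instance, something along these lines stays comfortably inside the
usual process and descriptor limits while keeping the disks and the
page daemon saturated (an untested sketch; the file names are made up,
and the files are assumed to be large and pre-existing):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>

#define NFILES 8

int
main(void)
{
	char name[32], *map[NFILES];
	struct stat st;
	size_t size[NFILES];
	int i, fd;

	for (i = 0; i < NFILES; i++) {
		snprintf(name, sizeof(name), "bigfile%d", i);
		if ((fd = open(name, O_RDWR)) == -1 || fstat(fd, &st) == -1)
			return 1;
		size[i] = (size_t)st.st_size;
		map[i] = mmap(NULL, size[i], PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
		if (map[i] == MAP_FAILED)
			return 1;
	}
	/* Dirty one byte per page in each mapping, flush, repeat. */
	for (;;) {
		for (i = 0; i < NFILES; i++) {
			size_t off;

			for (off = 0; off < size[i]; off += 4096)
				map[i][off]++;
			msync(map[i], size[i], MS_SYNC);
		}
	}
}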

In other e-mails you keep saying how clever and simple this is,
and wondering how it is possible that someone else has not thought of
it in the past, since you have not seen anything like it before. Well,
it is because it is inelegant, a kluge, and anyone who cared enough
about the problem to come up with the solution in the first place
decided that the solution was not good enough and chose to keep
living with the problem.

>There are no mysterious failures caused by the sleep. If the
>out-of-slots condition passes, all system activity returns to normal
>shortly. If the out-of-slots condition continues, the processes
>most likely causing trouble (those forking) are punished. Maybe
>it's not quite ideal behaviour, but it's quite a good approximation
>and has zero overhead cost.
>
>> Saying a program is, "broken," because it uses what was a perfectly
>> legitimate programming method before, though, seems judgemental.
>>
>> I think it's perfectly fine to fork until you can't. The kernel has to
>> keep track of your process limit, and can politely tell you you've hit it.
>> Why duplicate that in userland? Also, by doing that, if the kernel limit
>> is ever changed, you immediately can take advantage of it.
>
>Yes, I think it's perfectly fine to do that as well. Just don't
>expect to do that AND get the resources immediately all the time.
>If real-time response is expected, there is no other option than
>to pre-allocate (pre-fork, in this case), and more carefully control
>how many resources are consumed.

Why not? If I set my resource limit low, let's say to 10 processes,
and then I write code that depends on hitting that limit each time
in order to do process accounting, then my program will not work
properly in the presence of the sleep. Should I also be punished
each time I open a file and get EPERM or ENOENT? How about when
I do asynchronous I/O and get EAGAIN?
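
Something like this (an untested sketch; the limit of 10 is the
arbitrary value from above) is perfectly legitimate code today, and
an induced sleep on the failing fork() path turns it into a program
that mysteriously stalls:

#include <sys/resource.h>
#include <sys/wait.h>
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct rlimit rl = { 10, 10 };	/* allow at most 10 processes */
	pid_t pid;
	int nforked = 0;

	if (setrlimit(RLIMIT_NPROC, &rl) == -1)
		return 1;

	while ((pid = fork()) != -1) {
		if (pid == 0) {		/* child: just occupy a slot */
			pause();
			_exit(0);
		}
		nforked++;
	}
	if (errno == EAGAIN)		/* the expected, documented failure */
		printf("%d process slots were free\n", nforked);

	/* Clean up: terminate and reap all the children. */
	signal(SIGTERM, SIG_IGN);
	kill(0, SIGTERM);
	while (wait(NULL) > 0)
		continue;
	return 0;
}

The whole point of the loop is to hit the limit and get EAGAIN
immediately; fork() failing fast is the interface the program relies on.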

christos