Subject: Re: CVS commit: syssrc/sys/kern
To: Jaromir Dolecek <jdolecek@netbsd.org>
From: Greg A. Woods <woods@weird.com>
List: tech-kern
Date: 12/08/2002 20:08:25
[ On Sunday, December 8, 2002 at 20:12:49 (+0100), Jaromir Dolecek wrote: ]
> Subject: Re: CVS commit: syssrc/sys/kern
>
> Bill Studenmund wrote:
> > Well, that seems like part of the problem. You have a view of what is
> > proper behavior, and part of that seems to be that running into the
> > process limit is punishable; if you're running into the limit, you're
> > mis-behaving.
> 
> Yes, this is more or less my view.

Then you are (still) very wrong in your understanding.

Limits are there to ensure the system doesn't panic or deadlock, not to
punish offenders.  Such punishment is a policy decision that may or may
not be valid on any given system.  If you want to implement such
punishment then logging the user-id of the offender (in some safe and
secure way) and auditing the logs is the best way to do so.

> Definitely a process running into its limits is more suitable for
> punishment than some random other process. I believe that the limits
> are supposed to be set so that the users/system are able to do
> their work, but still catch runaway cases.

Yes, of course!

>  So if the limit _is_
> reached, it's IMHO fine to use drastic measures.

NO.

Limits are also there simply to ensure that a user never exceeds what's
been defined as his or her fair share of system resources.  There's no
problem with the user using all available resources, and no problem with
attempting to exceed them (since that's a valid way of knowing in real
time whether the sum of all applications currently being run by that
user has reached the limit).  The kernel is the only thing that can
reliably and securely tell a user when he or she has reached the limit
and it does so by returning an error when an attempt is made to exceed
that limit.  It is not possible to generically or securely do this in
any other way from userland -- simply not possible at all.  The kernel
MUST NOT EVER penalize a user for trying to exceed the limit -- the
error and the failure to honour the request is more than enough penalty.
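
Indeed a well-behaved program can use exactly that error as its
real-time notification.  Here's a minimal sketch of the idea in C
(fork(2) fails with EAGAIN at the per-user process limit; error
handling trimmed):

    /*
     * Sketch: treat fork(2) failing with EAGAIN as the kernel's
     * notification that this user is at the process limit.
     */
    #include <sys/wait.h>
    #include <err.h>
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            pid_t pid = fork();

            if (pid == -1) {
                    if (errno == EAGAIN) {
                            /* at the limit -- the error is the answer */
                            fprintf(stderr, "process limit reached\n");
                            return 1;
                    }
                    err(1, "fork");
            }
            if (pid == 0)
                    _exit(0);       /* child: do its work and exit */
            (void)waitpid(pid, NULL, 0);
            return 0;
    }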


> There is some prior art, even. During an out-of-memory condition,
> processes asking for more memory are killed.  Processes not asking
> for more memory can continue running happily.  Similarly with CPU
> limits: a process is terminated if it reaches the limit.

That's been one of the more controversial features ever added to any
unix variant!  You really should not have dragged that dead cat in here!
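
(For the record, even the CPU-time limit isn't a blind punishment:
the kernel delivers SIGXCPU at the soft RLIMIT_CPU, and a process
may catch it and wind down cleanly; only the hard limit is
unconditionally fatal.  A rough sketch of that mechanism:)

    /*
     * Sketch: SIGXCPU arrives at the soft CPU limit and can be
     * caught; the hard limit is what finally kills the process.
     */
    #include <sys/resource.h>
    #include <signal.h>
    #include <unistd.h>

    static void
    on_xcpu(int sig)
    {
            (void)sig;
            write(2, "CPU limit hit, winding down\n", 28);
            _exit(1);
    }

    int
    main(void)
    {
            struct rlimit rl = { 1, 2 };    /* soft 1s, hard 2s */

            (void)signal(SIGXCPU, on_xcpu);
            (void)setrlimit(RLIMIT_CPU, &rl);
            for (;;)
                    continue;       /* burn CPU until signalled */
    }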



> Process slots are not that scarce. The limit on the number of
> processes is reached very seldom.

You're confusing things here.  The total number of process slots must be
shared by all users of the system.  The maximum number of slots allowed
for any given user should normally be much less than the maximum for the
whole system, at least on multi-user systems where this whole fork-bomb
issue is in any way relevant.
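
The two numbers are easy enough to compare, at least on NetBSD.
A quick sketch (error checks omitted for brevity):

    /*
     * Sketch: the per-user process limit vs. the system-wide
     * kern.maxproc -- the former should normally be much smaller.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/resource.h>
    #include <stdio.h>

    int
    main(void)
    {
            int mib[2] = { CTL_KERN, KERN_MAXPROC };
            int maxproc;
            size_t len = sizeof(maxproc);
            struct rlimit rl;

            (void)sysctl(mib, 2, &maxproc, &len, NULL, 0);
            (void)getrlimit(RLIMIT_NPROC, &rl);
            printf("per-user limit %ld of %d system-wide slots\n",
                (long)rl.rlim_cur, maxproc);
            return 0;
    }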

> When it _is_ reached, it
> is very likely that Something isn't behaving properly; either
> there is some Unexpected Load, a DoS going on, a silly mistake
> by a local user, or something is misconfigured. In all these
> cases, the induced sleep helps the administrator to get things
> under control more easily.

Nope, that's all wrong.  Now you're imposing external site- or
host-specific policies on a generic implementation.  That's wrong.
Very wrong in this particular case.


> There are no mysterious failures caused by the sleep.

Wrong again.  The forced sleep can prevent the system from attaining its
maximum potential throughput.  This has been demonstrated, at least in
theory, with a real-world application!
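
The pattern in question is easy to sketch: a master process that
keeps as many workers running as its limit allows, using the prompt
EAGAIN from fork(2) as its pacing signal (the do_work() body here
is just a hypothetical placeholder):

    /*
     * Sketch: a master that keeps as many workers running as the
     * limit allows, using EAGAIN from fork(2) as its pacing signal.
     * If fork() sleeps before failing, the master stalls on every
     * probe and the pool runs below capacity.
     */
    #include <sys/wait.h>
    #include <err.h>
    #include <errno.h>
    #include <unistd.h>

    static void
    do_work(void)                   /* hypothetical worker body */
    {
            usleep(1000);
    }

    int
    main(void)
    {
            for (;;) {
                    pid_t pid = fork();

                    if (pid == 0) {
                            do_work();
                            _exit(0);
                    }
                    if (pid == -1) {
                            if (errno != EAGAIN)
                                    err(1, "fork");
                            /* at the limit: reap one, freeing a slot */
                            (void)wait(NULL);
                    }
            }
    }

With a plain, immediate EAGAIN the master refills a freed slot as
soon as one opens up; with an imposed sleep every refill becomes
dead time, and the pool runs below the very limit the administrator
configured for it.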

-- 
								Greg A. Woods

+1 416 218-0098;            <g.a.woods@ieee.org>;           <woods@robohack.ca>
Planix, Inc. <woods@planix.com>; VE3TCP; Secrets of the Weird <woods@weird.com>