Subject: Re: CVS commit: syssrc/sys/kern
To: Jaromir Dolecek <jdolecek@netbsd.org>
From: Roland Dowdeswell <elric@imrryr.org>
List: tech-kern
Date: 12/08/2002 20:57:06
On 1039374769 seconds since the Beginning of the UNIX epoch
Jaromir Dolecek wrote:
>

>There is some prior art, even.  During an out-of-memory condition,
>processes asking for more memory are killed.  Processes not asking
>for more memory can continue running happily.  Similarly with CPU
>limits: a process is terminated if it reaches the limit.

Well, except for some important details.  First, a process running
into its own process limits for children does not indicate any sort
of resource starvation situation, so why are we taking drastic
measures in that case?  As I mentioned in a prior email[1], a
process has no way to know whether it's going to run into its
process limits without circumventing the entire system anyway.  So
making changes like this will force applications to circumvent the
system in order to be sure that they aren't paused randomly (or at
least randomly from their perspective), which increases the
likelihood of DoS in real world situations.
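
To make the point concrete, here is a rough userland sketch (mine,
not from the commit in question) of the only reliable way a process
learns that it has hit its child limit: fork() failing with EAGAIN.
Any pre-flight check against getrlimit() is inherently racy, since
other processes under the same uid can consume the remaining slots
in between:

	#include <sys/types.h>
	#include <sys/resource.h>
	#include <err.h>
	#include <errno.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		struct rlimit rl;
		pid_t pid;

		if (getrlimit(RLIMIT_NPROC, &rl) == -1)
			err(1, "getrlimit");

		/*
		 * Racy: other processes under this uid may use up the
		 * remaining slots before the fork() below runs, so
		 * knowing the limit proves nothing.
		 */
		printf("soft child limit: %lld\n", (long long)rl.rlim_cur);

		pid = fork();
		if (pid == -1 && errno == EAGAIN) {
			/* the one reliable signal: back off and retry */
			errx(1, "fork: process limit reached");
		}
		if (pid == 0)
			_exit(0);
		return 0;
	}

If fork() silently pauses the caller instead, this idiom breaks and
the application has no error left to react to.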

To draw a number of differences between the memory starvation
situation and the fork situation, please consider that in the
memory starvation situation:

	1.  the OS has already promised the resources thinking that
	    they won't actually be used,
	2.  there is no good way to return an error,
	3.  people have suggested a no overcommit strategy to take
	    care of the issue (and I suggested a probabilistic no
	    overcommit strategy[2]; see the sketch below).
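
Purely for illustration, here is one way such a probabilistic
admission check might look; the names and numbers are hypothetical,
and the actual proposal in [2] may differ in detail.  The idea is
to refuse new commitments with a probability that grows as the
commit charge approaches a ceiling, so that the system degrades
gradually instead of promising everything and killing processes
later:

	#include <stdio.h>
	#include <stdlib.h>

	static long committed;			/* pages already promised */
	static const long ceiling = 100000;	/* hard commit ceiling */
	static const long threshold = 80000;	/* always grant below this */

	static int
	grant(long pages)
	{
		long after = committed + pages;
		double p_refuse;

		if (after <= threshold) {	/* plenty of headroom */
			committed = after;
			return 1;
		}
		if (after > ceiling)		/* strict upper bound */
			return 0;

		/*
		 * Refuse with probability proportional to how far into
		 * the overcommit band this request would take us.
		 */
		p_refuse = (double)(after - threshold) /
		    (ceiling - threshold);
		if ((double)random() / 0x7fffffff < p_refuse)
			return 0;
		committed = after;
		return 1;
	}

	int
	main(void)
	{
		int i, granted = 0;

		for (i = 0; i < 100; i++)
			granted += grant(1000);
		printf("granted %d of 100 requests\n", granted);
		return 0;
	}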

Now, a long time ago I was thinking that it would be interesting
for processes to be scheduled on a user-by-user basis.  This would
also (although that was not the motivation at the time) take care
of a user-level forkbomb.  Charles Hannum suggested a multi-level
scheduler to me in reference to this discussion, and there is prior
art in a patch by Rik van Riel to the Linux kernel that does
this[3,4].  There is another implementation of a multi-level
scheduler for Linux on SourceForge[5].  This might be an effective
mechanism both to enforce fairer scheduling and, incidentally, to
defang a forkbomb in a slightly more elegant fashion, because if
you are scheduling by users then one user gets at most an equal
share of the CPU (e.g. 50% when two users are competing) regardless
of the number of processes they create.
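
As a rough illustration (a userland simulation, not kernel code,
and not taken from any of the patches cited below), a two-level
pick might charge CPU time to the owning user, always service the
least-charged user first, and go round-robin among that user's
processes:

	#include <stdio.h>

	#define NPROC	6
	#define NUSER	2

	/* user 0 owns five busy processes, user 1 owns one */
	static const int owner[NPROC] = { 0, 0, 0, 0, 0, 1 };

	static long cpu_used[NUSER];	/* ticks charged per user */
	static int cursor[NUSER];	/* per-user round-robin position */

	/*
	 * Two-level selection: pick the least-charged user, then the
	 * next process that user owns.
	 */
	static int
	pick_next(void)
	{
		int u, i, idx;

		u = (cpu_used[0] <= cpu_used[1]) ? 0 : 1;
		for (i = 0; i < NPROC; i++) {
			idx = (cursor[u] + i) % NPROC;
			if (owner[idx] == u) {
				cursor[u] = (idx + 1) % NPROC;
				cpu_used[u]++;
				return idx;
			}
		}
		return -1;
	}

	int
	main(void)
	{
		int t;

		/* ten ticks: user 1's lone process still gets ~50% */
		for (t = 0; t < 10; t++)
			printf("tick %d: run process %d\n", t, pick_next());
		return 0;
	}

Running it shows the ticks alternating between the two users: the
five processes split one user's half of the CPU while the lone
process gets the other half, which is exactly the property that
makes a forkbomb harmless to other users.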

It is important to realise that the real problem with forkbombs is
not that they create 160 processes, but that those processes
busy-loop and create a CPU resource starvation situation.  It is
not actually the fork system call that is the problem; it is the
scheduling.
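
The canonical forkbomb makes the point: every process, parent and
child alike, spins at full speed, and it is that spinning, not the
fork() calls themselves, which starves the rest of the system:

	#include <unistd.h>

	/*
	 * Each iteration either creates a new spinner or keeps this
	 * one spinning; the CPU burn, not the process count, does
	 * the real damage.
	 */
	int
	main(void)
	{
		for (;;)
			fork();
		/* NOTREACHED */
	}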

Please note that this email is just a few minutes of looking around
on Google and isn't actually a proper search for prior art.  There
is a lot of information in the literature about schedulers and
whatnot, and if we are going to look into any of these strategies
we should read it and determine which strategy is best.

[1]	R. C. Dowdeswell, ``Re: CVS commit: syssrc/sys/kern'',
	Message-Id: <20021208141121.D6629174D2@arioch.imrryr.org>,
	http://mail-index.netbsd.org/tech-kern/2002/12/08/0006.html,
	December 2002.

[2]	R. C. Dowdeswell, ``Re: overcommit'',
	Message-Id: <20000321190221.ACA0C1B35@mabelode.imrryr.org>,
	http://mail-index.netbsd.org/tech-kern/2000/03/21/0002.html,
	March 2000.

[3]	Rik van Riel, ``[patch] fairsched for 2.2'',
	http://www.uwsg.iu.edu/hypermail/linux/kernel/0004.0/0723.html and
	http://www.surriel.com/patches/2.2.15-fairsched,
	April 2000.

[4]	Sam Vilain, ``Fairly allocating kernel resources'',
	http://www.mail-archive.com/freevsd-devel@freevsd.org/msg00114.html,
	September 2001.

[5]	Borislav Deianov, ``Fairsched - Hierarchical Fair Scheduler for
	Linux'', http://fairsched.sourceforge.net/,
	July 2000?.

--
    Roland Dowdeswell                      http://www.Imrryr.ORG/~elric/