Subject: Re: NetBSD1.6 UVM problem?
To: Steven J. Dovich <dovich@tiac.net>
From: Gary Thorpe <gathorpe79@yahoo.com>
List: tech-kern
Date: 12/09/2002 10:08:41
 --- "Steven J. Dovich" <dovich@tiac.net> wrote: > > > > yes, the
heuristic for determining when to start killing
> processes
> > > > when no swap is available doesn't work so well when there's no
> swap
> > > > configured.
> > 
> > I've never understood why a process that is behaving could randomly
> get
> > killed.
> > 
> > Doesn't it make more sense to simply return ENOMEM to the process
> requesting
> > memory, and let -it- deal with the problem?  The OS needs to keep
> reserved
> > enough stack, etc, to make sure it remains consistent.
> 
> How do you return ENOMEM to userlevel when a page-fault encounters
> no more memory, and no more swap space to trade with?  What you
> imply is that you want a no-overcommit architecture. That requires
> that copy-on-write must always assume that the write will happen,
> and account for that consumption. Not a good mechanism for effective
> use of resources...
> 
> /sjd

How about just maintaining a counter of the number of available pages
in the system? New requests and frees would still be lazy: they would
only adjust the counter and defer the actual page allocations and
writes until later. Would this be a viable alternative to overcommit?
Or at least an option?
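Roughly what I mean, as an illustrative sketch only (this is not UVM
code, and all of the names below are made up):

    /*
     * Sketch of strict commit accounting with a single counter.
     * Reservations are charged when address space is promised
     * (mmap, fork, stack growth); the pages themselves are still
     * allocated lazily at fault time.  Locking is omitted.
     */
    #include <errno.h>

    static long commit_limit;    /* RAM pages + swap pages available */
    static long commit_charged;  /* pages already promised */

    /* Charge npages against the limit; fail instead of overcommitting. */
    int
    commit_reserve(long npages)
    {
            if (commit_charged + npages > commit_limit)
                    return ENOMEM;  /* caller sees the failure; nothing gets killed */
            commit_charged += npages;
            return 0;
    }

    /* Release the reservation when the mapping goes away. */
    void
    commit_release(long npages)
    {
            commit_charged -= npages;
    }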

It's funny how people can rail about how a 0.5 sec delay on fork() when
a process limit is reached "punishes" processes unfairly, yet have no
problem with killing them at random when memory is overcommitted... so
much for reliability: you can lock up, crash, or reboot *BSD or Linux
systems with a 10-30 line "malloc big memory, fork a bunch of times,
then write to the memory" program. Why not worry about these fork
"bombs"? They seem much more destructive.
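Something along these lines (an untested sketch of the sort of program
I mean; the chunk size and fork count are arbitrary):

    /*
     * Reserve a large chunk, fork a bunch of times, then have every
     * process dirty its pages.  With overcommit the copies only need
     * real memory at write time, so the forks all succeed and the
     * system only discovers the shortage afterwards.
     */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (256UL * 1024 * 1024)   /* adjust to taste */

    int
    main(void)
    {
            char *p = malloc(CHUNK);
            int i;

            if (p == NULL)
                    return 1;       /* the allocation itself rarely fails */

            for (i = 0; i < 8; i++)
                    fork();         /* each pass doubles the process count */

            memset(p, 1, CHUNK);    /* now every process wants real pages */
            pause();
            return 0;
    }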
