Subject: Re: NetBSD1.6 UVM problem? - Problem restated!
To: Matthew Mondor <mmondor@gobot.ca>
From: Gary Thorpe <gathorpe79@yahoo.com>
List: tech-kern
Date: 12/09/2002 12:40:22
 --- Matthew Mondor <mmondor@gobot.ca> wrote:
> On Mon, Dec 09, 2002 at 10:30:18AM -0500, Gary Thorpe wrote:
> 
> > The only "solution" available now would be to set resource limits
> > to guarantee that unprivileged users cannot gobble memory, but then
> > would that stop "malloc() then fork() then write memory" programs
> > (and basically any program which fork()'s and does not exec())? If
> > it did work, would it defeat the purpose of allowing overcommit in
> > the first place by not allowing maximum usage of memory?
> > 
> > Resource limits would also be ineffective for daemons running as
> > root that try the "use more resources until requests fail" style of
> > resource management (advocated for fork() usage by some, and which
> > is probably correct when the system tracks its resources properly),
> > since malloc() can succeed when it really should fail.
> 
> If it were possible, if at least the killed process(es) were the
> memory eaters, it would be a definite improvement. Well-written
> applications which are expected to have high uptimes, such as
> servers, shouldn't leak memory and should check various error
> conditions and limits to protect themselves from reaching an
> unacceptable size. The responsible processes would then most likely
> be the bomb attack or a less critical application.

But this is the problem: the application can check the return value of
malloc(), but even if it succeeds, the process may be killed later on
if the system runs out of memory. There is no "memory eater" as such:
it's just that the system has overcommitted itself. There is nothing
an application can do about this particular problem. There is no way
for an application to detect when memory is running out, since the
system basically lies about it. What to do?
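
To make the point concrete, here is a rough sketch of a program that
does everything "right" and can still be killed. The 1 GB figure is
arbitrary and the exact behaviour depends on the kernel's overcommit
policy; this is only an illustration, not a claim about what any
particular kernel does:

/*
 * Sketch only: checking malloc()'s return value is not enough under
 * overcommit.  malloc() may succeed because only address space is
 * reserved; the process can still die later, when the pages are
 * actually touched and no physical memory or swap is left.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	size_t len = 1024UL * 1024 * 1024;	/* arbitrary large request */
	char *p = malloc(len);

	if (p == NULL) {
		/* The "honest" failure an application can handle. */
		fprintf(stderr, "malloc failed\n");
		return 1;
	}

	/*
	 * With overcommit, the real commitment happens here, page by
	 * page, as each write faults memory in.  If the system is out
	 * of memory at that point, the process may simply be killed;
	 * there is no error code to check.
	 */
	memset(p, 0xa5, len);

	printf("touched %lu bytes\n", (unsigned long)len);
	free(p);
	return 0;
}

The only failure the program can handle is the NULL return; the
failure that actually kills it happens inside memset(), where there is
nothing to check.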

> 
> When I say that a well-written application will manage to keep a
> decent size, I think of an ftp server, for instance, which should
> have limits on the number of connections it can process, etc., and
> will obviously free all resources associated with serving each
> client upon disconnection. The administrator then becomes
> responsible for setting reasonable limits suited to the server
> hardware's capability.
> 
> Matt

Suppose the server is within its limits but ANOTHER application uses
up memory? As things stand, the server can still be killed if it
happens to be the process that triggers a page fault when memory runs
out. You would have to build in/set memory usage limits for all
processes manually, and/or have them all manage their own memory size
(but without information from the system to tell them when memory is
low?). And what happens when a process fork()'s and the result is
overcommit? The fork() doesn't fail even if the memory is
insufficient. So is it possible to avoid problems by relying on
administration? If so, then it would be very useful for people who
need this reliability on multiple platforms.
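
For the fork() case, a rough sketch along these lines shows the same
problem (again, the sizes are arbitrary and the details depend on the
overcommit policy; it is meant only to illustrate the shape of the
issue):

/*
 * Sketch only: a process with a large writable region fork()s.  With
 * copy-on-write and overcommit, the fork() succeeds without reserving
 * a second copy of the region; when both processes then write to
 * "their" pages, the real allocation happens at fault time, and one
 * of them may be killed instead of ever seeing fork() or a write
 * fail.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t len = 512UL * 1024 * 1024;	/* arbitrary size */
	char *p = malloc(len);
	pid_t pid;

	if (p == NULL)
		return 1;
	memset(p, 1, len);	/* parent's copy is now resident */

	pid = fork();	/* succeeds even if a second copy cannot be backed */
	if (pid == -1) {
		perror("fork");
		return 1;
	}

	/* Parent and child each dirty their own copy of every page. */
	memset(p, 2, len);

	if (pid != 0)
		wait(NULL);
	return 0;
}

Neither fork() nor the writes return an error the application could
act on; the accounting only catches up when the copy-on-write faults
run out of pages to hand back.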
