Subject: Re: Replacement for grep(1) (part 2)
To: David Brownlee <>
From: Matthew Dillon <>
List: tech-userlevel
Date: 07/14/1999 00:34:53
:	Back on topic:
:	Obviously you devote the most time to handling the most common
:	and serious failure modes, but if someone else is willing to
:	put in the work to handle nightmare cases, should you ignore or
:	discard that work?

    Of course not.  But nobody in this thread is even close to doing any
    actual work and so far the two people I know who can (me and DG) aren't
    particularly interested.  Instead they seem to want someone else to do
    the work based on what I consider to be entirely unsubstantiated 
    supposition.  Would you accept someone's unsupported and untested theories 
    based almost entirely on a nightmare scenario to the exclusion of all
    other possible (and more likely) problems?  I mean come on... read some 
    of this stuff.  There are plenty of ways to solve these problems without
    making the declaration that the overcommit model is flawed beyond repair,
    and so far nobody has bothered to offer any counter-arguments to the 
    resource management issues involved with actually *implementing* a 
    non-overcommit model... every time I throw up hard numbers the only
    response I get is a shrug-off with no basis in fact or experience noted
    anywhere.  In the real world, you can't shrug off those sorts of problems.

    I'm the only one trying to run hard numbers on the problem.  Certainly
    nobody else is.  This is hardly something that would actually convince
    me of the efficacy of the model as applied to a UNIX kernel core.  Instead,
    people are pulling out their favorite screwups and then blaming the 
    overcommit model for all their troubles rather than looking for the
    more obvious answer:  A misconfiguration or simply a lack of resources.
    Some don't even appear to *have* any trouble with the overcommit model,
    but argue against it anyway basing their entire argument on the
    possibility that something might happen, again without bothering to 
    calculate the probability or run any hard numbers. 

    The argument is shifting from embedded work to multi-user operations to
    *hostile* multi-user systems with some people advocating that a 
    non-overcommit model will magically solve all their woes in these very
    different scenarios, yet can't be bothered to find a real-life scenario
    or cite actual experience to demonstrate their position.

    It is all pretty much garbage.  No wonder the NetBSD core broke up, if
    this is what they had to deal with 24 hours a day!

:	Put more accurately - if someone wants to provide a different rope
:	to permit people to write in a different defensive style, and it
:	does not in any way impact your use of the system: More power to them.
:		David/absolute

    As I've said on several occasions now, there is nothing in the current
    *BSD design that prevents an embedded designer from implementing his or her
    own memory management subsystem to support the memory requirements of
    their programs.  The current UNIX out-of-memory kill scenario only occurs
    as a last resort and it is very easy for an embedded system to avoid.  It
    should be considered nothing more than a watchdog for catastrophic 
    failure.  To implement the simplest non-overcommit system in the *BSD
    kernel - returning NULL on an allocation failure due to non-availability 
    of backing store - is virtually useless because it is just as arbitrary
    as killing processes.  It might help a handful of people out of hundreds 
    of thousands do something but they would do a lot better with a watchdog
    script.  It makes no sense to try to build it into the kernel.

					Matthew Dillon