Subject: Re: bin/2905: setting environment vars from login
To: Greg A. Woods <woods@kuma.web.net>
From: Christian Kuhtz <kuhtz@ix.netcom.com>
List: current-users
Date: 11/05/1996 12:51:00
On Tue, 5 Nov 96 00:56:06 -0500 (EST), woods@kuma.web.net (Greg A. Woods)  
mumbled:
> However I do always have source for new programs I install, or am at
> least able to write secure wrappers for these programs that can carefully
> enforce control over the environment of the new application.  (I've
> never installed a system which was required to enforce a strict security
> policy but which didn't have a compiler, and I wouldn't recommend that
> anyone try to do so.)

Well, I have news for ya.  I have been there, done that.  Numerous times,  
actually.  If you think that's not a plausible scenario, think again.  I  
trust you to be smart enough to come up with a solution on your own. ;-)

Regardless, an operating system shouldn't be in a state where you *need* a  
compiler to make it secure.  That is absolutely bogus.  Fact is, this is  
often the case.  What does that tell you about commercial "secure"  
environments and their ability to judge security appropriately?

> In other words I've never had a problem enforcing local policy on
> binary-only SysV boxes where login allows setting innocuous environment
> variables.

I have.  And I am pretty sure CERT will gladly provide you with incident cases.

> All the system-supplied binaries implement a consistent and
> documented policy and clearly define the noxious environment variables
> that login won't allow the user to over-ride, and of course if I was
> adding any new programs to such systems they were either in source form
> or wrapped by local source and I could easily stop them from doing
> "stupid" things based on random variables that might be under a user's
> control.

Your argument has at least one major flaw:

	You cannot assume system-supplied binaries are implemented
	in a consistent and documented way.

That's like rule #1 for security audits.  You present yourself as if you  
have seen it all.  Sorry, but you obviously haven't.

> Surely nobody in a security-conscious environment installs a
> setuid-root binary from some arbitrary third party without trusting that
> third party *and* without fully understanding how it enforces a security
> policy.

You assume that all admins are fully security-aware and knowledgeable.  That  
is a very dangerous and wrong assumption.  We don't live in an ideal world.

> Obviously I still don't see the problem of adding this feature to NetBSD

(because you don't want to see it?)

> so long as we clearly maintain a self-consistent security policy about
> which variables are prone to causing problems.

UNIX is not secure.  And as Curt already pointed out, if you want to  
secure it, you need to redesign the UNIX way of doing business.

Go ahead, but I believe that's way out of scope here.  And in fact, it  
would make your "secure UNIX" very much incompatible with UNIX.

> So long as all new tools
> which are *integrated* into NetBSD are monitored for such risks, the
> overall increase in risk to the system, as delivered, is zero, while the
> net gain in functionality is positive, and we'll be bringing the
> standard distribution more in line with the capabilities of equivalent
> commercial operating systems.

You introduce no metric by which the risk increase can be measured.   
In fact, there are several metrics you need to look at; at the very  
least they are:

1.  Consistency and documentedness of the implementation.
2.  Effectiveness and consistency of monitoring.
3.  Skill level of the administration.
4.  Overall security of the UNIX system itself.

You can in theory accomplish (1); I'll give you that one gladly.  You can  
take measures to accomplish (2) as well.  However, everything really hinges  
on (3) and (4).

(3) is something that will only be accomplished by a small minority, and  
the likelihood of it being accomplished is IMHO significantly lower than  
that of (1) and (2).

(4) is really what kills it all.  The UNIX operating system is not secure  
by design.

> If you're saying you do (want to be able to) add tools which rely on the
> values in "random" environment variables to define local security
> policy, then all I can say is that your local security policy isn't very
> well thought out.

Fact is, many applications supplied with SVR4 rely on environment  
variables.  That has nothing to do with "random".  And there is *nothing*  
that guarantees that the parts of the environment we believe to be static  
are not prone to bugs like everything else.

IMHO you are exhibiting a dangerously high level of vendor trust regarding UNIX.

> So, perhaps I'm suggesting the argument in this case should be more in
> the line of: if you don't like this feature then you're free to comment
> it out in your local builds.

That's absurd.  Something this potentially dangerous should be an  
unsupported feature, and the user should be strongly cautioned.

Remember, we do not want to risk our reputation by potentially introducing  
a security hole that is not warranted by an absolute necessity for  
operating the system, and we therefore should not endorse things we are not  
absolutely sure about.  The slightest doubt should demand extreme caution.

> However since it's relevant to building a
> secure system it would be much better to carefully and fully integrate
> it into the system once at the "source" so that those who obviously do
> want this feature won't be at risk from their own mistakes aggravated by
> the need to constantly and repeatedly re-invent the wheel, so to speak.

There's nothing that stops you from publishing your hack as an unsupported  
patch, either.  Why integrate it into the source?  Or is our goal all of a  
sudden to become as SVR4'ish (and the main impetus appears to be  
Solaris'ish) as possible?

> I.e. the problem with forcing people to maintain a locally modified
> version of a program as critical as login is that you're opening many
> more people up to increased risk since each and every one of them will
> have to duplicate effort.

You would rather introduce this hack globally into a program you yourself  
characterize as "critical as login", deny the community the choice (and the  
chance to learn about the feature by having to actively enable it), and  
thereby potentially introduce a hole for everybody and put everybody at  
risk?

> If the system supplies a well tested
> implementation of a feature then the risk of locally maintained copies
> containing bugs will be nil.

"If" is the key phrase here.  No one can guarantee the above.

> If we do it once, and we do it well, *everyone* will benefit, including
> those who don't want the feature in their local environments (since we
> can make it easy for them to disable too).

It is a minority that has expressed the desire for this feature, or is even  
aware of it.  The logical step, therefore, is to ship it disabled by  
default.

> Of course if we stole one more feature from SCO (which is also in
> SysVr4), we could even allow control of this feature at run-time!
> (i.e. /etc/default/*)

That's a ridiculous statement.  SCO is hardly SVR4.

Also, steal from SVR4 if you must, not from a vendor's bastardization  
of an SVR4 feature.

> Fear mongering about increased security risks where there are none is
> not productive.  If you know of any such documented risks, then by all
> means let us hear about them so we can ensure our implementation does
> not suffer the same fate.

Carelessness about potential security loopholes is not only  
non-productive, it is dangerous.

Aside from the security issue, present an argument for why this is necessary  
to have, and why you cannot implement it some other way, beyond the point  
of authentication.  Your argument that "SVR4 has it" doesn't cut it, IMHO.   
Why?  Heck, there is lots of other crap within the SVR4 code that I never  
want to see again in an OS.  It's a bogus argument.

Let me put it to you as an analogy:  You are attempting to enter a  
secured area where you have to perform work.  The security guard at the  
gate asks you for identification.  All you tell him is your full name and  
hand him a wishlist for the tasks you would like him to perform in your  
office before he lets you in.  Then, you tell him your pass phrase.

Well, gee, the security guard isn't just being lazy.  It is not his  
responsibility to clean up your office for you.  He will point that out  
to you, and rightfully so.

Neither is it /bin/login's responsibility to do that.

If you want to retrain your door guard, fine.  Be prepared to do it via a  
patch upon request.

Lastly, remember that we are all human beings, and as such make mistakes.   
This is not a world which is ideal and fair.

--
Christian Kuhtz <kuhtz@ix.netcom.com>, office: ckuhtz@paranet.com
Network/UNIX Specialist for Paranet, Inc. http://www.paranet.com/
Supercomputing Junkie, et al               MIME/NeXTmail accepted

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.6.3ia

mQCNAzJ1JCkAAAEEALzCoYhlxTLI4DID5KpQINF8KM4PUnrZxoL2aRRFAQNX9v9c
8uBySUqVDxfyylB6M/ptUezWIs6DLjz6b8jr8MX40vQf2jU2db6oMDh2axOeXlg2
KCSHryZ9kthnnXOVt0kHLN9XjM9DvwKU28RzvT7umEVmbHFyp64kVG961wkZAAUR
tCVDaHJpc3RpYW4gS3VodHogPGt1aHR6QGl4Lm5ldGNvbS5jb20+iQCVAwUQMnUk
Ka4kVG961wkZAQFztgP+IgHBCz/d1Sc10Qg0Wmu4KnhNb4E4KsPh96V/olwbQS+e
frdWMxSHzX8hGD1p/KbuwlNRrDktmZgVc+n89FGEeGcq3z9WK3o22JsyjJTlzobY
qJIZ5bdOx4dOimQ83ha9zjF+bRnw92t1jC/GJ+LRyOEVMzD5TtL7AMdODO8fNC8=
=sRe0
-----END PGP PUBLIC KEY BLOCK-----