Subject: Re: bin/2905: setting environment vars from login
To: Curt Sampson <>
From: Greg A. Woods <>
List: current-users
Date: 11/06/1996 01:15:31
[ On Tue, November  5, 1996 at 08:00:01 (-0800), Curt Sampson wrote: ]
> Subject: Re: bin/2905: setting environment vars from login
> A setuid root binary, no. But I seem to be having a great deal of
> difficulty making you perceive that the problem is not just root
> access, it's shell access of any kind.

But you can't do that by default on *bsd anyway!!!!  Sorry to get
excited about this, but it really can't be said enough times.  If you
want a box that can give you a secure platform that won't permit shell
access, then you're going to have to drop almost all the way back to
SysVr3.2, *or* do a lot of hacking on a lot of system utilities.  Even a
generic SysVr4 has lost this capability in the face of a dedicated attacker.

What I can't understand is why you think it's of value to restrict shell
access to a general purpose system.  In most normal security policies
you should only be worried about access to privileged or protected
objects, which in a general-purpose unix environment usually means
protected files and setuid programs, and not just setuid-root programs.

Obviously, out of the box, almost all *nixes are designed for shell
access in a general purpose computing environment, including NetBSD.
Certainly very few, if any, of the *BSD system utilities that provide
shell access via escapes or what have you are prepared by default to
enforce a security policy of not allowing shell access -- quite the
contrary.  Tricks such as careful control of the SHELL and PATH
variables (which the proposed feature wouldn't even endanger) don't
really count as a means of implementing a no-shell-access security
policy in my opinion.

In fact the proposed changes to login will *not* make it any easier to
get shell access in a *stock* environment (at least I haven't seen any
proof to the contrary).  I admit they may make it more difficult to
prevent users from contorting the behaviour of mega-monolithic
applications which may place undue trust in various environment
variables, but as we've discussed this can be prevented by appropriate
access controls and wrapper programs -- no magic necessary.
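To sketch the kind of wrapper I mean (the whitelist and the application
path are hypothetical, purely for illustration -- a real wrapper would
be tailored to the application it fronts):

```c
/* appwrap.c -- hypothetical sketch of a wrapper that rebuilds the
 * environment before starting an application that (unwisely) trusts
 * environment variables.  The variable whitelist below is an
 * assumption for illustration only. */
#include <string.h>

/* Variables we allow through from the caller's environment. */
static const char *safe_vars[] = { "TERM", "TZ", "LOGNAME", "USER", NULL };

/* Build a fresh envp: a forced PATH plus whitelisted variables only. */
char **build_clean_env(char **dirty)
{
    static char *clean[16];
    int n = 0;
    int i, j;

    clean[n++] = "PATH=/bin:/usr/bin";
    for (i = 0; dirty[i] != NULL && n < 15; i++) {
        for (j = 0; safe_vars[j] != NULL; j++) {
            size_t len = strlen(safe_vars[j]);
            if (strncmp(dirty[i], safe_vars[j], len) == 0 &&
                dirty[i][len] == '=') {
                clean[n++] = dirty[i];
                break;
            }
        }
    }
    clean[n] = NULL;
    return clean;
}

/* A real wrapper's main() would then do something like:
 *
 *     extern char **environ;
 *     execve("/usr/local/libexec/theapp", argv, build_clean_env(environ));
 *
 * so the application never sees IFS, ENV, LD_*, or anything else the
 * user may have planted via login or otherwise. */
```

Ten minutes of work, no magic, and it doesn't matter one whit what
garbage the user managed to put into the environment beforehand.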

If you really want to build a truly secure general purpose computing
system based on unix, then you must start with the premise that shell
access as an ordinary user *will* be possible.  Implementing security
policies really is much easier once you agree with this starting
premise.  One immediate benefit from this premise is that it instantly
removes the burden of worrying about preventing access to un-privileged
objects that you're not (supposed to be) worried about in the first place.

Eliminating easy avenues of shell access really only has the advantage
of giving the security officer a head start in identifying a cracker.
It is merely a form of obfuscation.  It certainly won't protect you from
your own mistakes for much more than a few milliseconds.  I don't mind
playing a few tricks to gain a head start on a cracker, but I'm not
going to bet all my chips on these few tricks.

Perhaps in a true turn-key mission critical system you might be able to
eliminate shell access for real, but in such a system you can probably
'chmod 500 /bin/sh; chown root /bin/sh; rm ${TOOLS_WITH_SH_ESCAPES}',
set up the application to start on boot and be done with it -- any
authentication and access necessary should be internal to the
application and then you can 'rm /bin/login' too.  Hopefully we're not
talking about such systems here though, as they're an entirely different
kettle of fish.

If you're trying to set up a system that tries to keep (some) users
within one application (suite) without carefully auditing just exactly
what's required to run the application (and *only* the application),
esp. a system based on a *BSD, then you'd really really better know
*exactly* what you're doing, and you had better damn well have access to
a C compiler (or know *exactly* what kind of security policy the
applications implement, and be certain you can trust the applications
vendor to not have done anything stupid).

If you're doing something custom like this without a compiler, then
you're off on your own, and you should know that before you start.  Now
since NetBSD comes with a compiler, there's nothing to worry about in
this case, right?  Please, let's stay focused on the impact
these changes will have on NetBSD, not on some arbitrary commercial
operating system where a native compiler might cost extra.

> Yes. In some poorly documented systems I don't even have any
> assurance of all the environment variables the damn program uses.

(I presume you're talking about applications systems here...)

That's where you're supposed to earn your money and you should be able
to do the detective work necessary to discover them all!  ;-)  It should
be fairly easy to do too, esp. for machine compiled programs, even if
you have to copy the binaries to some other machine (with an unrelated
architecture) for analysis.

OK, perhaps I can summarize a number of points here:

1. Programs that implement a security policy should not trust
environment variables (including PATH, but we won't let login clobber
PATH just in case something calls execvp() by accident).

2. There are some standard NetBSD tools that cannot be prevented from
giving a user access to any and all files that the user has appropriate
permission to access.

3. There are some standard NetBSD tools that cannot be prevented from
giving a user access to any and all programs the user has appropriate
permission to run.

4. Even a simple C wrapper program can repair adulterated environment
variables before starting a security-policy-implementing application
that doesn't know any better than to trust environment variables when
taking action related to the security policy.

5. It's possible to force /bin/login (even with the proposed feature in
place) to start such a C wrapper program instead of starting a general
purpose shell.
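To make point 5 concrete: the wrapper, not a shell, goes in the shell
field of the account's passwd entry, so login never starts a general
purpose shell for that account at all.  (The account name and wrapper
path here are made up for illustration.)

```
appuser:*:1001:100:Application-only user:/home/appuser:/usr/local/libexec/appwrap
```

login will happily exec whatever sits in that field; if it's a wrapper
that rebuilds the environment before exec'ing the application, the
proposed environment-setting feature buys the user exactly nothing.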

Does everyone agree with these points?

Can anyone see any way that the proposed feature might make it possible
for a user to violate what we might call the "general purpose computing
unix security policy"?  Can anyone see any way that the proposed feature
might make it possible for a user to break out of a carefully
constructed box that permits them to run one specific application or
application suite, such as pppd?

> You'll note I've never objected to you doing your little `security
> policy' thing on system utilities.

Agreed.  This must be done regardless -- it's just a matter of priority.

>  Just leave the login unable to
> set environment variables by default.

If it can be controlled at run-time, then I've no objection.

> Otherwise it's just one more
> stupid little security hole that someone is going to forget to plug
> one day, and it's going to hurt them.

Sorry, I do not agree.

> Unix is already a huge amount
> of work to secure;

I don't agree with this either.  It's far easier to secure than most any
other multi-user operating system I've ever encountered, and so far as I
know the literature backs this claim.

> making it more work just to add a relatively
> useless feature should be the decision of the local site administrator,
> not of the vendor.

My point is that this "feature", regardless of its utility, is in a
critical program, and it is far less work and far more reliable for the
vendor to include the feature than to let N admins include it Y times in
X different variations.  If even two people would make use of this
feature, then it would pay to include it once, correctly.

Nobody has yet shown that it will be a risk to the average general
purpose system (even if it is forced on by default).  Only those people
who are building really custom systems and are probably already on the
verge of replacing or even removing /bin/login anyway, as well as doing a
zillion other things, really need to be concerned about it.

							Greg A. Woods

+1 416 443-1734			VE3TCP			robohack!woods
Planix, Inc. <>; Secrets Of The Weird <>