Subject: Re: FH munging
To: Jim Reid <jim@mpn.cp.philips.com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 03/25/1997 03:21:03
Jim,

It seems we really need to go all the way back to basics here.

Computer security doesn't have the absolute rules you seem to think it
does.  To construct and implement an appropriate security policy, one
needs first to decide what risks one faces; look at the
costs of those risks; and put in place some mechanisms that do an
acceptable job of minimizing those risks.

``Secure'' computing is very simple: put the computer in a military
enclosure, with armed guards and no network access.  Forbid anyone
from bringing in, or removing, _any_ equipment from the enclosure
before decommissioning, degaussing, and physical destruction of said
equipment.  (Vendor service people hate this: they lose a toolkit and
any spares they bring every time they have to go fix the machine.)
Subject hardcopy to military hierarchical secrecy rules.

Secure?  Maybe.  Most installations accept more risk than that.  The
point of the story?  The risks, and the level of acceptable risk, vary
from case to case.

You (like others in this thread) are taking the perceived threats,
risks, and associated costs from your own environment, and then
proceeding as if those risks are universal.  This may come as a
surprise to you, but in fact, they *aren't* universally applicable.

I'm getting downright tired of flaming back-and-forth with ungrounded
assertions, so I'm going to go through this point-by-point and show
where your assumptions are wrong, insofar as they don't hold in
*my* environment.


PLEASE NOTE: that doesn't mean you're wrong about those assumptions
holding in your home, or workplace, or wherever-it-is that they're
valid.  It just means there are other environments with different
perceived risks, and where different costs are placed on those risks.
Which means your blanket statements about security don't hold in
*those* environments.  Okay?


We're presumably talking about NetBSD. NetBSD is used in lots of
different environments.  It's used at home in single-machine or small
network configurations.  It's used in ``public'' university
workstation clusters. It's used in research labs.  It's used on
machines running as network routers.  It's used by small ISPs.  It's
used in corporate workplaces.  Each of those environments has
different potential losses if they suffer a breakin, data loss, or
vandalism.  The perceived risks, and their costs, are different. The
appropriate security policy, and the security implication of making a
specific policy decision, are going to vary accordingly.

I assume most of us have heard all this before, but I think it's worth
repeating, before looking at Jim's claims point by point.  Not to
unfairly single out Jim: a number of people have made claims about
what does and doesn't work, implicitly basing that decision on *their*
environment, *their* perceived threats and risks, *their* perceived
costs, and the security policies *they* have chosen.

Some of us have forgotten that NetBSD is used in lots of different
environments, and what works well in one may be ineffective in
others.  I think NetBSD *should* provide mechanisms that can be
tailored to meet security policies appropriate for whatever environment
NetBSD is used in.  That means understanding a little more of each
other's environments.  So .....

Jim Reid <jim@mpn.cp.philips.com> writes:


>Sorry, firewalls don't help much (if at all).


Mistake #1.  In some situations, firewalls can help enormously.  It
depends on what the perceived risks are, and the kind of firewall you
deploy. Firewalls help *me* a lot.  I haven't had to worry about any
of the on-campus NFS breakins, from either off-campus or on-campus,
because my group is behind a firewall.  If you have a machine at
home with a continuous Internet connection, and your financial information
is online, then you'd better believe a firewall can help a *lot*.

Other groups at Stanford really do have people all over the world
NFS-mounting their filesystems.  Others restrict NFS-mounts to
read-only mounts from machines with Stanford addresses.  That works
*really* well for software distribution sites: they don't care
what the address is, as long as it's a legitimate Stanford address.
The border gateways (or ``firewalls'' according to some people)
enforce that assumption.
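
For the curious: on NetBSD, that sort of export policy is a one-liner
in exports(5).  A sketch, not anyone's real configuration -- the
pathname is made up, though 36.0.0.0 really is Stanford's class-A
network:

	# /etc/exports: read-only, restricted to Stanford addresses
	/pub/dist	-ro -network 36.0.0.0 -mask 255.0.0.0

mountd then refuses mount requests from anywhere else, and even a
legitimate client can't scribble on the exported tree.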


> At best, they can block
>off NFS traffic from the outside world. 

Mistake #2.  One could have lots of little firewalls, one around each
work group. The ``outside world'' doesn't always mean what you assume it
does.  A firewall that isolates all of Philips from the Internet is a
very different beast than a firewall that protects just you and your
officemates (or cubemates, or workgroup; or your home, or whatever).

The research group I'm in has a firewall around a small group of a
dozen people.  That helps us a *lot*.  One of my colleagues firewalls
his home machine (in a dorm) off from the rest of the campus network
-- or if you prefer, the rest of the entire *world*.  The ``outside
world'' can't do much to *him*.
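
Concretely, keeping NFS inside such an enclave takes only a couple of
packet-filter rules on the gateway.  A sketch in ipf(8) syntax -- the
outside interface name le0 is hypothetical, and a real configuration
needs more than this:

	# drop NFS (2049) and portmapper (111) traffic arriving from outside
	block in quick on le0 proto udp from any to any port = 2049
	block in quick on le0 proto tcp from any to any port = 2049
	block in quick on le0 proto udp from any to any port = 111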


>All this does is remove one not very significant threat.

Mistake #3.  The amount of threat it removes depends very much on the
environment under consideration.  If you have a small workgroup where
everyone knows the root password, and you set up a firewall around
that enclave, then the ``threat'' that is removed by a firewall is
pretty much all threats, save for physical attacks.

Most employers trust their employees (at least white-collar ones) not
to steal from the company hand-over-fist.  What environment has a risk
of theft or destruction of company information so much greater than that?  I'd
like to know, so I never work there :).  I work in a group where
everyone has the root password and we collaborate on the same project.
In that environment, *all* the potential risk comes from outside.

Most of Stanford isn't behind a firewall at all (save the border and
interior gateways, which silently drop source-spoofed packets).  These
people accept losing about two days every time they
suffer a breakin. The going rate is something like six days a year.  
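
In ipf(8) terms, that anti-spoofing filter amounts to a single rule on
each interface facing the outside world.  Again a sketch; the
interface name le1 is hypothetical:

	# arriving packets must not claim a campus (36.0.0.0/8) source
	block in quick on le1 from 36.0.0.0/8 to any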

I would be astounded if you understood what it means, in this
environment, when a breakin happens three days before a paper submission
deadline.


> The big danger comes from the internal users 

Mistake #4.  That may be true for whoever runs the corporate payroll
(where "internal" means "anyone who gets paid"), or inside Philips, or
in other corporate internets, or in ``data centers''. It's certainly
*not* true in general.  One obvious counter-example is most research
universities, where it simply doesn't hold.  Having access
to an IP address in the range belonging to the organisation is often
sufficient to gain access to all sorts of online information.

I think it's a fair claim that if you're in an environment where this
*is* true, it's because either

	a) the UN's Black Helicopter fleet has kidnapped all the
	   recreational crackers, everywhere in the world,
or 
	b) you have a firewall that protects you from said recreational
	   crackers, which tends to point out the fallacy of your mistake #1.


>who have the means, motive and opportunity to do evil things on
>the LAN if they choose to.


Mistake #5.  The ``internal users'' don't need to acquire the means
to do ``evil things''; they already have them.  They might not have
the motive.  They certainly have the opportunity.  But doing ``evil
things on the LAN'' around here, more often than not, means
accidentally saturating an Ethernet segment with NFS requests to the
point where nobody can ping the NFS server and the machine has to be
rebooted.