Subject: Re: verified executable kernel modification committed
To: None <tech-security@netbsd.org, blymn@baesystems.com.au>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: tech-security
Date: 11/04/2002 00:39:01
On Mon, Nov 04, 2002 at 11:29:35AM +1030, Brett Lymn wrote:
> On Sun, Nov 03, 2002 at 12:50:47PM -0500, Thor Lancelot Simon wrote:
> > 
> > So what?  All that complexity, and you get... the same guarantee, with all
> > the same caveats, that you already had with file flags.
> > 
> 
> Actually you get the fact that eventually you will get notified the
> file has been overwritten, with file flags you will never know.

No, you don't get a guarantee of that -- not unless you know that
every last bit of code involved in doing the notification is valid
as well.  Though, to an extent, as Perry said, "it's turtles all the
way down!", this very concern is why, for example, FIPS 140 requires
that you validate _all_ of the executables before you use _any_ of
them.  And because the ability to overwrite the _files_ would imply
a protection failure that would, very likely, also allow overwriting
the _fingerprints_, relying on run-time rechecking of the fingerprints
to detect some sort of system compromise is highly likely to lend
only a false sense of security.


> > The obvious way: something else writes to the file either by exploiting the
> > disksubr hole that allows writes to overlapping partitions,
> 
> ok.... so, how does file flags stop this?  How will you ever know this
> has been done?

File flags don't stop it _and_ your code doesn't stop it.  That's
my point.

> > or by writing
> > locally (or from another client) on your NFS server.
> >
> 
> Let us stop with the NFS now.  That is really just a blind alley - who
> in their right mind is running NFS on a firewall/router or similar?
> NFS is NOT secure what you are trying to do is nail a plank to a blob
> of jello and then bitching because it won't work.

No, I'm pointing out that one real _advantage_ of your method is that
it can extend the protection domain of your kernel beyond the physical
hardware in the box.  You cannot prevent a nefarious individual from
overwriting files stored across a shared network -- but, if implemented
differently, your method could ensure that the code was no longer
recognized as valid for execution.  The caching that improves performance
in the local-disk case is the only thing that's costing you the very
real and useful ability to protect the remote-filesystem case; and, from
my point of view, as currently implemented, in the local-disk case, the
code is an interesting experiment that, in essence, duplicates 
functionality already provided by the simpler file-flags mechanism.

[...]

> > Yes, if there is only one client, and if you happen to trust data *encryption*
> > as cryptographically-strong *validation* of that data -- which has proven to
> > be a very serious mistake for many people in the past -- you could do that.
> > 
> 
> No, I was not suggesting that at all.  What I was suggesting is that
> the encryption of the container makes it difficult to tamper with the
> contents of the container and still result in a valid container.  As I

Surely, upon reflection, you must see that the statement "the encryption
of the container makes it difficult to tamper...and still result in a
valid container" equates to the statement "I trust the block cipher as a
MAC".  Historically, that has proven to be a rather poor idea in a number
of well-known cases, enough so that the problem of designing block ciphers
that _are_ suitable as MACs is a very active research area.
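
As a concrete illustration of that point (a toy XOR keystream cipher in
Python, standing in for a real stream or CTR mode; the key and message
here are invented for the example, and none of this is verified exec
code), notice that tampered ciphertext still "decrypts cleanly":

```python
# Toy demonstration: encryption alone does not detect tampering.
# Real stream/CTR modes are bit-for-bit malleable in exactly this way.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption' -- decryption is the same operation."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret key"
plaintext = b"pay alice $100"
ciphertext = xor_cipher(plaintext, key)

# An attacker flips bits in the ciphertext without knowing the key:
tampered = bytearray(ciphertext)
tampered[4] ^= ord('a') ^ ord('e')  # byte 4 becomes 'e' after decryption
recovered = xor_cipher(bytes(tampered), key)
# 'recovered' decrypts without any error -- the "container" still looks
# "valid" -- yet the contents were changed to b"pay elice $100"
```

A keyed MAC over the data (HMAC, say) is what actually detects the
modification; the decryption step alone never will.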

> > I don't believe that either of the major claims you make in the paragraph 
> > above are exactly true, though similar statements might be.
> > 
> 
> OK - I think that what I am doing is using some words that have
> different meanings to me - due to my work environment.  When I say
> "assurance" is that the system can be audited to be running correctly,
> this is not that you feel warm and fuzzy but that you can have an
> external party analyse your logs for anomalies.  What flags cannot
> give you is any visibility that something is wrong, you would never
> know if someone modified a file - you just assume that because the
> flags are applied then the file has never been modified, this is not
> assurance this is blind faith.  Verifying the hash of the file gives

Ultimately, all faith in the kernel's enforcement of its protection
boundaries is blind faith, unless you intend to prove the kernel or a
carefully-chosen subset of the kernel.  It is no more valid to state
that, because there are no log messages about file signature failures,
the files may be known to be unmodified than it is to state that,
because the kernel enforces the immutability of files marked with the
schg flag, the files may be known to be unmodified.  Each statement
relies on the same underlying assumption.

> Show me how you gain assurance - by that meaning the ability to audit
> the system for correctness.

It is simpler to state how you can _not_ know that the system is
unmodified: you can not know that the system is unmodified simply
because it did not emit a runtime message stating that it had been
modified.  The ability to modify the system, given correct use of
_either_ your mechanism _or_ file flags, with a local disk, implies
a protection failure of sufficient severity that the ability to
suppress the emission of such a message would necessarily be included
in almost any scenario I can imagine.

To actually know that the system image is correct, it is necessary to
validate it against an _external_ cryptographic checksum, using an
_external_ validation mechanism.  When I say "external" here, I mean
"provably unmodifiable by the running system".  Note that storing a
checksum or signature list on read-only media isn't sufficient,
because the _code that performs the validation_ needs to be on
read-only media too; and once it is loaded into memory, to actually
have "assurance" in the way in which you've defined it, the system
must provably enforce the restriction that the code cannot be modified
and _will actually be run when requested_.

Achieving this is a sufficiently difficult hardware and software design
and validation task that most devices that conform to standards that
require that the system image may be validated upon request define the
"request" event as a physical power-cycling of the entire system.  It is
interesting to posit designs that would _not_ require that, and still
guarantee that the validation _actually occurs_; but trying to do it
with nothing but software gets very hairy very fast.

If "assurance", as you've defined it, is your goal, one good thing to
read might be the power-up and continuous self-test requirements for
FIPS 140 conformant cryptographic devices (which includes a lot more
things these days than you might expect, for example several 
garden-variety network routers from Cisco, Nortel, or even my own
employer).  Actually, the entire standard only takes an hour or two
to read, and there are some wonderful examples of very clever -- and
expensive -- hardware/software systems intended to meet the higher
levels of conformance that you can find with a quick Google search.
It is at least as interesting to read about how some of these devices 
have been _defeated_ as it is to read about how they were designed to
be unmodifiable.  Interesting, but rather depressing. ;-)

> No, what you are doing is beating me over the head with some mythical
> flags modifications that will not provide any audit trail and telling

Would you prefer an audit trail that you cannot trust to no "audit
trail" at all?  I would not.  What I've been trying to get across
is that you can't trust your "audit trail" to be valid except under the
same assumption under which you can rationally believe that the binaries
are unmodified simply because they are marked with the immutable flag;
a failure of the kernel to enforce protection dooms your audit trail
right along with the immutability of the files -- no matter how that
immutability is enforced.

> Signing the hashes is something I will look at, doing signed binaries
> is interesting but I see that has having some drawbacks too.

It surely does have drawbacks.  However, if used for a somewhat different
purpose than what appears to be the purpose of your verified executable
code, it can also offer significant flexibility: for example, the ability
to configure the system to allow an executable to be replaced with a new
executable carrying an appropriate signature from an authorized binary
signer, such as, perhaps, the one who signed the executable being replaced.
I think this is actually orthogonal to your original purpose, but I'd hardly
attempt to dissuade you from investigating it nonetheless. ;-)
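
A minimal sketch of that replacement policy, with HMAC standing in for
a real public-key signature scheme (the key handling and helper names
here are hypothetical, not anything from verified exec):

```python
# Sketch: allow a binary to be replaced only when the same authorized
# signer vouches for both the installed binary and its replacement.
import hmac
import hashlib

def sign(key: bytes, binary: bytes) -> bytes:
    """Stand-in 'signature': HMAC-SHA256 over the binary contents."""
    return hmac.new(key, binary, hashlib.sha256).digest()

def may_replace(signer_key: bytes,
                old_binary: bytes, old_sig: bytes,
                new_binary: bytes, new_sig: bytes) -> bool:
    """True only if one signer's key verifies both old and new binaries."""
    return (hmac.compare_digest(sign(signer_key, old_binary), old_sig)
            and hmac.compare_digest(sign(signer_key, new_binary), new_sig))
```

With a real signature scheme the kernel would hold only the signer's
public key, but the policy decision is the same shape.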

> > but then again, it's not my code, so, ultimately,
> > all I can do is try to persuade you it should be a little bit different.
> > 
> 
> You may do that Thor but you need to be careful that you are sure of
> all of what verified exec does.  At the moment you are focussing on
> part of the capabilities and pointing out that file flags can do that,
> this I will not dispute but I strongly believe that you are not
> considering the other capabilities provided.

I think I actually have a pretty good understanding of what you're trying
to do; my concern is that you may have persuaded yourself you've achieved 
your goal when you may not actually have achieved it.  Trusting a system to
validate itself when there is only a single domain of protection is a very,
very dangerous thing to do; trusting that it is valid because it hasn't
told you it is not strikes me as more so.  One of the reasons I continue
to compare what you've done to the preexisting file flags mechanism is to
point out that, ultimately, your house of cards is sitting on top of the
same coffee table that Kirk's is: if Kirk's file-flags house falls down
and the system can not be trusted, your verify-and-audit house falls down,
too.  If the assumption that the kernel correctly enforces its protection
boundaries -- and that those boundaries are sufficient to prevent the
system, including the kernel itself, from being modified -- is false, then
you can't trust files marked with 'schg' to be the same files you think
they are, but you can't trust your code to tell you they aren't, either.
I am not trying to beat you about the head with anything but my own
frustration that what you're trying to do would be very, very useful --
but that it is very hard to get right, and that I'm not sure you're quite
there yet.

-- 
 Thor Lancelot Simon	                                      tls@rek.tjls.com
   But as he knew no bad language, he had called him all the names of common
 objects that he could think of, and had screamed: "You lamp!  You towel!  You
 plate!" and so on.              --Sigmund Freud