Subject: Re: Volunteers to test some kernel code...
To: None <>
From: Andrew van der Stock <>
List: tech-kern
Date: 06/12/1999 04:03:17
Signing executables is a particularly good idea. In fact, Microsoft now does
this for their drivers and some executables under Windows 2000, and if
NetBSD did the same, it'd provide a great deal more trust in Unix, and
protect against future trojan, virus, worm and other substitution attacks.

Basically you'd need to change the loader - like this kernel patch has
done - to handle an embedded signature fingerprint and checksum hash. If the
file is modified, the checksum check fails, and if the signature fingerprint
is changed, the computation of the hash no longer succeeds.

ELF header | program header table | n sections | Authenticode
fingerprint section | hash section

We could use the current Authenticode infrastructure (basically X.509v3
certificates with Software Publisher Certificates in PKCS #7 or #10 formats)
to do the signing - the key infrastructure is in place already. Developers
could establish trust in two different ways: either trust root signing
authorities such as Thawte or Verisign, or, as PGP does today, use a web
of trust.

There should be a way to allow the user to add, revoke and manage the
certificates that are on the machine. The NetBSD project's own certificate
might be the first fingerprint/certificate pair. The store would be hashed,
so that the loader can check quickly that any given executable carries an
appropriately trusted full certificate. This certificate store would have
to be heavily protected against tampering.

Speed issues: NT launches relatively few long-lived processes, which is
different from the toolkit methodology that Unix has successfully espoused
these last three decades. Many utilities under Unix are invoked all the time
(such as Perl or CGIs). Besides a caching loader, there'd have to be a way
of ensuring that programs and shared libraries, once loaded, are completely
protected against in-core or swap-file modification.

Another issue is that the naive way to compute the checksum requires reading
all the pages of a file. I suggest that binaries carry per-section
checksums - that way, when you load an ELF section, only that section is
verified rather than the entire file, so the minimum number of extra pages
is read in. It might require rejigging the compiler to produce more (or
fewer) sections (within reason) to optimize this. I don't know the current
behaviour, but I'm sure someone on this list does.

Other issues: how does a user sign their newly compiled kernel? I'd suggest
a project root CA that issues certificates for people wishing to roll
their own. Otherwise you keep the current situation where people are running
untrusted binaries.

Revocation issues: if a certificate needs revoking or updating, how do you
check this?

I just hope this doesn't turn into NIH syndrome or an MS-bashing issue.
Authenticode - and any code-signing scheme - works as long as you trust
either the signing authorities (such as Verisign or Thawte), or individual
keys in the same way that I trust *some* PGP signatures after verbally
confirming the fingerprint. The infrastructure exists, and we could and
should take advantage of it.

my 2c.