Subject: Re: C Language Standard(s)
To: None <current-users@NetBSD.ORG>
From: Peter Seebach <seebs@solon.com>
List: current-users
Date: 12/19/1995 21:55:56
>> I think it would be a good thing, because it would make it possible for us
>> to use -pedantic -W -Wall on the headers.  :)

>This doesn't seem like a very important goal...

Why not?  It is generally accepted that a compiler should be run at
its highest warning levels.  Nine times out of ten, I need my code to
run on any system, not just NetBSD...  Writing code for a single
target is a luxury.

The provided headers are expected to compile cleanly; they should not
produce warnings.
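
Concretely, with clean headers something like this should compile in
silence (a sketch; the file and the exact invocation are just an
example):

    /* clean.c - no warnings expected if the headers are clean */
    #include <stdio.h>

    int
    main(void)
    {
            printf("hello, world\n");
            return 0;
    }

    $ gcc -ansi -pedantic -W -Wall -c clean.c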

>> It would just be nice if we could write code just using the
>> standard types, IMHO.

>...nor does this...

No?  Then how do you write a program portable between NetBSD and any
other system, if code for NetBSD must use a type that isn't available
on the other system?
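
To make the point concrete (a sketch; quad_t is the BSDism I have in
mind):

    quad_t    a;   /* BSD only; no other system need have it      */
    long long b;   /* gcc extension; not in the ANSI standard     */
    long      c;   /* everywhere; and the largest type too, if we
                      do the sane thing and make long 64 bits     */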

>> Admittedly, a lot of work, with some performance work to do.  But
>> possibly worth looking at.

>...when this is what's required to achieve it.

>The NCR driver has a timing race.  We don't yet have shared libraries
>on the MIPS.  Our installer is, shall we say, challenging to use.
>Most ports are still running on gcc 2.4.5.  There are all *kinds* of
>substantive things to do both in the kernel and in userland that
>everybody can agree are worth doing.  Why are we arguing about whether
>or not to do something this esoteric?  Why on earth would we introduce
>the kind of instability and unportability that would result from the
>changes you propose?  I'm sorry, but the last thing the NetBSD project
>needs to be spending resources on is makework.

Hardly makework.  Standards conformance is one of the things on which
an OS is judged.  If we can't provide a conforming implementation of
the language in which the majority of our system is written (and we
can do so now only through the stupid loophole in the standard that
lets a two-line shell script count as a "conforming implementation"),
that's a problem.

Yes, it's hard work.  So was getting the system to work on both
little-endian and big-endian machines.  So was getting 64-bit types
to work on 32-bit machines.  The change to ffs took effort, and it
broke code and systems.

Consider, though: so far, the majority of the problems I've seen with
gcc 2.7.2 have arisen specifically because we use long long rather
than simply making long 64 bits.  strtoq wouldn't even need to exist
if long were, as expected, the largest integer type.
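
To illustrate (a sketch; I'm quoting the strtoq declaration from our
<stdlib.h> from memory):

    #include <sys/types.h>      /* quad_t */
    #include <stdlib.h>         /* strtoq, strtol */

    /* today: a nonstandard function just for the 64-bit type */
    quad_t q = strtoq(buf, NULL, 10);

    /* if long were 64 bits, the ANSI function would suffice */
    long   l = strtol(buf, NULL, 10);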

Yes, it's work.  No one's proposed that one of the core members just take
a couple of hours to do it.

The real question is: is this a direction we want to go in?  If so,
we start using int64_t for long long, int32_t for long, int32_t for
int, and int16_t for short.  Then, when none of our code cares which
is which, we make the subtle change by editing one (1) header
(sketched below).  Then we start fighting third-party code.
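
That one header might look something like this (a sketch; the file
name and exact typedefs would vary per port):

    /* machine/types.h, before: long long is the 64-bit type */
    typedef short       int16_t;
    typedef int         int32_t;
    typedef long long   int64_t;

    /* machine/types.h, after the subtle change: long is 64 bits */
    typedef short       int16_t;
    typedef int         int32_t;
    typedef long        int64_t;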

I doubt much more code will fail to compile from this than already
does from the const in sys_errlist.  Sure, it'll be a bit harder to
fix.  But in the end, as noted before, we have to look at correctness,
not just easy availability.  Code that assumes some integral type
exists which can hold a pointer is broken.  That may be justifiable
inside a kernel-level module, but if user code does it, the issue is
programmer competence, not system support.  Ditto code that expects
long or int to be a specific size, or that assumes the size_t you
pass to read(2) will be a long.  ANSI C is about seven years old by
now.  People can write to the standard.
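
For example (sketches of the broken idioms and the fixes; fd, buf,
and len are hypothetical):

    /* broken: assumes some integral type can hold a pointer;
       nothing guarantees any such type exists */
    long p = (long)malloc(len);

    /* fragile: written as if read(2) took and returned long;
       only the prototype saves it, and K&R code has none */
    long n = read(fd, buf, (long)len);

    /* correct: use the types the interfaces are declared with */
    void    *vp = malloc(len);
    ssize_t  nr = read(fd, buf, (size_t)len);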

(Note, actually, that a fair number of programs, like pine, will
start working if long becomes the 64-bit type.  This may not apply
on i386; on 68k, pine fails unless you include <unistd.h> for the
prototypes, which is not done by default.)
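
Presumably that's the usual missing-prototype problem (a sketch, not
pine's actual code):

    /* without this, the compiler assumes "int read()" with
       default-promoted arguments; where ssize_t and size_t are
       not plain int, the call is silently wrong */
    #include <unistd.h>         /* ssize_t read(int, void *, size_t) */

    char    buf[512];
    ssize_t n = read(0, buf, sizeof(buf));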

-s