Subject: Re: a new KNF (and some comments)
To: None <,>
From: Peter Seebach <>
List: tech-kern
Date: 01/21/2000 10:35:38
In message <>, Noriyuki Soda writes:
>As far as I can say, it is not so inefficient.
>And keeping solid ABI is, what OS vendor should do.

Up to a point, I agree - but this is a lot less severe than the a.out/ELF
switch.  So, perhaps for a while we provide an "old" libc.

>As I said previous message, I don't care about functions which don't
>concern ABI. i.e. using ANSI style for applications and static
>functions is no problem for me.

But if we're doing it for applications, the applications get stuck with some
of each, and we create additional complication.

>Keeping solid ABI is quite important point.

In general, yes.  However, I think this change has to happen; there's a
fair amount of pressure in C to modernize, and, as an example, code linking
with C++ is basically stuck using prototype forms.  You can sometimes get
around that by writing prototypes in terms of the promoted types, but, as
Chris points out, sometimes you don't know the promoted type.
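To make that concrete, here's a sketch of the usual shared-header idiom (the
function name is invented for illustration): C++ has no K&R function syntax
at all, so any header a C++ caller can include must use the prototype form;
extern "C" only suppresses name mangling, it doesn't remove the need for a
prototype.

```c
/* Hypothetical example: a declaration usable from both C and C++
 * callers.  The prototype form is mandatory for the C++ side. */
#ifdef __cplusplus
extern "C" {
#endif

int checksum(const char *buf, unsigned int len);

#ifdef __cplusplus
}
#endif

/* A trivial definition so this unit stands alone. */
int checksum(const char *buf, unsigned int len)
{
	int sum = 0;

	while (len-- > 0)
		sum += (unsigned char)*buf++;
	return sum;
}
```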

>> We've changed ABI's in the past (e.g., a.out vs. ELF), I don't see
>> it mattering *that* much.

>But we've always kept functionality and backward compatibility.

In general.  I have had old '.o' modules fail on newer systems before.

>(a) The function cannot be called safely from 3rd party programs
>  which don't include prototype definition header correctly.

>	I agree that such 3rd party program is broken.
>	But fixing such programs increases maintenance cost of pkgsrc.

Not so much - because most of pkgsrc is code that already has to compile
on systems which use ANSI prototypes.  Furthermore, gcc will always
emit warnings, which at least flag the issues.

>(b) The function cannot be called safely from K&R compiler.

>	So, K&R compiler never can be used for NetBSD, any more.
>	Although probably this doesn't matter these days.

I think it does matter - and we should *discourage* them at this point.

Disclaimer:  I'm biased.  I spent a lot of time and money trying to keep
C99 from being too horribly bloated, and I want people migrating towards
current C.

>> But by the same token, they produce the same code, so we might as well
>> give the compiler the more standard specification.

>Mm. I missed your point.
>If K&R and ANSI are same at efficiency, isn't it better to have

They're only the same on the functions that are compatible anyway, in
the abstract.

>> Ahh, but that's not what we normally do - and it's misleading, because it
>> implies that the function will understand the normal range of ints, and it
>> won't.

>Please look at <stdio.h>.
>There is the following definition:
>	int putchar __P((int));
>The reason that putchar() takes int rather than char is exactly
>same reason I claim.

No, it's just that that's what ANSI putchar does.  Putchar isn't
	int putchar(c)
		char c;
it's
	int putchar(c)
		int c;
so it's not a good example.  It's one of the functions that *won't* be
affected.  (Ref C90.)

>So, both the ANSI/ISO committee and the authors of NetBSD library
>keeps K&R ABI normally.

This case doesn't demonstrate that.  Hmm.  I can't actually find an example
of a function passed a char or short argument.  The only functions I see
taking non-promoted parameters are ones that take them under ANSI rules:
	math.h:extern float asinhf __P((float));
If that were
	float asinhf (f)
		float f;
the parameter would be a double - but since the entire point of those
functions is to take floats *instead* of doubles, of course, we pass them
floats.

Oh, wait.  skey.h says
	int htoi __ARGS((char));
so it's also being passed an ANSI-style char, not an int.

>> ABI problems are like which side of the road you drive on; no one side is
>> intrinsically more right than the other.  As it happens, there's a standard,
>> so it's probably best if we conform to the standard.

>Mmm, I missed your point here, too.
>ABI problem is not that `which side of the road'.

It's equivalent.  You can pick either rule, as long as they both work.

>Library author of operating system should know what integral promotion 
>means. And as far as he knows that, both K&R programs and ANSI programs
>can be used with the library.

As long as the ANSI programs don't try to provide their own prototypes for
functions whose argument types they know.

Note that *this is explicitly allowed*.  Much though we may regret it, you
are allowed, by the language spec, to say
	/* foo.c */
	#include <stdarg.h>
	extern int printf(char *, ...);
rather than including stdio.h.

Similarly, you're allowed to say
	extern float asinhf(float);
	extern short foo(short);
and you're supposed to get the right results.

Compatibility with both ABI's is not, in the general case, possible.  Given
that, I think our primary obligation is to support current tools.  K&R code
will at least be likely to get warnings if it makes mistakes.

>It is OK that gcc uses `s' as short inside of the function foo,
>but calling convention about argument `s' should keep K&R ABI 

But right now, it doesn't always.

>Could you provide the platform name and assembler output which
>causes calling convention incompatibility with K&R.

I believe that's standard gcc behavior - but no one has or uses K&R compilers,
so no one cares.

>And, if gcc produced the code like you wrote, then *GCC IS BROKEN*.

No.  It is *CORRECT*.

Key issue:  In
	short foo(short);
	short foo(s)
		short s;
*the behavior is undefined*.  Therefore, gcc is right to do anything it
wants, up to and including what some people wanted.

>I never said that we should not use ANSI prototype.
>What I said is that ANSI prototype declaration + K&R function definition
>for functions which concern with ABI.

But you can't do this in the generic case, because you can't be sure what
type in the prototype corresponds to an unknown or changeable typedef in
the K&R code.

>> If you are providing a
>> correct prototype, you can't use a K&R definition unless the default
>> promotions don't change anything.

>No. See above.

I'd debate whether or not that prototype is "correct".  If I see
	extern int foo(int);
I believe I have a reasonable expectation that the function foo will
handle
	foo(SHRT_MIN - 3);
in the "obvious" way.  If foo is
	int foo(s)
		short s;
it will not see the value I passed, so the prototype is misleading.

Prototypes are documentation for users, not just compilers.

Hmm.  Actually, I have a general demonstration that you can't reliably tell
what type to use on a variety of platforms.

	int foo(u)
		unsigned short u;

What's the prototype?  Hint:  If short isn't smaller than int (and there are
platforms where it isn't, although I don't think any of them are NetBSD), it's
unsigned int, otherwise, it's *signed* int.

This will bite some people using DSP's, I don't doubt.